Researchers from the Universidad Carlos III de Madrid (UC3M) have begun developing a new system capable of reading human emotions. The system can automatically adapt a dialogue to the user's situation, so that the machine's response matches the person's emotional state. According to David Grill, one of its creators and a professor in UC3M's Computer Science Department, thanks to this new development the machine will be able to determine how the user feels (emotions) and how he or she intends to continue the dialogue (intentions). To achieve this, the researchers analyzed a total of 60 acoustic parameters, including the tone of a user's voice, the speed at which the person speaks, and the length of any pauses.
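To give a flavor of the kind of acoustic parameters mentioned above, the sketch below computes two of the simplest ones, the proportion of time spent speaking and the longest pause, from a raw audio signal. This is an illustrative example only, not the researchers' actual method: the function name, the frame length, and the energy threshold are all assumptions chosen for the demonstration.

```python
import numpy as np

def pause_features(signal, sample_rate, frame_ms=20, energy_thresh=0.01):
    """Classify fixed-length frames as speech or silence by short-time
    energy, then summarize the pauses.
    Returns (speech_ratio, longest_pause_seconds).
    Frame length and threshold are illustrative defaults, not tuned values."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)
    voiced = energy > energy_thresh  # True where the frame contains speech

    # Longest run of consecutive silent frames, converted to seconds
    longest, run = 0, 0
    for v in voiced:
        run = 0 if v else run + 1
        longest = max(longest, run)
    speech_ratio = float(np.mean(voiced))
    return speech_ratio, longest * frame_ms / 1000.0

# Synthetic example: 1 s of "speech" (a tone), a 0.5 s pause, then 1 s more
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
audio = np.concatenate([tone, np.zeros(sr // 2), tone])
ratio, pause = pause_features(audio, sr)  # ratio ≈ 0.8, pause ≈ 0.5 s
```

A real system of the kind described would combine dozens of such features (pitch contours, speaking rate, spectral measures) and feed them to a trained classifier rather than a fixed threshold.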
They also implemented controls to account for any endogenous reactions, and enabled the adaptive device to modify its speech accordingly, based on predictions of where the conversation might lead. In the end, they found that users responded more positively when the computer spoke in objective terms.