The system analyzes a total of 60 acoustic parameters of a user's voice, including pitch, speaking rate, pause duration, and the energy of the sound signal. The scientists behind it specifically designed the system to look for negative emotions, so that it can indicate anger, boredom, or doubt.
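To give a flavour of the kind of acoustic measurements involved, here is a minimal sketch of two of the parameters the article mentions, signal energy and pause duration. The frame length, silence threshold, and function names are illustrative assumptions, not the researchers' actual code.

```python
import numpy as np

def frame_energy(signal, frame_len=160):
    """Mean energy per fixed-length frame of the audio signal."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return (frames ** 2).mean(axis=1)

def pause_fraction(signal, frame_len=160, threshold=1e-3):
    """Fraction of frames whose energy falls below a silence threshold."""
    energy = frame_energy(signal, frame_len)
    return float((energy < threshold).mean())

# Synthetic example: one second of "speech" followed by one second of silence.
rate = 16000
speech = 0.1 * np.sin(2 * np.pi * 220 * np.arange(rate) / rate)
silence = np.zeros(rate)
signal = np.concatenate([speech, silence])

print(pause_fraction(signal))  # roughly 0.5: half the frames are silent
```

A real system would compute dozens of such features (pitch contour, speaking rate, and so on) and feed them to a classifier trained on labelled emotional speech.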
The system draws conclusions not only from the user's voice, but also from how the conversation develops. If it repeatedly fails to recognize what someone has said, or keeps asking for information that was already provided, that person is likely to become angry or bored. Using a statistical method derived from previous conversations, the system can also predict the direction a conversation will take and what actions the user is likely to perform next.
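The conversational cues described above can be sketched as a simple rule: repeated recognition failures or requests to repeat information count as evidence of frustration. The event labels and the threshold below are assumptions made for illustration, not the system's actual logic.

```python
def infer_frustration(events, threshold=2):
    """Guess whether a caller is frustrated from a list of dialogue events.

    events: strings such as 'misrecognized' (system failed to understand)
    or 'reasked' (system asked again for information already given).
    """
    bad = sum(1 for e in events if e in ("misrecognized", "reasked"))
    return bad >= threshold

print(infer_frustration(["ok", "misrecognized", "reasked"]))  # -> True
print(infer_frustration(["ok", "ok", "misrecognized"]))       # -> False
```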
After identifying the caller's intention and mood, the system adapts its dialogue accordingly. If a user sounds doubtful, the system offers assistance. If the user sounds bored or angry, however, an offer of assistance would most likely only increase the caller's aggravation.
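The adaptive behaviour can be summarised as a mood-to-action policy. This toy sketch is an assumption about how such a policy might look; the labels and responses are not taken from the actual system.

```python
def next_move(mood):
    """Pick the next dialogue move based on the detected mood."""
    if mood == "doubtful":
        return "offer_assistance"   # a hesitant caller welcomes help
    if mood in ("bored", "angry"):
        return "shorten_dialogue"   # extra prompts would aggravate them
    return "continue_normally"

print(next_move("doubtful"))  # -> offer_assistance
print(next_move("angry"))     # -> shorten_dialogue
```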
Scientists from Madrid and Granada have been testing a prototype of the system on human subjects and found that it produces shorter, more successful conversations. It is possible that this work will one day be combined with a system being developed at Binghamton University, which can recognize the emotional state of computer users from their facial expressions.