In the 21st century, many people walk around wearing step trackers and heart-rate monitors, but MIT's experiment went further: an artificial intelligence (AI) application housed in a wearable band that detects conversational tones from speech patterns and can classify "the overall emotional nature of the subject's historic narration."
In essence, the band can detect a wide range of human emotions, including anger, boredom and sadness, and not from voice alone: human movement, heart rate, blood pressure, blood flow, skin temperature, tone, pitch, energy, and vocabulary are all considered.
The project is one of many designed to help people with chronic social disorders become more comfortable with the many unspoken complexities of communication. While the linguistic elements of communication can be learned through repetition of words and phrases, tonality and intonation are harder to pin down, and often make it difficult to determine a person's mood in a conversation.
After all, the tone of a conversation can drastically affect its emotional intent and meaning: a variety of emotions can be conveyed through nuanced changes in pitch and intonation, even when the words themselves don't vary. Many people learn to detect these subtle cues, but some don't, particularly those affected by social disorders such as autism or Asperger's syndrome.
People with social disorders can struggle to determine the emotional intent of a conversation, particularly when it carries multiple layers of emotion and nuance. A sad story with elements of happiness, for example, could be alien and bewildering to such an individual. Likewise, today's AI systems struggle to navigate social cues. Headway in this area could improve the lives of millions and extend the technology far beyond its current abilities.
In tests, the researchers gave participants a Samsung Simband, a wearable that captures high-resolution physiological signals and can track a large amount of data. Combined with audio collected via smartphones, this provided the basis for recording the emotions in the volunteers' conversations.
Researchers recorded physiological changes among participants, from facial expressions to changes in speech, to discover how different emotions were expressed. Using this data, the researchers and the AI categorized the results as positive, negative or neutral. These groups were then expanded into finer variations: for example, "negative" speech could contain a mix of sadness, loathing, fear or boredom.
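The coarse-then-fine labeling described above can be sketched as a two-stage classifier. The feature names and thresholds below are purely hypothetical placeholders; MIT's actual system learns from Simband physiological signals and smartphone audio rather than hand-written rules.

```python
# Minimal illustrative two-stage emotion classifier.
# "valence", "energy" and "pitch" are hypothetical aggregate features,
# not the features MIT's researchers actually used.

NEGATIVE_SUBTYPES = ("sadness", "loathing", "fear", "boredom")

def classify_coarse(features: dict) -> str:
    """Stage 1: label a segment positive, negative or neutral."""
    valence = features.get("valence", 0.0)
    if valence > 0.3:
        return "positive"
    if valence < -0.3:
        return "negative"
    return "neutral"

def classify_fine(features: dict) -> str:
    """Stage 2: refine a negative segment into a subtype."""
    coarse = classify_coarse(features)
    if coarse != "negative":
        return coarse
    energy = features.get("energy", 0.0)
    pitch = features.get("pitch", 0.0)
    if energy < 0.2:
        return "boredom"   # flat, low-energy speech
    if pitch > 0.7:
        return "fear"      # raised pitch
    if energy > 0.6:
        return "loathing"  # high-energy negativity
    return "sadness"

segment = {"valence": -0.5, "energy": 0.1, "pitch": 0.4}
print(classify_fine(segment))  # low energy -> "boredom"
```

The two-stage split mirrors the article's description: a broad positive/negative/neutral decision first, then finer emotion labels within each broad group.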
The project is unique in its use of so-called "natural data." Instead of being asked to act out various emotions, participants told stories of their own choosing, so the facial expressions and vocal tones collected arose naturally.
While the project is still in its development phase, the professors behind the test claim AI now allows for real-time emotional classification of natural conversation — in high fidelity.