MIT researchers are using AI and wearables to detect conversational tone

For most people, wearables are little more than pedometers – a way of gauging how much they've moved during the day and, ideally, a nudge to move more in the future. That narrow use is one of the factors behind the category's seeming plateau of late. But a wrist full of sensors can do a heck of a lot more than what it's currently being used for.

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have been experimenting with Samsung's Simband – a non-commercial, wrist-worn concept health device – in an attempt to detect the conversational tone of speakers. If effective, such technology could benefit people with conditions like Asperger's, who have difficulty reading social cues.

The wearable gathers movement, heart rate, blood flow, blood pressure and skin temperature. That data, coupled with deep-learning analysis of the conversation's audio and text, is used to determine the speaker's intentions and emotional state – happy, sad or neutral.
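To make the idea concrete, here is a minimal sketch of that multimodal setup: sensor readings, audio features and a text feature are concatenated and fed to a classifier over emotion labels. The feature names, the synthetic data and the simple logistic-regression model are all illustrative assumptions – the CSAIL system uses deep learning on real Simband recordings, not this toy pipeline.

```python
# Illustrative sketch only: combine wearable, audio and text features,
# then classify each conversation segment as sad / neutral / happy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_segments = 500  # hypothetical short conversation segments

# Wearable channels: movement, heart rate, blood flow, blood pressure, skin temp
physio = rng.normal(size=(n_segments, 5))
# Audio features: e.g. pause ratio, pitch variance, energy (assumed)
audio = rng.normal(size=(n_segments, 3))
# Text feature: e.g. a sentiment score for the transcript (assumed)
text = rng.normal(size=(n_segments, 1))

X = np.hstack([physio, audio, text])
y = rng.integers(0, 3, size=n_segments)  # 0 = sad, 1 = neutral, 2 = happy

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

With random features and labels the accuracy hovers around chance; the point is only the shape of the pipeline, not the numbers.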

According to CSAIL,

[The] algorithm’s findings align well with what we humans might expect to observe. For instance, long pauses and monotonous vocal tones were associated with sadder stories, while more energetic, varied speech patterns were associated with happier ones. In terms of body language, sadder stories were also strongly associated with increased fidgeting and cardiovascular activity, as well as certain postures like putting one’s hands on one’s face.
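The audio cues mentioned above – how much of a recording is pause and how varied the voice sounds – are easy to approximate. Below is a rough, self-contained sketch of two such measures on a synthetic waveform; the silence threshold, frame size and the use of spectral-centroid spread as a "monotone vs. varied" proxy are assumptions for illustration, not the features CSAIL actually computed.

```python
# Rough sketch: pause ratio and vocal variation from a (synthetic) recording.
import numpy as np

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
speech = 0.5 * np.sin(2 * np.pi * 220 * t)   # stand-in for a voiced second
silence = np.zeros(sr)                       # stand-in for a pause
signal = np.concatenate([speech, silence, speech])

frame = 400  # 25 ms frames at 16 kHz
frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
rms = np.sqrt((frames ** 2).mean(axis=1))

pause_ratio = float((rms < 0.02).mean())     # fraction of near-silent frames

# Spectral centroid per voiced frame; its spread is a crude proxy for
# monotone (low spread) vs. varied (high spread) delivery.
centroids = []
freqs = np.fft.rfftfreq(frame, d=1 / sr)
for f in frames[rms >= 0.02]:
    spec = np.abs(np.fft.rfft(f))
    centroids.append((spec * freqs).sum() / (spec.sum() + 1e-9))
vocal_variation = float(np.std(centroids))

print(f"pause ratio: {pause_ratio:.2f}, centroid std: {vocal_variation:.1f} Hz")
```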

The researchers say their current findings are 7.5 percent more accurate than existing approaches. The team is also looking to bring the approach to more widely available commercial devices, so the Apple Watch could one day be used to translate such social cues.