With EmoWatch your Apple Watch knows if you are :) or :(

Siri is good for a lot of things, but she generally doesn’t care much whether you sound like your usual perky self. Launching today is EmoWatch, a new app for your iPhone and Apple Watch that will take care of that side of things, helping you track your emotional state.

The EmoWatch app identifies and tracks users’ emotions from their voices, regardless of language, by analyzing vocal properties.

EmoWatch is basically a technical demo app for Smartmedical Corp’s Empath. Well, I say ‘app’; it’s not a very advanced one: The iPhone app is sparse to the point of being just barely on the right side of a minimum viable product. Frankly, I’m surprised they were able to get it past the App Store reviewers, but it does do what it needs to: offer a conduit to the Apple Watch and show a read-out (in the form of a graph) of your moods throughout the day. And literally nothing else. If you don’t have an Apple Watch, you’ll be mightily confused, because the iPhone app doesn’t give you the slightest hint about what you’re meant to do.

Extreme minimalism didn’t stop me from being absolutely fascinated, however.

Most of the magic is happening server-side, behind the scenes. It works by taking a short sample of your voice. The app appears to be content after just 2-3 seconds, barely enough time to get a full sentence out. It then analyzes the audio sample for a number of parameters, including intonation, pitch, speed, and volume.
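Empath’s actual signal processing is a proprietary black box, but two of the vocal properties mentioned, volume and pitch, are easy to sketch. Here’s a minimal, illustrative Python take (my own toy implementation, not anything Empath has published): RMS energy for volume, and a simple autocorrelation search for pitch over the typical range of human speech.

```python
import math

def vocal_features(samples, sample_rate):
    """Estimate two of the vocal properties an app like this might analyze:
    overall volume (RMS energy) and a rough pitch via autocorrelation.
    Illustrative only; Empath's real feature extraction is not public."""
    n = len(samples)
    # Volume: root-mean-square amplitude of the clip.
    rms = math.sqrt(sum(s * s for s in samples) / n)

    # Pitch: the lag at which the signal best matches a shifted copy of
    # itself, searched over roughly the human speech range (~60-400 Hz).
    lo, hi = sample_rate // 400, sample_rate // 60
    best_lag = max(range(lo, hi),
                   key=lambda k: sum(samples[i] * samples[i + k]
                                     for i in range(n - k)))
    return {"rms_volume": rms, "pitch_hz": sample_rate / best_lag}

# A synthetic half-second 220 Hz tone stands in for a recorded voice sample.
rate = 8000
tone = [0.5 * math.sin(2 * math.pi * 220 * i / rate) for i in range(rate // 2)]
features = vocal_features(tone, rate)
```

On the synthetic tone, the pitch estimate lands within a few hertz of 220 — which is about all a 2-3 second clip needs to deliver before the interesting work (mapping features to moods) happens server-side.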

The technology works purely from the sound of a person’s voice, rather than the words they are using. The app attempts to measure four “dimensions” of emotion: Joy, sorrow, anger, and calmness. In theory, the company claims, this means that users of all linguistic backgrounds should be able to use it.

As interesting as the app is, I found it far more compelling that the Empath API is available to developers. The API is a tool for getting real-time feedback on the mood of a speaker, which has any number of interesting use cases. If the tech is accurate, this API could be a great addition to the numerous mental health tracking apps out there, and a ton of other use cases besides. The technology is also being used in robotics (to help robots understand sarcasm, presumably) and call centers (possibly to detect the proximity of the customer to the terminus of their tether).
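I haven’t dug into the Empath API’s documented request and response formats, but a client is easy to picture: ship off a short voice clip, get back a score per emotion. Everything in the sketch below — the response shape, field names, and 0–100 scale — is my own guess, not the documented interface; the only grounded detail is the four dimensions the app reports.

```python
import json

# Hypothetical response for one voice sample. The field names and 0-100
# scale are assumptions; only the four dimensions (joy, sorrow, anger,
# calmness) come from the app itself.
SAMPLE_RESPONSE = json.dumps({
    "joy": 62, "sorrow": 8, "anger": 5, "calmness": 71,
})

def dominant_mood(response_body):
    """Pick the strongest of the four emotion dimensions from an
    (assumed) JSON response for one short voice sample."""
    scores = json.loads(response_body)
    return max(scores, key=scores.get)

print(dominant_mood(SAMPLE_RESPONSE))  # prints "calmness"
```

Run that on each 2-3 second clip as it comes in and you have the real-time feedback loop: a rolling read-out of the speaker’s dominant mood, which is essentially what the watch app’s daily graph is plotting.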

Of course, Empath isn’t the only horse in this race; Good Vibrations and Beyond Verbal offer similar functionality, but as far as I can tell, neither offers the same real-time feedback loop as the Empath API.

On the topic of accuracy: I wasn’t able to test in depth how well the mood-analyzing tech actually works, but it generally appears to be in the right ballpark. I felt mildly stressed earlier today, and the app picked up that I was less than chipper. As I write this, I’m relaxed with a bottle of Grapefruit Sculpin, listening to some music and writing. And once I hit “publish” on this article, it’s the weekend. No wonder the app reports I’m hunky-dory.