Figure Eight & EMOS

Learn how EMOS improved their emotion detection model by 30% with the help of Figure Eight

The Company

EMOS is a Hong Kong-based emotion analytics company creating products that understand and react to emotions. They blend deep learning with automatic speech recognition, natural language understanding and sentiment analysis, allowing their customers to predict trends, reduce costs, design great experiences, create product attachments, boost loyalty, analyze reviews, and more, all by interpreting emotion and sentiment through audio and textual conversations.

The Challenge

Emotion plays a big role in the consumer world. We form attachments to products, swear off companies with bad customer service, and make buying decisions based on how we feel.

But, on some level, emotion is subjective. Sentiment is personal. What might seem angry to one person may simply be curt to another. Emotions have ranges, nuances, and subtleties that can be difficult to define. And that’s just for people. For a machine learning model, understanding emotions is even harder.

That hasn’t deterred EMOS. They’ve spent years training models to do just that: to understand emotions. They started off using academic datasets but found these simply didn’t work in the real world. The data was too sterile and there wasn’t nearly enough of it.

Nor did the academic data match their domain. EMOS has a product that aims to understand emotion in call centers, and academic datasets didn’t contain the kind of conversational audio needed to build a model that would work in that setting. Taken together, the algorithms trained on academic data were topping out at around 50-60% accuracy. EMOS needed high-quality training data to create a better model. They turned to Figure Eight.

The Solution

Human-in-the-loop machine learning works especially well for analyzing emotional content. After all, humans don’t need training data to understand when a person is happy or angry. They simply know. EMOS pushed their data through Figure Eight, creating workflows where annotators rated the emotion of speakers in audio clips, hoping to augment their training data and boost the model’s overall accuracy.
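
In practice, a human-in-the-loop setup of this kind collects several independent judgments per audio clip and then aggregates them. The sketch below shows what that aggregation step might look like; the clip IDs, emotion labels, and data layout are illustrative assumptions, not EMOS’s or Figure Eight’s actual schema.

```python
from collections import Counter

# Several independent annotator judgments per audio clip (illustrative data,
# not EMOS's or Figure Eight's actual output format).
judgments = {
    "clip_001": ["angry", "angry", "frustrated"],
    "clip_002": ["happy", "happy", "happy"],
    "clip_003": ["neutral", "curt", "angry"],
}

def aggregate(labels):
    """Return the majority emotion label and the share of annotators who chose it."""
    label, votes = Counter(labels).most_common(1)[0]
    return label, votes / len(labels)

for clip_id, labels in judgments.items():
    emotion, agreement = aggregate(labels)
    print(f"{clip_id}: {emotion} (agreement {agreement:.0%})")
```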

But as EMOS started collecting labels on audio to train their model, they ran into an interesting challenge. Because emotion is so subjective, and because EMOS needs to understand the intensity of emotions as well as their type, annotators often disagreed: the model was improving, but not as dramatically as they had hoped.

EMOS worked with Figure Eight to redesign the workflow for annotators. Instead of asking annotators to judge audio with multiple speakers, they selected clips with a single speaker to reduce confusion. They also prioritized the rows with strong, intense emotions, as those had the most agreement among annotators and could function as definitive examples (even if annotators sometimes disagreed about intensity, they agreed on the underlying emotion, which was a crucial part of improving the models).
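
As a rough illustration of that prioritization, the sketch below keeps only single-speaker clips and ranks them by how strongly annotators agree on the emotion category, setting intensity disagreements aside. The field names, intensity scale, and scoring are hypothetical assumptions for illustration, not the actual EMOS or Figure Eight pipeline.

```python
from collections import Counter

# Hypothetical annotated rows: each label is (emotion, intensity on a 1-5 scale).
rows = [
    {"clip": "a.wav", "speakers": 1, "labels": [("angry", 5), ("angry", 4), ("angry", 5)]},
    {"clip": "b.wav", "speakers": 2, "labels": [("happy", 3), ("neutral", 2), ("happy", 4)]},
    {"clip": "c.wav", "speakers": 1, "labels": [("happy", 5), ("happy", 3), ("sad", 2)]},
]

def category_agreement(labels):
    """Fraction of annotators picking the most common emotion, ignoring intensity."""
    emotions = [emotion for emotion, _ in labels]
    _, votes = Counter(emotions).most_common(1)[0]
    return votes / len(emotions)

def mean_intensity(labels):
    """Average rated intensity, used to surface strong, unambiguous examples first."""
    return sum(intensity for _, intensity in labels) / len(labels)

# Keep single-speaker clips, then rank the strongest, most agreed-upon emotions first.
candidates = [row for row in rows if row["speakers"] == 1]
candidates.sort(key=lambda row: (category_agreement(row["labels"]),
                                 mean_intensity(row["labels"])), reverse=True)

for row in candidates:
    print(row["clip"], category_agreement(row["labels"]), round(mean_intensity(row["labels"]), 1))
```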

This new approach helped reduce some of the subjectivity inherent in judging emotions and gave their model the training examples it needed to beat their benchmark.

The result? EMOS has achieved 80% accuracy with their emotion detection algorithms, a roughly 30% improvement in their model’s performance. That’s a massive gain, not to mention one that beats the typical 70% performance seen in academia. It gives their clients unique insight into every conversation: the ability to analyze how emotion shapes customer service and sales, and to identify what makes customers truly happy, all without painstaking, fine-grained auditing of individual calls.