In January 2018, Annette Zimmermann, vice president of research at Gartner, proclaimed: “By 2022, your personal device will know more about your emotional state than your own family.” Just two months later, researchers at Ohio State University published a landmark study claiming that their algorithm could now detect emotions better than people can.
AI systems and devices will soon recognize, interpret, process, and simulate human emotions. A combination of facial analysis, voice pattern analysis, and deep learning can already decode human emotions for market research and political polling purposes. With companies like Affectiva, Beyond Verbal, and Sensay providing plug-and-play sentiment analysis software, the affective computing market is estimated to grow to $41 billion by 2022, as firms like Amazon, Google, Facebook, and Apple race to decode their users’ emotions.
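To make the idea concrete, here is a minimal Python sketch of one common approach, late fusion: each channel (say, a facial-expression model and a voice-prosody model) produces scores over a set of emotions, and the scores are blended into a single estimate. The emotion labels, example scores, and weights below are illustrative assumptions, not any vendor’s actual pipeline.

```python
# Minimal sketch of multimodal emotion "decoding" via late fusion.
# Each channel contributes a score distribution over a fixed emotion set;
# the weighted blend is renormalized and the top emotion is reported.

EMOTIONS = ["anger", "joy", "sadness", "surprise", "neutral"]

def fuse_emotion_scores(face_scores, voice_scores, face_weight=0.6):
    """Weighted late fusion of two score distributions over EMOTIONS."""
    fused = {
        e: face_weight * face_scores.get(e, 0.0)
           + (1 - face_weight) * voice_scores.get(e, 0.0)
        for e in EMOTIONS
    }
    total = sum(fused.values()) or 1.0
    return {e: s / total for e, s in fused.items()}

# Illustrative inputs: a frown from the facial model, raised pitch from the vocal model.
face = {"anger": 0.55, "neutral": 0.30, "sadness": 0.15}
voice = {"anger": 0.40, "surprise": 0.35, "neutral": 0.25}

fused = fuse_emotion_scores(face, voice)
print(max(fused, key=fused.get))  # -> anger
```

Real systems replace the hand-written dictionaries with trained deep-learning models per channel, but the fusion step often stays this simple.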
Emotional inputs will create a shift from data-driven, IQ-heavy interactions to deep, EQ-guided experiences, giving brands the opportunity to connect with customers on a much deeper, more personal level. But reading people’s emotions is a delicate business. Emotions are highly personal, and users will have concerns about privacy invasion and manipulation. Before companies dive in, leaders should consider questions like:
- What are you offering? Does your value proposition naturally lend itself to the involvement of emotions? And can you credibly justify the inclusion of emotional cues for the betterment of the user experience?
- What are your customers’ emotional intentions when interacting with your brand? What is the nature of the interaction?
- Has the user given you explicit permission to analyze their emotions? Does the user stay in control of their data, and can they revoke their permission at any given time?
- Is your system smart enough to accurately read and react to a user’s emotions?
- What is the danger in any given situation if the system should fail — danger for the user, and/or danger for the brand?
Keeping those concerns in mind, business leaders should be aware of current applications for Emotional AI. These fall roughly into three categories:
Systems that use emotional analysis to adjust their response.
In this application, the AI service acknowledges emotions and factors them into its decision-making process. However, the service’s output is completely emotion-free.
Conversational IVRs (interactive voice response systems) and chatbots promise to route customers to the right service flow faster and more accurately by factoring in emotions. For example, when the system detects that a caller is angry, it routes them to a different escalation flow, or to a human agent.
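A hedged sketch of what such routing logic might look like follows; the flow names and the 0.7 confidence threshold are assumptions for illustration, not any vendor’s actual API.

```python
# Illustrative emotion-aware routing for an IVR or chatbot.

def route_call(detected_emotion: str, confidence: float) -> str:
    """Pick a service flow based on the detected emotion and its confidence."""
    if confidence >= 0.7:
        if detected_emotion == "anger":
            return "human_agent_escalation"   # skip self-service entirely
        if detected_emotion in ("sadness", "fear"):
            return "empathetic_script_flow"   # slower, more reassuring script
    return "standard_self_service"            # default, emotion-free flow

print(route_call("anger", 0.85))    # -> human_agent_escalation
print(route_call("neutral", 0.90))  # -> standard_self_service
```

Note that the output is still a plain routing decision: the emotion shapes which flow the caller lands in, not how the machine speaks.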
AutoEmotive, Affectiva’s Automotive AI, and Ford are racing to get emotion-sensing car software market-ready: software that detects human emotions such as anger or lack of attention and can then take control of the vehicle or stop it, preventing accidents or acts of road rage.
The security sector also dabbles in Emotional AI to detect stressed or angry people. The British government, for instance, monitors its citizens’ sentiments on certain topics on social media.
In this category, emotions play a part in the machine’s decision-making process. However, the machine still reacts like a machine — essentially, as a giant switchboard routing people in the right direction.
Systems that provide a targeted emotional analysis for learning purposes.
In 2009, Philips teamed up with a Dutch bank to develop the idea of a “rationalizer” bracelet to stop traders from making irrational decisions by monitoring their stress levels via the wearer’s pulse. Making traders aware of their heightened emotional states made them pause and think before making impulsive decisions.
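A minimal sketch of this kind of pulse-based nudge, assuming a simple moving-average rule; the baseline, window size, and 20% threshold are illustrative guesses, not Philips’ actual algorithm.

```python
# Sketch of a "rationalizer"-style stress nudge: compare the recent average
# pulse with a resting baseline and warn the wearer when it runs high.

from statistics import mean

def stress_elevated(pulse_samples, resting_baseline, threshold=1.2):
    """True if the average of the last 10 readings exceeds baseline * threshold."""
    recent = pulse_samples[-10:]
    return mean(recent) > resting_baseline * threshold

readings = [72, 75, 90, 95, 99, 102, 104, 101, 98, 100, 103, 105]
if stress_elevated(readings, resting_baseline=68):
    print("Stress elevated: pause before placing the trade.")
```

The point is the feedback loop, not the arithmetic: the system surfaces the wearer’s state and leaves the decision to the human.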
Brain Power’s smart glasses help people with autism better understand emotions and social cues. The wearer of this Google Glass-style device sees and hears special feedback geared to the situation: for example, coaching on facial expressions of emotions, on when to look at people, and even feedback on the user’s own emotional state.
These targeted emotional analysis systems acknowledge and interpret emotions. The insights are communicated to the user for learning purposes. On a personal level, these targeted applications will act like a Fitbit for the heart and mind, aiding in mindfulness, self-awareness, and ultimately self-improvement, while maintaining a machine-person relationship that keeps the user in charge.
Targeted emotional learning systems are also being tested in group settings, such as analyzing students’ emotions for teachers, or workers’ emotions for managers. Scaling to groups can have an Orwellian feeling: concerns about privacy, creativity, and individuality keep these experiments on the edge of ethical acceptability. More important, the people in power need adequate psychological training to interpret the emotional results and to make appropriate adjustments.
Systems that mimic and ultimately replace human-to-human interactions.
When smart speakers entered the American living room in 2014, we started to get used to hearing computers refer to themselves as “I.” Call it a human error or an evolutionary shortcut, but when machines talk, people assume relationships.
There are now products and services that use conversational UIs and the concept of “computers as social actors” to try to alleviate mental-health concerns. These applications aim to coach users through crises using techniques from behavioral therapy. Ellie helps treat soldiers with PTSD. Karim helps Syrian refugees overcome trauma. Digital assistants are even tasked with helping alleviate loneliness among the elderly.
Casual applications like Microsoft’s XiaoIce, Google Assistant, or Amazon’s Alexa use social and emotional cues for a less altruistic purpose — their aim is to secure users’ loyalty by acting like new AI BFFs. Futurist Richard van Hooijdonk quips: “If a marketer can get you to cry, he can get you to buy.”
The discussion around addictive technology is starting to examine the intentions behind voice assistants. What does it mean for users if personal assistants are hooked up to advertisers? In a leaked Facebook memo, for example, the social media company boasted to advertisers that it could detect, and subsequently target, teens’ feelings of “worthlessness” and “insecurity,” among other emotions.
Judith Masthoff of the University of Aberdeen says, “I would like people to have their own guardian angel that could support them emotionally throughout the day.” But in order to get to that ideal, a series of (collectively agreed upon) experiments will need to guide designers and brands toward the appropriate level of intimacy, and a series of failures will determine the rules for maintaining trust, privacy, and emotional boundaries.
The biggest hurdle to finding the right balance might not be achieving more effective forms of emotional AI, but finding emotionally intelligent humans to build them.