LOUD AND CLEAR

When your tech knows you better than you know yourself

For more about new technology that can read human emotions, listen to the third episode of Should This Exist?, the podcast that debates how new technologies will affect humanity.

If we were sitting across from each other in a cafe and I asked about your day, you could give a polite answer, such as, "Great." But if you were lying, I would be able to tell from your expression, tone, twitches, and tics. That's because most of our communication is nonverbal.

In person, we read those unspoken cues to get at the truth, to cut through what people say and understand what they mean. And now, with so many of our exchanges taking place online in text, the meaning traditionally delivered through subtext comes through less than ever before.

Rana el Kaliouby, the co-founder of Affectiva, a company that builds automated emotion-analysis technology, wants to improve the tools we use to communicate. "Much of today's communication – 93% – is essentially lost in cyberspace," she tells Quartz. "We are blind to emotions and therefore we see less compassion in the world." In her view, the solution is not to abandon technology that strips away our humanity, but to design tools that truly understand people.

Emotion-sensing technology

El Kaliouby"The company creates tools to navigate through the space between language and meaning.

In concrete terms, she and her colleagues are assembling a database of the world's facial expressions to build a fuller picture of human communication. So far, they have collected 7.7 million faces from 87 countries, amounting to 5 billion facial frames. The idea is that if machines can read subtext, they can better meet our needs in certain contexts.

Take online learning, for example. Imagine taking a lesson and getting lost. In theory, your frown, wandering gaze, and mounting frustration would be picked up by the computer's camera, alerting the system so the course could respond accordingly. Maybe it offers you more examples or simpler problems. It might even change topics to head off frustration, just as a live instructor can switch activities or tactics in a classroom depending on how students respond to the material.
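To make the loop concrete, here is a minimal sketch in Python of how such a system might wire expression signals to teaching decisions. It assumes a hypothetical detect_expressions function standing in for whatever facial-coding software supplies per-frame scores; this is illustrative only, not Affectiva's actual API, and the thresholds are arbitrary.

# Minimal sketch of an emotion-aware lesson loop (illustrative only).
# detect_expressions() is a hypothetical stand-in for a facial-coding system,
# not Affectiva's real API; the thresholds below are arbitrary.
from dataclasses import dataclass
import random

@dataclass
class FrameScores:
    frustration: float  # 0.0 (calm) to 1.0 (very frustrated)
    attention: float    # 0.0 (gaze wandering) to 1.0 (focused)

def detect_expressions() -> FrameScores:
    """Pretend to read the webcam; random scores stand in for real analysis."""
    return FrameScores(frustration=random.random(), attention=random.random())

def adapt_lesson(scores: FrameScores) -> str:
    """Map the learner's inferred state to a teaching adjustment."""
    if scores.attention < 0.3:
        return "switch topics or change the activity"
    if scores.frustration > 0.7:
        return "offer simpler problems and more worked examples"
    return "continue with the current material"

if __name__ == "__main__":
    for _ in range(3):
        print(adapt_lesson(detect_expressions()))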

If machines can read subtext, they will better meet our needs in certain contexts.

El Kaliouby's work has already been put to use in interesting ways. Automated emotion analysis can help people with autism who have difficulty reading the emotional subtext of an exchange: by interpreting a conversation partner's cues and relaying that insight, the technology helps them understand communication better. A device worn like glasses, resembling the now-defunct Google Glass, can signal a user when they are missing important unspoken cues, so they don't rely on language alone to assess a situation.

El Kaliouby has used her own tool to gauge how listeners receive her webinars. Normally, when speaking to a group online, a presenter cannot tell whether anyone is paying attention. But with technological help, even an online lecturer can get a sense of the audience's engagement and deliver the message more effectively, she says. With a readout on her screen tracking audience involvement based on viewers' expressions, she has been able to give better presentations.

Advertisers have also used the tool to test public responses to a potential campaign. Viewers watch an advertisement while Affectiva's technology evaluates their expressions. By quantifying these unspoken responses in real time, marketers get a better picture of an ad's potential success.

Or, if a car were equipped with technology that tracks a driver's gaze and expressions, el Kaliouby says, it could tell the driver when they are not paying attention to the road. Cars could prevent accidents before they occur simply by being aware of the driver's state of mind and alerting them when they are distracted or sleepy.

From el Kaliouby's perspective, the possibilities for the technology are endless. The longer she works on it, and the more she reflects on all the unspoken information in our exchanges, the more she wonders about important conversations in her own life. How often, she asks herself, did she take people at their word when, had she been more aware of how much is communicated beyond language, she would have realized that what they said and what they meant were two different things?

Sinister things

Of course, endless possibilities for affective technology include dangerous ones, too. In the wrong hands, a tool that reads and interprets human emotions can be used to discriminate, to manipulate, and to profit from data about our feelings.

El Kaliouby and her colleagues at Affectiva have vowed not to allow their tools to be used for security and surveillance purposes, for example. Their intentions have been tested: they have turned down lucrative licensing offers on principle, and she says Affectiva turns away investors interested in developing the technology for police work almost weekly. She understands those companies' argument, that by giving the security industry tools to better understand people she could help make the world a safer place, but el Kaliouby worries there is too much potential for abuse, especially if the technology is not sufficiently nuanced, and it is not yet.

Tech ethics is not a conversation just for the people building the technology.

The technologist is not keeping any secrets. She wants us all to be aware of the dangers of her work, and she believes we should think about how these tools are developed and used, and what that means for the future. She is convinced this is only the beginning for affective technology, and that it will inevitably affect all of us as systems like the one she is working on are integrated into the many devices we use. Tech ethics is not a conversation just for the people building the technology, but for everyone who ultimately uses the products without always understanding the implications.

A treasure trove of data

First and foremost, el Kaliouby argues, users must understand what happens to their facial data and consent when such tools are used. Companies need to be transparent about whether they collect the information and for what purposes; disclosures that are now buried in fine print could be made far more explicit. In cars that currently use Affectiva's technology, for example, facial data is not recorded. But if it were, insurers could start subpoenaing records of drivers' expressions to determine liability in accidents. Police could use it in investigations.

A lot can be done with data, and not all of it is good. Tech intended to improve communication could be used for sinister purposes, just as Facebook, a company whose mission is to connect the world, was used to manipulate elections.

A lot can be done with data, and not all of it is good.

As companies gather ever more information, not just about what we buy or read or talk about, but about what makes our noses wrinkle, what makes us laugh, and when we furrow our brows, we become more and more vulnerable. Companies can get to know us better than we know ourselves, and that is problematic.

Collecting the world's faces

Another potential pitfall of emotion-analysis technology is that it can be reductive. To provide meaningful insight, a nuanced tool has to learn the full range of faces, from countless individuals in all kinds of places. Algorithms built on limited data are biased: they recognize only the kinds of faces they have been exposed to, which can lead machines to generate inaccurate or outright wrong readings. Training a machine to read all faces requires collecting data from many people across many cultures, and that means understanding the range of expressions in different places.

"Progress is a function of how much data we can use and how diverse the data is."

The faces we make are, to an extent, shaped by culture. El Kaliouby and her colleagues have found that there are universal expressions, such as smiles and frowns, but that cultural influences amplify or dampen certain tendencies. For example, they have found that Latin Americans are more expressive than East Asians, and that women generally smile more than men, el Kaliouby says.

But they still need much more information. "Progress is a function of how much data we can use and how diverse the data is," she explains. "We want algorithms to be able to identify more expressions, emotions, genders, everything." Until the data can capture the full range of human expression, there will always be limits to the tool's interpretive capabilities.

The holy grail

Then there is what el Kaliouby calls "the holy grail" of her field: an algorithm that detects sarcasm.

Though sarcasm is often derided as the lowest form of humor, that tone used to deliberately convey a conflicting message is a very sophisticated mode of communication. Sarcasm is a tonal wink. When a tool can understand this layered mode of communication, along with an actual wink, it will count as a triumph of machine learning. How it will register, or demonstrate, that understanding is not yet clear.

Affectiva has integrated voice tonality over the past two years, and el Kaliouby hesitates to guess how long it may take to reach the holy grail. But she says a tool that good, a technology that accurately interprets tone and expression across all cultures and personality types, is still far away.

What el Kaliouby is certain of, however, is that we must be on our guard about this work. She does it with love and good intentions, but that does not necessarily mean we should simply trust her.

"I think you should be a little scared," she advises. "Every technology has potential for good and for abuse."

Correction: An earlier version of this post stated that Kia cars are equipped with Affectiva technology. They are not. However, the tool could be seen in a concept car designed by Kia.

Should This Exist? is a podcast hosted by Caterina Fake that debates how new technologies will affect humanity. For a more in-depth conversation about weighing the human side of technology, subscribe to Should This Exist? on Apple Podcasts.