In 1998, while looking for topics for her Master’s thesis at the American University in Cairo, [Rana] el Kaliouby stumbled upon a book by MIT researcher Rosalind Picard. It argued that, since emotions play a large role in human decision-making, machines would require emotional intelligence if they were to truly understand human needs. El Kaliouby was captivated by the idea that feelings could be measured, analyzed, and used to design systems that genuinely connect with people.
Today, el Kaliouby is the CEO of Affectiva, a company building the kind of emotionally intelligent AI systems Picard envisioned two decades ago. Affectiva’s software measures a user’s emotional response with algorithms that identify key facial landmarks and analyze the pixels in those regions to classify facial expressions. Combinations of those facial expressions are then mapped to one of seven emotions, as well as to complex cognitive states such as drowsiness and distraction. Separate algorithms analyze voice patterns and inflections.
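The final step of that pipeline, mapping combinations of detected facial expressions onto discrete emotions, can be sketched in a few lines. This is a purely illustrative toy, not Affectiva's actual method or API: the expression names and the seven-emotion rule table below are assumptions (a common Ekman-style pairing), and the classifier that would normally analyze pixels per landmark region is replaced by a pass-through stub.

```python
# Illustrative sketch of an expression-to-emotion mapping stage.
# All names and rules here are hypothetical, not Affectiva's.

# Each rule pairs a combination of facial expressions with one of
# seven emotions (an assumed Ekman-style mapping, for illustration).
EMOTION_RULES = {
    frozenset({"brow_furrow", "lip_press"}): "anger",
    frozenset({"nose_wrinkle", "upper_lip_raise"}): "disgust",
    frozenset({"brow_raise", "eye_widen", "jaw_drop"}): "fear",
    frozenset({"smile", "cheek_raise"}): "joy",
    frozenset({"brow_furrow", "lip_corner_depress"}): "sadness",
    frozenset({"brow_raise", "jaw_drop"}): "surprise",
    frozenset({"unilateral_lip_corner_pull"}): "contempt",
}

def classify_expressions(landmark_regions):
    """Stand-in for the per-region pixel-analysis step: in a real system,
    each landmark region's pixels would feed a trained classifier; here
    we simply pass through pre-labelled expression names."""
    return frozenset(landmark_regions)

def map_to_emotion(expressions):
    """Map a combination of facial expressions to one of seven emotions,
    checking the most specific rules first and falling back to 'neutral'
    when no rule matches."""
    for combo, emotion in sorted(EMOTION_RULES.items(),
                                 key=lambda kv: -len(kv[0])):
        if combo <= expressions:  # every expression in the rule is present
            return emotion
    return "neutral"
```

A production system would replace the rule table with a model trained on labelled video, but the structure is the same: expressions in, emotion label out.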
Affectiva’s software allows market researchers to gauge viewers’ responses to ads and TV shows. It powers furry social robots that help children stay engaged in learning. And, in the near future, it will allow cars to detect when drivers are dozing off.
[E]l Kaliouby and others in the affective computing field envision a world where technologies respond to user frustration or boredom, or even help alleviate human suffering.
Read full, original post: Can AI Learn to Understand Emotions?