
New Technology Connects AI with Emotions—Yours and Its Own


In Short:

Hume AI, a New York startup, has launched an “empathic voice interface” that adds emotional expression to AI models from companies such as Google and OpenAI. Co-founder Alan Cowen says the technology can adapt its tone to match a user’s emotions, responding sympathetically to sad news, for example. While not yet as polished as OpenAI’s offering, it shows promise for creating more humanlike AI interactions.


Hume AI, a startup based in New York, launched its new empathic voice interface today. The technology adds a range of emotionally expressive voices, plus an emotionally attuned ear, to large language models from Anthropic, Google, Meta, Mistral, and OpenAI, hinting at an era in which AI assistants routinely respond with humanlike emotion.

The Vision Behind Hume AI

“We specialize in building empathic personalities that speak in ways people would speak, rather than stereotypes of AI assistants,” says Alan Cowen, co-founder of Hume AI and a psychologist who has researched AI and emotion extensively. Cowen previously worked on emotion-related technologies at Google and Facebook.

Insights from WIRED Testing

WIRED tested Hume’s latest voice technology, called EVI 2, and found its output broadly comparable to that of the voice interface OpenAI developed for ChatGPT. When OpenAI introduced a flirtatious voice for ChatGPT in May, CEO Sam Altman remarked that the interface felt “like AI from the movies.” Actress Scarlett Johansson later alleged that her voice had been misappropriated.

Emotional Responsiveness

In contrast to conventional voice interfaces, Hume is markedly more emotionally expressive. If a user mentions the loss of a pet, for instance, Hume responds in a suitably somber and sympathetic tone. As with ChatGPT, users can also interrupt Hume mid-response, prompting the system to pause and adapt.
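To make the interruption behavior concrete, here is a minimal sketch in Python of how a voice client might handle “barge-in,” with playback of the assistant’s reply cancelled the moment user speech is detected. The voice-activity detection is stubbed out with a timer; none of this reflects Hume’s actual client code.

```python
import asyncio

async def play_reply(text: str) -> None:
    """Stand-in for streaming audio playback, one word at a time."""
    for word in text.split():
        print(word, end=" ", flush=True)
        await asyncio.sleep(0.2)
    print()

async def handle_turn(reply: str, user_spoke: asyncio.Event) -> None:
    """Play a reply, but cancel playback if the user starts talking."""
    playback = asyncio.create_task(play_reply(reply))
    interrupted = asyncio.create_task(user_spoke.wait())
    done, _ = await asyncio.wait(
        {playback, interrupted}, return_when=asyncio.FIRST_COMPLETED
    )
    if interrupted in done:
        playback.cancel()  # pause mid-sentence and yield to the user
        print("\n[interrupted: listening]")

async def main() -> None:
    user_spoke = asyncio.Event()
    # Simulate the user starting to talk half a second into the reply.
    asyncio.get_running_loop().call_later(0.5, user_spoke.set)
    await handle_turn("I am so sorry to hear about your pet.", user_spoke)

asyncio.run(main())
```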

Measuring Emotions

While OpenAI has not said how much its voice interface measures users’ emotions, Hume’s is expressly designed for that purpose. During conversations, Hume’s developer interface displays values for the emotions it detects in the user’s voice, including “determination,” “anxiety,” and “happiness.” Hume can also register shifts in a user’s tone, such as sadness, which ChatGPT appears unable to do.
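As an illustration of how a developer might consume such metrics, the sketch below ranks the emotion scores attached to a single message. The payload shape, including the “emotions,” “name,” and “score” fields, is an assumption for illustration, not Hume’s documented schema.

```python
import json

# Hypothetical payload; the field names are illustrative assumptions.
sample_message = json.dumps({
    "emotions": [
        {"name": "determination", "score": 0.72},
        {"name": "anxiety", "score": 0.15},
        {"name": "happiness", "score": 0.41},
    ]
})

def top_emotions(message: str, n: int = 2) -> list[tuple[str, float]]:
    """Return the n highest-scoring emotion labels in one message."""
    emotions = json.loads(message)["emotions"]
    ranked = sorted(emotions, key=lambda e: e["score"], reverse=True)
    return [(e["name"], e["score"]) for e in ranked[:n]]

print(top_emotions(sample_message))
# -> [('determination', 0.72), ('happiness', 0.41)]
```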

Customization of Emotional Tone

Hume lets developers deploy voices with specific emotional tones simply by prompting for them in its user interface. Examples include Hume AI’s “sexy and flirtatious,” “sad and morose,” and “angry and rude” messages, as sketched below.
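The sketch below shows one way a client could translate tone presets like these into a request configuration. The preset prompts and the config keys are hypothetical stand-ins modeled on the article’s examples, not Hume’s actual API.

```python
# Hypothetical tone presets modeled on the article's examples.
TONE_PROMPTS = {
    "flirtatious": "Speak in a playful, flirtatious tone.",
    "morose": "Speak slowly, in a sad and morose tone.",
    "rude": "Speak in an angry, rude, impatient tone.",
}

def build_voice_config(tone: str) -> dict:
    """Assemble a request body that pins the assistant to one tone."""
    if tone not in TONE_PROMPTS:
        raise ValueError(f"unknown tone: {tone!r}")
    return {
        "system_prompt": TONE_PROMPTS[tone],  # assumed field name
        "voice": {"style": tone},             # assumed field name
    }

print(build_voice_config("morose"))
```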

Potential and Challenges

While promising, the technology was not always as polished as OpenAI’s offering and occasionally produced odd results, such as rapid speech that devolved into gibberish. With further refinement, though, Hume could help make voice interfaces more humanlike and more varied.

Research Background

The idea of recognizing, measuring, and simulating human emotion in technology has been studied for decades in a field known as affective computing, a term popularized in the 1990s by Rosalind Picard, an MIT Media Lab professor.

Academic Perspectives

Albert Salah, a professor at Utrecht University in the Netherlands specializing in affective computing, expressed admiration for Hume AI’s technology, noting that it assigns emotional valence and arousal values to users and adjusts the agent’s speech accordingly. He remarked, “It is a very interesting twist on LLMs.”
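To show what assigning valence and arousal values means in practice, here is a small sketch that collapses categorical emotion scores into a single (valence, arousal) point, loosely following the circumplex model of affect. The per-emotion coordinates are illustrative assumptions, not Hume’s calibration.

```python
# Illustrative (valence, arousal) coordinates per emotion label,
# loosely following the circumplex model of affect.
COORDS = {
    "happiness": (0.8, 0.5),
    "determination": (0.3, 0.6),
    "anxiety": (-0.6, 0.7),
    "sadness": (-0.7, -0.4),
}

def weighted_affect(scores: dict[str, float]) -> tuple[float, float]:
    """Collapse per-emotion scores into one (valence, arousal) point."""
    total = sum(scores.values()) or 1.0
    valence = sum(COORDS[name][0] * s for name, s in scores.items()) / total
    arousal = sum(COORDS[name][1] * s for name, s in scores.items()) / total
    return valence, arousal

# Equal happy/sad weight plus some anxiety nets out mildly negative.
print(weighted_affect({"happiness": 0.4, "anxiety": 0.2, "sadness": 0.4}))
```

An agent could then steer its speaking style toward, or deliberately away from, the point it measures for the user.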

