
Hume AI

AI voice assistant platform.

Description

Hume AI develops AI voice assistant solutions that focus on understanding and responding to human emotions. Drawing on insights from emotion science, Hume AI aims to create technology that can interpret and react to emotional cues, thereby improving user experiences.

  1. Empathic Voice Interface (EVI):
    • Conversational Voice API: Powered by empathic AI, this API enables applications to respond with empathy. It measures nuanced vocal modulations and uses them to guide language and speech generation (a minimal client sketch appears after this list).
    • Empathic Large Language Model (eLLM): Trained on millions of human interactions, this model combines language modeling and text-to-speech with enhanced emotional intelligence, prosody, end-of-turn detection, and interruptibility.
  2. Expression Measurement API:
    • Vocal and Facial Expression Interpretation: This API captures nuances in vocal and facial expressions from audio, video, and images. It can detect subtle emotional cues such as awkward laughter, sighs of relief, and nostalgic glances (see the second sketch after this list).
  3. Custom Model API:
    • Predictive Capabilities: Using transfer learning from Hume AI’s expression measurement models and eLLMs, this API supports building custom predictive insights. It can predict outcomes more accurately than analysis of language alone (see the final sketch after this list).
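
As an illustration of how an application might talk to a conversational voice interface like EVI, here is a minimal streaming-client sketch. The endpoint URL, authentication via a query parameter, and the message field names (audio_input, assistant_message, prosody_scores, audio_output) are assumptions made for this example, not Hume AI's documented interface.

```python
# Minimal sketch of a client for a conversational voice API such as EVI.
# The endpoint URL, auth scheme, and message shapes are illustrative
# assumptions, not the documented Hume AI interface.
import asyncio
import base64
import json

import websockets  # third-party: pip install websockets

EVI_URL = "wss://api.example.com/evi/chat"  # hypothetical endpoint
API_KEY = "your-api-key"                    # supplied by the platform


async def converse(audio_chunk: bytes) -> None:
    """Send one chunk of microphone audio and print the assistant's reply events."""
    url = f"{EVI_URL}?api_key={API_KEY}"    # assumed auth via query parameter
    async with websockets.connect(url) as ws:
        # Stream user audio as a base64-encoded message (assumed message shape).
        await ws.send(json.dumps({
            "type": "audio_input",
            "data": base64.b64encode(audio_chunk).decode("ascii"),
        }))
        # Read back events: transcripts, emotion/prosody scores, synthesized speech.
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "assistant_message":
                print(event.get("text"), event.get("prosody_scores"))
            elif event.get("type") == "audio_output":
                break  # audio playback handling omitted for brevity


if __name__ == "__main__":
    asyncio.run(converse(b"\x00" * 3200))  # placeholder PCM frame
```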
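
A batch call to an expression-measurement style endpoint might look like the following sketch. The URL, auth header, form fields, and response shape are illustrative assumptions rather than the documented Expression Measurement API.

```python
# Minimal sketch of submitting a media file for expression measurement.
# The URL, field names, and response shape are assumptions for illustration.
import requests

MEASURE_URL = "https://api.example.com/expression/batch"  # hypothetical endpoint
API_KEY = "your-api-key"


def measure_expressions(path: str) -> dict:
    """Upload an audio/video/image file and return emotion scores."""
    with open(path, "rb") as f:
        resp = requests.post(
            MEASURE_URL,
            headers={"X-API-Key": API_KEY},   # assumed auth header
            files={"file": f},                # media to analyze
            data={"models": "face,prosody"},  # assumed model-selection field
            timeout=60,
        )
    resp.raise_for_status()
    # e.g. {"predictions": [{"emotion": "relief", "score": 0.82}, ...]}
    return resp.json()


if __name__ == "__main__":
    print(measure_expressions("interview_clip.mp4"))
```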
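
Finally, the transfer-learning idea behind the Custom Model API can be sketched as fitting a small predictor on top of pretrained expression features. The get_expression_embedding helper and the outcome labels below are hypothetical stand-ins for the platform's expression output and an application-specific target.

```python
# Sketch of the "custom model" idea: reuse pretrained expression embeddings
# as features and fit a lightweight predictor on labeled outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression


def get_expression_embedding(path: str) -> np.ndarray:
    """Placeholder: would call an expression-measurement API and return a feature vector."""
    rng = np.random.default_rng(abs(hash(path)) % (2**32))
    return rng.normal(size=48)  # e.g. 48 emotion-dimension scores


# Labeled examples: media files paired with an outcome to predict
# (e.g. whether a support call was resolved).
files = ["call_001.wav", "call_002.wav", "call_003.wav", "call_004.wav"]
labels = [1, 0, 1, 0]

X = np.stack([get_expression_embedding(f) for f in files])
clf = LogisticRegression().fit(X, labels)  # small head on frozen expression features

print(clf.predict_proba(X[:1]))  # predicted outcome probability for one example
```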