Voice Emotion


This data source uses a demo based on the formantanalyzer JavaScript library. The algorithm extracts syllables from the Mel-spectrogram of the input audio and feeds relevant statistical features of each syllable into a single-layer neural network that performs the classification (a minimal sketch of this step follows the list below). The model was trained on datasets of actors performing dialogues or improvising, and it predicts the following categories:

  • Happiness

  • Sadness

  • Anger

  • Neutral
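
The sketch below illustrates the classification step only in outline: a single dense layer followed by softmax over syllable features. The feature vector, weights, biases, and label order are hypothetical placeholders and are not formantanalyzer's actual implementation.

```typescript
// Illustrative sketch: single-layer (dense + softmax) classifier over
// statistical features of one syllable. All values here are placeholders.
interface Probabilities {
  Happiness: number;
  Sadness: number;
  Anger: number;
  Neutral: number;
}

function softmax(logits: number[]): number[] {
  const max = Math.max(...logits); // subtract max for numerical stability
  const exps = logits.map((v) => Math.exp(v - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((v) => v / sum);
}

function classifySyllable(
  features: number[],   // statistical features of the syllable's Mel-spectrogram
  weights: number[][],  // one weight row per output label
  biases: number[]      // one bias per output label
): Probabilities {
  const logits = weights.map((row, i) =>
    row.reduce((acc, w, j) => acc + w * features[j], biases[i])
  );
  const [happiness, sadness, anger, neutral] = softmax(logits);
  return { Happiness: happiness, Sadness: sadness, Anger: anger, Neutral: neutral };
}
```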

From these class probabilities, we derive valence and arousal scores, using the relative happiness probability for valence and the neutral probability for arousal.
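
One possible reading of that mapping is sketched below. The exact formulas are not specified here, so treat the normalization over non-neutral classes and the inversion of the neutral probability as assumptions rather than the platform's actual computation.

```typescript
// Assumptions: "relative happiness" is happiness over the total non-neutral
// probability mass, and arousal is 1 - P(Neutral) (a neutral delivery read as
// low arousal). The platform's real formulas may differ.
function toValenceArousal(p: {
  Happiness: number;
  Sadness: number;
  Anger: number;
  Neutral: number;
}): { valence: number; arousal: number } {
  const emotional = p.Happiness + p.Sadness + p.Anger;
  const valence = emotional > 0 ? p.Happiness / emotional : 0.5; // 0.5 = no emotional signal
  const arousal = 1 - p.Neutral;
  return { valence, arousal };
}
```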

Attribution

Speech Emotion Analyzer: GitHub - tabahi/WebSpeechAnalyzer (JS speech analyzer for fast speech analysis and labeling)