• SUMMER 2018

    We move to the cloud and add social signal processing to our emotion recognition technology

  • SPRING 2018

    Multimodal neural networks are created and enhanced with speech separation technology

  • WINTER 2018

    The Emotion Miner Corpus dataset is collected, with 140 hours of emotionally colored video content and more than 110,000 annotated fragments

  • AUTUMN 2017

    Emotion Miner, a platform for emotion annotation, is launched

  • SUMMER 2017

    Eye and body trackers are improved

  • SPRING 2017

    First prototypes of next-generation speech recognition technology, a face detector and identifier, and technology for codifying emotionally colored movements

  • WINTER 2017

    We create RAMAS, the first multimodal affective dataset in the Russian language

  • AUTUMN 2016

    We add 22 affective states and social signals to our emotion recognition unit

  • SPRING 2016

    Neurodata Lab is born
