Science
  • Robust algorithm for remote photoplethysmography in realistic conditions (In press)

    M. Artemyev, M. Churikova, M. Grinenko, O. Perepelkina. Robust algorithm for remote photoplethysmography in realistic conditions. A generic rPPG pipeline is sketched after this list.

  • Manual annotations of emotional videos: The effects of annotators’ moods (In press)

    O. Perepelkina, M. Konstantinova, D. Lyusin. Manual annotations of emotional videos: The effects of annotators’ moods.

  • Social and Emotion AI: The Potential for Industry Impact (for ACII 2019)

    The goal of this paper is to provide an account of the current progress of Social and Emotion AI, from their earliest pioneering stages to the maturity necessary to attract industrial interest.

  • End-to-End Emotion Recognition From Speech With Deep Frame Embeddings And Neutral Speech Handling

    In this paper, we present a novel approach to improving machine learning techniques for emotion recognition from speech, built around deep frame embeddings and dedicated handling of neutral speech. A generic frame-embedding model is sketched after this list.

  • Multimodal Approach to Engagement and Disengagement Detection with Highly Imbalanced In-the-Wild Data (for ICMI 2018)

    In this paper, we describe different approaches to building engagement/disengagement models that work with highly imbalanced multimodal data from natural conversations. One standard way of handling such imbalance is sketched after this list.

  • Automatic detection of multi-speaker fragments with high time resolution (for Interspeech 2018)

    The proposed method demonstrates highly accurate results and may be used for speech segmentation, speaker tracking, content analysis such as conflict detection, and other practical purposes.

  • RAMAS: Russian Multimodal Corpus of Dyadic Interaction for Affective Computing (for SPECOM 2018)

    RAMAS is an open database that provides the research community with multimodal data on the interrelation of faces, speech, gestures, and physiology. Such material is useful for a wide range of studies and for the development of automatic affective systems.

  • Recognition of mixed facial emotion has correlates in eye movement parameters (for ESCAN 2018)

    The aim of this study was to investigate the specificity of eye movement parameters during a mixed facial emotion recognition task.

  • Classification of affective and social behaviors in public interaction for affective computing and social signal processing

    There are numerous models for classifying affective states and describing social behavior. Despite their proven reliability, some of these classifications turn out to be redundant, while others are insufficient for certain practical purposes. In this paper, we propose a classification that describes human behavior in the course of public interaction.
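
The following is a minimal, illustrative sketch of a generic remote photoplethysmography (rPPG) pipeline related to the first paper above. It is not the algorithm from the paper: it assumes a precomputed series of per-frame mean green-channel values from a tracked face region and a known video frame rate, and estimates heart rate from the dominant frequency in the pulse band.

    # Minimal sketch of a generic rPPG pipeline (not the paper's algorithm).
    import numpy as np
    from scipy.signal import butter, filtfilt, detrend

    def estimate_heart_rate(green_means, fps):
        """green_means: 1-D array of per-frame mean green values of the face ROI.
        fps: video frame rate in Hz. Returns the estimated heart rate in beats/min."""
        signal = detrend(np.asarray(green_means, dtype=float))  # remove slow drift
        # Keep only plausible pulse frequencies (about 42-240 beats per minute).
        b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
        pulse = filtfilt(b, a, signal)
        # Pick the dominant frequency in the band from the FFT magnitude spectrum.
        spectrum = np.abs(np.fft.rfft(pulse))
        freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)
        return freqs[band][np.argmax(spectrum[band])] * 60.0     # Hz -> beats/min

A full system for realistic conditions, which is the focus of the paper, would also need face tracking, illumination compensation, and motion robustness; this sketch covers only the signal-processing core.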
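
For the end-to-end speech emotion paper above, here is a minimal sketch of the general frame-embedding idea, not the paper's model: per-frame acoustic features are encoded into frame embeddings, pooled over time, and classified. The layer sizes, feature dimension, and number of emotion classes are assumptions made for the example.

    # Minimal sketch of a frame-embedding classifier (PyTorch; not the paper's model).
    import torch
    import torch.nn as nn

    class FrameEmbeddingClassifier(nn.Module):
        def __init__(self, n_features=40, emb_dim=128, n_emotions=5):
            super().__init__()
            # Shared per-frame encoder producing one embedding per frame.
            self.frame_encoder = nn.Sequential(
                nn.Linear(n_features, emb_dim), nn.ReLU(),
                nn.Linear(emb_dim, emb_dim), nn.ReLU(),
            )
            self.classifier = nn.Linear(emb_dim, n_emotions)

        def forward(self, frames):                 # frames: (batch, time, n_features)
            emb = self.frame_encoder(frames)       # (batch, time, emb_dim)
            utterance = emb.mean(dim=1)            # average-pool embeddings over time
            return self.classifier(utterance)      # (batch, n_emotions) logits

    # Example: 8 utterances, 300 frames each, 40 acoustic features per frame.
    logits = FrameEmbeddingClassifier()(torch.randn(8, 300, 40))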
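
For the engagement/disengagement paper above, this sketch shows one standard way of handling heavily imbalanced labels, class re-weighting. It is a generic illustration rather than the approach taken in the paper, and the synthetic features stand in for already-fused multimodal descriptors.

    # Minimal sketch of class re-weighting for imbalanced labels (not the paper's method).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.utils.class_weight import compute_class_weight

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 32))               # stand-in for fused multimodal features
    y = (rng.random(1000) < 0.05).astype(int)     # ~5% "disengaged": heavily imbalanced

    weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
    print(dict(zip([0, 1], weights)))             # the rare class gets a much larger weight

    # Equivalent shortcut: let the classifier re-weight samples internally.
    clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)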

Demo
See how it works: