Publications
  • 09/03/2019
    Social and Emotion AI: The Potential for Industry Impact (for ACII 2019)

    The goal of this paper is to provide an account of the current progress of Social and Emotion AI, from their earliest pioneering stages to the maturity necessary to attract industrial interest.

  • 12/10/2019
    Robust algorithm for remote photoplethysmography in realistic conditions (In press)

    M. Artemyev, M. Churikova, M. Grinenko, O. Perepelkina. Robust algorithm for remote photoplethysmography in realistic conditions.
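
    For context, remote photoplethysmography (rPPG) recovers the cardiac pulse from subtle color changes in facial skin captured on ordinary video. A minimal sketch of the classic green-channel baseline follows; this is not the robust algorithm proposed in the paper, and the function name, ROI convention, and filter band are illustrative assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def estimate_heart_rate(frames, fps, roi):
            """Estimate heart rate (BPM) from face video (hypothetical helper).

            frames: array of shape (T, H, W, 3), RGB
            fps:    video frame rate in Hz
            roi:    (y0, y1, x0, x1) bounding box of a skin region
            """
            y0, y1, x0, x1 = roi
            # The blood-volume pulse modulates skin reflectance most visibly
            # in the green channel, so average it over the skin region.
            signal = frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2)).astype(float)
            signal -= signal.mean()

            # Band-pass to a plausible heart-rate range, 0.7-4 Hz (42-240 BPM).
            b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
            filtered = filtfilt(b, a, signal)

            # The dominant spectral peak gives the pulse frequency.
            spectrum = np.abs(np.fft.rfft(filtered))
            freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
            return freqs[np.argmax(spectrum)] * 60.0  # beats per minute

    Realistic conditions such as motion and lighting changes break this simple baseline, which is the setting the paper targets.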

  • 12/10/2019
    Manual annotations of emotional videos: The effects of annotators’ moods (In press)

    O. Perepelkina, M. Konstantinova, D. Lyusin. Manual annotations of emotional videos: The effects of annotators’ moods.

  • 02/02/2019
    End-to-End Emotion Recognition From Speech With Deep Frame Embeddings And Neutral Speech Handling

    In this paper we present a novel approach to improving machine learning techniques for emotion recognition from speech.

  • 03/19/2018
    Classification of affective and social behaviors in public interaction for affective computing and social signal processing

    There are numerous models for affective state classification and social behavior description. Despite their proven reliability, some of these classifications turn out to be redundant, while others prove insufficient for…

  • 10/16/2018
    Multimodal Approach to Engagement and Disengagement Detection with Highly Imbalanced In-the-Wild Data (for ICMI 2018)

    In this paper we describe different approaches to building engagement/disengagement models working with highly imbalanced multimodal data from natural conversations.
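
    As background on the imbalance problem: one standard remedy, not necessarily the one used in the paper, is to reweight the training loss by inverse class frequency so that rare disengagement examples are not drowned out by the majority class. A minimal scikit-learn sketch with synthetic stand-in data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.utils.class_weight import compute_class_weight

        # Synthetic stand-in: 1 = disengagement (rare), 0 = engagement (common).
        y_train = np.array([0] * 950 + [1] * 50)
        X_train = np.random.randn(len(y_train), 16)  # placeholder features

        # "balanced" weights each class inversely to its frequency, so the
        # rare class contributes as much to the loss as the common one.
        weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y_train)
        clf = LogisticRegression(class_weight={0: weights[0], 1: weights[1]})
        clf.fit(X_train, y_train)

    Resampling (oversampling the minority class or undersampling the majority) is the other common family of remedies for such imbalance.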

  • 07/19/2018
    Recognition of mixed facial emotion has correlates in eye movement parameters (for ESCAN 2018)

    The aim of this study was to investigate the specificity of eye movement parameters during a mixed facial emotion recognition task.

  • 09/02/2018
    Automatic detection of multi-speaker fragments with high time resolution (for Interspeech 2018)

    The proposed method demonstrates highly accurate results and may be used for speech segmentation, speaker tracking, content analysis such as conflict detection, and other practical purposes.

  • 08/25/2018
    RAMAS: Russian Multimodal Corpus of Dyadic Interaction for Affective Computing (for SPECOM 2018)

    RAMAS is an open database that provides the research community with multimodal data on the interrelation of faces, speech, gestures, and physiology. Such material is useful for various investigations and automatic affective systems…