Neurodata Lab LLC announces the first prototype of EyeCatcher, a software eye tracker that extracts eye and head movements from video files recorded with an ordinary camera.

This technology breaks new ground in the study of human eye movements in natural conditions and considerably expands research capabilities.

Today's devices for video-based registration of eye movements have several significant drawbacks. Wearable eye trackers are inconvenient for subjects, while stationary trackers mounted under the monitor allow only a limited range of head movements. All eye trackers based on video cameras with infrared illumination are sensitive to lighting conditions, which narrows the experimenter's room for maneuver. In addition, most eye-tracking devices on the market are expensive.

Additional complications arise in studies of human communication: wearable eye trackers distort the interlocutor's perception of the person wearing the device, and when stimuli are presented on a monitor, subjects behave very differently than they would in a natural environment (Reader, Holmes, 2016). Some researchers have found a workaround: they record subjects with a video camera, and gaze direction is then determined by human annotators (Campos et al., 2016; Jarick, Kingstone, 2015). Such an experimental design makes it possible to describe a number of relevant phenomena, but a more detailed study of eye movements remains out of reach.

The EyeCatcher software eye tracker is a new generation of oculography solutions that meets requirements for quality, interface flexibility, reliability, and affordability. Several webcam-based tracking tools already exist (see, for example, Track Eye, GazeTracker, Eyezag, Sticky), but we went further and set ourselves the task of creating a tracking solution capable of extracting eye movements from already-recorded video, one that requires no additional equipment and imposes no time limits on the recording and no mandatory restraint of the subject.

EyeCatcher opens up promising prospects for future research. It makes it possible to plan the collection of large and diverse databases of human oculomotor behavior in natural, everyday conditions, as well as the analysis of our own multimodal RAMAS database.

In the coming months, we will continue improving the solution in subsequent versions: in 2017, we plan to significantly expand its capability to process video recorded under different lighting conditions, camera angles, and frame rates.

Cited literature:

Jarick M., Kingstone A. The duality of gaze: eyes extract and signal social information during sustained cooperative and competitive dyadic gaze // Front. Psychol. 2015. V. 6. Article 1423.

Campos J., Alves-Oliveira P., Paiva A. Looking for conflict: gaze dynamics in a dyadic mixed-motive game // Auton. Agent. Multi-Agent Syst. 2016. V. 30. P. 112–135.

Reader A.T., Holmes N.P. Examining ecological validity in social interaction: problems of visual fidelity, gaze, and social potential // Cult. Brain. 2016. V. 4. P. 134–146.