What Is a Multimodal Humanoid Companion for Customer Service: Chatbot, Virtual Assistant, Avatar, Humanoid Robot

You are welcome to comment on this article on our Medium blog.


Everyone knows that we live in the age of Digital Transformation, Big Data, IoT, AI, and so on. Whatever conference you attend, whatever business magazines you read, you cannot escape it. Every business is currently trying to figure out what trajectory the digital trend will take and how it will play out when applied to real-life scenarios and cases.


At Neurodata Lab, we develop Emotion Recognition software for high-tech industries. We have also analyzed some of the trends that interest us, focusing mostly on certain changes in robotics. Here’s why.


Let’s have a look at Gartner’s report, “Top Trends in the Gartner Hype Cycle for Emerging Technologies, 2017”. At the Peak of Inflated Expectations, we find Virtual Assistants and Smart Robots. The same goes for the “Gartner Market Guide for Conversational Platforms” (20 June 2018). There is a lot of research like this going on right now. So what does it all mean?


Work with Big Data and “smart” systems has become more and more prominent. And, as with every other trend, the bottleneck soon revealed itself: the interface. However smart the system and however “big” or high-quality the data, the result of its work must be clear to the user, and interaction with the system must be convenient. Alongside Digital Transformation, Big Data, and IoT systems, business will increasingly demand a new, highly intuitive, ergonomic interface for communicating with people. The problem of interface solutions will come up more often and more critically, especially when the user is an ordinary person rather than a trained specialist.

We asked ourselves: so what exactly is this interface solution?


We searched for the answer in various case studies, from CES 2018 in particular. Here they are:


- https://www.youtube.com/watch?v=YUQ-UO42FGQ&t=528s – Yup, first and foremost, the video of the conversation with LG Cloi (not the only interesting thing about this video)


- https://www.youtube.com/watch?v=eSvku7qDqwQ – Virtual assistants once again dominate CES 2018


- https://www.youtube.com/watch?v=RNCfzURD31U&t=81s – Honda Brings Robotic Devices to CES 2018


We could go through such cases forever. Above all, they show how much these interface solutions vary, and how hard it is to classify and rank them. Sometimes it is even harder to tell what business task they address and how that task is reflected in their functionality.


A document from PwC, “The Relevance and Emotional Lace” (PwC Digital Services, Holobotics Practice), was a great step toward systematizing the current situation in the industry. It unites Chatbots, Virtual Assistants, and Humanoid Interactive Robots into one type of interface solution called the Humanoid Companion.


Automated Network Intelligence for Multiple Access. Credit: PwC. Available at: https://www.pwc.com/it/it/services/consulting/holobotics/docs/holobotics.pdf


Then the solution is subdivided into two subtypes: Physical Robotics and Online Robotics.


It should be noted that these market segments, while basically dealing with the same agenda (the creation of a high-quality, ergonomic interface solution), are now severely divided.


Each segment has its own specialized manufacturers, suppliers, and analysts. The division into subtypes in the PwC document is based on what hardware forms the core of the solution: a robot, a computer, and so on.


We suggest introducing another dimension that might help us get a clearer picture and a more transparent vision of this type of solution: the type (modality) of human-machine interaction included in the interface solution. Let’s dive into the details. It’s hard to communicate with words alone! There are four levels of interaction that are convenient and ergonomic for a person: words, voice, face, and body. When we communicate, we engage either all of these modalities or just some of them. The same goes for interface systems: there are lots of solutions focused on handling the same agenda, yet they differ in the modalities they include. (A minimal code sketch of this classification appears after the examples below.)

Credit: Neurodata Lab.


Here are some examples:


Chatbot: such solutions use only one modality, words (text interaction).


- https://www.youtube.com/watch?v=Q8skLVLpKg4 – solutions from Next IT


- https://www.youtube.com/watch?v=ngFNefOVt58 – Nanorep’s chatbots


- https://www.youtube.com/watch?v=bR85Ob1Yipo – a chatbot for building chatbots, from Reply.ai


Assistant: assumes the presence of a second modality, voice (speech interaction).


- Sometimes it is simply a talking chatbot (IPsoft) – https://www.youtube.com/watch?v=rPURXuavMSk


- But primarily, these are solutions that allow a full-fledged dialogue (Nuance) – https://www.youtube.com/watch?v=zJnqvo2kzxk


Avatar: these are more complex and rarer solutions; at the moment they can be seen mainly in HR.


- https://vimeo.com/232671912 – Andi the Interview Coach, a multi-modal Skype bot; an interesting solution from Botanic Technologies


- https://www.youtube.com/watch?v=w9PFVIHZ_AY – an HR solution from a Russian start-up (robot recruiter Vera); here is how it looks in operation: https://www.youtube.com/watch?v=W5gl-meCKmE


There are also some, let’s say, intermediate solutions, where, for example, the user writes text while the assistant speaks and moves:


- https://www.youtube.com/watch?v=k31W34IMmB8 – Amelia from IPsoft


- https://www.youtube.com/watch?v=X5CaXaCS6L0 – Lisa from Artificial Solutions


Robot: and finally, our favorite humanoid robots, multimodal communication agents working with all four modalities: text, voice, face, and body.


- https://www.youtube.com/watch?v=Nm5KQrn9gxs&t=6s – Pepper from SoftBank Robotics


- https://www.youtube.com/watch?v=6Zt5u22BQCQ – Promobot from Promobot


- https://www.youtube.com/watch?v=qAEfKRSBtpw – Sanbot from QIHAN Robotics


- https://www.youtube.com/watch?v=20Qh9hYavIo – Cruzr from UBTECH Robotics


Vector by Anki. Credit: Anki.
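
To make this classification concrete, here is a minimal Python sketch of the modality ladder described above. The names (Modality, SOLUTION_MODALITIES, covers) are our own illustration, not any vendor’s API:

from enum import Flag, auto

class Modality(Flag):
    # The four interaction modalities discussed above.
    TEXT = auto()   # words: typed chat interaction
    VOICE = auto()  # voice: spoken dialogue
    FACE = auto()   # face: facial expressions, an animated face
    BODY = auto()   # body: gestures, posture, movement

# Each solution type adds one modality on top of the previous one.
SOLUTION_MODALITIES = {
    "chatbot":   Modality.TEXT,
    "assistant": Modality.TEXT | Modality.VOICE,
    "avatar":    Modality.TEXT | Modality.VOICE | Modality.FACE,
    "robot":     Modality.TEXT | Modality.VOICE | Modality.FACE | Modality.BODY,
}

def covers(solution: str, modality: Modality) -> bool:
    # True if the given solution type includes the given modality.
    return modality in SOLUTION_MODALITIES[solution]

print(covers("assistant", Modality.VOICE))  # True
print(covers("chatbot", Modality.FACE))     # False

Intermediate solutions, like the text-in, voice-and-motion-out assistants mentioned earlier, simply correspond to other combinations of the same four flags.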


What’s important is that including new modalities, on the one hand, opens up new opportunities and, on the other hand, requires extra technology: new, pioneering AI systems, notably in the field of Emotion AI, that work alongside NLP in interface solutions of this type.


This is exactly what Neurodata Lab does: we provide Emotion AI tech products for interface solutions (chatbots, virtual assistants, avatars, and robots).


In the next article, we will discuss how Emotion AI solves NLP problems and why it should be included in interface solutions.


If you want to receive these articles directly, or if you’re interested in the topic and want access to our demo gallery of Emotion AI tech products, contact the author directly:


Olga Serdiukova, Chief Marketing Officer and Partner Development Director, Neurodata Lab LLC


Email: o.serdiukova@neurodatalab.com

LinkedIn: https://www.linkedin.com/in/olgaserdiukova/


Skype: i-olgase

Website: http://www.neurodatalab.com/