Living Actor™ designs the future of Virtual Assistants!

Machine learning, speech recognition, and emotion in human–machine interactions are among the most promising technologies related to artificial intelligence, especially for the virtual assistants of tomorrow. Living Actor™ is involved in several research projects and collaborative developments around these topics to maintain high-end technology and keep a competitive advantage.
The future of Digital Assistants
In the Top 10 strategic technology trends for 2016 and after, Gartner identified artificial intelligence and digital assistants as key to developing digital business opportunities.
The market of Intelligent Virtual Assistants (IVA) is growing considerably. Having simple tools to create IVAs is a priority for companies that want to have a dedicated virtual assistant that they can easily modify and maintain independently.
Since our founding, Living Actor™ has participated in several collaborative research projects aimed at designing and developing a platform that automates the creation, improvement, and maintenance of IVAs.
For example, the project My Presenting Avatar was funded by the DGCIS in partnership with TC-Telecom ParisTech and Lingway. One of the results of this project was the creation of a document text analysis tool that automatically generated a presentation for a Living Actor™ IVA.
The second topic of research underway at Living Actor™ explores the contribution of emotions to human–machine interactions, focusing on animated avatars. Living Actor™ has participated in many research projects on this topic, such as Ilhaire, HUMAINE, and Affective Avatars. According to Luca Rigazio, Director of Engineering at Panasonic Silicon Valley Laboratory, the ability of Virtual Assistants to express emotions and sustain discussions will establish remarkable long-term relationships between humans and machines.
This year, Living Actor™ is a partner in two other research projects that allow us to imagine what tomorrow’s Virtual Assistant will be:
ODISAE project: Design self-learning IVAs
ODISAE (Optimizing Digital Interaction with a Social and Automated Environment) is a collaborative project labeled by Cap Digital.
The goal of this project is to create a linguistic analysis tool for online conversations and to use the results to enhance a customer relationship management system with semantic features. The project is based on Natural Language Processing that analyzes and qualifies the conversations; the system then collects the data to improve the knowledge base.
This project provides a glimpse into the possibilities offered by machine learning, a form of AI that allows computers to learn from data without explicit programming. With such IVAs, it will be possible to understand context and focus on the important aspects of a conversation to automatically improve the knowledge base. Virtual Assistants will become more efficient and proactive, without requiring the intervention of a technical team.
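To make this learn-from-conversation loop concrete, here is a minimal, hypothetical sketch (not ODISAE’s actual implementation — all class and function names below are illustrative): a toy knowledge base matches user questions by bag-of-words similarity and records the queries it fails to answer, so the most frequent unanswered questions become candidates for new entries.

```python
from collections import Counter
import math


def tokenize(text):
    """Very rough tokenizer: lowercase words, punctuation stripped."""
    return [t.lower().strip(".,?!") for t in text.split()]


def cosine_sim(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0


class KnowledgeBase:
    def __init__(self):
        self.entries = []           # (bag-of-words question, answer) pairs
        self.unmatched = Counter()  # questions the IVA failed to answer

    def add(self, question, answer):
        self.entries.append((Counter(tokenize(question)), answer))

    def answer(self, user_text, threshold=0.4):
        """Return the best-matching answer, or None if nothing is close enough."""
        query = Counter(tokenize(user_text))
        best_score, best_answer = 0.0, None
        for bow, ans in self.entries:
            score = cosine_sim(query, bow)
            if score > best_score:
                best_score, best_answer = score, ans
        if best_score >= threshold:
            return best_answer
        # Collect the failed query so the knowledge base can be improved later.
        self.unmatched[user_text.lower()] += 1
        return None

    def improvement_candidates(self, min_count=1):
        """Most frequent unanswered questions: priorities for new KB entries."""
        return [q for q, c in self.unmatched.most_common() if c >= min_count]
```

In a real deployment the similarity step would use proper NLP (lemmatization, semantic embeddings), but the feedback loop — failed queries feeding back into the knowledge base without a technical team in the middle — is the idea the paragraph above describes.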
ARIA-VALUSPA project: Make IVAs more empathic
ARIA-VALUSPA (Artificial Retrieval of Information Assistants – Virtual Agents with Linguistic Understanding, Social skills, and Personalized Aspects) is a collaborative European project within the Horizon 2020 program, led by the University of Nottingham, UK, in partnership with five other universities and Living Actor™.
The purpose of this project is to facilitate the creation of a virtual assistant called “ARIA” capable of multimodal social interaction. Using audio and video input signals, ARIA can detect the user’s verbal dialog as well as nonverbal signals. For example, ARIA can automatically detect a person’s gender, stress level, and mood by analyzing their voice, posture, or facial expressions. ARIA can then respond appropriately, using a sophisticated dialog management system and a realistic model of emotions and personality.
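The multimodal idea can be sketched as a hypothetical late-fusion step (the function names and weights below are illustrative assumptions, not ARIA’s actual API): per-channel emotion scores from, say, voice and facial-expression analyzers are merged into a single weighted estimate, which then drives the agent’s response style.

```python
def fuse_emotions(channel_scores, weights=None):
    """Late fusion: combine per-channel emotion scores, e.g. from a voice
    analyzer and a facial-expression analyzer, into one normalized estimate.

    channel_scores: {"voice": {"stressed": 0.8, ...}, "face": {...}, ...}
    """
    weights = weights or {ch: 1.0 for ch in channel_scores}
    fused = {}
    for channel, scores in channel_scores.items():
        w = weights.get(channel, 1.0)
        for emotion, score in scores.items():
            fused[emotion] = fused.get(emotion, 0.0) + w * score
    total = sum(fused.values())
    # Normalize so the fused scores sum to 1 (a probability-like estimate).
    return {e: s / total for e, s in fused.items()} if total else fused


def choose_response_style(fused, stress_threshold=0.5):
    """An agent like ARIA could adapt its dialog strategy to the user's state."""
    if fused.get("stressed", 0.0) >= stress_threshold:
        return "calm, reassuring tone; shorter sentences"
    return "neutral, informative tone"
```

The hard problems — extracting reliable emotion scores from raw audio and video in the first place — are exactly what the project’s research addresses; this sketch only shows why combining several channels gives a more robust estimate than any single one.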
Tomorrow’s IVAs will be more realistic and capable of expressing emotions. Beyond speaking naturally, they will manage the turn-taking of a conversation with adaptive verbal and nonverbal behaviors. The result will be greater trust, due in large part to the empathy that a Living Actor™ IVA brings to its interactions with users.
Research is important to Living Actor™
By utilizing the full capability of machine learning and incorporating more emotion into their interactions, future Living Actor™ IVAs will meet the growing demands of our clients.
Our collaborations with universities, laboratories, and other companies are real opportunities to develop the best human–machine interactions using Virtual Agents. According to Laurent Durieu, CTO of Living Actor™: “Our collaborative research projects enable us to stay ahead, be on the leading edge of innovation, and be an active player in the growth of the IVA market. Research enhances the value of our products and helps us maintain our competitiveness.”
Living Actor™ continues to improve and anticipate the needs of tomorrow!