Artificial Intelligence Glossary: Terms You Should Know

Deep learning, machine learning, chatbot, neural networks… Is the vocabulary of artificial intelligence (AI) full of terms and expressions you’ve never heard before? Since AI is all around us and global revenues stemming from it are expected to reach 46 billion dollars by 2020 (1), it’s high time to demystify this complex jargon. Emmanuel Amouretti, Executive Vice President of Living Actor, decodes 12 key expressions to help you better understand the concept of artificial intelligence, its technology and its applications.

 

Chatbot / Bot

A chatbot, also known as a conversational agent or virtual assistant, is a system capable of carrying on a dialogue with users based on conversations that have been scripted upstream. Its role is to respond with maximum relevance to questions that are frequently asked by internet users, clients or personnel. As a result, recurring tasks can be automated, permitting employees to make better use of their time.
For example, in the area of Human Resources, a chatbot can respond to inquiries regarding paid time off or expense reports, allowing the HR team to focus on tasks with stronger value added.


For more information, take a look at our free mini-book on the topic!


By default, a chatbot is a weak interaction tool that only offers responses that have been scripted by a Chatbot Manager, based on the principle “if this, then that.” However, artificial intelligence (and especially machine learning) now enables bots to analyze data and learn from their interactions in order to respond with increasing relevance.
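To make the “if this, then that” principle concrete, here is a minimal Python sketch of a purely scripted bot; the keywords and answers are invented for illustration:

    # Minimal scripted bot: every answer is written in advance by the
    # Chatbot Manager and triggered by a keyword ("if this, then that").
    SCRIPTED_ANSWERS = {
        "paid time off": "You have 12 days of paid time off remaining.",
        "expense report": "Expense reports are submitted through the finance portal.",
    }

    def reply(user_message):
        text = user_message.lower()
        for keyword, answer in SCRIPTED_ANSWERS.items():
            if keyword in text:   # "if this..."
                return answer     # "...then that"
        return "Sorry, I don't know the answer to that yet."

    print(reply("How much paid time off do I have left?"))

Every possible answer above was scripted in advance; nothing is learned from the exchange, which is precisely the limitation that machine learning addresses.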

Chatbot Manager

A Chatbot Manager is the person entrusted with managing the bot. This person, who could also be referred to as the virtual assistant’s coach, is in charge of implementing the chatbot and supervising its day-to-day operations. He or she is also responsible for transferring human skills to the machine, enabling it to reflect the company’s know-how, expertise and values.

Data crunching

Data crunching is the automated analysis of vast amounts of data originating from Big Data. Once imported into a system, the data is sorted, structured, processed and then analyzed in a consistent manner in order to help a machine make informed decisions.
This precise processing is indispensable in order for a system like a chatbot to use the information in a relevant way so that it can respond effectively to specific inquiries. In a certain sense, this is the basis of machine learning.
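As a rough illustration, here is a hypothetical Python sketch of that sorting and structuring step, applied to a few made-up support questions:

    # Hypothetical data-crunching step: raw, messy records are cleaned,
    # de-duplicated and sorted so a system can reason over them consistently.
    raw_records = [
        {"question": "  How do I file an EXPENSE report? ", "views": 340},
        {"question": "How do I file an expense report?", "views": 125},
        {"question": "Where can I see my paid time off balance?", "views": 890},
    ]

    cleaned = {}
    for record in raw_records:
        key = record["question"].strip().lower()               # normalize the text
        cleaned[key] = cleaned.get(key, 0) + record["views"]   # merge duplicates

    # Sort by popularity so the most frequent inquiries are handled first.
    for question, views in sorted(cleaned.items(), key=lambda kv: kv[1], reverse=True):
        print(views, question)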

Deep learning

A subcategory of machine learning, deep learning permits hierarchical learning from large quantities of information. In other words, the machine processes the data in increasing order of complexity, building its own understanding of a reality with the aid of a neural network.
The name deep learning alludes to the fact that the system functions in layers. As Yann Ollivier, Director of Research at the French National Center for Scientific Research (CNRS) and a specialist in artificial intelligence, explains, the results from the first layer of neurons serve as a point of departure for calculating subsequent results (2). In order to distinguish a picture of a cat from a picture of a dog, a machine must compare increasingly complex characteristics based on the lessons learned, for example, the fact that the ears of the two animals are shaped differently.
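The layered idea can be sketched in a few lines of Python with NumPy. The weights below are random rather than trained, so the snippet only illustrates how each layer’s output becomes the next layer’s input:

    import numpy as np

    rng = np.random.default_rng(0)
    layer_sizes = [8, 16, 16, 2]          # input, two hidden layers, output

    x = rng.normal(size=layer_sizes[0])   # e.g. simple features extracted from a picture
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        weights = rng.normal(size=(n_out, n_in))   # random here; learned in practice
        x = np.maximum(0, weights @ x)             # each layer builds on the previous one

    print("Scores for 'cat' vs 'dog':", x)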

Artificial intelligence (AI)

Artificial intelligence can be defined in multiple ways, notably as an automated system capable of analyzing data and making choices autonomously. Indeed, this is what often leads people to link artificial intelligence with chatbots. Two different types of artificial intelligence can be distinguished, based on the extent to which human cognitive functions are replicated:

  • “Weak” AI, in the case of a machine that simulates a specific human behavior
  • “Strong” AI, in the case of a machine that reproduces a human behavior while also grasping its own reasoning (an ability that still lies within the realm of science fiction)

However, keep in mind that, with the aid of machine learning (or automatic learning), artificial intelligence systems are capable of improving autonomously over time.

Machine learning / Automatic learning

Machine learning is one of the building blocks of artificial intelligence. The term refers to a process in which a machine, for example, a chatbot, is endowed with the capacity to learn automatically. As a result, the system develops the ability to decipher the intentions of internet users in order to offer adapted responses and make effective decisions.
Automatic learning enables the machine to improve without continuous support and guidance by employing algorithms based on comparison logic, research and mathematical probability. In this way, the chatbot is capable of acquiring new knowledge on its own by comparing the questions it’s asked and conducting research, whether in third-party databases, contact center conversations or elsewhere. However, except in certain specific cases, this capacity isn’t indispensable: a bot doesn’t necessarily have to be self-learning.
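A very rough Python sketch of that comparison logic, using a simple word-overlap score against a made-up knowledge base (the questions and the 0.3 threshold are purely illustrative):

    # The bot compares a new question with the questions it already knows.
    KNOWN_QUESTIONS = {
        "how many days of paid time off do i have": "pto_balance",
        "how do i submit an expense report": "expense_report",
    }

    def similarity(a, b):
        words_a, words_b = set(a.lower().split()), set(b.lower().split())
        return len(words_a & words_b) / len(words_a | words_b)   # Jaccard index

    def best_match(new_question):
        scored = [(similarity(new_question, q), intent) for q, intent in KNOWN_QUESTIONS.items()]
        score, intent = max(scored)
        return intent if score > 0.3 else None   # below the threshold, the bot asks for help

    print(best_match("How many paid time off days do I still have?"))   # -> pto_balance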

Natural Language Processing (NLP)

Natural Language Processing sits at the crossroads of artificial intelligence, machine learning and linguistics.
It is an essential building block of any conversational system, enabling it to detect users’ intentions through lexical, semantic and syntactic analysis. As such, it can serve to give a machine a voice.
FYI: Natural Language Processing is currently used in voice assistants like Google Home and Amazon Alexa, in automatic translation, and as a means of optimizing the comprehension level of chatbots.
An NLP engine employs a sequence of mathematical processes and comparisons to “clean up” the user’s input, possibly correcting certain errors or applying synonyms, in order to identify all of the information that’s useful for purposes of comprehension. All of this makes it possible to detect the intention, in other words, to understand the needs of the user and possibly the related attributes.

For example…
Request: “I’d like to order a cheese pizza for 2 people for 8:00.”
This request contains:

  • An “intention,” which is “to order to eat”
  • A sequence of information, or “entities,” that allow specifying the intention (a cheese pizza, for 2 people, for 8:00)

The only thing missing is the delivery address, which the chatbot can ask the user. If all of the information is missing, the chatbot will patiently ask all of the necessary questions in order to perform the transaction!
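A toy Python version of this analysis of the pizza request could look like the sketch below; the regular expressions are invented for illustration and stand in for a real NLP engine’s much richer lexical and semantic analysis:

    import re

    def analyse(request):
        text = request.lower()
        intent = "order_food" if "order" in text else "unknown"
        entities = {
            "dish": re.search(r"(cheese pizza|margherita)", text),
            "people": re.search(r"for (\d+) people", text),
            "time": re.search(r"for (\d+:\d{2})", text),
        }
        return {
            "intent": intent,
            "entities": {name: m.group(1) for name, m in entities.items() if m},
        }

    print(analyse("I'd like to order a cheese pizza for 2 people for 8:00."))
    # The delivery address is absent, so the bot's next step is to ask for it.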

Natural Language Understanding (NLU)

A subset of Natural Language Processing, Natural Language Understanding is the processing chain that makes sense of the sentence. This is where the intention is identified (see the example above) through a grammatical analysis of the sentence: identifying a subject and a verb, linking the words to each other and correlating the request with the chatbot’s knowledge base.
However, even if its grammatical analysis is perfect, a chatbot can’t provide responses to inquiries it hasn’t been trained to answer!
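For the grammatical side of this analysis, one common approach in Python is a dependency parser such as spaCy; the snippet below is only an illustration and assumes the library and its small English model have been installed separately:

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("I'd like to order a cheese pizza for 2 people.")

    for token in doc:
        # Each word is linked to its grammatical head: subject, verb, object...
        print(f"{token.text:<10} {token.dep_:<10} head: {token.head.text}")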

Neural networks

The power of a neural network stems not from the individual power of each neuron, but from their combination. This is how the human brain functions: each neuron performs its own simple calculation, and the network formed by all of the neurons multiplies the potential of those calculations.
The neural networks used in artificial intelligence are built on the same principle, with one key difference: the connections between the neurons can be adjusted in order to perform a given task. This technology is particularly useful in predictive analysis, image recognition and speech processing.
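An individual artificial neuron really is that simple: a weighted sum followed by an activation. The Python sketch below uses arbitrary weights, which are precisely the connections a real network adjusts during training:

    def neuron(inputs, weights, bias):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return max(0.0, total)   # ReLU activation: one simple calculation per neuron

    inputs = [0.5, -1.2, 3.0]    # arbitrary input signals
    print(neuron(inputs, weights=[0.8, 0.1, -0.4], bias=0.2))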

Learning scenarios

Learning scenarios are the parameters a person enters into a machine to enable it to make rational decisions.
To be effective, a chatbot must be configured by a Chatbot Manager, based on relevant scenarios that are adapted to users’ recurring inquiries. Made up of multiple branches, this decision tree enables the bot to initiate a dialogue at the right moment and respond to the users as appropriately as possible.
These scenarios do not necessarily rely on artificial intelligence. However, they may leverage machine learning, Natural Language Processing and Natural Language Understanding in order to more accurately detect users’ intentions, personalize the conversations and thereby create engagement.
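As an illustration, a scenario’s branches can be written as a small decision tree; the questions and answers below are made up:

    SCENARIO = {
        "ask": "Is your request about paid time off or expense reports?",
        "branches": {
            "paid time off": {"say": "You can check your balance in the HR portal."},
            "expense reports": {
                "ask": "Is the amount above or below 500 euros?",
                "branches": {
                    "above": {"say": "Reports above 500 euros need a manager's approval."},
                    "below": {"say": "You can submit it directly through the finance portal."},
                },
            },
        },
    }

    def run(node):
        while "ask" in node:
            answer = input(node["ask"] + " ").strip().lower()
            node = node["branches"].get(answer, {"say": "Sorry, I didn't catch that."})
        print(node["say"])

    # run(SCENARIO)  # uncomment to walk through the branches interactively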

Supervised learning / Unsupervised learning

Machine learning can be split into two different models, both of which entail training a machine on a database that’s integrated, structured and then analyzed (data crunching).

  • In supervised learning, the machine relies on human intervention. The person provides the foundations of the machine’s knowledge so it can then understand how to use them and propose improvements, which are systematically validated by a human before being implemented.
  • In unsupervised learning, the machine doesn’t require this human validation component. It performs the research, identifies new knowledge and memorizes it all on its own, as long as the mathematical thresholds supplied to it are respected.
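
In a deliberately simplified Python sketch (the example questions, the word-overlap measure and the 0.3 threshold are all invented), the contrast looks like this:

    # Supervised: a human labels each example, and any new answer the
    # machine proposes would still be validated by a person before going live.
    labelled_examples = [
        ("how do i submit an expense report", "expense_report"),
        ("how many vacation days do i have left", "pto_balance"),
    ]

    # Unsupervised: the machine groups unlabelled questions on its own,
    # as long as they stay within the similarity threshold it was given.
    def overlap(a, b):
        words_a, words_b = set(a.split()), set(b.split())
        return len(words_a & words_b) / len(words_a | words_b)

    unlabelled = [
        "how do i file an expense report",
        "where do i send my expense report",
        "how many vacation days remain",
    ]

    clusters = []
    for question in unlabelled:
        for cluster in clusters:
            if overlap(question, cluster[0]) >= 0.3:   # the supplied threshold
                cluster.append(question)
                break
        else:
            clusters.append([question])

    print(clusters)   # two groups: expense-report questions and a vacation question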

Turing Test

Conceived by mathematician Alan Turing in 1950, the Turing Test consists of evaluating a machine’s capacity to imitate a human being to the extent that it’s indistinguishable from a flesh-and-blood person. The Turing Test is still considered to be the most valid means of judging the level of artificial intelligence attained by a machine.
(1) IDC France, L’IA en pleine croissance (AI in Full Growth), 2017 (in French)
(2) Le Monde, Comment le « deep learning » révolutionne l’intelligence artificielle (How Deep Learning Is Revolutionizing Artificial Intelligence), 2015 (in French)