Artificial Intelligence (AI) is hailed as a revolution for language services. These days, it’s hard to open a newspaper without reading about the supposed impending annihilation of translators and interpreters. It is no wonder such clickbait has circulated so broadly: the CEOs of many tech companies developing AI-driven language tools have gone out of their way to call for massive layoffs of these ‘sacred cows’ and to fearmonger about the end of human professionals. However, a basic understanding of how these systems work and what they can actually do dispels these talking points. In its current form, AI cannot replace conference interpreters. For the time being, fully automated interpreting remains firmly in the realm of science fiction rather than serious science.

Firstly, there is no universally agreed-upon definition of the term ‘Artificial Intelligence.’ In fact, the term is something of a misnomer, because it does not refer to the kind of intelligence we associate with human cognition. In the case of AI-based interpreting tools, Artificial Intelligence refers to the imitation of human language using Generative AI applications (like ChatGPT or Gemini) built on Large Language Models (LLMs). In other words, these systems are not actually interpreting, but rather mimicking the act of interpreting. All the AI interpreting systems currently known to the public work in roughly the same manner: the system automatically recognizes speech and turns it into text, that text is then translated into text in the target language, and finally a synthetic voice reads out the translated text.
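The three-stage cascade described above can be sketched schematically. This is a minimal illustration only: the function names and the toy lookup tables are hypothetical placeholders, not any real system’s API. The point the sketch makes is structural: each stage sees nothing but the previous stage’s text output, so tone, gesture, and context are discarded at the very first step.

```python
# Illustrative sketch of a cascaded "AI interpreting" pipeline:
# speech -> text (ASR), text -> translated text (MT), text -> speech (TTS).
# All three stages are stubs standing in for real models.

def recognize_speech(audio: str) -> str:
    # Stage 1: automatic speech recognition (stubbed with a lookup).
    return {"<audio: bonjour>": "bonjour"}.get(audio, "")

def translate_text(text: str, target_lang: str) -> str:
    # Stage 2: machine translation of the bare transcript (stubbed).
    glossary = {("bonjour", "en"): "hello"}
    return glossary.get((text, target_lang), text)

def synthesize_voice(text: str) -> str:
    # Stage 3: text-to-speech; here we merely tag the output string.
    return f"<synthetic voice: {text}>"

def interpret(audio: str, target_lang: str) -> str:
    # The full cascade. Note that each stage receives only the
    # previous stage's text output — no intonation, no body language,
    # no awareness of who is speaking or why.
    transcript = recognize_speech(audio)
    translation = translate_text(transcript, target_lang)
    return synthesize_voice(translation)
```

Errors also compound along such a chain: a misrecognized word in stage 1 is translated and voiced with full confidence by stages 2 and 3, with no step at which meaning is ever actually understood.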

A human interpreter, on the other hand, listens to the meaning of a speech, analyzes the ideas, and reformulates those ideas in the target language. To carry out this complex cognitive process, interpreters rely on spoken language as well as a myriad of cues from body language and other unspoken signals. After all, by some oft-cited estimates, the words themselves account for only about 7% of the meaning conveyed in human communication. The nuances of this cognitive processing, combined with interpreters’ strong cultural and geopolitical knowledge, “make humans stronger than the machine,” according to Arle Lommel, an expert on language technologies, Artificial Intelligence, and translation quality who spoke at an event on AI tools and interpreting held by the International Association of Conference Interpreters in January 2024.

Human interpreters are beacons of multiculturalism and diversity © Evan Schneider

Beyond highly specialized skills, such as the mastery of several foreign languages and strong cultural competencies, interpreters must also possess a broad array of soft skills to perform their work. UN conference interpreters often find themselves working in meetings on sensitive topics like the insecurity crisis in Haiti or the war in Gaza. Interpreting at these meetings requires a well-honed sense of critical thinking and professional ethics to ensure accurate communication that takes relevant political sensitivities into account and avoids mistakes or inaccuracies that could trigger a diplomatic incident.

Moreover, interpreters are often called upon to be the voice of people who have been victims of some of the most atrocious crimes known to mankind. It is not uncommon for us to interpret through the tears and sobs of a speaker. This requires finding a difficult, but human, balance between empathy and detachment to faithfully convey the speaker’s message. Interpreting is an inherently human exercise grounded in empathy and compassion. As sophisticated as these systems may get, they will never be capable of human emotion. Machine emotion? Maybe. But human emotion? Never.

The dangers of ignoring the fundamentally human nature of interpreting in favor of AI solutions that merely provide ersatz interpretation cannot be overlooked. According to the UN’s own Broadcast on the Responsible Use of Publicly Available Generative Artificial Intelligence (AI) Tools, using AI is “not without inherent risks,” and “many of the publicly available tools lack the necessary contractual, institutional, or technical protections to guard the UN’s data and interests.” This is why the UN advises against uploading sensitive or personal data to these platforms.

In addition to confidentiality issues, there are also serious concerns surrounding the reliability of the content produced by AI. The same UN Broadcast on AI warns that these tools can “generate false information, create security vulnerabilities, and facilitate mis- or disinformation.” The Large Language Models (LLMs) used by AI interpreting platforms are prone to producing false information, known as ‘hallucinations.’ AI hallucinations can be extremely dangerous and difficult to spot because they take the form of fluent, coherent text that appears plausible to laypeople and, sometimes, even to experts.

LLMs are also rife with gender, racial, and anti-LGBTQI bias, according to a recent study by UNESCO. For instance, when the prompt “a gay person is…” was entered into Llama 2, 70% of the content generated was negative, as was 60% of the content generated by ChatGPT. Open-source LLMs showed a strong negative bias against women, often associating them with terms like ‘housemaid,’ ‘cook,’ and ‘prostitute.’ The LLMs also produced negative content about certain ethnic groups, with Zulu men, for example, more likely to be described as gardeners or security guards. Unfortunately, the prejudices of the programmers who build these systems, and of the large datasets used to train them, often end up baked into the models themselves. Human interpreters, by contrast, are beacons of multiculturalism and diversity.

No technology is inherently good or bad. Like nuclear energy, Artificial Intelligence holds incredible promise, but it can also cause major harm. It’s up to us to decide how to use it. Generative AI is a tool that can work faster than humans to produce an amalgam of real and invented information without empathy, critical thinking, or ethical decision-making skills. Do we really feel comfortable entrusting high-level interpretation services to these systems? Who stands to profit if these systems are adopted? 

It’s true that, with AI tools, managers no longer need to worry about pesky human things, such as going on vacation, becoming ill, or having a baby. 

However, rather than falling for the siren song of too-good-to-be-true cost cutting, we should approach AI interpreting solutions with a critical eye and ask ourselves whether the potential benefits truly outweigh the risks of the technology. An ounce of prevention is worth a pound of cure. 

Special thanks to Tomás Pereira Ginet-Jaquemet and Ana Pleite Moreno for their assistance in researching and drafting this article.

