Whether it is by helping us complete a mission in outer space, as in Kubrick’s 2001: A Space Odyssey, or simply by taking out the trash, as in Hanna-Barbera’s The Jetsons, the media have long depicted a world where technology can help us become more efficient. It did not take long before this reliance – or perhaps overreliance – on Artificial Intelligence, which had always been a central theme in the genre, was taken up and critically discussed in what was to become one of the most iconic sci-fi television shows ever aired: Roddenberry’s Star Trek. In an episode entitled ‘The Ultimate Computer,’ the crew of the much-acclaimed starship Enterprise grapples with a new AI-powered technology designed to replace human beings on starships.

The dialogue, albeit peppered with corny jokes, is not entirely dissimilar to some of the discussions taking place today about Artificial Intelligence and its possible impact on multilingual diplomacy, including on conference interpreters. A perhaps overly cynical take on the issue would be to argue that conference interpreters are solely and exclusively concerned with preserving their status and privileges, while companies are only interested in the bottom line this technology can generate for them and their shareholders. Conversely, a perhaps naïve take would be to argue that both conference interpreters and companies are only interested in providing the best possible service to their users. The truth, as so often, most likely lies somewhere in between. The question that arises for me, therefore, is whether it is possible to reframe the discussion so that it becomes less acrimonious and more constructive. In that sense, I would like to offer my thoughts on a few aspects of what has, at times, become a rather heated debate, and for that I’ll start from some considerations made some 60 years ago by Gene Roddenberry, a screenwriter honored with a star on the Hollywood Walk of Fame.

Faced with the prospect of being replaced by a machine as the starship Enterprise’s captain, Kirk starts doubting his own, very critical view of this technology. The quick-witted medical officer, Dr. McCoy, points out that: “We’re all sorry for the other guy when he loses his job to a machine. When it comes to your job, that’s different. And it always will be different.” There is merit to the argument about the difference between purely academic discourse and talking about something when you have skin in the game. In the latter case, it is not uncommon for people to become defensive and perhaps even go on the counter-offensive against what they perceive as a personal attack. Today’s discourse about the potential and limitations of Artificial Intelligence when it comes to taking over conference interpreters’ jobs illustrates this quite well. On the one hand, the companies developing this technology are quick to label those reluctant to fully embrace it as backward, closed-minded, and merely worried about the loss of prestige that comes with their job.

In this, they sound much like the inventor of the ultimate computer, Dr. Daystrom, who challenges Captain Kirk’s skepticism: “Perhaps you object to the possible loss of prestige and ceremony accorded a starship captain. A computer can do your job without all that.” On the other hand, there are the practitioners, the conference interpreters who, confronted with a perceived threat, are quick to point out the real-world limitations and at times near-comical failures of this new technology, or simply invoke strawman arguments. In the episode, it is Captain Kirk who concedes that, “it can work a thousand, a million times faster than the human brain,” but quickly adds that, “it can’t make a value judgment. It has no intuition. It can’t think.” While the benefit of a captain’s ability to make value judgments seems straightforward, it is rather questionable to what extent a captain’s intuition will lead to a better outcome when engaging the Klingons.

It is undeniable, and this is something Captain Kirk seemed to know 60 years ago – or actually, some 200 years in the future – that machines will outperform the human brain in certain areas. It is equally undeniable, and not novel either, that Artificial Intelligence lacks intuition, is unable to make value judgments and cannot, for all intents and purposes, think. Human intelligence and Artificial Intelligence each seem to be marked by specific strengths: the human interpreter can read (or listen) between the lines, and can easily detect and convey attempts at humor, cynicism, and a whole range of emotions communicated through language, including gestures and facial expressions. All other things being equal, therefore, the human interpreter will probably be much more attuned to a speaker. Artificial Intelligence, on the other hand, will more quickly and more reliably find one-to-one equivalents, set phrases or quotations, and ensure literal accuracy where it is needed or sought. This is also where some potentially promising developments are heading: rather than aiming for the replacement of human interpreters or the total rejection of new technologies in the booth, they aim for the augmentation of human interpreters by combining the two.

As simple as this solution of bringing together human and machine intelligence might sound – and no, we are not talking about cybernetic implants to create cyborg-like interpreters akin to one of the Star Trek universe’s villains, the Borg – it is anything but trivial. As conference interpreters are already busy listening to the speaker and to themselves (to monitor their own output), they cannot possibly receive additional information on the auditory channel. The visual channel, on the other hand, is often equally occupied, as interpreters look at the speaker’s gestures, their facial expressions, their PowerPoint presentations or a manuscript they might have received. Providing information on this channel, then, means that interpreters will need to shift their attention to competing visual information. Such shifts come at a cognitive cost that can only be offset if the information provided by the artificial boothmate arrives in a timely manner and is highly reliable. If these cues do not arrive at the right moment, or if they require the interpreter to check their plausibility and perhaps even correct them (and therefore engage in additional processing), then the overall effort will be greater than that invested without this technology, and the output less accurate and reliable.

These interfaces are not in the realm of science fiction: they are being developed and tested. At the University of Geneva, the Faculty of Translation and Interpreting (FTI), supported by the European Parliament’s Directorate-General for Logistics and Interpretation for Conferences (DG LINC), is currently studying the cognitive implications of the use of these technologies, with the aim of injecting reliable empirical data into a debate all too often devoid of it. We hope that these results will help reframe the discourse to be more constructive. Having said that, just as the final decision on whether to turn over the keys to the USS Enterprise (although I am not sure warp cores actually have an ignition) to the ultimate computer lies with Starfleet Command, it will be up to the international institutions and organizations concerned, and ultimately to the users, to decide whether human conference interpreters should hang up their headsets and plug in a computer in their stead.

As a final thought to this journey to the final frontier, I would like to offer the wisdom of the USS Enterprise’s science officer, Mr. Spock, who in the episode’s final scene reiterates, “I simply maintain that computers are more efficient than human beings, not better.” On that note, live long and prosper. 

