Machine-aided translation
A century-long quest to streamline communications across languages 
A woman speaks to a small crowd of 1964 World’s Fair visitors at the Russian-to-English translation exhibit, powered by an IBM 1460, while displaying a record-sized optical disk

Language is the linchpin of human communication and connection — assuming all interested parties understand each other. Tools harnessing artificial intelligence have made great strides recently in bridging language divides, streamlining the exchange of ideas, goods and services around the globe.

Machine-aided translation has become an industry unto itself — and the results are all around us. It’s become possible to toggle between languages on a website with a click, to speak English into a smartphone and hear a Finnish translation echo back within seconds, even to scan a street sign in a foreign land and see it magically appear on your screen in your native tongue.

A century ago, however, such innovations were the realm of science fiction, and language barriers posed a persistent and pressing problem. At the close of the First World War, IBM President Thomas J. Watson witnessed firsthand how negotiations between global political leaders would start out amicably only to get bogged down in misunderstandings. He instinctively felt that technology could help and challenged a development team to find a solution, setting the company on a quest to pioneer machine-aided translation that continues to this day.

The dawn of automated translation
‘A quality of baby talk that amazes and delights linguistic scholars’

In 1927, IBM introduced its first simultaneous translation system using a design patented by the British engineer, inventor and translator Alan Gordon Finlay and Edward Filene, an American retail magnate who would later be instrumental in the creation of America’s credit unions. The system, known as the Filene-Finlay simultaneous translator, consisted of little more than a set of headphones and a switchboard that connected listeners to human translators interpreting speeches from soundproofed booths.

In the mid-1940s, the company unveiled the IBM Wireless Translation System, also known as the Simultaneous Interpretation System. Combining radio technology with human translators, it provided listeners with portable access to a wider selection of languages. Within a few years, it was serving a number of high-profile clients, including the United Nations, the Olympic Games Committee, the Red Cross and the International Military Tribunal during the Nuremberg trials.

In 1954, IBM introduced a translation solution that lessened the reliance on humans. The 701 Electronic Data Processing Machine, the company’s first commercial scientific computer, ran an experimental software program to automatically translate Russian into English. It dramatically increased translation speed but presented a new set of challenges. Human interpreters use experience and instincts to reflect context, idioms, humor and structure in their translations. In an effort to replicate such nuance, the IBM team designed a program that incorporated algorithms based on grammatical and semantic logic. The results were thrilling but far from perfect. In 1959, for instance, IBM Research News, the company’s monthly newsletter about scientific developments, described how IBM’s computer had begun to produce “a quality of ‘baby talk’ that amazes and delights linguistic scholars.”
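To make that approach concrete: stripped to its essentials, a rule-based translator of this era amounted to a bilingual glossary plus hand-written rules for resolving ambiguous words. The sketch below is only a toy illustration in that spirit; the vocabulary, the single context rule and the sample sentence are invented for clarity and are not drawn from the actual 701 program.

```python
# Toy sketch of 1950s-style rule-based translation: a transliterated
# Russian-to-English glossary plus one hand-written rule that picks a word
# sense from context. Purely illustrative; not the 1954 IBM 701 program.

GLOSSARY = {
    "nauka": ["science"],
    "o": ["about", "of"],      # ambiguous word: needs a context rule
    "yazyke": ["language"],
    "vazhna": ["is important"],
}

def translate(sentence: str) -> str:
    """Translate word by word, applying one context rule for 'o'."""
    words = sentence.lower().split()
    output = []
    for i, word in enumerate(words):
        candidates = GLOSSARY.get(word, [f"[{word}?]"])  # unknown words pass through, flagged
        choice = candidates[0]
        # Rule: render "o" as "of" when it sits between two other words
        if word == "o" and 0 < i < len(words) - 1:
            choice = "of"
        output.append(choice)
    return " ".join(output)

print(translate("Nauka o yazyke vazhna"))  # -> "science of language is important"
```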

 

Early history of machine-aided translation

Mark II Translating Device
To the public, such innovations seemed almost magical

Over the next decade, IBM continued to make advances. Research teams fine-tuned Russian-to-English translations with the Mark I Translating Device, developed for the US Air Force, and added languages, including Chinese. By the time the Mark II, which featured a high-speed optical disk loaded with 170,000 words, arrived in the early 1960s, IBM’s machines could translate one page of a book from Russian to English in 10 seconds. In 1964, the company released the first automated Braille translation system, and in 1968, the IBM Braille Model D electric typewriter allowed non-Braille writers to create Braille documents.

At the turn of the 21st century, a raft of emerging technologies ushered in something of a golden age of translation. During the war in Iraq, for example, a team led by Yuqing Gao at the Thomas J. Watson Research Center collaborated with the US Defense Department to create MASTOR, the Multilingual Automatic Speech-to-Speech Translator. It could instantly recognize and translate 50,000 English and 100,000 Arabic words. Next came TALES, or Translingual Automatic Language Exploitation System, engineered to generate English closed captions for Arabic television and radio broadcasts. In 2007, IBM released Say It Sign It (SiSi), a system for converting spoken words into British Sign Language.

 


To the public, such innovations seemed almost magical. At the 1964–65 World’s Fair in New York, an IBM team demonstrated Russian-to-English translation between the IBM pavilion in Queens and the Mark II Translating Device in Kingston, some 90 miles away. “Sometimes they’d think there was someone under the exhibit or at some distant location doing the translating,” said Barbara Niles, secretary to the manager of Manufacturing Services. “They wouldn’t believe it was a computer in Kingston.”

IBM Watson Language Translator
Neural machine translation and natural language understanding

With hundreds of thousands of employees sharing thoughts and ideas in 170 countries, IBM has become as much a customer of translation systems as a developer. So in 2008, the company introduced a bespoke system to handle intracompany communications. Salim Roukos, CTO of Translation Technologies, and his team unveiled n.Fluent, a system that provides instantaneous translations between English and several other languages for electronic documents, web pages and instant messaging. It learns and improves based on usage patterns from multilingual employees.

In the 2010s, the company released a cloud-based service called the IBM Watson Language Translator. Powered by neural machine translation, it translates documents, news and more into dozens of languages. In 2017, developers at the Australian startup Lingmo paired the solution with IBM’s Natural Language Understanding API to create the world’s first independent translation device, Translate One2One. It’s an earpiece that can translate spoken conversations within seconds without Bluetooth or Wi-Fi.
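For developers, the Watson Language Translator was exposed as a cloud REST API with SDKs in several languages. The snippet below is a minimal sketch using the Python ibm-watson package; the API key, the service URL and the English-to-Spanish model ID are placeholders to be swapped for your own credentials and language pair.

```python
# Minimal sketch: translating text with the IBM Watson Language Translator
# cloud service via the ibm-watson Python SDK. The API key and service URL
# are placeholders; model_id names the language pair (here English -> Spanish).
from ibm_watson import LanguageTranslatorV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder credential
translator = LanguageTranslatorV3(version="2018-05-01", authenticator=authenticator)
translator.set_service_url(
    "https://api.us-south.language-translator.watson.cloud.ibm.com"  # placeholder URL
)

result = translator.translate(
    text="Language is the linchpin of human communication.",
    model_id="en-es",
).get_result()

print(result["translations"][0]["translation"])
```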

The world is much smaller today than when IBM first started experimenting with translation systems a century ago. Global commerce has only increased the need for frictionless communication and heightened expectations for speed and accuracy. So while significant progress has been made, IBM continues its quest to create faster, more accurate translation services that bridge cultural and geographic divides while harnessing the subtleties, the context and even the humor unique to every language.

Related stories

1964–65 World’s Fair

An edutainment experience for millions focused on the practical value of emergent computing technologies

Deep Blue

IBM’s computer checkmated a human chess champion in a computing tour de force

Watson, ‘Jeopardy!’ champion

The DeepQA computer won TV’s smartest quiz show and kicked off an era of natural language processing