Chartered Institute
of Linguists

CIOL AI Updates


Introduction


Since we published CIOL AI Voices, CIOL Council and delegates at our 2024 Conference days have underlined the importance of enabling linguists to keep abreast of developments in AI through a regularly refreshed resource hub. Here are some useful recent resources for linguists, which we will add to and update regularly, alongside future CIOL Voices pieces.


Recent AI News & Articles for Linguists


Slator - Google Finds ‘Refusal to Translate’ Most Common Form of LLM Verbosity

Researchers from Google have identified 'verbosity' as a key challenge in evaluating large language models (LLMs) for machine translation (MT). Verbosity refers to instances where LLMs provide the reasoning behind their translation choices, offer multiple translations, or refuse to translate certain content. This behaviour contrasts with traditional MT systems, which are optimised for producing a single translation.

The study found that verbosity varies across models, with Google's Gemini-1.5-Pro being the most verbose and OpenAI’s GPT-4 among the least verbose. The most common form of verbosity was refusal to translate, notably seen in Claude-3.5.

A major concern is that current translation evaluation frameworks do not account for verbose behaviors, often penalizing models that exhibit verbosity, which can distort performance rankings. The researchers suggest two possible solutions: modifying LLM outputs to fit standardised evaluation metrics or updating evaluation frameworks to better accommodate varied responses. However, these solutions may not fully address verbosity-induced errors or reward 'useful' verbosity, highlighting the need for more nuanced evaluation methods.
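
To make the first of those suggestions concrete, here is a minimal, purely illustrative sketch (in Python) of what post-processing a verbose LLM response might look like before scoring it with a standard single-translation metric. The refusal phrases, the response format and the helper names are assumptions for demonstration, not details taken from the Google study.

```python
import re

# Hypothetical post-processing sketch: reduce a verbose LLM response to a
# single translation string so a standard MT metric can score it.
# The response format and refusal phrases below are illustrative assumptions.

REFUSAL_PATTERNS = [
    r"i('m| am) (sorry|unable)",
    r"i can(no|')t (translate|assist)",
]

def is_refusal(response: str) -> bool:
    """Very rough heuristic for 'refusal to translate' outputs."""
    lowered = response.lower()
    return any(re.search(p, lowered) for p in REFUSAL_PATTERNS)

def extract_translation(response: str) -> str | None:
    """Keep only the first candidate translation, dropping explanations.

    Assumes (for this sketch) that the model puts each alternative on its
    own line and may prefix the translation with a label like 'Translation:'.
    """
    if is_refusal(response):
        return None  # treat as a failed hypothesis rather than penalising verbosity
    for line in response.splitlines():
        line = line.strip()
        if not line:
            continue
        # Drop a leading label such as "Translation:" or "Option 1:" if present.
        line = re.sub(r"^(translation|option \d+)\s*[:\-]\s*", "", line, flags=re.I)
        if line:
            return line
    return None

if __name__ == "__main__":
    verbose_output = (
        "Translation: Bonjour le monde\n"
        "Note: 'world' could also be rendered as 'monde entier' in some contexts."
    )
    print(extract_translation(verbose_output))  # -> "Bonjour le monde"
```

The point of the sketch is simply that explanations, alternative renderings and refusals have to be handled somehow before a single-translation metric can be applied; deciding how to do that fairly is exactly the gap the researchers say current evaluation frameworks leave open.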

 

The Business of Fashion - AI Pioneer Thinks AI Is Dumber Than a Cat

Yann LeCun helped give birth to today’s artificial-intelligence boom. But he thinks many experts are exaggerating its power and peril, and he wants people to know it.

“We are used to the idea that people or entities that can express themselves, or manipulate language, are smart—but that’s not true,” says LeCun.

“You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.”

Today’s models are really just predicting the next word in a text, he says. But they’re so good at this that they fool us. And because of their enormous memory capacity, they can seem to be reasoning, when in fact they’re merely regurgitating information they’ve already been trained on.

 

Slator - Google Aligns LLM Translation with Human Translation Processes

Google researchers have developed a new multi-step process to improve translation quality in large language models (LLMs) by mimicking human translation workflows. This approach involves four stages (pre-translation research, drafting, refining, and proofreading) and aims to enhance accuracy and context in translations. Tested across ten languages, this method outperformed traditional translation techniques, especially in translations where context is crucial.
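
By way of illustration, the sketch below shows how such a staged, human-style workflow might be wired together as a chain of prompts. The stage prompts and the generic `complete` function are assumptions for demonstration only; they are not the prompts or implementation used in the Google research.

```python
from typing import Callable

# Illustrative sketch of a staged, human-style translation workflow
# (research -> draft -> refine -> proofread). Prompts, stage order and the
# generic `complete` function are assumptions for demonstration.

def translate_in_stages(
    source_text: str,
    source_lang: str,
    target_lang: str,
    complete: Callable[[str], str],
) -> str:
    # 1. Pre-translation research: gather terminology and context notes.
    research = complete(
        f"List key terms, idioms and cultural references in this {source_lang} "
        f"text that need care when translating into {target_lang}:\n{source_text}"
    )

    # 2. Drafting: produce an initial translation informed by the research notes.
    draft = complete(
        f"Using these notes:\n{research}\n\nTranslate the following {source_lang} "
        f"text into {target_lang}:\n{source_text}"
    )

    # 3. Refining: revise the draft for accuracy and natural phrasing.
    refined = complete(
        f"Source ({source_lang}):\n{source_text}\n\nDraft ({target_lang}):\n{draft}\n\n"
        f"Revise the draft so it is accurate and reads naturally. Return only the revision."
    )

    # 4. Proofreading: final pass for grammar, spelling and consistency.
    return complete(
        f"Proofread this {target_lang} text and return the corrected version only:\n{refined}"
    )
```

Any LLM client could supply `complete`; the essential idea is that each stage's output (research notes, draft, revision) feeds the next stage's prompt, mirroring how a human translator works through a job.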

 

Slator - How the Media Covers the 'AI vs Translators' Debate

Slator reports that translation is one of the professions most frequently referenced in 2024's media coverage of AI’s potential impact. According to Slator, media opinion on AI and translation generally falls into one of four categories: "Humans Are Still Superior (For Now)", at around 25% of articles; "Translators Are In Danger", at around 40%; "AI + Humans = Optimal Translation", the message of around 20%; and finally around 15% of articles talked about translation without mentioning translators at all. A mixed picture at best.


Ipsos - The Ipsos AI Monitor 2024