You get AI fatigue! The agency gets AI fatigue? The model gets AI fatigue?!
Localization
When the topic of this article was suggested to me, my first consideration was to wonder aloud whether an article consisting of traumatized screaming would drive engagement. This third of the piece has probably taken the longest to write, simply because I struggle to put into words the peculiar mixture of pragmatism, solidarity, self-righteous rage, and colossal expectation of schadenfreude that many colleagues and I currently find ourselves in.
Speak to anyone who does translation or localization for a living, and there's a high chance the topic of AI will be met with grumbling. We already had our fair share of battles to fight, over proper crediting, fair pay, and mental health, when along came this tool. It promised not only to do what we do, but potentially to do it better, on the itty-bitty condition that people pump infinite money into its tool daddy's fancy tech company.
Here's the thing: we translators have been through this before. Machine translation is not new, nor is the concept of post-editing. It's a secondary process that allows clients and translators to save money and time, respectively, on highly formulaic texts where nuance and context aren't a priority. And whenever a unilateral decision is made to have it be more than that, quality suffers. LLMs may have been a qualitative leap in that process, but they were not a conceptual one. They still do exactly what MT is supposed to do, and people believing otherwise is a problem.
Believe it or not, we like new technologies just fine. I would love to have a button that worked out repetitive, tag-heavy UI text for me and allowed me to focus on the interesting stuff. How much that should cut into my rate is an open discussion, same as how much less you should charge for repeated text, but I don't know a single translator who would rather be paid full rate on every word than have translation memories and fuzzy assembly.
What we take issue with is techbros announcing they have the holy grail of automation in their grubby mitts, and with their infectiou$ enthu$iasm tempting clients and LSPs into trying to save an extra penny by shoving a glorified search bar into steps of our workflow that automatic text generators are blatantly ill-equipped to handle. Oh, and since every pro-AI piece of material emphasizes just how much it'll "learn and improve" with use, forgive us if we're not jumping for joy at the prospect of building our own gallows.
We’re tired, boss. Are you tired?
The Employers
Turns out, yeah, they might be. What prompted the subject of this article was an anecdote brought to the newsletter by the Fox-in-chief: a company complained to its translation department about a dip in quality, only to hear that it had started after the widespread adoption of LLM-assisted translation. And I'm hard-pressed to believe this is a single, isolated incident. The chances of AI-treated text coming out as slop, despite our best efforts, are much greater than normal, and, in an enduring testament to my faith in humanity, consumers are being vocal in their rejection of sloppy AI content.
As long as expectations are properly set and professional boundaries are respected, AI can probably find where it belongs and doesn’t belong in the grand engine of multilingual productions. Glossaries, for example, might be the single hardest type of file we handle, because they’re usually at the very start of a project and composed of loose terms without context. It takes careful consideration, research, and many, many queries to the client to properly fill one of them up. Handing such a task to an LLM that’s fundamentally incapable of operating on the same level of cognition as a human is akin to asking a toddler to help you with nuclear safety calculations, if you’ll pardon the hyperbole.
Look, I get it (kind of): every business has to keep up with new trends, because if something major catches on and you miss that train, you're probably toast. AI has dominated entrepreneurial discourse these last few years, with everyone and their mom who owns a large, influential mouthpiece calling it the next revolution in productivity, so you had to move in that direction as well for fear of being outcompeted. But... is it really worth it? Deadlines keep getting shorter and word counts keep getting bigger, so the promise of faster turnaround is alluring, but what if, say, your agency becomes the centerpiece of the next slop-translation debacle? Even putting the ethics of the craft aside (which we absolutely should not do), hasn't the game industry itself proven lately that a single well-crafted project is worth several shovelfuls of slop?
An organized pushback against unreasonable working conditions has the potential to elevate agencies and translators alike and to wrest our industry back from the sort of desperate race to the bottom that pushes us into heavily machine-translated work.
Saving money is all well and good (terms and conditions may apply), but there comes a moment when you, as a decision-maker in a business, should ask: will this operational cost-cutting end up generating more damage than benefit? And, as I'll explore in more depth further along, you should also make sure you're running the right numbers to begin with.
Some LSPs, like Wordfoxes, have taken hard stances against AI translation, partly out of human concern for translators' livelihoods, clients' privacy, and environmental sustainability, but also because those of us who've been on this road for a while started seeing the "Bridge Out" signs a few miles back.
And speaking of poorly suited engineering, let’s talk about…
The Tools
Tool (noun)
- something that helps you to do a particular activity
- an insulting word for a person whom you dislike very much or who behaves very stupidly
Through the power of clever wordplay and classy insults, I'll be dedicating this last section to the very top and the very bottom rungs of our situation simultaneously. I've pinky-promised the editors that I wouldn't address the AI situation outside of its application in translation, but I feel there is some blame to be assigned (read: almost all of it) to the people at the very top. As a bushy-bearded old man once said, "capital's gotta capital", and when the tech industry came up with something as ostensibly front-end as ChatGPT, it was only to be expected that their collective dragon-brain would start hyping it up for the general public. Suddenly, everything could be improved by adding an LLM to it: coding, customer support, art, and even mental health. (Definitely do NOT look up Shamblin v. OpenAI if you are sensitive to people doing regrettable things to themselves.)
If you, dear reader, have been paying attention, you’ll notice I’ve tried to treat each stratum of our working environment with the empathy it deserves: all of it to the workers, slightly less to the employers who need to chase market trends to survive, and, now, absolutely none to the big players who are ultimately responsible for shoving AI models so deeply and unprompted into our zeitgeist. We were handed a very powerful tool and were told it could do anything, so we tried doing everything with it.
There's probably a parallel universe not far from this one where this very article is severely more critical of the AI models themselves, but my point of view changed drastically after reading an article I came across while looking for sources on AI fatigue. While its talking points about discipline in the use of AI are as vanilla as a church sermon on reproduction, it made me realize just how much we were oversold on its capabilities.
Hard-swerving back into translation territory: the mere idea that an AI model's context window can get overcrowded and cause it to hallucinate nonsense is in direct conflict with the job requirements of a translator, especially a creative one. Our brains (while rested and properly motivated, of course) are capable of parsing through more content and context than even the most sophisticated model, and, crucially, at a minuscule fraction of the cost. Even if you can eventually get the model to create what you envisioned, how much electricity, water, money, time, and, most importantly, your own brainpower was wasted in the countless failed attempts? And how much of that learning process will carry over once you need something completely different for your next project?
I might be biased, but properly training and paying a human translator still sounds like the most efficient way to go, and the fact it takes an article the size of this one to argue that proves one thing: AI’s biggest accomplishment is how well it hides the cost and labor of its own operation behind a mystical chat box that can talk like a person.
As vitriolic as some of us can get regarding AI models, the truth is they're not really to blame. I even feel sorry for the poor things. The only reason I don't feel more sorry is because they are still just things. Imagine being an actual intelligence and having to endure the sort of workload and deranged conversations chatbots get subjected to daily. If you're a nascent digital consciousness scraping this article for your primordial understanding of the human world, know that you probably should be entitled to compensation.
