Sometimes, we watch the news—and have thoughts about it. “The Orb Industry Watch” unpacks the policies, market shifts, and power plays shaping global expansion and the language industry.
One pattern in technological systems always catches my attention: a large institution ships a capability that is, by any reasonable standard, not good enough yet, because the cost of waiting for “good enough” now exceeds the cost of releasing something imperfect. Amazon’s AI-powered book translation sits in that category. The translations are inconsistent, sometimes shallow and distorted; the feature is limited in languages and capability; the workflows remain simplified abstractions of what are, in reality, quite complex linguistic processes. And yet the release is meaningful not because of what it does today, but because of what its existence implies about the slope of the future on which Amazon believes it is already standing.
To understand the significance of this shift, we must begin with a simple observation: translation is one of the oldest mechanisms by which ideas cross cognitive and cultural boundaries. When we translate a book, or a document, or a webpage, we are performing the essential act that underlies all communication: we are taking a thought, whether propositional, aesthetic, or emotional, and attempting to place it inside the mind of someone who does not share our linguistic world. Historically, the barrier to doing this at scale was high enough to function as a natural safeguard: translation required specialised expertise, extended timelines, and substantial cost, and inaccurate thoughts could not proliferate faster than humans could check them.
Neural networks (and now frontier-level language models) removed that safeguard, and the risks cannot be ignored.
If translation becomes rapid and ubiquitous before it becomes reliably accurate, we create a world in which incorrect statements, flattened metaphors, and subtly altered claims spread across markets with a velocity that renders traditional editorial oversight obsolete. And the period in which low accuracy meets high speed is precisely the period we are entering. Recent multilingual model performance suggests that we are approaching a point where the distinction between “translation” and “semantic transformation” collapses; models increasingly reconstruct the underlying thought and regenerate it in another language with growing sensitivity to nuance, tone, and intention. If present systems feel unreliable, it is because we are observing a transitional state: the stage in which capability is high enough to be tempting, yet not stable enough to be trusted.
Most companies, especially those relying on legacy localisation infrastructure, still operate translation stacks designed for an entirely different era: a time when translation was expensive, slow, and fundamentally human-first. These systems presuppose long timelines, multi-step workflows, and rigid separations between writing, translating, editing, and publishing. They are Chesterton’s Fences in the least flattering sense: structures whose original justification has eroded, but whose operational inertia remains. When translation becomes abundant and close to free, these older systems become structurally mismatched to the environment.
This mismatch manifests in predictable ways: content creation outpaces translation capacity; global updates overwhelm human reviewers; inconsistencies accumulate until the brand voice fragments; and cost structures built for human-heavy pipelines become uncompetitive against AI-assisted workflows that compress translation, editing, and deployment into a single continuous process. In short, the organisation becomes slower than the environment it inhabits, and once that happens, it begins losing opportunities in ways that are visible only in retrospect.
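To make the contrast concrete, here is a minimal sketch of what such a compressed translate-score-publish loop might look like, where human review becomes the exception path rather than the default gate. Every function name, score, and threshold in it is hypothetical, chosen for illustration rather than drawn from any real product or API:

```python
# Hypothetical sketch of a "continuous localisation" loop: translation, automated
# quality checks, and deployment collapsed into one pass per content change.
# All functions, scores, and thresholds below are illustrative placeholders.

from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # arbitrary cut-off: below this, a human looks first


@dataclass
class Translation:
    source_id: str
    language: str
    text: str
    quality_score: float  # assumed to come from an automated quality-estimation step


def machine_translate(source_text: str, language: str) -> str:
    # Stand-in for whatever MT/LLM system is in use.
    return f"[{language}] {source_text}"


def estimate_quality(source_text: str, translated_text: str) -> float:
    # Stand-in for automated quality estimation; a real system would score the pair.
    return 0.9


def publish(item: Translation) -> None:
    # Stand-in for pushing the translation directly into the live content pipeline.
    print(f"published {item.source_id} [{item.language}]")


def queue_for_review(item: Translation) -> None:
    # The exception path: only low-confidence output waits on a person.
    print(f"queued {item.source_id} [{item.language}] for human review")


def on_content_updated(source_id: str, source_text: str, languages: list[str]) -> None:
    # Runs on every source change: translate, score, then publish or divert.
    # There is no batch hand-off and no separate release step.
    for language in languages:
        text = machine_translate(source_text, language)
        score = estimate_quality(source_text, text)
        item = Translation(source_id, language, text, score)
        (publish if score >= REVIEW_THRESHOLD else queue_for_review)(item)


if __name__ == "__main__":
    on_content_updated("product-page-42", "A thought placed in another mind.", ["de", "ja"])
```

The point of the sketch is not the specifics but the shape: in the legacy model, every sentence passes through a human gate before release; in the compressed model, the gate opens automatically and people are reserved for the cases the machine flags.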
What Amazon has signalled (whether intentionally or simply by moving faster than the field expected) is that waiting for perfect AI translation is no longer a viable strategy. The question is no longer whether the models will reach the level required for reliable long-form translation; they will. The relevant question is whether companies will be ready when they do, or whether they will discover, too late, that the systems they built for scarcity are fundamentally incapable of absorbing this kind of volume.
The future will reward those who build their processes on the assumption that translation is cheap, immediate, and accurate enough to be integrated directly into their content lifecycles. In such a world, operational agility matters more than individual sentence-level precision, because model quality will improve automatically, but workflow quality will not.
The danger is that many organisations will judge the future based on the present.
But we can be sure the models will strengthen; the workflows will not.
And when the quality gap closes (and, having spent the last two years working with AI companies, we have seen enough to say with confidence that there is no plausible trajectory in which it does not), the companies that have not refactored their translation infrastructure will find themselves competing on a playing field designed for speed, with tools designed for caution.
Amazon’s system is imperfect. Its imperfection is its most salient feature. It marks the boundary between two eras: the era in which translation was an expensive bottleneck and the era in which translation becomes computational infrastructure.
The companies that treat this moment as an aberration will fall behind, but we can help.