On the Authorship of Machine-Made Meaning

“From the Orb vault” is a series drawing on past market research, presentations, blurbs, and other conceptual writings that we will begin publishing regularly, in the hope that it helps shape views on the often disregarded topic of global expansion and localisation (L10N). Through these insights, we aim to shed light on the complexities and inefficiencies that many miss in the rush to scale internationally.

The emergence of AI-generated translations poses a deceptively simple question: who owns the output? But behind that lies a deeper riddle, perhaps even a quiet crisis. As machines increasingly mediate our words across borders, we are not merely confronting questions of legal ownership; we are brushing against the boundaries of authorship, meaning, and moral accountability in the age of synthetic language.

Translation as Creation, Not Transmission

The legal scaffolding around translation has long rested on the idea that it is a creative act. Not merely the carrying of meaning across a linguistic river, but the building of a new bridge with unique materials inflected by the translator’s intuition, cultural fluency, and interpretive bias.

But what if the builder is a machine? And what if the material (the language model) has itself been formed from an opaque amalgam of human works, scraped without permission and synthesised without understanding?

AI-generated translation is not authored in any conventional sense. It has no intent, no awareness, no ethical relationship with the material it processes. It is neither thief nor artist. It is, at best, an echo chamber of past language, and at worst, a black-box ventriloquist repeating phrases it does not know are haunted.

Can You Own What You Don’t Understand?

The argument over copyrightability misses a larger point: the absence of consciousness does not merely invalidate authorship; it also voids responsibility.

If an AI translates a poem with unintended political undertones in a volatile region, who answers for the consequences? If a brand uses AI to localise messaging and the result offends or distorts, is the failure a legal matter or a moral one?

The current frameworks (copyright, intellectual property, liability) presume human agents somewhere in the system. But in the AI translation loop, we increasingly find a chain of disavowal: the software provider claims neutrality, the user claims automation, the translator is absent.

We may be building a system that deliberately avoids the burden of authorship while reaping the benefits of creation.

The Collapse of Labor, the Dilution of Meaning

There’s also the existential side. If language is what makes us human (not just a tool, but a territory), what happens when that territory is flattened by scale and speed?

When everything can be translated instantly and approximately, do we lose the strangeness, the delay, the friction that once demanded interpretation? Do we begin to forget that language costs something and that meaning is forged, not fetched?

If a machine-rendered translation of a novel is denied copyright because it lacks a human author, should we not also ask: does it deserve readership?

From Bystanders to Architects: Our Strategic Recommendations

For companies navigating the murky waters of AI translation, we propose not retreat, but intelligent stewardship. This is the time to evolve our frameworks for meaning-making, not to abandon them. Here’s how:

  1. Authorship protocols for hybrid outputs — Implement internal guidelines for what constitutes co-authored translation, where AI outputs are materially shaped, verified, or refined by human experts. Require meaningful editorial intervention before claiming authorship or copyright.
  2. Cultural accountability layer — Add a cultural oversight role into your content pipeline: a human-in-the-loop who is neither translator nor editor, but a custodian of context. This person should interrogate the cultural ramifications of machine output, ensuring not just fluency but appropriateness and resonance.
  3. Ethical watermarking — Begin labeling machine-translated content with invisible but verifiable ethical metadata: who intervened, what level of review occurred, and what the model was trained on. Not for compliance (yet) but for credibility. In a future audit of AI authorship, this is your provenance.
  4. Invest in explainability, not just accuracy — Push your providers for models and workflows that don’t just score high on BLEU, but can explain why a translation was made. This is your defense against bias, liability, and reputational damage.
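To make the ethical-watermarking idea in point 3 concrete, a provenance record could be attached to each translated asset: who intervened, what level of review occurred, and which model produced the draft. The sketch below is a minimal illustration only; the field names and review levels are hypothetical, not an established standard.

```python
import hashlib
import json

def provenance_record(source_text: str, model_id: str,
                      review_level: str, reviewers: list[str]) -> dict:
    """Build a hypothetical provenance record for a machine translation.

    `review_level` might be "raw", "spot-checked", or "fully post-edited".
    All field names here are illustrative, not an industry schema.
    """
    return {
        # Hash of the source ties the record to a specific input text.
        "source_sha256": hashlib.sha256(source_text.encode("utf-8")).hexdigest(),
        "model_id": model_id,          # which engine produced the draft
        "review_level": review_level,  # how much human review occurred
        "reviewers": reviewers,        # who intervened, if anyone
    }

record = provenance_record(
    "Bonjour le monde", "example-mt-v1", "fully post-edited", ["A. Translator"]
)
# Serialisable, hence attachable to the asset and auditable later.
payload = json.dumps(record, sort_keys=True)
```

In practice such a record would be signed and stored alongside the content; the point is that provenance is cheap to capture at translation time and nearly impossible to reconstruct after the fact.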

In short, act like authors even when your tools don’t. Because language, at its best, is not just a transaction of information, but a transfer of care.

Quentin Lucantis @orb