Sometimes, we watch the news—and have thoughts about it. “The Orb Industry Watch” unpacks the policies, market shifts, and power plays shaping global expansion and the language industry.
At SlatorCon Silicon Valley, Phrase drew a small crowd for something that is as much an organisational revolution as a technological breakthrough: the emergence of AI agents.
I was less struck by the technology itself than by what it says about how we’re arranging work and trust. We’ve spent the last few decades figuring out how to get computers to take orders; now we’re figuring out how to get them to decide. Decision-making, particularly in messy or ambiguous situations, has always been the defining strength of human agency. Getting it into machines might be a way to scale initiative, which has always been the hardest thing to scale (and will probably lead to mass unemployment, but that is not the point here).
Semih Altinay of Phrase suggested that agents are connective tissue unifying content, product, and marketing into one consistent stream of intent. Localisation, he continued, becomes more than a translation service: a strategic practice that aligns global communication with corporate intention. That is, the AI agent internalises some portion of the company’s utility function, extending it across cultural and linguistic boundaries.
Uber’s Juan Marcano provided a concrete example: an in-app agent that roams around Uber like a frenetic tester, digging up bugs on its own. It learns by reward and failure, like a person, but at digital speed. This sounds robust: edge cases are found earlier, with fewer surprises. It’s a microcosm of a larger principle: once you outsource judgment, you outsource risk as well, and the resilience of the system is a function of how well it can self-correct.
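Marcano did not share implementation details, but the reward-and-failure loop he described can be illustrated with a toy sketch. Everything here is hypothetical: the app is reduced to a tiny state machine (`TRANSITIONS`) with one hidden crash path, and `explore` is a minimal epsilon-greedy agent that rewards novel screens and rewards crashes even more, so it gradually learns action sequences that surface the bug.

```python
import random

# Hypothetical app model: screens and the actions that move between them.
# One sequence (home -> menu -> promo -> apply_code) triggers a hidden crash.
TRANSITIONS = {
    ("home", "open_menu"): "menu",
    ("menu", "tap_promo"): "promo",
    ("promo", "apply_code"): "CRASH",   # the hidden bug
    ("menu", "back"): "home",
    ("home", "request_ride"): "ride",
    ("ride", "back"): "home",
}
ACTIONS = ["open_menu", "tap_promo", "apply_code", "back", "request_ride"]

def explore(episodes=500, eps=0.3, seed=0):
    """Epsilon-greedy exploration: novelty earns a small reward, a crash a large one."""
    rng = random.Random(seed)
    q = {}        # (state, action) -> running value estimate
    crashes = []  # action paths that reached the crash state
    for _ in range(episodes):
        state, path, seen = "home", [], {"home"}
        for _ in range(6):  # bounded episode length
            if rng.random() < eps:
                action = rng.choice(ACTIONS)          # explore
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))  # exploit
            nxt = TRANSITIONS.get((state, action), state)  # unknown action: stay put
            path.append(action)
            # Reward shaping: crash >> new screen > revisiting a known screen.
            reward = 10.0 if nxt == "CRASH" else (1.0 if nxt not in seen else -0.1)
            key = (state, action)
            q[key] = q.get(key, 0.0) + 0.2 * (reward - q.get(key, 0.0))
            seen.add(nxt)
            if nxt == "CRASH":
                crashes.append(tuple(path))
                break
            state = nxt
    return q, crashes
```

The point of the sketch is the principle, not the scale: because the agent is rewarded for surprise, it keeps probing paths a scripted test suite would never enumerate, which is why this style of tester finds edge cases earlier.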
Then it was the turn of AWS. Govind Varadan reminded us that with autonomy comes responsibility. The distinction between copilots and agents is that the former advises while the latter acts. The moment you allow a system to modify your data rather than simply query it, you enter the realm of governance, accountability, and trust. Here, technical capability intersects with epistemic philosophy: authority requires explainability, and action without alignment is a recipe for disaster.
Listening to these debates, I had the feeling the conversation was really about delegation: how we extend trust beyond the human boundary. Humans have always delegated (to teams, processes, markets), but agents move the boundary. They force us to reconsider what it means to trust a system, and where to put the human in the loop (HITL). How do you structure incentives and feedback so that delegated judgment shares your values? Unfortunately, deploying these systems at full speed all but guarantees we will not get the answer right, which is largely what we have been seeing in the industry so far.
Phrase’s keynote was equally interesting. Altinay described the company’s ambition to be the “AI backbone” of the world’s content ecosystems. I think the metaphor holds. The backbone isn’t the brain; it’s the infrastructure that carries intention. Agents, in this view, enable the flow of sense, circulating through languages and contexts, adapting dynamically to constraints and unforeseen contingencies. For us, this could mean l10n is no longer a service but a distributed cognitive process in which context and consequence are encoded, evaluated, and acted upon. The challenge, however, is one of perception: who will value it as a distinct discipline, rather than seeing it as just another step in the automation of everything, and pay accordingly?
Of course, humans remain essential. Judgment, taste, voice: these cannot be fully outsourced. Yet the balance of our work is shifting: the systems we use now understand, not just execute. We are entering an era of agentic language, where words are not just transmitted or translated, but tracked, shaped, and optimised through context and culture.
The landscape is changing, altering not only the tools we use but the very architecture of cognition, responsibility, and value within our institutions. The question left unanswered: who pays for the care of context when automation appears seamless?