The Office Is Becoming an AI Control Room

The most revealing thing about AI and work this week was not a single blockbuster model launch. It was the way several stories clicked together. Google announced new Gemini features for Docs, Sheets, Slides, and Drive that promise to turn scattered company information into drafts, dashboards, and polished presentations. Atlassian told staff it was cutting roughly 10 percent of its workforce while reorganizing for an “AI-first” future and changing the skill mix it needs. WIRED reported that Anthropic’s Claude Code has become such a big business that OpenAI is now racing to catch up. And The Verge published a deeply unsettling feature about laid-off professionals being paid to train the very systems that may one day do their jobs.

Read together, those stories describe the real shape of the future of work. It is not the clean sci-fi version in which software suddenly replaces whole professions overnight. Nor is it the soothing corporate line that AI will merely “augment” everyone. What is emerging instead is a workplace that looks more like a control room: fewer people building every first draft by hand, more people defining goals, supervising machine output, and checking exceptions.

The blank page is now a product

One of the oldest frictions in knowledge work is the blank page. Google’s new Workspace updates are explicitly designed to erase it. Gemini can now synthesize information from files, emails, chats, calendars, and the web to produce a formatted draft in Docs, construct a spreadsheet in Sheets, generate slides, and answer questions across Drive. In one example, a user describes a campaign plan and Gemini pulls together the source material, structures the document, and applies formatting. In another, it builds and enriches spreadsheets, even handling optimization tasks that once required formulas or specialist tools.

That sounds like a convenience feature until you notice what is being moved into software. The first-draft labor of office work has always consumed huge amounts of time: gathering context, assembling fragments, deciding on structure, and getting to something solid enough to react to. If the first move in a task becomes “describe what you want” instead of “start building it from scratch,” the human role shifts up the stack. The job becomes less about manufacturing version one and more about steering, editing, approving, and connecting the output to institutional reality.

The same pattern is now obvious in software engineering. WIRED’s report on OpenAI’s effort to catch Anthropic in coding agents describes an industry that has crossed from autocomplete into delegation. Anthropic told WIRED that Claude Code represented more than $2.5 billion in annualized revenue in February, while OpenAI’s Codex was already above $1 billion by late January. That is not toy usage. It is companies paying real money to move real work into agentic systems. Engineers are not just asking for snippets anymore; they are waking up to overnight progress on actual projects.

The hidden workforce behind the magic

Yet the most important corrective arrived from The Verge. Its report on Mercor and the booming market for AI training labor makes clear that “automation” still depends on a staggering amount of human effort. Writers, lawyers, consultants, teachers, and scientists are being asked to create rubrics, golden answers, reasoning traces, and synthetic workplace scenarios so models can learn what good work looks like. Mercor says around 30,000 professionals work on its platform each week. That is not a side market. It is a new labor layer.

There is something brutally revealing about this. The future of work is not being built only in labs or boardrooms. It is also being assembled by highly educated people who often arrive there after layoffs, weak hiring markets, or shrinking freelance demand. They are not replacing AI; they are feeding it. In many cases, they are translating tacit professional judgment into explicit criteria that a model can imitate later. Once you see that, a lot of the current economy snaps into focus. AI is not just consuming data from the internet; it is consuming the operating logic of occupations.

That is why Atlassian’s memo mattered. The company did not claim, in cartoonish fashion, that machines had made 1,600 people redundant overnight. It said it wanted to self-fund more investment in AI and enterprise sales, move faster, and retain people with the skills needed in an AI-first company. Corporate language is always polished, but the signal was still clear. The important shift is not whether AI replaces every worker directly. It is that companies are reorganizing around the assumption that some categories of work can now be compressed, automated, or absorbed by fewer people using better tools.

The new premium is judgment

This is the turn many executives and workers still have not fully absorbed. The immediate effect of AI is not universal unemployment. It is a repricing of different kinds of human contribution. When systems can generate competent drafts, analyses, code changes, presentations, and responses, the scarcest thing in the room is no longer raw production capacity. It is judgment.

Judgment sounds abstract until you define it properly. It means knowing which source documents matter and which are outdated. It means spotting the slide that is persuasive but false. It means understanding whether an AI-generated spreadsheet reflects the reality of a business rather than the surface pattern of the data. It means deciding whether code that passes tests is actually safe to ship. And, increasingly, it means being able to specify good work in enough detail that a machine can attempt it without drifting into expensive nonsense.

That is the real significance of all these stories landing in the same week. Google is packaging first-draft cognition into office software. Coding agents are turning technical labor into a supervisory sport. Data-labeling platforms are industrializing the extraction of expert judgment. Employers are rethinking headcount, org charts, and hiring around that new stack. The office is becoming an environment where one person can direct several machine processes at once, but only if they can frame tasks clearly, check outputs quickly, and take accountability for the result.

There will be winners in that world. People who can combine domain expertise with taste, systems thinking, and operational discipline will become much more valuable. But there will also be losers, especially where firms use productivity gains mainly as a reason to reduce labor costs while pushing more review work, surveillance, and precarity onto the people who remain.

The easiest mistake now is to keep asking whether AI will take jobs, as if the answer must be either yes or no. The better question is who gets promoted into supervision, who gets squeezed into training and validation, and who gets cut out of the loop altogether. That is the labor politics underneath the product demos. The blinking cursor is disappearing, and with it the old assumption that work begins when a human starts typing. In the next era, work begins when someone decides what the machine should attempt, what “good” looks like, and who will answer for the consequences when it gets it wrong.