Three-Quarters of Google's New Code Is AI-Generated. That's Not the Interesting Part.

---
The number itself is staggering. In late 2024, Google was at 50%. By April 2026, they've pushed to 75% — a 50% relative jump in AI-authored code across one of the largest engineering organisations on the planet. Sundar Pichai's announcement framed it as a productivity milestone, and sure, it is. Google's Gemini models are now drafting the majority of new project code internally, with human engineers reviewing every line.
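A quick sanity check on that arithmetic, since "50% jump" and "25-point jump" describe the same move from different angles:

```python
# Google's AI-generated share of new code: late 2024 vs April 2026
before, after = 0.50, 0.75

absolute_pts = (after - before) * 100      # change in percentage points
relative_jump = (after - before) / before  # change relative to the 50% baseline

print(absolute_pts)    # 25.0 percentage points
print(relative_jump)   # 0.5, i.e. a 50% relative jump
```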
But here's the thing: writing code was never the bottleneck. Any developer who has shipped real software knows this. The bottleneck was always coordination — getting requirements right, keeping teams aligned, making sure the security review happens before the deployment, not after. And that's exactly where the story gets genuinely wild.
From autocomplete to orchestration
Pichai used a phrase that deserves more attention than it got: "digital task forces." Google's AI agents aren't just writing functions. They're managing complex code migrations — the kind of soul-crushing refactoring work that used to take teams of engineers months — at six times the speed of humans working alone. Six times. On migrations. If you've ever done a large-scale framework migration, you know that number should make you sit up straight.
Meanwhile, LangChain and Cisco just published a detailed reference architecture for what they're calling "agentic engineering" — and it's not what you think. The core insight isn't "better coding AI." It's a multi-agent control plane where worker agents mirror individual contributors and leader agents mirror project leads. These agents share memory, coordinate through the A2A protocol, and chain workflows across team boundaries. They audit their own decisions. They onboard themselves.
The architecture treats agents not as tools but as team members with defined responsibilities and accountability. That framing matters enormously. When your AI system has a leadership layer, shared context, and traceability baked in, you've stopped talking about "AI-assisted coding" and started talking about something closer to an engineering operating system.
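As a rough illustration of that framing, here is a minimal, purely hypothetical Python sketch of the worker/leader pattern with shared, auditable memory. Every class and function name here is invented for illustration; none of it is taken from the Cisco/LangChain reference implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    """Append-only log giving every agent the same context and an audit trail."""
    events: list = field(default_factory=list)

    def record(self, agent: str, action: str, detail: str) -> None:
        self.events.append({"agent": agent, "action": action, "detail": detail})

@dataclass
class WorkerAgent:
    """Mirrors an individual contributor: executes tasks, logs its decisions."""
    name: str
    memory: SharedMemory

    def execute(self, task: str) -> str:
        # A real worker would call an LLM and tools here; we just log and return.
        self.memory.record(self.name, "execute", task)
        return f"{task}: done"

@dataclass
class LeaderAgent:
    """Mirrors a project lead: decomposes a goal and assigns work to workers."""
    name: str
    workers: list
    memory: SharedMemory

    def run(self, goal: str, subtasks: list) -> list:
        self.memory.record(self.name, "plan", goal)
        # Round-robin assignment stands in for real scheduling logic.
        return [self.workers[i % len(self.workers)].execute(t)
                for i, t in enumerate(subtasks)]

memory = SharedMemory()
team = LeaderAgent(
    name="lead",
    workers=[WorkerAgent("worker-1", memory), WorkerAgent("worker-2", memory)],
    memory=memory,
)
results = team.run("migrate auth module",
                   ["port handlers", "update tests", "write docs"])
print(results)
print(len(memory.events))  # 4 entries: 1 plan + 3 executions
```

The point of the shared log is the "traceability baked in" part: because every plan and execution lands in one append-only record, an auditor (human or agent) can reconstruct who decided what, in order.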
The numbers that actually change everything
If you want to see what this looks like at scale, Cloudflare just pulled back the curtain on their own AI engineering stack — and the numbers are almost absurd. In the last 30 days: 3,683 internal users (93% of R&D), 47.95 million AI requests, 241.37 billion tokens routed through their infrastructure. Two hundred forty-one billion tokens. At one company.
But the metric that should make every CTO reassess their quarterly plan is this: Cloudflare's 4-week rolling average of merge requests climbed from roughly 5,600 per week to over 8,700. The week of March 23 hit 10,952 — nearly double the Q4 baseline. They've never seen a quarter-to-quarter increase like it.
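Those throughput figures check out, using the numbers above:

```python
# Cloudflare weekly merge requests: Q4 baseline vs the new rolling average
baseline, rolling_avg, peak_week = 5_600, 8_700, 10_952

print(round(rolling_avg / baseline, 2))  # 1.55: about 55% more merges per week
print(round(peak_week / baseline, 2))    # 1.96: nearly double the Q4 baseline
```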
What changed? Cloudflare didn't just give everyone Copilot and call it a day. They built a full-stack internal platform: Zero Trust auth through Cloudflare Access, centralised LLM routing via AI Gateway, on-platform inference on Workers AI, MCP servers with single OAuth, an AI Code Reviewer integrated into CI, sandboxed execution for agent-generated code, stateful long-running agent sessions, and a 16,000+ entity knowledge graph powered by Backstage.
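To make the "centralised LLM routing" idea concrete, here is a deliberately simplified Python sketch of a gateway that authenticates every AI request before routing it and meters token usage per user. The tokens, backend names, and crude token counting are all invented for illustration; this is not Cloudflare's actual AI Gateway or Access API.

```python
# Hypothetical gateway: one choke point for auth, routing, and metering.
ALLOWED_TOKENS = {"dev-token-123": "alice"}   # stands in for Zero Trust auth
BACKENDS = {"chat": "internal-chat-model", "code": "internal-code-model"}
usage: dict[str, int] = {}                    # per-user token metering

def route(auth_token: str, task: str, prompt: str) -> str:
    """Reject unauthenticated callers, pick a backend, and record usage."""
    user = ALLOWED_TOKENS.get(auth_token)
    if user is None:
        raise PermissionError("request rejected at the gateway")
    backend = BACKENDS[task]
    usage[user] = usage.get(user, 0) + len(prompt.split())  # crude token count
    return f"[{backend}] response to: {prompt}"

print(route("dev-token-123", "code", "refactor the retry logic"))
print(usage)  # {'alice': 4}
```

The design choice worth noticing: because every request passes through one function, auth, model selection, and metering cannot be skipped by any individual tool or agent, which is precisely the property a "feature toggle" deployment of AI lacks.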
This is the point: the organisations seeing these compounding gains are treating AI as infrastructure, not as a feature toggle.
The uncomfortable question for the rest of us
So here's where it gets real. Google, Cisco, Cloudflare — these are well-resourced companies with dedicated teams building bespoke agent infrastructure. What happens to everyone else?
The honest answer: the gap is about to get brutal. Companies still debating whether to "allow" AI coding tools are already a year behind. The organisations pulling ahead aren't the ones using AI to write individual functions faster. They're the ones rearchitecting their entire delivery pipeline around the assumption that code generation is cheap, coordination is expensive, and agents can handle the coordination too.
The Cisco/LangChain paper makes this explicit: the goal isn't faster code generation. The goal is moving software through the system faster and more safely. That's a fundamentally different design constraint. When you optimise for throughput rather than keystrokes, you end up with something that looks less like an IDE plugin and more like a control plane.
Cloudflare's numbers prove the model works. Google's numbers prove it works at planetary scale. The ICSE 2026 keynote on agentic software engineering frames it bluntly: as these systems become more agentic, they won't merely support the shift to software-defined infrastructure — they'll increasingly analyse, adapt, and redesign it.
The role of the software engineer isn't shrinking. It's expanding. More of society depends on semi-executable systems built from natural language, code, tools, policies, and workflows — and agentic environments increasingly generate, coordinate, and evolve those components. The engineer's job shifts from writing every line to governing the system that writes the lines. That's a promotion, not a layoff.
But only if you make the jump. The window where you can treat AI coding as an optional productivity boost is closing fast. The organisations already running agent swarms, building control planes, and shipping at double their previous velocity aren't waiting for permission.
75% of Google's new code is AI-generated. The interesting part is that 75% isn't the ceiling — it's the floor. The ceiling hasn't been invented yet.