The operations maths just broke | Firemind

The operations maths just broke

21 April 2026

Why legacy spend, AI-generated code, and the rise of AI agents are forcing enterprise IT to move to autonomous operations.

Your IT budget for 2026 is already 80% spoken for.

That is the first problem. The other two are worse.

Three things are happening at the same time, to the same IT organisation, on the same budget. Each one is already stretching operations teams. Together, they break the operating model that most enterprises are still running on. They are why autonomous operations has moved from a future state to a near-term requirement.

This is not a forecast. The numbers are already in.

Legacy already ate the budget

Gartner estimates that 60 to 80% of enterprise IT spend goes to keeping existing systems running. Before a single line of new code is written, the majority of IT spend is already committed.

That used to be a prioritisation problem. It has become a structural one. The remaining 20 to 40% has to fund everything else the organisation says it wants from technology. Innovation. Modernisation. Compliance. Resilience.

And that remaining slice now has two new demands on it that no budget committee has modelled.

The code estate is getting bigger and riskier at the same time

Google reports that more than 30% of its own new code is now AI-generated. Enterprise adoption is behind Google but moving in the same direction. Veracode’s October 2025 research finds that 45% of AI-generated code ships with a known security flaw. Only 55% of it passes a basic security check.

Read those two numbers together. The organisation is shipping more code, faster, with more flaws embedded in it. Every sprint adds to the attack surface. Every merged pull request writes another entry into the “to be patched later” column.

Patching. Monitoring. Evidencing. All of it lands on the same operations teams already overcommitted to keeping legacy running. Every flawed merge becomes a CVE to chase, a patch window to schedule, audit evidence to produce.
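One way to stop the "to be patched later" column from growing is to gate it at merge time. A minimal sketch, assuming a scanner that emits findings as dicts with a severity field (the scanner interface here is an illustrative assumption, not any real tool's API):

```python
# Hypothetical merge gate: blocks a pull request that carries
# high-severity findings from a security scan. The findings format
# is an assumption for illustration.

BLOCKING = {"critical", "high"}

def merge_allowed(findings: list[dict]) -> bool:
    """Return True if the change may merge; False if it must be fixed first."""
    return not any(f["severity"] in BLOCKING for f in findings)

# One high-severity finding blocks the merge; low-severity ones pass.
print(merge_allowed([{"id": "CWE-89", "severity": "high"}]))   # False
print(merge_allowed([{"id": "CWE-117", "severity": "low"}]))   # True
```

The point is not the ten lines of Python. It is that the check runs on every merge, automatically, rather than landing on an operations queue months later.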

And now there are agents

IDC forecasts the number of AI agents deployed in enterprises will grow from 28 million today to more than one billion by 2029. That is a roughly 35x expansion in the space of three years.

McKinsey’s October 2025 research finds that 80% of organisations have already encountered risky behaviour from AI agents, including improper data exposure and unauthorised system access. Capgemini’s research shows that only 2% have scaled agent deployments into production at all.

So here is the operating picture for a CIO planning the next three years. Legacy is already spoken for. The code estate is growing faster than it can be secured. And agents are about to multiply by 35x, most of them outside any governance model that exists today.

Why the operating model was not designed for this

The default response is to hire. More SREs. More security engineers. More governance analysts. The maths does not work. No headcount plan scales 35x. No training programme closes that governance gap in three years when 80% of organisations are already living with risky agent behaviour.

This is where the conversation has to change. The problem is not that IT teams are under-resourced. The problem is that the unit of work is wrong. Ticket queues, manual review, and human-executed runbooks were built for a world where the pace of change was human-scaled. That world is ending.

Where autonomous operations makes the maths work

The organisations that come through this decade intact will not be the ones that grew their operations teams the fastest. They will be the ones that changed what an operations team does.

That change is already starting. It is the shift to autonomous operations. Infrastructure that detects, diagnoses, and resolves within governed boundaries. Code that is reviewed and hardened continuously.

And a control plane built for a billion agents, not a handful. Policy defines what an agent can do. The platform logs every decision. High-risk actions pause for a human. Low-risk actions run inside the boundary and write their own audit trail. Governance stops being a review meeting after the fact. It becomes the policy the agents operate inside.
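That flow can be sketched in a few lines. This is an illustrative toy, not a real product API: the action names, risk tiers, and approval queue are assumptions made for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    agent: str
    action: str
    outcome: str  # "executed" or "escalated"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ControlPlane:
    # Policy defines what an agent can do. These tiers are hypothetical.
    LOW_RISK = {"restart_service", "rotate_log", "scale_replicas"}

    def __init__(self):
        self.audit_log: list[Decision] = []      # every decision is logged
        self.approval_queue: list[Decision] = [] # high-risk actions wait here

    def request(self, agent: str, action: str) -> str:
        if action in self.LOW_RISK:
            # Low-risk: runs inside the boundary, writes its own audit trail.
            self.audit_log.append(Decision(agent, action, "executed"))
            return "executed"
        # High-risk or unknown: pauses for a human.
        d = Decision(agent, action, "escalated")
        self.audit_log.append(d)
        self.approval_queue.append(d)
        return "escalated"

cp = ControlPlane()
print(cp.request("agent-17", "restart_service"))  # executed
print(cp.request("agent-17", "modify_iam_role"))  # escalated
print(len(cp.audit_log))                          # 2
```

Note what governance looks like here: it is not a meeting. It is the `LOW_RISK` set and the approval queue, enforced on every request.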

Autonomous operations is not a feature bolted onto the current model. It is a different unit of work. Engineering capacity shifts from executing runbooks to defining the policy those runbooks operate within. Ticket queues become audit metadata. The human is still accountable for the outcome. The machine handles the execution.

A concrete example. In the old model, a failing service at 3am generates a ticket. Pager goes off. On-call engineer logs in, reads the runbook, restarts the service, closes the ticket.

In the new model, policy already defines what a failing service looks like, what remediation to run, and what counts as an edge case. The agent executes the remediation inside those boundaries. No ticket opens. The audit entry records the decision. The on-call engineer reviews it the next morning, not at 3am.

Same outcome. Different unit of work.
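The 3am flow above can be sketched as a single policy lookup. The service names, failure threshold, and remediation map are illustrative assumptions:

```python
# Hypothetical policy-defined remediation. Policy states what a failing
# service looks like, what remediation to run, and treats anything
# without a policy entry as an edge case that escalates to a human.

POLICY = {
    "checkout-api": {"unhealthy_after_failures": 3, "remediation": "restart"},
}

audit_log = []  # the on-call engineer reviews this the next morning

def handle_health_event(service: str, consecutive_failures: int) -> str:
    rule = POLICY.get(service)
    if rule is None:
        # Edge case: no policy covers this service, so page the human.
        audit_log.append({"service": service, "action": "escalated"})
        return "page_on_call"
    if consecutive_failures >= rule["unhealthy_after_failures"]:
        # Agent executes the remediation inside the policy boundary.
        # No ticket opens; the audit entry records the decision.
        audit_log.append({"service": service, "action": rule["remediation"]})
        return "remediated"
    return "observed"

print(handle_health_event("checkout-api", 4))  # remediated
print(handle_health_event("payments-db", 5))   # page_on_call
```

The engineering work moves from the 3am restart to writing and maintaining `POLICY`: deciding the threshold, the remediation, and what counts as out of bounds.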

The maths does not work inside the old model. It works inside the new one. The question for IT leaders in 2026 is not whether that shift is coming. It is whether they will be running it, or catching up to it.

See how autonomous operations works in production → Autonomous Cloud Operations

Frequently asked questions

Why is 60 to 80% of the IT budget spent on legacy?

Operating cost compounds. Every new system added to the estate adds a permanent maintenance line. Without continuous, automated operation, the run-cost of each generation of technology accumulates into the next.

Is AI-generated code safe to ship at scale?

Not without continuous review and hardening. Veracode's research shows 45% of AI-generated code ships with a known security flaw. Shipping more of it without continuous review compounds that risk. The answer is not less AI-generated code — it is an operations model that patches, reviews, and governs continuously, not in quarterly cycles.

How do you govern a billion AI agents?

Not through human oversight at every action. The only model that scales is policy-defined autonomy: agents operate inside pre-approved guardrails, with escalation paths for anything outside them. That is the foundation of autonomous operations.


