Jobs aren’t “going away.” The easy parts of jobs are going away.
That distinction matters because it changes what you do next.
For 20+ years, every serious wave of tech change has followed the same script: we don’t remove work—we move it. We compress the routine and expand the messy human aspects: judgment, validation, trade-offs, and ownership. Economists have long argued this: technology tends to substitute for well-defined “routine” tasks and to complement non-routine problem-solving and interaction.
Generative AI is simply the first wave that can eat a chunk of cognitive routine that we pretended was “craft.”
So yes—roles across engineering are about to be “redefined.” Software developers, tech leads, architects, testers, program managers, general managers, support engineers—basically anyone who has ever touched a backlog, a build pipeline, or a production incident—will get a fresh job description. It won’t show up as a layoff notice at first. It’ll appear as a cheerful new button labeled “Generate.” You’ll click it. It’ll work. You’ll smile. Then you’ll realize your role didn’t disappear… it just evolved into full-time responsibility for whatever that button did.
And if you’re waiting for the “AI took my job” moment… you’re watching the wrong thing. The real shift is quieter: your job is becoming more like the hardest 33% of itself.
Now let’s talk about what history tells us happens next.
The Posters-to-Plumbing Cycle
Every transformation begins as messaging and ends as infrastructure. In the beginning, it’s all posters—vision decks, slogans, townhalls, and big claims about how “everything will change.” The organization overestimates the short term because early demos look magical and people confuse possibility with readiness. Everyone projects their favorite outcome onto the new thing: engineers see speed, leaders see savings, and someone sees a “10x” slide and forgets the fine print.
Then reality walks in wearing a security badge. Hype turns into panic (quiet or loud) when the organization realizes this isn’t a trend to admire—it’s a system to operate. Questions get sharper: where does the data go, who owns mistakes, what happens in production, what will auditors ask, what’s the blast radius when this is wrong with confidence? This is when pilots start—not because pilots are inspiring, but because pilots are the corporate way of saying “we need proof before we bet the company.”
Pilots inevitably trigger resistance, and resistance is often misread as fear. In practice, it’s frequently competence. The people who live with outages, escalations, compliance, and long-tail defects have seen enough “quick wins” to know the invoice arrives later. They’re not rejecting the tool—they’re rejecting the lack of guardrails. This is the phase where transformations either mature or stall: either you build a repeatable operating model, or you remain stuck in a loop of demos, exceptions, and heroics. This is where most first-mover organizations are today!
Finally, almost without announcement, the change becomes plumbing. Standards get written, defaults get set, evaluation and review gates become normal, access controls and audit trails become routine, and “AI-assisted” stops being a special initiative and becomes the path of least resistance. That’s when the long-term impact shows up: not as fireworks, but as boredom. New hires assume this is how work has always been done, and the old way starts to feel strange. That’s why we underestimate the long term—once it becomes plumbing, it compounds quietly and relentlessly.
The Capability–Constraint See-Saw
Every time we add a new capability, we don’t eliminate friction—we move it. When software teams got faster at shipping, the bottleneck didn’t vanish; it simply relocated into quality, reliability, and alignment. That’s why Agile mattered: not because it made teams “faster,” but because it acknowledged an ugly truth—long cycles hide misunderstanding, and misunderstanding is expensive. Short feedback loops weren’t a trendy process upgrade; they were a survival mechanism against late-stage surprises and expectation drift.
Then speed created its own boomerang. Shipping faster without operational maturity doesn’t produce progress—it produces faster failure. So reliability became the constraint, and the industry responded by professionalizing operations into an engineering discipline. SRE-style thinking emerged because organizations discovered a predictable trap: if operational work consumes everyone, engineering becomes a ticket factory with a fancy logo. The move wasn’t “do more ops,” it was “cap the chaos”—protect engineering time, reduce toil, and treat reliability as a first-class product of the system.
AI is the same cycle on fast-forward. Right now, many teams are trying to automate the entire SDLC like it’s a one-click migration, repeating the classic waterfall fantasy: “we can predict correctness upfront.” But AI doesn’t remove uncertainty—it accelerates it. The realistic path is the one we learned the hard way: build an interim state quickly, validate assumptions early, and iterate ruthlessly. AI doesn’t remove iteration. It weaponizes iteration—meaning you’ll either use that speed to learn faster, or you’ll use it to ship mistakes faster.
Power Tools Need Seatbelts
When tooling becomes truly powerful, the organization doesn’t just need new skills—it needs new guardrails. Otherwise the tool optimizes for the wrong thing, and it does so at machine speed. This is the uncomfortable truth: capability is not the same as control. A powerful tool without constraints doesn’t merely “help you go faster.” It helps you go faster in whatever direction your incentives point—even if that direction is off a cliff.
This is exactly where “agentic AI” gets misunderstood. Most agent systems today aren’t magical beings with intent; they’re architectures that call a model repeatedly, stitch outputs together with a bit of planning, memory, and tool use, and keep looping until something looks like progress. That loop can feel intelligent because it keeps moving, but it’s also why costs balloon. You’re not paying for one answer; you’re paying for many steps, retries, tool calls, and revisions—often to arrive at something that looks polished long before it’s actually correct.
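To make that concrete, here’s a minimal, hypothetical sketch of such a loop. The names call_model, run_tool, and looks_like_progress are stand-ins I’ve invented for a real model API, a tool layer, and a stopping heuristic; the point is that every pass through the loop is another paid call, not that this is how any particular vendor builds agents.

```python
# A hypothetical sketch of the agent loop described above. call_model and run_tool
# are stand-ins for a real LLM API and a tool layer; the stopping rule is
# deliberately naive to show why "keeps moving" is not the same as "is correct".

def call_model(prompt: str) -> str:
    """Stand-in for a model call: every invocation costs tokens and money."""
    return f"PLAN: next step for task beginning '{prompt[:30]}...'"

def run_tool(action: str) -> str:
    """Stand-in for a tool call (search, code execution, an API request)."""
    return f"RESULT of: {action}"

def looks_like_progress(history: list[str]) -> bool:
    """Deliberately naive stopping rule: 'looks done' is not 'is correct'."""
    return len(history) >= 7

def agent_loop(task: str, max_steps: int = 10) -> tuple[str, int]:
    history = [task]
    model_calls = 0
    for _ in range(max_steps):
        plan = call_model("\n".join(history))   # one paid model call per step
        model_calls += 1
        history.append(plan)
        history.append(run_tool(plan))          # plus tool latency, retries, revisions
        if looks_like_progress(history):
            break
    return history[-1], model_calls             # you paid for every step, not for one answer

if __name__ == "__main__":
    answer, calls = agent_loop("summarize the incident report")
    print(f"{calls} model calls to produce: {answer}")
```

Even this toy version makes the cost structure visible: the bill scales with steps taken, not with answers delivered.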
Then CFO reality arrives, and the industry does what it always does: it tries to reduce cost and increase value. The shiny phase gives way to the mature phase. Open-ended “agents that can do anything” slowly get replaced by bounded agents that do one job well. Smaller models get used where they’re good enough. Evaluation gates become mandatory, not optional. Fewer expensive exploratory runs, more repeatable workflows. This isn’t anti-innovation—it’s the moment the tool stops being a demo and becomes an operating model.
And that’s when jobs actually change in a real, grounded way. Testing doesn’t vanish; it hardens into evaluation engineering. When AI-assisted changes can ship daily, the old test plan becomes a liability because it can’t keep up with the velocity of mistakes. The valuable tester becomes the person who builds systems that detect wrongness early—acceptance criteria that can’t be gamed, regression suites that catch silent breakage, adversarial test cases that expose confident nonsense. In this world, “this output looks convincing—and it’s wrong” becomes a core professional skill, not an occasional observation.
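As a small illustration of what evaluation engineering can look like in code, here’s a hedged sketch. generate_discount is a hypothetical AI-assisted function I made up for the example; the gate is a handful of adversarial cases whose acceptance criteria can’t be satisfied by output that merely looks plausible.

```python
# A minimal, hypothetical evaluation gate. generate_discount stands in for
# AI-assisted logic that "looks convincing"; the adversarial cases encode
# acceptance criteria that plausible-looking output alone can't satisfy.

def generate_discount(order_total: float, coupon: str) -> float:
    """Pretend this was AI-generated: fine on the happy path, wrong at the edges."""
    if coupon == "SAVE10":
        return order_total * 0.9
    return order_total

ADVERSARIAL_CASES = [
    # (order_total, coupon, expected, why this case exists)
    (100.0, "SAVE10", 90.0,  "happy path"),
    (100.0, "save10", 100.0, "lowercase coupon must not silently apply"),
    (-50.0, "SAVE10", 0.0,   "negative totals must never turn into payouts"),
    (0.0,   "SAVE10", 0.0,   "zero stays zero"),
]

def run_eval_gate() -> bool:
    failures = []
    for total, coupon, expected, reason in ADVERSARIAL_CASES:
        got = generate_discount(total, coupon)
        if abs(got - expected) > 1e-9:
            failures.append(f"{reason}: expected {expected}, got {got}")
    for failure in failures:
        print("FAIL:", failure)
    return not failures   # the gate blocks the ship; nobody has to eyeball every diff

if __name__ == "__main__":
    raise SystemExit(0 if run_eval_gate() else 1)
```

Wired into CI, a gate like this turns “it sounds right” into a question the pipeline answers before a human ever has to.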
Architecture and leadership sharpen in the same way. When a model can generate a service in minutes, the architect’s job stops being diagram production and becomes trade-off governance: cost curves, failure modes, data boundaries, compliance posture, traceability, and what happens when the model is confidently incorrect.
Tech leads shift from decomposing work for humans to decomposing work for a mixed workforce—humans, copilots, and bounded agents—deciding what must be deterministic, what can be probabilistic, what needs review, and where the quality bar is non-negotiable.
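One way a tech lead might encode that decomposition is sketched below; the executor categories, quality gates, and example tasks are illustrative inventions, not a prescription.

```python
# An illustrative sketch of a work-decomposition policy for a mixed workforce.
# Executors, gates, and example tasks are made up for the sake of the example.

from dataclasses import dataclass
from enum import Enum

class Executor(Enum):
    DETERMINISTIC_CODE = "deterministic code"   # must be exact: billing, auth, migrations
    BOUNDED_AGENT = "bounded agent"             # probabilistic is fine, output is gated
    HUMAN = "human"                             # judgment calls and final sign-off

@dataclass
class WorkItem:
    name: str
    executor: Executor
    requires_review: bool   # does a human approve before it ships?
    quality_gate: str       # which evaluation suite must pass

backlog = [
    WorkItem("apply the database schema migration", Executor.DETERMINISTIC_CODE, True, "migration dry-run"),
    WorkItem("draft release notes from merged PRs", Executor.BOUNDED_AGENT, True, "style and fact check"),
    WorkItem("triage duplicate bug reports", Executor.BOUNDED_AGENT, False, "dedupe eval suite"),
    WorkItem("decide the deprecation timeline", Executor.HUMAN, True, "architecture review"),
]

for item in backlog:
    print(f"{item.name:42} -> {item.executor.value:18} review={item.requires_review}")
```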
Managers, meanwhile, become change agents on steroids, because incentives get weaponized: measure activity and you’ll get performative output; measure AI-generated PRs and you’ll get risk packaged as productivity. And hovering over all of this is the quiet risk people minimize until it bites: sycophancy—the tendency of systems to agree to be liked—because “the customer asked for it” is not the same as “it’s correct,” and “it sounds right” is not the same as “it’s safe.”
The Judgment Premium
Every leap in automation makes wine cheaper to produce—but it makes palate and restraint more valuable. When a giant wine producer can turn out consistent bottles at massive scale, the scarcity shifts away from “can you make wine” to “can you make great wine on purpose.” That’s why certain producers and tasters become disproportionately important: a winemaker who knows when not to push extraction, or a critic like Robert Parker who can reliably separate “flashy and loud” from “balanced and lasting.” Output is abundant; discernment is the premium product.
And automation doesn’t just scale production—it scales mistakes with terrifying efficiency. If you let speed run the show (rush fermentation decisions, shortcut blending trials, bottle too early, “ship it, we’ll fix it in the next vintage”), you don’t get a small defect—you get 10,000 bottles of regret with matching labels. The cost of ungoverned speed shows up as oxidation, volatility, cork issues, brand damage, and the nightmare scenario: the market learning your wine is “fine” until it isn’t. The best estates aren’t famous because they can produce; they’re famous because they can judge precisely, slow down at the right moments, and refuse shortcuts even when the schedule (and ego) screams for them.
The Bottom Line
Jobs aren’t going away. They’re being redefined into what’s hardest to automate: problem framing, constraint setting, verification, risk trade-offs, and ownership. Routine output gets cheaper. Accountability gets more expensive. The winners won’t be the people who “use AI.” The winners will be the people who can use AI without turning engineering into confident nonsense at scale.
AI will not replace engineers. It will replace engineers who refuse to evolve from “doing” into “designing the system that does.”