The Programmer Has Been Promoted
The New York Times Magazine published a landmark piece this week. Reporter Clive Thompson interviewed more than 70 software developers at Google, Amazon, Microsoft, Apple, and startups. His finding: software engineers now spend more time directing AI agents than writing code. The agents plan, implement, test, and revise. The developers review, redirect, and approve. The headline reads: "Coding After Coders: The End of Computer Programming as We Know It."
He got the facts right. He got the framing wrong.
"The end of programming as we know it" is being read across the internet as a mournful statement, a loss of craft, a profession in decline. It should be read as a graduation ceremony. Because this is not the first time this announcement has been made. It has been made, with equal gravity and equal certainty, at every major transition in the history of computing. And every single time, the people making it were wrong.
1. A Sixty-Year Pattern of Wrong Predictions
In 1957, FORTRAN shipped. Before FORTRAN, programmers wrote in assembly language. They communicated with machines in the machine's own idiom: direct memory addresses, register manipulations, the specific instruction set of whatever hardware they were sitting in front of. It was intimate and painstaking and required a deep model of exactly what the hardware was doing at every step.
When FORTRAN arrived, many assembly programmers were furious. They argued that compilers couldn't possibly produce efficient machine code. They said the abstraction was leaky and unreliable. They said real programmers worked close to the metal, and this high-level language business was for people who didn't understand computers.
FORTRAN was not the end of programming. FORTRAN was the beginning of scientific computing as a global industry.
In the 1980s, object-oriented programming arrived with Smalltalk and later C++. The procedural programmers had their counterarguments ready. OOP hides what the machine is actually doing. The abstractions produce bloated, inefficient programs. You lose control of your data layout. Real programs are procedural.
OOP was not the end of programming. OOP made commercial software at scale possible.
In the 1990s, Java arrived with a garbage collector. C and C++ programmers were scandalized. Giving up manual memory management meant accepting unpredictable pauses, heap bloat, performance cliffs. Real programmers managed their own memory. Garbage collection was for people who couldn't be trusted with the fundamentals.
Garbage collection was not the end of programming. It is part of why the internet exists.
In the 2000s, cloud computing made it possible to treat infrastructure as a commodity. Why would any serious engineer surrender control over their own servers? The security implications were terrifying. The latency was unpredictable. Real operations meant knowing your hardware.
The cloud was not the end of operations. It created an entirely new discipline of platform engineering and allowed two-person startups to serve millions.
The pattern is not subtle. At every inflection point, a generation of practitioners has declared that the essential craft has been diluted or lost. At every inflection point, the new abstraction layer has enabled software that the previous layer made practically impossible. At every inflection point, the number of people writing software has gone up, not down. The job has never disappeared. The job has always expanded.
2. What the Times Piece Actually Describes
Read the article carefully and it is not a story about developers disappearing. It is a story about what happens when you finally remove the primary bottleneck from a creative discipline.
The bottleneck in software development was never ideas. It was never architectural judgment. It was never understanding users, system design, or knowing which problem was worth solving. The bottleneck was implementation velocity: the raw rate at which a human could translate an idea into working, tested, deployed code. That bottleneck was always brutally slow relative to human thought.
That bottleneck is being dismantled.
One developer Thompson profiles used to spend a full day on work that now takes half an hour. Not because he got smarter. Not because he worked harder. Because the implementation step, which consumed most of his day, is now handled by agents running in parallel. His actual job, the part that requires accumulated judgment, domain knowledge, and the ability to recognize when something is wrong, has not been automated. It has been amplified.
When a developer directs agents, they are not doing less work. They are doing the same intellectual work at ten times the output. The ratio of thinking to typing has inverted. What used to be ninety percent typing and ten percent deciding is now the reverse. That is not a loss. That is a lever being applied.
The Times frames this transformation as eerie, vertiginous, strange. That framing reflects the disorientation of the transition, not the character of where it leads.
3. The Amazon Counterargument, and Why It Strengthens the Case
The same week Thompson's piece dropped, the Guardian published a story about Amazon employees who say AI tools are making them slower. The internal tool hallucinates, generates flawed code, and forces engineers to correct AI mistakes. One engineer described it as "trying to AI my way out of a problem that AI caused." More than half a dozen current and former employees told the Guardian that Amazon is pressing all staff to integrate AI across their work even when it hurts productivity. Management tracks AI usage. Employees feel surveilled. The tools get blamed for the results.
Skeptics will point to this as a counterargument to the Times piece. Look, they'll say. At scale, AI is actually slower. The Times is writing puff pieces for the AI industry.
But the Amazon story and the Times story are not contradictory. Together, they describe the same phenomenon from opposite directions.
The difference between the startup founders in the Times piece and the Amazon employees in the Guardian piece is not the quality of the tools. The difference is autonomy. The Times developers adopted AI because they wanted to, because the results were better, because their work became more satisfying. There was no usage dashboard tracking their token consumption. There was no quarterly review of AI adoption rates. They used the tools when the tools helped and didn't use them when they didn't.
Amazon mandated AI use, tracked it, pressured people to demonstrate engagement, and is apparently using adoption metrics as a proxy for productivity. The developers resent being measured this way. The tools get used performatively rather than productively. And then the Guardian gets a story.
This dynamic has played out before. The same pattern produced the cargo-cult Agile that saturated enterprise software in the 2010s, where companies adopted the ceremonies of Agile methodology while discarding the principles, producing all the overhead with none of the benefits. The same pattern produced the microservices mandate era, where organizations decomposed perfectly functional monoliths into distributed-systems nightmares because distributed was considered modern.
The lesson is not that AI doesn't work. The lesson is that forced adoption of any tool, administered from above as a compliance exercise, produces compliance behavior, not productivity. That is a management failure. The tool is incidental.
4. The One Real Loss Worth Naming
There is something real in the concern, though it gets stated imprecisely.
A developer who has spent a decade writing code builds an intuition that runs below articulation. They feel when an abstraction is wrong before they can explain why. They recognize the signature of a memory leak, a hot path, a subtle concurrency bug, the way a doctor recognizes symptoms before running the tests. That intuition comes from repetition: from having written the implementations themselves, made the mistakes themselves, spent the hours debugging themselves.
If the next generation of developers directs agents almost exclusively and rarely writes code by hand, they may not build that intuition. They will be competent at describing what they want. They may not know, viscerally, what the code is actually doing.
This is worth naming. It is not unique to AI. Every abstraction layer creates some version of this gap. Modern web developers who have never managed bare-metal infrastructure have less systems intuition than their predecessors. Java developers who never managed memory have less systems intuition than C programmers. That gap is real. It is also the accepted cost of building higher.
The answer is not to refuse the abstraction. The answer is intentionality. The best engineers of the AI era will be those who study the code the agents generate, who read the output carefully, who periodically work closer to the metal specifically to maintain the intuition. Not because it is required for daily work, but because it sharpens the judgment that makes the higher-level direction better.
The developers who spend the next five years directing agents without ever scrutinizing the output will eventually be burned by a system design flaw their intuition couldn't catch. The developers who treat agent output as an object of study, not just a deliverable, will build the judgment to catch those flaws early.
5. What Becomes Possible Now
The most important consequence of this shift is not about who writes the code. It is about what gets built that could not have been built before.
For most of software history, engineering bandwidth was the binding constraint on product scope. A two-person startup could maintain one product, maybe two if the founders were exceptional. Every feature competed with every other feature for developer time. Entire categories of improvement simply never happened because the implementation cost was too high relative to the benefit. Product roadmaps were exercises in rationing scarcity.
That constraint is dissolving. There are independent builders today operating with the productive surface area of teams that would have required fifteen or twenty people three years ago. A single developer running agents in parallel can maintain multiple products, run experiments that would have required a full sprint, ship features in hours that would have required weeks of coordinated work.
This is not a marginal improvement in developer efficiency. It is a structural change in the economics of building software. The historical parallel is the printing press, which did not eliminate authors but transformed who could reach an audience and at what cost.
Software is undergoing the same transformation. The binding constraint is no longer implementation bandwidth. It is ideas, taste, and judgment. Those things scale differently than typing speed. You cannot throw more hours at vision.
The Times piece, framed as an elegy for the craft of typing code, is actually documenting something much more interesting: the moment when the rate-limiting step of software development stopped being technical execution and became human judgment about what is worth building.
Every prior generation of programmers would have recognized that shift as an upgrade.
They were right. So is this one.