My View of Software Engineering Has Changed For Good

Some time ago, I wrote an article called The Illusion of Vibe Coding. In it, I argued that there are no shortcuts to mastery, and that relying on AI-generated code without deep understanding only shifts problems downstream. That article came from skepticism.
Recently, after seeing early autonomous agent systems like OpenClaw, I realized something important: the skepticism is still there, but my perspective has shifted.
In my view, we are no longer just discussing “better tools for developers.” We are starting to see the outlines of an entirely different operating model for software engineering.
We no longer write code, we express intent
For years, the industry has framed progress as AI “assisting” engineers: smarter autocomplete, faster refactoring, better suggestions. All useful, but still stuck in the same old mindset – and I don’t think that way of thinking works anymore.
The future of software engineering isn’t just developers using AI tools. It’s autonomous coding agents and humans working together, each guiding the other.
Today, software development is all about execution. Humans break work into tickets, write and review code, test, deploy, and repeat. Even with better tools, the mental load of coordination and context switching hasn’t changed.
In The Illusion of Vibe Coding, I argued that skipping understanding makes systems fragile. I still believe that. What’s different is who I think will do the executing.
I believe autonomous coding agents will take over much of the mechanical work, while humans move up the stack.
Instead of assigning tasks, we’ll define intent: outcomes, constraints, trade-offs. Agents will plan, execute, and consult humans only when uncertainty is high.
Humans are no longer the primary executors; we become the checkpoint. From the agent’s view, we’re part of the control loop, not collaborators.
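To make that idea concrete, here is a minimal sketch in Python of what an intent-plus-control-loop workflow could look like. Everything in it is hypothetical – the Intent and Plan structures, propose_plan, ask_human, and the uncertainty threshold are my own illustration, not a real agent framework or API.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: "intent" replaces the ticket as the unit of work,
# and the human sits in the loop as a checkpoint, not as the executor.

@dataclass
class Intent:
    outcome: str                                          # the result the business actually needs
    constraints: list[str] = field(default_factory=list)  # hard limits the agent must respect
    trade_offs: str = ""                                  # guidance for when goals conflict

@dataclass
class Plan:
    steps: list[str]
    uncertainty: float            # the agent's own estimate: 0.0 (confident) to 1.0 (guessing)

def run(intent: Intent, propose_plan, execute, ask_human, max_uncertainty: float = 0.3):
    """The agent plans and executes; the human is consulted only when
    the agent's self-reported uncertainty crosses a threshold."""
    plan = propose_plan(intent)
    if plan.uncertainty > max_uncertainty:
        plan = ask_human(intent, plan)   # escalation: the human reviews or amends the plan
    return execute(plan)                 # otherwise the agent proceeds on its own
```

The interesting part isn’t the code, it’s where the human appears: not on every step, but only at the moments of highest uncertainty.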
Andrej Karpathy, former Director of AI at Tesla, also notes that as execution shifts to machines, humans’ focus naturally moves toward guiding principles, architecture, and intent. This aligns with what I’ve observed: our role is increasingly supervisory and strategic, not mechanical.

Agents never forget the system, humans do
This shift isn’t just about better code generation, it’s about context.
Autonomous agents can read entire systems in ways humans can’t: traverse repos, inspect dependencies, analyze history, read docs, and link decisions across months or years.
What we call “tribal knowledge” is mostly a workaround for human memory. Agents don’t forget, avoid legacy code, or shy away from repos. They treat the system itself as the unit of change, not a ticket, file, or pull request.
Trust in software will shift from the output to the process itself
One aspect I think is often underestimated is verification.
I believe autonomous agents won’t just write code and wait for CI, they’ll run tests, add coverage, debug failures, and review their work against architecture.
In The Illusion of Vibe Coding, I warned against blind trust in generated code. Now, I think trust should shift from output to process. CI stops being a gate for humans and becomes a feedback loop agents actively use. Software moves from “written then checked” to continuously reasoned about.
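As a rough illustration of what that feedback loop could look like, here is a short Python sketch. The generate_patch and apply_patch callbacks are placeholders I’ve assumed for whatever the agent does internally, not a real API; the only real tool invoked is a test runner.

```python
import subprocess

def verify_loop(generate_patch, apply_patch, max_attempts: int = 5) -> bool:
    """CI as a feedback loop the agent uses, not a gate a human waits on.
    generate_patch and apply_patch are hypothetical stand-ins for the agent's internals."""
    feedback = ""                                    # empty on the first attempt
    for _ in range(max_attempts):
        apply_patch(generate_patch(feedback))        # agent proposes a change, informed by past failures
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:                   # green tests: trust comes from the process itself
            return True
        feedback = result.stdout + result.stderr     # failures become input to the next iteration
    return False                                     # budget exhausted: escalate to a human reviewer
```

The point of the sketch is the direction of the arrows: test results flow back into the agent, and a human only enters when the loop gives up.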
Companies will adopt autonomous agents in different ways
I don’t think companies will adopt autonomous agents uniformly. Instead, I expect a clear split.
- Some companies will deeply integrate autonomous agents into their systems, aligning them with architecture, security, and risk. The advantage won’t be the model itself, but the alignment layer, which will quietly become intellectual property.
- Other companies will use cloud-based agents, prioritizing speed, accessibility, and low overhead. They’ll excel with standard architectures and fast-moving teams, even if the agents don’t fully grasp the business – often an acceptable trade-off.
Most organizations will likely go hybrid, using cloud agents for experimentation and self-hosted agents for core systems. As today, the most sensitive parts of the business will stay in-house.
So, what does this mean for software engineers?
I don’t believe this future eliminates engineers, but it will force a sorting.
Experts remain essential, shifting from coding to orchestrating autonomous agents, ensuring output is secure, maintainable, and aligned with intent.
I think infrastructure engineers become even more critical – they build, run, and improve the systems autonomous agents rely on. I believe the most important production system won’t be the application itself, but the system that creates and evolves it.
What’s interesting is that this shift is already changing where the bottleneck lives.
Robert C. Martin recently described this succinctly:

In other words, humans are becoming the bottleneck.
Execution can be automated; judgment and responsibility cannot. The middle layer shrinks, and value moves to architecture, risk, and knowing what not to build.
One thing stays the same: responsibility cannot be outsourced.
What will happen to interns and junior engineers?
This is the part where I don’t have a confident or comforting answer, and pretending otherwise would be dishonest.
If agents handle most execution, the old path for juniors (doing small tasks to learn) won’t work anymore. That doesn’t doom them, but the way they gain experience will have to change.
I think Anthropic’s research is spot on: struggling and effort are essential for mastery, and removing friction too early quietly erodes learning.
And this isn’t just a junior problem; I think it applies equally to experienced developers.
When agents let us skip hard thinking, growth stalls: juniors may never build fundamentals, and experienced engineers can slowly lose judgment and architectural instinct. The ones who thrive will be those who actively choose to remain experts, even when tools make shortcuts tempting.
Two types of developers will emerge
As I described earlier, two broad roles are likely to emerge, and both will require deep knowledge:
- One path is for engineers who orchestrate autonomous agents. They need a solid grasp of system architecture, the ability to turn business needs into technical solutions, and the judgment to know whether a change is safe, maintainable, and scalable. That kind of judgment isn’t learned by cutting corners, it comes from wrestling with trade-offs, failure modes, and long-term consequences.
- The other path is engineers who build and run the autonomy infrastructure itself. They need deep expertise in infrastructure, security, networking, permissions, observability, and cost management, and in self-hosted setups, also skills in model hosting, fine-tuning, or even training specialized systems.
Both paths lead to highly skilled experts. Neither is shallow.
What worries me isn’t juniors being replaced – it’s everyone becoming passive, letting agents do the thinking that actually builds real skill.
My advice hasn’t changed since The Illusion of Vibe Coding: keep learning, keep experimenting, and don’t outsource your understanding. Use agents to amplify your thinking, not replace it. The difference now is urgency, and how we learn must evolve at every level.
Imagine an engineer adding a feature. Instead of just coding, they first understand why it matters, what could break, and how it fits in the system. They might use an agent to propose a solution but review it critically. Over time, they don’t just write better code, they get better at judging designs, spotting risks, and turning vague ideas into solid solutions. That’s the kind of skill that keeps you valuable when execution is cheap.
Machines will execute, humans will supervise and judge
We keep asking how AI will assist developers. That question assumes the old hierarchy stays. It won’t.
In my view, software engineering is heading toward a quiet inversion: machines will execute, and humans will supervise, constrain, and judge, not because humans are weaker, but because machines excel at execution, and humans excel at responsibility.
Vibe coding was never the goal – it was just a step. The future isn’t about coding faster. It’s about deciding, deliberately, what’s actually worth building.


