Some Engineering Teams Won’t Be Ready for AI Orchestration – and It Will Cost Them

How much are engineering teams really gaining from AI? It’s a question many aren’t ready to answer honestly.
Partly because the answer changes depending on who you ask, and partly because the two emerging answers point in completely opposite directions.
Iain Bishop, CEO of Damala Technology and a former CTO with over two decades of experience, believes that “there are uneven gains with AI at the moment.”
Some teams are moving fast – shipping more, experimenting, shaping decisions, and owning outcomes. Others are still treating AI like a smarter autocomplete, focusing on infrastructure and reliability. The gap between these groups, Iain believes, is only going to grow.
Soon, devs won’t just use AI – they will coordinate it
Most teams today are still operating in what Iain describes as the copilot phase.
AI sits alongside developers, helping them generate code, suggest improvements, or speed up repetitive tasks. It’s useful, but it doesn’t fundamentally change how work is structured – though Iain believes that could change soon:
What we’ll see over time is a move from a copilot model to an orchestration model.
In that world, developers don’t just use AI, they coordinate it. Instead of writing everything themselves, they manage multiple AI agents, assign tasks, validate outputs, and connect everything into a working system. The role shifts from execution to direction.
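
To make the shift concrete, here is a minimal sketch of what an orchestration loop could look like. Everything in it – the agent roles, the run_agent() helper, the looks_reasonable() check – is a hypothetical illustration, not a specific platform or Iain’s own tooling.

```python
# Hypothetical sketch of an orchestration loop: assign tasks to agents,
# validate what comes back, and integrate the results. run_agent() and
# looks_reasonable() are placeholders, not a real library.

def run_agent(role: str, task: str) -> str:
    """Stand-in for a call to an AI agent (e.g. an LLM given a role-specific prompt)."""
    raise NotImplementedError("Wire this up to a model provider of your choice.")

def looks_reasonable(code: str) -> bool:
    """Stand-in for real validation: linting, tests, human review."""
    return bool(code.strip())

def orchestrate(feature_spec: str) -> dict:
    # 1. Assign work to specialised agents instead of writing everything by hand.
    outputs = {
        "api": run_agent("backend", f"Implement the API for: {feature_spec}"),
        "ui": run_agent("frontend", f"Build the UI for: {feature_spec}"),
        "tests": run_agent("qa", f"Write tests covering: {feature_spec}"),
    }

    # 2. Validate each output - the developer stays accountable for what ships.
    for name, draft in outputs.items():
        if not looks_reasonable(draft):
            outputs[name] = run_agent("reviewer", f"Fix the problems in:\n{draft}")

    # 3. Connecting the pieces into a working system is still the developer's job.
    return outputs
```

The direction of travel is the point: the developer’s time moves from writing each piece to deciding what to delegate and judging whether the result is good enough to ship.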
You’re still accountable, no matter how smart AI becomes
As tools become more powerful, there’s a growing temptation to push more responsibility onto them. Iain sees that as a dangerous path:
If AI is just like a co-worker, it isn’t truly autonomous and we remain accountable no matter how powerful the tools are.
The risk isn’t that AI will take control. It’s that teams will give it up too easily:
If we allow AI tools to operate completely autonomously, we lose that accountability. And that’s the wrong approach.
This means developers aren’t becoming less responsible; if anything, they’re becoming more so. They’re accountable not just for what they write, but for what they orchestrate.
AI’s first impact won’t be mass layoffs, it will be role compression
The first big impact won’t be mass layoffs but role compression. “In the coming years, teams will shrink, and people will need to wear multiple hats,” Iain says.
The lines between traditional roles are starting to blur: you’ll see more product engineers building AI-driven solutions. At the same time, deep technical expertise won’t disappear; if anything, it will become even more critical, Iain explains.
There will always be a need for systems engineers who understand what good code looks like.
As AI generates more code, someone still needs to ensure the architecture makes sense.
Iain sees two clear paths:
- Toward product – understanding users and business needs, and delivering end-to-end solutions;
- Deeper into systems – architecture, design, and scalability.
“The risk is for engineers who stay in the middle,” he says. “With AI handling more execution, being just kind of technical and kind of product-aware may no longer be enough.”
Structuring AI lets teams move fast without losing control
Most companies aren’t struggling with what AI can do; they’re struggling with how to manage it, Iain says: “There’s a rapid pace of change, and companies need to get control of what’s happening.”
The instinct is to lock things down (limit tools, restrict access, add heavy governance), but engineers will find ways around it.
A more sustainable path is to structure how AI is used. Iain points to orchestration platforms, where standards, design systems, and governance are built into AI workflows. This lets teams move fast without losing control, and ensures organisations don’t have to choose between speed and consistency. Control comes not just from systems, but from people understanding the tools they’re using.
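
What “built into the workflow” could mean in practice is easiest to show with a toy example. The guardrails below are invented for illustration – they stand in for whatever standards, design systems, and review gates an organisation actually adopts.

```python
# Toy illustration of governance baked into an AI workflow rather than bolted on.
# The specific rules and the check_output() gate are hypothetical.

GUARDRAILS = {
    "approved_models": ["internal-code-assistant"],  # which tools may be used at all
    "coding_standards": "docs/style-guide.md",       # context injected into every prompt
    "require_human_review": True,                    # AI output never merges unreviewed
}

def check_output(generated_code: str) -> bool:
    """Automated gate each AI-generated change must pass before human review starts."""
    # Stand-in for real checks: linters, design-system conformance, security scans.
    return "TODO" not in generated_code and len(generated_code.strip()) > 0
```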
Knowing how to use new models won’t come automatically
With all the focus on automation, one skill is quietly becoming critical: communication.
Iain says that for teams new to AI, it’s about more than prompts – it’s understanding models, structuring context, and guiding outputs into something usable.
Prompt engineering is really about creating the right context to get the best response.
This changes how developers work. Instead of writing everything, they guide systems, shape inputs, and validate outputs. Models will keep improving – that’s inevitable – but knowing how to use them well won’t be automatic.
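
As a rough illustration of what “creating the right context” can mean, the sketch below assembles a structured prompt from a task, a codebase summary, and explicit constraints. The field names and format are assumptions made for the example, not a prescribed technique.

```python
# Illustrative only: prompt engineering treated as context engineering.
# The structure and field names here are made up for the example.

def build_prompt(task: str, codebase_summary: str, constraints: list[str]) -> str:
    """Assemble explicit context instead of hoping the model guesses it."""
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are working in this codebase:\n{codebase_summary}\n\n"
        f"Constraints:\n{constraint_text}\n\n"
        f"Task: {task}\n"
        "Return only the changed code."
    )

prompt = build_prompt(
    task="Add pagination to the orders endpoint",
    codebase_summary="Python/FastAPI service, repository pattern, pytest for tests",
    constraints=["Follow the existing error-handling style", "No new dependencies"],
)
# The prompt goes to a model; the developer still reviews and validates whatever comes back.
```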


