How Uber Engineers Use AI Agents

At the Pragmatic Summit, I listened to Uber’s Director of Engineering, Anshu Chadha, and Principal Engineer, Ty Smith, discuss how one of the world’s largest technology companies is integrating generative AI into its engineering workflow.
Here is the core of what they shared:
At Uber, engineers are beginning to assign coding tasks to AI agents much like managers distribute work among their teams.
Say hello to my new colleague – AI
Uber has been using AI for years in systems like its matching platform, but bringing generative AI into the day-to-day work of engineers is a newer step.
According to Anshu, the goal isn’t to replace engineers – it’s to help them get more done.
We’re not pushing for AI to automate all humans in the company. Our goal is to let engineers focus on creative work rather than toil.
Practically speaking, repetitive tasks such as code migrations, upgrades, documentation, and bug fixes are now being handled by AI-powered agents. According to Anshu, it frees engineers to build features and enhance the user experience.
The end of hands-on programming as we know it?
One of the biggest shifts Uber has observed is the transition from traditional AI-assisted coding tools toward agent-based workflows.
Tools like GitHub Copilot made coding faster by helping developers in the moment, but now we’re entering a new era: AI agents that can work independently, tackling tasks without needing someone at the keyboard.
Back in 2022 and 2023, developer velocity saw a modest 10–15% increase. Today, the paradigm has shifted to what we call “peer programming,” where developers can delegate workloads to AI agents and intervene or redirect them as needed.
This approach essentially positions engineers as tech leads directing AI agents. Developers define the goal, while agents execute parts of the work in the background and return results for review.
Uber has built an internal platform, based largely on Michelangelo, its machine learning platform, that plugs AI agents directly into the engineering workflow. It provides access to models from OpenAI and Anthropic, as well as Uber's own internal models.
On top of that, they’ve created agent-driven tools that tap into company data (source code, documentation, Jira tickets, Slack) so the AI agents have enough context to actually get work done.
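To make the idea concrete, here is a minimal sketch of how context aggregation like this could work: snippets from several internal sources are gathered and folded into the agent's prompt. The function and source names are illustrative assumptions, not Uber's actual APIs, and the lookups are stubbed.

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    """Holds a task description plus context snippets gathered for an agent."""
    task: str
    snippets: list = field(default_factory=list)

    def add(self, source: str, text: str) -> None:
        self.snippets.append(f"[{source}] {text}")

    def to_prompt(self) -> str:
        # Concatenate the task with whatever context was gathered.
        return f"Task: {self.task}\n\nContext:\n" + "\n".join(self.snippets)

def build_context(task: str, sources: dict) -> AgentContext:
    """Collect relevant snippets (code, docs, tickets, chat) for an agent."""
    ctx = AgentContext(task)
    for name, lookup in sources.items():
        for snippet in lookup(task):
            ctx.add(name, snippet)
    return ctx

# Stubbed lookups standing in for real integrations (source code, Jira, Slack).
sources = {
    "code": lambda q: ["def ride_match(...): ..."],
    "jira": lambda q: ["TICKET-123: migrate ride_match to v2 API"],
}
prompt = build_context("Migrate ride_match to v2", sources).to_prompt()
```

In a real system each lookup would be a retrieval call against an indexed corpus; the point is simply that the agent receives task-relevant company context, not just the raw instruction.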

AI tackles toil, but gaining trust is the real challenge
At the conference, a standout demo was Uber’s “Minions” system. Engineers submit a prompt via web, Slack, or command line, and it generates code changes and opens pull requests automatically. Ty says:
You give the agent a prompt and expect a pull request as the output. A few minutes later the system notifies you on Slack that the task is complete and the PR is ready to review.
The platform also helps engineers craft better prompts by suggesting improvements when instructions are unclear, increasing the likelihood that the agent will succeed.
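A Minions-style flow could be sketched roughly as follows: the prompt is optionally improved, an agent produces a change, and a notification announces the resulting PR. Everything here is a stubbed assumption for illustration; Uber's actual system is internal and its interfaces were not shown.

```python
def suggest_improvements(prompt: str) -> str:
    """Nudge vague prompts toward something an agent can act on."""
    if len(prompt.split()) < 5:  # crude placeholder for a real clarity check
        return prompt + " (specify the repo, files, and expected behavior)"
    return prompt

def run_agent_task(prompt: str, notify) -> dict:
    """Prompt in, pull request out, with a notification when it's ready."""
    prompt = suggest_improvements(prompt)
    # An agent would generate code changes and open a PR here; we stub the result.
    pr = {"url": "https://example.internal/pr/1", "prompt": prompt}
    notify(f"Task complete, PR ready for review: {pr['url']}")
    return pr

# Usage: the notify callback stands in for a Slack message.
messages = []
pr = run_agent_task("Fix flaky test", messages.append)
```

The key design choice the talk highlighted is asynchrony: the engineer fires off the prompt and moves on, and the notification closes the loop only once there is a reviewable artifact.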
When Uber first rolled out agentic workflows, they found about 70% of submitted tasks were “toil” – repetitive maintenance work developers usually avoid. These predictable tasks are ideal for AI, creating a feedback loop: the more AI handles, the more developers are willing to delegate.
Still, scaling AI isn’t just about technology. Supporting engineers as they adjust from traditional workflows and gain confidence in AI-generated code is an important focus.
Uber found that peer-driven adoption worked better than mandates. Anshu points out:
The most successful tactic has been sharing wins. When engineers see examples from their peers where AI helped them accomplish something impressive, adoption spreads quickly.
But measuring real impact remains tricky.
Uber tracks metrics like developer satisfaction, productivity, and code output, but connecting them to business outcomes is harder. “These are activity metrics, not business outcomes,” Anshu says.
To fix this, Uber is working to track the full development lifecycle (from design to production) to see how AI truly speeds up product delivery.

AI is powerful, but expensive
Cost is also becoming an issue. Running large language models at scale requires expensive compute resources, and AI infrastructure spending has grown dramatically, Anshu explains:
Since 2024, our costs have gone up at least six times. This technology is amazing, but the cost of AI is too high.
That’s why Uber is investing in AI infrastructure that picks the right model for each task, balancing performance and cost. With the AI landscape changing fast, the company continuously evaluates new tools and updates its stack:
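Cost-aware routing of this kind can be sketched simply: pick the cheapest model whose capability meets the task's difficulty. The model names, scores, and costs below are illustrative assumptions, not Uber's actual configuration.

```python
MODELS = [
    # (name, capability score, relative cost per call) — all illustrative
    ("small-internal", 1, 1),
    ("mid-tier",       2, 5),
    ("frontier",       3, 25),
]

def route(task_difficulty: int) -> str:
    """Return the cheapest model capable of handling the task."""
    candidates = [m for m in MODELS if m[1] >= task_difficulty]
    if not candidates:
        raise ValueError("no model can handle this task")
    return min(candidates, key=lambda m: m[2])[0]

simple = route(1)  # routine toil can go to the cheap model
hard = route(3)    # only the most capable model qualifies
```

Real routers weigh more than a single difficulty score (latency, context length, recent eval results), but the economic logic is the same: reserve the expensive frontier models for the tasks that need them.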
What’s successful this month may be overtaken next month. So we constantly test new tools, gather feedback from developers, and adapt.
To conclude: For Uber, generative AI is now central to engineering. And their experience shows that success depends as much on culture, cost control, and ongoing experimentation as on the technology itself.


