Developers estimate that 42% of the code they commit is AI-assisted, and they expect that share to rise to 65% within two years. Yet 96% of developers, according to Sonar's research, say they don't fully trust AI-generated code.

In 2025, Sonar surveyed over 1,100 developers worldwide to see how software engineering is evolving. The findings show AI is now central to development – but it hasn’t made the job easier, only reshaped it.

Developers now write code faster than ever. At the same time, they spend more time questioning, reviewing, and validating what gets shipped. Productivity has increased. Confidence has not kept pace. This tension defines the current “State of Code” research.

AI is no longer experimental – it’s operational

  • 72% of developers who use AI coding tools rely on them daily
  • Developers estimate that 42% of the code they commit is AI-assisted
  • They expect that number to rise to 65% by 2027

AI-assisted coding has shifted from novelty to routine. Among developers who have tried AI tools, most now use them daily. Many estimate that more than 40% of the code they commit includes AI assistance. They expect that percentage to grow significantly in the next two years.

Developers use AI across the full spectrum of work: prototypes, internal tools, customer-facing products, and even business-critical systems. AI no longer supports side experiments. It participates directly in production workflows.

This level of integration signals a structural shift in software engineering. Teams no longer ask whether to use AI. They focus on how to use it responsibly.

Do developers trust AI?

  • 88% use AI for prototypes and proof-of-concept work
  • 83% use it for internal production systems
  • 73% integrate it into customer-facing applications
  • 58% use it in business-critical services

Developers consistently report that AI makes them faster. Most say AI helps them solve problems more efficiently and reduces time spent on repetitive tasks. Many even report increased job satisfaction because they can offload boilerplate and mechanical work.

However, speed does not equal certainty. Nearly all developers (96%) express doubts about the reliability of AI-generated code. They acknowledge that AI often produces output that looks correct at first glance but contains subtle errors or hidden flaws. This creates a trust deficit.

Despite this skepticism, not all developers consistently verify AI output before committing it. The pressure to ship features quickly often outweighs the discipline of thorough review. As a result, teams face a new bottleneck: verification.

Reviewing AI-generated code frequently demands more effort than reviewing human-written code. Developers must reconstruct intent, validate assumptions, and check edge cases without knowing how the model arrived at its solution. AI compresses the time spent writing code but expands the time required to evaluate it.

In this environment, confidence – not velocity – becomes the true measure of engineering maturity.

AI doesn’t remove toil – it changes it

75% believe AI reduces toil. However, time allocation data reveals a more complex picture:

  • Developers still spend roughly 23–25% of their work week on low-value or repetitive tasks
  • This percentage remains consistent regardless of AI usage frequency

One of the promises of AI tools involved reducing developer toil: repetitive, frustrating tasks such as writing documentation, generating tests, or navigating poorly structured codebases. Many developers believe AI reduces certain kinds of toil. They report improvements in documentation quality, test coverage, and refactoring efficiency.

However, the overall proportion of time developers spend on low-value work has not meaningfully decreased. Instead, AI shifts the nature of toil. Developers now spend less time writing boilerplate and more time validating AI suggestions. They spend less time drafting documentation and more time correcting generated code. Traditional frustrations have not disappeared; they have transformed.

This dynamic challenges simplistic narratives about AI productivity gains. AI does not eliminate friction. It redistributes it.

Teams that recognize this reality can adapt workflows accordingly. Teams that assume AI automatically saves time risk underestimating the verification cost.

Technical debt: reduction and acceleration at once

Positive impact:

  • 93% report at least one improvement related to technical debt
  • 57% see better documentation
  • 53% report improved testing or debugging
  • Nearly half report easier refactoring

Negative impact:

  • 88% report at least one negative consequence
  • 53% say AI generates code that appears correct but is unreliable
  • 40% say it produces unnecessary or duplicative code

AI influences technical debt in both directions. On the positive side, developers use AI to modernize legacy code, generate missing tests, improve documentation, and refactor inefficient structures. These activities reduce long-standing debt and improve maintainability.

On the negative side, AI sometimes generates redundant, overly verbose, or structurally weak code. Developers frequently encounter code that appears correct but introduces subtle reliability problems. When teams integrate such code without rigorous review, they create new debt.

AI therefore acts as both a debt reducer and a debt accelerator. Managing this tension requires deliberate governance. Teams must treat AI contributions as first-class code changes subject to the same standards as human-written code. Automated testing, static analysis, and clear architectural principles become even more critical in this environment.

Technical debt already ranks among developers’ top frustrations. Uncontrolled AI usage can amplify that burden rather than alleviate it.

What is shadow AI?

Most used tools:

  • GitHub Copilot (75%)
  • ChatGPT (74%)
  • Claude (48%)
  • Google Codey/Duet (37%)
  • Cursor (31%)

GitHub Copilot and ChatGPT dominate AI-assisted coding usage, but developers rely on a growing ecosystem of tools. Many teams use multiple AI platforms simultaneously, selecting each for specific strengths. This fragmentation creates flexibility but introduces complexity.

A critical risk signal:

  • 35% of developers use AI tools through personal accounts
  • 52% of ChatGPT users access it outside company-managed environments
  • 57% worry about exposing sensitive data

Developers often access AI tools through personal accounts rather than company-approved environments – a practice known as shadow AI. This behavior reflects demand for productivity but introduces governance risks. When developers paste proprietary code into unmanaged AI systems, organizations lose visibility and control.

Security concerns rank among the most significant worries associated with AI adoption. Developers themselves recognize the risk of exposing sensitive data.

Organizations must respond pragmatically. Banning AI rarely works. Developers adopt tools that help them work faster. Instead, companies should provide secure, sanctioned AI environments and define clear usage guidelines. Enablement, not prohibition, produces safer outcomes.

The experience divide

Less experienced developers:

  • Use AI for understanding codebases
  • Rely more heavily on AI for implementation
  • Estimate a higher percentage of AI-assisted code

Senior developers:

  • Use AI more selectively
  • Focus on review, optimization, and refactoring
  • Express higher skepticism

AI does not affect all developers equally. Less-experienced developers tend to adopt new AI tools more aggressively. They use AI to understand unfamiliar codebases, generate implementations, and explore new frameworks. For them, AI functions as both assistant and tutor.

More experienced developers integrate AI differently. They rely on it to review code, optimize performance, and assist with maintenance tasks. Their experience enables them to identify flaws more quickly and apply AI selectively.

Junior developers often trust AI more readily. Senior developers approach it more cautiously. This difference does not reflect competence; it reflects perspective.

High-performing teams combine both approaches. Junior engineers introduce experimentation and speed. Senior engineers provide oversight and architectural discipline. Together, they mitigate risk while capturing gains.

What this means for engineering teams

The State of Code 2025 reveals a profession in transition. AI increases output but raises the bar for validation. It reduces certain manual burdens while introducing new forms of cognitive load. It offers opportunities to address legacy debt while risking new complexity.

The central lesson does not concern speed. It concerns confidence. Developers must treat AI as a powerful collaborator that requires supervision. Engineering leaders must invest in testing infrastructure, review practices, and secure tooling environments. Organizations must acknowledge that productivity improvements depend on disciplined integration, not blind adoption.

Developers no longer compete on how quickly they can type code. They compete on how effectively they can evaluate, refine, and ship trustworthy systems.

The tools have changed. The responsibility has not. Teams that focus on clarity, code quality, and secure AI integration will thrive in this new environment. Teams that chase velocity without verification will accumulate invisible risk.
