Tame Python Chaos With uv – The Superpower Every AI Engineer Needs

Edvin Teskeredžić

Managing Python dependencies has always been a pain point - from slow installs to version mismatches across projects. uv aims to fix that.

If you’re like me, you love writing code – it’s fun and solves real problems. But too often, we end up wrestling with tooling before we can even start, especially in Python with its dreadful dependency management.

Using pip is like assembling IKEA furniture without instructions. Add a single package, and suddenly you’re battling dependency conflicts, version chaos, and cryptic errors – decoding them feels like reading an ancient language.

I can’t even run a simple script without either polluting my global environment or creating a virtual environment for a one-off script I’ll never run again.

It’s ridiculous how something that should simplify our lives makes it more complicated, especially compared to other ecosystems. Having previously worked with Ruby and its beautiful and mature gem system, I felt like switching to Python downgraded my experience.

Simplifying Python projects with uv

As engineers, we desire simple, robust, and predictable solutions. Yet here I was, praying that my environment wouldn’t implode every time I made a tiny change or – God forbid – updated a package version.

After some searching (and reading the excellent book Hypermodern Python Tooling), I found an alternative that solved most of my issues in one fell swoop: uv – a fast, modern, and reliable Python dependency management tool.

It streamlined the internal process of developing Python services within our team and made us more productive. Our builds are faster, and it eliminated all the complexity we had from using many different tools.

In this article, I’ll convince you to leave your comfort zone and switch to using uv as your go-to tool for everything and anything Python.

The hellscape that is Python tooling

In terms of tooling, entering the Python ecosystem while previously working with Ruby (and privately with Go) felt like I replaced a reliable Toyota with a carriage drawn by a malnourished horse.

Virtual environments, pip, and dependency management wasted many engineering hours that we could have spent writing code (and producing actual value!).

Spinning up a new microservice tested your patience, as the tooling tried to ruin your day with a broken package, an unreadable error message, or sluggish build times because the cache didn’t update somewhere.

Let’s talk about real-world examples

Dependency management with pip behaves unpredictably. It can’t handle anything beyond the simplest dependency graph. When you work on legacy code (and you absolutely will), updating it often breaks existing functionality. You can only debug by endlessly examining the output of pip freeze until that command engraves itself in your brain like an ancient chant.

The core problem is that pip doesn’t guarantee that two installs of the same requirements will resolve to the same sub-dependencies. If you want to know more, there’s a great article about this.

Virtual environments devour disk space. The concept sounds great: if project A uses one version of dependency X, and project B uses another, you isolate them in virtual environments so every project keeps its own source of truth. But in practice, you quickly drain your disk space when running many Python services, each with its own environment.

The problem worsens with AI-based projects: in the AI space, we often use PyTorch and that single dependency consumes at least 936MB for the CPU version and 1.8GB for the GPU version (numbers taken from the PyTorch forums, here and here).

Imagine running multiple AI projects – if we keep this up, our disk space disappears.

These are just a few examples of Python’s frustrating tooling. Stick with me – we’re about to fix them all with one tool: uv.

Say Hi to uv!

Now that we’ve established how badly Python needs better tooling, let’s look at uv.

What exactly is uv? According to their website, uv is a high-speed Python package and project manager written in Rust. Integrating uv into your workflow eliminates most of the tools you currently rely on – it directly replaces pip, pip-tools, Poetry, and many more.

The core design philosophy focuses on giving you one Python development tool while also delivering extreme speed (we’ll test this later).

“Alright,” you might ask, “what can this magical tool do?” Great question! Let’s look at some of uv’s most compelling features.

Project dependency management

Managing your Python project with uv is extremely simple because uv provides first-class project support built around declarative manifests and universal lockfiles (similar to those in the JS ecosystem and Poetry).

This setup ensures reproducible installs and fast syncs. You initialize projects with uv init, declare dependencies with uv add (e.g., numpy and langchain), and generate fully pinned lockfiles with uv lock.

After that, subsequent environment creations or updates (“syncs”) use that lockfile to reproduce identical sub-dependency graphs every time. In practice, this stops the endless back-and-forth of “works on my machine” messages via Slack.

Unlike plain pip install, which can produce different sub-dependency graphs on repeated runs (creating inconsistent project environments), uv lock locks everything down – you won’t encounter surprises halfway through a deployment pipeline.
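Assuming uv is installed, the whole flow above comes down to a handful of commands (the project and package names here are just examples):

```shell
# Create a new project with a pyproject.toml manifest
uv init my-service
cd my-service

# Declare dependencies; uv resolves them and updates pyproject.toml
uv add numpy langchain

# Produce a fully pinned, universal lockfile (uv.lock)
uv lock

# Recreate the exact same environment anywhere from the lockfile
uv sync
```

In practice, uv add already updates uv.lock as a side effect; running uv lock explicitly is mainly useful after editing pyproject.toml by hand.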

Running standalone scripts

Whipping up a quick Python script to try something out is amazing – until you need packages outside the standard library. Then the experience quickly turns into venv juggling and ad-hoc pip installs.

With uv, we can run standalone scripts without creating new environments: just run uv run --with [your packages] script.py to auto-install any dependencies into the cache and run your script. These packages are installed on the fly, and you can later lock them with uv lock --script if needed.

You can even embed metadata into your scripts to declare dependencies and required Python versions, like so:

```python
# /// script
# requires-python = ">=3.13"
# dependencies = [
#     "langchain",
#     "typer==0.12.3",
# ]
# ///

import os
import langchain
# ... rest of your script ...
```

You can save this script and run it with uv run script.py. Thanks to uv’s speedy resolver and global cache, the script runs reproducibly and starts in milliseconds after the first run. You can finally spend less time fighting your environment and more time doing what you actually want: writing code.

Disk-Space Efficiency

Uv maintains a global cache of all downloaded packages. Once you fetch any version of a package (even a sub-dependency!), you never need to re-download or duplicate it across projects. By comparison, pip and Poetry copy packages into each virtual environment – an approach that wastes massive amounts of disk space, which matters especially for AI-based projects and workflows.

To get a bit technical, uv keeps a single copy of each exact release in a global cache directory. Any project that needs the same version can access it via a symlink, rather than copying the entire package into the project. Beyond this cache, uv writes project-specific artifacts (e.g., compiled wheels) into a per-project store. Since both stores use content hashes to identify files, uv instantly detects duplicates.

This means that if two projects depend on langchain-0.3.24, uv stores that version only once.

The real-world impact is visible in multi-project AI workflows. As AI engineers, we often maintain and juggle multiple PyTorch-based services.

We drastically reduce total disk usage by sharing the PyTorch wheel (approximately 1.8 GB) across projects. As the disk footprint from dependencies shrinks, scaling up microservice-based architectures becomes more manageable.

This improvement is crucial for CI pipelines and developer workstations, which can run multiple microservices simultaneously.
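The content-addressing idea behind this can be sketched in a few lines of Python. This is a toy illustration under our own simplifying assumptions – uv’s actual cache layout and linking strategy differ – but the core property is the same: identical bytes are stored exactly once, and projects reference the shared copy.

```python
import hashlib
from pathlib import Path

def store(cache_dir: Path, project_dir: Path, wheel: Path) -> Path:
    """Link a package file into a project, deduplicating by content hash.

    Toy sketch of content-addressed storage, not uv's real implementation.
    """
    data = wheel.read_bytes()
    digest = hashlib.sha256(data).hexdigest()  # content hash is the identity
    cached = cache_dir / digest                # one copy per unique content
    if not cached.exists():
        cached.write_bytes(data)
    link = project_dir / wheel.name
    link.symlink_to(cached)                    # project points at shared copy
    return cached
```

Storing the same wheel from two different projects yields a single file in the cache, with both project directories merely linking to it.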

Speed!

One of the main drawbacks of working in the AI space is the sheer size of all the packages and tools that a modern AI service uses. Installing all those dependencies takes time, slows down our CI/CD pipeline, and hampers our developer experience.

Naturally, install speed becomes crucial in such scenarios. The makers of uv provide speed comparison benchmarks, testing their tool against pip and Poetry.

Across all scenarios (warm/cold installs and warm/cold resolution), uv outperforms pip and Poetry by orders of magnitude.

These results come directly from uv’s official benchmarks, which install Trio’s dependencies. Let’s see how uv’s benefits impact our developer workflow.

Comparing uv to other tools when building an AI microservice

To compare uv with other tools (in this case, pip and poetry), we’ll walk through the common steps of building a Python application and see how much easier it is to use uv. The typical steps for creating a Python app include:

  • Initializing a project
  • Adding dependencies
  • Locking/freezing your dependencies
  • Installing/syncing packages
  • Running scripts and notebooks (handy for AI/ML engineers)

To provide an at-a-glance comparison, we’ve summarized these steps in a table:

| Step | pip + venv | Poetry | uv |
| --- | --- | --- | --- |
| Initialize a project | python -m venv .venv | poetry new my-app | uv init |
| Add a dependency | pip install numpy, then edit requirements.txt | poetry add numpy | uv add numpy |
| Lock dependencies | pip freeze > requirements.txt | poetry lock | uv lock |
| Install/sync packages | pip install -r requirements.txt | poetry install | uv sync |
| Run a script | activate the venv, then python script.py | poetry run python script.py | uv run script.py |

Testing Install Speed on a “real” AI microservice

To compare the experience of using uv with other tools like pip and poetry, let’s build a basic AI microservice. We’ll start with these dependencies:

```
fastapi==0.110.2
uvicorn[standard]==0.29.0

# AI/GenAI/LLM related
transformers==4.40.1
torch==2.2.2
scipy==1.13.0
scikit-learn==1.5.0
numpy==1.26.4
sentence-transformers==2.7.0
langchain==0.1.16

# Pydantic for data validation
pydantic==2.7.1

# Async tools
httpx==0.27.0

# Background tasks and scheduling
celery==5.4.0
redis==5.0.4

# Observability (logging, tracing, monitoring)
opentelemetry-api==1.25.0
opentelemetry-sdk==1.25.0

# Environment variables
python-dotenv==1.0.1

# ORM
sqlalchemy==2.0.30
```


This setup doesn’t mimic a real AI microservice; it just represents what you’d commonly find in one. We’ll create this as a Python project using three different build tools: pip, Poetry, and uv. Then, we’ll benchmark how fast these tools install the dependencies.

We’ll use Python 3.12 for our virtual environment, which should not affect the results. In addition, we run this test on Ubuntu 22.04.5 LTS. For all tests, we perform a cold installation (no cache), followed by a warm installation (with cache).

Keep in mind that cold install times depend heavily on your network speed and bandwidth, and your ISP can throttle TCP, affecting the results.
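For transparency, the measurements can be reproduced with something along these lines, assuming all three tools are installed (the exact cache-clearing commands may vary between versions):

```shell
# Cold installs: wipe each tool's cache first, then time the install
pip cache purge
time pip install -r requirements.txt

poetry cache clear pypi --all -n
time poetry install

uv cache clean
time uv pip install -r requirements.txt

# Warm installs: repeat the same installs in a fresh environment,
# this time with each tool's cache already populated
```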

Why uv wins?

Swapping between multiple CLIs introduces cognitive overload – uv ends that. With pip and venv, you constantly toggle between Python’s venv module, manual requirements files, and pip installs. Poetry tries to put everything under its own umbrella, but still makes you chain together poetry install, poetry lock, and poetry run.

By contrast, uv gives you a single tool for all your needs and covers every workflow step. It just works. Under the hood, uv’s dependency resolver and global cache guarantee repeatable installs without spinning wheels or bloated environments.

Every project shares the same content-address store, which prevents duplicate downloads and wasted disk space.

Since uv was built from the ground up with modern workflows in mind, it provides first-class support for Python version management (Poetry requires a separate tool, pyenv, for this), workspaces (missing from Poetry), script metadata that turns Python files into portable executables (not supported by Poetry), and even built-in publishing capabilities.
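A few of those extras in action (the version numbers are illustrative):

```shell
# Built-in Python version management – no pyenv required
uv python install 3.12
uv python pin 3.12

# Script metadata turns a plain .py file into a portable executable
uv run script.py

# Build and publish a package, no extra tooling needed
uv build
uv publish
```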

In short, uv gives you a simple, lean, and fast pipeline while simultaneously replacing the old ways of dependency management in Python.

Uv in production – benefits and lessons learned

After careful consideration, we slowly started adopting uv as our go-to tool for writing new Python microservices, especially in AI. By switching to uv, we consolidated our workflow under one tool and leveraged its many benefits. The results speak for themselves: consistent, millisecond-scale installs, smaller container layers, and zero “it worked on my machine” issues across all environments.

The benefits are measurable, too. Our average CI/CD pipeline build time dropped by half for services we migrated to uv. Running multiple services locally became easier because uv provides a global cache and symlinks to dependencies, greatly reducing disk usage.

In addition, uv.lock ensures every branch and pipeline uses the same dependency graph, eliminating surprise version bumps or broken sub-dependencies.

Finally, our cognitive workload as developers decreased significantly as we removed multiple tools from our toolchain, dramatically reducing maintenance overhead.

After using uv for some time, we also discovered a few valuable tricks:

  • Cache Management – Periodically running uv cache prune keeps the cache directory clean by removing all unused entries.
  • Accelerator Configuration – Following uv’s PyTorch integration guide lets us pin CPU vs. CUDA builds per service declaratively, avoiding manual URL overrides. This approach allows us to install CUDA-enabled PyTorch on Linux, CPU-only builds on macOS, and even include custom flags to enable or disable GPU builds when needed.
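For illustration, a per-platform accelerator setup along the lines of uv’s PyTorch guide looks roughly like this in pyproject.toml (the index names are our own, and you’d adjust the CUDA variant to your hardware):

```toml
[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", marker = "sys_platform == 'darwin'" },
    { index = "pytorch-cu121", marker = "sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu121"
url = "https://download.pytorch.org/whl/cu121"
explicit = true
```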

You should give uv a chance

As we’ve seen, traditional Python tooling too often slows us down, wastes space, and forces us to hope for reproducible builds. Uv cuts through that noise with a single, fast binary that handles every step of your workflow – from project initialization to publishing.

Uv runs much faster than its competitors, and its universal lockfile guarantees deterministic builds that work everywhere, every time.

Whether you build heavy AI pipelines, ship microservices, or maintain a monorepo filled with standalone scripts and notebooks, uv brings consistency, performance, and simplicity to your workflow.

Stop wrestling with half-baked tools and give uv a spin today – I promise it will turn your dependency hellscape into a clean, reliable experience!
