The Python Developer's Guide to Surviving Version Mismatches


A pragmatic, systems-level breakdown of Python versioning chaos and the tooling, patterns, and discipline needed to eliminate it from your stack.

by Kseniia Smolnikova

ML Engineer

May 2026
41 min read


There is a category of bug that experienced Python engineers dread more than logic errors or race conditions. It does not show up in unit tests. It does not trigger linters. It manifests at 2 AM during a deployment, or silently corrupts data for weeks, or causes a perfectly green CI pipeline to explode on a colleague's machine running a subtly different Python minor version. It is the version mismatch — and it is far more architecturally complex than most teams treat it.

The root cause is rarely carelessness. It is the cumulative result of Python's release cadence, the CPython ABI boundary, the semantic ambiguity of dependency specifiers, and the fundamental fact that "Python" is not one thing. It is a runtime, a bytecode format, a C extension ABI, a standard library, and an ecosystem — and each of those layers versions independently. Treating the problem as "just pin your deps" is the engineering equivalent of putting a bandage over a structural crack.

This guide is written for engineers at every level who have been burned by version issues. We will go deep but keep examples concrete: CPython versioning internals, virtual environment architecture, multi-version orchestration with pyenv, deterministic lockfile strategies with Poetry and uv, container-layer discipline, and CI matrix testing patterns that catch drift before it reaches production.

Why Python Versioning Is Structurally Harder Than It Looks

Python follows a well-defined release cycle, but the practical implications of that cycle are underappreciated. CPython ships a new minor version (e.g., 3.12, 3.13) roughly every October. Each minor version receives bugfix releases for approximately 18 months, then security-only patches through its fifth year. This means at any given moment, the "supported" Python universe spans three to four active minor versions simultaneously.

The dangerous assumption engineers make is that minor versions are safely interchangeable. They are not — and the reasons are layered.

The CPython ABI Boundary

The most treacherous layer is the C extension ABI. When a package like numpy, pydantic-core, or cryptography ships compiled wheels, those wheels encode the CPython version, the ABI tag, and the platform into their filename:

numpy-1.26.4-cp312-cp312-linux_x86_64.whl
             ^^^^^ ^^^^^ ^^^^^^^^^^^^
             |     |     platform tag
             |     ABI tag: cp312
             Python tag: CPython 3.12

Think of it like a plug and socket. A wheel compiled for Python 3.12 (cp312) is shaped for that specific socket. Plugging it into Python 3.11 or 3.13 either fails immediately or causes unpredictable behavior at runtime.

A concrete example: you install your project on your laptop running Python 3.12. Everything works. Your teammate clones the repo and runs it on Python 3.11. They get:

ImportError: numpy/core/_multiarray_umath.cpython-312-x86_64-linux-gnu.so:
cannot open shared object file: No such file or directory

That error is not about numpy being missing. It is about the compiled binary being built for the wrong Python version.
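These tag triples can also be inspected programmatically. A small sketch using the third-party packaging library (pip install packaging; it is the same project pip vendors internally) to split a wheel filename into its name, version, and tags; the manylinux platform tag here is the standard form of the simplified filename shown above:

```python
# Requires the third-party 'packaging' library (pip install packaging).
from packaging.utils import parse_wheel_filename

name, version, build, tags = parse_wheel_filename(
    "numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.whl"
)
print(name, version)  # numpy 1.26.4

for tag in sorted(tags, key=str):
    # interpreter / abi / platform are the three axes pip checks
    # against the running interpreter before it will install a wheel
    print(tag.interpreter, tag.abi, tag.platform)
```

If any of the three axes fails to match the running interpreter, pip skips the wheel and either picks another or falls back to a source build.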

Bytecode Cache Poisoning

Python compiles your .py files into bytecode and stores it in __pycache__ folders as .pyc files. The bytecode format changes between Python minor versions. Each .pyc file carries a magic number — a small integer that identifies which CPython version produced it. Here is the simple version of the problem:

  • Your CI pipeline runs on Python 3.11, builds the project, and bakes __pycache__ into a Docker image.
  • Your production server runs Python 3.12. Python notices the magic number mismatch, discards the cache, and recompiles everything at startup — adding latency to every cold start.

In the worst case, if cache invalidation logic has a bug (it has happened in niche scenarios), you get stale bytecode executing under the wrong interpreter.

Never bake __pycache__ into Docker images, and always add __pycache__/ and *.pyc to your .dockerignore.
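The magic-number mechanism is easy to observe directly. A minimal sketch using only the standard library: compile a trivial module and compare the first four bytes of the resulting .pyc against the running interpreter's own magic number, exposed as importlib.util.MAGIC_NUMBER.

```python
import importlib.util
import py_compile
import tempfile
from pathlib import Path

def pyc_matches_interpreter(pyc_path: str) -> bool:
    """True if the .pyc was produced by the running interpreter.

    Every .pyc begins with a 4-byte magic number identifying the
    CPython version that compiled it.
    """
    return Path(pyc_path).read_bytes()[:4] == importlib.util.MAGIC_NUMBER

# Demo: compile a trivial module and verify the header matches.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "mod.py"
    src.write_text("x = 1\n")
    pyc = py_compile.compile(str(src), cfile=str(src) + "c")
    print(pyc_matches_interpreter(pyc))  # True
```

A cache produced by a different minor version fails this check, which is exactly what triggers the discard-and-recompile behavior at startup.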

The sys.version_info Lie

Many engineers do version checks like this:

import sys

if sys.version_info >= (3, 11):
    print("Good to go!")

This tells you the interpreter version. It tells you nothing about:

  • Whether the C extensions already loaded were compiled for this exact version;

  • Whether your virtualenv's packages were installed using this interpreter or a sibling version;

  • Whether a patch release (e.g., 3.11.1 → 3.11.9) changed subtle behavior in ssl, asyncio, or datetime.

Patch releases are supposed to be safe. Occasionally they are not. Python 3.11.0 had a known regression in asyncio.timeout that was fixed in 3.11.1. If your team was pinned to 3.11.0 and you relied on that behavior, upgrading silently changed how your timeouts worked.
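For slightly more honest introspection, a sketch that reports the patch-level version alongside the extension suffix this interpreter expects, which is the ABI contract compiled wheels must satisfy (the values in the comments are illustrative and vary by platform):

```python
import sys
import sysconfig

# Exact interpreter version, down to the patch release
print(".".join(map(str, sys.version_info[:3])))  # e.g. 3.12.4

# The filename suffix a compiled extension must carry to load into
# THIS interpreter; it encodes the CPython version and platform.
ext_suffix = sysconfig.get_config_var("EXT_SUFFIX")
print(ext_suffix)  # e.g. .cpython-312-x86_64-linux-gnu.so

# The implementation matters too: CPython wheels do not load on PyPy.
print(sys.implementation.name)  # e.g. cpython
```

None of this proves the packages in site-packages were installed by this interpreter, but it gives you the exact values to compare against the tags in a suspect wheel or .so filename.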

The Anatomy Of A Python Environment

Before solving the problem, you need a precise mental model of what a "Python environment" actually is. It is not a single thing. It is a composition of at least five distinct layers, and each one versions independently.

The System OS and Hardware is the foundation everything else sits on. It determines which shared native libraries are available — things like libssl, libz, and libc. These are not Python packages. They are system-level binaries that compiled Python extensions link against at runtime. You do not manage them with pip. They are installed by your OS package manager (apt, brew, yum) and they change when your OS updates, independently of anything you do inside Python.

The CPython Interpreter Binary lives on top of the OS. This is the python3 executable itself — the program that reads your .py files, compiles them to bytecode, and executes them. This is what pyenv manages. Its version is what you see when you run python --version. A critical point: there is not one Python binary on a typical developer machine. There are often three or four — the system Python installed by the OS, one or more versions installed by pyenv, and possibly one inside a conda environment. The shim mechanism in pyenv ensures the right one is called for each project.

The Standard Library ships bundled with the interpreter binary and versions with it. When you upgrade from Python 3.11 to 3.12, you do not just get a new interpreter — you get a new stdlib. Modules get added (tomllib arrived in 3.11), deprecated (distutils was removed in 3.12), and behaviorally changed (datetime.fromisoformat was expanded in 3.11). Code that worked perfectly against the 3.10 stdlib can fail or silently misbehave against the 3.12 stdlib with no installation step in between.

The Virtual Environment and site-packages is the layer most engineers interact with daily. When you run python -m venv .venv or uv sync, Python creates an isolated directory that holds its own copy of pip and a site-packages folder for third-party packages. The virtual environment is permanently bound to the interpreter that created it. If you create a virtualenv with Python 3.11 and then try to use it with Python 3.12, you will get import errors, ABI mismatches, or silent behavioral bugs depending on which packages are installed. The binding is physical — the virtualenv contains symlinks or copies pointing to the specific interpreter binary used at creation time.
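The binding is observable from inside Python. A small sketch: in a venv, sys.prefix points inside the environment while sys.base_prefix still points at the creating interpreter, and pyvenv.cfg records that interpreter on disk.

```python
import sys
from pathlib import Path

def in_virtualenv() -> bool:
    # Inside a venv the two prefixes diverge; outside, they are equal.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())

# The binding is also written to disk: pyvenv.cfg at the venv root
# records the creating interpreter's home directory and version.
cfg = Path(sys.prefix) / "pyvenv.cfg"
if cfg.exists():
    print(cfg.read_text())
```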

C Extension Wheels are the most dangerous layer precisely because they look like ordinary Python packages but behave like native binaries. When you install numpy, pydantic-core, or cryptography, pip downloads a pre-compiled .whl file containing a .so (on Linux/macOS) or .pyd (on Windows) binary. That binary was compiled against a specific CPython version and a specific set of system libraries. It encodes all of that in its filename. If any of those assumptions are violated at runtime — wrong Python version, wrong libssl, wrong glibc — the failure can be an immediate ImportError, a crash deep inside a native call, or in the worst case, silently wrong output.

The key insight is that a crack at any layer propagates downward. A libssl version incompatibility beneath cryptography will not surface at install time — it will surface as an ssl.SSLError in production when a user triggers a TLS handshake. A stdlib behavioral change in datetime will not be caught by a test suite that does not explicitly test boundary inputs. This is why fixing version mismatches requires knowing which layer the mismatch lives in before reaching for a solution.

Multi-Version Management with pyenv

pyenv is the standard tool for managing multiple CPython installations side by side on a developer machine. It works by placing a shim directory at the front of your $PATH. Every time you run python or pip, the shim intercepts the call and dispatches it to the correct interpreter based on a simple resolution chain:

  1. The PYENV_VERSION environment variable (if set);
  2. A .python-version file in your current directory (or any parent directory);
  3. The global default set via pyenv global.

Installing and Pinning

# Install two Python versions side by side
pyenv install 3.11.9
pyenv install 3.12.4

# Check what is installed
pyenv versions
# * system
#   3.11.9
#   3.12.4

# Go into your project and pin it to 3.12.4
cd ~/projects/my-api
pyenv local 3.12.4

# This creates a .python-version file in the directory
cat .python-version
# 3.12.4

# Confirm the right interpreter is active
python --version
# Python 3.12.4

The .python-version file must be committed to version control. This is the single most impactful thing you can do for your team. Without it, every developer and every CI node uses whatever Python version it happens to have installed — which drifts silently over time.
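A minimal pre-flight guard (sketch) that fails fast when the active interpreter disagrees with the committed pin. For demonstration purposes this script generates the pin file itself; in a real repository, .python-version is committed, never generated.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demonstration only: pin the directory to the interpreter we are
# running. In a real repo this file comes from version control.
python3 --version | awk '{print $2}' > .python-version

expected="$(cat .python-version)"
actual="$(python3 --version | awk '{print $2}')"

if [ "$expected" != "$actual" ]; then
  echo "Python mismatch: expected $expected, got $actual" >&2
  exit 1
fi
echo "Python version OK: $actual"
```

Wiring a check like this into a Makefile target or CI step catches drift before a single package is installed.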

A Concrete Before and After

Before pyenv (the broken state):

  • Developer A has Python 3.11.2 installed via Homebrew;
  • Developer B has Python 3.12.1 installed from python.org;
  • CI server has Python 3.10.12 installed by the platform image;
  • Production has Python 3.11.9 installed by the ops team;
  • Every environment behaves slightly differently. Nobody knows why.

After pyenv with a committed .python-version:

  • Every developer, every CI node, and your Docker build stage all use exactly 3.12.4;
  • Version drift is impossible unless someone explicitly changes the .python-version file;
  • That change is visible in Git history, reviewable in a PR, and intentional.

Lockfile Architecture: Poetry vs uv vs pip-tools

A lockfile records the exact resolved package graph — not your declared constraints, but the actual packages and versions selected after solving the full transitive dependency tree. Without a lockfile, pip install -r requirements.txt is non-deterministic over time because any unpinned transitive dependency can release a new version between two installs. Here is the simplest possible demonstration of why this matters:

# requirements.txt (no lockfile)
requests>=2.28.0

On Monday, pip install resolves requests==2.28.2 and certifi==2023.7.22. On Friday, certifi releases 2024.2.2. Your colleague runs pip install and gets a different certifi. Most of the time nothing breaks. Once in a while, something does — and nobody understands why two developers with the same requirements.txt have different behavior.

A lockfile eliminates this entirely by recording:

# poetry.lock (simplified)
[[package]]
name = "certifi"
version = "2023.7.22"
hash = "sha256:abc123..."

Now every install, everywhere, always gets certifi==2023.7.22 — until you deliberately run poetry update.

| Tool | Lockfile Format | Hash Verification | Speed | Best For |
|---|---|---|---|---|
| pip + pip-tools | requirements.txt (pinned) | Optional (--generate-hashes) | Slow | Legacy projects, minimal tooling overhead |
| Poetry | poetry.lock (TOML) | Yes, always | Moderate | Application projects, monorepos, publishing |
| uv | uv.lock (TOML) | Yes, always | 10–100x faster than pip | Performance-critical CI, large dependency trees |
| Pipenv | Pipfile.lock (JSON) | Yes | Slow | Legacy; largely superseded by Poetry and uv |
| conda | environment.yml + explicit list | No (by default) | Moderate with libmamba | Scientific computing, cross-language deps (R, C libs) |


Getting Started with Poetry

# Install Poetry
curl -sSL https://install.python-poetry.org | python3 -

# Create a new project
poetry new my-project
cd my-project

# Add a dependency (updates pyproject.toml and poetry.lock)
poetry add requests

# Install from lockfile (what your teammates and CI should run)
poetry install

# Update a specific package and regenerate the lock
poetry update requests

The golden rule: always commit poetry.lock. The lockfile is not a build artifact — it is source code. If it is in .gitignore, remove it today.

Getting Started with uv

# Install uv
pip install uv

# Initialize a project
uv init my-project
cd my-project

# Add a dependency
uv add httpx

# Install from lockfile — refuses to update it
uv sync --frozen

# Run your code inside the managed environment
uv run python main.py
uv run pytest

The --frozen flag is critical for CI and deployment. It makes uv sync fail if the lockfile is out of sync with pyproject.toml, rather than silently updating it. This means your production installs are always exactly what was tested.

Version Specifiers and the Constraint Spectrum

How you write version constraints in pyproject.toml determines how much freedom the resolver has — and how much risk you carry when dependencies release new versions.

[tool.poetry.dependencies]

# Exact pin — zero flexibility, maximum reproducibility
# Good for: internal tools you fully control
requests = "2.31.0"

# Compatible release — patch updates allowed only
# ~2.31.0 means >= 2.31.0 and < 2.32.0
requests = "~2.31.0"

# Caret (Poetry default) — minor and patch updates allowed, no major
# ^2.31.0 means >= 2.31.0 and < 3.0.0
requests = "^2.31.0"

# Inequality — maximum flexibility, highest drift risk
# Good for: libraries you publish to PyPI
requests = ">=2.28.0"

A simple mental model for choosing:

  • Publishing a library to PyPI? Use ^ or >= so you do not conflict with your users' other dependencies;
  • Deploying an application? Use ^ in pyproject.toml and rely on the lockfile to pin exact versions. The constraint is just a boundary; the lockfile is the truth;
  • Never use bare >= in an application without a lockfile. That is how you wake up to a broken deployment on a Monday morning because a dependency released a major version over the weekend.
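The expansions above can be checked directly with the third-party packaging library (pip install packaging), which implements PEP 440 specifier matching. A sketch:

```python
# Requires the third-party 'packaging' library (pip install packaging).
from packaging.specifiers import SpecifierSet
from packaging.version import Version

caret = SpecifierSet(">=2.31.0,<3.0.0")   # what Poetry expands ^2.31.0 to
tilde = SpecifierSet(">=2.31.0,<2.32.0")  # what ~2.31.0 expands to

print(Version("2.31.5") in caret, Version("2.31.5") in tilde)  # True True
print(Version("2.32.1") in caret, Version("2.32.1") in tilde)  # True False
print(Version("3.0.0") in caret)                               # False
```

Evaluating a candidate version against a SpecifierSet this way is a cheap sanity check when you are debugging why a resolver picked (or refused) a particular release.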

Container Layer Discipline

Containerization solves the "it works on my machine" problem — but only if you treat your FROM line with the same rigor you apply to your lockfile.

The Floating Tag Problem

# WRONG: resolves to a different image every time Docker Hub updates it
FROM python:3.12

# BETTER: pins the exact patch version and base OS
FROM python:3.12.4-slim-bookworm

# BEST: pins to an immutable digest — this image can never change
FROM python:3.12.4-slim-bookworm@sha256:a7b3c1...

Here is what goes wrong with FROM python:3.12:

  1. You build your image on March 1st. It works perfectly;
  2. The Docker official image maintainers push an updated python:3.12 image on April 15th, including an OpenSSL upgrade;
  3. Your CI rebuilds the image on April 16th. The new OpenSSL version is incompatible with how a C extension links against it;
  4. Your build breaks — and nothing in your application code changed.

Pinning to a digest makes the base image immutable. Updating it becomes an explicit, intentional action that goes through code review.

A Minimal but Correct Dockerfile

# Stage 1: Install dependencies
FROM python:3.12.4-slim-bookworm AS builder

WORKDIR /app

# Install build tools needed for C extensions
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc libssl-dev && rm -rf /var/lib/apt/lists/*

# Copy lockfile first — Docker caches this layer until lockfile changes
COPY pyproject.toml uv.lock ./
RUN pip install uv && uv sync --frozen --no-dev

# Stage 2: Lean runtime image
FROM python:3.12.4-slim-bookworm AS runtime

WORKDIR /app

# Copy only the resolved virtualenv from the build stage
COPY --from=builder /app/.venv /app/.venv
COPY src/ ./src/

# Use the virtualenv's Python
ENV PATH="/app/.venv/bin:$PATH"

ENTRYPOINT ["python", "-m", "myapp"]

And the matching .dockerignore:

__pycache__/
*.pyc
*.pyo
.venv/
.python-version
.git/

This pattern gives you: a lean image (no gcc in production), ABI consistency (build and runtime use the same Debian base), and no stale bytecode cache in the image.

CI Matrix Testing to Catch Version Drift Early

Local tooling discipline is not enough on its own. You also need CI that automatically tests your code across every Python version you claim to support — before a mismatch reaches production. Here is a practical GitHub Actions configuration:

name: Python Version Matrix

on: [push, pull_request]

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-22.04, macos-13, windows-2022]
        python-version: ["3.11.9", "3.12.4", "3.13.0"]

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install uv
        run: pip install uv

      - name: Install dependencies from lockfile
        run: uv sync --frozen

      - name: Run tests
        run: uv run pytest --tb=short -q

Two settings worth explaining:

  • fail-fast: false — by default, GitHub Actions cancels the entire matrix when one job fails. You want to see all failures, not just the first one. A failure on Python 3.13 on Windows might be completely different from a failure on Python 3.11 on Linux.
  • Pinned python-version — use "3.12.4", not "3.12". If you give setup-python only a minor version, it resolves to the latest available patch release, which can change between runs without you noticing.

A Taxonomy Of Version Mismatch Failure Modes

When a version mismatch hits, knowing the category immediately points you to the right fix. Most failures fall into four buckets.

  • Import Time Failures are the easiest to diagnose because they are loud and immediate. They happen the moment Python tries to load a module and discovers the compiled binary does not match the running interpreter. Two common variants: an ImportError caused by an ABI tag mismatch (the .so file was compiled for a different Python version), and a ModuleNotFoundError caused by a stdlib module being renamed or removed between versions.

  • Runtime Behavioral Failures are trickier. The import succeeds, but the code produces different results depending on which Python version is running. This includes TypeError caused by a changed default argument in a stdlib function, ValueError caused by stricter input parsing introduced in a newer version, and subtle asyncio behavior deltas that only surface under load or in specific event loop configurations.

  • Performance Regressions do not break correctness but silently degrade throughput or startup time. The most common cause is __pycache__ invalidation: if your bytecode cache was built under a different Python version than the one running your application, Python discards and regenerates it on every cold start. In Python 3.13, the experimental JIT and free-threaded builds can also alter performance characteristics in ways that are invisible to functional tests.

  • Silent Data Corruption is the most dangerous category. No exception is raised. No test fails. The output is simply wrong. A well-known historical example is datetime.fromisoformat(), which accepted a narrow subset of ISO 8601 strings in Python 3.10 and a much broader range of ISO 8601 formats starting in Python 3.11. Code that relied on 3.10 silently rejecting certain date strings would accept them without complaint under 3.11, producing incorrect downstream behavior. Another classic example is dict ordering: before Python 3.7, dictionary iteration order was undefined. Code that accidentally depended on a specific hash-based ordering would produce different results on different versions with no error of any kind.

Quick Reference by Failure Category

| Failure Category | Noise Level | When It Appears | Primary Cause | First Fix to Try |
|---|---|---|---|---|
| Import Time | Loud — immediate exception | On import statement | ABI tag mismatch or missing stdlib module | Recreate virtualenv with correct Python version |
| Runtime Behavioral | Medium — wrong output or exception in tests | During test run or under specific inputs | stdlib API change between minor versions | Check Python changelog for the versions involved |
| Performance Regression | Silent — no errors, slower execution | Under load or on cold start | Stale __pycache__ or JIT tier change | Add __pycache__/ to .dockerignore, re-benchmark |
| Silent Data Corruption | Silent — wrong output, no exception | In production, often weeks later | Behavioral change in parsing or ordering functions | Add explicit boundary-condition tests for stdlib functions |

How to Diagnose by Symptom

If you see an immediate crash on startup, start with the import time category. Check whether the error message contains a .so filename with a CPython version tag (like cpython-312). If the version in the filename does not match your running interpreter, your virtualenv was built against the wrong Python. Delete it and recreate it using the correct interpreter.

If your tests pass locally but fail in CI, start with the runtime behavioral category. Compare python --version output between your local machine and the CI runner. If they differ by even a patch version, check the Python changelog for that range. Pay particular attention to changes in datetime, ssl, urllib, json, and asyncio, which have historically been the most common sources of silent behavioral drift.

If your application is slower than expected after a deployment, start with the performance regression category. Check whether __pycache__ was baked into your Docker image and whether the Python version in your build stage matches the one in your runtime stage. A mismatch here forces bytecode regeneration on every container startup.

If your data looks wrong but nothing raises an exception, start with the silent data corruption category. This is the hardest to debug because there is no traceback to follow. The approach is to bisect: identify the smallest input that produces wrong output, then check whether the behavior of the specific stdlib function involved changed between the Python versions in your environment. The official CPython changelog and the What's New documents for each minor version are the authoritative reference.


Import Time Failures

These are the easiest to diagnose because they are loud and immediate.

ImportError: numpy/core/_multiarray_umath.cpython-312-x86_64-linux-gnu.so:
cannot open shared object file

What it means: The installed wheel was compiled for Python 3.12 but you are running Python 3.11 (or vice versa).

Fix: Ensure your virtualenv was created with the same Python version you are running. With pyenv, pyenv local 3.12.4 followed by python -m venv .venv guarantees this. Delete and recreate the virtualenv if the version changed.
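To confirm from a shell which interpreter a virtualenv is bound to, inspect its pyvenv.cfg. A sketch using a throwaway venv (the /tmp path is illustrative):

```shell
# Create a throwaway venv and inspect which interpreter it is bound to.
python3 -m venv /tmp/demo-venv

# pyvenv.cfg records the creating interpreter's home and version.
cat /tmp/demo-venv/pyvenv.cfg

# The venv's python is a symlink or copy of that exact interpreter.
/tmp/demo-venv/bin/python --version
```

If the version printed here disagrees with the interpreter you intend to run, that mismatch is the source of the ImportError, and recreating the venv is the fix.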

Runtime Behavioral Failures

These are trickier. The import succeeds, but behavior differs.

A concrete example with datetime:

# Python 3.10 behavior
from datetime import datetime
datetime.fromisoformat("2023-01-01T12:00:00+00:00")
# Works fine

datetime.fromisoformat("20230101T120000Z")
# Python 3.10: raises ValueError — format not supported
# Python 3.11: works fine — much broader ISO 8601 support added

# If your code RELIES on 3.10 rejecting that second format,
# upgrading to 3.11 silently breaks your validation logic

Fix: Add explicit tests for boundary behavior, especially around stdlib parsing functions. Never assume "stricter is safer" — the direction of change can go either way between versions.
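One way to pin boundary behavior is a test that asserts the exact fromisoformat contract for the running interpreter, so an upgrade that shifts the contract fails loudly in CI instead of silently in production. A sketch:

```python
import sys
from datetime import datetime

def compact_iso_accepted() -> bool:
    """Probe whether this interpreter parses compact ISO 8601 strings."""
    try:
        datetime.fromisoformat("20230101T120000")
        return True
    except ValueError:
        return False

# Compact ISO strings parse on 3.11+ and raise ValueError on 3.10
# and earlier; assert the behavior matches the documented contract
# for whichever interpreter is running.
assert compact_iso_accepted() == (sys.version_info >= (3, 11))
print("boundary behavior matches this interpreter's contract")
```

Dropping a handful of probes like this into your test suite turns silent behavioral drift into an explicit, reviewable failure.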

Silent Data Corruption

The most dangerous category. No exception is raised. No test fails. The output is simply wrong.

A historical example with dictionary ordering:

# Pre-Python 3.7: dict iteration order was undefined
# From Python 3.7: dicts are guaranteed insertion-ordered

config = {"b": 2, "a": 1}
first_key = next(iter(config))

# On Python 3.6 (CPython): might be "b" or "a" depending on hash
# On Python 3.7+: always "b" (insertion order)

# Code that accidentally relied on alphabetical order from hash behavior
# would produce different results depending on the Python version

Fix: Never rely on implicit ordering. If order matters, sort explicitly or depend only on the documented insertion-order guarantee (Python 3.7+), and write tests that assert the specific output order.
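A minimal sketch contrasting implicit insertion order with an explicit sort:

```python
config = {"b": 2, "a": 1}

# Guaranteed insertion order on Python 3.7+ (undefined on 3.6 and earlier)
insertion_order = list(config)

# Deterministic on every version, regardless of insertion history
sorted_order = sorted(config)

print(insertion_order)  # ['b', 'a']
print(sorted_order)     # ['a', 'b']
```

When downstream code (serialization, hashing, diffing) depends on order, the explicit sort is the version-proof choice.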

The Python 3.13 Free-Threaded Mode Consideration

Python 3.13 introduced an experimental build that disables the Global Interpreter Lock (GIL). It ships as a separate binary (python3.13t) with a distinct ABI tag (cp313t).

This matters for version management because:

  • C extensions compiled for standard Python 3.13 (cp313) do not work under the free-threaded build (cp313t);
  • Thread-unsafe C extensions can corrupt data or crash under cp313t even if they load successfully;
  • Your CI matrix, Docker base images, and wheel selection logic all need a third axis: standard, free-threaded, or both.

For most teams, this is not an immediate concern — the free-threaded build is experimental and opt-in. But if you maintain a library with C extensions, now is the time to add cp313t to your wheel build matrix and test thread safety explicitly.
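If you need to know at runtime which build you are on, a small detection sketch: the Py_GIL_DISABLED config var is set in free-threaded builds, and sys._is_gil_enabled() exists on 3.13+; on older or standard builds this code safely reports False and skips the runtime check.

```python
import sys
import sysconfig

# Py_GIL_DISABLED is set in free-threaded (3.13t) builds; on standard
# or older builds the config var is absent, so bool(None) gives False.
free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
print("free-threaded build:", free_threaded)

# On 3.13+, sys._is_gil_enabled() reports the runtime GIL state
# (it can differ from the build flag when re-enabled via PYTHON_GIL=1).
if hasattr(sys, "_is_gil_enabled"):
    print("GIL currently enabled:", sys._is_gil_enabled())
```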

Conclusion

Python version mismatches are not random bad luck. They are the predictable output of treating a multi-layer versioned system as if it were a single monolithic runtime. Every layer — the interpreter binary, the ABI, the bytecode cache, the package graph, the underlying native libraries — is an independent versioning surface, and any of them can diverge silently under insufficient discipline.

The engineering response is not paranoia. It is a small set of concrete habits: commit a .python-version file, commit a lockfile, pin your Docker base image to a digest, and run a CI matrix. None of these require exotic tooling or architectural heroics. They require the maturity to treat environment reproducibility as a first-class concern rather than something you deal with after things break.

The bugs will not stop arriving the moment you adopt these practices. But they will arrive loudly, early, and on machines that are not serving production traffic — which is exactly where you want to find them.

FAQs

Q: Do I need both pyenv and Poetry, or does one replace the other?
A: They solve different problems and work together. pyenv manages which Python interpreter binary is active on your machine — think of it as controlling the socket. Poetry or uv manages which packages go into the environment built on top of that interpreter — think of it as controlling what gets plugged in. You typically need both: pyenv to pin the interpreter, Poetry or uv to pin the package graph.

Q: Is it safe to use uv in production CI pipelines given its relative newness?
A: Yes, for most projects. uv implements the same PEP standards as pip and Poetry, and its resolver is more correct in certain edge cases than pip's backtracking resolver. The main caveat is ecosystem maturity around plugins — some Poetry plugins and tox integrations assume Poetry as the backend. Evaluate against your specific toolchain, but the resolver itself is production-ready and widely adopted.

Q: My team uses conda for scientific computing. Does this advice apply?
A: Partially. conda solves the cross-language dependency problem (R, BLAS, LAPACK, CUDA) that pip-based tools cannot address. But conda environments still suffer from version drift if you do not commit an explicit environment specification. Use conda env export --no-builds and commit the result. conda-lock provides a true cross-platform lockfile for conda environments and is strongly recommended for any team that needs reproducibility across machines.

Q: What is the single highest-leverage change a team can make right now?
A: Commit a lockfile. If you are using pip with an unpinned requirements.txt, switch to pip-tools and commit the output of pip-compile today. If you are using Poetry or uv but the lockfile is in .gitignore (a surprisingly common mistake from template generators), remove it from .gitignore and commit the lockfile immediately. A committed lockfile eliminates the majority of "it works on my machine" version bugs overnight, with no other changes required.

Q: How do I handle version mismatches that only appear on Windows in CI?
A: Windows-specific mismatches most commonly come from three sources: path separator handling in wheel filenames, CRLF line endings affecting scripts, and Windows-specific ABI variations in C extensions (particularly around MSVCRT vs UCRT linkage in Python 3.12+). The correct approach is to include a windows-2022 runner in your CI matrix and verify that all your C extension dependencies ship win_amd64 wheels on PyPI. If a wheel does not exist for Windows, pip falls back to source compilation, which requires Visual Studio Build Tools — a hidden build dependency that is easy to miss in container-based CI.

Q: I inherited a project with no lockfile and a loose requirements.txt. Where do I start?
A: Start by freezing what you have right now before anything else drifts further. Run pip freeze > requirements.frozen.txt in a known-working environment and commit that file as an emergency snapshot. Then migrate to a proper tool: run pip install uv, create a pyproject.toml with your top-level dependencies, run uv lock, and replace the frozen file with the generated uv.lock. Do this migration in a single PR so the change is visible and reviewable. Once the lockfile is in place, the hard part is done.
