Debian's approach to Rust – Dependency handling (2022)

(diziet.dreamwidth.org)

64 points | by zdw3 days ago

13 comments

  • davexunit21 hours ago
    It's frustrating to see the "rewrite it in Rust" meme continue to spread to all sorts of projects when there is no reasonable packaging story for the language and no solution in sight, because the community does not see it as a problem. Cargo has introduced a huge problem for distros that didn't exist with languages like C. Compared to Rust, packaging C projects for distros is easy. Because of these problems, developers are receiving more issue reports from distro maintainers and are becoming more hostile to distro packaging, adopting a "use my binary or go away" mentality. Of course, this is a general problem that has been happening for all languages that come with a language-specific package manager; developers love them but they're typically unaware of the massive downstream problems they cause. It's getting harder and harder to exercise freedom 2 of free software and that's a shame.
    • kpcyrd13 hours ago
      > when there is no reasonable packaging story for the language

      For context: I've been around in the Debian Rust team since 2018, but I'm also a very active package maintainer in both Arch Linux and Alpine.

      Rust packaging is absolutely trivial with both Arch Linux and Alpine. For Debian specifically there's the policy of "all build inputs need to be present in the Debian archive", which means the source code needs to be spoon-fed from crates.io into the Debian archive.

      This is not a problem in itself, and cargo is actually incredibly helpful when building an operating system, since things are very streamlined and machine-readable instead of everybody handrolling their own build systems with Makefiles. Debian explicitly has cargo-based tooling to create source packages. The only manual step is often annotating copyright attributions, since this cannot be sufficiently done automatically.
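
      For reference, the workflow with that tooling is roughly the following (a minimal sketch assuming the debcargo tool; exact subcommands, flags and output paths may differ):

        $ sudo apt install debcargo
        $ debcargo package anyhow    # fetch the crate from crates.io and generate
                                     # the Debian source package skeleton for it

      The remaining manual work is then mostly the debian/copyright annotation mentioned above.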

      The much bigger hurdle is the bureaucratic overhead. The librust-*-dev namespace is for the most part very well defined, but adding a new crate still requires an explicit approval process, even when uploads are sponsored by seasoned Debian Developers. There was a request for auto-approval for this namespace, like there is for llvm-* or linux-image-*, but back then (many years ago) this was declined.

      With this auto-approval rule in place it would also be easier to have (temporarily) multiple versions of a crate in Debian, to make library upgrades easier. This needs to be done sparingly, however, since it takes up space in Packages.xz, which is also downloaded by all users with every `apt update`. There's currently no way to make a package available only for build servers (and people who want to act as one), but this concept has been discussed on mailing lists for this exact reason.

      This is all very specific to Debian, however; I'm surprised you're blaming Rust developers for this.

      • blub10 hours ago
        > but adding a new crate still requires an explicit approval process, even when uploads are sponsored by seasoned Debian Developers.

        What’s the additional security benefit of the explicit approval? A major problem with Rust (for me) is the multitude of dependencies required even for trivial software, which complicates supply-chain monitoring.

        Auto-approval for crates gives me a bad feeling.

        • kpcyrd6 hours ago
          It's not a security control, it's for copyright compliance and in case of incorrectly filled out package metadata.
    • ahupp21 hours ago
      It’s not like this is unique to Rust; you see similar issues with Node and Python. Distributions have many jobs, but one was solving the lack of package management in C. Now that every modern language has a package manager, trying to apply the C package management philosophy is untenable: specifically, the idea of a single, globally installed version of each library, and of producing distro packages for every language-specific package.
      • llm_trw20 hours ago
        Apart from the fact that building it the 'non-C way' results in a mystery-meat package where you have no idea what it contains: https://hpc.guix.info/blog/2021/09/whats-in-a-package/

        Guix is also a distro that allows for any number of versions of the same package globally, something that language-specific dependency managers do not.

        Distros are there for a reason, and anyone who doesn't understand that reason is just another contributor to the ongoing collapse of the tower of abstractions we've built.

        • kpcyrd14 hours ago
          This is outdated information. Debian (and other distros) already have their own SBOM format, called buildinfo files, which encodes this kind of information.

          In Debian stable ripgrep on amd64 is currently on version 13.0.0-4+b2.

          The relevant buildinfo file can be found here:

          https://buildinfos.debian.net/buildinfo-pool/r/rust-ripgrep/...

          It encodes the entire Rust dependency graph that was used for this binary, with exact versions of each crate.
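
          Schematically, the relevant part of such a buildinfo file looks something like this (an illustrative excerpt with placeholder crate versions, not copied from the linked file):

            Source: rust-ripgrep
            Version: 13.0.0-4+b2
            Installed-Build-Depends:
             librust-bstr-dev (= 1.0.0-1),
             librust-clap-dev (= 4.0.0-1),
             rustc (= 1.63.0+dfsg1-1),
             ...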

        • tcfhgj19 hours ago
          > Distros are there for a reason

          for me: make an os out of the kernel

        • pjmlp18 hours ago
          Making a userspace for the Linux kernel, and sorting out disagreements by creating yet another fork.
      • rlpb17 hours ago
        > Now that every modern language has a package manager...

        ...they fail to integrate with dependencies written in any other language.

        It's fine if you just want to sit a monoculture language software stack on top of a multilingual base platform. You can't make a functional system with one language alone, yet those who criticise distribution packaging architecture do so while simultaneously depending on this multilingual integration, which language-specific package managers do not provide. There is no viable alternative today. Most critics think they understand the general problem but only have narrow practical experience, so they end up believing their solution is superior without considering the general multilingual software supply problem.

        Nix isn't a solution either, because in the general case Nix doesn't provide security support for arbitrary, multiple dependency versions either.

      • XorNot18 hours ago
        Except from a management and maintenance perspective...this is a nightmare. When a security vulnerability drops somewhere, everywhere needs to be patched ASAP.

        Distros (and the people who run most scales of IT org) want to be able to deploy and verify that the fix is in place - and it's a huge advantage if it's a linked library that you can just deploy an upgrade for.

        But if it's tons and tons of monolithic binaries, then the problem goes viral - every single one has to be recompiled, redeployed etc. And frequently at the cost of "are you only compatible with this specific revision, or was it just really easy to put that in?"

        It's worth noting that Docker and friends, while still suffering from this problem, don't quite suffer from it in the same way - they're shipping entire dynamically linked environments, so while not as automatic, being able to simply scan for and replace the library you know is bad is a heck of a lot easier than recompiling a statically linked exe.

        People are okay with really specific dependencies when it's part of the business-critical application they're supporting - i.e. the Node.js or Python app which runs the business can do anything it wants, and we'll keep it running no matter what. Having this happen to the underlying distribution, though?

        (of note: I've run into this issue with Go - love the static deploys, but if someone finds a vulnerability in the TLS stack of Go suddenly we're rushing out rebuilds).

        • rat8716 hours ago
          Why is this an issue? Simply recompile and re-download each package. If the distro worries that the maintainers would take too long, just fork and recompile the packages themselves. These days it's really not that big of a problem in terms of disk space or network traffic. And if some packages are large, it's often because of image resources, which can be packaged separately. It seems like a lot less effort than trying to guess whether a dynamically linked library will work with every package in every case after the update.
        • curt1517 hours ago
          >When a security vulnerability drops somewhere, everywhere needs to be patched ASAP.

          Is that ultimately the responsibility of the application developer or the OS developer?

          • XorNot17 hours ago
            It's "whoever turns up to do the work" but I would point out that distros generally have more people in the process who can pick up the work.

            The issue is that one way or another it needs to happen ASAP: so either the distro is haranguing upstream to "approve" a change, or they're going to be going in and carrying patches to Cargo.toml anyway - so this idea of "don't you dare touch my dependencies" lasts exactly until you need a fix in ASAP.

            • LtWorf10 hours ago
              Probably most of these tiny crates have 1 or 0 maintainers. Chances are that they will not be quick to fix a vulnerability.

              And even if they are, for rust software that doesn't come from debian, there is no way to ensure it all gets rebuilt and updated with the fix.

              Also, projects are generally slow (taking several months) to accept patches. When a distribution has fixed something, its users notice no issue; the upstream project, if downloaded and compiled directly, would be a different matter entirely.

    • zozbot23420 hours ago
      The "problems" are the same as with any statically-linked package (or, for that matter, any usage of AppImage, Flatpak, Snap, container images etc. as a binary distribution format). You can use Rust to build a "C project" (either a dynamic library exporting a plain C API/ABI, or an application linking to "C" dynamic libraries) and its packaging story will be the same as previous projects that were literally written in C. The language is just not very relevant here, if anything Rust has a better story than comparable languages like Golang, Haskell, Ocaml etc. that are just as challenging for distros.
      • zajio1am19 hours ago
        This is unrelated to whether it is statically-linked or dynamically-linked, it is about maintaining compatibility of API for libraries.

        In C, it is generally assumed that libraries maintain compatibility within one major version, so programs rarely have tight version intervals and maintainers could just use the newest available version (in each major series) for all packages depending on it.

        If the build system (for Rust and some other languages) makes it easy to depend on specific minor/patch versions (or upper-bound intervals of versions), it encourages developers to do so instead of working on fixing the mess in the ecosystem.

        • pornel1 hour ago
          This is an inaccurate generalization of both C and Rust ecosystems, and an inflammatory framing.

          openssl has made several rounds of incompatible changes. ffmpeg and related av* libs make major releases frequently. libx264 is famous for having "just pin a commit" approach to releases.

          It's common for distros to carry multiple versions of major libraries. Sometimes it's a frequent churn (like llvm), sometimes it's a multi-year migration (gtk, ncurses, python, libpng).

          C libraries aren't magically virtuous in their versioning. The language doesn't even help with stability; it's all manual, painstaking work. You often don't see these pains because the distro maintainers do the heroic work of testing upgrades, reporting issues upstream, holding back breaking changes, and patching everything to work together.

          ----

          Cargo packages almost never pin specific dependency versions (i.e. they don't depend on exact minor/patch version). Pinning is discouraged, because it works very poorly in Cargo, and causes hard dependency resolution conflicts. The only place where pinning is used regularly is pairs of packages from the same project that are expected to be used together (when it's essentially one version of one package, but had to be split into two files for technical reasons, like derive macros and their helper functions).

          By default, and this is the universally used default, Cargo allows semver-major-compatible dependency upgrades (comparable to a soversion). `dep = "1.2.3"` is not exact, but means >=1.2.3 && <2.0.0. The ecosystem is quite serious about semver compatibility, and there is tooling to test and enforce it. Note that in Cargo the first non-zero number in the version is the semver-major.
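
          Concretely, in Cargo.toml terms (the version numbers here are just examples):

            [dependencies]
            serde = "1.2.3"      # caret requirement: accepts >=1.2.3, <2.0.0
            regex = "1"          # any 1.x release
            libc  = "=0.2.150"   # exact pin: rare and discouraged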

        • rtpg16 hours ago
          A rust program depends on various libraries. It releases a specific version. Not pinning to specific dependencies for that specific version is weird for everyone who gets that program outside of their OS distribution package manager!

          I release foo 1.2.3 as a package, and when doing so it depends on bar 4.5.6. Perhaps foo still compiles with bar 4.5.2 or 4.6.8... but that's _not_ foo 1.2.3, right? And if there are bug fixes in some of those but not others, and I had my deps set up to just package with "bar 4.x", suddenly I'm getting bug reports for "1.2.3" without a real association to "the" 1.2.3.

          Having said this, perhaps it's as easy as having two sets of deps (deps for repackagers, deps for people wanting to reproduce what I built)... but if a binary is being shipped, _not_ being explicit about the exact dependencies used is upstream of a whole lot of problems! Is there a great answer from OS distro package managers for this?

          Is the thing that should happen here that there's a configure step in Rust builds that adjusts deps based on what you have?

          • LtWorf10 hours ago
            If your library changes API all the time, do you think it's a good library?

            Do you rewrite every API call in your software every time you bump dependencies?

            Do you think that a library that changes API very often and thus remains insecure because bumping it breaks the program should be running on people's computers?

            • zozbot2348 hours ago
              > If your library changes API all the time, do you think it's a good library?

              If it has a 0.x.y version number? Yes, that's what 0.x means to begin with. In fact Rust/cargo provides a stricter interpretation of semver than the original, which allows you to express guarantees of API stability (by keeping the minor version number fixed) even in 0.x-version projects.
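
              A quick illustration of that stricter reading (crate names and versions made up):

                [dependencies]
                foo = "0.3.1"   # accepts >=0.3.1, <0.4.0 -- 0.3 -> 0.4 counts as a breaking bump
                bar = "0.3"     # any 0.3.x, but never 0.4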

    • phire16 hours ago
      The major source of conflict is that the distro model for C/C++ dependencies is simply not good. Developers only tolerate it because, before Linux package managers, the problem was entirely unsolved. We are talking about a time before even autoconf, when almost everyone was rolling their own configure scripts or creating makefiles which self-configured.

      It was a mess, and Linux package managers were the first to actually solve the problem. I fully believe that the explosive growth of Linux in the early years had little to do with the kernel, and was much more about the proper package managers introduced by Debian, Red Hat, and friends.

      But just because package managers were massively better than nothing, and nothing (universally) better has come along since, doesn't mean they are actually good. The user experience is pretty good. But the drive to dynamically link every single dependency, to have only one (maybe two) versions of each dependency, and to randomly update dependencies causes massive problems for developers. It only works as well as it does for C/C++ applications because their developers have gotten used to working within the restrictive framework. And maybe a bit of Stockholm syndrome.

      -----------------

      Let me tell you a story:

      Back in 2014, I was working on Dolphin Emulator (which is entirely C++), trying to fix regressions for a Dolphin 5.0 release. One of these regressions was that on Linux, launching a game a second time (or opening the controller config window twice) would cause a crash. I quickly tracked it down to a bug introduced in SDL 2.0.3. Most consumers of SDL weren't impacted by this bug, because they only ever initialised SDL once.

      The fix was simple enough; it had already been fixed in the latest revision and so would be fixed whenever 2.0.4 was released. But we wanted to release before that, so how to fix the bug? If this was cargo, I would just point it at a git revision (possibly a custom branch, with just the fix backported). But that wasn't an option.

      I spent ages trying to work out a workaround that avoided the bug, but couldn't find one. I considered including a patched version of SDL with Dolphin; Dolphin's source tree already contained a bunch of libraries which we statically linked in (some patched, others just there for easier builds on Windows/Mac). But I knew from experience that any distro would simply patch our build system to use the distro version. Dolphin's build system already had an extensive set of manual checks to use the distro version of a library if it existed. Many of these checks were contributed upstream by distro package maintainers.

      I considered throwing an error message (twice, both at build time and at runtime) if Dolphin was linked with a bugged version of SDL 2... But I figured the distro maintainers would probably patch out both error messages too. Remember, we are talking about a bug that wouldn't even be fixed until the next release of SDL, with an uncertain release date. And there would probably be distros which stuck with the buggy version for way too long (Debian actually avoided the bug, because it was still shipping SDL 2.0.1, from before the bug was introduced).

      Eventually I got frustrated. And I realised that SDL was only used for controller input, and only on Linux/BSD. SDL already wasn't meeting our needs: we had a dedicated core input backend for macOS and both DInput and XInput backends for Windows because they worked better. I had also just gone through the entire SDL input code, back when I was trying to find a workaround, and had learned how it worked.

      So... to work around the limitations of Linux distro packaging (and a stupid, easy-to-fix SDL bug), I simply deleted SDL out of the Dolphin codebase. It only took me three days to completely reimplement all the controller input functionality, directly accessing the lower-level Linux libraries: libudev for controller detection/hotplugging, opening the correct `/dev/input/eventX` file, and libevdev for parsing the raw input events.

      Three days to implement it. A few weeks of testing to find bugs... Not only did this work around the bug, but it allowed Dolphin to access controller functionality that SDL wasn't exposing at the time (suddenly, the DualShock 4's touchpad would work, as would the pressure-sensitive buttons on the DualShock 3). I don't think SDL was even exposing the gyroscopes/accelerometers of those controllers at the time.

      ---------------------

      I was very happy with the result, but it was crazy that I was forced to go to such extremes just because of Linux package managers' insistence on enforcing one shared version of a dependency. I'm not sure how common it is for developers to rip out and replace an entire dependency because of distro packaging, but workarounds for buggy library versions are common.

      These days, I'm really enjoying programming in Rust; the fact that cargo always gives me the exact version of the package I asked for is a massive win. And why should we give it up, just because distros want to do something worse? It would also be hard to make the transition, because a lot of Rust code has been written under the assumption that cargo will give you the version (or version range) that you ask for.

      If you think about it, it's kind of insane that packages will be built and tested against one version of a library, just to have it randomly replaced with a later version, potentially introducing bugs. In some edge cases, packages even get downgraded to an older version between build/test and being installed on a user's system.

      ---------------------

      What's the solution for rust? I have no idea. Distros do have a point that statically linking everything isn't the best idea. Maybe we can reach some kind of middle ground where some dependencies are statically linked and others are dynamically linked, but with explicit support for multiple versions of each dependency co-existing on the system.

      Maybe we also add explicit support to package managers to track which dependencies have been statically linked into them, so we can force a rebuild of them all when there is a security issue.

      • oneshtein11 hours ago
        > I considered throwing an error message (twice, both at build and runtime) if dolphin was linked with a bugged version of SDL 2... But I figured the distro maintainers would probably patch out both error message too.

        Distro maintainers will not patch a bug into a program; they will patch the bug out of SDL. :-/

        • phire10 hours ago
          Maybe.... Probably. Especially if I had left a detailed comment explaining why.

          But I still would have required an interim solution for everyone building from source until these patched/fixed versions of SDL started shipping in distros. And then how would the warning message tell the difference between the original SDL 2.0.3 and a patched version of SDL 2.0.3?

          And we are talking about dozens of distros, multiple chances for something to go wrong.

          My point is that I quickly came to the conclusion that ripping out SDL and reimplementing the functionality was the simplest, most conclusive solution.

          • oneshtein4 hours ago
            > But I still would have required an interim solution for everyone building from source until these patched/fixed versions of SDL started shipping in distros.

              $ rpm -q SDL2
              SDL2-2.30.9-1.fc40.x86_64
              SDL2-2.30.9-1.fc40.i686
            
            :-/
    • lmm17 hours ago
      It's the distro packaging that makes it hard to exercise freedom 2, and it always has been - frankly when distro packagers intervene in language packaging it's always done more harm than good. Distributions still don't have good answers to installing packages locally, or even installing two versions of the same library side by side, and it's not reasonable for that to hold back packaging and development progress. (There are good solutions to this problem, but they look more like Nix than like traditional distro packaging).
      • davexunit16 hours ago
        Nix and Guix solve the problem, I don't see why they shouldn't count. Traditional global install distros are too inflexible and imperative.
    • forrestthewoods20 hours ago
      > and are becoming more hostile to distro packaging

      This to me sounds like the distro packaging model isn’t a good one.

      • davexunit20 hours ago
        Someone always makes this comment and it's always wrong.
        • woodruffw20 hours ago
          It’s been great for C and C++ packaging. I don’t think the track record has been great for Python, Go, JavaScript, etc., all of which surfaced the same problems years before Rust.
          • aragilar15 hours ago
            Why do conda or bazel exist then? If you're willing to limit yourself to pure Python plus a sprinkling of non-Python for crypto, to a single language with IPC over HTTP (i.e. Go), or to running in a locked-down, effectively monolingual environment for your language (JS), then the language package manager makes sense - if you can ignore the subtle breakage that comes with each of them (e.g. JupyterHub cannot use the PyPI-distributed version of specific packages, DNS issues on macOS). Otherwise you need a cross-language package manager, and the only thing that has appeared to scale is volunteer package maintainers (complemented in some cases by paid ones) looking after upstream-provided software.
          • saghm17 hours ago
            Yeah, this is my perception as well. My lukewarm take is that because so much of the packaged stuff was written in C/C++ for so long, the packaging systems are mostly optimized to work well with the quirks of things written in those languages, but that sometimes comes at the expense of stuff written in other languages. In a lot of ways, distro package managers have basically evolved into the pip/npm/Cargo/etc. of C/C++ packages, which leads to a mismatch when trying to also use them for other languages - pretty much the mismatch you'd expect when grappling with a build that mixes an arbitrary pair of languages from the list above.
        • pjmlp18 hours ago
          Let's start with the package fragmentation, for one, and even the same format not being compatible across distros.
          • davexunit16 hours ago
            Completely irrelevant. Freedom 2 means people are free to build and distribute as they wish. Upstream need not care what the format is. Just make the software easy to build and get out of the way.
            • pjmlp8 hours ago
              Not everyone cares about GPL, which doesn't specify how freedom 2 is supposed to work in practice anyway.
        • forrestthewoods20 hours ago
          The Linux model of global shared libraries is an objective failure. Everyone is forced to hack around this bad and broken design by using tools like Docker.
          • yjftsjthsd-h18 hours ago
            It's okay, Docker is also a failure, because it relies on random parties to package things up into container images and then keep the result up to date. Given the number of Dockerfiles I've seen that do charming things like include random binary artifacts and pinned versions that will probably never be checked for security updates, I tend to prefer the distro packages.
            • woodruffw18 hours ago
              That's a "catastrophic success," not a failure.

              I wish people would interrogate this more deeply: why do we see so many Dockerfiles with random binary artifacts? Why do people download mystery meat binaries from GitHub releases? Why is curl-pipe-to-bash such a popular pattern?

              The answer to the questions is that, with few exceptions, distributions are harder to package for, both in first- and third-party contexts. They offer fewer resources for learning how to package, and those resources that do exist are largely geared towards slow and stable foundational packaging (read: C and C++ libraries, stable desktop applications) and not the world of random (but popular) tools on GitHub. Distribution packaging processes (including human processes) similarly reflect this reality.

              On one level, this is great: like you, I strongly prefer a distro package when it's available, because I know what I'm going to get. But on another level, it's manifestly not satisfying user demand, and users are instead doing whatever it takes to accomplish the task at hand. That seems unlikely to change for the better anytime soon.

          • davexunit20 hours ago
            It's not the "Linux model". It's an antiquated distro model that has been superseded by distros like Guix and NixOS that have shown you can still have an understandable dependency graph of your entire system without resorting to opaque binary blobs with Docker.
            • pjmlp18 hours ago
              When will AWS, Azure, GCP adopt such great distros?
              • davexunit16 hours ago
                I know this is not a good faith question but I did devops for many years and the state of the tooling and culture is so atrocious that I don't know if they will ever catch on. They're all lost in a swamp of YAML and Dockerfiles, a decade or more behind the rest of the industry in terms of software engineering knowledge. No one ever had any clue when I tried to talk to them about functional and immutable package management.
                • pjmlp8 hours ago
                  It is a remark in the form of a question, pointing out that Nix/Guix are 3l1t3, and will never achieve the adoption scale of Red Hat, Debian, Arch, SuSE, Alpine, or the hyperscalers' own distributions.
                  • davexunit3 hours ago
                    I don't think Nix or Guix are particularly "3l1t3". They are less mainstream, but they have users of varying technical backgrounds.
            • forrestthewoods20 hours ago
              [flagged]
              • davexunit16 hours ago
                I use Guix. Have a nice day.
          • c0l019 hours ago
            And what is it that makes the *inside* of all (read: almost all) these nice and working Docker/container images tick?

            Distributions using a "model of global shared libraries".

            • int_19h19 hours ago
              Except that it's usually completely wasted on a Docker container. You could as well just statically link everything (and people do that).
            • forrestthewoods18 hours ago
              Uhh no?

              If you have to run 5 different Docker images, each with their own “global shared library” set, you clearly no longer have system-wide globals. You have an island of deps potentially per program, or possibly a few islands for a few sets of programs.

              Which, once again, completely defeats the entire purpose of the Linux global shared library model. It would have been much much much simpler for each program to have linked statically or to expect programs to include their dependencies (like Windows).

              Containers should not exist. The fact that they exist is a design failure.

              • davexunit16 hours ago
                Static linking is acceptable when you can update a version of a library in one place and it will trigger rebuilds of all dependent software, ensuring things like security updates are delivered system-wide with confidence. The Windows every-app-is-an-island model makes you reliant on every app developer for updates to the entire dependency graph, which in practice means you have a hodge podge of vulnerable stuff.
          • oneshtein11 hours ago
            It means that Everyone is now the maintainer of his own distro fork.
        • FridgeSeal17 hours ago
          Someone always makes this comment and it’s always wrong.

          If you’re going to say something so spicy, at least back it up with some reasons, otherwise you could have just said it out loud and saved everyone from reading it (blah blah HN guidelines blah blah).

      • jeroenhd17 hours ago
        It's not good but it's the best we've got. There's a reason the most common alternative to distro dependency management involves shipping entire copies of another distro, with a few adjustments, inside of a sandbox to make sure it doesn't conflict with the other copies of other distros.
        • forrestthewoods16 hours ago
          > but it's the best we've got

          Naw. Lack of reliability in running programs is a distinctly Linux phenomenon. Windows culture is much much much better.

          • davexunit16 hours ago
            Shipping bundles of whatever libraries for each app is a security nightmare. I don't know how the Windows model can be suggested with any seriousness.
            • forrestthewoods16 hours ago
              The modern-day reality is that on Linux everyone ships countless Docker containers of god knows what libraries and binaries, built lord knows when. So it’s exactly the same thing, except there’s an extra step that’s fighting how the system wants to operate.
              • abenga13 hours ago
                The existence of Linux distros with software not packaged using Docker implies that it is definitely not "everyone".
                • forrestthewoods12 hours ago
                  eye roll

                  I know I’m dealing with programmers but the level of precision required on internet comments is absurd.

                  I work on Windows and don’t use Docker, so I too know it’s not Everyone. But when I said everyone, I would like to think not Everyone assumed I meant literally Everyone. Capitalization intended.

                  • abenga6 hours ago
                    Aside from server software, who really distributes software using Docker? My point is it's closer to "no one" than "everyone".
  • Macha21 hours ago
    To be blunt:

    If you compile my Rust library against dependencies that are not compatible as declared in Cargo.toml, the result is on you. If you want to regenerate a compatible Cargo.lock, that would be more understandable, and I don't go out of my way to specify "=x.y.z" dependencies in my Cargo.toml, so I have effectively given permission for that anyway.
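
    (For what it's worth, regenerating a compatible Cargo.lock is a one-liner; the package name and version below are just placeholders:)

      $ cargo update                                # rewrite Cargo.lock to the newest versions Cargo.toml allows
      $ cargo update -p some-dep --precise 1.2.4    # or force one dependency to a specific allowed version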

    They give the example of Debian "bumping" dependencies, but given the relative "freshness" of your typical Rust package vs say... Debian stable, I imagine the more likely outcome would be downgrading dependencies.

    This reminds me of the time Debian developers "knew better" with the openssh key handling...

    • guappa18 hours ago
      To be blunt: if your library requires me to handle it differently than all the other software that exists, it's a bad library. If you break compatibility every week you should not be writing a library at all.

      > This reminds me of the time Debian developers "knew better" with the openssh key handling...

      What percentage of the CVEs that exist come from Debian patches, and what percentage from upstream developers? Can you confidently claim that upstream developers are better at security than Debian developers? Or are you just picking a single example and ignoring all the rest of the data that you don't like?

      I have 3794 packages currently installed on my machines. If all of them required special handling like your snowflake library I'd just not be using a computer at all, because I certainly don't have the capacity to deal with 3794 special snowflakes.

      • lmm17 hours ago
        > I certainly don't have the capacity to deal with 3794 special snowflakes.

        It's hardly unique. Every major language other than C/C++ (and Python, but Python is just a mess in general) works like Rust here. If anything I'd say it's the C people who are the snowflakes.

        > Can you confidently claim that upstream developers are better at security than debian developers?

        I can confidently claim that upstream has better policies. Debian routinely patches security-critical software without dedicated security review; this predictably led to disaster in the past and will lead to disaster again.

        Also the Debian SSL key bug was not just one more CVE on a par with any other. It was probably the worst security bug ever seen in general-purpose software.

        • andrewshadura8 hours ago
          > Debian routinely patches security-critical software without dedicated security review

          This is untrue. Provide some evidence or withdraw your false claim. Thanks.

        • guappa10 hours ago
          > I can confidently claim that upstream has better policies.

          Debian has thousands of different upstreams; you don't even know who they are. How can you claim to know their policies and to have evaluated all of them?

          Also, all the one-person projects do not get fixes in a timely manner, as you would understand if there were any amount of reasoning behind your arguments.

          Are you done making stuff up or must you continue?

          > Also the Debian SSL key bug

          1. Shit happens

          2. If you want random data, you read from /dev/random; an uninitialised buffer is NOT random.

          3. If your example dates to over a decade ago, I'd say the track record is excellent.

    • jeroenhd17 hours ago
      That's a valid take, and a good reason for your library (and other software using it) never being included in major distros. You're not obligated to make your code work with specific distros.

      It's a good reason for others to avoid your libraries, though, especially if they do intend to be compatible with common distros.

      The openssl issue was just the result of openssl being unnecessarily complex and difficult to work with, though.

    • orra18 hours ago
      > This reminds me of the time Debian developers "knew better" with the openssh key handling...

      This is a totally unfair comparison. The upstream OpenSSL code was broken: it was invoking undefined behaviour. Moreover, the Debian maintainer had asked upstream whether it was OK to remove the code, because it was causing Valgrind memory warnings.

      • lmm17 hours ago
        > The upstream OpenSSL code was broken: it was invoking undefined behaviour.

        No it wasn't. It was reading indeterminate character values, which has defined behaviour (char cannot have a trap representation).

        > the Debian maintainer had asked upstream whether it was OK to remove the code

        They hadn't asked the correct mailing list, and they only asked at all for one of the two cases where they removed similar code, which wasn't the one that caused the problem.

        • orra16 hours ago
          Reading an indeterminate value is its own undefined behaviour. See the second paragraph from the bottom of page 492: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
          • lmm15 hours ago
            That's a change in C11. It was not undefined behaviour in C99 (which was the live standard at the time).
            • orra8 hours ago
              I linked N1256, which is essentially C99 plus the errata (Technical Corrigenda 1, 2, & 3).
    • kpcyrd14 hours ago
      > but given the relative "freshness" of your typical Rust package vs say... Debian stable

      That's not how Debian development is done; fresh software is uploaded to unstable only, and then eventually ends up in a stable release. As a maintainer, Debian stable is something you support, not something you develop against.

    • yjftsjthsd-h18 hours ago
      > If you compile my Rust library against dependencies that are not compatible as declared in Cargo.toml, the result is on you.

      The annoying thing is that that was always supposed to be true. Users of distro packages are already supposed to report all bugs to the package maintainers, and then they deal with it. Sometimes part of dealing with it is finding that a bug did in fact originate upstream and reporting it, but that was never supposed to be step 1.

      • XorNot18 hours ago
        The thing is users want a solution: if a user is reporting a bug, then two things are already true: (1) they're experiencing it and it's blocking them from using the software, and (2) it's significant enough to their process that they didn't just install something else.

        So "user reporting a bug" is already an unusually highly motivated individual compared to the alternatives. Which leads to (3) they were trying to use the software right then, and a response from package maintainers of "this is an upstream bug" doesn't solve the problem, and in fact barely implies that a solution might be being requested from upstream.

        Which also leads to (4): the most common immediate resolution anyone is going to present that user is "use the latest version / master / whatever". Which means the user is already going to make the transition to "now I'm a direct upstream user" (and equally the high probability the bug might still exist in upstream anyway).

    • MuffinFlavored20 hours ago
      > If you want to regenerate a compatible Cargo.lock

      I only read the comments here so far and not the article but it sounds to me that people are up-in-arms about the "how do we handles" related to:

      * primarily C/GNU makefile ecosystem

      then

      you now want to add Rust to it

      and

      Cargo.lock only cares about Cargo.toml versions, not what other crates' `build.rs` are linking against from the system, external to Rust?
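
      (For anyone unfamiliar, that last point looks roughly like this minimal build.rs sketch; the library name is just an example:)

        // build.rs: ask Cargo to link a library the *system* provides;
        // nothing in Cargo.toml or Cargo.lock records which version of it is installed
        fn main() {
            println!("cargo:rustc-link-lib=z");   // link against the system zlib
        }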

  • woodruffw1 day ago
    The author writes:

    > I am proposing that Debian should routinely compile Rust packages against dependencies in violation of the declared semver, and ship the results to Debian’s millions of users.

    Followed by a rationale. However, the rationale rings hollow to me:

    * Compiling a Rust package outside of its declared semver might not cause security problems in the form of exploitable memory corruption, but it's almost certainly going to cause stability problems with extremely undesirable characteristics (impossible for the upstream to triage, and definitionally unsupported).

    * The assumption that semantic changes to APIs are encoded within types is incorrect in the general case: Rust has its fair share of "take a string, return a string" APIs where critical semantic changes can occur without any changes to the public interface. These again are unlikely to cause memory safety issues but they can result in logical security issues, especially if the project's actual version constraints are intended to close off versions that do work but perform their operation in an undesirable way.

    As a contrived example of the above: `frobulator v1.0.0` might have `fn frob(&str) -> &str` which shells out to `bash` for no good reason. `frobulator v1.0.1` removes the subshell but doesn't change the API signature or public behavior; Debian ends up using v1.0.0 as the build dependency despite the upstream maintainer explicitly requesting v1.0.1 or later. This would presumably go unnoticed until someone files a RUSTSEC advisory on v1.0.0 or similar, which I think is a risky assumption given the size (and growth) of the Rust ecosystem and its tendency for large dep trees.
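
    Spelled out as (entirely hypothetical) code - loosely adapted from the signature above, returning String so the sketch compiles:

      // Two releases of the imaginary frobulator crate: same public signature, different behaviour.
      mod frobulator_1_0_0 {
          // v1.0.0: shells out to bash for no good reason -- the behaviour the
          // upstream constraint ">=1.0.1" is meant to exclude
          pub fn frob(s: &str) -> String {
              let out = std::process::Command::new("bash")
                  .arg("-c")
                  .arg(format!("printf '%s' '{s}'"))
                  .output()
                  .expect("failed to run bash");
              String::from_utf8_lossy(&out.stdout).into_owned()
          }
      }

      mod frobulator_1_0_1 {
          // v1.0.1: identical signature and observable result, subshell removed
          pub fn frob(s: &str) -> String {
              s.to_owned()
          }
      }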

    The author is right that, in practice, this will work 99% of the time. But I think the 1% will cause a lot of unnecessary downstream heartburn, all because of a path dependency (assumptions around deps and dynamic linkage) that isn't categorically relevant to the Rust ecosystem.

    • rcxdude21 hours ago
      It is also something that will make upstream hate them, possibly to the point of actively making their life difficult in retribution.

      (Problems along this line already are why the bcachefs dev told users to avoid debian and the packager eventually dropped the package: https://www.reddit.com/r/bcachefs/comments/1em2vzf/psa_avoid...)

      • woodruffw21 hours ago
        I'm one of those potential upstreams. It's all OSS and I understand the pain of path dependency, so I don't think it would be fair to go out of my way to make any downstream distribution maintainer's life harder than necessary.

        At the same time, I suspect this kind of policy will make my life harder than necessary: users who hit bugs due to distribution packaging quirks will be redirected to me for triage, and I'll have a very hard time understanding (much less fixing) quirks that are tied to transitive subdependencies that my own builds intentionally exclude.

      • kelnos21 hours ago
        > It is also something that will make upstream hate them

        This was my thought. Xubuntu decided to ship unstable development pre-releases of the core Xfce components in their last stable OS release, and I got really annoyed getting bug reports from users who were running a dev release that was 4 releases behind current, where things were fixed.

        This was completely unnecessary and wasted a bunch of my time.

      • Arnavion20 hours ago
        Users ought to report bugs to their distro first, and the distro package maintainer should then decide whether it's an upstream bug or a distro packaging bug. That would cut out all the noise that upstream hates.

        The problem is users have gotten savvy about talking to upstream directly. It helps that upstream is usually on easy-to-use and search-engine-indexed platforms like GitHub these days vs mailing lists. It would still be fine if the user did the diligence of checking whether the problem exists in upstream or just the distro package, but they often don't do that either.

        • int_19h19 hours ago
          It's a lot of effort for the user to do that kind of testing. They would need to either build that package from scratch themselves, or to install a different distro.
          • Arnavion19 hours ago
            "Users ought to report bugs to their distro first" does not require any testing or compiling. Them reporting directly upstream is the one that would require diligence to prove it's not a distro bug, though that still doesn't nececssarily require compiling or checking other distros, just looking at code.
      • jeroenhd17 hours ago
        I've been avoiding bcachefs because the main author always seems to be stirring up some kind of internet drama. If they recommend users stay away from Debian, I take that as a good reason to use Debian over other distros.

        I like Debian's stability and don't really need my filesystem to move fast and break things. If this is the direction bcachefs is going, I don't think it's ever going to be a good platform to run an OS on top of.

        • rcxdude8 hours ago
          As far as I can tell, bcachefs is in "move fast and fix things" phase, where the dev is iterating to bring the fs from experimental to actually usable, and I can imagine it's quite stressful if some segment of your existing users are being given a version with known serious bugs.
      • Hemospectrum21 hours ago
        > ...and the packager eventually dropped the package

        Related HN discussion from August/September: https://news.ycombinator.com/item?id=41407768

    • amluto18 hours ago
      The rationale doesn’t even seem like a rationale — it’s more like an argument that this choice might plausibly work. But why? Those pure source build-time dependencies don’t even seem useful to package. As a user of Debian or a downstream distro, there is approximately zero chance that I would want to use these packages for any purpose.

      This is a completely different situation from C/C++ and from building a distro out of C/C++ source. C does not have a useful package manager, and it’s handy if my distro has C libraries packaged. My distro might use them!

      (I might also use them for my own unpackaged work! But then I might get grumpy that the result of building my own code isn’t actually useful outside that version of that distro, and then I risk having to use containers (sigh), when what I actually wanted was just a build artifact that could run on other distros.)

    • jvanderbot1 day ago
      I really don't understand this problem at all. I can cargo-deb and get a reasonable package with virtually no effort. What is uncompliant about that?
      • woodruffw23 hours ago
        My understanding (which could be wrong) is that this is an attempt to preserve Debian's "global" view of dependencies, wherein each Rust package has a set of dependencies that's consistent with every other Rust package (or, if not every, as many as possible). This is similar to the C/C++ packaging endeavor, where dependencies on libraries are handled via dynamic linkage to a single packaged version that's compatible with all dependents.

        If the above is right this contortion is similar to what's happened with Python packaging in Debian and similar, where distributions tried hard to maintain compatibility inside of a single global environment instead of allowing distinct incompatible resolutions within independent environments (which is what Python encourages).

        I think the "problem" with cargo-deb is that it bundles all-in-one, i.e. doesn't devolve the dependencies back to the distribution. In other words, it's technically sound but not philosophically compatible with what Debian wants to do.

        • iknowstuff20 hours ago
          The sheer amount of useless busywork needed for this, only to end up with worse results and piss off application creators, is peak fucking Linux.

          If I was paid to do this kind of useless work for a living I’d probably be well on the way to off myself.

          • XorNot18 hours ago
            Software bill-of-materials work has become a huge focus in the security/SRE space lately, due to the existence of actually attempted or successful supply chain attacks.

            In that context, creating an ever-increasing menagerie of subtly different versions of the same dependency is the pointless busywork, because for a tiny bit of developer convenience you're opening up an exponentially growing amount of audit work to prove the dependencies are safe.

            • flomo13 hours ago
              You make a great point, but I think we're going to see stuff like Microsoft selling 'supported NPM' to corps, rather than a zillion volunteers showing up to do monkey packaging work for Debian. (In fact, inserting some rando to fork the software makes the problem worse.)
              • XorNot6 hours ago
                Right, but the point is that open source then by necessity needs to constrain its dependencies to sensible, auditable sets.

                But I'd note that a hypothetical "verified NPM" would also result in the same thing: Microsoft does not have infinite resources for such a thing, so you'd just have a limited set of approved deps yet again (which would in fact make piggybacking on them relatively easy for distros).

                I can't see a way to slice it where it's reasonable to expect such a world to just support enormous nests of dependency versions.

                • LtWorf3 hours ago
                  The verified npm will be like the verified pypi: "this thing was built on github, but we actually have no fucking clue if it's a bitcoin miner or a legit library"
        • NobodyNada17 hours ago
          Cargo even supports multiple incompatible versions of the same library coexisting within a single binary. E.g. an application I maintain depended on both winit 0.28 and 0.29 for a while, since one of my direct dependencies required winit 0.28, another required winit 0.29, and the API changed significantly between the two versions. So if the set of Rust applications packaged by Debian grows large enough, this "let's just pick one canonical version of each package" approach seems like a complete non-starter.
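
          You can see this directly with `cargo tree --duplicates`; the output looks roughly like the below (illustrative, with made-up dependent crates):

            $ cargo tree --duplicates
            winit v0.28.7
            └── old-ui-crate v0.1.0
                └── my-app v0.1.0

            winit v0.29.4
            └── new-ui-crate v0.2.0
                └── my-app v0.1.0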

          I'd expect it to be much less effort for Debian to use the dependencies the application says it's compatible with, and build tooling allowing them to track/audit/vendor/upgrade Cargo dependencies as needed; rather than trying to shove the Cargo ecosystem into their existing tooling that is fundamentally different (and thus incompatible).

        • jppittma21 hours ago
          I think what I’m missing is why all of this is necessary in a world without dynamic linking. We’re talking about purely build time dependencies, who cares about those matching across static binaries? If it’s this much of a headache, I’d rather just containerize the build and call it a day.
          • woodruffw21 hours ago
            This is pretty much my thought as well -- FWICT the ultimate problem here isn't a technical one, but a philosophical disagreement between how Rust tooling expects to be built and how Debian would like to build the world. Debian could adopt Rust's approach with minor technical accommodations (as evidenced by `cargo-deb`), but to do so would be to shed Debian's commitment to global version management.
          • cpuguy8318 hours ago
            Because they are also maintaining all those build-time dependencies, including dealing with CVEs, backporting patches, etc.
          • oneshtein11 hours ago
            The world without dynamic linking ended eons ago. You are just too young to remember it.

            But, if you like it, find like-minded people here on HN and create a successful distro with static linking only. Maybe you will succeed where others have failed. Thank you in advance.

          • mook20 hours ago
            I believe that's usually so they can track when a library has a security vulnerability and needs to be updated, regardless of whether the upstream package itself has a version that uses the fixed library.
        • wakawaka2823 hours ago
          Packaging a distribution efficiently requires sharing as many dependencies as possible, and ideally hosting as much of the stuff as possible in an immutable state. I think that's why Debian rejects language-specific package distribution. How bad would it suck if every Python app you installed needed to have its own venv for example? A distro might have hundreds of these applications. As a maintainer you need to try to support installing them all efficiently with as few conflicts as possible. A properly maintained global environment can do that.

          Edit: I explained lower down but I also want to mention here, static linkage of binaries is a huge burden and waste of resources for a Linux distro. That's why they all tend to lean heavily on shared libraries unless it is too difficult to do so.

          • woodruffw23 hours ago
            > Packaging a distribution efficiently requires sharing as many dependencies as possible, and ideally hosting as much of the stuff as possible in an immutable state.

            I don't think any of this precludes immutability: my understanding is Debian could package every version variant (or find common variants without violating semver) and maintain both immutability and their global view. Or, they could maintain immutability but sacrifice their package-level global view (but not metadata-level view) by having Debian Rust source packages contain their fully vendored dependency set.

            The former would be a lot of work, especially given how manual the distribution packaging process is today. The latter seems more tractable, but requires distributions to readjust their approach to dependency tracking in ecosystems that fundamentally don't behave like C or C++ (Rust, Go, Python, etc.).

            > How bad would it suck if every Python app you installed needed to have its own venv for example?

            Empirically, not that badly. It's what tools like `uv` and `pipx` do by default, and it results in a markedly better net user experience (since Python tools actually behave like hermetic tools, and not implicit modifiers of global resolution state). It's also what Homebrew does -- every packaged Python formula in Homebrew gets shipped in its own virtual environment.
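
            For instance (real tools, arbitrary example package):

              $ pipx install httpie   # installed into its own private venv, exposed on PATH as `http`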

            > A properly maintained global environment can do that.

            Agreed. The problem is the "properly maintained" part; I would argue that ignoring upstream semver constraints challenges the overall project :-)

            • zajio1am19 hours ago
              > Debian could package every version variant ... Or, they could maintain immutability ... by having Debian Rust source packages contain their fully vendored dependency set. The former would be a lot of work, especially given how manual the distribution packaging process is today.

              That would work for distributions that provide just distributions / builds. But one major advantage of Debian is that it is committed to providing security fixes regardless of upstream availability, so they essentially stand in for maintainers. And maintaining many different versions instead of just the latest one is plenty of redundant work that nobody would want to do.

              • wakawaka2811 hours ago
                >... Or, they could maintain immutability ... by having Debian Rust source packages contain their fully vendored dependency set. The former would be a lot of work, especially given how manual the distribution packaging process is today.

                That's not reasonable for library packages, because they may have to interact with each other. You're also proposing a scheme that would cause an explosion in resource usage when it comes to compilation and distribution of packages. Packages should be granular unless you are packaging something that is just too difficult to handle at a granular level, and you just want to get it over with. I don't even know if Debian accepts monolithic packages cobbled together that way. I suspect they do, but it certainly isn't ideal.

                >And to maintain many different versions instead of just latest one is plenty of redundant work that nobody would want to do.

                When this is done, it is likely because updating is riskier and more work than maintaining a few old versions. Library authors that constantly break stuff for no good reason make this work much harder. Some of them only want to use bleeding edge features and have zero interest in supporting any stable version of anything. Package systems that let anyone publish easily lead to a proliferation of unstable dependencies like that. App authors don't necessarily know what trouble they're buying into with any given dependency choice.

            • wakawaka2823 hours ago
              >I don't think any of this precludes immutability: my understanding is Debian could package every version variant (or find common variants without violating semver) and maintain both immutability and their global view.

              Debian is a binary-first distro so this would obligate them to produce probably 5x the binary packages for the same thing. Then you have higher chances of conflicts, unless I'm missing something. C and C++ shared libraries support coexistence of multiple versions via semver-based name schemes. I don't know if Rust packages are structured that well.

              >Empirically, not that badly. It's what tools like `uv` and `pipx` do by default, and it results in a markedly better net user experience (since Python tools actually behave like hermetic tools, and not implicit modifiers of global resolution state). It's also what Homebrew does -- every packaged Python formula in Homebrew gets shipped in its own virtual environment.

              These are typically not used to install everything that goes into a whole desktop or server operating system. They're used to install a handful of applications that the user wants. If you want to support as many systems as possible, you need to be mindful of resource usage.

              >I would argue that ignoring upstream semver constraints challenges the overall project :-)

              Yes, it's a horrible idea. "Let's programmatically add a ton of bugs and wait for victims to report them back to us in the future" is what I'm reading. A policy like that can also be exploited by malicious actors. At a minimum, they need to ship the correct required versions of everything, if they ship anything.

              • woodruffw23 hours ago
                > Debian is a binary-first distro so this would obligate them to produce probably 5x the binary packages for the same thing. Then you have higher chances of conflicts, unless I'm missing something.

                Ah yeah, this wouldn't work -- instead, Debian would need to bite the bullet on Rust preferring static linkage and accept that each package might have different interior dependencies (still static and known, just not globally consistent). This doesn't represent a conflict risk because of the static linkage, but it's very much against Debian's philosophy (as I understand it).

                > I don't know if Rust packages are structured that well.

                Rust packages of different versions can gracefully coexist (they do already at the crate resolution level), but static linkage is the norm.

                > These are typically not used to install everything that goes into a whole desktop or server operating system. They're used to install a handful of applications that the user wants.

                I might not be understanding what you mean, but I don't think the user/machine distinction is super relevant in most deployments: in practice the server's software shouldn't be running as root anyways, so it doesn't matter much that it's installed in a user-held virtual environment.

                And with respect to resource consumption: unless I'm missing something, I think the resource difference between installing a stack with `pip` and installing that same stack with `apt` should be pretty marginal -- installers will pay a linear cost for each new virtual environment, but I can't imagine that being a dealbreaker in most setups (already multiple venvs are atypical, and you'd have to be pretty constrained in terms of storage space to have issues with a few duplicate installs of `requests` or similar).

                • wakawaka2822 hours ago
                  >I might not be understanding what you mean, but I don't think the user/machine distinction is super relevant in most deployments: in practice the server's software shouldn't be running as root anyways, so it doesn't matter much that it's installed in a user-held virtual environment.

                  Many software packages need root access, but that is not what I was talking about. Distro users just want working software with minimal resource usage and incompatibilities.

                  >Rust packages of different versions can gracefully coexist (they do already at the crate resolution level), but static linkage is the norm.

                  Static linkage is deliberately avoided as much as possible by distros like Debian because of the additional overhead. It's overhead on the installation side, and far greater overhead on the server, which has to serve essentially the same dependency many times over, once for each statically linked package, when it could instead have been downloaded once.

                  >And with respect to resource consumption: unless I'm missing something, I think the resource difference between installing a stack with `pip` and installing that same stack with `apt` should be pretty marginal -- installers will pay a linear cost for each new virtual environment, but I can't imagine that being a dealbreaker in most setups (already multiple venvs are atypical, and you'd have to be pretty constrained in terms of storage space to have issues with a few duplicate installs of `requests` or similar).

                  If the binary package is a thin wrapper around a venv, then you're right. But these packages are usually designed to share dependencies with other packages where possible. So, for example, if you had two packages installed that use some huge library, they only need one copy of that library between them, and updating the library only requires downloading the new version of the library. Updating the library when it is statically linked requires downloading it twice, along with the other code it's linked with, potentially using many times the resources on network and disk. Static linking is convenient sometimes, but it isn't free.

                  • lmm17 hours ago
                    OSX statically links everything and has for years. When there was a vulnerability in zlib they had to release 3GB of application updates to fix it in all of them. But you know what? It generally just works fine, and I'm not actually convinced they're making the wrong tradeoff.
                  • int_19h19 hours ago
                    Historically, the main reason dynamic linking is even a thing is that RAM was too limited to run "heavy" software like, say, an X server.

                    This hasn't been true for decades now.

                    • pjmlp18 hours ago
                      Other OSes got dynamic linking before it became mainstream on UNIX.

                      Plugins and OS extensions were also a reason why they came to be.

                    • LtWorf19 hours ago
                      This is still true.

                      Static linking works fine because 99% of what you run is dynamically linked.

                      Try to statically link your distribution entirely and see how the RAM usage and speed will degrade :)

                      • dralley18 hours ago
                        But there are pretty significant diminishing returns beyond, say, the top 80 most-linked libraries.
                      • pjmlp18 hours ago
                        Or having to use separate OS processes and IPC for every single plugin in something heavy like a DAW.
                    • wakawaka2819 hours ago
                      RAM is still a limited resource. Bloated memory footprints hurt performance even if you technically have the RAM. The disk, bandwidth, and package-builder CPU usage involved in statically linking everything is alone enough reason not to do it, if it can be avoided.
              • fragmede20 hours ago
                > Then you have higher chances of conflicts, unless I'm missing something.

                For Python, you could install libraries into a versioned dir, create a venv for each program, and then make each venv/lib/pythonX/site-packages/libraryY dir just a symlink to the appropriate versioned global copy, roughly as sketched below.
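
                A rough sketch of that scheme in Python (every path, version, and the `make_app_venv` helper here is an illustrative assumption, not an existing layout or tool):

                ```python
                # Libraries live once per version in a global store; each program's
                # venv only symlinks the versions it needs into its site-packages.
                import venv
                from pathlib import Path

                GLOBAL_STORE = Path("/usr/lib/python-store")  # e.g. .../python-store/requests-2.31.0/requests

                def make_app_venv(app_dir: Path, deps: dict[str, str]) -> None:
                    """Create a venv for one program and symlink its pinned deps from the store."""
                    venv.create(app_dir / "venv", with_pip=False)
                    site_packages = next((app_dir / "venv" / "lib").glob("python*/site-packages"))
                    for name, version in deps.items():
                        target = GLOBAL_STORE / f"{name}-{version}" / name
                        (site_packages / name).symlink_to(target, target_is_directory=True)

                # Two apps pinning the same version share one on-disk copy of it:
                # make_app_venv(Path("/opt/app-a"), {"requests": "2.31.0"})
                # make_app_venv(Path("/opt/app-b"), {"requests": "2.31.0"})
                ```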

                • LtWorf19 hours ago
                  Do you think that's user friendly?
                  • fragmede19 hours ago
                    I see it as more user-friendly: instead of forgetting to activate the venv and having the program fail to run, be broken, or act weird, you run the program and it activates the venv for you, so you don't have that problem.
                    • LtWorf10 hours ago
                      Do you think your software is so important that people will do all of that rather than use something better? (For example, your software, but patched by a distribution to work easily without all of that complication.)
                • wakawaka2819 hours ago
                  That would make it difficult to tell at a system level what the exact installed dependencies of a program are. It would also require the distro to basically re-invent pip. Want to invoke one venv program from another one? Well, good luck figuring out conflicts in their environments which can be incompatible from the time they are installed. Now you're talking about a wrapper for each program just to load the right settings. This is not even an exhaustive list of all possible complications that are solved by having one global set of packages.
            • LtWorf19 hours ago
              > my understanding is Debian could package every version variant

              Unlike PyPI, Debian patches CVEs, so having 3000 copies of the same vulnerability gets a bit complicated to manage.

              Of course, if you adopt the PyPI/venv scheme where you just ignore them, it's all much simpler :)

              • woodruffw18 hours ago
                This is incorrect on multiple levels:

                * Comparing the two in this regard is a category error: Debian offers a curated index, and PyPI doesn't. Debian has a trusted set of packagers and package reviewers; PyPI is open to the public. They're fundamentally different models with different goals.

                * PyPI does offer a security feed for packages[1], and there's an official tool[2] that will tell you when an installed version of a package is known to be vulnerable (a rough sketch of querying that feed is below). But this doesn't give PyPI the ability to patch things for you; per above, that's something it fundamentally isn't meant to do.

                [1]: https://docs.pypi.org/api/json/#known-vulnerabilities

                [2]: https://pypi.org/project/pip-audit/
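
                A quick sketch of consuming the security feed from the second point above (the endpoint shape follows the linked docs [1]; the package, version, and field names here are placeholders based on my reading of that schema):

                ```python
                # Query PyPI's JSON API for the known vulnerabilities of one release.
                import json
                from urllib.request import urlopen

                def known_vulnerabilities(package: str, version: str) -> list:
                    url = f"https://pypi.org/pypi/{package}/{version}/json"
                    with urlopen(url) as resp:
                        data = json.load(resp)
                    return data.get("vulnerabilities", [])

                # for vuln in known_vulnerabilities("requests", "2.19.1"):
                #     print(vuln.get("id"), vuln.get("fixed_in"))
                ```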

                • LtWorf10 hours ago
                  It's a completely fair comparison if one is assessing which is more secure. The answer is completely straightforward.

                  One project patches/updates vulnerable software and makes sure everything else works, while the other puts all the effort on the user.

          • FridgeSeal18 hours ago
            > How bad would it suck if every Python app you installed needed to have its own venv for example?

            You mean…the way many modern Python apps install themselves? By setting up their own venv? Which is a sane and sensible thing to do, given Python's sub-par packaging experience.

            • wakawaka2817 hours ago
              Yes, I'm indirectly saying that the way many contemporary apps are managed sucks. Python's packaging experience is fine as far as tools in that category go. The trouble happens when packages are abandoned or make assumptions that are bad for users. Even if everyone were a good package author, there would be inevitable conflicts.

              The problem extends way beyond Python. This is why we have Docker, Snap, Flatpak, etc.: to work around inadequate maintenance and package conflicts without touching any code. These tools make it even easier for package authors to overlook bad habits. "Just use a venv, bro" or "Just run it in Docker" is a common response to complaints.

              • FridgeSeal16 hours ago
                I want to challenge this assumption that “the distro way is the good way” and anything else that people are doing is the “wrong” way.

                I want to challenge it, because I’m beginning to be of the opinion that the “distro way” isn’t actually entirely suitable for a lot of software _anymore_.

                The fact that running something via docker is easier for people than the distro way indicates that there are significant UX/usability issues that aren’t being addressed. Docker as a mechanism to ship stuff for users does suck, and it’s a workaround for sure, but I’m not convinced that all those developers are “doing things wrong”.

                • wakawaka2811 hours ago
                  >The fact that running something via docker is easier for people than the distro way indicates that there are significant UX/usability issues that aren’t being addressed.

                  It's a lack of maintenance that isn't getting addressed. Either some set of dependencies isn't being updated, or the consumers of those dependencies aren't being updated to match changes in those dependencies. In some cases there is no actual work to do, and the container user is just too lazy to test anything newer. Why try to update when you can run the old stuff forever, ignoring even the slow upgrades that come with your Linux distro every few years?

                  Some people would insist that running old stuff forever in a container is not "doing things wrong" because it works. But most people need to update for security reasons, and to get new features. It would be better to update gradually, and I think containers discourage gradual improvements as they are used in most circumstances. Of course you can use the technology to speed up the work of testing many different configurations as well, so it's not all bad. I fear there are far more lazy developers out there than industrious ones however.

          • vbezhenar21 hours ago
            > How bad would it suck if every Python app you installed needed to have its own venv for example?

            I would love to have that. Actually that's what I do: I avoid distribution software as much as possible and install it in venvs and similar ways.

            • LtWorf19 hours ago
              Now tell your grandmother to install software that way and report back with the results, please.
          • superkuh20 hours ago
            >How bad would it suck if every Python app you installed needed to have its own venv for example?

            You just described every Python 3 project in 2024. Pretty much none of them is expected to work with the system Python. But your point still stands: it's not a good thing that there is no python, only pythons. And it's not a good thing that there is no rustc, only rustcs, etc., let alone trying to deal with cargo.

            • woodruffw20 hours ago
              It’s not that they don’t work with the system Python, it’s that they don’t want to share the same global package namespace as the system Python. If you create a virtual environment with your system Python, it’ll work just fine.

              (This is distinct from Rust, where there’s no global package namespace at all.)

          • fragmede20 hours ago
            > How bad would it suck if every Python app you installed needed to have its own venv for example?

            Yeah, I hacked together a shim that searches the Python program's path for a directory called venv and shoves it into sys.path. I haven't hacked together reusing venv subdirs the way pnpm does for JavaScript, but that's on my list.
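
            Something in this spirit, as a hypothetical reconstruction (not the actual shim; the layout it searches for is an assumption):

            ```python
            # Walk up from the target script, look for a "venv" directory, and push its
            # site-packages onto sys.path before running the script.
            import runpy
            import sys
            from pathlib import Path

            def find_venv_site_packages(start: Path):
                """Walk upward from `start` looking for venv/lib/python*/site-packages."""
                for parent in [start, *start.parents]:
                    hits = sorted((parent / "venv" / "lib").glob("python*/site-packages"))
                    if hits:
                        return hits[0]
                return None

            if __name__ == "__main__":
                script = Path(sys.argv[1]).resolve()
                site_packages = find_venv_site_packages(script.parent)
                if site_packages is not None:
                    sys.path.insert(0, str(site_packages))  # "shove it into sys.path"
                sys.argv = sys.argv[1:]  # make argv look normal to the wrapped program
                runpy.run_path(str(script), run_name="__main__")
            ```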

        • Asooka23 hours ago
          This is a general Linux distro problem, which is entirely self-inflicted. The distro should only carry the software for the OS itself, and any applications for the user should come with all their dependencies, like on Windows. Yes, it kind of sucks that I have 3 copies of Chrome (Electron) and 7 copies of Qt on my Windows system, but that sure works a hell of a lot better than trying to synchronise the dependencies of a dozen applications. The precise split between OS service and user application can be argued endlessly, but that's for the maintainers to decide. The OS should not be a vehicle for delivering end-user applications. Yes, some applications should remain as a nice courtesy (e.g. GNU chess), but the rest should be dropped. Basically, the split should be "does this project want to commit to working with major distros to keep its dependencies reasonable". I really hope we can move most Linux software to Flatpak and have it updated and maintained separately from the OS. After decades of running both Linux and Windows, the Windows model of an application coming with all its dependencies in a single folder really is a lot better.
          • yjftsjthsd-h22 hours ago
            > Yes, it kind of sucks that I have 3 copies of Chrome (Electron) and 7 copies of Qt on my Windows system, but that sure works a hell of a lot better than trying to synchronise the dependencies of a dozen applications.

            Does it? Even if 2 of those Chromiums and all of the Qt copies have actively exploited vulnerabilities, and it's anyone's guess if/when the application authors might bother updating?

            • Xylakant22 hours ago
              A common downside is that the distribution picks the lowest common denominator of some dependency and all apps that require a newer version are held behind. That version may well be out of support and not receive fixes at all any more, which leaves the burden of maintenance on the distribution. Depending on the package maintainer, results may vary. (We sysadmins still remember Debian's effort to backport a fix to OpenSSL, which broke key generation.)

              This is clearly a tradeoff with no easy win for either side.

              • yjftsjthsd-h19 hours ago
                > We sysadmins still remember debians effort to backport a fix to OpenSSL and breaking key generation.

                Could you remind those of us who don't remember that? The big one I know about is CVE-2008-0166, but that was 1. not a backport, and 2. was run by upstream before they shipped it.

                But yes, agreed that it's painful either way; I think it comes down to that someone has to do the legwork, and who exactly does it has trade-offs either way.

                • Xylakant7 hours ago
                  You're correct. It wasn't a backport; it was an attempt to fix a perceived issue that upstream had not fixed: a read of uninitialized memory. What the maintainer did not understand was that the call was deliberate. With this and subsequent changes, they broke the randomness in the key generation.
                  • LtWorf2 hours ago
                    Good thing OpenSSL is so good that it hasn't had any CVEs at all in all the years since that happened, right?
                • lmm17 hours ago
                  It wasn't run by upstream. It was run by the wrong mailing list, and only for one half of the Debian changes.
              • Arnavion19 hours ago
                That is a problem for LTS distributions. Rolling distributions do not have this problem.
                • yjftsjthsd-h19 hours ago
                  I don't think that's true? If packages foo and bar both need libbaz, and foo always uses the latest version of libbaz but bar doesn't, you're going to have a conflict no matter whether you're rolling or not. If anything, a slow-moving release-based distro could have an easier time if they can fudge versions to get an older version of foo that overlaps dependencies with bar.
                  • Arnavion15 hours ago
                    When the situation you describe happens, the easiest thing for the distro to do is to make the two versions of libbaz coinstallable via different package names, different sonames, etc. This is how every distro, LTS or rolling, handled openssl 1.0 vs 1.1 vs 3 for example.

                    Regardless, the original point was:

                    >>That version may well be out of support and not receive fixes at all any more, which leaves the burden of maintenance on the distribution.

                    ... and my response that this is only a problem for LTS distros stands. A rolling release distro will not get in the business of maintaining packages after upstream dropped support. It is only done by LTS distros, because the whole point of LTS distros is that their packages must be kept maintained and usable for N years even as upstreams lose interest and the world perishes in nuclear fire and zombies walk the Earth feasting on the brains of anyone still using outdated software.

                    ---

                    Now, to play devil's advocate, here's an OpenSUSE TW (rolling distro) bug that I helped investigate: https://bugzilla.opensuse.org/show_bug.cgi?id=1214003 . The tl;dr is that:

                    - Chromium upstream vendors all its dependencies, but OpenSUSE forces it to compile against distribution packages.

                    - Chromium version X compiles against vendored library version A. OpenSUSE updates its Chromium package to version X and also has library version A in its repos, so the Chromium compiled against distro library works fine.

                    - Chromium upstream updates the vendored library to version B, and makes that change as part of Chromium version Y. OpenSUSE hasn't updated yet.

                    - OpenSUSE updates the library package to version B. Chromium version X's package is automatically rebuilt against the new library, and it compiles fine because the API is the same.

                    - Disaster! The semantics of the library did change between versions A and B even though the API didn't, so OpenSUSE's Chromium now segfaults due to nullptr deref. Chromium version Y contains Chromium changes to account for this difference, but the OpenSUSE build of Chromium X of course doesn't have them.

                    You will note that this is caused by a distro compiling a package against a newer version of a dependency than upstream tested it against, not an older one, but you are otherwise welcome to draw parallels from it.

                    In this case it was fixed by backporting the Chromium version Y change to work with library version B, and eventually the Chromium package was updated to version Y and the patch was dropped. In a hypothetical scenario where Chromium could not be updated nor patched (say the patch was too risky), it could have worked for the distro to make the library coinstallable and then have Chromium use library version A while everything else uses version B.

              • vbezhenar20 hours ago
                Another downside might be that the developer just does not test their software with the OS-supplied library versions, which can cause all kinds of bugs. There's a reason why containers won in server-side development.
                • yjftsjthsd-h18 hours ago
                  That's why distros (at least of the kind that Debian is) aim to do everything themselves; they mirror upstream code ( https://sources.debian.org/ ), they write their own packaging scripts, they test the whole thing together, and then they handle problems in-house ( https://www.debian.org/Bugs/Reporting ). The upstream developer shouldn't need to have done any of this themselves.
            • vlovich12321 hours ago
              macOS and Windows both seem to do quite well on this front, actually. You should have OS-level defense mechanisms rather than trying to keep every single application secure. For example, qBitTorrent didn't verify HTTPS certs for something like a decade. It's really difficult to keep everything patched when it's your full-time job; when it's arbitrary users with a mix of technical abilities and understandings of the issue, it's a much worse problem.
              • yjftsjthsd-h12 hours ago
                Isn't that an argument that macOS/Windows aren't doing so well? On Debian, I can run `apt upgrade` and patch every single program. On Windows, I have to have a bunch of updater background processes and that still doesn't cover everything.
              • LtWorf10 hours ago
                Possibly not trying to keep applications secure explains why those systems get hacked all the time?
            • woodruffw21 hours ago
              The problem here is ultimately visibility and actionability: half a dozen binaries with known vulnerabilities aren't much better than a single distribution one, if the distribution doesn't (or can't) provide security updates.

              Or, as another framing: everything about user-side packaging cuts both ways: it's both a source of new dependency and vulnerability tracking woes, and it's a significant accelerant to the process of getting patched versions into place. Good and bad.

          • oneshtein10 hours ago
            > any applications for the user should come with all their dependencies, like on Windows

            s/dependencies/security holes/g

          • LtWorf19 hours ago
            > any applications for the user should come with all their dependencies, like on Windows

            Because that works so well on Windows, right?

          • forrestthewoods20 hours ago
            You're getting downvoted, but you're not wrong. The Linux distro model of a single global set of shared libraries is a bad and wrong design in 2024. In fact, it's so bad and wrong that everyone is forced to use tools like Docker to work around the broken design.
        • MrBuddyCasino23 hours ago
          This seems like a fool's errand to me. Unlike C/C++, Rust apps typically have many small-ish dependencies, and trying to align them to a distro-global approved version seems pointless and laborious. Pointless because Rust programs will have a lot fewer CVEs that would warrant such an approach.
          • sshine22 hours ago
            Laborious but not pointless.

            Rust programs have fewer CVEs for two reasons: the language's safe design and its experienced user base. As it grows more widespread, more thoughtless programmers will create insecure programs in Rust; those vulnerabilities just won't often be caused by memory bugs.

          • arccy22 hours ago
            I'd think logic bugs are the majority of CVEs, and Rust doesn't magically make those go away.
            • woodruffw22 hours ago
              The "majority of CVEs" isn't a great metric, since (1) anybody can file a CVE, and (2) CNAs can be tied to vendors, who are incentivized to pre-filter CVEs or not issue CVEs at all for internal incidents.

              Thankfully, we have better data sources. Chromium estimates that 70% of serious security bugs in their codebases stem from memory unsafety[1], and MSRC estimates a similar number for Microsoft's codebases[2].

              (General purpose programming languages can't prevent logic bugs. However, I would separately argue that idiomatic Rust programs are less likely to experience classes of logic bugs that are common in C and C++ programs, in part because strong type systems can make invalid states unrepresentable.)

              [1]: https://www.chromium.org/Home/chromium-security/memory-safet...

              [2]: https://msrc.microsoft.com/blog/2019/07/we-need-a-safer-syst...

      • guappa18 hours ago
        Bad developers (aka most of the developers) want to be able to break compatibility every 3 days or so, and pinning a precise version lets them do that.

        Some users commenting here are employed by python, which also has a policy of breaking compatibility all the time.

        It's very fun to develop in this way, but of course completely insecure.

        These developers might not care because they aren't in charge of security, while distributions care about security and distro maintainers understand that shipping thousands of copies of the same code means you can't fix vulnerabilities (and it's also terribly inefficient).

        My suspicion is that most developers think their own software is a special snowflake, while everyone else's software is still expected to behave normally because who's got the time to deal with that anyway?

        • woodruffw17 hours ago
          AFAICT, nobody in this thread is employed by the PSF (I assume that's what you meant by "python"). And, to the best of my knowledge, there is no "break things just because" policy within Python.
          • guappa10 hours ago
            I don't know who is signing your checks, but I do know what you are paid to work on :)

            The distinction between consultant and employee is pretty moot if you're not the tax office.

  • 01HNNWZ0MV43FF22 hours ago
    > The resulting breakages will be discovered by automated QA

    It's a bold move, Cotton.

    Wiping `Cargo.lock` and updating within semver seems sort-of reasonable if you're talking about security updates like for a TLS lib.

    "Massaging" `Cargo.toml` and hoping that automated tests catch breakage seems doomed to fail.

    Of course this was 3 years ago and this person is no fool. I wonder what I'm missing.

    Did this ever go into production?

  • devit21 hours ago
    The correct solution to packaging Rust crates for a distribution seems to be to compile them as Rust dynamic libraries and use package names like "librust1.82-regex-1+aho-corasick" for the regex crate, semver major 1, compiled for the Rust 1.82 ABI, for the given set of features (which should be maximal, excluding conflicting features).

    For unclear reasons, it seems Debian only packages the sources and not the binaries, and thus doesn't need the Rust version in the package name; it also apparently only packages the latest semver version (which might make it problematic to compile old versions, but is not an issue at runtime).

    • progval21 hours ago
      Are Rust ABIs even guaranteed to be stable for a given compiler version and set of library features? i.e., doesn't the compiler allow itself to change the ABI based on external factors, like how other dependents use it?
      • woodruffw20 hours ago
        The Rust ABI is unstable, but is consistent within a single Rust version when a crate is built with `dylib` as its type. So Debian could build shared libraries for various crates, but at the cost of a lot of shared library use in a less common/ergonomic configuration for consumers.
        • zozbot23420 hours ago
          The thing is that even `dylib` is just not very useful, given the extent to which monomorphized generic code is used in Rust "library" crates. You end up having to generate your binary code at the application level anyway, when all your generic types and functions are finally instantiated with their proper types. Which is also why it's similarly not sensible to split out a C-compatible API/ABI from just any random Rust library crate, which would otherwise be the "proper" solution here (avoiding the need to rebuild the world whenever the system rustc compiler changes).
          • pjmlp18 hours ago
            Microsoft's MFC, Borland/Embarcadero C++ OWL/VCL/FireMonkey, and the Qt libraries all combine dynamic linking and templates, as one example.

            Swift's stable ABI is another.

            It is a matter of priorities: where to invest developer resources and tooling.

          • woodruffw19 hours ago
            Yep, all true.
  • manquer8 hours ago
    The author says that build-dependency calculation is not common in the free software world.

    Aren't emerge and other systems in source-based distributions like Gentoo doing this for 20-odd years now? https://devmanual.gentoo.org/general-concepts/dependencies/

    I don't understand how the RDEPEND/BDEPEND setup doesn't handle this, and the problem is hardly unique to Rust.

    Perhaps handling this in the packaging layer of a distribution that wasn't designed to handle build dependencies properly is the problem?

  • spenczar51 day ago
    > I am not aware of any dependency system that has an explicit machine-readable representation for the “unknown” state, so that they can say something like “A is known to depend on B; versions of B before v1 are known to break; version v2 is known to work”.

    Isn’t this what Go’s MVS does? Or am I misunderstanding?

    • arccy22 hours ago
      Go's MVS only expresses >=, capped at the major version, and the selected version is the highest among all declared requirements (not the latest released upstream).

      >= doesn't really mean that anything below it will break.

  • FridgeSeal17 hours ago
    TBH I think the distro model probably works really nicely for C-and-friends, from days of yore, when there was no package manager for C, resources like disk space were limited, and the "ecosystem scale" of programming was a bit smaller. In that situation, the benefits provided by distro package managers are large.

    However, I think programming and the ecosystem is different these days and I am beginning to think the “single globally shared dylibs” model isn’t a good fit for many of the more recent programming languages and ecosystems.

    We've seen it with the bcacheFS stuff, and I think we'll see it with more stuff in the future. I suspect there are fundamental culture differences.

    As mentioned elsewhere in the thread, I’d be interested in a distro model that split out “OS” and “app” level management and does not concern itself with “user”/“app” level dependencies, beyond maybe tracking the installed versions of apps, and little else.

    • oneshtein10 hours ago
      > I’d be interested in a distro model that split out “OS” and “app” level management and does not concern itself with “user”/“app” level dependencies

      Switch to Android/Linux. :-/

    • fulafel10 hours ago
      You can get the OS / app management separation with Snap/Flatpak/AppImage.
  • josephcsible20 hours ago
    > I am proposing that Debian should routinely compile Rust packages against dependencies in violation of the declared semver, and ship the results to Debian’s millions of users.

    That sounds like a recipe for a lot more security vulnerabilities like https://news.ycombinator.com/item?id=30614766

  • wakawaka2823 hours ago
    This sounds like a very unsafe approach for this allegedly "safe" language. You should not categorically ignore the advice of the library and app authors in favor of some QA-driven, time-wasting scheme. If the authors are not releasing software stable enough to put together a distribution, then talk to them and get them to commit to stable releases and support for stable versions of libraries. I know they might not cooperate, but at that point you just wash your hands of these problems, perhaps by hosting your own mirror of the language-specific repo. When users start having problems due to space-wasting, unruly apps, you can explain to them that the problem lies upstream.
  • atoav21 hours ago
    As a software developer who also works in Rust, I applaud any effort to unify dependencies where possible. But if you took one of my projects and went all YOLO and changed my chosen dependencies in a major distro, I would get all the heat for a choice I didn't make.

    This means that if you change my chosen dependencies, you are responsible for ensuring that testing exists, unless Debian packagers themselves provide the tests needed to avoid logic errors.

  • [flagged]
    • Ar-Curunir23 hours ago
      Is this an AI-generated comment? It adds nothing at all to the discussion
      • PeterWhittaker19 hours ago
        Hardly. I just thought it was a great write-up.