225 points | by edu4rdshl | 1 day ago
They've also built an incredible ecosystem around podman itself. Red Hat has been absolutely cooking with containers recently.
- Podman is significantly simpler than Docker; notably, Podman doesn't need to run a background process, so your containers run directly as separate processes.
- Podman avoids some long-standing security design weaknesses in Docker ("rootless"). Docker rootless _is_ a thing but has compatibility limits.
FWIW, Podman has an open source alternative to Docker Desktop as well.
Meh
systemd is, at its core, an app for running services, such as containers.
You should read up on podman and systemd before making up more arguments.
And then here comes Quadlets and the systemd requirements. Irony at its finest! The reality is Podman is good software if you've locked yourself into a corner with Dan Walsh and RHEL. In that case, enjoy.
For everyone else the OSS ecosystem that is Docker actually has less licensing overhead and restrictions, in the long run, than dealing with IBM/RedHat. IMO that is.
But yeah I already use a distro with systemd (most folks do, I think), so for me, using Podman with systemd doesn't add a root daemon, it reuses an existing one (again, for most Linux distros/users).
Today I can run docker rootless and in that case can leverage compose in the same manner. Is it the default? No, you've got me there.
SystemD runs as root. It's just ironic given all the hand-waving over the years. And Docker, and all its tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation, which is the selling point.
I've used Podman. It's fine. But the arguments of the past aren't as sharp as they originally were. I believe Docker improved because of Podman, so there's that. But to discount the reality of the doublespeak by paid-for representatives from RedHat/IBM is, again, ironic.
I would argue that Docker’s tooling is not well thought out, and that’s putting it mildly. I can name many things I do not like about it, and I struggle to find things I like about its tooling.
Podman copied it, which honestly makes me not love podman so much. Podman has quite poor documentation, and it doesn’t even seem to try to build actually good designs for tooling.
> I can name many things I do not like about it, and I struggle to find things I like about its tooling.
Please share.
FROM [foo]: [foo] is a reference that is generally not namespaced (ubuntu is relative to some registry, but it doesn't say which one) and it's expected to be mutable (ubuntu:latest today is not the same as ubuntu:latest tomorrow).
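For illustration, the closest thing to pinning that reference is a digest, which you then have to maintain by hand (the digest below is a placeholder, not a real one):

# Mutable: resolved against whatever registry is configured, and can point at something different tomorrow
FROM ubuntu:latest

# Fully qualified and pinned: immutable, but nothing keeps it up to date for you
FROM docker.io/library/ubuntu:24.04@sha256:<digest-you-vetted>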
There are no lockfiles to pin and commit dependency versions.
Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
Mostly resulting from all of the above, build layer caching is basically a YOLO situation. I've had a build result in literally more than a year out-of-date dependencies because I built on a system that hadn't done that particular build for a while, had a layer cached (by name!), and I forgot to specify a TTL when I ran the build. But, of course, there is no correct TTL to specify.
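For what it's worth, the only blunt workaround I know of is remembering the right flags on every invocation, which mostly defeats the point of having a cache:

# Re-pull the base image and ignore every cached layer
docker build --pull --no-cache -t myapp .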
Every lesson that anyone in the history of computing has ever learned about declarative or pure programming has been completely forgotten by the build systems.
Why on Earth does copying in data require spinning up a container?
Moving on from builds:
Containers are read-write by default, not read-only.
Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
The tooling around what constitutes a running container is, to me, rather unpleasant. I can't make a named group of services, restart them, possibly change some of the parts that make them up, and keep the same name in a pleasant manner. I can 'compose down' and 'compose up' them and hope I get a good state. Sometimes it works. And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
I'm sure I could go on.
I think you're conflating software build with environment builds - they are not the same and have different use cases people are after.
> Why on Earth does copying in data require spinning up a container?
It doesn't.
> Containers are read-write by default, not read-only.
I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.
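For example, something along these lines (names and paths are illustrative):

# Read-only root filesystem; writable space only where you explicitly grant it
docker run --read-only --tmpfs /tmp -v app-data:/var/lib/app myapp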
> Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
Almost all of this is wrong.
> And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
Pretty much everything you've outlined is, as I see it, a misunderstanding of what containers aim to solve and how they're operationalized. If all of these things were true, container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.
> I think you're conflating software build with environment builds - they are not the same and have different use cases people are after.
They're not so different. An environment is just big software. People have come up with schemes for building large environments for decades, e.g. rpmbuild, nix, Gentoo, whatever Debian's build system is called, etc. And, as far as I know, all of these have each layer explicitly declare what it is mutating; all of them track the input dependencies for each layer; and most or all of them block network access in build steps; some of them try to make layer builds explicitly reproducible. And software build systems (make, waf, npm, etc) have rather similar properties. And then there's Docker, which does none of these.
> > Containers are read-write by default, not read-only.
> I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.
Right. The issue is that the default is wrong. In a container:
$ echo foo >the_wrong_path
works, by default, using COW. No error. And the result is even kind of persistent -- it lasts until the "container" goes away, which can often mean "exactly until you try to update your image". And then you lose data.

> > Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
> Almost all of this is wrong.
I would really like to believe you. I would love for Docker to work better, and I tried to believe you, and I looked up best practices from the horse's mouth:
https://docs.docker.com/get-started/docker-concepts/running-...
and
https://docs.docker.com/get-started/docker-concepts/running-...
Look, in every programming language and environment I've ever used, even assembly, an interface has a name. If I write a function, it looks like this:
void do_thing();
If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.

At least the docs try to remind people that the whole mechanism is "insecure by default".
I even tried asking a fancy LLM how to export a port by name, and LLM (as expected) went into full obsequious mode, told me it's possible, gave me examples that don't do it, told me that Docker Compose can do it, and finally admitted the actual answer: "However, it's important to note that the OCI image specification itself (like in a Dockerfile) doesn't have a direct mechanism for naming ports."
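For contrast, this is roughly what I mean by a named interface elsewhere: a Dockerfile can only expose a number, while a Kubernetes pod spec at least lets the port carry a name that other objects can refer to (snippet is illustrative):

# Dockerfile: the interface is just a number
EXPOSE 8080

# Kubernetes container spec: the same port, but with a name
ports:
  - name: http-api
    containerPort: 8080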
> > And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
> What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
I'd like to have some way for a developer to declare that their software can be run with the 'app' container and a 'mysql' container and you connect them like so. Or even that it's just one container image and it needs the following volumes bound in. And you could actually wire them up with different orchestration systems, and the systems could all read that metadata and help do the right thing. But no, no such metadata exists in an orchestration-system-agnostic way.
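Today the closest thing is writing that wiring out by hand in a compose file, which only means something to compose (every name and value below is made up):

services:
  app:
    image: example/app:latest
    depends_on:
      - mysql
    environment:
      DATABASE_URL: mysql://app:secret@mysql:3306/app
    ports:
      - "8000:8000"
  mysql:
    image: mysql:8
    environment:
      MYSQL_DATABASE: app
      MYSQL_USER: app
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data: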
> If all of these things were true container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.
Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.
that means your podman containers don't run as root unless you want them to.
mine runs as user services
I run all my containers, when using Docker, as non-root. So where is the upside other than where your trust lies?
When I bring this up online the answer is invariably "well use quadlets then" (i.e. systemd).
>systemd doesn't add a root daemon, it reuses an existing one
lol, the same could be said of every docker container I've ever run....
Please try to understand the podman ecosystem before lashing out.
alias docker=podman
# If you want to still use Docker Compose
# export PODMAN_COMPOSE_PROVIDER=docker-compose
# On macOS: `brew install podman-compose`
export PODMAN_COMPOSE_PROVIDER=podman-compose
export PODMAN_COMPOSE_WARNING_LOGS=false
Most of my initial issues transitioning to Podman were actually just Docker (and Docker Desktop) issues.

Quadlets are great, and Podman has a tool called podlet [2] for converting Docker Compose files to Quadlets.
I prefer using a tool like kompose [3] to turn my Docker Compose files into Kubernetes manifests. Then I can use Podman's Kubernetes integration (with some tweaks for port forwarding [4]) to replace Docker Compose altogether!
[1] https://github.com/containers/podman-compose
[2] https://github.com/containers/podlet
[3] https://github.com/kubernetes/kompose
[4] https://kompose.io/user-guide/#komposecontrollerportexpose
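A rough sketch of that workflow, assuming a docker-compose.yaml in the current directory (the generated manifest name will vary by project):

# Convert the compose file into Kubernetes manifests
kompose convert -f docker-compose.yaml -o k8s/

# Run the generated manifests with Podman's Kubernetes support
podman kube play k8s/app-deployment.yaml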
For the most part this worked without issue. The only snag I ran into was that my CI provider can't use OCI-formatted images. Podman lets you select which image format to build, so I was able to work around this using the `--format=docker` flag.
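For reference, that looks something like this (image name is illustrative):

# Build with the Docker manifest format instead of the default OCI format
podman build --format=docker -t registry.example.com/myapp:latest .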
Unlike other posts I've seen around, I haven't really encountered issues with CI or local handling of images - though I am using the most bare-bones of CI, SourceHut. And I actually feel better about using shell scripts for building the images than a Dockerfile.
I like it because I am deploying to GCP and storing containers in Artifact Registry. Cloud Build has good interop with those other products and Terraform, so it's pretty convenient to live with.
The pipelines themselves are pretty straightforward. Each step gets an image that it is executed in, and you can do anything you want in that step. There is some state sharing between steps, so if you build something in one step, you can use it in another.
I was prepared to roll it all back, but I never ended up running into problems with it. It's just something that happens in the background that I don't have to think about.
In podman, you have to use the "full path" to work with Docker Hub, e.g. `docker.io/library/nginx`.
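If typing the full path gets old, you can restore Docker-style short names by declaring search registries, at the cost of reintroducing the ambiguity about where an image actually comes from:

# /etc/containers/registries.conf (or ~/.config/containers/registries.conf)
unqualified-search-registries = ["docker.io"]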
If you already have colima lying around, that means you have lima and lima ships with both podman and podman-rootful templates:
limactl create --name=proot template://podman-rootful --vm-type=qemu --cpus=4 --memory 4 --disk 20
# it will emit the instructions at the end, but for context
podman system connection add lima-proot "unix:///$HOME/.lima/proot/sock/podman.sock"
podman system connection default lima-proot
podman version # <-- off to the races
This was for some hobby project, so I didn't spend a ton of time, but it definitely wasn't as set-and-forget as Docker was. I believe I had to set up a separate VM or something? This was on Linux as the host OS too. It's been a while, so apologies for the hazy memory.
Or it's very possible that I botched the entire setup. In my perfect world, it's a quick install and then `podman run`. Maybe it's time to give it another go.
As a side note, it is so _refreshing_ to observe the native apps popping up for Linux lately; it feels like a turning point away from the Electron-everything trend. Apps are small, start immediately, and are well integrated with the rest of the system, both functionally and visually. A few other examples of native apps: Cartero, Decibels, GitFourchette, Wike – to name a few that I'm using.
In the spirit of the OP, I also run podman rootless on a home server running the usual home lab suspects with great success. I've taken to using the 'kube play' command to deploy the apps from kubernetes yaml and been pleased with the results.
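For anyone curious, the day-to-day loop is roughly this (the manifest name is just whatever file you keep your apps in):

# Start everything defined in the manifest
podman kube play homelab.yaml

# Tear it back down
podman kube down homelab.yaml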
I only ever found one thing that didn't work with it at all - I think it was Gitlab's test docker images because they set up some VMs with Vagrant or something. Pretty niche anyway.
podman version
podman pull public.ecr.aws/localstack/localstack:4.1
podman run --detach --name lstack -p 4566:4566 public.ecr.aws/localstack/localstack:4.1
# sorry, I don't have awscli handy
export AWS_DEFAULT_REGION=us-east-1 AWS_ACCESS_KEY_ID=alpha AWS_SECRET_ACCESS_KEY=beta
$HOMEBREW_PREFIX/opt/ansible/libexec/bin/python -c '
import boto3
sts = boto3.client("sts", endpoint_url="http://localhost:4566")
print(sts.get_caller_identity())
'
{'UserId': 'AKIAIOSFODNN7EXAMPLE', 'Account': '000000000000', 'Arn': 'arn:aws:iam::000000000000:root', ...
I'll spare you the verbosity, but:

2025-02-22T18:51:56.427 INFO --- [et.reactor-0] localstack.request.aws : AWS s3.CreateBucket => 200
2025-02-22T18:52:14.332 INFO --- [et.reactor-0] localstack.request.aws : AWS s3.PutObject => 200
cat > sample-stack.yaml <<'YAML'
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Iam0:
    Type: AWS::IAM::Role
    Properties:
      RoleName: Iam0
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AdministratorAccess
      AssumeRolePolicyDocument:
        Principal:
          AWS:
            Ref: AWS::AccountId
        Effect: Allow
        Action: sts:AssumeRole
YAML
create_stack_command_goes_here
2025-02-22T18:55:02.657 INFO --- [et.reactor-0] localstack.request.aws : AWS cloudformation.CreateStack => 200
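(For reference, the create-stack command elided above would look roughly like this with awscli installed; the stack name is arbitrary:)

aws cloudformation create-stack \
  --endpoint-url http://localhost:4566 \
  --stack-name sample-stack \
  --template-body file://sample-stack.yaml \
  --capabilities CAPABILITY_NAMED_IAM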
---ed: ah, I bet you mean the lambda support; FWIW they do call out explicit support for Podman[1] but in my specific setup I had to switch it to use -e DOCKER_HOST=tcp://${my_vm_ip}:2375 and then $(podman system service tcp://0.0.0.0:2375) in the lima vm due to the podman.sock being chown to my macOS UID. My life experience is that engineering is filled with this kind of shit
I used https://github.com/aws-samples/aws-cloudformation-inline-pyt... to end-to-end test it
1: https://github.com/localstack/localstack/blob/v4.1.1/localst...
I stayed away from docker all these years and tried podman from scratch last year after docker failed to work for a project I was experimenting with.
Took an hour to read various articles and get things working.
One thing I liked was that it does not need sudo privileges or screw with the networking.
Podman machine is fine, but occasionally you have to fix things _in the vm_ to get your setup working. Those bugs, along with other breakages during many upgrades, plus slower performance compared to Docker, made me switch back. This is just for local dev with a web app or two and some supporting services in their own containers via compose, nothing special. Totally not worth it IMO.
It's an extra step, but not a painful one -- the default podman machine configuration seems to work pretty well out of the box for most things.
Honestly, for my use-case (running the Supabase stack locally), it was seamless enough to switch that I'm a little surprised a bash script like this is necessary. On my Mac, I think it was simply `brew install podman` followed by `podman machine start`, and then I got back to work as if I were still using Docker.
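For anyone else on macOS, the whole setup is roughly this (VM sizing flags are optional and omitted):

brew install podman
podman machine init    # creates the Linux VM the containers run in
podman machine start
podman run --rm -it docker.io/library/alpine sh    # sanity check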
By far the most tedious part of the switch was fully uninstalling Docker, and all its various startup programs & background processes.
I used to use docker compose, but migrated to podman quadlets. The only thing I miss is being able to define every container I run in a pod in the .pod file itself. Having it integrate with systemd is great.
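For anyone who hasn't seen quadlets, a minimal rootless unit looks something like this (image and port are placeholders); drop it into ~/.config/containers/systemd/, run `systemctl --user daemon-reload`, and it shows up as a normal systemd user service (here, whoami.service):

# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Example container managed as a quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target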
It's all daemonless, rootless, and runs directly with your host kernel, so it should be as simple as an application of this kind gets. Probably you followed some instructions somewhere that involved whatever the podman equivalent of docker-machine is?
Docker Compose is really great for multi-container deployments on a single machine. And Docker Swarm takes that same Compose specification (although there were historical differences) and brings it over to clusters, all while remaining similarly simple. I'm surprised that outside of Docker Swarm, Nomad, or lightweight Kubernetes distros like K3s there haven't been that many attempts at simple clustering solutions. Even then, Kubernetes (which Podman supports) ends up being more complex.
Podman can work with local pods, using the same yaml as for K8s. Not quite docker swarm, but useful for local testing IME when k8s is the eventual target.
Can you provide any documentation about that?
https://docs.podman.io/en/latest/markdown/podman-system-serv...
In places where you're doing a `dnf install podman` all you typically need to do is start the service and then point either the podman cli or docker cli directly at it. In Fedora for example it's podman.service.
I honestly prefer using the official docker cli when talking to podman.
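Concretely, on a systemd distro the rootless setup is roughly:

# Expose the Podman API socket for your user
systemctl --user enable --now podman.socket

# Point the stock docker CLI (or anything speaking the Docker API) at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker ps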
I did run into one issue though. Rootless mode isn't supported (or at least isn't easy to set up) when the user account is a member of an Active Directory domain (or whatever Linux equivalent my work laptop is running).
Though root mode works, I can't use podman desktop and I have to sudo every command.
Does one prefer using WSL2 or Hyper-V as the machine provider? From what I understand, podman provides the container engine natively so nothing additional is required. Do container runtimes like containerd only come into play when using kubernetes? Not a windows specific question, but is there a reason to pick a podman native container vs one in a k8s context. I understand podman supports k8s as well. Other info: No current containers (docker or k8s) are in play.
Thanks in advance.
Podman also supports running in "rootless" mode, using kernel.unprivileged_userns_clone and subuid/subgids for the sandboxing and slirp4netns for the networking. This obviously isn't exactly the same as rootful networking, but it works well enough for 99% of the use cases.
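The mapping itself lives in /etc/subuid and /etc/subgid, one line per user; the range below is a common default, not a requirement:

# /etc/subuid and /etc/subgid
alice:100000:65536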
If you are running Linux, I think using Podman instead of Docker is generally a no-brainer. I think the way they've approached rootless support is a lot better than Docker and things "just work" more often than not.
[1]: https://docs.podman.io/en/latest/markdown/podman-generate-sy...
And while we’re at it, what’s your favorite non-sudo Docker alternative? And why?
Have they started releasing packages yet?
On my Debian box, I build the podman release target in a chroot, extract the archive in /opt/, and use stow to install/uninstall the package. You'll also want the latest crun, which I also place in the stow directory and install with stow.
Shameless plug: Alternatively, if you are on NixOS, you can just use compose2nix.
The script does almost all of the things required for existing Docker containers: migrating networks, blocks, restart mechanisms, etc. That leaves out just one thing: migrating any other third-party scripts that use Docker over to Podman-based instructions. Handling that would greatly improve the experience. Good luck.
This is a cool tool for the decrepit hand-configured server with no documentation that has been running since 2017 untouched and needs an update... but I would encourage you to not fall into this trap to begin with.
okay, but, like... will it?
is there new maintenance stuff you've completely ignored? (I've noticed this is more common when maintenance is someone else's job.) is it completely new and none of us know about it so we get blindsided unless everything goes exactly right every time? do we get visibility into what it's doing? can we fix it when (not if, when) it breaks? can everyone work on it or is it impossible for anyone but the person who set it up? they're good at thinking up things that should fix the problem but less good at things that will.
I'm a fan of cross-functional feature teams because others in the software engineering ecosystem like QA, systems people, ops, etc. tend not to have this problem. programmers are accountable to the other stakeholders up front, bad ideas are handled in real time, and- this is the most important part- everyone learns. (I won't say all systems people are cantankerous bastards... but the mistakes they've harangued me for are usually the mistakes I don't make twice.)
How does one install podman on Debian and how does one get a Debian image to run inside podman?
It is usually easier to install: most distros ship a relatively recent version of Podman, while Docker is split between docker.io (ancient), docker-ce (free but not in the repos) and docker-ee.
Not everything is rosy: some tools expect to be talking to real Docker and don't get fooled by `ln -s podman docker`. But it is almost there.
Regarding Debian, just `sudo apt install podman && podman run -it debian` - see https://wiki.debian.org/Podman
I had so many problems that I went back to Docker, because current Podman didn't seem to be trivially installable on Debian 12.
If this is e.g. a webserver and I only need my FastCGI backend built by myself, I can still have the reverse proxy, database, and every other package handled by the distro.
No one said you need backports. More like: If it fits 90% and one package doesn't work, you get it from somewhere else - that doesn't invalidate the concept of a distro for me. YMMV
Boring stability is the goal, but if Debian does not fit as is, then why not find a total package that is somewhat more cutting edge but does fit together? Especially given the fact that Debian does customization to upstream, so esoteric times esoteric.
Also I don't usually run "supported". I just run a system that fits my needs.
Reverse proxy, DB, etc from Debian. The application server is built and deployed with nix. The Python version (and all the dependencies) that runs the application server is the tagged one in my nix flake which is the same used in the development environment.
I make sure that PostgreSQL is never upgraded past what is available in the latest Debian stable on any of the dev machines.
2) `podman run --entrypoint="" --rm -it debian:stable /bin/bash`
In most instances you can just alias docker to podman and carry on. It uses OCI-formatted images just like Docker and uses the same registry infrastructure that Docker uses.
I would think the Docker infrastructure is financed by Docker Inc as a marketing tool for their paid services? Are they ok when other software utilizes it?
I can't speak to what Docker Inc. is okay with or not.
It's all public infrastructure for hosting container images; I don't think Docker-the-company minds other software interfacing with it. After all, they get to call them 'Docker images' and 'Dockerfiles' and put their branding everywhere. At this point
Podman has much better systemd integration: https://www.redhat.com/en/blog/quadlet-podman
podman run -it debian bash