Only the kernel and bootloader usually need to be specialized for most modern arm boards: the userland can be generic. Most of the problems people have with yocto are due to layers from hardware vendors which contain a lot of unnecessary cruft.
Until something somewhere deep inside the build process breaks, or you need to enable a peripheral the default device-tree for your board doesn't enable, or a gnat farts on the other side of the world, and it completely stops working.
If you buy hardware from a vendor who hands you a "meta-bigco" layer with their own fork of u-boot and the kernel, you're gonna have a bad time...
I don’t see so many mentions of Buildroot in this thread yet.
If you are interested in Yocto it might be worth having a look at Buildroot as well. I liked it a lot when I tried it.
My thread from years ago, where people told me about Buildroot:
https://news.ycombinator.com/item?id=18083506
The website of Buildroot: https://buildroot.org/
It’s a project that uses buildroot to create a small Linux for a specific device that’s only used to start a container.
I’ve wanted to try it sometime, after getting headaches with both Buildroot and Yocto. In particular, adding more libraries tends to break things.
The lack of dependency tracking isn't great but other than working around it like you described just using ccache has worked pretty well for me. My Buildroot images at work do full recompiles in under 10 minutes that way.
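For reference, turning ccache on in Buildroot is just a couple of config options; the cache directory shown here is the default location:

    # Buildroot defconfig fragment
    BR2_CCACHE=y
    BR2_CCACHE_DIR="$(HOME)/.buildroot-ccache"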
Meanwhile, the Yocto projects I've worked on had so much chaff that partial rebuilds after trivial changes took longer than that. This probably isn't an inherent Yocto/BitBake problem, but the majority of Yocto projects out there seem to take a very kitchen-sink approach, so it's what you end up having to deal with in practice.
Also, bootstrapping your own application launcher shell on a raw kernel is usually not a difficult task (depending on vendor firmware). Some folks just drop in a full Lua environment for an OS that fits in an ISO under 2.7MB, even with a modern kernel.
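For illustration, a minimal initramfs /init can be little more than a few mounts and an exec of your launcher (the paths here are made up):

    #!/bin/sh
    # minimal /init sketch: mount the basics, then hand over to the launcher
    mount -t proc proc /proc
    mount -t sysfs sysfs /sys
    mount -t devtmpfs devtmpfs /dev
    # e.g. a Lua-based launcher; substitute whatever your product actually runs
    exec /usr/bin/lua /app/main.lua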
Nir Lichtman posted a tutorial for mere mortals here:
https://www.youtube.com/watch?v=u2Juz5sQyYQ
Highly recommended exercise for students =3
I am working on professionalizing our IoT setup, which currently consists of a few dozen Raspberry Pis running Docker containers. They are individually updated by SSHing into them and running apt update manually. Docker containers are deployed with a commercial solution. I want a centralized way to update the OSes, but it does not really make sense for our small team to take on Yocto knowledge, because that would put us even further behind our development schedule. Also, the hardware needs are just too boring to justify rolling our own OS. I have not yet found a hardware-independent Linux distro that can be reliably updated in an IoT context.
I am now looking into whether we can buy ourselves out of this problem. Ubuntu Core goes in the right direction, but we don't want to make ourselves dependent on the Snap Store. Advantech has a solution for central device management with OTA updates; maybe we will go that route.
How do you guys update field devices centrally? Thanks!
I'm part of the team that builds an immutable distro based on OSTree (https://www.torizon.io) that does exactly that.
Docker/Podman support is first-class, as the distro is just a binary, Yocto-based one that we maintain so users don't have to. You can try our cloud for free with the "maker" tier. To update a device you just drop a compose file in the web UI, and you can update a whole fleet at once. You can even use hardware acceleration from the containers using our reference OCI images.
The layer is open (https://github.com/torizon/meta-toradex-torizon) and will get Raspberry Pi support soon, but you can already integrate it easily with meta-raspberrypi (we can also do this for you very quickly ;-)).
Happy to answer any questions.
Such possibilities include the various registries available for storing OS updates and branches; tooling for security scanning, SBOM generation, and signing; and Docker or Podman for building the image.
It's important to note that the container image itself is not executed upon boot, but rather unpacked beforehand.
For embedded systems, I strongly prefer the "full immutable system image update" approach over the "update individual packages with a package manager" approach. Plus you get rollbacks "for free": if the system doesn't boot into the new image, it automatically falls back to booting into the previous image.
People who suggest updating individual packages (or even worse, individual deb packages for instance) have never deployed any large scale IoT/Embedded projects. These devices are very different than servers/desktops and will break in ways you can't imagine. We started out using deb packages at Screenly before moving to Ubuntu Core, and the amount of error/recovery logic we had written to recover from broken deb package state was insane at that point.
In u-boot this is done with its boot count limit config and altbootcmd.
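Roughly, with CONFIG_BOOTCOUNT_LIMIT enabled, the environment side looks something like this; the rollback command name here is made up and depends on your partition layout:

    # Illustrative U-Boot environment
    setenv bootlimit 3
    setenv altbootcmd 'run bootcmd_previous_slot'
    saveenv
    # bootcount is incremented on each boot attempt; once it exceeds bootlimit,
    # altbootcmd runs instead of bootcmd. The booted system resets the counter
    # (e.g. fw_setenv bootcount 0) once it comes up healthy.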
It's meant (I think?) for immutable style distros like Yocto. You basically create a cpio archive and a manifest of what file goes in which partition (plus bells and whistles like cryptography). It's a good idea to have double buffering, so that if boot fails to come to a reasonable state, the device will revert after a few tries.
IMO the mutable distro model is way too fragile for long-term automated updates. Errors and irregularities accumulate with each change. Besides, the whole "update while the system is running" approach is not actually well-defined behaviour even for Linux; it just happens to work most of the time.
I’ve deployed Ubuntu Core at scale. It’s great but does have its learning curve. There’s also somewhat of a lock-in, even if you can run everything yourself. However, their security is really good.
E.g. if you ship an Ubuntu container, you have to honour the licences of all the packages that you are shipping inside that Ubuntu container. Do you?
Which is a pity, because when used correctly it's really powerful!
From the article, I can’t help but mention that one third of the "key terminology" is about codenames. Why are people so attached to codenames? I can count, and I easily know that 5 comes after 4. But I don’t know how to compare Scarthgap and Dunfell (hell, I can’t even remember them).
Out of the box configurations for Yocto images and recipes are fabulous.
Trying to modify those configurations below the application layer… you’re gonna have a bad time. Opaque error messages, the whole layers vs. recipes vs. meta issues, etc. I also can’t shake the feeling that Yocto was made to solve a chip company’s problems (i.e. supporting Linux distros for three hundred different SoCs) rather than my problems (i.e. shipping working embedded software for one or two SoC platforms).
I’ve had a lot more success with buildroot as an embedded Linux build system and I recommend it very highly.
And that's not hyperbole.
It's an odd mix of convention and bespoke madness. The convention part is that you set up a few variables and if the build system of the software is a good fit to common convention, things will just tend to work.
The bespoke madness comes in when there are slight departures from common convention and you must work out what variables to set and functions to define to fix it.
There are parts of the build system that are highly reminiscent of 1980s era BASIC programming. For example, I have seen build mechanisms where you must set variables first and then include or require a file. This is analogous to setting global variables in BASIC and then calling a subroutine with GOSUB because functions with arguments haven't been invented yet.
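A contrived machine-config sketch of that pattern (every name here is made up, and the tune include path moves around between releases):

    # conf/machine/myboard.conf
    # variables consumed by the include must be set *before* the require,
    # like globals set before a GOSUB
    DEFAULTTUNE = "cortexa72"
    require conf/machine/include/arm/armv8a/tune-cortexa72.inc

    KERNEL_DEVICETREE = "myvendor/myboard.dtb"
    UBOOT_MACHINE = "myboard_defconfig"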
But now I use Buildroot and I get things done without all the extra anxiety.
That said, once you get it figured out, it's very flexible and largely logical. :)
At one time SoCs were RAM-lean, and build-specific patching, stripping, and static linking were considered an acceptable tradeoff in Yocto build systems for IoT etc. Those use-cases are extremely difficult to justify these days with 256MB of RAM on a $5 SoC...
However, the approach was commercially unsustainable from maintainability, security, and memory-page cache-hit efficiency metrics. It should be banned given it still haunts the lower systems like a rancid fart in an elevator. =3
Yocto doesn't do static linking unless you specifically ask for it, libraries end up as .so files in /usr/lib like on all other Linux systems.
When Yocto carries patches, it's typically because those patches are necessary to fix bad assumptions upstreams make which Yocto breaks, or to fix bugs, not to reduce RAM usage.
I don't understand where you're coming from at all.
In time you may, but perhaps you were confused about the primary use-case context: bringing up small Linux SBCs. The mess Yocto can leave behind was not something manufacturers prioritized, and there are countless half-baked solutions simply abandoned within a single release cycle. Out-of-date package versions and storage-space-optimized stripped/kludged binaries are the consequences. Historically, the things people did to get a minimal OS onto flash also meant builds that were not repeatable/serviceable, buggy/unreliable (hence custom patches), and that ultimately ended up as mountains of e-waste.
My point was Yocto has always created liabilities/costs no one including its proponents wanted to address over the long-term. Best of luck =3
Yocto launched in 2010
Buildroot launched in 2005
Both of these ecosystems coexisted in the era of sub $100 embedded Linux dev boards with way more than 256MB RAM
Yocto has no excuse for making toolchain and system configuration modifications as difficult as it does.
The difference in unit volumes drives wide variances in tolerances of additional development difficulty/cost.
Some people seem irrationally passionate about the code smell of their own brand. =3
I ended up completing the project on time and under budget by adopting a strict "compiler on-board" approach (i.e. no cross-compiling), so that's where I got a bit dissatisfied with the Yocto approach of having a massive cross-compilation toolchain to deal with.
I'll have to give it another go, but I do find that if I have to have a really beefy machine to get started on an embedded project, something's not quite right.
if you want to get a little weird, you can tell yocto to compile everything into deb packages and host them yourself with something like aptly
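It's mostly a local.conf switch plus building the package index; serving the result with aptly or plain HTTP is up to you:

    # conf/local.conf
    PACKAGE_CLASSES = "package_deb"
    # after building your image: run `bitbake package-index`,
    # then serve ${DEPLOY_DIR}/deb as an apt repository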
How would I make use of the countless hours I have already invested in this piece of software? Countless keywords and the dark magic of the ever changing syntax.
But when it works, it works.
Background: I just switched to Ubuntu 22.04 for my daily use (mostly coding for side projects) but TBH I'm just using it as Windows. I use a Macbook Pro for work and know a bit of shell scripting, some Python, a bit of C and C++. Basically your typical incompetent software developer.
There are other tools in the same space like buildroot, but I would personally tend to recommend LFS to start from the fundamentals and work up, yes.
That sounds like sunk-cost fallacy. What if you switch jobs and they use something else that just works without needing dark magic syntax? If it's the best tool then so be it, but I question your reason for clinging to it.
Since it is easy for me, I prefer the Yocto SBOM, but the security side forces Black Duck binary scanning on us, which, while it finds most things in the binaries, constantly misidentifies versions, resulting in a lot of manual work.
It also does not know which patches Yocto has applied for fixing CVEs.
And none of these can figure out what is in the kernel, so they trigger an ungodly number of CVEs in parts of the kernel we don't even have compiled in.
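To be fair, the Yocto side of this is just a couple of class inherits, and cve-check does at least account for the CVE patches that recipes carry (depending on the release, SPDX generation may already be on by default):

    # conf/local.conf
    INHERIT += "create-spdx cve-check"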
It would seem to be a nearly impossible thing to automate.
Such as?
It's not a shell script, but it has Makefile rules that make it relatively simple to build a Docker image for your architecture, export it and turn it into a filesystem image, build a kernel, u-boot, etc. The referenced "example project" repo builds a basic Alpine image for the Raspberry Pi (https://github.com/makrocosm/example-project/tree/main/platf...) and others.
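The underlying Docker-to-rootfs steps look roughly like this (the image name is made up; the point is create + export, never run):

    docker build -t myapp-rootfs .
    cid=$(docker create myapp-rootfs)
    docker export "$cid" -o rootfs.tar
    docker rm "$cid"
    # rootfs.tar then gets unpacked into an ext4/squashfs image and paired
    # with a kernel and bootloader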
It was motivated by frustrations with Yocto at a new job after 8 or so years working on firmware for network equipment using an offshoot of uClinux. Hoping to convince new job to use Makrocosm before we settle on Yocto.
The whole point of using Yocto is that you want a custom distro. You could build a totally "standard" distro with Yocto but... at this point you can also just use Gentoo or Debian or whatever works.
A vanilla distro doesn't want to support a platform with a few hundred thousand units and maybe a few dozen people on the planet that ever log into anything but the product's GUI. That's the realm of things like OpenWRT, and even they are targeting more popular devices.
I understand the hobbyist angle, and we don't stand in their way. But it's much cheaper to buy a SBC with a better processor. For the truly dedicated, I don't think expecting a skill level of someone who can take our yocto layer on top of the reference design is asking too much.
Upstreaming also takes a very long time and is usually incomplete. Even when some upstream support is available you will often have to use the vendor specific kernel if you want to use certain features of the chip.
Nobody can wait around for upstream support for everything. It takes far too long and likely won't ever cover every feature of a modern chip.
I've got a script that does all this, but it's still a pain.
I've been thinking about putting everything in a monorepo, and adding poky, the third-party layers, and my proprietary layers as submodules. Then, when the build server needs to check out the code or a new developer needs to be onboarded, they just `git clone` and `git submodule update`. When it's time to update to the latest version of Yocto, update your layer submodules to the new branch. If you need to go back in time and build an older version of your firmware image, just roll back to the appropriate tag from your monorepo.
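Concretely, something like this (branches, URLs, and paths are just examples):

    # one-time setup in the monorepo
    git submodule add -b scarthgap https://git.yoctoproject.org/poky layers/poky
    git submodule add -b scarthgap https://git.openembedded.org/meta-openembedded layers/meta-openembedded

    # build server / new developer
    git clone --recurse-submodules <your-monorepo-url>
    git submodule update --init --recursive   # in an existing checkout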
Anyone else have another solution to this issue?
Oh yeah, and the build times. It's crazy disk I/O bound. But if you're using something like Jenkins on an AWS instance with 96GB of RAM, set up your build job to use `/tmp` as your work directory and you can do a whole-OS CI build in minutes.
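A minimal sketch of that split in local.conf, assuming downloads and the sstate cache stay on persistent storage (paths are illustrative):

    # conf/local.conf
    TMPDIR = "/tmp/yocto-tmp"
    DL_DIR = "/srv/yocto/downloads"
    SSTATE_DIR = "/srv/yocto/sstate-cache"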
Other kas features I love:
- Patching 3rd party layers with quilt
- Configuration fragments
- Chaining together configuration fragments
As another example, here is my kas setup for building rootfs images and container images for various different boards:
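For anyone who hasn't seen the format, a generic minimal kas file looks roughly like this; the machine, branch, and URLs are illustrative, not a specific board setup:

    header:
      version: 14   # depends on the kas version you have installed
    machine: qemuarm64
    distro: poky
    target: core-image-minimal
    repos:
      poky:
        url: https://git.yoctoproject.org/poky
        branch: scarthgap
        layers:
          meta:
          meta-poky:
      meta-openembedded:
        url: https://git.openembedded.org/meta-openembedded
        branch: scarthgap
        layers:
          meta-oe: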
Shameless plug, there is also my own tool, yb. It's very early days though: https://github.com/Agilent/yb
I have (by accident) become the Yocto SME at my $dayjob. Probably the biggest positive has been free SBOM generation, and cooking things like kSLOC counts into recipes.
The learning curve stinks, but the build suite is very powerful.
Bitbake is a meta-compiler, and the tool suite is very powerful. Just realize that this means you need to be an expert error-message debugger, and able to jump into (usually C/C++) code to address issues and flow patches upstream.
It really is gratifying when you finally kick out a working image.
There's nothing as disappointing as starting a build, going out for a couple hours, and coming back to a terminal full of red.
But when it works, it works.
I believe on systemd-based systems these are service-units you need to enable, and with yocto, possibly install?
systemctl enable --now getty@tty1 (or serial-getty@ttyS0 for a serial console), etc.
Or something like that. I’ve experienced similar issues while working on an x86-based NAS and also on the RPi when enabling serial consoles.
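On the Yocto side, the usual knob is the machine config's SERIAL_CONSOLES, which the systemd-serialgetty recipe turns into enabled serial-getty@ units; the baud rate and tty below are illustrative:

    # machine .conf (or an overlay)
    SERIAL_CONSOLES = "115200;ttyS0"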