They've built an entirely statically linked user space for Linux. Until then I never questioned the default Linux "shared libraries for everything" approach and assumed it was the best way to deliver software.
For every little CLI tool I wrote at work, I used to create distro packages, or a tarball with a shell script that set LD_LIBRARY_PATH to find the correct versions of the XML libraries and whatever else I used.
It didn't have to be this way. Dealing with distro versioning headaches, or the finicky custom packaging of libraries into that tarball, just to let users run my 150 KB binary.
Since then I've mostly used static linking where I can, and AppImages otherwise. I'm not developing core distro libraries; I'm just developing a tiny "app" my users need to run. I'm glad that with newer languages like Go, static linking is the default.
Don't get me wrong, dynamic linking definitely has its place. But by default our software deployment doesn't need to be this complicated.
LD_LIBRARY_PATH is a debugging and software engineering tool, and shouldn't ever be part of shipped software.
It's simply that when you do want external but bundled neighboring libs, there is a good way to do it.
If you require a library, you can specify it as a dependency in your dpkg/pacman/portage/whatever manifest, and the system should take care of making it available. You shouldn't need to write custom scripts that trawl around for the library. Another approach could be to give your users a "make install" that sticks the libraries somewhere in /opt and adds that path to LD_LIBRARY_PATH at the lowest priority, as a last resort, maybe?
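For the dpkg case, that really is just a Depends field in debian/control; a minimal sketch, with a hypothetical package name and versions:

    Package: mytool
    Version: 1.0-1
    Architecture: amd64
    Depends: libxml2 (>= 2.9), libc6 (>= 2.17)
    Description: tiny CLI tool that leans on the distro for its libraries

apt/dpkg then refuses to install the tool unless compatible versions of those libraries are present.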
This was the biggest pain point in deploying *application software* on Linux, though. Distributions with different release cycles provide different versions of various libraries and expect your program to work with all of those combinations. The big famous libraries like Qt and GTK might follow proper versioning, but the smaller libraries from distro packages come with no such guarantee. Half of them don't even use semantic versioning.
Imagine distros swapping out the libraries you've actually tested your code against with their own builds, for "security fixes" or whatever the reason. That causes more problems than it fixes.
The custom startup script was there to find the same XML library I'd used, inside the tarball I packaged the application in. Users could then extract that tarball wherever they needed, including /opt, run the script to start my application, and it ran as it should. IIRC we even used rpath for this.
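The rpath variant bakes the lookup into the binary itself, so no wrapper script is needed at all; a sketch with hypothetical paths, using the $ORIGIN trick:

    gcc -o myapp main.c -L./libs -lxml2 -Wl,-rpath,'$ORIGIN/libs'

$ORIGIN expands at load time to the directory containing the executable, so the tarball keeps working no matter where it's extracted.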
This is a red herring. Distros existed before semantic versioning was defined and had to deal with those issues for ages. When packaging, you check for behaviour changes in the package and its dependencies. The version numbers are a tiny indicator, but mostly meaningless.
To me it seems more attractive than how Nix does it, but I guess they considered it, saw conflicts, and therefore went with hashes.
Recently, in the Python ecosystem, the `uv` package manager lets you install a package as it was on a certain date. Additionally, you can "freeze" your dependencies to a certain date in pyproject.toml, so when someone clones the project and installs dependencies, they get them as of the date you chose to freeze at.
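If I recall the knob correctly, the date freeze is uv's `exclude-newer` setting; something like this in pyproject.toml (date hypothetical):

    [tool.uv]
    exclude-newer = "2024-06-01T00:00:00Z"

The resolver then pretends no package published after that timestamp exists.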
Personally I love this method much more than versioning.
I think versioning is mostly useful just for talking about software, and maybe for really major additions/changes, e.g. io_uring shipping with Linux mainline.
Yeah, but the package maintainer for a widely used library doesn't actually have the resources to do this. (Heck, a package maintainer for a non-trivial application likely doesn't have the resources to do this.) Basically they update and hope to get some bug reports from users.
So it is complicated, and there is no solution that fits every context; therefore we use the best approximation.
I don't believe that it causes more problems than it fixes. It's just that you didn't notice the problems being silently fixed!
There are issues related to different distros packaging different versions of libraries. But that's just an issue with trying to support different distros and/or their updates. There are tradeoffs with everything. Dynamic linking is more appropriate for things that are part of a distro, because it creates less turnover of packages when things update.
It depends a lot on ABI/API stability and actual modularity of ... components. There's not always a guarantee of that.
Shared libraries add a lot of complexity to a system, all on the assumption that people can actually build modular code well in any language that can produce a shared library. Sometimes you have to recompile because, while a #define might still exist, its value may have changed between versions, and chaos can ensue - including new, unexpected bugs - for free!
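A sketch of that #define trap, with a hypothetical libfoo: the constant gets baked into the caller at compile time, so dropping in a newer .so silently desynchronizes the two sides.

    /* libfoo.h, version 1 */
    #define FOO_BUF_MAX 256   /* version 2 quietly bumps this to 512 */

    /* application code, compiled once against version 1 */
    char buf[FOO_BUF_MAX];    /* 256 bytes, frozen into the app binary */
    foo_fill(buf);            /* a version-2 libfoo.so may now write up to
                                 its own FOO_BUF_MAX (512) bytes into buf:
                                 an overflow with no linker error anywhere */

The dynamic linker is perfectly happy, because the symbol table never changed; only the compile-time constant did.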
Static linking has its place, no doubt, but it should not be the norm.
The former often makes a lot of sense in terms of deployment or maintenance: it's all really one big system, so why rebuild and deploy 60 parts to change one when it's all done by you anyway? I'm not sure that's the use case every build environment should default to assuming, but shared libraries make a ton of sense for it regardless.
60 programs maintained and built by third parties, which just happen to share a major version of a library (other than something like the stdlib, obviously), would seem nuts to manage in a single runtime environment (regardless of static or shared), though!
That, 100%, is our scenario.
Windows programs generally do link dynamically to core Windows libraries—which users are never expected to mess with anyway—and the C and C++ runtimes, but even these can be statically linked against with `cl.exe /MT`. Some programs even distribute newer versions of the C/C++ runtimes; that's where the famous Visual C++ Redistributables come from.
I agree, though—static linkage should be the default for end-user programs. I long for a time when Linux gets 'libc' redistributables and I can compile for an old-hat CentOS 6 distribution on a modern, updated, rolling-release Arch Linux without faffing with a Docker container.
Every time I tried to get a third-party binary app running on Linux, I discovered that the vendor shipped half their dependencies as blobs and relied on the system for the other half, which is an incredibly brittle setup that breaks constantly.
The entry point usually is a script that sets LD_LIBRARY_PATH and then calls into the executable.
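The wrapper is usually just a few lines; a sketch, with a hypothetical layout:

    #!/bin/sh
    # resolve the directory this script lives in
    HERE="$(dirname "$(readlink -f "$0")")"
    # prefer the bundled libraries over whatever the system provides
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/theapp" "$@"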
The Linux linking model is so bad. So extremely very bad. Build systems should never rely on whatever garbage happens to be around locally. The glibc devs should be ashamed.
Specifically, what is stopping libc redistributables from being possible? And why did disk size limitations from back then cause this situation?
Consider how dynamically linking libc works out when a critical security bug is found and fixed: to update your system, you update libc.so.
If it were statically linked, you would need to update every binary in your distribution.
Nix is much closer to a "good" dynamic linking solution IMO, except that it makes it overly difficult to swap things out at runtime. I appreciate that the default is reproducible and guaranteed to correspond to the hash, but sometimes I want to override that at runtime for various reasons. (It's possible this has changed since I last played with that tooling; it's been a while.)
Depends on what you are trying to achieve though.
Suppose your dev shell contains app Foo v1 which uses lib Bar v2. It links directly against the full nix store path. So loading a different version of lib Bar (say v3) into your dev shell, which puts it on your PATH, doesn't affect Foo - the Foo executable will still link against Bar v2. That's by design and a very good thing! It assures reproducibility and fixes the version incompatibility issue that dynamic linking typically suffers from.
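Concretely, "links directly against the full nix store path" looks roughly like this in ldd output (hash and versions hypothetical):

    $ ldd ~/.nix-profile/bin/foo
        libbar.so.2 => /nix/store/8a1kw...-bar-2.0/lib/libbar.so.2

Since the reference is an absolute store path rather than a bare soname looked up along a search path, putting Bar v3 somewhere on your shell's paths doesn't change what Foo loads.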
However, what if I want to swap it out for whatever reason? Right now I have to modify the package definition and rebuild it. That's needlessly slow and resource-intensive for some one-off monkey patch. It also might not be feasible at all on my hardware: most people's rigs aren't cut out to build Chromium or LLVM, for example.
The UX is also absolutely terrible. Have you ever tried to patch a nixpkgs provided definition? I actually gave up on using nix as my primary build tool over this.
My expectation going in was a neat and tidy collection of independent, standalone package definitions that imported one another to satisfy dependencies as necessary. The reality was a massive monolith containing all packages at once making extracting just one to patch it rather complicated in some cases.
I believe flakes were supposed to address this exact dependency modification UX issue somewhat, but they caused my code to break in other ways and I decided I was done sinking time into tinkering with tooling rather than building software.
It is also a statically linked Linux distribution. But its core idea is reproducible Nix-style builds (including installing as many different versions/build configurations of any package as you like), with less PL fluff: no fancy functional language, just some ugly Jinja2/shell-style build descriptions, which in practice work amazingly well because the underlying package/dependency model is very solid - https://stal-ix.github.io/IX.html.
It is very opinionated (just see this - https://stal-ix.github.io/STALIX.html), and a bit rough, but I was able to run it in VMs successfully. It would be amazing if it stabilizes one day.
For example, in places where a filesystem-related "standard" has changed, I have old static binaries that fail to start entirely, whereas same-dated dynamically linked binaries work as long as you bother to actually install their dependencies.
I am convinced that every argument in favor of static linking is because they don't know how to package-manager.
If you want a newer version, too bad: your OS doesn't ship it, so better luck in the next release. Or you can set up a private repo and either ship a binary that has the dependencies included (shipping half the userland with your audio player), or package the newer version of the library, which will unwittingly break half your system, if not today then surely at the next distro upgrade.
It speaks volumes about Linux package management woes that no vendor ships anything analogous to brew or Chocolatey.
What is the gap between e.g. `apt` and Homebrew?
- Packages are useful units of software in Homebrew, while in apt they are both end-user apps and dependency libraries (and weird stuff, like headers)
- Homebrew packages are installed for the user, apt is system-wide
- Homebrew packages are updated and maintained usually by people involved with the software they are shipping, apt packages are usually maintained by the distro
- As a result, Homebrew packages almost always work and are almost always up to date, while apt ones almost never are (stuff like Go, Node, etc.)
Apt is like a weird Frankenstein's monster of npm, the system updater, and an app store, all of it with global scope.
Which would be a fair reason. People who like to build things might just not want to also learn how to package stuff.
If you're linking to libX11 or libgtk or something else that's common, rely on the distro providing it.
I really don't get all the anti-shared-library sentiment. I've been using and developing software for Linux for a good 25 years now, and yes, I've certainly had issues with library version mismatches, but a) they were never all that difficult to solve, and b) I think the last time I had an issue was more than a decade ago.
While I think my experience is probably fairly typical, I won't deny that others can have worse experiences than I do. But I'm still not convinced statically linking everything (or even a hybrid approach where static linking is more common) would be an overall improvement.
At the end of the day, the apps were simple end-user applications. They used a handful of library functions from different libraries. My users cared about just using my apps to do whatever the apps did. I just cared that my apps should work on their machines easily, no matter what version of what distro they're using.
> Variadic macros are acceptable, but remember
Maybe my brain is too smooth, but I don't understand how for(int i = 0...) is too clever but variadic macros are not. That makes no sense to me.
I think the "no loop initial declarations" rule is for consistency with "all declarations at the top". Other coding style guides favor "declarations as close as possible to first use", including guidelines for mission-critical systems (if you resort to argument from authority, I have some too...) [1].
As much as I like Suckless, this section is just pet peeves that can safely be ignored, unless you're submitting a patch to a project that aligns with it.
True, and it would indeed be desirable that it were. Here I go out on a limb and assume it's because someone got bitten by attempting to use the loop index outside the loop (common for search operations) while declaring the index both within and outside the loop: a bug which gcc and clang can warn about with -Wshadow (sadly not part of -Wall), and which might easily occur when multiple people edit the code over a longer time-span.
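A minimal sketch of that bug, as a hypothetical search loop: the inner declaration shadows the outer i, so the found index never escapes the loop.

    #include <stdio.h>

    int main(void) {
        int a[] = {3, 1, 4, 1, 5};
        int i = -1;                    /* meant to receive the found index */
        for (int i = 0; i < 5; i++) {  /* shadows the outer i; -Wshadow warns */
            if (a[i] == 4)
                break;                 /* the inner i dies right here... */
        }
        printf("%d\n", i);             /* ...so this prints -1, not 2 */
        return 0;
    }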
An example of such a macro is the following (the loop and the variable declaration will both be optimized out by the compiler; I have tested this):
#define lpt_document() for(int lpt_document_=lpt_begin();lpt_document_;lpt_document_=(lpt_end(),0))
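For context, a hypothetical use site; the macro reads like a control-flow keyword, and the body runs once, bracketed by lpt_begin() and lpt_end():

    lpt_document() {
        /* runs once if lpt_begin() returned nonzero; lpt_end() fires after */
        lpt_emit("hello");   /* hypothetical body call */
    }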
Another macro (which is part of an immediate-mode UI implementation) is:
#define win_form(xxx) for(win_memo win_mem=win_begin_();;win_step_(&win_mem,xxx))
[0]: https://sioyek.info
No frills, super fast and small. Been using it on Windows for years.
Personally, I just use mupdf (which I sandbox through bubblewrap).
That’s… certainly a low bar for not sucking
[0] https://tilde.team/~ben/suckmore/ [1] https://dev.suckless.narkive.com/mEex8nff/cannot-run-st#post...
I'm not claiming I could write tools as simple as these, but surely the importance of these paradigms arises when actually complicated software is needed?
I recently spent a few hours evaluating different terminals. I went back to urxvt, tried Alacritty again, gave Ghostty a try, and spent quite some time configuring Kitty. After all this I found that they all suck in different ways. Most annoying of all is that I can't do anything about it. I'm not going to spend days of my life digging into their source code to make the changes I want, nor spend time pestering the maintainers to make the changes for me.
So I ended up back at my st fork I've been using for years, which sucks... less. :) It consists of... 4,765 SLOC, of which I only understand a few hundred, but that's enough for my needs. I haven't touched the code in nearly 5 years, and the binary is that old too. I hope it compiles today, but I'm not too worried if it doesn't. This program has been stable and bug-free AFAICT for that long. I can't say that about any other program I use on a daily basis. Hmm, I suppose the GNU coreutils can be included there as well. But they also share a similar Unixy philosophy.
So, is this philosophy perfect? Far from it. But it certainly comes closer than any other approach at building reliable software. I've found that keeping complexity at bay is the most difficult, yet most crucial thing.[1]
I just don't really want to use or support software by people who, at best, think it's appropriate to joke about an ideology that wants me [0] dead, or at worst, actively subscribe to that ideology. There are some things that I'm not willing to look past.
[0]: non-white, non-straight, left of the political spectrum
Oh, there are also the edgelords occasionally lured in by Luke Smith's videos (he has never set foot in the community or contributed code while I have been around, and I am not sure if he ever did), who usually get laughed out of IRC after delivering an unhinged chanspeak rant.
How do the people at the center of the community react to this, though? If they are not condemning that sort of behavior, and possibly kicking people like that out of the community, then they are complicit at best, and tacitly approve at worst.
Looking at my IRC logs over the last six months I see one joke-ish comment from a fringe person that VT100 clearly must be racist as it does not support Unicode skin colour emoji merging and one core-ish member chuckling (I will not quote as I find doing so without consent to be morally questionable). This took place in a mostly technical discussion about the complexity of "improving" Unicode handling in st(1). That is it.
I have never really seen something so bad that I would argue for a ban (but that is of course a subjective judgement) and there is a line of thought that ignoring is better than trying to build a wall of rules and "feeding" the trolls by going after them. Whether this is true I am not sure, but I happily take part in both stricter and more lenient communities myself and can see advantages and disadvantages of both.
I'm so sick and tired of the woke victimism ideology, but fortunately it is crashing and burning and we'll be right back to meritocracy and focus on technology. No code of conduct is the best code of conduct! =)
I think it's possible to separate the art from the artist, and enjoy the art without being concerned about the artist's beliefs, and whether I disagree with them.
Also, you don't necessarily support them by using their software. The software is free to use by anyone, and you never have to interact with the authors in any way. Software is an amorphous entity. Unless they're using it to spread their personal beliefs, it shouldn't matter what that is. By choosing not to use free software, you're only depriving yourself.
But this is your own choice, of course, and I'm not saying it's wrong. Just offering a different perspective.
I think you're setting up a too-general argument here. "Asshole" can encompass a huge variety of things, from "actively genocidal" to just "kinda annoying", and everything in between.
I'm pretty "mainstream" demographically (white, straight, cisgender), but if the developer of software I use said something like "all atheists should be shot", I would immediately stop using their software and find something else.
> By choosing not to use free software, you're only depriving yourself.
Sometimes making a statement means enduring some sort of disadvantage or hardship in return. In fact I think that's part of the point. If it doesn't cost me anything to stop supporting something I find offensive, then my (admittedly mild) protest doesn't really have much substance behind it.
In this particular case, there's nothing that the suckless folks have built that doesn't have alternatives that are also free software, so I don't think anyone who refuses to use suckless software is depriving themselves of free software.
Who should be on the committee that decides what we may talk and joke about, and how should the committee inform itself?
The new forbidden topics will be chosen from the set of topics people still talk about, which gets smaller, stranger and more political. What people secretly believe will drift much closer to the private dialogue, while the public dialogue floats away.
That people are saying things is the least of your concern.
Fascinating perspective, though. It is much easier if one is more secure, talks easily, or has a more mundane world view. Not something one can choose. Thicker skin, however, is.
Also interesting: if one didn't like the people running the lunchroom at the end of the street, or didn't like the visitors, you used to be able to go to some other place. Today they are all part of the same chain. We've lost a lot of freedom there.
It's getting to the point that I'm considering keeping myself ignorant of developers' beliefs for my own mental wellness.
> I just don't really want to use or support software by people who, at best, think it's appropriate to joke about an ideology that wants me dead
It never ceases to amaze me that some people can dismiss ideologies that advocate for personal threats of violence against a particular group of people as "politics".
After moving to a gigantic monitor and gigantic resolutions, my poor st fork was suffering. zutty was a great replacement for me: https://git.hq.sig7.se/zutty.git
I use pure zsh with some manually installed plugins and the Luke Smith dotfiles; the history part sometimes takes a while to load, but foot is just fast.
I agree. Such things are not relevant when considering whether to use their formats and programs and stuff like that.
What is relevant is their software and related stuff like that, and not their political leanings, etc. I do not agree with all of their ideas about computer software, although I agree with some of them.
Like them, I also don't like systemd, so I agree with them about not liking systemd.
I do use farbfeld, although I wrote all of the software for doing so by myself rather than using their software (although it should be interoperable with their software, and any other software that supports farbfeld (such as ImageMagick)). Also, I do not use farbfeld for disk files, but only with pipes. (My farbfeld utilities package also includes the only XPM encoder/decoder that I know of that supports some of the uncommon features, that most XPM encoders/decoders I know of are not compatible with or are not fully capable of.)
I may consider libzahl if I have a use for big integers, although I also might not need it. (I had written some dealing with big integers before; one program I wrote (asn1.c) that deals with big integers only converts between base 100 and base 128 in order to convert OIDs between text and binary format.)
However, I would also want software that can better handle non-Unicode text (so it is one of the things I try to write), which many programs don't do properly. This should mean that any code that deals with Unicode (if any) is bypassed when non-Unicode text is used. Some programs should not need to support Unicode at all (including some that should not need to care about character encoding at all, or that do not deal with text, etc.). (I had considered writing my own terminal emulator for this and other reasons.)
Last time I did the same (days not hours tho lol) I was somewhat surprised to find myself landing on xterm. After resolving a couple of gotchas (reliable font-resizing is somewhat esoteric; neovim needs `XTERM=''`; check your TERM) I have been very pleased and not looked back.
urxvt is OG but xterm sixel support is nice.
If you don't mind, tell more? I use kitty and it seems a big upgrade from whatever I used before...
* One of the lead devs' laptops is named after Hitler's hideout in the forest
* Their 2017 conference had a torchwalk that was a staple of Nazi youth camping (and heavily encouraged by the SS as a nationalism thing)
* Several of the core devs are just assholes to people both on- and offline.
* Most of the suckless philosophy is "It does barely what it needs to and it was built by us, so it's superior to what anyone else has written". A lot of it shows in dwm, dmenu, etc.
This is false. Or do you have a source?
[1]: https://en.wikipedia.org/wiki/Unite_the_Right_rally
[2]: https://suckless.org/conferences/2017
As I pointed out in another comment, though: if these guys are diehard Nazis, they sure are saving it for the conventions, as I have seen nothing of it on the mailing lists or IRC for four or so years. What I have seen are about a handful of anarchists with varying degrees of involvement, but overall politics is pretty rare, and it is mostly about using their software, programming in general, and how to find software that complies with their overall ideology.
At risk of putting myself out there, it shows how crazy things have gotten when neo-nazi sympathies are described as "just some political beliefs".
How is dwm, a piece of software, part of a political ideology? If the program and its source code promoted a specific belief, that would be one thing. But I haven't seen that in any of the suckless tools I use.
My comments weren't meant to trivialize anything the authors may or may not believe. I'm just saying that I personally don't care what that is, even if I may disagree with it. The software they produce is not in any way tainted by this.
Yes, there are definitely also normal torchwalks in Germany (I have been part of some as part of church youth groups). However, with all the other information that has surfaced about suckless over the years, it really doesn't look like a coincidence that they chose that as a group activity at their get-together, over all the other possible things you could be doing.
As for "all the other information that has surfaced about suckless": there really isn't anything other than that hostname. I have actually asked the person with that hostname directly twice, and they opted not to answer. I agree it's not a good look, especially in the context of some other posts from that person. But it's not a good look for that person, not for all of suckless. If you look at all Python devs, or all Rust devs, or all HN posters, or all people 1.86cm in height: there's bound to be some unpleasant people there. It's just how things work.
And if you're going to make an accusation as serious as that, then you really need to do better than "surely it can't be a coincidence...". Personally, I'd say a community that coalesced around a particular view on software all happening to share extreme political views is rather unlikely.
The entire reason this whole "suckless are Nazis" thing is even a thing is because a single person kept bringing it up on HN, Lobsters and Twitter. As near as I can tell, it's a pretty successful campaign from one exceedingly toxic person with a grudge.
If by "their" you mean a suckless.org host, no, that's not true. A hostname in the outgoing mail headers of one person posting to the mailing list was "wolfsschanze", i.e., a machine on that user's LAN, not a suckless.org server. The person in question was FRIGN. This got attention because he personally repeatedly pestered Lennart Poettering, who noticed that string in the mail headers and called it out on Twitter. https://web.archive.org/web/20190404160024/https://twitter.c...
Lennart correctly noted that this hostname was one person's laptop, but this morphed in the public consciousness to "a suckless host is named after Hitler's HQ".
> one of them has been known to go off about "Cultural Marxism,"
This person is also FRIGN. Specifically it's a reference to this lobste.rs comment: https://lobste.rs/s/nf3xgg/i_am_leaving_llvm#c_ze5ccy
I hear these same two things repeated over and over as evidence of nazism within suckless (example, the Wikipedia talk page https://en.wikipedia.org/wiki/Talk:Suckless.org), but it is one person (who, granted, maintains at least one suckless.org project https://suckless.org/people/FRIGN/). I think badly of him as a result, but I don't see any reason to disbelieve the multiple Germans who tell me that torchlit walks are a common German tradition, or to tie it to the Charlottesville march, which was extremely untraditional in the region it took place in.
I work on Xfce in my spare time with a small group of other developers from around the world, and if I learned that one of them was a neo-nazi, I would immediately call for them to be expelled from the community. If the other maintainers refused, I would step down and leave the community myself.
To me, any other response would be tolerating and accepting neo-nazism, to the point that I would assume and expect outsiders would suspect the entire development team is ok with neo-nazism. None of that is ok in my book.
I think FRIGN is odious but my judgment is that a gross edgy joke (the hostname) and one reference to "cultural marxism"* isn't sufficient to call someone a neo-nazi. Well, more importantly, believing it isn't sufficient (as I do, and I suspect the suckless people do) does not mean people like me or the suckless people are, as you word it, "tolerating or accepting neo-nazism".
*Despite Wikipedia's only page on cultural marxism being a redirect to "Cultural Marxism anti-semitic far-right conspiracy theory", it is not unusual to rail against cultural marxism in normal conservative circles, including respectable anti-Trump anti-anti-semitic circles like National Review and Tablet. See for example https://www.tabletmag.com/sections/news/articles/just-becaus... ; but I don't want to veer this thread into off-topicness, only to provide evidence that complaining about cultural marxism doesn't make someone a neo-nazi.
He seems very comfortable repeating what is essentially the propaganda of Neonazi groups like PEGIDA.
I agree that we should be sceptical of extreme claims. Maybe it's a coincidence that they did their torchlit walk just after the Charlottesville march. Maybe it's just one of them who is willing to use Nazi references when naming his devices. Maybe it's just a weird fringe that posts Neonazi propaganda online. But the more these individual things come together, the more they build a picture, and the more it behoves us to take this picture seriously.
Yep, that seems straightforwardly bad. Even so, I don't consider metux a suckless.org guy, but rather a deplorable person who contributes to multiple projects, one of which is hosted on suckless.org. The link you posted was from a Devuan mailing list.
Someone else used the example of Xfce, but suckless.org is a much more informal place that basically is a collection of separate projects with a similar aesthetic.
> But the more these individual things come together, the more they build a picture,
Sure, judgments like yours are reasonable. I wouldn't begrudge someone choosing to avoid suckless over it, or even publicly voicing concern about the suckless community. But it's tiresome when every comment thread ostensibly about dwm, dmenu and st receives wild accusations of suckless.org being a community of nazis.
And I really think there's zero evidence of the torch walk being anything at all.
For the record, though, no one who gives a downvote should ever feel obligated to reply. Writing out a thoughtful, supported disagreement to something takes work, and often the effort required exceeds what anyone might want to expend.
This is especially true if a comment feels like it's especially bad-faith-y (though I'm not saying that's the case for your comment).
Your comments don't "deserve" anything. You've decided to spend your time making a point on a random web forum, and that's your choice to make. But you do not get to decide that others are required to spend that time as well when agreeing or disagreeing with your words, regardless of whether or not they've used their voting privileges, in either direction.
Sure, I agree. I counter that it means not that my comments didn't deserve a reply, but that due in part to what you correctly stated, not all comments that deserve replies get them. :)
Having one far-right loon in your team might just raise some eyebrows. A second, however, shows a pattern.
Open Source Software used to be about individuality.
They're not actively campaigning to remove other window managers are they? That seems to be a feature of "community software" for whatever reason.
What I love is that I forget it's there. For years I've simply forgotten I have a "window manager", because to me it's a dozen keyboard shortcuts acting as a shim for managing terminals. If emacs could do that as invisibly, I'd be a super happy chappie.
“The Wolf’s Lair” (but in German) sounds like it could plausibly be selected coincidentally.
There are a lot of IRC nerds who use wolves as part of a moniker, “Canis”, “Lupine” & “Aardwolf” spring immediately to mind.
I do. You are wrong. Not specific to wolves, tho.
When I learned about Hitler's base, my first thought was to wonder whether Hitler was some kind of animal fan (what I later learned is called "being a furry").
Linus Torvalds, for example, used to be a raging asshole, but as far as I know he was just a dick to other people who had pissed him off. He wasn't advocating for genocide or stripping rights away from people or whatever.
This FRIGN guy seems like he might be a part of the latter group. If true, that makes him a very different kind of asshole, the kind we do not welcome into our communities.
[1]: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
This is an odd thing to bring up, though, because that's quite literally the only way to make any changes to suckless software: editing source code in C.
The entire philosophy behind it is performative in many ways. There's nothing simple or "unbloated" about having to recompile a piece of software every time you want to make what should really be a runtime user configuration change, and it makes an entire compiler toolchain effectively a dependency for even the most trivial config change.
I tried their window manager out once, and the only way to add functionality, in place of plugins, is to apply source code patches; but there's no guarantee that their order doesn't mess things up, so you basically end up manually stitching together pieces of code for functionality that is virtually unrelated. It's actual madness from a complexity standpoint.
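For anyone who hasn't tried it, the workflow being described is roughly this (patch name hypothetical; the config.h step follows the stock dwm Makefile):

    cp config.def.h config.h            # start from the default configuration
    $EDITOR config.h                    # change a keybinding, a color, ...
    patch -p1 < dwm-somefeature.diff    # apply a feature patch from the site
    make && sudo make install           # recompile and reinstall
    # ...then restart dwm to pick up the change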
You're ignoring the part where the tools are often a fraction of the size and complexity of similar tools. I can go through a 5K SLOC program and understand it relatively quickly, even if I'm unfamiliar with the programming language or APIs. I can't do the same for programs 10 or 100x that size. The code is also well structured and documented IME, so changing it is not that difficult.
In practice, once you configure the program to your liking, you rarely have to recompile it again. Like I said, I'm using a 5 year old st binary that still works exactly how I want it to.
Maintaining a set of patches is usually not a major problem either. The patches are often small, and conflicts are rare, but easily fixable. Again, in my experience, which will likely be different from yours. Our requirements for how we want the software to work will naturally be different.
The madness you describe to me sounds like a feature. It intentionally makes it difficult to add a bunch of functionality to the software, which is also what keeps it simple.
You already said in your first post that you can't understand it.
I don't because I haven't needed to. Understanding just a few hundred lines has been enough for my needs.
When your main goal is to just get on with solving your actual problems, these are the same thing.
I'm not a C programmer, so it would probably personally take me days, maybe weeks to fully grok 5K SLOC of C. Still, it is potentially possible if I made the effort, unlike with other programs, like you say.
It's a testament to the quality of the original C code that I was able to configure and use st and other suckless tools with my limited experience. An experienced C developer would probably find it a breeze.
Then I had a look around their issue tracker, and noticed others complained about this too[1]. And the dismissive and defensive response from the author just rubbed me the wrong way.
I've looked before and not found anything, but it's a niche thing on an already niche thing.
> There's nothing simple or "unbloated" about having to recompile a piece of software every time you want to make what should really be a runtime user configuration change, and it makes an entire compiler toolchain effectively a dependency for even the most trivial config change.
It is true, but depending on the software, sometimes this is acceptable. (Some of the internet server software that I wrote (such as scorpiond) is configured in this way, in order to take advantage of compiler optimizations.)
For some other programs, some things will have to be configured at compile time (mostly things that probably don't need to be changed after making a package of the program in some package manager), although most things can be configured at run time and do not need to be configured at compile time.
> I tried their window manager out once, and the only way to add functionality, in place of plugins, is to apply source code patches; but there's no guarantee that their order doesn't mess things up, so you basically end up manually stitching together pieces of code for functionality that is virtually unrelated. It's actual madness from a complexity standpoint.
This is a valid criticism, and is why I don't do that for my own software. However, it is sometimes useful to make your own modifications to existing programs, but just applying sets of patches that do not necessarily match is the madness that you describe.
> Because dwm is customized through editing its source code, it's pointless to make binary packages of it. This keeps its userbase small and elitist. No novices asking stupid questions.
...sucks less than what? :) Simple is good, but simpler does not necessarily mean better.
No one is forced to use it, but the overall experience is quite convincing.
The reason why (almost) everyone migrated to preemptive multitasking + memory protection is that it only takes one piece of code behaving slightly differently from what the system/developer expected to bring the entire thing to a halt, either by simply being slower than expected, or by modifying state it's not supposed to.
There was a thing on HN like seven years ago [1] that talked about how command line tools can be many times faster than Hadoop; the streams and pipelines are just so ridiculously optimized.
Obviously you're not going to replace all your Hadoop clusters with just Bash and netcat, and I'm sure there are many cases where Hadoop absolutely outperforms something cobbled together with a Bash script, but I still think it serves a purpose: because these tools were written for such tiny amounts of RAM and such crappy CPUs, they perform cartoonishly fast on modern computers.
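From memory, the article's starting point was a pipeline of roughly this shape (paths hypothetical): every stage is its own process, they all run in parallel, and everything streams.

    # tally chess game results across a pile of PGN files
    cat games/*.pgn | grep '^\[Result' | sort | uniq -c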
I don't like coding like it's 1995 either, and I really don't write code like that anymore; most of the stuff I write nowadays can happily assume several gigs of memory and many CPUs. But I still respect people who can squeeze every bit of juice out of a single thread and next to no memory.
[1] https://adamdrake.com/command-line-tools-can-be-235x-faster-...
Also lots of 1995 assumptions lead to outrageously slow software if used today. Python in 1995 was only marginally slower than C++. It's orders of magnitude slower today.
There’s overhead with thread creation, locks can introduce a lot of contention and waiting and context switches, coordination between threads has a non-zero cost, and the list goes on.
Well-optimized multithreaded code will often be faster, but that's harder than it sounds, and it's certainly not the case that "single threading always makes it run slower".
But I do hope the st buffer overflow fixes make it into the builds I use..
But I think I like software that sucks a little bit. BSPWM with its config as shell commands to the bspc daemon is about right; re-compiling C code is a bit much.
I just went to his site and there's no pony anymore though :( Surely a sign the quality has decreased.
These days I'm off this minimalism crap. It looks good on paper, but it never survives collision with reality [1] (funny that this post is on the HN front page today as well!).
[1] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...
(I.e., show me where in the article he replaced a standard tool like the hammer or pot with a complex one customized to exactly the problem he wanted to solve, or explain why that advanced tool wouldn't suck, given that there are a lot more details than one would expect.)