As an experiment I installed SELinux on Debian, and while I eventually got it stable and working after a lot of trial and error, a disk swap followed by an rsync broke it irreparably. Yes, I forced a full relabel so SELinux would rebuild the file contexts; it didn't work. The box was basically unbootable, or it would boot and then reject all logins, including root directly on the console, something that should almost never happen. Documentation is sparse or assumes you're on Red Hat where it 'just works'. After hours of troubleshooting the only thing that worked was switching it off and saying good riddance.
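For what it's worth, the usual failure mode here is that rsync drops the security.* extended attributes, so every copied file loses its label. A sketch of the standard recovery steps (paths are generic placeholders; whether they'd have saved this particular box I can't say):

  # copy preserving ACLs and xattrs, which carry the SELinux labels
  rsync -aAX /mnt/olddisk/ /mnt/newdisk/

  # or after the fact: queue a full filesystem relabel for the next boot
  touch /.autorelabel
  reboot

  # if logins are already being rejected, boot with enforcing=0 on the
  # kernel command line and relabel from the permissive system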
It's also my experience that Fedora has better support for it, but Gentoo used to be good enough with Hardened Gentoo (they use https://gitweb.gentoo.org/proj/hardened-refpolicy.git/). Red Hat and Gentoo are the only ones that officially support it afaik. I think Hardened Gentoo might have lost popularity since the fall of grsec, but I'm not sure how popular it is currently.
Too bad you haven't even read part of the article.
> Redhat and Gentoo are the only ones that officially support it afaik
In my view all these compromises only speak to the true security of SELinux. The fact that we need to have a website called https://stopdisablingselinux.com/ means it's working!

I am one of those weirdos who prefers to learn and adapt to SELinux instead of disabling it.
The article in question takes a lot of liberties by assuming complete FS access. It's not really applicable to real-world server security; the focus is on smartphones.
Like, as a user, once you grok filesystem, port, and process tags, it's easy to navigate most if not all of the common "strange" SELinux errors.
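For the unfamiliar: "grokking the tags" mostly comes down to knowing where to look. These are the stock commands (nginx here is just an example service):

  ls -Z /etc/nginx/nginx.conf     # file context
  ps -eZ | grep nginx             # process domain
  semanage port -l | grep http    # which ports a domain may bind

  # the usual fix for a mislabeled file: reset it to the policy default
  restorecon -Rv /var/www/html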
After that I figured I might be able to implement SELinux modules for our in-house services. Just tag config files, binaries, logfiles, and state files, add a tag for the process, link them up with a few rules, easy peasy (roughly the sketch below).
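For the record, this is roughly what that "easy peasy" version looks like in refpolicy syntax. Everything named myapp is a made-up placeholder; the interfaces used (init_daemon_domain, files_config_file, logging_log_file) are stock refpolicy ones, and a real module would also need a .fc file to actually tag the on-disk paths:

  cat > myapp.te <<'EOF'
  policy_module(myapp, 1.0.0)

  # a domain for the process, and a type for its executable
  type myapp_t;
  type myapp_exec_t;
  init_daemon_domain(myapp_t, myapp_exec_t)

  # types for the files the daemon touches
  type myapp_conf_t;
  files_config_file(myapp_conf_t)
  type myapp_log_t;
  logging_log_file(myapp_log_t)

  # link them up with a few rules
  allow myapp_t myapp_conf_t:file read_file_perms;
  allow myapp_t myapp_log_t:file append_file_perms;
  EOF

  # build and load (Fedora/RHEL with selinux-policy-devel installed;
  # package names differ on other distros)
  make -f /usr/share/selinux/devel/Makefile myapp.pp
  semodule -i myapp.pp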
All in all, I hit a wall harder than the Iron Curtain. I'd have to understand my application at the level of individual syscalls (I think; my sources were one GitHub gist and one random dude's blog post that went into a few details on implementing SELinux modules) to do anything at that level. I spent a week on it and figured I'd either have to understand the kernel interfaces and the entire SELinux stack in full detail to implement my own SELinux modules, or not implement my own SELinux modules.

At that point I figured no one else on the team would have a snowball's chance in hell of doing any of what I was looking at, and accepted that SELinux would not be a thing for our internal applications.
NSA used to do a lot of operating-systems work [1], but no longer seems to do that.
[1] https://www.nsa.gov/Research/NSA-Mission-Oriented-Research/L...
> … a flexible MAC architecture was created and matured through a series of research systems. The work to bring this architecture to mainstream systems [and] experience with applying this architecture to mobile platforms is examined. The role of MAC in a larger system architecture is reviewed in the context of a secure virtualization system. The state of MAC in mainstream systems is compared before and after our work.
Review of the non-public STM for constraining SMM: https://www.platformsecuritysummit.com/2018/speaker/myers/

> We describe our work to demonstrate an enhanced SMI transfer monitor (STM) to provide protected execution services on the x86 platform. An STM is a hypervisor that executes in x86 system management mode (SMM) and functions as a peer to the hypervisor or operating system. The STM constrains the SMI handler by hosting the handler in a virtual machine (VM). Otherwise, the SMI handler holds unconstrained access to the platform, which could undermine the assurance provided by DRTM or TXT.
OSS STM contributed to coreboot: https://cyberscoop.com/nsa-firmware-open-source-coreboot-stm... & https://www.osfc.io/2019/talks/implementing-stm-support-for-...

> Implementation of SMI transfer monitor (STM) support for Coreboot.
Amusingly, a big problem during the Cold War era was administrative. NSA's main HQ was at Fort Meade. When that filled up, various auxiliary functions, such as personnel and training, were moved out to Friendship Annex (FANX), near what was then called Friendship Air Terminal, now "Baltimore-Washington International". Working at FANX had less prestige than working at Ft. Meade. Computer security was at FANX. This had a real effect on the effort. Being at FANX meant being on the second team.
NSA had a major downsizing after the collapse of the Soviet Union, and a major rework for the War on Terror. I can't speak to the modern situation. But computer security is still at FANX.
The discussion on the topic was disappointing. Chrome does this, all Electron apps do this, VS Code does it. Yet the insistence was that the warnings were fine because the behavior is insecure and the developers should just fix their applications.
I do not have the patience to waste two hours researching every complicated workaround that every single application uses to make things work, just to personally decide whether it's safe enough to whitelist.
Long story short I'm using Mint now.
This is a problem in ALL Linuxes as far as I know. When Linux gets IO-bound, literally everything gets stuck, and it's insane. My mouse cursor position doesn't depend on my hard disk. I have a dozen cores. I don't understand; I thought this kind of issue was solvable on a single core with just a round-robin scheduler. How come things still get stuck?
It could be that the problem was that I installed every DE I could find to test them, but even after uninstalling a lot of things, the system never recovered its original speed.
I installed Linux Mint on the same hard disk and things run smoothly. In particular, I love how Cinnamon actually keeps the list of applications in memory instead of starting a search every time I open the start menu. The DE has its shortcomings, but it's pretty decent otherwise, so for now I'm happy with it.
P.S.: also, when I tried to use git with Apache, checking out overwrote my SELinux types and contexts, which meant I had to figure out AGAIN how to configure all this stuff just so I could access a WordPress site on localhost that was saved in my home folder. For a while I just gave up using branches in git. Now I just use Mint.
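For anyone hitting the same thing: a fresh checkout recreates files with default labels, so any one-off relabeling gets undone. The durable fix is a persistent fcontext rule plus restorecon; a sketch assuming the checkout lives at /home/user/wordpress (the path and the boolean choice are illustrative):

  # record a persistent labeling rule for the checkout
  semanage fcontext -a -t httpd_sys_content_t "/home/user/wordpress(/.*)?"

  # reapply after any checkout that clobbers the labels
  restorecon -Rv /home/user/wordpress

  # allow Apache to serve content from home directories
  setsebool -P httpd_enable_homedirs on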
For GUI notifications, open the sealert app and disable alerts. For journal/syslog reports, disable the setroubleshoot dispatch plugin for auditd:
  # stop auditd from forwarding AVC events to setroubleshoot
  sed -i 's/active.=.yes/active = no/' /etc/audit/plugins.d/sedispatch.conf
  service auditd restart
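Denials keep landing in the audit log either way; to review them on demand (standard audit tooling):

  ausearch -m avc -ts recent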
You can also uninstall the setroubleshoot-server package completely, and all AVC denials will continue to be reported separately outside of the journal.

EDIT: a quick search found that Linux's KVM could potentially be used: https://lpc.events/event/17/contributions/1486/attachments/1...
It's especially extreme when systemd gets the syslog denial.
How do we dial this back?
For example, Qubes OS: https://qubes-os.org
The point, though, was that if you don't want to learn to use a complex system, you shouldn't complain when you use it and don't understand it.
It's annoying because it's a change to a system you are familiar with, and you can trip over it if you're not familiar with the change, not because it isn't useful or because it's a great challenge to configure right.
There was a document on access.redhat.com that talked about it.
Yeah, I've always been against that type of crap.
It's why Ubuntu, even Debian, and especially RedHat were never really viable distros for me when things like Slackware, Alpine and Void exist.
I don't need a ton of abstraction between me and what I'm trying to accomplish.
Or they put in the time to learn it so they don't get frustrated when they get blocked by something, because they know how to resolve it.
It's not like say, nginx or a linux distro, where you don't need to know the ins and outs to get it up and running.
Sure, not all servers need to be secured, but why use RHEL then? Why not use Debian or some RH fork that doesn't have it on by default, or even SuSE?
The people who disable it out of convenience are generally bad admins: lazy and/or incompetent.
I think claiming that is a pretty goofy argument, personally.
I'm not going to link it here given his disdain for this site, but jwz at least got CADT development right. If you stopped fucking rewriting things every couple of years, then maybe the average user could catch their breath and catch up; but if you keep changing things that don't need changing (ALSA -> PulseAudio -> PipeWire -> whatever is already in the works to replace PipeWire) instead of ACTUALLY FIXING THEM, you lose the right to be surprised when someone just trying to use their computer gives up.
This is possibly the dumbest thing I've read in a while...
1. PipeWire was written to fix PulseAudio and is a drop-in upgrade. How do you think things get fixed otherwise?

2. Why would anyone rewrite PipeWire at the moment? Why would you think that?
2. Because writing something new is easier than trying to understand something already written. Maintenance is not sexy. Writing new things is.
Also note that PipeWire reuses the most complicated/fundamental part from PulseAudio: the device probing and mixer management. This code took 10 years to tweak and is taken directly from PulseAudio.
As a side note, there was a lot of investigation into whether things could be built on top of JACK2, but that also proved infeasible (mostly because the IPC protocol and API needed a rework: no security with memory sharing, no support for video). The scheduling ideas were taken right from JACK2, though.
2. This is true, but PipeWire is not just another random rewrite. It was written with a deep understanding of both PulseAudio and (later in development) JACK internals and design. Many of the good ideas from those projects survived in PipeWire. This is actually a perfect example of when a rewrite is a good thing.
Now, what is even easier than writing PipeWire is not understanding why things are done the way they are, and then complaining about it.