Port 0 is a port that some operating systems can, and do, host services on, accessible over the Internet.
Also, if there are any MariaDB devs reading this: your default setting, which makes the database listen on port 0 to disable Internet access, does not in fact disable Internet access to the DB on quite a few thousand systems.
https://github.com/MariaDB/server/blob/ae998c22b2ce4f1023a6c...
> if (mysqld_port)
> activate_tcp_port(mysqld_port, &listen_sockets, false);
if (mysqld_port) means "if mysqld_port is different from 0"
This seems to be present at least as far back as MariaDB 5.5 (2012).
So I don't know if the check you're referencing is evaluated when someone sets their instance of MariaDB to "not" be internet accessible.
Binding with port 0 as the argument for AF_INET binds a random available port, not port 0. This is documented behavior on Linux and likely every other OS implementing a BSD-style socket interface.
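This is easy to demonstrate with a minimal Python sketch (binding to localhost, port choice left to the kernel):

```python
import socket

# Requesting port 0 does not open "port 0": the kernel assigns a free
# ephemeral port instead, which getsockname() reveals.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
addr, port = s.getsockname()
print(port)  # an ephemeral port, never 0
s.close()
```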
Also note that ufw is just a tiny, non-standard wrapper for the much more powerful nftables/iptables interfaces
> Portspoof can be used as an 'Exploitation Framework Frontend', that turns your system into responsive and aggressive machine. In practice this usually means exploiting your attackers' tools and exploits
I'd actually be curious to know if this seemingly ~10 year old software still works. Also how much bandwidth it uses, CPU/RAM etc.
One day, IT will become time served but not today.
I wholeheartedly recommend reading "Immune" by Philipp Dettmer: https://www.amazon.de/dp/0593241312
I also suffer from severe asthma and allergies, both of which are, by all accounts, not normal or wanted responses of the immune system; and those sit at the low end of the horror spectrum of immune malfunctions that can be terrifyingly harmful to the host.
It is an exceptionally complex and wondrous thing, but where we diverge is in thinking of it as a "marvel of engineering" or any other prose that implies some sort of guiding hand. It is a far from perfect system, and gets things wrong often enough that we have a global industry creating products to control it.
Heh. It's hard to talk about the way things have been shaped by evolution without implying an actor, because our vocabulary is so very shaped by our subjective experience. I, personally, am reasonably certain that there is no creator in whatever sense. Yet, I'm still awestruck at the ingenuity life on our planet has shown, and the immune system is a never-ending source of wonder to me.
And while it surely isn't perfect, if we look at the raw number of incidents versus the number of adversarial actions against your body, it is pretty darn near perfect.
http://web.archive.org/web/20020610054821/http://www.sourtim...
[1] https://theswissbay.ch/pdf/Gentoomen%20Library/Security/0321...
I doubt that most script kiddies are filtering out potential honeypots/things like this from their tools.
If you assume that scanning/attacking each port on each server takes about the same effort, you are better off finding a machine where the scan/attack has a higher chance of being successful, even if you can tell which ports are spoofed and not worth attacking.
Maybe you can run portspoof locally on 127.0.0.35 and compare which responses seem different (data, timings) from what you get back, but the space is suddenly 5000x bigger than the handful of ports that normally seem to be open and ports on other servers may seem more likely to yield success.
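A crude version of that local comparison could look like this Python sketch (the helper name, host, and ports are all placeholders; it just records each port's banner and response latency so that spoofed ports could be clustered by similarity):

```python
import socket
import time

def probe(host, port, timeout=2.0):
    """Connect to host:port, grab up to 128 bytes of banner, and time it.
    Comparing (banner, latency) pairs across many ports can hint at which
    responses all come from a single spoofing daemon."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                banner = s.recv(128)
            except socket.timeout:
                banner = b""
    except OSError:
        return None  # closed or filtered port
    return banner, time.monotonic() - start
```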
By default, return nonsense on all ports. But once a certain access sequence has been detected from a source IP, redirect traffic to a specific port from just that IP to your real service.
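That per-IP access-sequence idea is essentially port knocking. A minimal sketch of the detection side in Python (the knock ports are invented for illustration; real traffic redirection would still happen in the firewall):

```python
# Hypothetical knock detector: an IP that hits ports 1111, 2222, 3333 in
# order becomes "unlocked" and may reach the real service; everyone else
# keeps getting the decoy responses.
KNOCK_SEQUENCE = [1111, 2222, 3333]

progress = {}    # source IP -> how many knocks matched so far
unlocked = set()

def handle_connection(src_ip, dst_port):
    """Return True if this IP has completed the knock sequence."""
    if src_ip in unlocked:
        return True
    step = progress.get(src_ip, 0)
    if dst_port == KNOCK_SEQUENCE[step]:
        step += 1
        if step == len(KNOCK_SEQUENCE):
            unlocked.add(src_ip)
            progress.pop(src_ip, None)
            return True
        progress[src_ip] = step
    else:
        progress[src_ip] = 0  # a wrong knock resets the sequence
    return False
```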
If you are specifically getting targeted, there might be a slight delay by having the adversary try and exploit the honeypot ports, but if you're running a vulnerable service you still get exploited.
Also, if you're a vendor, when prospective customers' security teams scan you, you'll have some very annoying security questionnaires to answer.
I work with a number of overseas clients where getting extra IPv4 addresses is nearly impossible. I'll see them set up an IP-forwarding box in front of tons of different applications. That application may have its own reverse proxy serving even more stuff.
The real world has some scary things in it.
You could also easily tweak it to have the ports spread on a few different IP addresses instead of a single one. That would make them much less obvious.
A honeypot is used to attract and detect an attacker, usually logging their actions and patterns for analysis or blocking. This tool could use more logging beyond just iptables, and sure it’s not _by itself_ a honeypot, but the idea isn’t that far off.
All that aside, the GitHub page suggests this “enhances OS security” which I don’t buy one bit. Sure it provides some obfuscation against automated service scanners, but if you have a MySQL server listening on 3306, and an attacker connects to 3306, they’re still talking to MySQL. Doesn’t matter if all the other 65534 ports are serving garbage responses.
How does that work? Do you need to run 65535 instances to cover all ports?
Then it calls getsockopt to find out what the original port was: https://github.com/drk1wi/portspoof/blob/c3f3c34531c59df229e...
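The same trick is available from any language on Linux: after an iptables REDIRECT, getsockopt with SO_ORIGINAL_DST returns the pre-redirect destination. A hedged Python sketch (SO_ORIGINAL_DST is the constant from <linux/netfilter_ipv4.h>; the helper names are mine):

```python
import socket
import struct

SO_ORIGINAL_DST = 80  # constant from <linux/netfilter_ipv4.h>

def parse_sockaddr_in(raw):
    """Unpack a struct sockaddr_in: 2-byte family, big-endian 2-byte port,
    4-byte IPv4 address, then 8 bytes of padding."""
    port = struct.unpack_from("!H", raw, 2)[0]
    ip = socket.inet_ntoa(raw[4:8])
    return ip, port

def original_dst(conn):
    """Recover the pre-REDIRECT destination of an accepted connection.
    Linux-only; the connection must have passed through the NAT table."""
    raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    return parse_sockaddr_in(raw)
```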
sysctl -w net.ipv4.tcp_reset_reject=0
sysctl: cannot stat /proc/sys/net/ipv4/tcp_reset_reject: No such file or directory
This also gives nothing: sysctl -a | grep tcp_reset_reject
This would indeed be pants-on-head for UDP.
CORRECTION: This was actually named "Fraggle". [3] Smurf involved ICMP flooding.
I remember seeing these on EFnet IRC in the 90s. Since the code is so ancient, I thought I'd share it. I'm sure these would be useless in modern times, but they're an interesting bit of internet history. It's also hilarious to look at the comments and see old IRC handles you recognize. Who remembers napster before he developed the p2p software that made him famous?
Pepsi.c https://cdn.preterhuman.net/texts/underground/hacking/exploi...
This site has loads of old historic exploits preserved one folder up.
Smurf.c https://gist.github.com/JasonPellerin/2eecbf1f7e49750d2249
[1] https://en.wikipedia.org/wiki/Character_Generator_Protocol?w...
Fun times. I miss those days.
Tarpit (networking) https://en.wikipedia.org/wiki/Tarpit_(networking)
/? inurl:awesome tarpit https://www.google.com/search?q=inurl%3Aawesome+tarpit+site%...
"Does "TARPIT" have any known vulnerabilities or downsides?" https://serverfault.com/questions/611063/does-tarpit-have-an...
https://gist.github.com/flaviovs/103a0dbf62c67ff371ff75fc62f... :
> However, if implemented incorrectly, TARPIT can also lead to resource exhaustion in your own server, specifically with the conntrack module. That's because conntrack is used by the kernel to keep track of network connections, and excessive use of conntrack entries can lead to system performance issues, [...]
> The script below uses packet marks to flag packets candidate for TARPITing. Together with the NOTRACK chain, this avoids the conntrack issue while keeping the TARPIT mechanism working.
The tarpit module used to be in-tree.
xtables-addons/ xt_TARPIT.c: https://github.com/tinti/xtables-addons/blob/master/extensio...
There is a small risk in that the service replies to requests on the port, though: as the replies get more complicated in order to mimic real services, you run the risk of an attacker exploiting the system making the replies.

Put another way, this attempts to run a server that responds to incoming requests on every port, in a way that mimics what might run on each port. That technically opens up an attack surface on every port, because an attacker can feed it requests. The trade-off is that it runs in user mode and could be granted nil permissions, or put on a honeypot machine that is disconnected from anything useful and heavily tripwired for unusual activity.

Hardcoding a response to each port to make it appear open is itself a very simple activity, so the attack surface introduced is minimal while the utility of port scanning is greatly reduced. The more you fake out the scanning by responding realistically to inputs, though, the greater the attack surface you expose.
And port scanning can trigger false positives in network security scans, which can then lead to having to explain why the servers are configured this way, and that some ports which should always be closed due to vulnerabilities appear open but are not processing requests and can be ignored, etc.
LaBrea.py: https://github.com/dhoelzer/ShowMeThePackets/blob/master/Sca...
La Brea Tar Pits and museum: https://en.wikipedia.org/wiki/La_Brea_Tar_Pits
The nerdctl README says: https://github.com/containerd/nerdctl :
> Supports rootless mode, without slirp overhead (bypass4netns)
How does that work, though? (And unfortunately podman replaced slirp4netns with pasta from passt.)
rootless-containers/bypass4netns: https://github.com/rootless-containers/bypass4netns/ :
> [Experimental] Accelerates slirp4netns using SECCOMP_IOCTL_NOTIF_ADDFD. As fast as `--net=host`
Which is good, because --net=host with rootless containers is inadvisable security-wise, FWIU.
"bypass4netns: Accelerating TCP/IP Communications in Rootless Containers" (2023) https://arxiv.org/abs/2402.00365 :
> bypass4netns uses sockets allocated on the host. It switches sockets in containers to the host's sockets by intercepting syscalls and injecting the file descriptors using Seccomp. Our method with Seccomp can handle statically linked applications that previous works could not handle. Also, we propose high-performance rootless multi-node communication. We confirmed that rootless containers with bypass4netns achieve more than 30x faster throughput than rootless containers without it
RunCVM, Kata containers, GVisor all have a better host/guest boundary than rootful or rootless containers; which is probably better for honeypot research on a different subnet.
IIRC there are various utilities for monitoring and diffing VMs, for honeypot research.
There could be a list of expected syscalls. If the simulated workload can be exhaustively enumerated, the expected syscalls are known ahead of time and so anomaly detection should be easier.
"Oh, like Ghostbusters."
And if you have that port open with a vulnerable service, they'll find and exploit it, irrespective of whether this tool is running.
So, every automated portscan from a hacked machine will waste 200MB of my bandwidth?
Bit puzzled, though, by the statement made immediately after stating that it is GPL2: "For commercial, legitimate applications, please contact the author for the appropriate licensing arrangements."
Since the GPL2 doesn't permit restricting what others do with GPLd software, I don't think this statement is doing what the author hopes; they might want to consult a lawyer.
(IANAL, etc., but there is nothing in there to prevent me, e.g., from building a business out of this, charging gazillions, and keeping it all for myself, provided I make the source available to my customers.)
It’s not uncommon that in situations where that’s undesirable (e.g. a closed-source C library that statically links a GPL’d project) that the library owner pays a fee for a separate license allowing that closed-source distribution.
Also, this is sometimes done when it’s not strictly legally necessary, either for risk avoidance or as a way to support the project in corporate environments where “licensing fee” gets waved through but “donation” gets blocked.
No. The copyleft nature still applies to libraries. That's why the LGPL exists. And the exception in the license for gcc for programs compiled by gcc.
This is not limited to GPL, but applies to proprietary libraries as well. It's OK to require a proprietary library at runtime and you don't need a licence to do that. As long as you do not distribute some intellectual property, copyright law and its limitations are not applicable at all.
This sounds quite assertive, so compulsory "IANAL, this is just my interpretation".
The author originally created his own non-GPL library with the same interface as ReadLine and distributed that, noting that the user could (at their own option) link CLISP with GNU ReadLine instead if they wanted that functionality. Stallman argued that wasn't sufficient.
In the end, CLISP ended up being relicensed to GPL. Note though that no judge ever looked at it, so things might have turned out differently if it had gone to court.
The Linux kernel has opinions about this: symbols marked with EXPORT_SYMBOL are considered symbols that every operating system would have, so using them doesn't mean you are writing a derivative work. Symbols marked with EXPORT_SYMBOL_GPL are considered implementation details so specific to Linux that if you use them, you can't say that your module isn't derivative of Linux.
Stallman was actually an advocate of doing this.
To continue my original example, I could, in theory, take this code, ensure that it works with arbitrary independent pseudo-services, create my own such services, under a proprietary licence, and distribute the whole as an aggregate, which is permitted by the GPL.
The author likely seeks to provide commercial licensing for those interested in integrating their pseudo-services as libraries, which would require either that they be GPLd or that the original code be licensed in some other way.
I hope the author achieves the success they hope for without the licensing and legal hell they may have set themselves up for. It can be a great disappointment to have one's work turned into someone else's success by someone with more legal and licensing cunning than oneself.
(Note: that ain't me, I've just seen that exact scenario play out more than a fair few times....)
It binds to just ONE TCP port per running instance!
Configure your firewall rules:
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 1:65535 -j REDIRECT --to-ports 4444
> your attackers will have a tough time while trying to identify your real services.
So... Security through obscurity?
> the only way to determine if a service is emulated is through a protocol probe (imagine probing protocols for 65k open ports!).
So... Security through obscurity?
> it takes more than 8hours and 200MB of sent data in order to properly go through the reconessaince phase for your system ( nmap -sV -p - equivalent).
So... Security through obscurity?
Idk... maybe I'm not versed enough in infosec, but this also raises the question: aren't you attracting more interest if your system lights up green as an exposed Redis instance, leading an adversary to notice you and take a closer look for anything else vulnerable?
This is not a valid criticism on its own.
Security through obscurity is bad when obscurity is the only thing stopping an attacker. It's a meme because obscurity is not a substitute for stronger security mechanisms. That does not mean it cannot be an appropriate complement to them, however.
If I wanted to hide a gold bar, sticking it in an open hole behind a painting on the wall wouldn't be particularly great security. As soon as a robber found the hole, the entirety of my security is compromised.
If I put it in a safe on the wall, it's much more secure. The robber has to drill through the lock to get the gold bar.
If I put it in a safe behind a painting on the wall, the robber has to discover that there's a safe there before they're able to attempt drilling through it. Bypassing the painting is trivial compared to bypassing the safe, but the painting reduces the chance of the actual safe being attacked (up until it doesn't!)
Security should be layered. Obscurity will generally be the weakest of those layers, but that doesn't mean that it has no value. As long as you're not using obscurity as a replacement for stronger mechanisms, there's nothing wrong with leveraging it as part of a larger overall security posture.
Other parts of infosec are the same, but often with less well-quantified measures of effectiveness. E.g. memory hardening techniques like FORTIFY_SOURCE and MTE are effective in raising the difficulty of exploiting memory vulnerabilities, but under some conditions the vulnerabilities may still be exploitable.
Before using labels like "security through obscurity" one has to first answer: how much does the technique raise the cost for attackers? This is what articles about security systems (including this one) should focus on. In the end, hacking, like most things, comes down to economics.
The next level of value for this is to TLS-encrypt random traffic between ports and hosts on the network, generated and injected by the switch into each network port, so that sniffing traffic is not an effective discovery mechanism. After that, address and port randomization of servers using a time-linked randomization seed stored in an HSM, so that attackers have no way to pierce the onion skin if they lose control of the HSM-bearing host.
This is all the natural outgrowth of container approaches, but in labor terms is nightmarishly complicated if you aren’t willing to spend for it.
Either that or they're researchers or adversaries playing a game. Because trying to figure out WTF is going on is hard, so any clues you can extract from your targets makes things easier.
Bad actors you can either block or counterattack. Security tools should be registering their addresses with whatever internal tracker you're using so they can be whitelisted.
That is not what the tool is for though... It is a tool specifically made to hinder... IDK... Making any information out of an NMAP scan?
You light up in a skid's Internet-wide scan for let's say Redis. They try and fail to dump anything from it so they proceed and run a vulnerability scanner on your host (skids gonna skid)... It proceeds to discover IDK... a trivial SQLi you coded like a dumbass...
For a similar concept, look at the delay you get after entering a password wrong to a login prompt: That technically does not add any barrier whatsoever, but it does make it much harder for an attacker to brute force the password.
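Back-of-the-envelope arithmetic shows why. A small Python illustration (all numbers invented):

```python
# Back-of-the-envelope: a fixed delay per failed attempt multiplies the
# cost of an online brute-force attack. All numbers here are invented.
keyspace = 10**4           # e.g. a 4-digit PIN
attempt_cost = 0.01        # seconds per guess with no throttling
delay = 2.0                # added delay per failed attempt

expected_guesses = keyspace / 2          # on average, half the keyspace
fast = expected_guesses * attempt_cost
slow = expected_guesses * (attempt_cost + delay)

print(f"without delay: {fast:.0f} s")    # ~50 seconds
print(f"with delay:    {slow:.0f} s")    # ~10050 seconds, about 2.8 hours
```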
If more servers use the tool, they waste attackers' time. A bit like herd immunity.
Similar principle (only on the other end).