>Take the Traders' method of timekeeping. The frame corrections were incredibly complex - and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth's moon. But if you looked at it still more closely ... the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind's first computer operating systems.
I recommend the Bobiverse series for anyone who wants more "computer science in space", or Permutation City for anyone who wants more "exploration of humans + simulations and computers".
I don't really like this plan.
The entire point of UTC is to be some integer number of seconds away from TAI to approximate mean solar time (MST).
If we no longer want to track MST, then we should just switch to TAI. Having UTC drift away from MST leaves it in a bastardized state where it still has historical leap seconds that need to be accounted for, but those leap seconds no longer serve any purpose.
But in the real world a lot of systems made the wrong choice (UNIX being the biggest offender) and it got deeply encoded in many systems and regulations, so it's practically impossible to "just switch to TAI".
So it's easier to just re-interpret UTC as "the new TAI". I won't be surprised if, some time in the future, we get the old UTC back under a different name.
---
However, this proposal is not entirely pointless. The point is:
1. Existing UTC timekeeping is unmodified. (profoundly non-negotiable)
2. Any two timestamps after 2035 differ by an accurate number of physical seconds.
---
Given that MST is already a feature of UTC, I agree removing it seems silly.
In most (all?) countries, civil time is based on UTC. Nobody is going to set all clocks in the world backwards by about half a minute because it is somewhat more pure.
GPS time also has an offset compared to TAI. Nobody cares about that, just like nobody really cares about the Unix epoch, as long as the results are consistent.
There is, though? You can easily look at the BIPM's reports [0] to get the gist of how they do it. Some of the contributing atomic clocks are aligned to UTC, and others are aligned to TAI (according to the preferences of their different operators), but the BIPM averages all the contributing measurements into a TAI clock, then derives UTC from that by adding in the leap seconds.
[0] https://webtai.bipm.org/ftp/pub/tai/annual-reports/bipm-annu...
The logical thing to do is to precisely model Stonehenge to the last micron in space. That will take a bit of work involving the various sea levels and so on. So on will include the thermal expansion of granite and the traffic density on the A303 and whether the Solstice is a bank holiday.
Oh bollocks ... mass. That standard kilo thing - is it sorted out yet? Those cars and lorries are going to need constant observation - we'll need a sort of dynamic weigh bridge that works at 60mph. If we slap it in the road just after (going west) the speed cameras should keep the measurements within parameters. If we apply now, we should be able to get Highways to change the middle of the road markings from double dashed to a double solid line and then we can simplify a few variables.
... more daft stuff ...
Right, we've got this. We now have a standard place and point in time to define place and time from.
No we don't and we never will. There is no absolute when it comes to time, place or mass. What we do have is requirements for standards and a point to measure from. Those points to measure from have differing requirements, depending on who you are and what you are doing.
I suggest we treat time as we do sea level, with a few special versions that people can use without having to worry about silliness.
Provided I can work out when to plant my wheat crop and read log files with sub micro second precision for correlation, I'll be happy. My launches to the moon will need a little more funkiness ...
You are not expected to understand this.
It keeps both systems in place.
If you want, I could make it either a hash or a lookup table.
> As an intermediate step at the end of 1971, there was a final irregular jump of exactly 0.107758 TAI seconds, making the total of all the small time steps and frequency shifts in UTC or TAI during 1958–1971 exactly ten seconds, so that 1 January 1972 00:00:00 UTC was 1 January 1972 00:00:10 TAI exactly, and a whole number of seconds thereafter. At the same time, the tick rate of UTC was changed to exactly match TAI. UTC also started to track UT1 rather than UT2.
So Unix times in the years 1970 and 1971 do not actually match UTC times from that period. [2]
[1] https://en.wikipedia.org/wiki/Coordinated_Universal_Time#His...
This is true even if we assume the time on the birth certificate was a time precise down to the second. It is because what was considered the length of a second during part of their life varied significantly compared to what we (usually) consider a second now.
[1] Second as in 9192631770/s being the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom
I also read the link for “UTC, GPS, LORAN and TAI”. It’s an interesting contrast that GPS time does not account for leap seconds.
It's not, actually. Does 2 days and 1 hour ago mean 48, 49 or 50 hours, if there was a daylight saving jump in the meantime? If it's 3PM and something is due to happen in 3 days and 2 hours, the user is going to assume and prepare for 5PM, but what if there's a daylight saving jump in the meantime? What happens to "in 3 days and 2 hours" if there's a leap second happening tomorrow that some systems know about and some don't?
You rarely want to be thinking in terms of deltas when considering future events. If there is an event that you want to happen on jan 1, 2030 at 6 PM CET, there is no way to express that as a number of seconds between now and then, because you don't know whether the US government abolishes DST between now and 2030 or not.
To reiterate this point, there is no way to make an accurate, constantly decreasing countdown of seconds to 6PM CET on jan 1, 2030, because nobody actually knows when that moment is going to happen yet.
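To make the first ambiguity concrete, here's a small Python sketch (the zone and dates are just an example I picked around a US spring-forward): adding "3 days and 2 hours" as a physical duration and as wall-clock time give different answers.

    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    tz = ZoneInfo("America/New_York")
    start = datetime(2025, 3, 7, 15, 0, tzinfo=tz)  # Friday 3 PM; DST starts Sunday 2 AM

    # Physical duration: exactly 74 real hours later
    absolute = (start.astimezone(timezone.utc) + timedelta(days=3, hours=2)).astimezone(tz)
    # Wall-clock arithmetic: add to the local fields, which is what most users expect
    wall = start + timedelta(days=3, hours=2)

    print(absolute)  # Monday 18:00 local: 74 real hours later, but an hour later on the clock than planned
    print(wall)      # Monday 17:00 local: the expected 5 PM, but only 73 real hours away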
Also natural events are the other way around, we can know they're X in the future but not the exact calendar date/time.
If the definition of a future time was limited to hours, minutes and/or seconds, then it would be true that the only hard part is answering "what calendrical time and date is that?"
But if you can say "1 day in the future", you're already slamming into problems before even getting to ask that question.
If you want to know the timestamp of "two days from now" then you need to know all kinds of things like what time zone you're talking about and if there are any leap seconds etc. That would tell you if "two days from now" is in 172800 seconds or 172801 seconds or 169201 or 176400 etc.
But the seconds-counting thing should be doing absolutely nothing other than counting seconds and doing otherwise is crazy. The conversion from that into calendar dates and so on is for a separate library which is aware of all these contextual things that allow it to do the conversion. What we do not need and should not have is for the seconds counting thing to contain two identical timestamps that refer to two independent points in time. It should just count seconds.
"2 days from now" could either mean "after 2*86400 seconds have ticked" or it could mean "when the wall clock looks like it does now, after 2 sunset events". These are not the same thing.
The intent of the thing demanding a future event matters. So you can have the right software abstractions all you like and people will still use the wrong thing.
The problem is that programmers are human, and humans don't reason in monotonic counters :)
Which is why you need some means to specify which one you want from the library that converts from the monotonic counter to calendar dates.
Anyone who tries to address the distinction by molesting the monotonic counter is doing it wrong.
Seconds are numbers; calendrical units are quantities.
[0] Bateson was, in some ways, anticipating the divide between the digital and analog worlds.
You are very right that future calendar arithmetic is undefined. I guess the only viable approach is to assume it works based on what we know today, and to treat future changes as unpredictable events (much like the Earth slowing its rotation). Otherwise, we should just stop using calendar arithmetic, but in many fields that is simply infeasible...
If you say something will happen in three days, that's a big time window.
We can measure the difference in speed of time in a valley and a mountain (“just” take an atomic clock up a mountain and wait for a bit, bring it back to your lab where the other atomic clock is now out of sync)
https://www.slac.stanford.edu/~rkj/crazytime.txt
To make these dates fit in computer memory in the 1950s, they offset the calendar by 2.4 million days, placing day zero on November 17, 1858.
https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...
The macOS/Swift Foundation API NSDate.timeIntervalSinceReferenceDate uses an epoch of January 1, 2001.
edit: Looks like Wikipedia has a handy list https://en.wikipedia.org/wiki/Epoch_(computing)#Notable_epoc...
The Open Group Base Specifications Issue 7, 2018 edition says that "time_t shall be an integer type". Issue 8, 2024 edition says "time_t shall be an integer type with a width of at least 64 bits".
C merely says that time_t is a "real type capable of representing times". A "real type", as C defines the term, can be either integer or floating-point. It doesn't specify how time_t represents times; for example, a conforming implementation could represent 2024-12-27 02:17:31 UTC as 0x20241227021731.
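A toy sketch of such a representation (Python here, purely for illustration): the integer's hex digits spell out the date, which satisfies "an integer type" while being useless for arithmetic.

    # 2024-12-27 02:17:31 -> 0x20241227021731: legal under C's loose wording, hostile to math
    def bcd_timestamp(y, mo, d, h, mi, s):
        return int(f"{y:04d}{mo:02d}{d:02d}{h:02d}{mi:02d}{s:02d}", 16)

    print(hex(bcd_timestamp(2024, 12, 27, 2, 17, 31)))  # 0x20241227021731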
It's been suggested that time_t should be unsigned so a 32-bit integer can represent times after 2038 (at the cost of not being able to represent times before 1970). Fortunately this did not catch on, and with the current POSIX requiring 64 bits, it wouldn't make much sense.
But the relevant standards don't forbid an unsigned time_t.
> If year < 1970 or the value is negative, the relationship is undefined.
That'd be like saying some points in time don't have an ISO 8601 year. Every point in time has a year, but some years are longer than others.
If you sat down and watched https://time.is/UTC, it would monotonically tick up, except that occasionally some seconds would be very slightly longer. Like 0.001% longer over the course of 24 hours.
https://www.erlang.org/doc/apps/erts/time_correction.html#ho...
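For scale, the arithmetic behind that figure for a one-second smear spread over 24 hours:

    # One extra second spread linearly across 86400 seconds:
    stretch = 1 / 86400
    print(f"each smeared second is ~{stretch * 100:.5f}% longer")  # ~0.00116%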
Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?
I know that timezones are a field of landmines, but again, that is a human construct where timezone boundaries are adjusted over time.
It seems we need to anchor on absolute time, and then render that out to whatever local time format we need, when required.
Yes. TAI or similar is the only sensible way to track "system" time, and a higher-level system should be responsible for converting it to human-facing times; leap second adjustment should happen there, in the same place as time zone conversion.
Unfortunately Unix standardised the wrong thing and migration is hard.
TAI is a separate time scale and it is used to define UTC.
There is now CLOCK_TAI in Linux [1], tai_clock [2] in c++ and of course several high level libraries in many languages (e.g. astropy.time in Python [3])
There are three things you want in a time scale:
* Monotonically increasing
* Ticking with a fixed frequency, i.e. an integer multiple of the SI second
* Aligned with the solar day
Unfortunately, as always, you can only choose 2 out of the 3.
TAI is 1 + 2: atomic clocks using the caesium standard, ticking at the frequency that is the definition of the SI second, forever increasing.
Then there is UT1, which is 1 + 3 (at least as long as no major disaster happens...). It is purely the orientation of the Earth, measured with radio telescopes.
UTC is 2 + 3, defined with the help of both. It ticks the SI seconds of TAI, but leap seconds are inserted at two possible time slots per year to keep it within 0.9 seconds of UT1. That tolerance is under discussion to be relaxed to a much larger value, practically eliminating future leap seconds.
The issue then is that POSIX chose the wrong standard for numerical system clocks. And now it is pretty hard to change and it can also be argued that for performance reasons, it shouldn't be changed, as you more often need the civil time than the monotonic time.
The remaining issues are:
* On many systems, it's not simple to get TAI (see the sketch below the links)
* Many software systems do not accept the complexity of this topic and instead just return the wrong answer using simplified assumptions, e.g. that there are no leap seconds in UTC
* There is no standardized way to handle leap seconds in the Unix timestamp, so on days around the introduction of a leap second the relationship between the Unix timestamp and the actual UTC or TAI time is not clear; several versions exist, resulting in uncertainty of up to two seconds
* There might be a negative leap second one day, and nothing is ready for it
[1] https://www.man7.org/linux/man-pages/man7/vdso.7.html
[2] https://en.cppreference.com/w/cpp/chrono/tai_clock
[3] https://docs.astropy.org/en/stable/time/index.html
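As a rough illustration (Python 3.9+ on Linux; a sketch, not production guidance): CLOCK_TAI only differs from CLOCK_REALTIME if something like chrony or ntpd has supplied the kernel with the current TAI-UTC offset; otherwise both return the same value.

    import time

    realtime = time.clock_gettime(time.CLOCK_REALTIME)  # Unix time (UTC-based)
    tai = time.clock_gettime(time.CLOCK_TAI)            # TAI seconds, same epoch
    # Prints 37 on a properly configured system (as of 2024); prints 0 on a box
    # where nothing has set the kernel's TAI offset -- the "not simple" problem.
    print(f"kernel TAI-UTC offset: {tai - realtime:.0f} s")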
I don't think that's true? You need to time something at the system level (e.g. measure the duration of an operation, or run something at a regular interval) a lot more often than you need a user-facing time.
In my original comment, when I wrote timezone, I actually didn’t really mean one of many known civil timezones (because it’s not), but I meant “timezone string configuration in Linux that will then give TAI time, ie stop adjusting it with timezones, daylight savings, or leap seconds”.
I hadn’t heard of the concept of timescale.
Personally I think item (3) is worthless for computer (as opposed to human-facing) timekeeping.
Your explanation is very educational, thank you.
That said, you say it’s simple to get TAI, but that’s within a programming language. What we need is a way to explicitly specify the meaning of a time (timezone but also timescale, I’m learning), and that that interpretation is stored together with the timestamp.
I still don’t understand why a TZ=TAI would be so unreasonable or hard to implement as a shorthand for this desire..
I’m thinking particularly of it being attractive for logfiles and other long term data with time info in it.
In theory, if you keep your clock set to TAI instead of UTC, you can use the /etc/zoneinfo/right timezones for civic time and make a (simpler) TAI zone file. I learned of that after I'd created the above though, and I can imagine all sorts of problems with getting the NTP daemon to do the right thing, and my use case was more TZ=TAI date, as you mentioned.
There's a contentious discussion on the time zone mailing list about adding a TAI entry. It really didn't help that DJB was the one wanting to add it and approached the issue with his customary attitude. There's a lot of interesting stuff in there though - like allegedly there's a legal requirement in Germany for their time zone to be fixed to the rotation of the earth (and so they might abandon UTC if it gives up leap seconds).
A remaining issue is that it is not easy to get proper TAI on most systems.
Local noon just doesn't matter that much. It especially doesn't matter to the second.
Before, it was simply the best clock available.
All your clocks are therefore synchronized to UTC anyway: it would mean you’d have to translate from UTC to TAI when you store things, then undo that when you retrieve. It would be a mess.
If you control all the computers that all your other computers talk to (and also their time sync sources), then smearing works great. You're effectively inventing your own standard to make Unix time monotonic.
If, however, your computers need to talk to someone else's computers and have some sort of consensus about what time it is, then the chances are your smearing policy won't match theirs, and you'll disagree on _what time it is_.
Sometimes these effects are harmless. Sometimes they're unforeseen. If mysterious, infrequent buggy behaviour is your kink, then go for it!
Computer clock speeds are not really that consistent, so “dead reckoning” style approaches don’t work.
NTP can only really sync to ~millisecond precision at best. I’m not aware of the state-of-the-art, but NTP errors and smearing errors in the worst case are probably quite similar. If you need more precise synchronisation, you need to implement it differently.
If you want 2 different computers to have the same time, you either have to solve it at a higher layer up by introducing an ordering to events (or equivalent) or use something like atomic clocks.
Google explicitly built spanner (?) around the idea that you can get distributed consistency and availability iff you control All The Clocks.
Smearing is fine, as long as its interaction with other systems is thought about (and tested!). Nobody wants a surprise (yet actually inevitable) outage at midnight on New Year's Day.
> and I don't record the timezone information on the date field
Very few databases actually make it possible to preserve timezone in a timestamp column. Typically the db either has no concept of time zone for stored timestamps (e.g. SQL Server) or has “time zone aware” timestamp column types where the input is converted to UTC and the original zone discarded (MySQL, Postgres). Oracle is the only DB I’m aware of that can actually round-trip non-local zones in its “with time zone” type.
It's a highly complicated topic, and it's amazing PostgreSQL decided to use an instant for its 'datetime with timezone' type instead of the Oracle mess.
For what it's worth, the libraries that are generally considered "good" (e.g. java.time, Nodatime, Temporal) all offer a "zoned datetime" type which stores an IANA identifier (and maybe an offset, but it's only meant for disambiguation w.r.t. transitions). Postgres already ships tzinfo and works with those identifiers, it just expects you to manage them more manually (e.g. in a separate column or composite type). Also let's not pretend that "timestamp with time zone" isn't a huge misnomer that causes confusion when it refers to a simple instant.
I suspect you might be part of the contingent that considers such a combined type a fundamentally bad idea, however: https://errorprone.info/docs/time#zoned_datetime
Postgres has timezone-aware datetime fields that translate incoming times to UTC, and outgoing ones to a configured timezone. So it doesn't store what timezone the time was in originally.
The claim was that the docs explain why not, but they don't.
We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.
You never worried or thought about it before, and you don’t need to! It’s done in the right way.
That kind of thing is already needed for timezone handling. Any piece of software that handles human-facing time needs regular updates.
I think it would make most of our lives easier if machine time was ~29 seconds off from human time. It would be a red flag for carelessly programmed applications, and make it harder to confuse system time with human-facing UK time.
Thankfully for me it was just a bunch of non-production-facing stuff.
Everything would be derived from that.
I suppose it would make some math more complex but overall it feels simpler.
I guess they just didn't foresee the problem, or misjudged the impact. I can imagine it being very "let's kick that problem down the road and just do a simple thing for now" approach.
Random example, the wonderful RealTime1987A project (https://bmonreal.github.io/RealTime1987A/) talks about detecting neutrinos from the supernova, and what information can be inferred from the timing of the detections. A major source of that data is the Kamiokande-II project. The data was recorded to tape by a PDP-11, timestamped by its local clock. That clock was periodically synced with UTC with a sophisticated high-tech procedure that consisted of an operator making a phone call to some time service, then typing the time into the computer. As such, the timestamps recorded by this instrument have error bars of something like +/- one minute.
If that’s the sort of world you’re in, trying to account for leap seconds probably seems like a complete waste of effort and precious computer memory.
The problem is leap seconds. Software just wasn't designed to handle 86401 seconds in a day, and leap seconds caused incidents at Google, Cloudflare, Qantas, and others. Worried that resolving all possible bugs related to days with 86401 seconds in them was going to be impossible to get right, Google decided to smear that leap second so that the last "second" isn't.
And if you've not seen it, there's the falsehoods programmers believe about time article.
Eg in most computing contexts, you can synchronize clocks close enough to ignore a few nanos difference.
All the satellites in all of the GNSS constellations are synchronized to each other and every device tracking them to within a few tens of nanoseconds. Yes, atomic clocks are involved, but none of them are corrected locally and they're running at a significantly different rate than "true" time here on earth.
A better analogy to practical networked computing scenarios would be this: receive a timestamp from a GNSS signal, set your local clock to that, wait a few minutes, then receive a GNSS timestamp again and compare it to your local clock. Use the difference to measure how far you've travelled in those few minutes. If you did that without a local atomic clock then I don't think it would be very accurate.
The receiver in your phone also needs pretty good short term stability to track the signal for all of the higher processing. It'd be absolutely fine to depend on PPS output with seconds or minutes between measurements.
https://en.m.wikipedia.org/wiki/Precision_Time_Protocol
It does require hardware support, though.
'cal 9 1752' is .. funny. I guess instead of doing this annoying a-periodic leap second business, they accumulated a bunch of leap seconds owed, and skipped 11 days at one go. Sysadmins at the time were of divided opinion on the matter.
In any case, dates only have to make sense in the context they are used.
Eg we don't know from just the string of numbers whether it's Gregorian, Julian, or Buddhist or Japanese etc calendar.
But seriously, https://xkcd.com/1179/
The offset between UTC and TAI is 37 seconds.
Pretty soon we'll have to defer to deep experts and fundamental libraries to do anything at all with time in our applications, a la security and cryptography.
[1] https://googleblog.blogspot.com/2011/09/time-technology-and-...
Seems like there's another corner cut here, where the behavior of leap years at the end of a century (where they're skipped if a year is divisible by 100 unless it's divisible by 400) is not accounted for.
I suppose using Unix time for dates in the far future isn't a good idea. I guess I'll file that away.
(For the curious, the way this seems to work is that it's calibrated to start ticking up in 1973 and every 4 years thereafter. This is integer math, so fractional values are rounded off. 1972 was a leap year. From March 1st to December 31st 1972, the leap day was accounted for in `tm_yday`. Thereafter it was accounted for in this expression.)
The article cites the original edition of POSIX from 1988.
The bug in question was fixed in the 2001 edition:
https://pubs.opengroup.org/onlinepubs/007904975/basedefs/xbd...
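For reference, the corrected formula from the 2001-and-later editions, transcribed into Python as a quick sketch (integer division throughout; the sanity check at the end is mine):

    def posix_seconds(tm_year, tm_yday, tm_hour, tm_min, tm_sec):
        # tm_year = years since 1900, tm_yday = days since Jan 1 (0-based)
        return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
                + (tm_year - 70) * 31536000
                + ((tm_year - 69) // 4) * 86400      # leap days since 1972...
                - ((tm_year - 1) // 100) * 86400     # ...minus the skipped century years
                + ((tm_year + 299) // 400) * 86400)  # ...plus the 400-year exceptions

    import calendar
    # 2024-12-27 02:17:31 UTC is day 361 (0-based) of year 124 (counted from 1900):
    assert posix_seconds(124, 361, 2, 17, 31) == calendar.timegm((2024, 12, 27, 2, 17, 31, 0, 0, 0))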
Not just Unix time, converting future local time to UTC and storing that is also fraught with risk, as there's no guarantee that the conversion you apply today will be the same as the one that needs to be applied in the future.
Often (for future dates), the right thing to do is to store the thing you were provided (e.g. a local timestamp + the asserted local timezone) and then convert when you need to.
(Past dates have fewer problems converting to UTC, because we don't tend to retroactively change the meaning of timezones).
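A minimal sketch of that approach in Python (the zone, date, and variable names are only illustrative):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Store what the user actually gave you: wall-clock fields plus their asserted zone.
    stored_wall = "2030-01-01T18:00:00"
    stored_zone = "Europe/Paris"

    # Convert only when you need the instant, using whatever rules are in force then:
    when = datetime.fromisoformat(stored_wall).replace(tzinfo=ZoneInfo(stored_zone))
    print(when.astimezone(ZoneInfo("UTC")))  # stays right even if the zone's rules change before 2030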
The issue is so widespread and complicated that they decided to stop introducing extra leap seconds so people can come up with something better in the coming decades - probably way later than the arrival of AGI.
Suppose you had a clock that counted seconds (in the way we understand seconds, moving forward one unit per second). If you looked at it in a few days at midnight UTC on NYE (according to any clock), it would not be a multiple of 86400 (number of seconds per day). It would be off by some 29 seconds due to leap seconds. In that way, Unix time is not seconds since the epoch.
Because of leap seconds, this is wrong. Midnight UTC tonight is in fact NOT a multiple of 86,400 real, physical seconds since midnight UTC on 1970-01-01.
Back of the envelope says ~100 years in low earth orbit will cause a difference of 1 second
> This is not true. Or rather, it isn’t true in the sense most people think.
I find that assertion odd, because it works exactly as I did assume. Though, to be fair, I'm not thinking in the scientific notion that the author may.
If we think of a second as a tick of some amount of time, it makes sense to just count up once each tick. That scientists inject a second here or there wouldn't interfere with such logic.
All of that said, the leap second is going away anyways, so hopefully whatever replaces it is less troublesome.
It would, but Unix timestamps don't. It works exactly not how you assume.
The article is claiming POSIX ignores injected leap seconds.
So maybe the author was right. Because different people are claiming different things.
In that example, Unix time goes from 915148799 -> 915148800 -> 915148800 -> 915148801. Note how the timestamp gets repeated during the leap second.
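You can see the same thing from the other direction with a couple of library calls (Python shown here; other languages behave the same way): 23:59:60 gets no timestamp of its own.

    import time
    # The 1998-12-31 leap second: POSIX pretends it never happened.
    print(time.gmtime(915148799)[:6])  # (1998, 12, 31, 23, 59, 59)
    print(time.gmtime(915148800)[:6])  # (1999, 1, 1, 0, 0, 0) -- the repeated timestamp lands here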
You just need to read the docs to understand their behavior. Some will smooth it out for you, some will jump for you. It would be a problem if you have 3rd party integrations and you rely on their timestamp.
Science and culture will rarely move hand-in-glove, so the rule of separation of concerns, to decouple human experience from scientific measurement, applies.
Much more important, though, is how it affects the future. The fact that timestamps in the past might be a few seconds different from the straightforward “now minus N seconds” calculation is mostly a curiosity. The fact that clocks might all have to shift by one more second at some point in the future is more significant. There are plenty of real-world scenarios where that needs some substantial effort to account for.
It most certainly matters to a lot of people. It sounds like you've never met those people.
This is ignoring the fact that, due to the equation of time, solar noon naturally shifts by tens of minutes over the course of the year.
To drive the point home: for example, local mean solar time at Buckingham Palace is already more than 30 seconds off from Greenwich time.
It is up to you to keep TAI for everything and let your representations of physical coordinates drift away into the galaxy or something, but that's not the majority choice. Overwhelming majority choose UTC time.
TAI is still nice for many high precision applications, weirdly including a lot of precisely those geo-spatial use cases, so we have both.
There are important, very good reasons why we try to keep UTC near UT1, so saying "it doesn't matter to anyone" without even entertaining that some people might care isn't very constructive.
Generally, it's useful for midnight to be at night, and midday during the day. UT1 is not regular, so you need some form of correction. Then the debate is about how big and how often.
If it can do this to Cloudflare, imagine everything left on legacy signed 32-bit integers
https://blog.cloudflare.com/how-and-why-the-leap-second-affe...
I don’t think that the definition that software engineers believe is wrong or misleading at all. It really is the number of seconds that have passed since Unix’s “beginning of time”.
But to address the problem the article brings up, here’s my attempt at a concise definition:
POSIX time, also known as Unix time, is the number of seconds since the Unix epoch, which was 1970-01-01 at 00:00:00, and does not include leap seconds that have been added periodically since the 1970s.
Seconds were originally a fraction of a day, i.e. of one Earth rotation: count 86400 of them and roll over to the next day. But Earth's rotation speed changes, so how much "time passing" fits in 86400 of those seconds varies a little. Clocks based on Earth's rotation get out of sync with atomic clocks.
Leap seconds go into day-rotation clocks so that their date matches the atomic-clock measure of how much time has passed - they are time which has actually passed and which ordinary time has not accounted for. So it's inconsistent to say both "Unix time really is the number of seconds that have passed" and "it does not include leap seconds", because those leap seconds are time that has passed.
If a day has 86,401 or 86,399 seconds due to leap seconds, POSIX time still advances by exactly 86,400.
If you had a perfectly accurate stopwatch running since 1970-01-01 the number it shows now would be different from POSIX time.
[0] https://pubs.opengroup.org/onlinepubs/9799919799/functions/c...
A delta between two monotonic values should always be non-negative. This is not true for Unix time.
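Which is why duration measurements belong on a monotonic clock rather than on Unix time; a minimal Python illustration:

    import time

    t0 = time.monotonic()            # monotonic: never steps backwards
    time.sleep(0.1)                  # stand-in for the work being timed
    elapsed = time.monotonic() - t0  # guaranteed >= 0
    # The same subtraction with time.time() (Unix time) can come out negative
    # if NTP, an admin, or leap-second handling steps the wall clock.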
If, however, you think it's a float, then you can.
Imagine a timestamp defined as days since January 1, 1970, except that it ignores leap years and says all years have 365 days. Leap days are handled by giving February 29 the same day number as February 28.
If you do basic arithmetic with these timestamps to answer the question, “how many days has it been since Nixon resigned?” then you will get the wrong number. You’ll calculate N, but the sun has in fact risen N+13 times since that day.
Same thing with leap seconds. If you calculate the number of seconds since Nixon resigned by subtracting POSIX timestamps, you’ll come up short. The actual time since that event is 20-some seconds more than the value you calculate.
It works out that Unix time spits out the same integer for 2 seconds.
However, after looking hard at the tables in that Wikipedia article comparing TAI, UTC, and Unix time, I think you might actually be correct-- TAI is the atomic time (that counts "real seconds that actually happened"), and it gets out of sync with "observed solar time." The leap seconds are added into UTC, but ultimately ignored in Unix time.* ~~So Unix time is actually more accurate to "real time" as measured atomically than solar UTC is.~~
The only point of debate is that most people consider UTC to be "real time," but that's physically not the case in terms of "seconds that actually happened." It's only the case in terms of "the second that high noon hits." (For anyone wondering, we can't simply fix this by redefining a second to be an actual 24/60/60 division of a day because the Earth's rotation is apparently irregular and generally slowing down over time, which is why UTC has to use leap seconds in order to maintain our social construct of "noon == sun at the highest point" while our atomic clocks are able to measure time that's actually passed.)
*Edit: Or maybe my initial intuition was right. The table does show that one Unix timestamp ends up representing two TAI (real) timestamps. UTC inserts an extra second, while Unix time repeats a second, to handle the same phenomenon. The table is bolded weirdly (and I'm assuming it's correct while it may not be); and beyond that, I'm not sure if this confusion is actually the topic of conversation in the article, or if it's just too late in the night to be pondering this.
It's wrong and misleading in precisely the way you (and other commenters here) were wrong and misled, so it seems like that's a fair characterization.