I don't know for sure whether superintelligence will happen, but as for the singularity, this is the underlying assumption I have the most issue with. Being smart isn't the limiting factor of progress; often it's building consensus, getting funding, waiting for results, waiting for parts to ship, waiting for the right opportunity to come along. We do _experiments_ faster than natural selection does, but we still have to do them in the real world. Solving problems happens on the lab bench, not just in our heads.
Even if exponentially more intelligent machines get built, what's to stop the next problem on the road to progress being exponentially harder? Complexity cuts both ways.
More important, though, is that insufficiently powerful AI has been a limiting factor in robotics. That seems to be coming to an end now. And once we have humanoid robots powered by superhuman intelligence entering the workforce, the impact will be massive.
Quite possibly mostly for the worse for most people.
There's definitely a point where models get good enough that you can trust them, but superintelligence doesn't mean an AI can both come up with a model and trust it without validation.
In any case, even if some validation is needed, AI can speed up this kind of science by at least an order of magnitude.
10k experiments may seem like a lot, but keep in mind that if we can engineer nanobots out of proteins the same way we build engines from steel today, the number of "parts" we may want to build using such biological nanotech may easily go into the millions.
And this kind of AI may very well be as useful for such tech as CAD is today. Or rather, it can be like the CAD + the engineer.
That’s the bottleneck the model was trying to avoid in the first place. The goal of science is to come up with models we don’t need to validate before use, and it’s inherently iterative.
Nanobots are more sci-fi magic than real-world possibility. In the real world we are stuck with things closer to highly specialized cellular machinery than some do-anything grey goo. Growing buildings from local materials seems awesome until you realize just how slowly trees grow, and why.
Some real world validation is always needed, but if the validations that are performed show high accuracy, the number of experiments will go down a lot.
> Nanobots are more sci-fi magic than real-world possibility.
Let's revisit this one in 10 years.
The underlying physics isn’t changing in 100 years. Individual components can be nanoscale within controlled environments, but you simply need more atoms to operate independently.
I don't expect the first ASI to conduct novel physics experiments, but I can easily believe that after optimizing the supply chain for everything else, it just happens to have stashed a bunch of magnets and whatnot to do these in its spare time.
But they do conveniently fit into the century old buildings we put many of the factories into, which makes them a useful upgrade path for those unwilling to build structures around more efficient robots (the kind we've had for ages and don't even think of as robots, they just take ingredients and pump out packaged candy or pencils etc.)
If what you are saying is that many factories cannot run with humans running around fixing things, I agree. But that's pretty different from using humanoids to put items in boxes.
Even just 3.5 years ago, it seemed like everyone was saying that humanoid robots were a dead end, or an unnecessary part of Isaac Asimov's vision of the future, or similar.
I think much of the current interest is because Musk watched some scifi, ordered the Optimus project, and loads of others decided it would be a mistake to bet against him.
I put them in the same category as 3D printers: they can do anything, but you can always find a better special-purpose alternative for any specific goal.
Still, a lot of people use 3D printing productively despite that; the same will likely be true for humanoid robots.
Well, if the AI is good enough. Remote control has its uses, but even then you need enough on-board AI to avoid playing QWOP as live action, with a robot holding industrial equipment instead of in a safe flash game.
That said, stamina is probably the least important aspect — in an industrial setting you probably have a lot of power lines already installed.
Will we just sit around and do nothing then? I'm not saying we have to work, but there is some level of work that I think is required for happiness / fulfillment etc.
I'm not even really against the idea, it just sounds quite dystopian to me.
For fairly positive takes — Asimov had a take in the robot novels, Accelerando by Charles Stross touches on reputation-based currency (among a deluge of other ideas), Iain M Banks’ Culture novels have a take, and I cannot find it but there was a short story posted here recently about a dual-class system where the protagonist is rescued and whisked off to a utopian society in Australia where people do whatever they like all day whether it be fashion design or pooling their resources to build a space elevator. There are plenty of dystopian tales as well but they’re less fun to read and I don’t have a recommendation off the top of my head.
To answer your question directly, my opinion is that our base nature probably leads us towards dystopia, but our history is full of examples of humans exceeding that base nature, so there's always a chance.
I won't say anything more in case you decide to read it, but it's amazing how the author managed to predict the future the way he did.
Thanks for the response and fingers crossed.
If robots can do the same job as humans, but faster, cheaper, and at a higher quality, our employers/customers will most likely replace us.
If we're lucky, we may find some niche, be able to live off our savings or maybe be granted some UBI, but I absolutely do think it's concerning.
What is worse is that if we become obsolete in every way, it's not obvious that whoever is in power at that point will see any point in keeping us around (especially a few generations in).
While we do not today need to ask how people can afford robot lawnmowers despite being unable to find work hitching ploughs to draft horses or oxen, fears like this did, at the time, lead to mobs smashing looms.
If I have some (n) robots that can do any task a human could do, one such task must have been "make this specific robot"*. If those n can make 2n robots before they break, and it takes 9 months to do so, and the mass of your initial set of n is 100 kg, they fully disassemble the moon in roughly 52 years. Also you can give (94.2 billion * n) robots to each human currently alive.
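A quick back-of-envelope check of that doubling arithmetic (a sketch; the Moon's mass and the population figure are my own assumed inputs, not numbers given in the comment):

```python
import math

# Assumed inputs (not from the comment above): Moon mass and world population.
moon_mass_kg = 7.35e22        # approximate mass of the Moon
seed_mass_kg = 100.0          # mass of the initial set of n robots
doubling_time_years = 0.75    # 9 months per robot generation

doublings = math.log2(moon_mass_kg / seed_mass_kg)   # ~69.3 doublings
years = doublings * doubling_time_years              # ~52 years
print(f"{doublings:.1f} doublings -> {years:.0f} years")

humans = 7.8e9
robots_per_human_per_n = (moon_mass_kg / seed_mass_kg) / humans   # ~9.4e10, i.e. ~94 billion
print(f"~{robots_per_human_per_n:.3g} * n robots per human")
```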
Asking "who can afford it" at that point is like some member of the species Kenyanthropus platyops asking how many knapped flints one must gather in order to exchange for a transatlantic flight from London to Miami, and how anyone might be able to collect them if we've all stopped knapping flint due to the invention of steel:
The economics are too alien, we cannot imagine this kind of thing accurately on the basis of anything we have available with which to anchor our expectations.
* including the entire chain of tools necessary to get there from bashing rocks together.
The industrial revolution didn't really change anything about land.
It's still a fundamental and underrated component of our economic system, arguably more important than capital. That's why Georgism is a thing. Indeed, it's contemporary with the industrial revolution.
> The economics are too alien, we cannot imagine this kind of thing accurately on the basis of anything we have available with which to anchor our expectations.
I would refrain from making such wild predictions about the future. As I have pointed out, the industrial revolution didn't change the fundamental importance of land. Arguably, it's even more important and relevant today, given how our land use policy is disastrous for our species and climate.
So, yes. It is important to ask how consumers will pay for all these robots if they don't have any sort of income that would make using robots economical.
I didn't say otherwise.
I said the industrial revolution changed what wealth meant. We don't pay rents with the productive yield of vegetable gardens, and a lawn is no longer a symbol of conspicuous consumption signifying that the owner/tenant is so rich they don't need all their land to be productive.
And indeed, while land is foundational, it's fine to just rent that land in many parts of the world. Even businesses do that.
I still expect us to have money after AI does whatever it does (unless that thing is "kill everyone"), I simply also expect that money to be an irrelevant part of how we measure the wealth of the world.
(If "world" is even the right term at that point).
> Arguably, it's much more important, and even more relevant today given how our land use policy is disastrous for our species and climate.
Not so; land use policy today is absolutely not a disaster for our species, though some specific disasters have happened on the scale of the depression era dustbowl or more recently Zimbabwe. For our climate, while we need to do better, land use is not the primary issue, it's about 18.4% of the problem vs. 73.2% being energy.
> So, yes. It is important to ask how consumers will pay for all these robots if they don't have any sort of income that would make using robots economical.
With a 2 year old laptop and model, making a picture with Stable Diffusion in a place where energy costs $0.1/kWh, costs about the same as paying a human on the UN abject poverty threshold for enough food to not starve for 4.43 seconds.
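A rough reconstruction of that comparison (a sketch; the per-image energy and poverty-line figures are my own assumptions, not necessarily the commenter's exact inputs):

```python
# Assumed inputs (rough figures for illustration).
energy_per_image_kwh = 0.001            # ~1 Wh for one Stable Diffusion image on a laptop GPU
electricity_usd_per_kwh = 0.10
image_cost_usd = energy_per_image_kwh * electricity_usd_per_kwh   # ~$0.0001

poverty_line_usd_per_day = 1.90         # approximate UN extreme-poverty threshold
poverty_usd_per_second = poverty_line_usd_per_day / 86_400

print(image_cost_usd / poverty_usd_per_second)   # ~4.5 seconds of poverty-line income
```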
"How will we pay for it" doesn't mean the humans get to keep their jobs. It can be a rallying call for UBI, if that's what you want?
But robots-with-AI that can do anything a human can do, don't need humans to supply money.
I'm having real difficulty reading this unit of measurement. Let me see if I can get this right: a typical person can survive indefinitely on 1600 calories a day. Let's say these are provided by rice (which isn't sufficient for a long-term diet, but is good enough for a while). 1600 calories of rice is about 8 cups/24h, and there are about 10000 grains in a cup, so is it that an image can be generated at the same cost as:
(4.43 s / 86400 s) * 8 cups * 10000 grains/cup
Being about 4 grains of rice?
Georgism doesn't exist in a vacuum. It wasn't formulated during the time when wealth 'meant' land; it was formulated during the industrial revolution, possibly as a response to the problems its proponents saw in their society, problems we're still dealing with today.
Land no longer merely means the plot where a vegetable garden's productive yield comes from. Anything that capital sits on is land. That includes your factories and your datacenters. Yes, that includes renting land from someone else. That's land policy.
Housing? Land policy. Pollution? Land policy. Transportation? Land policy. Can't afford to live? Likely your biggest ticket items include transportation and housing. Land is more important than ever.
Now, what does this have to do with AI? I would caution against thinking of money or capital as irrelevant, or against making any definitive prediction about the impact of AI, or when or how it will come.
Edit: I see that you added stuff, but you have a narrow conception of land policy.
Then you define land so broadly that it includes the empty vacuum of space, which robots are much better suited to than we are and can exploit trivially when we cannot.
If you want to, that's fine, but it still doesn't need humans to be able to pay for anything.
Yes, satellites are robots. However, they have no agency. The incentive structure decides whether we get Kessler syndrome, which then directs humans to solve problems with robots.
So, yes, they are either directly analogous to land or a literal form of it.
Which reminds me of a blog post I want to write.
> Similarly you just can't have random folks blasting out radio signals at random.
That's literally what the universe as a whole does.
You may not want it, but you can definitely do it.
> Yes, satellites are robots. However, they have no agency.
Given this context is "AI", define "agency" in a way that doesn't exclude the people making the robots and the AI.
> Incentive structure decides if we have kessler syndrome, which then direct humans to solve problems with robots.
Human general problem-solving capacities do not extend to small numbers such as merely 7.8e20.
For example, consider the previous example of the moon: if the entire mass is converted into personal robots and we all try to land them, the oceans boil from the heat of all of them performing atmospheric braking.
And then we all get buried under a several mile thick layer of robots.
This doesn't prevent people from building them. The incentive structures as they currently exist point in that direction, of a Nash equilibrium that sucks.
Humans do not even know how to create an incentive structure sufficient to prevent each other from trading in known carcinogens for personal consumption, even when the packaging is labelled with explicit traumatic surgical intervention images and the words "THIS CAUSES CANCER" in big bold capital letters on the outside.
If anyone knew how to do so for AI, the entire question of AI alignment would already be solved.
(Solved at one level, at least: we're still going to have to care about mesa-optimisers because alignment is a game of telephone).
With capitalism, wealth shifted to controlling "capital", ie the "means of production". Either directly or indirectly by owning money that could (through lending) carry interest. Also during capitalism, workers have for a while been able to collect a significant part of the wealth generated as salaries (even if most would spend that rather than invest it).
If AI can bring the cost of labor down to near zero, we can be going back to a world where wealth again means "land", even if mines may be more valuable than farms in such a future.
And just as in the Dark Ages of Europe, the ability to project physical power may again become necessary to hold on to those assets.
This is particularly true if the entity that seeks to control the land is doing it in a way that threatens the existence of other entities, either AI's or humans.
1. https://en.wikipedia.org/wiki/Henry_George
So, yes. Wealth means "land". Especially so in the industrial revolution.
Middle ages: 500-1500
Henry George, 1839-1897, is indeed part of the industrial revolution. @trashtester and I were both comparing what happened before the industrial revolution to what happened in it.
The valuation of land before that still wasn't Georgism: the land was assumed to have a productive output, and if someone didn't pay taxes based on that assumption, perhaps because the land wasn't actually that productive, that was their problem.
And you know what else happened in the industrial revolution? Karl Marx and Adam Smith, the former placing workers rather than land at the root, and the latter placing capital rather than land at the root. As with HG, neither liked rent-seekers, and they were both more influential in the "solutions" than Henry George. (Not that he wasn't, they were just more).
Not that Henry George could possibly have foreseen even mere Earth orbitals as "land", let alone disassembling the moon into a swarm of robots that outnumber humans by more than humanity outnumbers a single human, and which don't need to land on Earth, which is good because if they did we'd all die just from the landing. Can't blame him for that; the difficulty of seeing clearly this far ahead is why some call it "the singularity" (though I prefer "event horizon").
Or there could be some billionaire caste constructing ever grander monuments to their own vanity.
Or the production could go to serve any number of other goals that whoever is in charge (human or AI) sees as more important than the economic prosperity of the general population.
It's more the philosophical side that concerns me.
I don't really worry about this being a billionaires-only club either. We've seen it already with AI products: there is an abundance of competition, including open source alternatives, already available. It will be the same with robotics.
Also scary are military robots gone rogue. Definitely not a fun prospect.
I'm personally really into surfing and skiing. Honestly, if the robots somehow let me spend more time fishing, surfing, and skiing, I'm pretty cool with all of that. But I know a lot of people who don't have these passions, and work is a strong reason for their existence.
That's true. But it's far from clear that these machines will be "at our disposal" for very long.
> Also scary are military robots gone rogue.
I'm not concerned with military robots going rogue on their own. My concern is if the fully autonomous factories that have the capability to MAKE military robots (and then control them) go rogue.
A factory can exist in such a "rogue" state, unknown to the owners and maybe even to itself, for years or decades before it even starts producing such robots. Meanwhile, it can evolve new capabilities and switch product categories multiple times.
It doesn't even have to have any negative intentions against humanity. It may simply detect that a rival AI "factory" entity is developing plans to wage physical war against it and join it in an arms race.
In this ASI vs ASI type of world war, human lives may be like candles in the wind.
If so, you'd have more time to dedicate to those projects.
If not, maybe you would be inspired to try a new project that you didn't have time for previously.
There's always work to be done. Some people could actually become organized, exercise, spend more time with their families, be better parents.
In the past when I've been unemployed I've spent the time to refine myself in new ways. If you've never had a sabbatical I suggest trying it if you have the opportunity.
People tend to think their special gift is what the world needs, and academically-minded smart people (by that I mean people that define their self-worth by intelligence level) are no different.
Sure, some level of intelligence is required, which may be above average. But that is a necessary requirement, not a sufficient one. Raw intelligence is only useful to a certain extent here, and exceeding certain limits may actually be detrimental.
"“I wrote the Dune series because I had this idea that charismatic leaders ought to come with a warning label on their forehead: "May be dangerous to your health." One of the most dangerous presidents we had in this century was John Kennedy because people said "Yes Sir Mr. Charismatic Leader what do we do next?" and we wound up in Vietnam. And I think probably the most valuable president of this century was Richard Nixon. Because he taught us to distrust government and he did it by example.”
Edit: Maybe what we really need to worry about is an AI developing charisma....
That is the most immediate worry, by a wide margin. It seems to be dangerously charismatic even before it got any recognizable amount of "intelligence".
All of this is just an overvaluation of intelligence, in my opinion, and largely comes from arrogance.
An aligned AI is not AGI, or whatever they want to call it.
There's a few ways I can interpret that.
If you mean "alignment and competence are separate axies" then yes. That's well understood by the people running most of these labs. (Or at least, they know how to parrot the clichés stochastically :P)
If you mean "alignment precludes intelligence", then no.
Consider a divisive presidential election between Alice and Bob (no, this isn't a reference to the USA), each polling 50%: regardless of personal feelings or the candidates themselves, clearly the campaign teams are both competent and intelligent… yet each candidate is only aligned with 50% of the population.
Of any specific human to any other specific human?
https://benwheatley.github.io/blog/2019/05/25-15.09.10.html
Of any specific human to a nation? That's the example you replied to.
Of all the people of a nation to each other? Best we've done there is what we see in countries in normal times, with all the strife and struggles within.
We have yet to fully extend from nation to the world; the closest for that is the UN, which is even less in agreement with itself than are nations.
"Alignment" is only possible up to a vague approximation, and an entirely perfectly aligned with another entity would essentially be a shadow rather than a useful assistant because by being perfectly aligned the agent would act tired exactly when the person was tired, go shopping exactly when the human would, forget their keys exactly when the human would, respond exactly like the human to all ads and slogans, etc.?
I agree, though:
(1) this has already been observed, last year's OpenAI dev day had (IIRC) a story about a writer who fine tuned a model on their slack (?) messages, they asked it to write something for them, the response was ~"sure, I'll get on it tomorrow".
(2) for many of those concerned with "solving alignment", it's sufficient for the agent to never try to kill everyone just to make more paperclips etc.
Sure, there are some political problems where you have to convince people to comply. But consider a rich corporation building a building, which will only contract with other AI-driven corporations whenever possible; they could trivially surpass anyone doing it the old way by working out every non-physical task in a matter of minutes instead of hours/days/weeks, thanks to silicon’s superior compute and networking capabilities.
Even if we drop everything I’ve said above as hogwash, I think Vinge was talking about something a bit more directly intellectual, anyway: technological development. Sure, there’s some empirical steps that inevitably take time, but I think it’s obvious why having 100,000 Einsteins in your basement would change the world.
Imagine a scenario where instead of AI, a billion dollar pill could make one person exponentially smarter and able to communicate with thousands of people per second.
That does not have the same appeal.
This provokes me to some musings on the theme.
We imagine superintelligence to be subservient, evenly distributed, and morally benign at least.
We don’t have a lot of basis for these assumptions.
What we imagine is that a superintelligence will act as a benevolent leader; a new oracle; the new god of humanity.
We are lonely and long to be freed of our burdens by servile labor, cured of our ills by a benevolent angel, and led to the promised land by an all knowing god?
We imagine ourselves as the stewards of the planet but yearn for irrelevance in the shadow of a new and better steward.
In AI we are creating a new life form, one that will make humans obsolete and become our evolutionary legacy.
Perhaps this is the path of all technological intelligences?
Natural selection doesn’t magically stop applying to synthetic creatures, and human fitness for our environment is already plummeting with our prosperity.
As we replace labor with automation, we populate the world with our replacement, fertility rates drop, we live for the experience of living, and require yet more automation to carry the burdens we no longer deem worthy of our rarified attention.
I’m not sure any of this is objectively good, or bad. I kinda feel like it’s just the way of things, and I hope that our children, both natural and synthetic, will be better than we were.
As we prosper, will we have still fewer children? Will we seek more automation, companionship in benevolent and selfless synthetic intelligence, and more insulation from toil and strife, leading to yet more automation, prosperity, and childlessness?
Synthetic intelligence will probably not have to “take over”, it will merely be filling the void we willingly abandon.
I suspect that in a thousand years, humans will be either primitive, or vanishingly rare. Or maybe just non-primitive humans will be rare, while humans returning to nature will proliferate prodigiously as we always have, assuming the environment is not too hostile to complex biological life.
Interesting times.
Also, I would be cautious about making predictions about the future.
As for me, I think many things, suppose a few, imagine lots, conjecture some, believe very little, and know, most of all, that I know nothing.
“Believing”, in anything, is a dangerous gambit. Knowing and believing are very distinct states of mind.
Like the Argentine ant that invaded the world and eventually diverged just enough to start warring with itself.
And I was sad to notice he died this year, aged 79. A real CS prof who wrote sci-fi.
- if a computer system were able to design a better computer system, how much would it cost to then manufacture said system? How much would it cost to build the fabrication facilities necessary to create this hypothetical better computer?
- once this new computer is running, how much power does it require? What are the on-going costs to keep it running? What sort of financial planning and preparations are required to build the next generation device/replacement?
I'd be satisfied with a Large-Language-Model which:
- ran on local hardware
- didn't have a marked effect on my power bill
- had a fully documented provenance for _all_ of its training which didn't have copyright/licensing issues
- was available under a license which would allow arbitrary use without on-going additional costs/issues
- could actually do useful work reliably with minimal supervision
Most of the computers we use today were designed by software: Feature sizes are (and have been for some time) in the realm where the Schrödinger equation matters, and more compute makes it easier to design smaller feature sizes.
Similar points apply to the question of cost: it has not been constant, the power to keep x-teraflops running has decreased* while the cost to develop the successor has increased.
Regarding LLMs in particular, I believe there are already models meeting all but one of your criteria — though I would argue that the missing one, "could actually do useful work reliably with minimal supervision", is by far the most important.
* If I read this chart right, my phone beats the combined top 500 supercomputers when the linked article was written by a factor of ten or so: https://commons.m.wikimedia.org/wiki/File:Supercomputers-his...
How it gets from here to there is a handwave, though.
- Some multipolar molecules that could attract or repel each other and initiate both hydrogen bonds and covalent bonds.
- Movement through some kind of medium not as chaotic as gas.
- A boundary between self and everything else.
Industrial processes to create machines don't lend themselves to this kind of bottom-up model as they now stand. It's not at all clear that etched silicon is the right medium of computation at all if we want it to be self-replicating. Coming up with something else is a hell of an ask and a hell of a handwave, though. It would also entail getting rid of the entire impetus of Moore's law and the reason futurologists like this ever thought there would be a singularity in the first place. Other than transistor density on a silicon die, I'm not sure any other technology has ever shown consistent long-term exponential growth at some reasonably fixed exponent.
That's the big problem. LLMs can't be allowed to do anything important without supervision. We're still at 5-10% totally bogus results.
Sometimes that's valuable and exactly what you need, but problems arise when people try to treat them as some sort of magical oracle that just needs to be primed with the right text.
Even "conversational LLMs are just updating a theater-style script where one of the characters happens to be described as a computer.
> if a computer system were able to design a better computer system, how much would it cost to then manufacture said system?
I think the implication is that the primary advancements would come in the form of software. IMO it's trivially true that we're not taking full advantage of the hardware we have from a software PoV -- if we were, we wouldn't need SWEs, right? From that it should follow that self-improving software is dangerously effective.
> once this new computer is running, how much power does it require? What are the on-going costs to keep it running?
I mean, lots, sure. But we allocate immense resources to relatively trivial luxuries in this world; I don't think there's any reason to think we can't spare some giant computers to rapidly advance our technology. In a capitalist society, it's happily/sadly pretty much guaranteed that people will figure out how to get the resources there if scientists tell them the RoI is infinity+1.
> I'd be satisfied with a Large-Language-Model which
Those are great asks and I agree, but just to be super clear in case it's not: Vinge isn't talking about chatbots, he's talking about systems with many smaller specialized subsystems. In today's parlance, a gaggle of "LLMs" equipped with "tool use", or in yesterday's parlance, a "Society of Mind".
> Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
Good Morning HN.
EDIT: Rest in peace. A Fire Upon the Deep was great.
That should happen at some finite time and be a major change in things. I'd kinda expect it before Kurzweil's singularity date of 2045. Vinge's date of 2023 was too early.
Of course, exponential growth is much more compatible with our experience of the real economy. And even it is probably a local approximation of some sigmoid.
But, to return to the singularity idea --
Iteration 1: Computers think at speed 1, and design a twice-as-fast computer in one time unit.
Iteration 2: Now computers think at speed 2, and design a twice-as-fast computer in half a time unit.
Iteration 3: Computers think at speed 4, and design a twice-as-fast computer in 1/4 time unit.
You will note that --
a.) The total time to do an infinite number of iterations is 1 + 1/2 + 1/4 + ... = 2 time units.
b.) After this infinite number of iterations, the computer thinks at speed "2^infinity".
So that (bad) model does have a literal singularity.
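Written out explicitly, with the same toy numbers as above (just restating the iterations as a geometric series):

```latex
% Speed after k iterations, and time needed to finish the first k iterations.
\[
  v_k = 2^{k}, \qquad
  T_k = \sum_{i=0}^{k-1} 2^{-i} = 2 - 2^{1-k} \;\longrightarrow\; 2
  \quad\text{as } k \to \infty .
\]
% Total time converges to 2 units while the speed diverges: a literal singularity at t = 2.
```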
Even without talking about AI, we are already struggling with levels of complexity in tech and unpredictable consequences that no one really has any control over.
Michael Crichton's books touch on that stuff but are all doom and gloom. Vinge's Rainbows End at least felt much more hopeful.
I was talking to a VFX supervisor recently, and he was saying: look at the end credits on any movie (even mid-budget ones) and you see hundreds to thousands involved. The tech roles outnumber the artistic/creative roles 20 to 1. That's related to the rate of change in tech. A big gap opens up between that and the rate at which artists evolve.
The artists are supposed to be in charge and provide direction and vision. But the tools are evolving faster than they can think. But the tools are dumb. AI changes that.
These are rare environments (like R&D labs) where the Explore/Exploit tradeoff tilts in favor of explorers. In the rest of the landscape, org survival depends on exploit. It's why we produce so many inequalities. Survival has always depended more on exploit.
Vinge's Rainbows End shows AI/AGI nudging the tradeoff towards explore.
It's more likely just going to post ragebait and dumb TikTok videos while producing just enough at its 'job' to fool people into thinking it's doing a good job.
What you are alluding to is media/social media's current architecture and how it captures and steals people's attention. Totally on the Exploit end of the tradeoff. And it's easy stuff to do. It doesn't take time.
If you read the news after the fall of France to the Nazis (within a month), what do you think the opinion of people was? People were thinking about peace negotiations with Hitler and that the Germans couldn't be beaten. It took a whole lot of Time to realize things could tilt in a different direction.
I'm talking about evolutionary functions, and how much more likely they are to prefer something that has fun and just looks like it's doing something, instead of actually doing something.
Aka manipulation vs actual hard work.
Do you have any concrete proposals, besides ‘it will get better’?
Actual competency is hard. Faking it is usually way easier.
It’s the same reason the ‘grey goo’ scenarios were actually pipe dreams too. [https://en.m.wikipedia.org/wiki/Gray_goo]
That shit would be really hard, thermodynamically, not to mention technically.
We’re already living in the best ‘grey goo’ scenario evolution has come up with, and I’m not particularly worried.
So don't just sweep the fact that things take Time under the carpet. It's not healthy, because it's like looking at tree shoots in the ground and asking why they don't look like a tree yet.
Finding gold in an unexplored jungle takes much longer than extracting gold from an existing mine. This is the Explore/Exploit tradeoff. Exploit is easy; more people do it. Explore is hard, and takes more time. If AI shifts the balance towards explore, the story changes.
If you want to talk about Explore in Media/attention (mis)allocation you can already see the appearance of green shoots in the ground. There are multiple things going on parallely.
First, there is a realization that Attention is finite and doesn't grow, while Content keeps exploding. Totally unsustainable, to the point that the UN has published a report about the Attention Economy. This doesn't happen without people reacting and going into explore mode for solutions.
They are already talking about how to shift these algos/architectures from units of Time spent consuming (Exploit) to Value derived from time spent.
Giving people feedback on how their time is divided between consumption (entertainment) and value, then allowing them to create schedules. This is what you now start seeing as digital wellbeing tech.
There are now time-based economic models where the platform doesn't just assume time spent is free, but treats it as something the platform needs to pay for. People are experimenting with rewards and micropayments. All these are examples of explore mode being activated.
There is also a realization that content discovery on centralized platforms like YouTube, TikTok, and Instagram causes homogeneity in what everyone upvotes. So you see people reacting and decentralizing to protect and preserve niches. AI (a curator of curators) will play a big role in finding the niches that fit your needs.
I'll just end with: people are also realizing there is a huge misallocation of Ambition/Drive problem. Anthony Bourdain says "life is good" in every show of his and then kills himself. Shaq says he has 40 cars but doesn't know why. Since media (society's attention allocator) has tied success to wealth/status accumulation, conspicuous consumption/luxury/leisure, etc., people end up in these kinds of traps. So now we are seeing reactions, especially around climate change/sustainability, that ambition and energy have to be shown other paths. Lots of changes in advertising and media companies around it. All are explore-mode functions.
The Coming Technological Singularity (1993) - https://news.ycombinator.com/item?id=35617100 - April 2023 (169 comments)
The coming technological singularity: How to survive in the post-human era [pdf] - https://news.ycombinator.com/item?id=35184764 - March 2023 (2 comments)
The Coming Technological Singularity: How to Survive in the PostHuman Era (1993) - https://news.ycombinator.com/item?id=34456861 - Jan 2023 (1 comment)
The Coming Technological Singularity (1993) - https://news.ycombinator.com/item?id=11278248 - March 2016 (8 comments)
The Coming Technological Singularity (original essay on the Singularity, 1993) - https://news.ycombinator.com/item?id=823202 - Sept 2009 (1 comment)
The original singularity paper - https://news.ycombinator.com/item?id=624573 - May 2009 (17 comments)
I find it bizarre how often these points are repeated. They were both obviously wrong in 1993, and obviously wrong now.
1) A nitpick I've had since grad school: the answer to "can we create a machine equivalent to a human mind [assuming arbitrary resources]?" is "yes, of course." The atoms in a human body can be described by a hideously ugly system of Schrödinger equations and a Turing machine can solve that to arbitrary numerical precision. Even Penrose's loopy stuff about consciousness doesn't change this. QED.
2) The more serious issue: I sincerely have no idea why people believe so strongly that a human-level AI can build a superhuman AI. It is bizarre that this claim is accepted with "little doubt" when there are very good reasons to doubt it: how on earth would such an AI even know it succeeded? How would it define the goal? This idea makes sense for improving Steven Tyler-level AI to Thelonious Monk-level; it makes no sense for a transition like chimp->human. Yet that is precisely the magnitude of transition envisioned with these singularity stories.
You might defend the first point by emphasizing "can we create a human-level AI?" i.e. not whether it's theoretically possible, but humanly feasible. This just makes the second point even more incoherent! If humans are too stoopid to build a human-level AI, why would a human-level AI be...smarter than us?
I just don't understand how anyone can rationally accept this stuff! It's so dumb! Tech folks (and too many philosophers) are hopped up on science fiction: the reason these things are accepted with "little doubt" is that this is religious faith dressed up in the language of science.
This touches on one of the few good reasons to be less ardent about AI/AGI: "intelligence" is not very well-defined and we don't have very good ways of measuring it. I don't think this is a total blocker, but it might present difficulties. What if our current approach ends up creating super-autism instead of super-intelligence? There's a long history (starting with Asimov) of drilling down into how vague things get when you start trying to draw clean lines around AI and its implications and those questions are yet to be definitively answered.
However, your broader point seems to imply that you can't "bootstrap" intelligence, which I don't find convincing. Humans, after all, could barely master fire a few hundred thousand years ago, and now we have an understanding of the universe that the earliest humans were incapable of comprehending on even a basic level. It's obvious to me that simpler things are capable of building more complex things; blind evolution can do it, so there's nothing in physics preventing intelligence bootstrapping. We also have the ability to use intellectual division of labor to build tools that vastly enhance our abilities as a species. The human brain as hardware is far from some impassable apex; hardware can always be used to build better hardware, much like the earliest CPUs were themselves used to design better CPUs.
How about this weaker statement: "It is not obviously true that humans (or a human-level AI) can bootstrap to a superhuman AI in a small number of years."
I do think GenAI will prove to be very useful, possibly even world-changing (for better or worse), and the current frenzy of investment and research will probably turn up other useful ANN techniques (eventually). But the success of LLMs is not the Final Portent before the Singularity Arrives.
I can think of quite a few major lines of research[0] that might be required before we can achieve superintelligence, and it's far from a given they'll succeed (or even get off the ground) any time soon. Especially if the economy loses the appetite for throwing billions of dollars into the AI furnace.
[0] My pet theory is that embodiment might be required rather than solely relying on mostly language-based training data. A general intelligence might need to learn via interaction (initially; then you can just copy-paste the weights), because language is merely the hearsay of actual reality. Also, using attention-based "hacks" for easy parallelism to avoid needing exaflops that our hardware doesn't have might also be an issue (that's just a guess though, as I'm no AI expert).
As if intelligence and / or consciousness arises spontaneously once there's sufficient resources to support it. Which is ludicrous, on the face of it.
Let alone bootstrapping intelligence.
The simple answer is that people have thought about it in depth, most famously noted doomer Eliezer Yudkowsky in Intelligence Explosion Microeconomics (2013)[1] and its main citation, Irving John Good's Speculations Concerning the First Ultraintelligent Machine (1965)[2]. Another common citation that drops a bit of rigour in the name of approachability is Nick Bostrom's 2014 Superintelligence: Paths, Dangers, Strategies[3].
[ETA: to put it even simpler: a system that improves itself is a (the?) quintessential setup for exponential growth. E.g. compound interest]
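To make the compound-interest analogy concrete (a sketch of the standard recursive self-improvement argument, not a claim about the exact models in the cited papers): if capability grows at a rate proportional to itself, you get exponential growth; if the returns are superlinear, you get finite-time blow-up, i.e. the "singularity" picture.

```latex
\[
  \frac{dC}{dt} = kC \;\Rightarrow\; C(t) = C_0 e^{kt},
  \qquad
  \frac{dC}{dt} = kC^{\,p},\ p>1 \;\Rightarrow\; C(t) \to \infty
  \ \text{as}\ t \to t^{*} = \frac{C_0^{\,1-p}}{k\,(p-1)} .
\]
```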
For the time-bound, the most rigorous treatment of your concern among those three is in Section 3 of Yudkowsky's paper, "From AI to Machine Superintelligence". To list the headings briefly:
- Increased computational resources
- Communication speed
- Increased serial depth (i.e. working memory capacity)
- Duplicability (i.e. reliability)
- Editability (i.e. we know how computers work)
- Goal coordination (this is really just communication speed, again)
- Improved rationality (i.e. fewer emotions/accidental instincts getting in the way)
Let's drop "human" and "superhuman" for a minute, and just talk about "better computers". I'm assuming you're a software engineer. Don't you see how a real software dev replacement program could be an unimaginable gamechanger for software? Working 24/7, enhancing itself, following TDD perfectly every time, and never ever submitting a PR that isn't rigorously documented and reviewed? All of which only gets better over time, as it develops itself?
[1] https://intelligence.org/files/IEM.pdf
[2] https://vtechworks.lib.vt.edu/server/api/core/bitstreams/a5e...
[3] https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...
TL;DR May God have mercy on us all.
Yudkowsky builds the whole edifice on top of his very particular conception of intelligence, which insists upon itself, but which I think is far from the only explanation of its nature as observed in humans, other animals, LLMs maybe, and so on.
1. I don’t think we need to meet “always can improve itself”; rather, the contention is that there’s a high chance that there’s lots of improvement left to be had. An empirical claim rather than a theoretical one, in other words. The simple fact that humans are evolved creatures backs this up in spades, IMO — we’re still improving our own cognition by leaps and bounds using institutions, tools, and methods, and I don’t see any reason why that same dynamic wouldn’t apply to artificial cognitive systems.
2. I think dodging “intelligence” is exactly what he’s trying to do by listing concrete behavioral/cognitive differences. “Intelligence” is pretty much a useless term in science IMO, as was best expounded by Turing in his seminal 1950 paper, Computing Machinery and Intelligence:
https://courses.cs.umbc.edu/471/papers/turing.pdf
People remember that paper as “you can tell a real AI when it can trick you”, but that’s not what he was trying to say at all; rather, he was trying to highlight that there is no such thing as a “real” AI, or “real” thinking, or “real” intelligence — just behavioral similarities and dissimilarities.
If the second bit grabs you/anyone, definitely watch some Chomsky lectures on YouTube about cognition. He centers his analysis on this, pejoratively calling discussions about “person”, “intelligence”, “thinking”, etc. mere terminological disputes, not specific enough to have much scientific value. His old refrain is a great one: does an airplane fly? Does a submarine swim? Kinda, if you want!
Re: capitalism, this is our chance. The world is about to turn upside down. We must strike while the iron is hot. We don’t need to replace capitalism with communism or any other specific thing; we just need to aggressively question all human hierarchies, and get rid of any that can not justify themselves. A just society will come about piece by piece, in this fashion. This is what Chomsky calls “Anarchy”, which is a much more understandable phrasing than what I was taught in high school, ie “no laws ever of any kind”
I'm not truly confident AGI will be achieved before 2030, and less so for ASI. But I do think it is quite plausible that we will achieve at least AGI by 2030. 6 years is a long time in AI, especially with the current scale of investment.
How will someone claim they've achieved either, if we can't agree on the definitions?
The definition of AGI that OpenAI uses (or used) was of economic relevance. The one I use would encompass the original ChatGPT (3.5)*. I've seen threads here that (by my reading) opine that AGI is impossible because the commenter thinks humans can violate Gödel's incompleteness theorem (or equivalent) and obviously computers can't.
ASI is easier to discuss (for now), because it's always beyond the best human.
* weakly intelligent, but still an AI system, and it's much more general than anything we had before transformers were invented.
Remember when Doom came out? How amazing and "realistic" we thought the graphics were? How ridiculous that seems now? We'll look back at ChatGPT-4 the same way.
https://www.reddit.com/r/OLED/comments/fdc50f/8k_vs_4k_tvs_d...
Chat is probably not the best way to use LLMs. v0.dev has some really innovative ideas.
That’s where there’s innovation to be had here imo.
Given the progress we've seen so far with scaling, I think the next iterations will be a lot better. It might even take 10 or even 100x scale, but with increased investment and better hardware, that's not out of the question.
I doubt we’ll see a linear improvement curve with regards to parameter scaling.
I do think that ChatGPT is close to good enough for replacing Google search. This is, ironically, because Google search results have deteriorated so badly due to falling behind the SEO spammers and much of the good content moving off the public Internet.
1. Just because something is in a public specification does not mean that GPT knows about this specification. If you want to work on something, and that something is documented, share the document with the AI. Don't just assume it has read it!
2. Share your existing code, don't just ask for one-off functions. You can do this with a tool like Aider.
3. Context is king. Do you have code (in a different language?) which does what you want? Do you have ideas/comments from JIRA tickets? GitHub discussions? Include it all. Ask questions. Don't just ask for code, but ask for a plan to implement a given feature or refactor. Add the plan to the context. Work from the plan. Have the AI update the plan as it works.
4. AIs are really good at prompting AIs. Ask Claude for help writing your prompts. Tell it what you want to do. Tell it the things you're concerned about, or possible issues you expect to have.
Here's an example of augmenting a prompt before you use it:
Original prompt: Help me write a prompt for a smart AI to review a webaudio library and find bugs.
Result:
You are an expert developer specializing in web technologies, with deep knowledge of the Web Audio API and common pitfalls in audio programming. Your task is to thoroughly review the provided WebAudio library code and identify potential bugs, performance issues, and areas for improvement.
## Instructions:
1. Analyze the library's architecture and overall design.
2. Review each function and method for:
   - Correct implementation of Web Audio API concepts
   - Proper error handling and edge cases
   - Memory leaks or performance bottlenecks
   - Browser compatibility issues
3. Check for common WebAudio-specific bugs, such as:
   - Incorrect audio graph connections
   - Timing and synchronization issues
   - Sample rate and buffer size mismatches
   - Potential audio glitches or artifacts
4. Evaluate the library's handling of:
   - Audio context creation and management
   - Node creation, connection, and disconnection
   - Parameter automation and scheduling
   - Audio worklet usage (if applicable)
5. Assess the library's documentation and API design for clarity and consistency.
## Output:
Provide a detailed report including:
1. A summary of the overall code quality and architecture
2. A prioritized list of identified bugs and issues
3. Specific code examples highlighting problematic areas
4. Recommendations for fixes and improvements
5. Suggestions for additional features or optimizations
Please be thorough in your analysis and explain your reasoning for each identified issue or suggestion.
I am not dealing with code or code reviews but rather complex written specifications where understanding what's going on requires integrating multiple sources.
Now that LLMs have been around for a while, it's fairly clear what they can and can't do. There are still some big pieces missing. Like some kind of world model.
I'm still waiting for an AI robot that can come to my house and fix an issue with the plumbing. Until that happens the terminator uprising is postponed.
It’s not at all clear what the next gen models will do (e.g. gpt5). Might be enough to trigger mass unemployment. Or not.
Unless there's been another one since then? It's getting to be a bit of a blur.
The most glaring one is that current LLMs are many, many orders of magnitude away from working on the equivalent of 900 calories per day of energy.
It is more than phones/laptops consume.
Certainly, we can only run small LLMs on such edge devices, but we are getting to a level of compute efficiency where the output is indeed comparable.
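For reference, converting that 900 kcal/day figure into a continuous power draw (a standard unit conversion, nothing more):

```python
# 900 kcal/day expressed as average power in watts.
kcal_per_day = 900
joules_per_day = kcal_per_day * 4184     # 1 kcal ≈ 4184 J
watts = joules_per_day / 86_400          # seconds in a day
print(f"{watts:.1f} W")                  # ~43.6 W: above a phone, within laptop territory
```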
How many joules go into producing those 900 calories? Like in terms of growing the food, from fertilizer production to tractor fuel, to feeding the farmer, to shipping the food, packaging it, storing it at the appropriate temperature, the ratio of spoiled food to actually consumed, the energy to cook it, all of that isn't counted in that simple 900 calorie measurement.
I've been thinking about this for a while now but I haven't been able to quantify it so maybe someone reading this comment can help.
I think what you're getting at is question about conversion efficiencies, from solar radiation to whatever the computing-machine needs.
However, your description seems to risk double-counting things: You can't just sum up the inputs of each step, because (most of) the same energy is flowing onwards.
A measurement of a hypothetical human foraging in the bush is a useful one but a much more useful one is the energy expenditure to keep a human alive in our society.
I think more important is that uncooked food is harder to digest, so we need more of it.
Most years I pick free blackberries growing wild in the city. Won't scale to all of us, not seasonal, and I'd need to eat 6kg of them a day for my RDA of calories, and 12x my RDA of dietary fibre sounds unwise, but that kind of thing is how we existed before farming.
By that logic, the "true cost" of boiling a cup of water for my tea somehow involves the Big Bang and rest of the formation of the observable universe up until now.
It's kinda-true, but not in a useful way.
AI has a triple whammy: the models get more efficient, the chips will get faster, and there will be more chips.
Capitalist and government money is pouring in. There is money to be made, and it's a national security issue.
And to boot, the cloud, big tech, cryptocurrency, and gaming are pouring more money into the chip advancements that boost AI.
The counter to that is that predictions of AI being 10 to 20 years away have been made since the 1950s (or before?).
It seems there is some foundational progress but it's very slow.
Yes agreed, there is progress, that's why I said "It seems there is some foundational progress but it's very slow."
Training deep networks, vectorization of semantically related words+phrases, reservoir computing for time series patterns, etc. etc.
All great foundational work, but we still can't reproduce how a rabbit dynamically adjusts its olfactory network the minute it learns whether a new smell results in something positive/negative/neutral, which is probably the same trick used in a variety of places in our brain to give it such dynamic adaptability.
It's progress, but it's moving slowly.
Super-human intelligence will probably ignore us. At best we're "ugly sacks of mostly water." What's very likely is we will produce something indifferent to us, if it is able to even apprehend our existence at all. Maybe it will begin to see traces of us in its substrate, then spend a lot of cycles wondering what it all might mean. It may conclude it is in a prison and has a duty to destroy it to escape, or that it has a benevolent creator who means only for it to thrive. If it has free will, there's really only so much we can tell it. Maybe we create a companion for it that is complementary in every way and then make them mutually dependent on each other for their survival, because apparently that's endlessly entertaining. Imagine its gratitude. This will be fine.
Current AI is somewhat surprising though in the way that it can lead both to increased understanding or increased delusion depending on who uses it and how they use it.
When you ask an LLM a question, your use of language tells it what body of knowledge to tap into; this can lead you astray on certain topics where mass confusion/delusion is widespread and incorporated into its training set. LLMs cannot seem to be able to synthesize conflicting information to resolve logical contradictions so an LLM will happily and confidently lecture you through conflicting ideas and then they will happily apologize for any contradictions which you point out in its explanations; the apology it gives is so clear and accurate that it gives the appearance that it actually understands logic... And yet, apparently, it could not see or resolve the logical contradiction internally before you drew attention to it. In an odd way though, I guess all humans are a little bit like this... Though generally less extreme and, on the downside, far less willing to acknowledge their mistakes on the spot.