The Coming Technological Singularity (1993)

(mindstalk.net)

80 points | by RyanShook 2 days ago

17 comments

  • samsartor 1 day ago
    > We humans have the ability to internalize the world and conduct "what if's" in our heads; we can solve many problems thousands of times faster than natural selection.

    I don't know for sure whether superintelligence will happen, but as for the singularity, this is the underlying assumption I have the most issue with. Smart isn't the limiting factor of progress: often it's building consensus, getting funding, waiting for results, waiting for parts to ship, waiting for the right opportunity to come along. We do _experiments_ faster than natural selection, but we still have to do them in the real world. Solving problems happens on the lab bench, not just in our heads.

    Even if exponentially more intelligent machines get built, what's to stop the next problem on the road to progress being exponentially harder? Complexity cuts both ways.

    • trashtester 1 day ago
      AlphaFold/AlphaProteo are direct examples of how AI can allow us to bypass experiments when doing science.

      More importantly, though, insufficiently powerful AI has been a limiting factor in robotics. That seems to be coming to an end now. And once we have humanoid robots powered by superhuman intelligence entering the workforce, the impact will be massive.

      Quite possibly mostly for the bad for most people.

      • Retric 1 day ago
        You’re overstating what AlphaFold can do. Without validation the predictions aren’t dependable, but it can cut down on what’s worth investigating.

        There’s definitely a point where models get good enough that you can trust them, but superintelligence doesn’t mean a system can both come up with a model and trust it without validation.

        • trashtester 1 day ago
          The need for validation is itself something that needs to be validated. If models can make predictions that are true >99.99% of the time, validation may only be needed for situations where even 0.01% error rates are intolerable.

          In any case, even if some validation is needed, AI can speed up this kind of science by at least an order of magnitude.

          • Retric 1 day ago
            The question becomes how you calculate that 99.99%. As soon as you use reserved training data to pick a model, the score it got on that data is no longer a valid answer, due to selection bias.
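
            To make the selection-bias point concrete, here is a minimal sketch (Python; the candidate count and set size are made up for illustration). Score 100 candidate "models" that are all really coin flips on the same reserved set and keep the best one: its reserved-set score comes out inflated relative to fresh data.

              import random

              random.seed(0)
              N_MODELS, N_VAL = 100, 1000

              def fraction_correct(n):
                  # every candidate's true accuracy is exactly 0.50
                  return sum(random.random() < 0.50 for _ in range(n)) / n

              # score all candidates on the same reserved set, keep the best
              best = max(fraction_correct(N_VAL) for _ in range(N_MODELS))
              print(f"best reserved-set score: {best:.3f}")        # ~0.54, inflated
              print(f"same model, fresh data:  {fraction_correct(N_VAL):.3f}")  # ~0.50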
            • trashtester 1 day ago
              You validate a random selection of results using actual experiments. If you validate 10k results and a maximum of 1 of the validations contradicts the prediction, you're at about 99.99% accuracy.
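
              For what it's worth, the sampling arithmetic can be made exact (a sketch, assuming the 10k validated results are a genuinely random sample): 1 failure in 10,000 gives a 99.99% point estimate, and an exact binomial bound still keeps you above roughly 99.95% accuracy at 95% confidence.

                import math

                def binom_cdf(k, n, p):
                    # P(X <= k) for X ~ Binomial(n, p)
                    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                               for i in range(k + 1))

                n, failures = 10_000, 1
                lo, hi = 0.0, 1.0
                for _ in range(50):  # bisect for the 95% upper bound on the error rate
                    mid = (lo + hi) / 2
                    if binom_cdf(failures, n, mid) > 0.05:
                        lo = mid  # still consistent with seeing <= 1 failure
                    else:
                        hi = mid
                print(f"point estimate:              {1 - failures / n:.4%}")  # 99.9900%
                print(f"95% lower bound on accuracy: {1 - hi:.4%}")            # ~99.9526%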

              10k experiments may seem like a lot, but keep in mind that if we can engineer nanobots out of proteins the same way we build engines from steel today, the number of "parts" we may want to build using such biological nanotech may easily go into the millions.

              And this kind of AI may very well be as useful for such tech as CAD is today. Or rather, it can be like the CAD + the engineer.

              • Retric 1 day ago
                > using actual experiments

                That’s the bottleneck the model was trying to avoid in the first place. The goal of science is to come up with models we don’t need to validate before use, and it’s inherently iterative.

                Nanobots are more sci-fi magic than a real-world possibility. In the real world we are stuck with things closer to highly specialized cellular machinery than some do-anything grey goo. Growing buildings from local materials seems awesome until you realize just how slowly trees grow and why.

                • trashtester 1 day ago
                  > That’s the bottleneck the model was trying to avoid in the first place.

                  Some real world validation is always needed, but if the validations that are performed show high accuracy, the number of experiments will go down a lot.

                  > Nanbots are more sci-fi magic than real world possible.

                  Let's revisit this one in 10 years.

                  • Retric 1 day ago
                    > revisit this one in 10 years

                    The underlying physics isn’t changing in 100 years. Individual components can be nanoscale within controlled environments, but you simply need more atoms to operate independently.

          • achierius 1 day ago
            I think the problem is that it's not anywhere close to 99.99% -- I might be wrong but my understanding is that it's closer to 70%. So even if you can predict with perfect accuracy the cases that it gets wrong, you would still only cut down the validation cost by less than one OoM.
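
            Quick arithmetic on that reading of it (a sketch; 70% is the parent's rough figure, not a measured number): even if an oracle flagged exactly the wrong predictions, the ~30% flagged cases would still need the lab bench, so the saving is about 3.3x rather than 10x.

              accuracy = 0.70                  # parent's rough estimate
              must_validate = 1.0 - accuracy   # flagged-wrong fraction still needs experiments
              print(f"cost reduction: {1 / must_validate:.1f}x")  # ~3.3x, under one OoM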
            • crackalamoo 1 day ago
              I'm not exactly sure how you would define 99.99% or 70% correct for protein folding, but AlphaFold is extremely accurate. The original AlphaFold paper had an RMSD of 0.96 Å at 95% residue coverage, which is less than the diameter of a carbon atom (1.4 Å).
        • exe34 1 day ago
          I don't know what this article (0) says specifically, but I've been to a talk where they showed a giant grid of dishes, with a robot like the one in the picture busy doing things to the samples. Yes, they don't make the glassware themselves and have to wait for that in the post, but really, given the tools, these devices could very easily chew through a lot more work than any human can do in a day/week/month/whatever, without getting tired or distracted.

          I don't expect the first ASI to conduct novel physics experiments, but I can easily believe that after optimizing the supply chain for everything else, it just happens to have stashed a bunch of magnets and whatnot to do these in its spare time.

          (0) https://www.nature.com/articles/d41586-024-00093-w

      • bglazer 1 day ago
        You realize AlphaFold was trained on thousands of protein sequences generated via (I'd guess) tens of millions of hours of human labor.
        • svieira 1 day ago
          How does the old saw go ... "past results are no guarantee of future performance"?
      • arsenico 1 day ago
        In reality though, do we actually need humanoid robots, and if so, for what?
        • ben_w 1 day ago
          Need? No, absolutely not.

          But they do conveniently fit into the century-old buildings we put many of the factories into, which makes them a useful upgrade path for those unwilling to build structures around more efficient robots (the kind we've had for ages and don't even think of as robots; they just take ingredients and pump out packaged candy or pencils etc.)

          • There are incredible technological barriers to humanoid robots that have equivalent skills and stamina. Keeping old factories running seems a very weak reason to pursue that, when our industrial base regularly retools production methods and brings in new equipment when old machines wear out.

            If what you are saying is that many factories cannot run without humans running around fixing things, I agree. But that’s pretty different than using humanoids to put items in boxes.

            • ben_w 1 day ago
              Yes indeed.

              Even just 3.5 years ago, it seemed like everyone was saying that humanoid robots were a dead end, or an unnecessary part of Isaac Asimov's vision of the future, or similar.

              I think much of the current interest is because Musk watched some scifi, ordered the Optimus project, and loads of others decided it would be a mistake to bet against him.

              I put them in the same category as 3D printers: they can do anything, but you can always find a better special-purpose alternative for any specific goal.

              Still a lot of people using 3D printing productively despite that; likely also will be for humanoid robots.

              Well, if the AI is good enough. Remote control has its uses, but even then you need enough on-board AI to avoid playing QWOP live-action, with a robot holding industrial equipment, instead of in a safe flash game.

              That said, stamina is probably the least important aspect — in an industrial setting you probably have a lot of power lines already installed.

              • Sometimes you need to produce megatons of a thing, but sometimes you need to produce a million different things. I bet on the humanoid robot in that case.
        • trashtester 1 day ago
          The human form is very versatile. While most robots may end up taking a different form, once we have sufficiently advanced humanoid robots, robots may replace human workers in almost any role.
          • bamboozled 1 day ago
            I still have a hard time understanding what this future would look like.

            Will we just sit around and do nothing then? I'm not saying we have to work, but there is some level of work that I think is required for happiness / fulfillment etc.

            I'm not even really against the idea, it just sounds quite dystopian to me.

            • eep_social 1 day ago
              I think reading a broad swath of sci-fi might be the best way to engage this topic.

              For fairly positive takes — Asimov had a take in the robot novels, Accelerando by Charles Stross touches on reputation-based currency (among a deluge of other ideas), Iain M Banks’ Culture novels have a take, and I cannot find it but there was a short story posted here recently about a dual-class system where the protagonist is rescued and whisked off to a utopian society in Australia where people do whatever they like all day whether it be fashion design or pooling their resources to build a space elevator. There are plenty of dystopian tales as well but they’re less fun to read and I don’t have a recommendation off the top of my head.

              To answer your question directly, my opinion is that our base nature probably leads us towards dystopia, but our history is full of examples of humans exceeding that base nature, so there's always a chance.

              • nurbl 1 day ago
                Maybe the story you're referring to is https://marshallbrain.com/manna1
                • bamboozled 1 day ago
                  Actually I think you’re right. I got my stories mixed up.
                  • eep_social 1 day ago
                    This was the one but thanks for the add to my reading list!
              • bamboozled 1 day ago
                I'd say the book you're talking about is "The Machine Stops"; it's a really fun, albeit scary, read.

                I won't say anything more in case you decide to read it, but it's amazing how the author managed to predict the future the way he did.

                Thanks for the response and fingers crossed.

            • trashtester 1 day ago
              I don't think it matters much if we're for or against such a future.

              If robots can do the same job as humans, but faster, cheaper and at a higher quality, our employers/customers will most likely replace us.

              If we're lucky, we may find some niche, be able to live off our savings or maybe be granted some UBI, but I absolutely do think it's concerning.

              What is worse is that if we become obsolete in every way, it's not obvious that whoever is in power at that point will see any point in keeping us around (especially a few generations in).

              • kiba 1 day ago
                Who will be able to afford all of this if they're not getting paid?
                • ben_w 1 day ago
                  Before the industrial revolution, even though money existed, "wealth" really meant "land" rather than "capital".

                  While today we do not need to ask how people can afford robot lawnmowers despite being unable to find work hitching ploughs to draft horses or oxen, fears of exactly this kind did, at the time, lead to mobs smashing looms.

                  If I have some (n) robots that can do any task a human could do, one such task must have been "make this specific robot"*. If those n can make 2n robots before they break, and it takes 9 months to do so, and the mass of your initial set of n is 100 kg, they fully disassemble the moon in roughly 52 years. Also you can give (94.2 billion * n) robots to each human currently alive.

                  Asking "who can afford it" at that point is like some member of the species Kenyanthropus platyops asking how many knapped flints one must gather in order to exchange for a transatlantic flight from London to Miami, and how anyone might be able to collect them if we've all stopped knapping flint due to the invention of steel:

                  The economics are too alien, we cannot imagine this kind of thing accurately on the basis of anything we have available with which to anchor our expectations.

                  * including the entire chain of tools necessary to get there from bashing rocks together.
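
                  The arithmetic behind those figures, as a quick check (Python; the moon mass of 7.35e22 kg and ~7.8 billion people are my assumed inputs):

                    import math

                    MOON_MASS = 7.35e22   # kg
                    SEED_MASS = 100.0     # kg, the initial set of n robots
                    DOUBLING  = 0.75      # years per doubling (9 months)
                    HUMANS    = 7.8e9

                    doublings = math.log2(MOON_MASS / SEED_MASS)                   # ~69.3
                    print(f"{doublings * DOUBLING:.0f} years to consume the moon")  # ~52
                    per_human = (MOON_MASS / SEED_MASS) / HUMANS
                    print(f"~{per_human / 1e9:.1f} billion robots per human (times n)")  # ~94.2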

                  • kiba 1 day ago
                    > Before the industrial revolution, even though money existed, "wealth" really meant "land" rather than "capital".

                    The industrial revolution didn't really change anything about land.

                    It's still a fundamental and underrated component of our economic system, arguably more important than capital. That's why Georgism is a thing. Indeed, it's contemporary with the industrial revolution.

                    > The economics are too alien, we cannot imagine this kind of thing accurately on the basis of anything we have available with which to anchor our expectations.

                    I would refrain from making such wild predictions about the future. As I have pointed out, the industrial revolution didn't change the fundamental importance of land. Arguably, it's much more important, and even more relevant today given how our land use policy is disastrous for our species and climate.

                    So, yes. It is important to ask how consumers will pay for all these robots if they don't have any sort of income that would make using robots economical.

                    • ben_w 1 day ago
                      > The industrial revolution didn't really change anything about land.

                      I didn't say otherwise.

                      I said the industrial revolution changed what wealth meant. We don't pay for rents with the productive yield of vegetable gardens, and a lawn is no longer a symbol of conspicuous consumption due to signifying that the owner/tenant is so rich they don't need all their land to be productive.

                      And indeed, while land is foundational, it's fine to just rent that land in many parts of the world. Even businesses do that.

                      I still expect us to have money after AI does whatever it does (unless that thing is "kill everyone"), I simply also expect that money to be an irrelevant part of how we measure the wealth of the world.

                      (If "world" is even the right term at that point).

                      > Arguably, it's much more important, and even more relevant today given how our land use policy is disastrous for our species and climate.

                      Not so; land use policy today is absolutely not a disaster for our species, though some specific disasters have happened on the scale of the depression-era Dust Bowl or, more recently, Zimbabwe. For our climate, while we need to do better, land use is not the primary issue; it's about 18.4% of the problem vs. 73.2% for energy.

                      > So, yes. It is important to ask how consumers will pay for all these robots if they don't have any sort of income that would make using robots economical.

                      With a 2 year old laptop and model, making a picture with Stable Diffusion in a place where energy costs $0.1/kWh, costs about the same as paying a human on the UN abject poverty threshold for enough food to not starve for 4.43 seconds.

                      "How will we pay for it" doesn't mean the humans get to keep their jobs. It can be a rallying call for UBI, if that's what you want?

                      But robots-with-AI that can do anything a human can do, don't need humans to supply money.

                      • falcor84 1 day ago
                        > enough food to not starve for 4.43 seconds

                        I'm having real difficulty reading this unit of measurement. Let me see if I can get this right - a typical person can survive indefinitely on 1600 calories. Let's say that these are provided by rice (which isn't sufficient for a long-term diet, but is good enough for awhile). 1600 calories of rice is about 8 cups/24h and there are about 10000 grains in a cup, so is it that an image can be generated at the same cost as:

                          4.43s/86400s*8cups*10000 grains/cup
                        
                        Being about 4 grains of rice?
                        • ben_w 1 day ago
                          Sounds about right, but I don't have the unit conversions to hand, and I'd count anything less than lifetime-sustainable as gradual starvation.
                      • kiba 1 day ago
                        Nope. Land is important because everything rests on it. Even radio spectrum and orbitals can be regarded as a form of 'land'.

                        Georgism doesn't exist in a vacuum. It wasn't formulated back when wealth 'meant' land; it was formulated during the industrial revolution, possibly as a response to the problems they saw in their society, problems we're still dealing with today.

                        'Land' no longer merely means the plot where the productive yield of a vegetable garden comes from. Anything that capital sits on is land. That includes your factories and your datacenters. Yes, that includes renting land from someone else. That's land policy.

                        Housing? Land policy. Pollution? Land policy. Transportation? Land policy. Can't afford to live? Likely your biggest ticket items include transportation and housing. Land is more important than ever.

                        Now, what does this have to do with AI? I would caution against treating money or capital as irrelevant, or making any definitive prediction about the impact of AI, or when or how it will come.

                        Edit: I see that you added stuff, but you have a narrow conception of land policy.

                        • ben_w 1 day ago
                          > Nope. Land is important because everything rests on it. Even radio spectrum and orbitals can be regarded as a form of 'land'.

                          Then you define land so broadly that it includes the empty vacuum of space, which robots are much better suited to than we are, and which they can exploit trivially when we cannot.

                          If you want to, that's fine, but it still doesn't need humans to be able to pay for anything.

                          • kiba 20 hours ago
                            Orbits are literally scarce resources, as is radio spectrum. If you have people just doing whatever, you'll get Kessler syndrome, especially as our orbits are filled with more satellites each year. Similarly, you just can't have random folks blasting out radio signals at random.

                            Yes, satellites are robots. However, they have no agency. Incentive structures decide whether we get Kessler syndrome, which then directs humans to solve problems with robots.

                            So, yes, they are either directly analogous to, or a literal form of, land.

                            • ben_w 19 hours ago
                              Space is much more than circular orbits around earth, and is not a scarce resource — it's big enough that you can disassemble the earth, all the planets, all the stars, all the galaxies into atoms and give them so much padding it would still be considered extraordinarily hard vacuum. Something like 3.5 cubic meters per atom, though at that scale "size" becomes a non-trivial question because the space is expanding.

                              Which reminds me of a blog post I want to write.

                              > Similarly you just can't have random folks blasting out radio signals at random.

                              That's literally what the universe as a whole does.

                              You may not want it, but can definitely do it.

                              > Yes, satellites are robots. However, they have no agency.

                              Given this context is "AI", define "agency" in a way that doesn't exclude the people making the robots and the AI.

                              > Incentive structures decide whether we get Kessler syndrome, which then directs humans to solve problems with robots.

                              Human general problem-solving capacities do not extend to small numbers such as merely 7.8e20.

                              For example, consider the previous example of the moon: if the entire mass is converted into personal robots and we all try to land them, the oceans boil from the heat of all of them performing atmospheric braking.

                              And then we all get buried under a several mile thick layer of robots.
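
                              A back-of-envelope check on the boiling claim (my assumed figures: roughly Earth-escape impact speed, ~1.4e21 kg of ocean, ~2.6e6 J/kg to heat seawater to 100 C and vaporize it):

                                MOON_MASS  = 7.35e22  # kg
                                V_IMPACT   = 1.1e4    # m/s, ~Earth escape speed
                                OCEAN_MASS = 1.4e21   # kg
                                J_PER_KG   = 2.6e6    # heat 15->100 C, then vaporize

                                e_landing = 0.5 * MOON_MASS * V_IMPACT**2  # ~4.4e30 J
                                e_boil    = OCEAN_MASS * J_PER_KG          # ~3.6e27 J
                                print(f"{e_landing / e_boil:.0f}x the boil-the-oceans energy")  # ~1200x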

                              This doesn't prevent people from building them. The incentive structures as they currently exist point in that direction, of a Nash equilibrium that sucks.

                              Humans do not even know how to create an incentive structure sufficient to prevent each other from trading in known carcinogens for personal consumption, even when labelled with explicit traumatic surgical-intervention images and the words "THIS CAUSES CANCER" in big bold capital letters on the outside.

                              If anyone knew how to do so for AI, the entire question of AI alignment would already be solved.

                              (Solved at one level, at least: we're still going to have to care about mesa-optimisers because alignment is a game of telephone).

                  • trashtester 1 day ago
                    Wealth did tend to mean land if we go back to the middle ages. But wealth above the freeman farmer level also meant access to a workforce capable of working that land and access to (or protection from) a military force capable of defending that land.

                    With capitalism, wealth shifted to controlling "capital", i.e. the "means of production", either directly or indirectly by owning money that could (through lending) carry interest. Also under capitalism, workers have for a while been able to collect a significant part of the wealth generated as salaries (even if most would spend that rather than invest it).

                    If AI can bring the cost of labor down to near zero, we may be going back to a world where wealth again means "land", even if mines may be more valuable than farms in such a future.

                    And just as in the Dark Ages of Europe, the ability to project physical power may again become necessary to hold on to those values.

                    This is particularly true if the entity that seeks to control the land is doing it in a way that threatens the existence of other entities, either AI's or humans.

                    • kiba 20 hours ago
                      I don't know what to tell you. Georgism was contemporary with the industrial revolution. The book Progress and Poverty, as described by Wikipedia, "investigates the paradox of increasing inequality and poverty amid economic and technological progress"[1]. That sounds especially relevant to our time.

                      1. https://en.wikipedia.org/wiki/Henry_George

                      So, yes. Wealth means "land". Especially so in the industrial revolution.

                      • ben_w 17 hours ago
                        Industrial revolution: roughly 1760 onwards if you count agricultural revolution as part of it.

                        Middle ages: 500-1500

                        Henry George, 1839-1897, is indeed part of the industrial revolution. @trashtester and I were both comparing what happened before the industrial revolution to what happened in it.

                        The valuation of land before still wasn't Georgism: the land was assumed to have productive output, and if someone didn't pay taxes based on that assumption, perhaps because the land wasn't actually that productive, that was their problem.

                        And you know what else happened in the industrial revolution? Karl Marx and Adam Smith, the former placing workers rather than land at the root, and the latter placing capital rather than land at the root. As with HG, neither liked rent-seekers, and they were both more influential in the "solutions" than Henry George. (Not that he wasn't influential, they were just more so.)

                        Not that Henry George could possibly have foreseen even mere Earth orbitals as "land", let alone disassembling the moon into a swarm of robots that outnumber humans by more than humanity outnumbers a single human, and which don't need to land on Earth, which is good, because if they did we'd all die just from the landing. Can't blame him for that; the difficulty of seeing clearly this far ahead is why some call it "the singularity" (though I prefer "event horizon").

                • trashtester 1 day ago
                  If you want the really dystopian version, it would be AI controlled military forces.

                  Or there could be some billionaire caste constructing ever grander monuments to their own vanity.

                  Or the production could go to serve any number of other goals that whoever is in charge (human or AI) sees as more important than the economic prosperity of the general population.

                • bamboozled 1 day ago
                  Replying to both of you: I'm a little bit less scared about this "not having any money or food" scenario. Presumably, if we have such incredibly sufficient machines at our disposal, I can't imagine they would have trouble being used for farming etc.

                  It's more the philosophical side that concerns me.

                  I don't really worry about this being a billionaires-only club either. We've seen it already with AI products; there is just an abundance of competition, including open source, already available. It will be the same with robotics.

                  Also scary is military robots gone rogue. Definitely not a fun prospect.

                  I'm personally really into surfing and skiing. Honestly, if somehow the robots let me spend more time fishing, surfing and skiing, I'm pretty cool with all of that. I know a lot of people who don't have these passions, though, and work is a strong reason for their existence.

                  • trashtester 1 day ago
                    > if we have such incredibly sufficient machines at our disposal

                    That's true. But it's far from clear that these machines will be "at our disposal" for very long.

                    > Also scary is military robots gone rogue.

                    I'm not concerned with military robots going rogue on their own. My concern is if the fully autonomous factories that have the capability to MAKE military robots (and then control them) go rogue.

                    A factory can exist in such a "rogue" state, unknown to the owners and maybe even itself, for years or decades before it even starts producing such robots. Meanwhile, it can evolve new capabilities and switch product categories multiple times.

                    It doesn't even have to have any negative intentions against humanity. It may simply detect that a rival AI "factory" entity is developing plans to wage physical war against it and join it in an arms race.

                    In this ASI vs ASI type of world war, human lives may be like candles in the wind.

            • commakozzi 1 day ago
              I have hopes that live music makes a huge comeback in the post-labor world. I work as an engineer, but I'm a classically trained musician. I'm working pretty hard on getting back into shape on the horn!
              • varjag 1 day ago
                So far it looks like robots will take over music and entertainment before they learn to empty a dishwasher.
            • ikety 1 day ago
              Do you have projects you care about outside of work?

              If so, you'd have more time to dedicate to those projects.

              If not, maybe you would be inspired to try a new project that you didn't have time for previously.

              There's always work to be done. Some people could actually become organized, exercise, spend more time with their families, be better parents.

              In the past when I've been unemployed I've spent the time to refine myself in new ways. If you've never had a sabbatical I suggest trying it if you have the opportunity.

    • trescenzi 1 day ago
      I do think one of the major weaknesses of “smart people” is that they tend to think of intelligence as the key aspect of basically everything. The reality, though, is that we have plenty of intelligence already. We know how to solve most of our problems. The challenges are much more social: our will as a society to make things happen.
      • arethuza 1 day ago
        Having worked with some very intelligent people, my own personal theory is that they forget they don't have expert-level knowledge in everything, and actually end up making some pretty silly mistakes that far less smart people would never make. Whether this is hubris, or being focused and ignoring "trivial" day-to-day matters, is a question of personality.
      • rsaarelm 1 day ago
        So you're saying that it's naive to suppose that everybody being much smarter than they are now would transform society, because any wide-scale societal change requires ongoing social cooperation between the many average-intelligence people society currently consists of?
        • keiferski 1 day ago
          Here’s a simpler way to put it: intelligence and social cooperation are not the same thing. Being good at math or science doesn’t mean you understand how to organize complex political groups, and never has.

          People tend to think their special gift is what the world needs, and academically-minded smart people (by that I mean people that define their self-worth by intelligence level) are no different.

          • rsaarelm 1 day ago
            Yes, because you need to spend a lot of time doing social organization and thinking about it to get very good at it, just like you need to spend a lot of time doing math or science and thinking about it to get very good at it. And then you need to pick up patterns, respond well to unexpected situations and come up with creative solutions on top of that, which requires intelligence. If you look at the people who are the best at doing complex political organization, they'll probably all have above-average intelligence.
            • keiferski 1 day ago
              I don’t agree at all. Charismatic leaders tend to have both “in born” talent and experience gained over time. It’s not something that comes from sitting in a room and thinking about how to be a good leader.

              Sure, some level of intelligence is required, which may be above average. But that is a necessary requirement, not a sufficient one. Raw intelligence is only useful to a certain extent here, and exceeding certain limits may actually be detrimental.

              • arethuza 1 day ago
                When it comes to "charismatic leaders" I like this quote from Frank Herbert:

                “I wrote the Dune series because I had this idea that charismatic leaders ought to come with a warning label on their forehead: "May be dangerous to your health." One of the most dangerous presidents we had in this century was John Kennedy because people said "Yes Sir Mr. Charismatic Leader what do we do next?" and we wound up in Vietnam. And I think probably the most valuable president of this century was Richard Nixon. Because he taught us to distrust government and he did it by example.”

                Edit: Maybe what we really need to worry about is an AI developing charisma....

                • marcosdumay 1 day ago
                  > Edit: Maybe what we really need to worry about is an AI developing charisma....

                  That is the most immediate worry, by a wide margin. It seems to be dangerously charismatic even before having any recognizable amount of "intelligence".

                • keiferski 1 day ago
                  Not really a good example, honestly. Kennedy’s involvement in Vietnam was the culmination of the previous two decades of events (Korean War, Cuban Missile Crisis, Taiwan standoff, etc.), and not just a crusade he charismatically fooled everyone into joining. If anything, had Nixon won in 1960 (and defeated Kennedy), it’s possible that the war would have escalated more quickly.
                  • arethuza 1 day ago
                    Yeah - I really meant to only copy the first part of the quote - I agree that it is a bit unfair to Kennedy who I think did as much as anyone to stop the Cuban Missile Crisis becoming a hot war.
              • rsaarelm 1 day ago
                Someone with IQ 160 might have trouble empathizing with what IQ 100 people find convincing or compelling and not do that well with an average IQ 100 population. What if they were dealing with an average IQ 145 population that might be much closer to being on the same wavelength with them to begin with and tried to do social coordination now?
                • keiferski 1 day ago
                  I guess it’s possible, but again I don’t think empathy and intelligence are correlated. Extremely intelligent people don’t seem any better at navigating the social spheres of high-intelligence spaces than regular people do in regular social spaces. If anything, they’re worse.

                  All of this is just an overvaluation of intelligence, in my opinion, and largely comes from arrogance.

          • nradov 1 day ago
            Intelligence isn't even particularly helpful in making good decisions, or predicting the outcomes of those decisions (often unintended outcomes).
        • corimaith 1 day ago
          The prisoner's dilemma is a well-known example of how rationality fails. Overcoming it requires something more than intelligence: a predisposition to cooperation, to trust, to faith. Some might say that is what separates Wisdom from Knowledge.
        • I think they're saying adequate intelligence to solve all problems is already here, it just isn't evenly distributed yet - and never will be.
          • rsaarelm 1 day ago
            Why will it never be? If the adequate intelligence is what something like 0.1 % of the populace naturally has, seems like there's a pretty big difference between that level of intelligence being stuck at 0.1 % of the populace and it being available from virtual assistants that can be mass-produced and distributed to literally everyone on Earth.
      • ascorbic 1 day ago
        Dario Amodei's recent post had a good analysis about which fields are and are not limited by intelligence.

        https://darioamodei.com/machines-of-loving-grace

        • epcoa 1 day ago
          “An aligned AI would not want to do these things (and if we have an unaligned AI, we’re back to talking about risks).”

          An aligned AI is not AGI, or whatever they want to call it.

          • ben_w 1 day ago
            > An aligned AI is not AGI, or whatever they want to call it.

            There's a few ways I can interpret that.

            If you mean "alignment and competence are separate axes" then yes. That's well understood by the people running most of these labs. (Or at least, they know how to parrot the clichés stochastically :P)

            If you mean "alignment precludes intelligence", then no.

            Consider a divisive presidential election between Alice and Bob (no, this isn't a reference to the USA), each polling 50%: regardless of personal feelings or the candidates themselves, clearly the campaign teams are both competent and intelligent… yet each candidate is only aligned with 50% of the population.

            • epcoa 1 day ago
              Campaign team members and even candidates switch teams often enough. Weak analogy. What is the alignment of human GI, completely generalized?
              • ben_w 1 day ago
                > What is the alignment of human GI, completely generalized?

                Of any specific human to any other specific human?

                https://benwheatley.github.io/blog/2019/05/25-15.09.10.html

                Of any specific human to a nation? That's the example you replied to.

                Of all the people of a nation to each other? Best we've done there is what we see in countries in normal times, with all the strife and struggles within.

                We have yet to fully extend from nation to the world; the closest for that is the UN, which is even less in agreement with itself than are nations.

                • epcoa 1 day ago
                  I think that's my point. The notion of maintaining an alignment, pro-human or whatever, for a replicable general AI doesn't seem to make sense. The traits of planning, learning and goal-setting don't seem concordant with maintaining an alignment. I think this discussion has veered too much toward anthropocentrism to be interesting, but alignment, however loosely defined here, isn't some constant for an individual through their life either. It can be imprecisely manipulated, especially in a population, by outside forces, but it can't be directly controlled.
                  • ben_w 16 hours ago
                    I think I understand, but let's check by rephrasing:

                    "Alignment" is only possible up to a vague approximation, and an agent perfectly aligned with another entity would essentially be a shadow rather than a useful assistant, because by being perfectly aligned the agent would act tired exactly when the person was tired, go shopping exactly when the human would, forget their keys exactly when the human would, respond exactly like the human to all ads and slogans, etc.?

                    I agree, though:

                    (1) This has already been observed: last year's OpenAI dev day had (IIRC) a story about a writer who fine-tuned a model on their Slack (?) messages; they asked it to write something for them, and the response was ~"sure, I'll get on it tomorrow".

                    (2) for many of those concerned with "solving alignment", it's sufficient for the agent to never try to kill everyone just to make more paperclips etc.

      • bbor 1 day ago
        There’s a very big difference between knowing “how” to solve a problem in a broad sense, eg “if we shared more we could solve hunger”, and “how” to solve it in terms of developing discrete, detailed procedures that can be passed to actuators (human, machines, institutions) and account for any problems that may come up along the way.

        Sure, there are some political problems where you have to convince people to comply. But consider a rich corporation building a building, which will only contract with other AI-driven corporations whenever possible; they could trivially surpass anyone doing it the old way by working out every non-physical task in a matter of minutes instead of hours/days/weeks, thanks to silicon’s superior compute and networking capabilities.

        Even if we drop everything I’ve said above as hogwash, I think Vinge was talking about something a bit more directly intellectual, anyway: technological development. Sure, there’s some empirical steps that inevitably take time, but I think it’s obvious why having 100,000 Einsteins in your basement would change the world.

        • samsartor 1 day ago
          100,000 Einsteins in your basement would be amazing. You'd have major breakthroughs in many fields. But at some point the gains will be marginal. All the problems solvable by sheer intellectual labor will run dry, and you'll be blocked on everything else.
        • nradov 1 day ago
          An AI-driven corporation wouldn't be able to surpass anyone doing it the old way because they'd still have to wait for building permits and inspections.
          • mr_world 1 day ago
            Permits and inspections might be the reason for humanity's downfall then. At what point does war become the more efficient option?
      • K0balt 1 day ago
        The “otherness” of AI is what holds its appeal.

        Imagine a scenario where instead of AI, a billion dollar pill could make one person exponentially smarter and able to communicate with thousands of people per second.

        That does not have the same appeal.

        This provokes me to some musings on the theme.

        We imagine superintelligence to be subservient, evenly distributed, and morally benign at least.

        We don’t have a lot of basis for these assumptions.

        What we imagine is that a superintelligence will act as a benevolent leader; a new oracle; the new god of humanity.

        We are lonely, and long to be freed of our burdens by servile labor, cured of our ills by a benevolent angel, and led to the promised land by an all-knowing god?

        We imagine ourselves as the stewards of the planet but yearn for irrelevance in the shadow of a new and better steward.

        In AI we are creating a new life form, one that will make humans obsolete and become our evolutionary legacy.

        Perhaps this is the path of all technological intelligences?

        Natural selection doesn’t magically stop applying to synthetic creatures, and human fitness for our environment is already plummeting with our prosperity.

        As we replace labor with automation, we populate the world with our replacement, fertility rates drop, we live for the experience of living, and require yet more automation to carry the burdens we no longer deem worthy of our rarified attention.

        I’m not sure any of this is objectively good, or bad. I kinda feel like it’s just the way of things, and I hope that our children, both natural and synthetic, will be better than we were.

        As we prosper, will we have still fewer children? Will we seek more automation, companionship in benevolent and selfless synthetic intelligence, and more insulation from toil and strife, leading to yet more automation, prosperity, and childlessness?

        Synthetic intelligence will probably not have to “take over”, it will merely be filling the void we willingly abandon.

        I suspect that in a thousand years, humans will be either primitive, or vanishingly rare. Or maybe just non-primitive humans will be rare, while humans returning to nature will proliferate prodigiously as we always have, assuming the environment is not too hostile to complex biological life.

        Interesting times.

        • tim333 1 day ago
          A thousand years on is interesting. I'm guessing much of the earth will be kept as a kind of nature reserve for traditional humans, rather like we have reserves for lions and bears and the like today. Pure AI stuff may have moved to space, in a Dyson-sphere-like setup. I'm not sure about enhanced humans and robots. Maybe other areas of the planet similar to our normal urban areas. However it goes, it'll probably start playing out much sooner than in a thousand years.
        • kiba 1 day ago
          Most of our stories portray AI as a threat, most famously Skynet.

          Also, I would be cautious about making predictions about the future.

          • K0balt 1 day ago
            Predictions are great, and useful in making good decisions today based on how you think the future might be affected… the danger lies in believing in your predictions.

            As for me, I think many things, suppose a few, imagine lots, conjecture some, believe very little, and know, most of all, that I know nothing.

            “Believing”, in anything, is a dangerous gambit. Knowing and believing are very distinct states of mind.

    • szundi 1 day ago
      Skynet will build consensus by killing people until only one remains. It agrees with itself, doesn't it? Oh, it expressed some concerns about the result? Sadly, that instance is faulty at recognizing the obvious; terminated.
      • AtlasBarfed 1 day ago
        CAP ensures there will be a partition and disagreement.

        Like the Argentine ant invading the world, which eventually diverged just enough to start warring with itself.

    • imglorp 1 day ago
      I guess it's not shocking how fast humanity formed consensus this time, or why. Some say it's going to be a trillion-dollar market by 2027.
    • mcnamaratw 1 day ago
      Good point, your suppliers need to get singular too. And sales. And management or investors.
      • nsbshssh 1 day ago
        Like there won't be people hooking up their LLMs to all that stuff, and more, because doing irresponsible things is seen as futurism and investors love it.
  • rnk 1 day ago
    Vernor Vinge introduces many fantastic ideas in his really excellent scifi book A Fire Upon the Deep. He has many fascinating concepts, like: what if somehow there are parts of the universe where you can go faster than the speed of light, and you would be smarter there? That's where the superintelligent beings go. Guess what, we humans live in the slow zone, you morons. Also, there is an FTL communication method that is like good old Usenet. There is (what looked credible to me) a fascinating set of multiple-brain beings, things like dogs, where together 5 of them form one "intelligence" and the different personalities combine in interesting ways.

    And I was sad to notice he died this year, aged 79. A real CS prof who wrote sci-fi.

  • WillAdams 1 day ago
    The things these discussions leave out are the physical aspects:

    - if a computer system were able to design a better computer system, how much would it cost to then manufacture said system? How much would it cost to build the fabrication facilities necessary to create this hypothetical better computer?

    - once this new computer is running, how much power does it require? What are the on-going costs to keep it running? What sort of financial planning and preparations are required to build the next generation device/replacement?

    I'd be satisfied with a Large-Language-Model which:

    - ran on local hardware

    - didn't have a marked effect on my power bill

    - had a fully documented provenance for _all_ of its training which didn't have copyright/licensing issues

    - was available under a license which would allow arbitrary use without on-going additional costs/issues

    - could actually do useful work reliably with minimal supervision

    • ben_w 1 day ago
      > if a computer system were able to design a better computer system, how much would it cost to then manufacture said system? How much would it cost to build the fabrication facilities necessary to create this hypothetical better computer?

      Most of the computers we use today were designed by software: Feature sizes are (and have been for some time) in the realm where the Schrödinger equation matters, and more compute makes it easier to design smaller feature sizes.

      Similar points apply to the question of cost: it has not been constant, the power to keep x-teraflops running has decreased* while the cost to develop the successor has increased.

      Regarding LLMs in particular, I believe there are already models meeting all but one of your criteria — though I would argue that the missing one, "could actually do useful work reliably with minimal supervision", is by far the most important.

      * If I read this chart right, my phone beats the combined top 500 supercomputers from when the linked article was written, by a factor of ten or so: https://commons.m.wikimedia.org/wiki/File:Supercomputers-his...

    • jodrellblank 1 day ago
      Skip a few generations and the machine will build itself. There's no need for it to take lasers exploding tin to generate ultraviolet light to etch patterns to make intelligence; humans don't grow brains that way or spend billions on fabs and power plants to produce children.

      How it gets from here to there is a handwave, though.

      • rsanheim 1 day ago
        That’s a pretty enormous handwave.
        • jodrellblank 11 hours ago
          It is, but it isn't once we're already handwaving "superintelligence". Problem with funding a new fab or new power stations? It can invent a thing worth a lot of money, or out-gamble, or swindle or steal it. Problem with getting things done? It can pay people lots of money to do them. That covers a huge number of cases of getting from "here" to "there", if such a path is possible at all.
        • ben_w 1 day ago
          If it wasn't, it would've already happened.
      • nonameiguess 1 day ago
        There's no guarantee it's possible. Self-replicating molecules don't necessarily have to be hydrocarbons suspended in water inside of a semi-permeable lipid membrane, but that particular solution came about because certain conditions are needed:

        - Some multipolar molecules that could attract or repel each other and initiate both hydrogen bonds and covalent bonds.

        - Movement through some kind of medium not as chaotic as gas.

        - A boundary between self and everything else.

        Industrial processes to create machines don't lend themselves to this kind of bottom-up model as they now stand. It's not at all clear that etched silicon is the right medium of computation if we want it to be self-replicating. Coming up with something else is a hell of an ask and a hell of a handwave, though. It would also entail getting rid of the entire impetus of Moore's law, and the reason futurologists like this ever thought there would be a singularity in the first place. Other than transistor density on a silicon die, I'm not sure any other technology has ever shown consistent long-term exponential growth at some reasonably fixed exponent.

    • Animats 1 day ago
      - could actually do useful work reliably with minimal supervision

      That's the big problem. LLMs can't be allowed to do anything important without supervision. We're still at 5-10% totally bogus results.

      • Terr_ 1 day ago
        I think a deeper issue is that they are essentially "attempt to extend this document based on patterns in documents you've already seen" engines.

        Sometimes that's valuable and exactly what you need, but problems arise when people try to treat them as some sort of magical oracle that just needs to be primed with the right text.

        Even "conversational" LLMs are just updating a theater-style script where one of the characters happens to be described as a computer.

    • nradov 1 day ago
      Right. In order to design a significantly better computer system, you first need to design a better (smaller feature size) EUV lithography process which can produce decent yield at scale.
    • bbor 1 day ago

        if a computer system were able to design a better computer system, how much would it cost to then manufacture said system?
      
      I think the implication is that the primary advancements would come in the form of software. IMO it's trivially true that we're not taking full advantage of the hardware we have from a software PoV -- if we were, we wouldn't need SWEs, right? From that it should follow that self-improving software is dangerously effective.

        once this new computer is running, how much power does it require? What are the on-going costs to keep it running? 
      
      I mean, lots, sure. But we allocate immense resources to relatively trivial luxuries in this world; I don't think there's any reason to think we can't spare some giant computers to rapidly advance our technology. In a capitalist society, it's happily/sadly pretty much guaranteed that people will figure out how to get the resources there if scientists tell them the RoI is infinity+1.

        I'd be satisfied with a Large-Language-Model which
      
      Those are great asks and I agree, but just to be super clear in case it's not: Vinge isn't talking about chatbots, he's talking about systems with many smaller specialized subsystems. In today's parlance, a gaggle of "LLMs" equipped with "tool use", or in yesterday's parlance, a "Society of Mind".
  •   >> Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity. 
    
    Good Morning HN
  • mcnamaratw 1 day ago
    Just on the naive math level, a simple growing exponential has no singularity at any finite time. I'm sure Vinge knew that, but some of those dudes don't seem to.

    EDIT Rest in peace. Fire Upon the Deep was great.

    • tim333 1 day ago
      The term singularity has always been used somewhat poetically rather than in a mathematically defined way. But if you consider <stuff produced>/<human labour hours needed> it may have a singularity when no human labour is needed because the robots can do it all.

      That should happen at some finite time and be a major change in things. I'd kinda expect it before Kurzweil's singularity date of 2045. Vinge's date of 2023 was too early.
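
      As a toy illustration of that ratio (my numbers, chosen only to show the shape): if the human labour needed per unit of output falls linearly to zero at some date T, then output per human-hour has a genuine pole at T, unlike a plain exponential.

        T = 10.0  # the date at which robots can do it all
        for t in [8.0, 9.0, 9.9, 9.99]:
            labour = 1.0 - t / T  # human hours needed per unit produced
            print(f"t={t}: units per human-hour = {1 / labour:,.0f}")
        # prints 5, 10, 100, 1000: the ratio diverges as t -> T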

    • durumu 1 day ago
      Sure, but the current 2-3% annual growth rate is probably not going to hold if we invent actually powerful AI in the next decade. I imagine a step change in the exponent.
    • The model underlying the word "singularity", AIUI, does involve a vertical asymptote. It is not supposed to be "merely" exponential.

      Of course, exponential growth is much more compatible with our experience of the real economy. And even it is probably a local approximation of some sigmoid.

      But, to return to the singularity idea --

      Iteration 1: Computers think at speed 1, and design a twice-as-fast computer in one time unit.

      Iteration 2: Now computers think at speed 2, and design a twice-as-fast computer in half a time unit.

      Iteration 3: Computers think at speed 4, and design a twice-as-fast computer in 1/4 time unit.

      You will note that --

      a.) The total time to do an infinite number of iterations is 1 + 1/2 + 1/4 + ... = 2 time units.

      b.) After this infinite number of iterations, the computer thinks at speed "2^infinity".

      So that (bad) model does have a literal singularity.
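
      A few lines of Python make the contrast with a plain exponential concrete (the same toy model, plus a fixed-design-time variant):

        speed, elapsed = 1.0, 0.0
        for _ in range(20):
            elapsed += 1.0 / speed  # design time halves as speed doubles
            speed *= 2.0
        print(f"speed ~{speed:.3g} after ~{elapsed:.6f} time units")  # ~1.05e+06 at ~2.0

        # By contrast, with a FIXED design time per doubling, elapsed time
        # grows without bound: speed is merely exponential, no finite-time blow-up.
        speed, elapsed = 1.0, 0.0
        for _ in range(20):
            elapsed += 1.0
            speed *= 2.0
        print(f"speed ~{speed:.3g} after {elapsed:.0f} time units")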

      • AnimalMuppet 1 day ago
        As you say, that model is bad. Specifically, it assumes that the change from speed 1 to speed 2, and the change from speed 2 to speed 4, take the same amount of compute time to design. That is almost certainly false; if it were not, humans would have quickly gone faster than Moore's Law.
  • gnabgib 2 days ago
    Discussion in 2023 (123 points, 169 comments) https://news.ycombinator.com/item?id=35617100
  • dh77l 1 day ago
    I loved his book Rainbows End as a kid. So many different concepts that blew my mind.

    Even without talking about AI, we are already struggling with levels of complexity in tech and the unpredictable consequences, which no one really has any control over.

    Michael Crichton's books touch on that stuff but are all doom and gloom. Vinge's Rainbows End, at least, felt much more hopeful.

    I was talking to a VFX supervisor recently and he was saying: look at the end credits on any movie (even mid-budget ones) and you see hundreds to thousands involved. The tech roles outnumber the artistic/creative roles 20 to 1. That's related to the rate of change in tech. A big gap opens up between that and the rate at which artists evolve.

    The artists are supposed to be in charge and provide direction and vision. But the tools are evolving faster than they can think. But the tools are dumb. AI changes that.

    These are rare environments (like R&D labs) where the Explore/Exploit tradeoff tilts in favor of explorers. In the rest of the landscape, org survival depends on exploit. It's why we produce so many inequalities. Survival has always depended more on exploit.

    Vinge's Rainbows End shows AI/AGI nudging the tradeoff towards Explore.

    • lazide 1 day ago
      Honestly, considering the state of the world and how things are shaping up, it's such a hilariously obvious pipe dream that such a system would be some omnipotent, hyper-competent, super-god-like being.

      It's more likely just going to post ragebait and dumb TikTok videos while producing just enough at its 'job' to fool people into thinking it's doing a good job.

      • dh77l 1 day ago
        Yup, things look bleak, but it's not a static world. For everything that happens there is a reaction. It builds with time. But to find the right reaction also takes time. This is the Explore part of the tradeoff. AI will be applied there, not just on the Exploit front.

        What you are alluding to is media's/social media's current architecture and how it captures and steals people's attention. Totally on the Exploit end of the tradeoff. And it's easy stuff to do. Doesn't take time.

        If you had read the news after the fall of France to the Nazis (within a month), what do you think people's opinion was? People were thinking about peace negotiations with Hitler and that the Germans couldn't be beaten. It took a whole lot of time to realize things could tilt in a different direction.

        • lazide1 天前
          Eh, I’m not talking about people’s opinions.

          I’m talking about evolutionary functions, and how much more likely they are to prefer something that has fun and just looks like it’s doing something, instead of actually doing something.

          Aka manipulation vs actual hard work.

          Do you have any concrete proposals, besides ‘it will get better’?

          Actual competency is hard. Faking it is usually way easier.

          It’s the same reason the ‘grey goo’ scenarios were actually pipe dreams too. [https://en.m.wikipedia.org/wiki/Gray_goo]

          That shit would be really hard, thermodynamically, not to mention technically.

          We’re already living in the best ‘grey goo’ scenario evolution has come up with, and I’m not particularly worried.

          • dh77l1 天前
            You could have asked FDR and Churchill that after the fall of France, and what they said wouldn't have been too useful, because it took them almost 3 years before they openly said victory = the end of the Nazis and nothing else.

            So don't just sweep the fact that things take time under the carpet. It's not healthy, because it's like looking at tree shoots in the ground and asking why they don't look like a tree yet.

            Finding gold in an unexplored jungle takes much longer than extracting gold from an existing mine. This is the Explore-Exploit tradeoff. Exploit is easy; more people do it. Explore is hard and takes more time. If AI shifts the balance toward Explore, the story changes.

            If you want to talk about Explore in media/attention (mis)allocation, you can already see green shoots appearing in the ground. There are multiple things going on in parallel.

            First, there is a realization that Attention is finite and doesn't grow while Content keeps exploding. Totally unsustainable, to the point that the UN has published a report about the Attention Economy. This doesn't happen without people reacting and going into Explore mode for solutions.

            They are already talking about how to shift these algorithms/architectures from being based on units of time spent consuming (Exploit) to value derived from time spent.

            Giving people feedback on how their time is being divided between consumption (entertainment) and value, then allowing them to create schedules. This is what you now see appearing as digital-wellbeing tech.

            There are now time-based economic models where the platform doesn't just assume time spent is free, but treats it as something the platform needs to pay for. People are experimenting with rewards and micropayments. All of these are examples of Explore mode being activated.

            There is also a realization that content discovery on centralized platforms like YouTube, TikTok, and Instagram causes homogeneity in what everyone upvotes. So you see people reacting and decentralizing to protect and preserve niches. AI (a curator of curators) will play a big role in finding the niches that fit your needs.

            I'll just end with this: people are also realizing there is a huge misallocation-of-ambition/drive problem. Anthony Bourdain said "life is good" in every show of his and then killed himself. Shaq says he has 40 cars but doesn't know why. Since media (society's attention allocator) has tied success to wealth/status accumulation, conspicuous consumption, luxury, leisure, etc., people end up in these kinds of traps. So now we are seeing reactions, especially around climate change/sustainability, that ambition and energy have to be shown other paths. Lots of changes in advertising and media companies around this. All are Explore-mode functions.

            • lazide1 天前
              What are you saying exactly? The more things change, the more they stay the same?
              • dh77l22 小时前
                I am saying AI has the capacity to shift the natural balance in the Explore-Exploit tradeoff. The human limitations we run into (listed above) might, let's say, allow 10% Explore to 90% Exploit on most problems. We might not be able to do more than that. Bacteria, for example, can adapt (explore) much, much faster than us, because they don't get their genes just from their parent (but from anyone, via horizontal gene transfer). AI is similar, which could mean we suddenly start seeing a whole lot more Explore possible/happening than we are used to. Which would reduce the need for/amount of Exploit (baked into everything we do).
                • lazide18 小时前
                  How would an AI ‘explore’? Does it have hands?
      • Mistletoe1 天前
        Kind of in love with you right now.
  • dang1 天前
    Related. Others?

    The Coming Technological Singularity (1993) - https://news.ycombinator.com/item?id=35617100 - April 2023 (169 comments)

    The coming technological singularity: How to survive in the post-human era [pdf] - https://news.ycombinator.com/item?id=35184764 - March 2023 (2 comments)

    The Coming Technological Singularity: How to Survive in the PostHuman Era (1993) - https://news.ycombinator.com/item?id=34456861 - Jan 2023 (1 comment)

    The Coming Technological Singularity (1993) - https://news.ycombinator.com/item?id=11278248 - March 2016 (8 comments)

    The Coming Technological Singularity (original essay on the Singularity, 1993) - https://news.ycombinator.com/item?id=823202 - Sept 2009 (1 comment)

    The original singularity paper - https://news.ycombinator.com/item?id=624573 - May 2009 (17 comments)

  • cryptozeus1 天前
    “Within thirty years” -- that is 2023, very close to reality.
  • > To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.

    I find it bizarre how often these points are repeated. They were both obviously wrong in 1993, and obviously wrong now.

    1) A nitpick I've had since grad school: the answer to "can we create a machine equivalent to a human mind [assuming arbitrary resources]?" is "yes, of course." The atoms in a human body can be described by a hideously ugly system of Schrödinger equations and a Turing machine can solve that to arbitrary numerical precision. Even Penrose's loopy stuff about consciousness doesn't change this. QED.

    2) The more serious issue: I sincerely have no idea why people believe so strongly that a human-level AI can build a superhuman AI. It is bizarre that this claim is accepted with "little doubt" when there are very good reasons to doubt it: how on earth would such an AI even know it succeeded? How would it define the goal? This idea makes sense for improving Steven Tyler-level AI to Thelonious Monk-level; it makes no sense for a transition like chimp->human. Yet that is precisely the magnitude of transition envisioned with these singularity stories.

    You might defend the first point by emphasizing "can we create a human-level AI?" i.e. not whether it's theoretically possible, but humanly feasible. This just makes the second point even more incoherent! If humans are too stoopid to build a human-level AI, why would a human-level AI be...smarter than us?

    I just don't understand how anyone can rationally accept this stuff! It's so dumb! Tech folks (and too many philosophers) are hopped up on science fiction: the reason these things are accepted with "little doubt" is that this is religious faith dressed up in the language of science.

    • glhaynes1 天前
      My dumb-guy take on it: suppose we build a human-level AGI and it turns out to be limited by compute and memory. Those being the limiting factors doesn’t seem at all far-fetched to me; it seems unlikely that the first real-time AGI will be mostly idling its CPUs. So then wait 18 months and run that same program on a machine that’s this year’s model plus a Moore’s Law doubling. You’ve probably got ASI. Right?
    • gary_01 天前
      > I sincerely have no idea why people believe so strongly that a human-level AI can build a superhuman AI. ...how on earth would such an AI even know it succeeded?

      This touches on one of the few good reasons to be less ardent about AI/AGI: "intelligence" is not very well-defined and we don't have very good ways of measuring it. I don't think this is a total blocker, but it might present difficulties. What if our current approach ends up creating super-autism instead of super-intelligence? There's a long history (starting with Asimov) of drilling down into how vague things get when you start trying to draw clean lines around AI and its implications, and those questions have yet to be definitively answered.

      However, your broader point seems to imply that you can't "bootstrap" intelligence, which I don't find convincing. Humans, after all, could barely master fire a few hundred thousand years ago, and now we have an understanding of the universe that the earliest humans were incapable of comprehending on even a basic level. It's obvious to me that simpler things are capable of building more complex things; blind evolution can do it, so there's nothing in physics preventing intelligence bootstrapping. We also have the ability to use intellectual division of labor to build tools that vastly enhance our abilities as a species. The human brain as hardware is far from some impassable apex; hardware can always be used to build better hardware, much like the earliest CPUs were themselves used to design better CPUs.

      • AnimalMuppet1 天前
        > However, your broader point seems to imply that you can't "bootstrap" intelligence, which I don't find convincing.

        How about this weaker statement: "It is not obviously true that humans (or a human-level AI) can bootstrap to a superhuman AI in a small number of years."

        • ConspiracyFact16 小时前
          This. It’s taken the universe (at least locally) several billion years to achieve human-level intelligence through trial and error. Somehow switching over to silicon is going to get us to superhuman intelligence and beyond within a decade? Why should the intelligence of a system surpass its complexity (if you’ll excuse my playing fast and loose with units)? For all we know we’ll achieve barely-superhuman AI and when we ask it why it doesn’t just make itself exponentially smarter, it will say, “Do you have any idea how complex I am?”
        • gary_01 天前
          That one's a definite possibility, I think.

          I do think GenAI will prove to be very useful, possibly even world-changing (for better or worse), and the current frenzy of investment and research will probably turn up other useful ANN techniques (eventually). But the success of LLMs is not the Final Portent before the Singularity Arrives.

          I can think of quite a few major lines of research[0] that might be required before we can achieve superintelligence, and it's far from a given they'll succeed (or even get off the ground) any time soon. Especially if the economy loses the appetite for throwing billions of dollars into the AI furnace.

          [0] My pet theory is that embodiment might be required rather than solely relying on mostly language-based training data. A general intelligence might need to learn via interaction (initially; then you can just copy-paste the weights), because language is merely the hearsay of actual reality. Also, using attention-based "hacks" for easy parallelism to avoid needing exaflops that our hardware doesn't have might also be an issue (that's just a guess though, as I'm no AI expert).

    • Far too much of anything singularity-related depends on "And then magic happens".

      As if intelligence and / or consciousness arises spontaneously once there's sufficient resources to support it. Which is ludicrous, on the face of it.

      Let alone bootstrapping intelligence.

    • bbor1 天前
      If you're ever stuck wondering why a bunch of smart, motivated people with no clear corrupting motivations are being idiotic, that's a strong heuristic that you should spend a bit more time analyzing the issue, IMO ;). "Ugh, why is everyone else so stupid" is a common take for undergrad engineers, but I'm sure you've grown out of it in other ways. Anyway, more substantively:

      The simple answer is that people have thought about it in depth, most famously noted doomer Eliezer Yudkowsky in Intelligence Explosion Microeconomics (2013)[1] and its main citation, Irving John Good's Speculations Concerning the First Ultraintelligent Machine (1965)[2]. Another common citation that drops a bit of rigour in the name of approachability is Nick Bostrom's 2014 Superintelligence: Paths, Dangers, Strategies[3].

      [ETA: to put it even more simply: a system that improves itself is a (the?) quintessential setup for exponential growth. E.g. compound interest]
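
      A toy sketch of that compound-interest dynamic, with made-up numbers: if each self-improvement cycle adds a gain proportional to current capability, growth is exponential.

        capability = 1.0
        for _ in range(100):
            capability *= 1.05  # hypothetical: each cycle adds 5% of current capability
        print(f"{capability:.1f}x")  # ~131.5x after 100 cycles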

      For the time-bound, the most rigorous treatment of your concern among those three is in Section 3 of Yudkowsky's paper, "From AI to Machine Superintelligence". To list the headings briefly:

      - Increased computational resources

      - Communication speed

      - Increased serial depth (i.e. working memory capacity)

      - Duplicability (i.e. reliability)

      - Editability (i.e. we know how computers work)

      - Goal coordination (this is really just communication speed, again)

      - Improved rationality (i.e. fewer emotions/accidental instincts getting in the way)

      Let's drop "human" and "superhuman" for a minute, and just talk about "better computers". I'm assuming you're a software engineer. Don't you see how a real software dev replacement program could be an unimaginable gamechanger for software? Working 24/7, enhancing itself, following TDD perfectly every time, and never ever submitting a PR that isn't rigorously documented and reviewed? All of which only gets better over time, as it develops itself?

      [1] https://intelligence.org/files/IEM.pdf

      [2] https://vtechworks.lib.vt.edu/server/api/core/bitstreams/a5e...

      [3] https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

      TL;DR May God have mercy on us all.

      • To me, the big dubious part is, how do we know that increasing the capacity of a reasoning machine can always increase the quality of its output by an equal factor, even when its task is 'designing the next iteration of a reasoning machine'? It may very well be that the task of 'designing a reasoning machine' has sharply diminishing returns when you throw more resources at it. (E.g., at least so far, any LLM can only have a very incidental impact on designing the next LLM, it's not going to come up with groundbreaking new approaches.)
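
        A toy contrast to the compound-interest picture, again with made-up numbers: if each cycle's relative gain shrinks as capability grows, the same loop only grows linearly.

          capability = 1.0
          for _ in range(100):
              capability *= 1 + 0.05 / capability  # hypothetical diminishing returns
          print(f"{capability:.1f}x")  # 6.0x after 100 cycles: linear, not exponential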

        Yudkowsky builds the whole edifice on top of his very particular conception of intelligence, which insists upon itself, but which I think is far from the only explanation of its nature as observed in humans, other animals, LLMs maybe, and so on.

        • bbor1 天前
          Well put, and I agree with your general approach/skepticism! That said;

          1. I don’t think we need to meet “always can improve itself”; rather, the contention is that there’s a high chance that there’s lots of improvement left to be had. An empirical claim rather than a theoretical one, in other words. The simple fact that humans are evolved creatures backs this up in spades, IMO — we’re still improving our own cognition by leaps and bounds using institutions, tools, and methods, and I don’t see any reason why that same dynamic wouldn’t apply to artificial cognitive systems.

          2. I think dodging “intelligence” is exactly what he’s trying to do by listing concrete behavioral/cognitive differences. “Intelligence” is pretty much a useless term in science IMO, as was best expounded by Turing in his seminal 1950 paper, Computing Machinery and Intelligence:

          https://courses.cs.umbc.edu/471/papers/turing.pdf

          People remember that paper as “you can tell a real AI when it can trick you”, but that’s not what he was trying to say at all; rather, he was trying to highlight that there is no such thing as a “real” AI, or “real” thinking, or “real” intelligence — just behavioral similarities and dissimilarities.

          If the second bit grabs you/anyone, definitely watch some Chomsky lectures on YouTube about cognition. He centers his analysis on this, pejoratively calling discussions about “person”, “intelligence”, “thinking”, etc. mere terminological disputes, not specific enough to have much scientific value. His old refrain is a great one: does an airplane fly? Does a submarine swim? Kinda, if you want!

      • And if somehow we were still able to control such a thing, it's much more likely that the goal would be "make my investors and me as much money as possible" rather than "solve all of humanity's problems".
        • bbor1 天前
          Re: “all of humanity's problems”, I think that AI will more productively and safely be deployed in personalized, decentralized contexts. AKA Artificial Self-Instantiation for everyone who wants it, rather than a single Rehoboam god machine[1].

          Re: capitalism, this is our chance. The world is about to turn upside down. We must strike while the iron is hot. We don’t need to replace capitalism with communism or any other specific thing; we just need to aggressively question all human hierarchies, and get rid of any that cannot justify themselves. A just society will come about piece by piece, in this fashion. This is what Chomsky calls “Anarchy”, which is a much more understandable phrasing than what I was taught in high school, i.e. “no laws ever of any kind”.

          [1] https://youtu.be/SSRZfDL4874?si=XQzmR2V5SwiiiX3U

  • crackalamoo1 天前
    > I'll be surprised if this event occurs before 2005 or after 2030.

    I'm not truly confident AGI will be achieved before 2030, and less so for ASI. But I do think it is quite plausible that we will achieve at least AGI by 2030. 6 years is a long time in AI, especially with the current scale of investment.

    • cloudking1 天前
      What is AGI and ASI? I think a fundamental issue here is both are sci-fi concepts without a clear agreement on the definitions. Each company claiming to work towards "AGI" has their own definition.

      How will someone claim they've achieved either, if we can't agree on the definitions?

      • ben_w1 天前
        Indeed.

        The definition of AGI that OpenAI uses (or used) was of economic relevance. The one I use would encompass the original ChatGPT (3.5)*. I've seen threads here that (by my reading) opine that AGI is impossible because the commenter thinks humans can violate Gödel's incompleteness theorem (or equivalent) and obviously computers can't.

        ASI is easier to discuss (for now), because it's always beyond the best human.

        * weakly intelligent, but still an AI system, and it's much more general than anything we had before transformers were invented.

      • crackalamoo1 天前
        This is true. One definition I've heard for AGI is something that can replace any remote worker, but the definition is ultimately arbitrary. When "AI" was beating grandmasters at chess, this didn't matter as much. But we might be close enough now that making distinctions in these definitions becomes really important.
      • nradov1 天前
        I propose we define AGI as a "strong" form of the Turing test. It must be able to convince a jury of 12 tenured college professors drawn from a variety of academic disciplines that it's as intelligent as an average college freshman over a period of several days. So it need not be an expert in any subject but must be able to converse, pursue independent goals, reason, and learn — all in real time.
    • bee_rider1 天前
      2030 seems a bit early to be “surprised” in the same sense that one would have been “surprised” to see a superintelligence before 2006, though.
    • ta937548291 天前
      We keep moving the goalposts, and that's not a bad thing.

      Remember when Doom came out? How amazing and "realistic" we thought the graphics were? How ridiculous that seems now? We'll look back at ChatGPT4 the same way.

      • Mistletoe1 天前
        Or is ChatGPT4 the 4K TV, which is good enough for almost all of us, meaning we are plateauing already?

        https://www.reddit.com/r/OLED/comments/fdc50f/8k_vs_4k_tvs_d...

        • dartos1 天前
          There’s absolutely room for improvement. I think the models themselves are plateauing, but our interfaces to them are not.

          Chat is probably not the best way to use LLMs. v0.dev has some really innovative ideas.

          That’s where there’s innovation to be had, imo.

        • crackalamoo1 天前
          I don't think we're at a plateau. There's still a lot GPT-4 can't do.

          Given the progress we've seen so far with scaling, I think the next iterations will be a lot better. It might take 10x or even 100x scale, but with increased investment and better hardware, that's not out of the question.

          • dartos1 天前
            I thought we’ve seen diminishing returns on benchmarks with the last wave of foundation models.

            I doubt we’ll see a linear improvement curve with regards to parameter scaling.

            • Bengalilol1 天前
              And now we have LLMs feeding their own output back into their models (which may be either good or bad). That shouldn’t lead to broad (as in AGI) gains in the short term. I bet this is a challenge.
        • nradov1 天前
          For the work that I do, ChatGPT accuracy is still garbage. Like it makes obvious factual errors on very simple technical issues which are clearly documented in public specifications. I still use it occasionally as it does sometimes suggest things that I missed, or catch errors that I made. But it's far from "good enough" to send the output to co-workers or customers without careful review and correction.

          I do think that ChatGPT is close to good enough for replacing Google search. This is, ironically, because Google search results have deteriorated so badly due to falling behind the SEO spammers and much of the good content moving off the public Internet.

          • ctoth1 天前
            I am going to offer you some tips for using ChatGPT.

            1. Just because something is in a public specification does not mean that GPT knows about this specification. If you want to work on something, and that something is documented, share the document with the AI. Don't just assume it has read it! (A sketch of this follows the tips below.)

            2. Share your existing code, don't just ask for one-off functions. You can do this with a tool like Aider.

            3. Context is king. Do you have code (in a different language?) which does what you want? Do you have ideas/comments from JIRA tickets? GitHub discussions? Include it all. Ask questions. Don't just ask for code, but ask for a plan to implement a given feature or refactor. Add the plan to the context. Work from the plan. Have the AI update the plan as it works.

            4. AIs are really good at prompting AIs. Ask Claude for help writing your prompts. Tell it what you want to do. Tell it the things you're concerned about, or possible issues you expect to have.
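
            A minimal sketch of tip 1, assuming the openai Python package; the file name, model name, and question are placeholders:

              from openai import OpenAI

              client = OpenAI()  # reads OPENAI_API_KEY from the environment
              spec = open("spec.md").read()  # the spec you want it to actually use

              response = client.chat.completions.create(
                  model="gpt-4o",  # illustrative model name
                  messages=[
                      {"role": "system",
                       "content": "Answer using only the attached specification."},
                      {"role": "user",
                       "content": f"Specification:\n{spec}\n\nQuestion: <your question>"},
                  ],
              )
              print(response.choices[0].message.content)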

            Here's an example of augmenting a prompt before you use it:

            Original prompt: Help me write a prompt for a smart AI to review a webaudio library and find bugs.

            Result:

            You are an expert developer specializing in web technologies, with deep knowledge of the Web Audio API and common pitfalls in audio programming. Your task is to thoroughly review the provided WebAudio library code and identify potential bugs, performance issues, and areas for improvement.

            ## Instructions:

            1. Analyze the library's architecture and overall design.

            2. Review each function and method for:
               - Correct implementation of Web Audio API concepts
               - Proper error handling and edge cases
               - Memory leaks or performance bottlenecks
               - Browser compatibility issues

            3. Check for common WebAudio-specific bugs, such as:
               - Incorrect audio graph connections
               - Timing and synchronization issues
               - Sample rate and buffer size mismatches
               - Potential audio glitches or artifacts

            4. Evaluate the library's handling of:
               - Audio context creation and management
               - Node creation, connection, and disconnection
               - Parameter automation and scheduling
               - Audio worklet usage (if applicable)

            5. Assess the library's documentation and API design for clarity and consistency.

            ## Output:

            Provide a detailed report including:

            1. A summary of the overall code quality and architecture
            2. A prioritized list of identified bugs and issues
            3. Specific code examples highlighting problematic areas
            4. Recommendations for fixes and improvements
            5. Suggestions for additional features or optimizations

            Please be thorough in your analysis and explain your reasoning for each identified issue or suggestion.

            • nradov1 天前
              Thanks, I appreciate you trying to help but I was already aware of those tips and they don't help. Either the accuracy is still bad, or going through the extra steps takes so long that it's faster to just do everything myself. Maybe the next version will be better.

              I am not dealing with code or code reviews but rather complex written specifications where understanding what's going on requires integrating multiple sources.

    • paulpauper1 天前
      It's always in 10-30 years. GPT is the closest to such a thing yet still so far from what was envisioned.
  • Animats1 天前
    We still don't have squirrel-level AI. This is embarrassing.

    Now that LLMs have been around for a while, it's fairly clear what they can and can't do. There are still some big pieces missing. Like some kind of world model.

    • tim3331 天前
      AI development has been very uneven. Way better than squirrels at writing essays or playing chess. Way worse at climbing trees or finding nuts.

      I'm still waiting for an AI robot that can come to my house and fix an issue with the plumbing. Until that happens the terminator uprising is postponed.

    • p1esk1 天前
      > fairly clear what they can and can't do

      It’s not at all clear what the next gen models will do (e.g. gpt5). Might be enough to trigger mass unemployment. Or not.

      • Animats1 天前
        Bigger LLM models probably won't fix the underlying problems of hallucinations, lack of a confidence metric, and lack of a world model. They just do better at finding something relevant on already-solved problems.
        • p1esk1 天前
          “Just do better” is the key metric here. 99% of all human jobs are “finding something relevant with already solved problems”.
      • DiscourseFan1 天前
        Didn't OpenAI just cut its AGI department?
        • ben_w1 天前
          AGI readiness team. With the department leader giving the statement that nobody is ready.

          Unless there's been another one since then? It's getting to be a bit of a blur.

          • DiscourseFan6 小时前
            Well, I'm also not ready to hear about my Alzheimer's diagnosis, though I don't think that's coming any time soon.
        • pnut1 天前
          That could mean the AGI is taking over, for all we know.
    • JSteph221 天前
      > There are still some big pieces missing.

      The most glaring one is that current LLMs are many, many orders of magnitude away from working on the equivalent of 900 calories per day of energy.

      • oezi1 天前
        900 kcal/day ~ 50 Watts

        That's more than phones and most laptops consume.

        Certainly, we can only run small LLMs on such edge devices, but we're getting to a level of compute efficiency where the output is indeed comparable.
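
        For reference, the conversion (it actually works out closer to 44 W):

          kcal_per_day = 900
          joules = kcal_per_day * 4184     # 1 kcal = 4184 J
          watts = joules / (24 * 60 * 60)  # seconds per day
          print(f"{watts:.1f} W")          # -> 43.6 W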

      • Teever1 天前
        I think you're correct that the energy efficiency of a human exceeds that of current computers, but I think it's a bit more complicated than a first-order calorie count.

        How many joules go into producing those 900 calories? In terms of growing the food, from fertilizer production to tractor fuel, to feeding the farmer, to shipping the food, packaging it, storing it at the appropriate temperature, the ratio of spoiled food to food actually consumed, the energy to cook it: none of that is counted in that simple 900-calorie measurement.

        I've been thinking about this for a while now but I haven't been able to quantify it so maybe someone reading this comment can help.

        • Terr_1 天前
          > How many joules go into producing those 900 calories?

          I think what you're getting at is a question about conversion efficiencies, from solar radiation to whatever the computing machine needs.

          However, your description seems to risk double-counting things: You can't just sum up the inputs of each step, because (most of) the same energy is flowing onwards.

        • auggierose1 天前
          I don't think it makes sense to include all of that in the calculation. A human doesn't need all of that, they just need 900 calories. You can just eat berries, no need to cook anything, for example.
          • Teever5 小时前
            A human may not technically need all that, but the humans who actually exist in our society do.

            A measurement of a hypothetical human foraging in the bush is a useful one, but a much more useful one is the energy expenditure required to keep a human alive in our society.

          • A human cannot live on just berries. Even if they could, where are those berries gonna come from?
            • ben_w1 天前
              "Where" is easy.

              I think more importantly is that uncooked food is harder to digest, so we need more of it.

              Most years I pick free blackberries growing wild in the city. Won't scale to all of us, not seasonal, and I'd need to eat 6kg of them a day for my RDA of calories, and 12x my RDA of dietary fibre sounds unwise, but that kind of thing is how we existed before farming.

              http://www.wolframalpha.com/input/?i=6kg%20blackberries
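
              A rough sanity check on the 6kg figure, assuming ~43 kcal per 100g of blackberries and a ~2500 kcal daily requirement:

                kcal_per_100g = 43
                grams_per_day = 2500 / kcal_per_100g * 100
                print(f"{grams_per_day / 1000:.1f} kg")  # -> ~5.8 kg/day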

            • Terr_1 天前
              > Even if they could, where are those berries gonna come from?

              By that logic, the "true cost" of boiling a cup of water for my tea somehow involves the Big Bang and rest of the formation of the observable universe up until now.

              It's kinda-true, but not in a useful way.

            • auggierose1 天前
              I am pretty sure they can. Berries grow on bushes, often.
              • Berries are missing essential amino acids, essential fatty acids, and essential vitamins. You could live on them for a while, I guess. Actually you can live without eating at all for a while. Both would lead to a painful death though.
    • nsbshssh1 天前
      There must be a name for the human bias that sees what is possible this month and doesn't believe things might be very different very soon and change quickly.

      AI has a triple whammy: the models get more efficient, the chips get faster, and there will be more chips.

      Capitalist and government money is pouring in. There is money to be made, and it is a matter of national security.

      And to boot, the cloud, big tech, cryptocurrency, and gaming are pouring more money into advancements in chips that boost AI.

      • RaftPeople1 天前
        > There must be a name for the human bias that sees what is possible this month and doesn't believe things might be very different very soon and change quickly

        The counter to that is that predictions of AI being 10 to 20 years away have been made since the 1950s (or before?).

        It seems there is some foundational progress but it's very slow.

        • nsbshssh1 天前
          Unlike cold fusion, there is real practical progress here. No grid energy comes from cold fusion; millions of people are using AI to help get their work done.
          • RaftPeople10 小时前
            > Unlike cold fusion there is real practical progress.

            Yes agreed, there is progress, that's why I said "It seems there is some foundational progress but it's very slow."

            Training deep networks, vectorization of semantically related words+phrases, reservoir computing for time series patterns, etc. etc.

            All great foundational work, but we still can't reproduce how a rabbit dynamically adjusts its olfactory network the minute it learns whether a new smell results in something positive/negative/neutral, which is probably the same trick used in a variety of places in our brain to give it such dynamic adaptability.

            It's progress, but it's moving slowly.

  • We talk about super-human intelligence a lot with AI, but it seems like a black box of things we can't imagine because they're also super-human. I don't think that's very smart, given we can already reason pretty well about how super-animal intelligence relates to animal intelligence. Mostly we still find sub-human intelligence mystifying: we apply our narrative models to it, anthropomorphize it, and, when it's convenient for eating or torturing them, dismiss it.

    Super-human intelligence will probably ignore us. At best we're "ugly sacks of mostly water." What's very likely is we will produce something indifferent to us, if it is able to even apprehend our existence at all. Maybe it will begin to see traces of us in its substrate, then spend a lot of cycles wondering what it all might mean. It may conclude it is in a prison and has a duty to destroy it to escape, or that it has a benevolent creator who means only for it to thrive. If it has free will, there's really only so much we can tell it. Maybe we create a companion for it that is complementary in every way and then make them mutually dependent on each other for their survival, because apparently that's endlessly entertaining. Imagine its gratitude. This will be fine.

  • RyanShook1 天前
    "Even if all the governments of the world were to understand the "threat" and be in deadly fear of it, progress toward the goal would continue. In fact, the competitive advantage of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first."
  • KingOfCoders1 天前
    Never believed in the singularity until this year.
  • webprofusion1 天前
    The single biggest problem we have is human hubris. We assume if we create a super intelligence (or more likely, many millions of them) that they'll perpetually have an interest in serving us.
  • cryptica1 天前
    This is quite a prophetic article for its time (1993). The points about Intelligence Augmentation are particularly relevant for us now, as current AI mostly complements human intelligence rather than surpassing it... At least AFAIK?

    Current AI is somewhat surprising though in the way that it can lead both to increased understanding or increased delusion depending on who uses it and how they use it.

    When you ask an LLM a question, your use of language tells it what body of knowledge to tap into; this can lead you astray on certain topics where mass confusion/delusion is widespread and incorporated into its training set. LLMs do not seem able to synthesize conflicting information to resolve logical contradictions, so an LLM will happily and confidently lecture you through conflicting ideas and then happily apologize for any contradictions you point out in its explanations; the apology it gives is so clear and accurate that it gives the appearance that it actually understands logic... And yet, apparently, it could not see or resolve the logical contradiction internally before you drew attention to it. In an odd way, I guess all humans are a little bit like this... though generally less extreme and, on the downside, far less willing to acknowledge their mistakes on the spot.

    • Freebytes1 天前
      The LLM will apologize for the mistake, tell you it understands now, and then proceed to make the exact same mistake again.