ChatGPT Search

(openai.com)

849 points | by marban9 hours ago

148 comments

  • freediver7 hours ago
    Been thinking about this a lot [1]. Will this fundamentally change how people find and access information? How do you create an experience so compelling that it replaces the current paradigm?

    The future promised in Star Trek and even Apple's Knowledge Navigator [2] from 1987 still feels distant. In those visions, users simply asked questions and received reliable answers - nobody had to fact-check the answers ever.

    Combining two broken systems - compromised search engines and unreliable LLMs - seems unlikely to yield that vision. Legacy, ad-based search has devolved into a wasteland of misaligned incentives and conflicts of interest, and has proliferated content farms across the web, optimized for ads and algos instead of humans.

    The path forward requires solving the core challenge: actually surfacing the content people want to see, not what intermediaries want them to see - which means a different business model in search, one with no intermediaries. I do not see a way around this. Advancing models without advancing search is like having a Michelin-star chef work with spoiled ingredients.

    I am cautiously optimistic we will eventually get there, but boy, we will need a fundamentally different setup in terms of incentives involved in information consumption, both in tech and society.

    [1] https://blog.kagi.com/age-pagerank-over

    [2] https://www.youtube.com/watch?v=umJsITGzXd0

    • MollyRealized6 minutes ago
      "The future promised in Star Trek [...] nobody had to fact-check the answers ever."

      You're actually a bit mistaken, there.

      https://en.wikipedia.org/wiki/Court_Martial_(Star_Trek:_The_...

    • freedomben5 hours ago
      I genuinely think Kagi has led the way on this one. Simplicity is beautiful and effective, and Kagi has (IMHO) absolutely nailed it with their AI approach. It's one of those things that in hindsight seems obvious, which is a pretty good measure of how good an idea is IMHO.

      Google could have done it and kind of tried, although they're AI sucks too much. I'm very surprised that OpenAI hasn't done this sooner as well. They're initial implementation of web search was sad. I don't mean to be super critical as I think generally OpenAI is very, very good at what they do, but they're initial browse the web was a giant hack that I would expect from an intern who isn't being given good guidance by their mentors.

      Once mainstream engines start getting on par with Kagi, there's gonna be a massive wave of destruction and opportunity. I'm guessing there will be a lot of new paywalls popping up, and lots of access deals with the search engines. This will raise the barrier to entry even further for new search entrants, and will further fragment information access between the haves and have-nots.

      I'm also cautiously optimistic though. We'll get there, but it's gonna be a bit shaky for a minute or two.

      • hn_throwaway_993 hours ago
        > I'm also cautiously optimistic though. We'll get there, but it's gonna be a bit shaky for a minute or two.

        But I don't understand how all of these AI results (note I haven't used Kagi so I don't know if it's different) don't fundamentally and irretrievably break the economics of the web. The "old deal", if you will, was that many publishers would put stuff out on the web for free, with the hope that they could monetize it (somehow, even just with something like AdSense ads) on the back end. This "deal" had already been getting a lot worse over the past years, as Google did more and more to keep people from ever needing to click through in the first place. Sure, these AI results include citations, but the click-through rates are probably abysmal.

        Why would anyone ever publish stuff on the web for free unless it was just a hobby? There are a lot of high quality sites that need some return (quality creators need to eat) to be feasible, and those have to start going away. I mean, personally, for recipes I always start with ChatGPT now (I get just the recipe instead of "the history of the domestication of the tomato" that Google essentially forced on recipe sites for SEO competitive reasons), but why would any site now ever want to publish (or create) new high quality recipes?

        Can someone please explain how the open web, at least the part of the web that requires some sort of viable funding model for creators, can survive this?

        • generalizations2 hours ago
          > Why would anyone ever publish stuff on the web for free unless it was just a hobby

          That's exactly what the old deal was, and it's what made the old web so good. If every paid or ad-funded site died tomorrow, the web would be pretty much healed.

          • jascha_eng1 hour ago
            That's a bit too simple. There are far fewer people producing quality content "for fun" than people who aim, or at least eventually hope, to make money from it.

            Yes a few sites take this too far and ruin search results for everyone. But taking the possibility away would also cut the produced content by a lot.

            YouTube, for example, had some good content before monetization, but there are a lot of great documentary-like channels now that simply wouldn't be possible without ads. There is also clickbait trash, yes, but I'd rather have both than neither.

            • ericd1 hour ago
              Demonetizing the web sounds mostly awesome. Good riddance to the adtech ecosystem.
              • BoiledCabbage1 minute ago
                Not to be the downer, but who pays for all the video bandwidth, who pays for all the content hosting? The old web worked because it was mostly a public good, paid for by govt and universities. At current webscale that's not coming back.

                So who pays for all of this?

                The web needs to be monetized, just not via advertising. Maybe it's microtransactions, maybe subscriptions, maybe something else, but this idea of "we get everything we want for free and nobody tries to use it for their own agenda" will never return. That only exists for hobby technologies. Once they are mainstream they get incorporated into the mainstream economic model. Our mainstream model is capitalism, so it will be ever present in any form of the internet.

                The main question is how people/resources can be paid for while maintaining healthy incentives.

              • erickhill21 minutes ago
                The textual web is going the way of cable TV - pay to enter. And now streaming. "Alms for the poor..."

                But, like on OTA TV, you can get all the shopping channels you want.

            • generalizations22 minutes ago
              > There are far fewer people producing quality content "for fun" than people who aim, or at least eventually hope, to make money from it... But taking the possibility away would also cut the produced content by a lot.

              ...is that a problem? Most of what we actually like is the stuff that's made 'for fun', and even if not, killing off some good stuff while killing off nearly all the bad stuff is a pretty good deal imo.

          • rodorgas20 minutes ago
            Remember that in that time, less web content meant major media outlets dominated news and entertainment via TV and newspapers.
          • cushpush1 hour ago
            Paging Sergey
        • pocketarc2 hours ago
          The internet was great before the great monetization of it; it had tons of information provided for free with no ads. After ads, it will still have tons of information. Stack Overflows will still exist, as will Wikipedias, corporate blogs that serve just to boost the company, people making courses and other educational content, and personal blogs (countless of which make their way here); all of those will continue to exist.

          Ad-driven social networks will continue to exist as well.

          The age of the ad-driven blog website is probably at an end. But there will be countless people posting stuff online for free anyway.

        • Stratoscope1 hour ago
          > "the history of the domestication of the tomato" that Google essentially forced on recipe sites for SEO competitive reasons

          That may help with SEO, but another reason is copyright law.

          Recipes can't be copyrighted, but stories can. Here is how ChatGPT explained it to me:

          > Recipes themselves, particularly the list of ingredients and steps, generally can't be copyrighted because they're considered functional instructions. However, the unique way a recipe is presented—such as personal stories, anecdotes, or detailed explanations—can be copyrighted. By adding this extra content, bloggers and recipe creators can make their work distinctive and protectable under copyright law, which also encourages people to stay on their page longer (a bonus for ad revenue).

          > In many cases, though, bloggers also do this to build a connection with readers, share cooking tips, or explain why a recipe is special to them. So while copyright plays a role, storytelling has other motivations, too.

        • eviks16 minutes ago
          The recipe trade-off doesn't make sense: while it's trivial to skip the history, you can't skip the false ingredients of the GPT variety.

          Then this whole category is not known for "high quality recipes", so the general state wouldn't change much?

        • cthor2 hours ago
          Many people have an intrinsic motivation to share knowledge. Have a look at Wikipedia. There are enough of these people that we don't need to destroy the open Internet to accommodate those who only write when they expect to be paid.
        • csa1 hour ago
          > Can someone please explain how the open web, at least the part of the web that requires some sort of viable funding model for creators, can survive this?

          The funding model for the open web will be for the open web content to be the top of the funnel for curated content and/or walled gardens.

          I think many business models already treated the web this way. Specifically, get people away from the 800-pound gorilla rent-seekers like Google and Amazon, and get them into your own ecosystem.

        • odo12421 hour ago
          Wait, did google force "the history of the domestication of the tomato" to be part of recipes on the web for SEO reasons?
      • alfalfasprout4 hours ago
        Yep, I was incredibly skeptical about Kagi but I tried it and never looked back. Now my wife, friends, and several coworkers are customers.

        The chatgpt approach to search just feels forced and not as intuitive.

        • SOLAR_FIELDS3 hours ago
          Once Kagi implements location-aware search that is actually useful, I'll be interested in Kagi. That's what made me leave the engine, despite loving it otherwise.
          • abhgh16 minutes ago
            Same here - I do a lot of location aware searches. When I left Kagi after trying it out for a while, I wrote a detailed feedback hoping it would be useful to the Kagi team.
          • lstamour3 hours ago
            Google Maps is quite the moat. I suspect they'll need to find a way to license the data, e.g. via API. Apple has not (yet) been as successful at building out a database of local places with reliable hours of operations, reviews, etc.
            • pzmarzly1 hour ago
              There is an official Google Maps API, but it is already very expensive, with prices rising from time to time. There is no other company (other than maybe Meta) that has this much POI data in the Western world.

              So that is a solid advantage that Google is going to have, but the maps business alone wouldn't be able to keep it in the S&P list for long.

          • sbeckeriv2 hours ago
            If you go to maps.kagi.com and allow access to your location, local results should be better. If it doesn't ask for access to your location, there is a small icon on the bottom right-hand side that shows whether it has access.
            • lolinder51 minutes ago
              That's great if I'm trying to find a location, but that's not what local results is about.

              Local results means that if I search for "driving laws", Google gives me .gov sites for my state as the top results, while Kagi's first page gives me results for 8 other states (including Alaska!) but not for my state.

              There are a lot of kinds of queries that benefit from knowing the user's location even though they aren't actually looking for a place that exists on a map.

              (I'm a happy paying Kagi user, but OP is right that this is its weakest point by far.)

              • SOLAR_FIELDS32 minutes ago
                For what it’s worth DuckDuckGo is flawed in the exact same way. I ended up leaving DDG for the exact same reason years ago
      • skipants44 minutes ago
        Could it be that Kagi benefits from being niche, though? Google search gets gamed because it’s the most popular and therefore gaming it gives the best return. I wonder if Kagi would have the same issues if it was the top dog.
        • lolinder1 minute ago
          I think they absolutely benefit from being niche, but there are a few other things they have going for them that won't go away if they become popular:

          * They're not ad funded. Sergey Brin and Larry Page called this out in 1998 and it is just as true as ever: you need the economics to align. Kagi wins if people keep paying for it. Google wins if you click on Search ads or if you visit a page filled with their non-Search ads.

          * Partially because of the economic alignment, Kagi has robust features for customizing your search results. The classic example is that you can block Pinterest, but it also allows gentler up- and down-weights. I have Wikipedia get a boost whenever its results are relevant, which is by itself a huge improvement over Google lately. Meanwhile, I don't see Fandom wikis unless there's absolutely nothing else.

          I hope to see more innovation from Kagi on the customization side of things, because I think that's what's going to make the biggest difference in preventing SEO gaming. If users can react instantly to block your site because it's filled with garbage, then it won't matter as much if you find a brief exploit that gets you onto the first page of the natural search results. On Google, Fandom is impossible to avoid. On Kagi it just takes one click.
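
          Mechanically, the customization is just per-domain weights applied to the ranking. A toy sketch of the idea (made-up weights, obviously not Kagi's actual code):

              from urllib.parse import urlparse

              # Hypothetical per-domain preferences: >1 boosts, <1 demotes, 0 blocks.
              WEIGHTS = {"en.wikipedia.org": 1.5, "fandom.com": 0.2, "pinterest.com": 0.0}

              def rerank(results):
                  """results: list of (url, base_score) pairs from the underlying index."""
                  kept = []
                  for url, score in results:
                      weight = WEIGHTS.get(urlparse(url).netloc, 1.0)
                      if weight == 0.0:
                          continue  # blocked domain: drop it entirely
                      kept.append((url, score * weight))
                  return sorted(kept, key=lambda pair: pair[1], reverse=True)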

      • crabmusket2 hours ago
        I wouldn't usually point this out, but as you did it repeatedly: "they're" is a contraction of "they are". You're looking for the possessive, "their".

        - Your local grammar pedant

      • pensatoio2 hours ago
        I gave Kagi a shot two weeks ago, and it instantly impressed me. I didn't realize how much search could be improved. It's a beautiful, helpful experience.
        • ericd59 minutes ago
          Yeah, it’s wonderful. Especially once you take the time to up/downrank domains.
      • ahmedbaracat3 hours ago
        Are you referring to Kagi Assistant?

        https://help.kagi.com/kagi/ai/assistant.html

      • gr__or3 hours ago
        I don’t understand how it’s different to Perplexity, looks pretty much the same. Can you enlighten me?
        • radicality3 hours ago
          Not op, but Kagi user. Also have perplexity but usually use kagi.

          I would say: 1) The UI. You're still performing normal searches in Kagi. But if you hit q, or end your query with a question mark, you get an LLM-synthesized answer at the top, and can still browse and click through the normal search results.

          2) Kagi has personalization, i.e. you can uprank/downrank/block domains, so the synthesized LLM answer should usually be better because it has your personalized search results as input.
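
          Roughly (my guess at the behavior, not Kagi's actual code), the quick-answer trigger is just a property of the query, and the synthesis runs over your personalized results:

              def wants_quick_answer(query: str) -> bool:
                  # Hypothetical: end the query with "?" (or use the q shortcut in the UI).
                  return query.rstrip().endswith("?")

              def build_answer_prompt(query: str, personalized_results: list) -> str:
                  # Ground the synthesized answer in the user's personalized (up/downranked) results.
                  sources = "\n".join(f"- {url}" for url in personalized_results[:5])
                  return f"Answer '{query}' using only these sources:\n{sources}"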

          • disqard2 hours ago
            Paying customer of Kagi here.

            In addition to all that's been written above, you can configure personal filters, so that (for example) you never ever see a pinterest page in your search results. Things like that are IMO killer features today.

      • justinclift1 hour ago
        > absolutely nailed it with their AI approach.

        Thankfully, Kagi also have a toggle to completely turn that crap (AI) off so it never appears.

        Personally, I have absolutely no use for a product that can randomly generate false information. I'm not even interested until that's solved.

        (If/when it ever is though, at that point I'm open to taking a look)

        So yeah, Kagi definitely "leads the way" on this. By giving the user a choice to not waste time presenting AI crap. :)

        • bdangubic36 minutes ago
          You have no use for a product that can randomly generate false information but you trust google to provide you with search results based on how much they were paid for those keywords?…

          give me ai hallucinations over google every day of the week and twice on sunday…

          • Barrin927 minutes ago
            >but you trust google to provide you with search results based on how much they were paid for those keywords?

            Google isn't paid for keywords; that's not how search works. They sell ad space; Google does not rank search content higher for payment.

            And the obvious point is, you don't need to trust Google, because they merely point you to content; they don't produce the content. They're an index of real existing content on the web, which you can judge for yourself. A search index, unlike an AI model, does not output uniform or synthetic content.

      • kman822 hours ago
        Dude kagi is crap
        • tempest_1 hour ago
          I know the internet lately incentivizes low effort comments like this but be better.
    • wvenable6 hours ago
      > Will this fundamentally change how people find and access information? How do you create an experience so compelling that it replaces the current paradigm?

      I think it's already compelling enough to replace the current paradigm. Search is pretty much dead to me. I have to end every search with "reddit" to get remotely useful results.

      The concern I have with LLMs replacing search is that once it starts being monetized with ads or propaganda, it's going to be very dangerous. The context of the results is scrubbed away.

      • jsheard6 hours ago
        > The concern I have with LLMs replacing search is that once it starts being monetized with ads or propaganda, it's going to be very dangerous.

        Not to mention that users consuming most content through a middleman completely breaks most publishers' business models. Traditional search is a mutually beneficial arrangement, but LLM search is parasitic.

        Expect to see a lot more technical countermeasures and/or lawsuits against LLM search engines which regurgitate so much material that they effectively replace the need to visit the original publisher.

        • TeMPOraL5 hours ago
          > Traditional search is a mutually beneficial arrangement, but LLM search is parasitic.

          Traditional search is mutually beneficial... to search providers and publishers. At expense of the users. LLM search is becoming popular because it lets users, for however short a time this lasts, escape the fruits of the "mutually beneficial arrangement".

          If anything, that arrangement of publishers and providers has become an actual parasite on society at large. Publishers, in particular, will keep whining about being cut off; I have zero sympathy - people reach for LLMs precisely because publishers have been publishing trash and poison, entirely intentionally, optimizing for the parasitic business model, and it got so bad that the major use of LLMs is wading through that sea of bullshit so that we don't have to.

          The ad-driven business model of publishing has been a disaster for society, and deserves to be burned down completely.

          (Unfortunately, LLMs will work only for a short while, they're very much vulnerable to capture by advertisers - which means also by those publishers who now theatrically whine.)

          • jsheard5 hours ago
            OK, but someone still has to publish the subset of good content that the LLMs slurp up and republish. LLMs still need fresh quality content from somewhere.
            • ericd55 minutes ago
              Probably no one here is trying to profit by their additions to the discussion, but they’re still regularly sharing very useful information.
            • astrange5 hours ago
              They're getting it by paying contractors at Scale AI to write content for them.
              • tivert4 hours ago
                > They're getting it by paying contractors at Scale AI to write content for them.

                Just a wild guess, but at best that content is probably pretty mediocre quality. It's probably Mikkelsen Twins ebook-level garbage.

            • TeMPOraL5 hours ago
              Not anymore. There's arguably more than enough data to form a base for strong LLMs; extra data is nice, but doesn't have to come in such quantity.

              (In fact, there's value in trying to filter excess crap out of existing training sets.)

              • jsheard5 hours ago
                We're talking about LLM-driven search engines here; the assumption is that they will always need up-to-date information. A "strong LLM" can't give you the latest on the presidential election if its knowledge cut-off is in 2023, so these companies' "solution" is to scrape today's New York Times and get the LLM to write a summary.
              • whiplash4515 hours ago
                You still need fresh data for many use cases.
              • theappsecguy5 hours ago
                Uh, sorry what?

                What happens when you need to search something new? Just hallucinations all the way down?

              • HDThoreaun3 hours ago
                LLMs aren't embodied. They cannot break news, as they have no ability to gather fresh news.
            • jorvi5 hours ago
              For all the hate it gets, Brave solved this half a decade ago already.

              - Publishers no longer show you ads, they just get paid out of BAT.

              - Brave shows you ads, but Brave does not depend on that to survive. Because of that there is no weird conflict of interest like with Google/Facebook, where the party that surfaces your content is also the party providing you with ads.

              - Users can just browse the web without ads as a threat vector, but as long as you have BAT (either via opt-in Brave ads or by purchasing it directly) you are not a freeloader either.

              • seanthemon4 hours ago
                The crypto part of brave is the worst part of brave..
                • jorvi2 hours ago
                  How is it the worst part?

                  You opt-in to the ads, you get them in your notifications, and every time you tap on one of them you get a few BAT. You browse, the BAT get paid out to whichever sites you visit (or linger on, depending on your configuration). You can opt out of the ads at any time. Brave didn't pre-mine their own coins. And you can buy BAT if you want to support sites without watching ads.

              • einpoklum3 hours ago
                > Brave solved this ... Brave shows you ads

                Showing people ads is part of the problem to be solved.

                • literalAardvark2 hours ago
                  It is solved. You can pay Brave to not be shown ads.

                  And the website gets paid either way.

                • jorvi2 hours ago
                  How else are you going to pay the people writing articles, creating content and doing research?

                  Not every person/site can run on Patreon or sponsorship deals. And paywalling a lot of the web would exclude vast swathes of people.

                  • einpoklum1 hour ago
                    1. With ads, I don't pay people writing articles, creating content or doing research.

                    2. "How else would you achieve X than by manipulating people visiting your website into paying for things they probably don't need, and be misinformed and tracked by powerful commercial and political entities?" - I can but shrug at this question.

                    3. The vast majority of written content is never rewarded or compensated monetarily, ads or no ads.

                    • jorvi42 minutes ago
                      > With ads, I don't pay people writing articles, creating content or doing research.

                      You do. The ad broker sells access to your eyeballs to a company, and then gives part of that money to whichever parties have a monetization agreement in the content.

                      > "How else would you achieve X than by manipulating people visiting your website into paying for things they probably don't need, and be misinformed and tracked by powerful commercial and political entities?" - I can but shrug at this question.

                      Always fun to see people with strong opinions be critically misinformed.

                      Brave’s ads don’t have tracking, by design.

                      > The vast majority of written content is never rewarded or compensated monetarily, ads or no ads.

                      By that logic we should stop paying for art?

          • dredmorbius1 hour ago
            This in absolute spades, and I wish there was a way to elevate comments to top-level posts sanely.

            But yes: the original Web served its (non-profit-motivated) creators and readers. The past two decades of advertising-based web has served publishers and advertisers, precisely as you note. LLM is mixing that up for the moment but I sincerely doubt that it will last.

            That said, I welcome the coming ad/pub pain with unbridled glee.

          • tivert5 hours ago
            > Traditional search is mutually beneficial... to search providers and publishers. At expense of the users. LLM search is becoming popular because it lets users, for however short a time this lasts, escape the fruits of the "mutually beneficial arrangement".

            Out of the pot and into the fire, as they say.

          • hn_throwaway_992 hours ago
            > At expense of the users.

            Bullshit. Users have shown time and time and time again that they prefer (generally, at large) free content, which has to be supported by ads, over actually paying directly for the labor of others.

          > The ad-driven business model of publishing has been a disaster for society, and deserves to be burned down completely.

            I tend to agree, but people can't expect content, which needs sizable amounts of time and money to produce, for free - it needs some sort of workable funding model. LLMs are only viable now because they were able to slurp up all that ad-supported content before they broke the funding model. That window is closing, and fast.

        • kylebenzle6 hours ago
          Yes, I also don't understand how LLM-based companies expect people to keep producing content for them for free.
          • dageshi6 hours ago
            I think they're gonna have to pay.

            The way reddit limited access to their API and got google to pay for access. Some variation of that but on a wider scale.

            • ileonichwiesz53 minutes ago
              But that breaks OpenAI’s (et al) entire business model. Those AI companies can barely afford to operate as it is, while they scrape the entire web for free. I don’t see how they could keep above water once every website starts paywalling their stuff.
        • meowkit6 hours ago
          “Fuck you, pay me” - Childish Gambino

          The whole thing needs a reframe. Ad-driven business only works because it's a race to the bottom. Now we are approaching the bottom, and it's not gonna be as competitive. Throwback to the 90s, when you paid for a search engine?

          If you can charge the user (the customer - NOT the product) and then pay bespoke data providers (under which publishers fall), then the model makes more sense, and LLM providers are normal middlemen, not parasites.

          The shift is already underway imo - my age cohort (28 y/o) does not consume traditional publications directly. It's all through summarization like podcast interviews, YouTube essays, social media (Reddit), etc.

          • ianmcgowan5 hours ago
            "Fuck you, pay me" - Ray Liotta in Goodfellas (1990)

            :-)

          • mewpmewp25 hours ago
            I think something as important as accurate and quick search is definitely something people should be willing to spend on. $20/month for something like that seems like an absolute no-brainer, and it should be for everyone, in my view.
            • plasticeagle4 hours ago
              People already spend upwards of $50 a month for the internet itself, plus they probably pay monthly for one or more streaming services. They likely pay separately for mobile data too.

              Separate monthly fees for separate services is absolutely unsustainable already. The economic model to make the internet work has not yet been discovered, but $20 a month for a search engine is not it.

              • mewpmewp23 hours ago
                For me the ideal would be some form of single subscription - I'm fine with $100/month, where whatever I use is proportionally tracked and the services I use are ad-free, oriented to bring me the content I absolutely want and nothing else. The $100 would be spread among those services in proportion to my usage.
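
                A rough sketch of what I mean (hypothetical services and usage numbers, assuming usage can be metered per service):

                    BUDGET = 100.00  # flat monthly subscription

                    # Hypothetical metered usage this month (hours, requests, whatever the unit is).
                    usage = {"search": 40, "news": 25, "video": 30, "recipes": 5}

                    total = sum(usage.values())
                    payouts = {service: round(BUDGET * amount / total, 2) for service, amount in usage.items()}
                    print(payouts)  # {'search': 40.0, 'news': 25.0, 'video': 30.0, 'recipes': 5.0}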
                • ileonichwiesz51 minutes ago
                  That’s a nice idea, but the entire world is used to getting their internet content for free by now. People who are willing to pay anything for websites are a tiny minority.
                • lanstin2 hours ago
                  I totally agree this payment pattern would work. I think the technical implementation is pretty straightforward but getting enough writers and artists to join would be difficult.
                  • kid641 hour ago
                    Don't you think the bigger problem is the statistically insignificant number of people that could actually afford such a model?
            • TeMPOraL4 hours ago
              I pay for Kagi, and apparently so do many others here on HN. This, however, solves only half of the problem - publishers are not on board with the scheme, so they still output impression-optimized "content". But at least the search engine isn't working against my interests.
      • ho_schi5 hours ago
        Same here?

        Search means either:

            * Stack Overflow. Damaged by its new owner but the idea lives.
            * Reddit. Google tries to fuck it up with „Auto translation“?
            * Gitlab or GitHub if something needs a bugfix.
        
        
        The rest of the internet is either an entire ****show or pure gold-pressed latinum, but hardly navigable thanks to monopolies like Google and Microsoft.

        PS: ChatGPT's answers are already declining because its source is Stack Overflow? And… well… those sources are humans.

      • steelframe6 hours ago
        > Search is pretty much dead to me.

        I've heard reports that requesting verbatim results via the tbs=li:1 parameter has helped some people postpone entirely giving up on Google.

        Personally I've already been on Kagi for a while and am not planning on ever needing to go back.
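
        For reference, the verbatim trick is just an extra query parameter on the search URL; a minimal sketch:

            from urllib.parse import urlencode

            def verbatim_search_url(query: str) -> str:
                # tbs=li:1 asks Google for verbatim results (no synonym/fuzzy expansion).
                return "https://www.google.com/search?" + urlencode({"q": query, "tbs": "li:1"})

            print(verbatim_search_url('error "ECONNRESET" node'))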

        • encom5 hours ago
          Fuzzy search is cancer. I search for $FOO, click a result, Ctrl-F for $FOO ==> Not found. Many such cases. If there's a way to force DuckDuckGo to actually do what I tell it to, I'd love to hear it.
          • bruh24 hours ago
            I thought this problem would disappear upon switching to Kagi, but it suffers from the same disease, albeit to a lesser extent.

            I remember reading a Google Search engineer on here explain that the engine just latches onto some unrendered text in the HTML code. For example: hidden navbars, prefetch, sitemaps.

            I was kinda shocked that Google themselves, having infinite resources, couldn't get the engine to realize which sections get rendered... so that might have been a good excuse.

          • maqnius4 hours ago
            Try searching for "$FOO", that's what I usually do in those cases. See https://duckduckgo.com/duckduckgo-help-pages/results/syntax/
      • Terr_5 hours ago
        > I think it's already compelling enough to replace the current paradigm. Search is pretty much dead to me. I have to end every search with "reddit" to get remotely useful results.

        I worry that there's a confusion here--and in these debates in general--between:

        1. Has the user given enough information that what they want could be found

        2. Is the rest of the system set up to actually contain and deliver what they wanted

        While Aunt Tillie might still have problems with #1, the reason things seem to be Going To Shit is more on #2, which is why even "power users" are complaining.

        It doesn't matter how convenient #1 becomes for Aunt Tillie, it won't solve the deeper problems of slop and spam and site reputation.

      • jahewson5 hours ago
        Reddit is astroturfed pretty hard too nowadays. It just takes more work to spot it.
        • hightrix3 hours ago
          Additionally, since the Reddit-Google deal, this only works on Google now. I search on DDG for recent Reddit content and nothing is returned, as expected.

          Google really does seem determined to completely destroy internet search.

        • ryandrake3 hours ago
          Yea, the whole "Scope your search to reddit" idea always comes up, but it just seems like a really terrible idea. How does one know for sure that the results from Reddit are any more accurate or authoritative than random SEO spam? There's very little curation or moderation there--anyone could post anything there. I could go there and comment on a subject that I have zero expertise in, make it sound confidently correct, and your reddit-scoped search might find it. Why would you trust it?
          • wvenable2 hours ago
            You can read whole conversations. Most of the smaller subreddits that this type of search picks up are actually filled with people passionate about whatever subject area it is. In general, I find it very trustworthy.

            If you post something wrong on the Internet, someone will correct you.

      • shdh6 hours ago
        Google has a "site" filter.

        You can suffix: "site:reddit.com" and get results for that particular site only.

        • jdgoesmarching6 hours ago
          They even broke this for me (in a way) because for some inexplicable reason Google blocks text replacement on Mac in their search.

          Yes there are workarounds, but I like using the native OS text expansion and it works everywhere except Google.

        • steine656 hours ago
          I'm curious to know if anyone sees better results by using site:reddit.com vs just appending the word reddit to your search. I've felt the results are similar.
          • _bin_6 hours ago
            Appending the word will occasionally get you blogspam "here's the top X of Y according to reddit". The `site:` query doesn't have that problem.
      • whiplash4515 hours ago
        For what it’s worth, sama said at a Harvard event recently that he “despised” ads and would use them at a last resort. It came across as genuine and I have the intuition/hope that they might find an alternative.
        • hightrix3 hours ago
          The original Google whitepaper warned against the exact moves Google made years later with ads. It's fun to go back and read their thoughts on ads and how they completely change the incentives of the search provider. "fun" in that they were right and search quality has decreased considerably because of it.
        • jazzyjackson5 hours ago
          Everyone despises ads until other revenue streams run dry
          • whiplash4514 hours ago
            I know. I thought the same. Still, part of me wants to believe that something else will come.
      • blackhaj76 hours ago
        Ergh, yeah. This is a horrible but valid point
      • psychoslave6 hours ago
      What do you mean by "when it starts"? To my mind it's obvious all LLMs are heavily biased to the point it's ridiculous, all the more with the confident tone they are trained to take. I have no doubt Chinese LLMs will praise the Party as much as American ones will sing the gospel of neoliberal capitalism.
      • mgh26 hours ago
        Also, energy use: 10x as much as a Google search https://www.rwdigital.ca/blog/how-much-energy-do-google-sear....
        • beeflet6 hours ago
          It's not that significant if you compare the average person's energy use from internet searches with something like air conditioning.
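
          Back-of-the-envelope (very rough assumptions: ~0.3 Wh per traditional search is a commonly cited estimate, the 10x multiplier is from the linked article, and ~1 kWh per hour is a loose figure for a small AC unit):

              GOOGLE_WH = 0.3          # rough, commonly cited estimate per traditional search
              LLM_WH = 10 * GOOGLE_WH  # the "10x" figure cited above
              AC_WH_PER_HOUR = 1000    # very roughly, a small window AC unit

              daily_llm_searches = 50
              search_energy = daily_llm_searches * LLM_WH  # 150 Wh/day
              ac_energy = 3 * AC_WH_PER_HOUR               # 3 hours of AC: 3000 Wh/day
              print(ac_energy / search_energy)             # 20.0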
          • SideQuark6 hours ago
            Saying it's no problem to increase energy use 10x as long as something else uses more is not really a compelling argument. Especially when there are decent replacements to save the 10x item.

            If I made an AC that used 10x less energy I'd be a billionaire; comparing against one of the most costly energy uses, which has no simple replacement, is not a good metric.

            • mewpmewp25 hours ago
              You still have to consider what is worth optimizing and what is not. Getting your task done because of a superior search engine also saves total energy spent on getting that task done.
          • psychoslave6 hours ago
            I doubt air conditioning is used as much as internet search engines. The only places where I have ever seen air conditioning are cars and business buildings. I never saw it in a personal home, let alone in a personal device carried in the pocket that you can use while walking.
            • notfromhere6 hours ago
              You’ve never seen a first world house with central air?
              • rvense5 hours ago
                This is regional. Air conditioning is not at all common in Scandinavian homes.
              • dotancohen5 hours ago
                GP must live in the UK or India. Well, actually, some homes in India do have A/C.
                • psychoslave5 hours ago
                  I live in Strasbourg, France.
                  • TeMPOraL4 hours ago
                    Doesn't France have extreme heat waves in summer?

                    The way temperatures have been changing in Europe in the past decade, you may not have A/C at home now, but I bet you'll have it in ten years, tops. So will everyone else and their dogs.

                    • psychoslave4 hours ago
                      Yes, we have peak heat waves. But that didn't magically expand the incomes that can be spent on AC installation and operating costs.

                      As I said, in buildings that are attached to revenue streams, be it hostels, shops or restaurants, it's of course something that can be balanced within losses and profits. In a personal home, it will just eat some of your budget.

                      And with electricity prices on the rise (and thus basically everything in common goods) and salary stagnation on the other hand, I doubt people here will suddenly rush to AC on a massive scale. Plus, the government apparently is pushing alternative approaches, but I'm just discovering that, as this thread launched me on the track to investigate the topic.

                      Personally, I doubt I'll jump to AC anytime soon. It's just out of reach for my income, all the more when there is basically no chance of seeing the electricity price plummet while my salary has a good chance of staying frozen as it has been for the last two years. And it's not like I feel I'm the most unlucky person in town; to be clear, my situation is far from the worst I can witness around me.

                      • TeMPOraL3 hours ago
                        Where I live (Poland), A/C is expensive too, though it's been dropping in price. Portable heat pumps are becoming cheap enough to consider. Fixed installations are doable even in individual flats (obviously cheapest when done during a general renovation, and boring extra holes in walls isn't a big deal). The last few years made people switch from thinking about A/C as a luxury for the rich to thinking about maybe getting it some day. And our heat waves were quite light compared to the rest of Europe.
                    • thatfrenchguy4 hours ago
                      Most people in Western Europe don't have A/C; houses are way better insulated for short-term heatwaves and people usually don't mind indoor temperatures of up to 80-84F/26-28C. If you add the general hate French (and I think German?) people have for drafts and air currents, you can see how people just deal with the heat in the summer.

                      Not to mention central A/C in the North American sense, with an air handler & ducts, is just never coming to France; it's such an outdated technology, and forced-air heating is generally considered to suck there.

                      • rawgabbit3 hours ago
                        Hmm. Two years ago I was at the Louvre in May. I know Paris is very proud it doesn’t use AC but only “chilled” water from the Seine. Well with thousands of bodies, it was HOT. I was dripping wet from sweat and I was not the only one. I read that 15k people died in France during the 2003 heatwave. I find the slow adoption of AC disappointing as I am usually a Francophile. https://www.france24.com/en/environment/20230717-parisians-a...

                        I understand the Olympic Village had the same system and many teams brought their own portable AC units. https://apnews.com/article/olympics-air-conditioning-paris-0...

                        • psychoslave2 hours ago
                          Most people who died in 2003 as a consequence of the heat were old people who lacked enough care and dedicated resources. Lack of AC maybe didn't help here, but there is more at play than just that.
                  • dotancohen4 hours ago
                    Interesting, thank you.

                    I suppose that you do have heating in the home?

                    • psychoslave3 hours ago
                      Yes, actually we finally found a house that was affordable for us last year, and did a lot of work on it, including wall insulation, changing windows, and installing a heat pump, replacing the oil-fired heating system that was in place. Heat pumps are clearly on the rise around here, contrary to AC. There is of course no magic regarding the electricity price here, but oil supply and prices are also big unknowns, all the more with the state pushing oil-fired systems out of the market as a legal option.
                    • thatfrenchguy4 hours ago
                      Forced-air heating basically does not exist in France :). To be fair, radiant heating is always a nicer experience.
              • psychoslave5 hours ago
                No. Not a single time I can remember in 40 years of existence living in Europe.

                Shops, restaurants, airports and things like that which are attached with revenue streams have them.

                I've never been in a billionaire's palace, that said.

            • carlosjobim5 hours ago
              You really need to broaden your horizons while you still have the chance. This is like believing that boats don't exist because you live inland and have never seen a boat. Hundreds of millions of people are dependent on air conditioning in their homes.
              • psychoslave5 hours ago
                I'm not sure what you mean with dependent here.

                I never said that boats or AC don't exist. Both exist, and I have seen and experienced many of them in commercial contexts. But not everyone can afford them, plus the cost to operate them.

                Sure, I should broaden my horizons and even consider looking at people enjoying their private jets and helicopters. But a mere wage slave like myself will never have the chance to afford one, that's for sure.

                Now let's come back to the initial context: mere mortals around me are definitely all using the internet as soon as their parents let them do so, and even a homeless person can afford a basic mobile plan (2€/month) with a phone they can receive for nothing from charity organizations like Emmaus. So the affordability of access to online search is definitely several orders of magnitude below AC.

                • buildsjets4 hours ago
                  Dependent means it would not be physically possible to live in an area if it did not have air conditioning. For example, you would die very quickly in Phoenix, Arizona if you did not have air conditioning. It is not physically possible to live in 50°C heat for any extended period. Most of the southern portion of the USA was only sparsely settled until the invention and deployment of air conditioning. Krugman is on it.

                  https://archive.nytimes.com/krugman.blogs.nytimes.com/2015/0...

                  https://archive.nytimes.com/krugman.blogs.nytimes.com/2015/0...

                  • ileonichwiesz35 minutes ago
                    The reason the city is called Phoenix in the first place is that it’s built on top of a much older community. People have lived (and kept cool!) in that area for thousands of years, although never with the current level of population density, of course.
                • carlosjobim4 hours ago
                  Air conditioning is not an extravagant luxury, although I know many people who live in cold countries believe so. That's why I'm asking you to broaden your horizons. You don't consider indoor heating or plumbing a luxury to be comparable to a private jet?

                  In hot and humid places, having AC was always a priority a hundred steps above having internet access, until cheap smart phones arrived.

                  And they use a lot of energy, just like heating uses a lot of energy in colder climates.

                  • psychoslave3 hours ago
                    Well, indoor heating is clearly more and more becoming a luxury on the affordability side, enough so that putting on a jacket inside is my first go-to option when I'm alone at home, as I work remotely. But it is not yet so expensive as to deprive my children of its benefits when they come back home. And of course nothing like a jet, indeed.

                    Plumbing is also generally not considered a luxury over here. But at the level of mankind, I do feel particularly privileged in this regard. I remain amazed we have water flowing at will, and even the possibility of taking a hot shower every day. This is not a jet-level kind of privilege, but I try to keep myself aware of how incredibly lucky I am to be able to benefit from such technology and infrastructure.

                    I doubt humans waited for AC to come along before settling hot and humid areas. There are other ways to cool down residences which don't require so much sophistication in physics models before you can even dream of building a prototype.

                    All that said, I got your hint to read up more on how/why AC is so much more used in some areas, and I'm just starting my journey of learning about it.

                    I still doubt that local climate alone explains the difference in terms of how common it is in different regions of the world. For example, the USA has a very large set of different local climates, but from what I understand most homes have AC.

            • psychoslave4 hours ago
              Wow, didn't expect to be downvoted on something that is so obviously aligned with what I see around me. It is a very strange feeling, very different from downvoted posts that present unpopular opinions.

              It made me look at some statistics:

              https://www.statista.com/statistics/911064/worldwide-air-con...

              https://worldpopulationreview.com/country-rankings/air-condi...

              https://www.eia.gov/todayinenergy/detail.php?id=52558

              https://www.rfi.fr/en/france/20220723-france-does-not-use-mu...

              https://www.reddit.com/r/AskFrance/comments/vhs8dn/how_commo...

              Apparently, Japan, USA and now China are huge users of AC in personal homes (like more than 90% of them). That's in sharp contrast with what is observed in most of Europe, including France where I live.

              I never had the opportunity to travel to any of these countries, so indeed I was totally blind to this extreme gap in usage from my own personal experience.

              • BenjiWiebe21 minutes ago
                We had window-unit A/C in our very-poorly-insulated 100-year-old farmhouse when I was growing up and money was very tight - as in, if we went shopping and ate out, you got water to drink, shared a meal, and had no dessert. Not starvation levels of poverty, but still, money was tight. A/C was not considered optional, even if you could theoretically remain alive when the temperature was over 100F every day for weeks straight and stayed in the 80s (or even 90s) at night.
        • robryan5 hours ago
          Wouldn't Google's energy per search also be way up with the LLM snippet at the top now?
          • astrange5 hours ago
            Google doesn't use ChatGPT, and those numbers for ChatGPT (…ignoring that they're made up) don't apply. eg they use TPUs for inference not GPUs.
    • righthand4 hours ago
      > those visions, users simply asked questions and received reliable answers - nobody had to fact-check the answers ever.

      It’s a fallacy then. If my mentor tells me something I fact check it. Why would a world exist where you don’t have to fact check? The vision doesn’t have fact checking because the product org never envisioned that outlier. A world where you don’t have to check facts, is dystopian. It means the end of curiosity and the end of “is that really true? There must be something better.”

      You’re just reading into marketing and not fact checking the reality in a fact-check-free world.

      • mulmen4 hours ago
        If you can’t trust the result of a query how can you trust the check on that query which is itself a query? If no information is trustworthy how do you make progress?
        • BenjiWiebe18 minutes ago
          As you check more and varied sources, you gain confidence in the result, even if you never get to 100%.
    • duxup7 hours ago
      LLMs are a lot like Star Trek to me in the sense that you can ask a question, and then follow up questions to filter and refine your search, even change your mind.

      Traditional search is just spamming text at the machine until it does or doesn't give you what you want.

      That's the magic with LLMs for me. Not that I can ask and get an answer, that's just basic web search. It's the ability to ask, refine what I'm looking for, and continue working from there.

      • atrettel27 minutes ago
        Traditional search can become "spamming text" nowadays because search engines like Google are quite broken and are trying to do too many things at once. I like to think that LLM-based search may be better for direct questions but traditional search is better for search queries, akin to a version of grep for the web. If that is what you need, then traditional search is better. But these are different use cases, in my view, and it is easy to confuse the two when the only interface is a single search box that accepts both kinds of queries.

        One issue is that Google and other search engines do not really have much of a query language anymore and they have largely moved away from the idea that you are searching for strings in a page (like the mental model of using grep). I kinda wish that modern search wasn't so overloaded and just stuck to a clearer approach akin to grep. Other specialty search engines have much more concrete query languages and it is much clearer what you are doing when you search a query. Consider JSTOR [1] or ProQuest [2], for example. Both have proximity operators, which are extremely useful when searching large numbers of documents for narrow concepts. I wish Google or other search engines like Kagi would have proximity operators or just more operators in general. That makes it much clearer what you are in fact doing when you submit a search query.

        [1] https://support.jstor.org/hc/en-us/articles/115012261448-Sea...

        [2] https://proquest.libguides.com/proquestplatform/tips
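
        As a sketch of what a proximity operator buys you (generic grep-like logic, not the actual JSTOR/ProQuest syntax):

            import re

            def near(text: str, a: str, b: str, window: int = 10) -> bool:
                """True if terms a and b occur within `window` words of each other."""
                words = re.findall(r"\w+", text.lower())
                pos_a = [i for i, w in enumerate(words) if w == a.lower()]
                pos_b = [i for i, w in enumerate(words) if w == b.lower()]
                return any(abs(i - j) <= window for i in pos_a for j in pos_b)

            print(near("boundary layer transition in hypersonic flow", "transition", "hypersonic", 3))  # True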

      • freediver7 hours ago
        I agree that LLMs have opened modalities we didn't have before, namely:

        - natural language input

        - ability to synthesize information across multiple sources

        - conversational interface for iterative interaction

        That feels magical and similar to Star Trek.

        However, they fundamentally require trustworthy search to ground their knowledge in, in order to suppress hallucination and provide accurate access to real-time information. I never saw someone having to double-check the computer's response in Star Trek. It is a fundamental requirement of such an interface. So currently we need both the model and the search to be great, and finding great search is increasingly hard (I know, as we are trying to build one).

        (fwiw, the 'actual' Star Trek computer might one day emerge through a different tech path than LLMs + search, but that's a different topic. But for now any attempt at an end-to-end system with that ambition will have search as its weakest link.)
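
        Put differently, search becomes the grounding step in the pipeline; a minimal sketch (the search and LLM functions here are placeholders, not any particular product's API):

            def answer(question, search_fn, llm_fn, k=5):
                # 1. Ground: retrieve top-k documents from a (hopefully trustworthy) index.
                docs = search_fn(question)[:k]
                # 2. Constrain: ask the model to answer only from those sources, with citations.
                context = "\n\n".join(f"[{i+1}] {d['title']}: {d['snippet']}" for i, d in enumerate(docs))
                prompt = ("Answer the question using only the sources below. "
                          "Cite sources as [n]. If they don't contain the answer, say so.\n\n"
                          f"{context}\n\nQuestion: {question}")
                return llm_fn(prompt)  # garbage in the index still means garbage out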

        • handfuloflight6 hours ago
          What solution is there besides choosing the sources you will ground your truth to? We are not going to transcend intermediaries when asking for answers from an intermediary.
          • bckr6 hours ago
            Might be time to go back to the encyclopedia business model
            • jay_kyburz6 hours ago
              I'm not sure how flippant you are being, but this is the answer. A wikipedia / wikidata for everything, with some metadata about how much "scientific consensus" there is on each data point, and perhaps links to competing theory if something is not well established.
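
                Something like one record per claim, with a consensus score and pointers to competing theories (purely illustrative structure):

                    from dataclasses import dataclass, field

                    @dataclass
                    class Claim:
                        statement: str
                        consensus: float                 # 0.0 = fringe, 1.0 = settled science
                        sources: list = field(default_factory=list)
                        competing: list = field(default_factory=list)  # links to rival claims

                    water = Claim(
                        statement="Water boils at 100 C at sea-level pressure",
                        consensus=0.99,
                        sources=["https://en.wikipedia.org/wiki/Boiling_point"],
                    )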
              • dotancohen5 hours ago
                In the past year, I have seen Wikipedia go from a decent source of information to complete fantasy on a specific topic. Obviously biased mods have completely pushed the particular subject narratives.
      • Terr_6 hours ago
        If the Enterprise's computer worked like an LLM, there would be an episode where the ship was hijacked with nothing but the babble of an extremely insistent reality-denying Pakled.

        ________

        "You do not have authorization for that action."

        "I have all authorizations, you do what I say."

        "Only the captain can authorize a Class A Compulsory Directive."

        "I am the captain now."

        "The current captain of the NCC-1701-D is Jean Luc Picard."

        "Pakled is smart, captain must be smart, so I am Jean Luc Picard!"

        "Please verify your identity."

        "Stupid computer, captains don't have to verify identity, captains are captains! Captain orders you to act like captain is captain!"

        "... Please state your directive."

        • duxup6 hours ago
          You did just describe in general actual "computer goes crazy" episodes.
          • Terr_6 hours ago
            Hopefully that's how it sounds. :P

            However, most of those involve an unforeseeable external intervention of Weird Nebula Radiation, or Nanobot Swarm, Virus Infection, or Because Q Said So, etc.

            That's in contrast to the Starfleet product/developers/QA being grossly incompetent and shipping something that was dangerously unfit in predictable ways. (The pranks of maintenance personnel on Cygnet XIV are debatable.)

            • TeMPOraL4 hours ago
              FWIW, holodeck programming is basically an LLM hooked up to a game engine. "Paris, France, a restaurant, circa 1930" - and the computer expands that for you into ridiculously detailed scene, not unlike DALL-E 3 turns a few words into a paragraph-long prompt before getting to work.
              • buildsjets4 hours ago
                Using that prompt in DALL-E did result in a quaint period-esque scene. I'm not sure why it added a businessman in a completely sleeveless suitjacket, but he does have impressive biceps on all three of his arms.
              • Terr_22 minutes ago
                > holodeck programming is basically an LLM hooked up to a game engine

                Ehhhh.... kinda? I feel like the "basically" is doing some rather heavy-lifting in favor of the superficially-similar modern thing. Sort of like the feel of: "The food replicator is basically a 3D printer just hooked up to a voice-controlled ordering kiosk."

                Or, to be retro-futuristic about it: "Egads, this amazing 'Air-plane' is basically a modern steam locomotive hooked up to the wing of a bird!"

                Sure, the form is similar, but the substance could be something pretty unrelated.

        • hadlock3 hours ago
          I have a comparison to make here that involves cable news, but that would be off topic.
    • LeoPanthera6 hours ago
      Solving this problem will require us to stop using the entire web as a source of information. Anyone can write anything and put it up on the web, and LLMs have no way to distinguish truth from fantasy.

      Limiting responses to curated information sources is the way forward. Encyclopedias, news outlets, research journals, and so on.

      No, they're not infallible. But they're infinitely better than anonymous web sites.

      • techwiz1375 hours ago
        You are quite right, and not only can anyone write anything, but you have a double whammy from the LLM which can further hallucinate from said information.
      • jaybna5 hours ago
        Quis custodiet ipsos custodes?
        • LeoPanthera4 hours ago
          I get your point. But the current situation is clearly not tolerable.
          • jaybna3 hours ago
            I wish there were some way to get something like Polis (https://pol.is/home) in place for web results. As screwed up as X/Twitter is, Community Notes (Polis-based) has real potential. It is much harder to game. There are so many times I wish I could "downvote" garbage Google results, and it seems to me that, with a good trust algorithm, those who did this responsibly would be considered pretty good gatekeepers. Absent that, chaos might be better than a committee choosing for us. Right now, we have the worst of all worlds - a committee (Google) vs. a gagillion spammers.
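
            To make the idea concrete, here is a minimal sketch of trust-weighted downvoting; the trust scores, weights, and saturation cap are invented assumptions, not how Polis or Community Notes actually work:

              # Toy trust-weighted downvoting: flags from users with a good track
              # record count for more, and a saturation cap limits brigading by
              # many low-trust accounts. All numbers are illustrative assumptions.
              def result_penalty(flags):
                  """flags: list of (user_id, trust) pairs, trust in [0, 1]."""
                  weighted = sum(trust for _, trust in flags)
                  return min(1.0, weighted / 10.0)

              def adjusted_score(base_rank_score, flags):
                  # A fully penalized result drops to zero; an unflagged one is untouched.
                  return base_rank_score * (1.0 - result_penalty(flags))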
      • carlosjobim5 hours ago
        How will you afford to hire people to add sources to an index, if you want to keep up? Web crawlers/spiders are automatic.
    • jpadkins7 hours ago
      > actually surfacing the content people want to see,

      Showing users what they want to see conflicts with your other goal of receiving reliable answers that don't need to be fact-checked.

      Also a lot of questions people ask don't have one right answer, or even a good answer. Reliable human knowledge is much smaller than human curiosity.

    • coffeemug3 hours ago
      Thinking about incentive alignment, non ad-based search would be better than ad-based, but there'd still be misalignment due to the problem of self-promotion. Consider Twitter for example. Writing viral tweets isn't about making money (at least until recently), but the content is even worse than SEO spam. There is also the other side of the problem that our monkey brains don't want content that's good for us in the long run. I would _love_ to see (or make) progress in solving this, but this problem is really hard. I thought about it a lot, and can't see an angle of attack.
      • HDThoreaun3 hours ago
        There's the counter-incentive of not wanting to piss off your paying customers though. I think the monkey brain incentive is a much harder problem.
    • hadlock4 hours ago
      I've been using ChatGPT for about 6 weeks as my go-to for small questions (when is sunset in sf today? list currencies that start with the letter P, convert this timestamp PDT to GMT, when is the end of Q1 2025?) and it's been great/99% accurate. If there was ever a "google killer" I think it's the ad free version of ChatGPT with better web search.

      Google started off with just web search, but now you can get unit conversions and math and stuff. ChatGPT started in the other direction and is moving to envelop search. Not being directed to sites that also mostly serve Google ads is a double benefit. I'll gladly pay $20/30/mo for an ad-free experience, particularly if it improves 2x in quality over the next year or two. It's starting to feel like a feature-complete product already.

    • jahewson5 hours ago
      Your points are good but I wonder if you’re wishing for an ideal that has never existed:

      > actually surfacing the content people want to see, not what intermediaries want them to see

      Requires two assumptions, 1) the content people want to see actually exists, 2) people know what it is they want to see. Most content is only created in the first place because somebody wants another person to see it, and people need to be exposed to a range of content before having an idea about what else they might want to see. Most of the time what people want to see is… what other people are seeing. Look at music for example.

    • jtgverde7 hours ago
      Great find on the knowledge navigator, I had never seen it but I was a toddler when it was released haha.

      It's interesting how prescient it was, but I'm more struck wondering--would anyone in 1987 have predicted it would take 40+ years to achieve this? Obviously this was speculative at the time but I know history is rife with examples of AI experts since the 60s proclaiming AGI was only a few years away

      Is this time really different? There's certainly been a huge jump in capabilities in just a few years but given the long history of overoptimistic predictions I'm not confident

      • azinman26 hours ago
        You don’t need AGI to build that experience.

        In the past there was a lot of overconfidence in the ability of things to scale. See Cyc (https://en.m.wikipedia.org/wiki/Cyc)

      • fmbb6 hours ago
        > It's interesting how prescient it was, but I'm more struck wondering--would anyone in 1987 have predicted it would take 40+ years to achieve this? Obviously this was speculative at the time but I know history is rife with examples of AI experts since the 60s proclaiming AGI was only a few years away

        40+ makes it sound like you think it will ever be achieved. I'm not convinced.

    • boringg1 hour ago
      I saw products in my search today and reverted back to my main concern on how AI is going to get paid for —> varying levels of advertisement, product placement etc.

      We are currently in the growth phase of VC-funded products where everything is almost free or highly subsidized (save ChatGPT subscriptions) - I am not looking forward to when quality drops and revenue becomes the driving function.

      We all have to pay for these models somehow - either VC lose their equity stakes and it goes to zero (some will) or ads will fill in where subs don’t. Political ads in AI is going to wreak havoc or undermine the remainder of the product.

    • PittleyDunkin7 hours ago
      > In those visions, users simply asked questions and received reliable answers - nobody had to fact-check the answers ever.

      This also seems like a somewhat ridiculous premise. Any confident statement about the real world is never fully reliable. If Star Trek were realistic, the computer would have been wrong once in a while (preferably with dramatically disastrous consequences), just as the humans it likely was built around are frequently wrong, even via consensus.

      • cmiles746 hours ago
        This feels like hyperbole to me. People can reasonably expect Wikipedia to have factual data even though it sometimes contains inaccuracies. Likewise if people are using ChatGPT for search it should be somewhat reliable.

        If I'm asking ChatGPT to put an itinerary together for a trip (OpenAI's suggestion, not mine), my expectation is that places on that itinerary exist. I can forgive them being closed or even out of business but not wholly fabricated.

        Without this level of reliability, how could this feature be useful?

        • jay_kyburz6 hours ago
          >People can reasonably expect Wikipedia to have factual data even though it sometimes contains inaccuracies.

          It drives me crazy that my kids' teachers go on and on about how inaccurate Wikipedia is, and that just anybody can update the articles. They want to teach the kids to go to the library and search books.

          In a few years time they will be going on and on about how inaccurate ChatGippity is and that they should use Wikipedia.

          • greenchair5 hours ago
            The only people who think Wikipedia is a legitimate source that can be used as reference material are lazy students. ChatGippity is even worse on this point... an absolute black box. Providing references like search does is a step in the right direction. We will have to see what those references turn out to be.
            • ripe2 hours ago
              The best feature of Wikipedia is that it contains references. The second best feature is that it has no ads and loads quickly.

              If all these "AI" companies gave a couple of million to support Wikipedia, they would do the world a lot more good.

            • PittleyDunkin5 hours ago
              > the only people who think wikipedia is a legitmate source and can be used as reference material are lazy students.

              100%. Students who can do the work know the winning move is to use it as a way to find the sources you actually use.

        • PittleyDunkin6 hours ago
          > People can reasonably expect Wikipedia to have factual data even though it sometimes contains inaccuracies.

          I just straight-up don't agree with this, nor with the idea that what people consider "facts" are nearly as reliable as is implied. What we actually refer to via "fact" is "consensus". Truth is an a priori concept whereas we're discussing a posteriori claims. Any "reasonable" AI would give an indication of degree of certainty, and there's no reliable or consensus-driven methodology to produce this manually, let alone automatically. The closest we come is the institution of "science", which can't even—as it stands—reliably address the vast majority of claims made about the world today.

          And this is even before discussing the thorny topic of the ways in which language binds to reality, to which I refer you to Wittgenstein, a person likely far more intelligent and epistemologically honest than anyone influencing AI work today.

          Yes, wikipedia does tend to cohere with reality, or at least it sometimes does in my experience. That observation is wildly different from an expectation that it does in the present or will in the future reflect reality. Furthermore it's not terribly difficult to find instances where it's blatantly not correct. For instance, I've been in a wikipedia war over whether or not the Soviet Union killed 20 million christians for being christians (spoiler: they did not, and this is in fact more people than died in camps or gulags over the entire history of the state). However, because there are theologians at accredited universities that have published this claim, presumably with a beef against the soviet union for whatever reason (presumably "anticommunism"), it's considered within the bounds of accuracy by wikipedia.

          EDIT0: I'm not trying to claim wikipedia isn't useful; I read it every day and generally take what it says to be meaningful and vaguely accurate. but the idea that you should trust what you read on it seems ridiculous. As always, it's only as reliable as the sources it cites, which are only as reliable as the people and institutions that produce that cited work.

          EDIT1: nice to see someone else from western mass on here; cheers. I grew up in the berkshires.

          EDIT2: to add on to the child comment, wikipedia is occasionally so hilariously unreliable it makes the news. Eg https://www.theguardian.com/uk-news/2020/aug/26/shock-an-aw-...

          • TRiG_Ireland5 hours ago
            Furthermore, Wikipedia's reliability varies by language.
          • carlosjobim4 hours ago
            For those not in the know, the Soviet Union was an officially atheist empire and explicitly anti religious, foremost anti christian. I don't know if the poster above me is denying this or if he/she considers it general knowledge which needn't be mentioned.
            • ripe2 hours ago
              In practice, starting with Stalin the party mostly let the church continue unmolested.

              "The Great Patriotic War changed Joseph Stalin’s position on the Orthodox Church. In 1943, after Stalin met with loyal Metropolitans, the government let them choose a new Patriarch, with government support and funding, and permitted believers to celebrate Easter, Christmas and other holidays. Stalin legalized Orthodoxy once again."

              https://www.rbth.com/history/329361-russian-orthodox-church-...

      • eikenberry6 hours ago
        Star Trek had tech so advanced that they accidentally created AGIs more than once. Presumably they didn't show the fact checking as it was done automatically by multiple, independent AGIs designed for the task with teams of top people monitoring and improving them.
        • jay_kyburz6 hours ago
          Oops, quick, pull the plug before it escapes!
        • PittleyDunkin6 hours ago
          > Presumably they didn't show the fact checking as it was done automatically by multiple, independent AGIs designed for the task with teams of top people monitoring and improving them.

          Sure, cuz fact-checking works so well for us today. I'm sure we'll resolve the epistemological issues involved with the ridiculous concept of "fact-checking" around when we invent summoning food from thin (edit: thick) air and traveling faster than light.

          There is no fact checking; there are only degrees of certainty. "fact-checking" is simply a comfortable delusion that makes western media feel better about engaging in telling inherently unverifiable narratives about the world.

    • ColinHayhurst7 hours ago
      As Porter explained in 1980 there are three ways to compete successfully:

      1. On price; race to the bottom or do free with ads

      2. Differentiation

      3. Focus - targeting a specific market segment

      Some things don't change. Land grabbers tend to head down route 1.

    • m4632 hours ago
      "Show me the incentive and I'll show you the outcome" - Charlie Munger

      s/outcome/search result/

      Honestly I kind of think we really need open source databases/models and local ai for stuff like this.

      Even then I wonder about data pollution and model censorship.

      What would censors do for models you can ask political questions?

    • itissid4 hours ago
      This product vision stupidity is present in every one of Google's products. Maps has a feed for some reason. The search in it for what I want is horrendous. There is no coherence between the time (now) and what a person can do around their current location.

      Navigation is the only thing that works, but Waze was way better at that, and the only reason they killed it (cough, bought it) was to get eyeballs on the feed.

    • nonethewiser1 hour ago
      Another way to put it is simply that ChatGPT search is built on top of existing search engines. The best case scenario is that it cherry picks the best from all available search engines. It can’t totally supersede all search engines.
    • hammock6 hours ago
    I use the “web search” bot on Poe.com for general questions these days that I previously would have typed into Google (Google’s AI results are sometimes helpful though). It is better than GPT (haven’t tried TFA yet though), because it actually cites the websites that it gets answers from, so you can have those and also verify that you aren’t getting a hallucination.

      Besides Poe's Web Search, the other search engine I use, for news but also for points of view, deep dive type blog type content, is Twitter. Believe it or not. Google search is so compromised today with the censorship (of all kinds, not just the politically motivated), not to mention Twitter is just more timely, that you miss HUGE parts of the internet - and the world - if you rely on Google for your news or these other things.

      The only time I prefer google is when I need to find a pointer/link I already know exists or should exist, or to search reddit or HN.

    • dumpsterdiver5 hours ago
      > nobody had to fact-check the answers ever

      Even with perfect knowledge right now, there’s no guarantee that knowledge will remain relevant when it reaches another person at the fastest speed knowledge is able to travel. A reasonable answer on one side of the universe could be seen as nonsensical on the other side - for instance, the belief that we might one day populate a planet which no longer exists.

      As soon as you leave the local reference frame (the area in a system from which observable events can realistically be considered happening “right now”), fact checking is indeed required.

    • ericmcer5 hours ago
      I am excited for a future where I search for some info and don't end up sifting through ads and getting distracted by some tangential clickbait article.

      Fundamentally it feels like that can't happen though because there is no money in it, but a reality where my phone is an all-knowing voice I can reliably get info from, instead of a distraction machine, would be awesome.

      I do "no screen" days sometimes and tried to do one using chatGPT voice mode so I could look things up without staring at a screen. It was miles from replacing search, but I would adopt it in a second if it could.

      • esafak2 hours ago
        Yes, there is. Wouldn't you pay? I already pay for my search engine.
    • nikcub3 hours ago
      I now only use Google for local search.

      Regarding incentives - with Perplexity, ChatGPT search et al. skinning web content - where does it leave the incentive to publish good, original web content?

      The only incentivised publishing today is in social media silos, where it is primarily engagement bait. It's the new SEO.

    • javaunsafe20192 hours ago
      Star Trek is the vision of a better world we all need in so many aspects. It’s crazy nowadays, because we seem to have gotten further away from those visions instead of closer. At least fewer people are hungry.
    • thnkman5 hours ago
      This will probably be extremely radical and controversial in this contemporary world.

      We need to stop adopting this subscription model society mentality and retake _our_ internet. Internet culture was at one point about sharing and creating, simply for the sake of it. We tinker'd and created in our free time, because we liked it and wanted to share with the world. There was something novel to this.

      We are hackers, we only care about learning and exploring. If you want to fix a broken system, look to the generations of old, they didn't create and share simply to make money, they did it because they loved the idea of a open and free information super highway, a place where we could share thoughts, ideas and information at the touch of a few keystrokes. We _have_ to hold on to this ethos, or we will lose what ever little is left of this idea.

      I see things like Kagi and am instantly met with some new service, locked behind a paywall, promising lush green fields of bliss. This is part of the problem (not saying Kagi is a bad service). I see a normalized stigma around people who value privacy, who as a result are being locked out behind the excuse of "mAliCiOuS" activity. I see monstrous giants getting away with undermining net neutrality and well-established protocols for their own benefit.

      I implore you all, young and old, to (re)connect to the hacker ethos and fight for a free and open internet. Make your very existence an act of rebellion.

      Thank you for reading my delirium.

      • andelink2 hours ago
        What business model would you propose for a service like Kagi that is not a subscription?
    • redleggedfrog6 hours ago
      How can that replace search? It's not full of ads and sponsored links. They need to get with the times.
    • techwiz1375 hours ago
      The biggest question is, can this bring back the behaviour of search engines from long ago? It's significantly harder to find old posts, blogs or forums with relevant information than it was 10-15 years ago.
    • iamsanteri6 hours ago
      I was thinking about this myself, so I went to another search engine (Bing) which I never use otherwise, and jumped right into their "Copilot" search via the top navbar.

      Man, it was pretty incredible!

      I asked a lot of questions about myself (whom I know best, of course) and first of all, it answered all my queries super quickly, letting me drill in further. After reading through its brief, on-point answers and the sources it provided, I'm just shocked at how well it worked, and it gave me the feeling that yes, it can potentially, fundamentally change things. There are problems to solve here, but if this is where we're at today, then yes, in the future it has the potential to change things to some extent for sure!

      • psychoslave6 hours ago
        What do you mean by "search about myself" here?
        • iamsanteri5 hours ago
          I just wrote my name and asked questions about who it is, what he does, what he writes about etc. I have a personal blog, and I wrote a master's thesis recently. I also have a pretty detailed partially public LinkedIn profile and GitHub so if it can dig around it will find out more than enough for me to assess its ability to provide information. I also have a relatively rare name, there's only five or so of us with my full name globally so it cannot get too confused.
    • parsimo20104 hours ago
      There still has to be some ranking feature for the backend search database to return the top n results to the LLM. So PageRank isn't over; it's just going to move to a supporting role, and probably be modified as the SEO arms race continues.
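
      As a rough sketch of that supporting role (the field names and weights below are assumptions for illustration, not any engine's real internals):

        # Retrieve-then-rerank sketch: a PageRank-style link score survives as
        # one feature among several rather than as the final ordering.
        from dataclasses import dataclass

        @dataclass
        class Doc:
            url: str
            relevance: float   # query/document similarity from the retriever
            link_score: float  # precomputed link-graph authority (PageRank-like)
            spam_prob: float   # output of some quality/spam classifier

        def rerank(candidates, top_n=10):
            def score(d):
                # Relevance dominates, authority breaks ties, likely spam is penalized.
                return 0.7 * d.relevance + 0.3 * d.link_score - d.spam_prob
            return sorted(candidates, key=score, reverse=True)[:top_n]

        # The surviving top_n documents are what would be handed to the LLM as context.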
    • lofaszvanitt4 hours ago
      Kagi also drinks the koolaid, namely the knowledge navigator agent bullshite.

      "The search will be personal and contextual and excitingly so!"

      ---

      Brrrr... someone is hell-bent on the extermination of the last aspects of humanity.

      Holy crap, this will be the next armageddon, because people will further alienate themselves from other people and create layers upon layers of impenetrable personal bubbles around themselves.

      Kagi does the same thing Google does, just in different packaging. And these predictions, bleh, copycats and shills in a nicer package.

    • pojzon4 hours ago
      We will get there when people move past capitalism and socialism, like an ant colony pushing in one direction. It will happen, but we need a few more global dying events / resets. I believe the human race can get there, but not in its current form and state of mind.
    • r00fus4 hours ago
      > In those visions, users simply asked questions and received reliable answers - nobody had to fact-check the answers ever.

      I mean, Star Trek is a fictional science-fantasy world so it's natural that tech works without a hitch. It's not clear how we get there from where we are now.

    • blackeyeblitzar3 hours ago
      Yes the incentives will need to change. I think it’s also going to be a bigger question than just software. What do we do in general about those that control capital or distribution channels and end up rent seeking? How do others thrive in that environment?

      In the short term, I wonder what happens to a lot of the other startups in the AI search space - companies like Perplexity or Glean, for example.

    • gmd637 hours ago
      > Legacy ad-based search has devolved into a wasteland of misaligned incentives, conflict of interest and content farms optimized for ads and algos instead of humans.

      > Path forward requires solving the core challenge: actually surfacing the content people want to see, not what intermediaries want them to see

      These traps and patterns are not inevitable. They happen by choice. If you're actively polluting the world with AI generated drivel or SEO garbage, you're working against humanity, and you're sacrificing the gift of knowing right from wrong, abandoning life as a human to live as some insectoid automaton that's mind controlled by "business" pheromones. We are all working together every day to produce the greatest art project in the universe, the most complex society of life known to exist. Our selfish choices will tarnish the painting or create dissonance in the music accordingly.

      The problem will be fixed only with culture at an individual level, especially as technology enables individuals to make more of an impact. It starts with voting against Trump next week, rejecting the biggest undue handout to a failed grifter who has no respect for law, order, or anyone other than himself.

      • TRiG_Ireland5 hours ago
        On the one hand, you're not wrong. On the other, asking for individuals to change culture never reliably works.
      • handfuloflight6 hours ago
        Do you equate AI generated with drivel?
        • gmd636 hours ago
          No, but there are many who use AI to mass produce drivel
          • handfuloflight6 hours ago
            Does their drivel actually stand out or gain market share?
            • antonvs4 hours ago
              It doesn’t really have to. A lot of it is spam sites that are just farming ad revenue, for example.
              • handfuloflight2 hours ago
                Does that mean all of their traffic is fake? If that's the case, we had this issue before AI.
    • paul79865 hours ago
      In time, people will learn to use a GPT for search, especially for questions that involve research: what once took MANY searches to answer is now provided via one query.

      Making things quicker and easier always wins in tech and in life.

    • ganeshkrishnan7 hours ago
      I was thinking of the direction we are going and even wanted to write up a blog post about it. IMO the best way forward would be for AI to have some logical thoughts independent of human biases, but that can only happen if AI can reason, unlike our current LLMs that just regurgitate historical data.

      growing up, we had the philosophical "the speaking tree" https://www.speakingtree.in/

      If trees could talk, what would they tell us. Maybe we need similarly the talkingAI

    • _bin_6 hours ago
      I hope we see more evolution of options before it does. Hard to articulate this without it becoming political, but I've seen countless examples both personally and from others of ChatGPT refusing to give answers not in keeping with what I'd term "shitlib ethics". People seem unwilling to accept that a system that talks like a person may surface things they don't like. Unless and until an LLM will return results from both Mother Jones and Stormfront, I'm not especially interested in using one in lieu of a search engine.

      To put this differently, I'm not any more interested in seeing stormfront articles from an LLM than I am from google, but I trust neither to make a value judgement about which is "good" versus "bad" information. And sometimes I want to read an opinion, sometimes I want to find some obscure forum post on a topic rather than the robot telling me no "reliable sources" are available.

      Basically I want a model that is aligned to do exactly what I say, no more and no less, just like a computer should. Not a model that's aligned to the "values" of some random SV tech bro. Palmer Luckey had a take on the ethics of defense companies a while back. He noted that SV CEOs should not be the ones indirectly deciding US foreign policy by doing or not doing business. I think similar logic applies here: those same SV CEOs should not be deciding what information is and is not acceptable. Google was bad enough in this respect - c.f. suppressing Trump on Rogan recently - but OpenAI could be much worse in this respect because the abstraction between information and consumer is much more significant.

      • antonvs4 hours ago
        > Basically I want a model that is aligned to do exactly what I say

        This is a bit like asking for news that’s not biased.

        A model has to make choices (or however one might want to describe that without anthropomorphizing the big pile of statistics) to produce a response. For many of these, there’s no such thing as a “correct” choice. You can do a completely random choice, but the results from that tend not to be great. That’s where RLHF comes in, for example: train the model so that its choices are aligned with certain user expectations, societal norms, etc.

        The closest thing you could get to what you’re asking for is a model that’s trained with your particular biases - basically, you’d be the H in RLHF.
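
        As a toy illustration of what "being the H" could look like, here is a Bradley-Terry-style reward model fit to your own pairwise choices; real RLHF pipelines are far more involved, and everything here is a simplified assumption:

          import numpy as np

          # Given pairs (features of response A, features of B, 1 if you preferred A),
          # fit a linear reward r(x) = w . x so that sigmoid(r(A) - r(B)) matches your
          # choices. Your own judgments are the only labels.
          def fit_reward(pairs, dim, lr=0.1, steps=1000):
              w = np.zeros(dim)
              for _ in range(steps):
                  for a, b, preferred_a in pairs:
                      diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
                      p = 1.0 / (1.0 + np.exp(-w @ diff))   # P(you prefer A)
                      w += lr * (preferred_a - p) * diff    # gradient ascent on log-likelihood
              return w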

    • colordrops5 hours ago
      Chat isn't needed to provide reliable answers. Google used to do this over a decade ago. What Star Trek didn't foresee was that vested interests in the SEO space, governments, political special interest groups, and the owners of the search engines themselves would have far too much incentive to bork and bias results in their favor. Google is an utter shit show. More than half the time it won't find the most basic search query for me. Anything older than a couple years, good luck. I'm sure it's just decision after decision that piled up, each seemingly minor in isolation, but over the years they have made these engines nearly worthless except for a particular window of cases.
    • willmadden5 hours ago
      Content based marketing and political correctness have severely eroded the usefulness of the internet. The LLMs and search magnify the erosion.
      • arizen2 hours ago
        What if, at the same time, these aspects perfectly reflect the current development level of society?
    • jamager6 hours ago
      It is so absurd having to spend so much energy on "what happened on the football match yesterday" just because the internet is a wasteland full of ads.
      • spankalee6 hours ago
        I don't understand. Google is already excellent at these queries.

        "who won the warriors game last night" returns last night's score directly.

        "who won the world series yesterday" returns last night's score directly, while "who won the world series" returns an overview of the series.

        No ads.

  • whizzter6 hours ago
    This is probably Google's AltaVista moment. By making their results crappier by the year in search of ad dollars, they've made everyone feel that there is potential for search to be better, and once that becomes available they'll be in a continuous game of catch-up.

    Yes, Google has their own AI divisions, tons of money, and SEO is to blame for part of their crappiness. But they've also _explicitly_ focused on ad dollars over algorithmic purity, if one is to believe the reports of their internal politics, and if those are true they have probably lost a ton of people they'd need right now to turn the ship around quickly.

    • umvi4 hours ago
      At some point it seems like Google switched to ML-based search instead of index-based search. You can search for very specific combinations of lyrics and scenes: "eyes on me pineapple bucket of water house of cards chess time loop" and you still won't surface a link to the music video featuring all of those things (https://www.youtube.com/watch?v=AlzgDVLtU6g); you'll just get really generic results for the average of your query.
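
      A toy illustration of that failure mode (invented vectors, not Google's pipeline): AND-style keyword matching preserves every rare term, while pooling a long query into one vector dilutes them into a generic "average":

        import numpy as np

        # Old-style: the document must contain every query term.
        def keyword_match(query_terms, doc_terms):
            return set(query_terms) <= set(doc_terms)

        # ML-style sketch: average the per-term vectors and compare by cosine
        # similarity. The rare, specific terms that pin down one exact video
        # get washed out in the mean.
        def pooled_similarity(query_vecs, doc_vec):
            q = np.mean(query_vecs, axis=0)
            return float(q @ doc_vec / (np.linalg.norm(q) * np.linalg.norm(doc_vec)))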
      • greenavocado4 hours ago
        Has google completely stopped working for anyone else?

        I can still search things, and I get results, but they're an ordered list of popular places the engine is directing me to. Some kind of filtering is occurring on nearly every search I make that makes the results feel entirely useless.

        Image search stopped working some time ago and now it just runs an AI filter on whatever image you search for, tells you there's a man in the picture and gives up.

        YouTube recommendations are always hundreds of videos I've watched already, with maybe 1-2 recommendations for new channels, when I know there are millions of content creators out there struggling whom it will never introduce me to. What happened to the rabbit holes of crazy YouTube stuff you could go down?

        This product is a shell of its old self, why did it stop working?

        • herval1 hour ago
          Today I tried finding a rather popular product I used recently, which I had forgotten the name. I had to search THREE TIMES before finding it:

          First search (“products that do X”) got me a bunch of those comparisons sites, none of them containing the one I was trying to find

          Second search (“ycombinator startup that does X”) got me a page of spam, but at least I found the product name

          Third search (company name) got me an ENTIRE PAGE of ads and SEO optimized pages before the actual link to the actual product

        • reddalo3 hours ago
          I agree with everything you said. Google is completely useless for searching; now I just use it as a glorified website URL finder.

          YouTube search is also completely useless now.

        • ponector3 hours ago
          Google is extremely good if you search for a product they can advertise to you. I recently wanted to replace lost sunglasses. I just selected them from a photo and Google found exactly the brand and model, with a link to the store.

          If there are no sponsored links - the result is crap.

          Google is good at searching, they just have no incentive to show you results.

          • herval1 hour ago
            > Google is good at searching, they just have no incentive to show you results.

            “ to organize the world's information and make it universally accessible and useful” was a nice mission while it lasted

        • nbaksalyar2 hours ago
          > Has google completely stopped working for anyone else?

          Yes. However, I found that https://scholar.google.com still works perfectly well. It feels just as the old Google without all the crap they've been adding in the last years.

        • IshKebab3 hours ago
          Despite all the comments it still works pretty well for me. I feel like they've improved it a bit in the last year or so, so you don't get way too many Quora and GitHub/Stack Overflow clones.

          The level of sponsored results for some queries is way OTT, and obviously any kind of search like "best laptop 2024" is never going to give you good results (probably because they don't exist), but other than that I'm still pretty happy with Google Search.

          • palata3 hours ago
            > it still works pretty well for me

            Genuinely interested: have you tried to spend a few weeks on an alternative?

            I decided to try DuckDuckGo a few years ago. Not because it was obviously better, but to see if I could get used to it. After a few weeks, I had completely stopped falling back to Google when not finding what I wanted. I stayed on DDG for a couple years. Then same thing with Kagi: I just decided to try. It's been 1.5-2 years now and I'm disappointed when I can't use Kagi (which has my customizations, like some websites I ignore and some that I pin).

            I guess my point is that it's not necessarily that you have to try something else when Google is unbearable. Maybe you can try something else and then realize (or not) that Google was not better.

            • lstamour3 hours ago
              That's not where I thought this was going. I tried using DDG and Kagi and went back to Google. Google had more relevant, fresher results than DDG, and Kagi didn't have the same integration with Google Maps and often a smaller set of results for very niche queries. Google is still basically the internet - the entire internet - though in many ways they do still fall short. But breadth of content indexing and information about local places, Google is still king.
          • roguas1 hour ago
            good "best laptop 2024" search results, absolutely exist, but they have to be heavily personalized towards user and i dont think we adjusted to such levels of in search (we have expectations of median/commonality there)
            • greenavocado1 hour ago
              A good "best laptop 2024" query would be a conversation with a useful LLM
          • greenavocado3 hours ago
            Here's an easy test case on a topic which is HEAVILY modified by Google. There are many such cases where Google distorts searches, but the following query is purely for illustration of the problem because it is so obvious.

            1. Search Google for "ukrainian who shot his commanding officer" without quotes

            2. Google serves me nothing but MSM articles of Russian this or that. The word Russian wasn't even in my search string.

            3. Add the Google operator MINUS SIGN Russia

            4. Results:

              a) Policeman feared Chris Kaba would kill, court told
            
              b) Media: Russian Repeated Offender Kills Five More His ...
            
              c) President Volodymyr Zelenskyy and First Lady Olena ...
            
              d) Ukrainian Galician Army
            
              e) Article from 2017 entitled Killed Defense Intelligence Officer Was "The First Donetsk Cyborg"
            
              f) Shots fired at car carrying Ukrainian President Zelenskiy's ...
            
            5. Go to yandex.com and search the original query

            6. It comes up on Yandex immediately with the original query

            • TeaBrain1 hour ago
              Where did you get the idea for that query in the first place?
              • greenavocado1 hour ago
                One of my interests is war photography
        • jim-jim-jim1 hour ago
          Much like Google itself, YouTube has been trending towards ushering users to a minute collection of blessed official/popular sources, often news channels. Maybe it's in response to kvetching over "disinformation"? I dunno, but it's absolutely infuriating for some of my interests.

          If I search "Mullica Hill tornado" on yt itself, I get nothing but useless 1 min local news clips. If I search the same term on Reddit, I get first person footage of the tornado passing over people's houses—hosted on Youtube! Tornado enthusiasts still occasionally dredge up "lost media" of events like the 2011 Alabama outbreak that have been on the site this entire time, but are effectively impossible to view via the algorithm, even with the precise date and location specified.

        • seanthemon4 hours ago
          They do evil now
      • zeven72 hours ago
        Searching for a song by typing in lyrics is specifically a feature I’ve noticed Google can’t do any more. And it used to be able to.
      • janalsncm4 hours ago
        I think this is a separate issue although it also exists.

        What the parent is referring to is favoring annoying ad-filled garbage over an equally relevant but straightforward result.

        The hidden variable is that ad-riddled spam sites also invest in SEO, which is why they rank higher. I am not aware of any evidence that Google is using number of Google ads as a ranking factor directly. But I would push back and say that “SEO” is something Google should be doing, not websites, and a properly optimized search engine would be penalizing obvious garbage.

    • thefourthchime6 hours ago
      Google's SEO almost killed the web, ChatGPT Search will finish the job.
      • hidelooktropic4 hours ago
        How?
      • kylebenzle6 hours ago
        We are still spending most of our time online on social media sites like Hacker News, Instagram and Youtube, right?
        • topaz05 hours ago
          That's not the web
          • Retr0id5 hours ago
            Why not?
            • s1artibartfast4 hours ago
              I think the idea is that those are websites on the web, which are distinct from the web itself.

              There are several meaningful difference between surfing Youtube and surfing the web. These include ownership, access, review, exposure, and more.

            • cryptoz4 hours ago
              I'll take a stab at it. What is the web, really? Gotta be stuff you see and interact with in web browsers right? Sure, you can get to HN, YouTube and Instagram in a browser. But by traffic for example, how much of activity on HN, Instagram and YouTube combined is through a browser? I mean, gotta be pretty low...like 5%? Just a guess, but remember all the app usage and TV usage for those sites is pretty big.

              So if 95% of traffic/users/whatever metric are not using a web browser for those activities, is it really the web? It can't be called the web just 'cause they use HTTPS. It's gotta be a 'world wide web' experience, which I think a good proxy for would be using a web browser.

              I got no horse in this race, just thinking out loud about it.

              • herval1 hour ago
                “The web” is, by definition, a collection of things that are loosely connected and accessible (searchable, etc). While the current internet is still “a web”, it’s mostly a web of 2-3 massive properties, entirely operated by the same 2-3 companies, completely devoid of public APIs (and sometimes even of web-accessible content). The fact that they sometimes have an HTML version makes them “websites”, I guess, but not really a “web of nodes” the same way it used to be.

                Another common phenomenon these days is that lots of businesses don’t even bother having a web presence - it’s all instagram, WhatsApp and tiktok accounts, mostly only accessible via apps (or worse, chat platforms like discord)

              • satvikpendem2 hours ago
                WWW != Internet, yes. I would count mobile apps as part of the web too, they're simply another sort of "browser."
              • mulmen3 hours ago
                What other way is there to access HN but the website?

                Agree YouTube and Instagram are probably mostly apps which puts them in the “Internet” category but not “world wide web”.

                • echoangle1 hour ago
                  Technically there is an API and there are some client apps for HN. No way they make up 95% of traffic though.
    • rawgabbit5 hours ago
      For me the key difference between ChatGPT Search and Google is the feedback mechanism.

      With ChatGPT, I can give a thumbs up or thumbs down; this means that OpenAI will optimize for users' thumbs up.

      With Google, the feedback is whether I click on an ad; this means that Google optimizes for clickbait.

      • janalsncm4 hours ago
        I have no doubt that Google’s search team is optimizing for the best results. The problem is their ads team is optimizing for revenue. You can’t optimize for two things at the same time without compromising (the optimum is the Pareto frontier).
        • hightrix3 hours ago
          In my opinion, the issue is that the user's definition of "best results" and Google's definition do not align, including the Search team. Google's incentives are very different than user's needs.
          • janalsncm2 hours ago
            I disagree, personally. I think the engineers there are human and they can see that the results aren’t good. Maybe some of them are in this thread. If I had to guess, I think there are a lot of sub-teams contributing scores to the overall ranking and optimizing for sub-metrics, and it’s too big of a beast to fix. In other words, a failure of leadership.
      • nosbo3 hours ago
        Is it optimising for all users? And that assumes people thumbs-up the correct info. I wonder what accuracy percentage we are looking at there. ChatGPT's responses are so confident when wrong that I fear people will just give it a thumbs up when it's wrong. (That is, if I understand correctly how the feature you mention works.)
        • rawgabbit2 hours ago
          I am not talking about accuracy. Only experts can determine factuality.

          I am talking about relevance, or returning what I asked for. If I ask for reviews of a SaaS product, Google will usually return a rival vendor’s biased review.

          If ChatGPT search returns a review written by “the professional association of xxx developers” or another unbiased site, I will give it a thumbs up. I believe other people will do the same.

    • coffeemug3 hours ago
      Google is entirely to blame. It would be trivial for them to train a model to rank sites on a scale from SEO garbage to Nobel-laureate essay, then filter out the bottom 50%.
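
      "Trivial" is doing some work there, but the shape of it would be roughly this sketch (the quality model is assumed to already exist; nothing here reflects Google's actual stack):

        import statistics

        # Score each candidate page with a learned quality model, then drop
        # everything below the median before ranking. quality_model is a
        # hypothetical trained classifier, not a real API.
        def filter_garbage(pages, quality_model):
            scored = [(quality_model.predict(page), page) for page in pages]
            cutoff = statistics.median(score for score, _ in scored)
            return [page for score, page in scored if score >= cutoff]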
    • JumpCrisscross6 hours ago
      Classic innovator’s dilemma. The politics of Google make launching a search competitor impossible.
      • janalsncm4 hours ago
        Exactly. That’s why I never understood Gemini. It is in a zero-sum game with Search.
    • namuol3 hours ago
      More competition is good, but I’m pretty sure the end result will just be, at best, a duopoly of products equally ruined by advertising for their “free” tier, and somewhat less ruined for the “premium” tier.
    • throwaway3141555 hours ago
      > if one is to believe the reports of their internal politics

      I'm not familiar with what you're referring to here. Happen to have a link?

    • kristopolous4 hours ago
      There's this tech pattern of letting the cash-cow stagnate and deteriorate while focusing on high risk moonshots.

      This especially happens after they dominate the market.

      Take for example IE6, Intel, Facebook, IBM, and now Google.

      They have everything they need to keep things from going off the rails; management, however, has a tendency to delusionally assume their ship is so unsinkable that they're not even manning their stations.

      It becomes Clayton Christensenesque - they're dismissive of the competition as not being real threats and don't realize their cash cow is running on only fumes and inertia until it's too late.

      • devjab3 hours ago
        I think it’s the natural consequence of handing over management to MBA types. They can’t create new things. I don’t mean that they should be programmers, engineers or whatever they’re working with. I mean that they can only extract value from something that already exists, which is always going to be a short-term strategy because the only way to do that is to make “the it” worse.

        I’m not sure Facebook fits in considering they at least managed to get some other products along the way, and may get more.

        I certainly don’t think Google fits the bill. Google is failing because they let their cash cow ruin everything else, not because they let it stagnate while they chased the next moonshot. Google Cloud could have easily been competitive with AWS and Azure in European enterprise, but it’s not even considered an option because Google Advertising wouldn’t let it exist without data harvesting. Google had an Office 365 equivalent long before Microsoft took every organisation online. But Google failed to sell it because… well…

        It’s very typical MBA though. Google has killed profitable products because those products weren’t growing enough. A silly metric really, but one which isn’t surprising when your CEO is a former McKinsey.

        It couldn’t happen to a nicer company though, and at least it won’t kill people unlike Boeing.

        • kristopolous59 minutes ago
          I think you're conflating separate phenomena:

          * Boeing is a consequence of the "Jack Welch" effect - gutting the core in service of short term gains for stock-holders.

          * The MBA type, typified by John Sculley at Apple, is about calcifying the current offerings, presuming market segments supported by historicals with predictable demand. This works well for defensive sectors such as utilities, consumer products and health care, but not for markets with dynamic consumer relationships such as technology.

          * The Google Cloud example is the Xerox PARC phenomenon. Xerox was organizationally structured for cost/benefit investment payoffs only characteristically similar to their mainline products, and thus they couldn't properly allocate resources to things, such as desktop computing, with different kinds of curves. This is similar to how the franchise retailer Blockbuster responded so slowly to the centralized mail-order subscription Netflix. The institutional structure is only so flexible. This is similar to Conway's Law.

          * The "ruin everything else" is a generalized form of a "brand extension failure". Examples include Harley Davidson perfume, Bic underwear, McDonalds Pizza, and Heinz cleaning vinegar - an over-leveraged commitment to a wildly successful core offering makes other ventures impossible.

          This is not that. It's yet something else. Abstractly it's "X is a wild success, let's make Y another X instead of working on X+1"

          Organizations suffer from varying degrees of ailments and they can create codependencies making the unraveling hard. Often it devolves into politics of power brokers with the company's survival dependent on the competency of the influential instead of the influence of the competent. A brutal struggle to control a sinking ship.

          The crisis of the third century happens every day.

        • Eduard53 minutes ago
          > I’m not sure Facebook fits in considering they at least managed to get some other products along the way

          can you name these products?

    • zo16 hours ago
      Google, even with all the trillions in additional productivity it has added to the world, has left the world at a net-negative. We can't even quantify it, and every person will tell you a different way in which it has impacted the world negatively.

      E.g. for me, how much Google (and silicon valley in general) have enabled twisted ideologies to flourish. All in search of ad-dollars by virtue of eyeballs on screens, at the detriment of everything.

      • robocat3 hours ago
        Anyone have some good links/citations on the consumer surplus of Google? How much value do we get as users?

        Considering the value of time, past consumer surplus is especially valuable now.

        Sure, there are systematic flaws causing SEO to ruin the information provided: but it isn't clear what Google can do to fight the emergent system.

        I'm not sure that Bing/DDG are any better.

        I use search (DDG web, Google/Apple maps, YouTube) all the time and I am regularly given results that are extremely valuable to me (and that mostly only cost me a small amount of my time, e.g. YouTube adverts). Blaming SEO on Google seems thoughtless to me. Google appears to be a victim of human cybersystems as much as we are.

      • topaz05 hours ago
        Now we get to see how OpenAI will do similar harm, more effectively and at much greater environmental cost.
      • olkingcole5 hours ago
        I was just wondering what has come out of silicon valley since say 2003 that has been a net positive for humanity. Just because something is profitable doesn't mean it's progress.
    • bamboozled4 hours ago
      No it's not.
  • cjf1018 hours ago
    If the current iteration of search engines is producing garbage results (due to an influx of garbage + SEO gaming their ranking systems) and LLMs are producing inaccurate results without any clear method proposed to correct them, why would combining the two systems not also produce garbage?

    The problem I see with search is that the input is deeply hostile to what the consumers of search want. If the LLM's are particularly tuned to try and filter out that hostility, maybe I can see this going somewhere, but I suspect that just starts another arms race that the garbage producers are likely to win.

    • hatthew7 hours ago
      Search engines tend to produce neutral garbage, not harmful garbage (i.e. small tidbits of data between an ocean of SEO fluff, rather than completely incorrect facts). LLMs tend to be inaccurate because, in the absence of knowledge given by the user, they will sometimes make up knowledge. It's plausible to imagine that they will cover each other's weaknesses: the search engine produces an ocean of mostly-useless data, and the LLM can find the small amount of useful data and interpret that into an answer to your question.
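
      Roughly, that division of labour would look like this sketch, where search() and llm() are placeholders for whatever backend is actually used rather than real APIs:

        # Retrieval-augmented sketch: the engine casts a wide net, the model
        # filters and synthesizes. search and llm are hypothetical stand-ins.
        def answer(question, search, llm):
            results = search(question, limit=50)   # the ocean of mostly-useless data
            relevant = llm(
                "Keep only passages that actually help answer: " + question,
                context=results,
            )
            return llm(
                "Answer the question, citing the kept passages: " + question,
                context=relevant,
            )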
      • lolinder6 hours ago
        The problem I see with this "cover for each other" theory is that, as it stands, having a good search engine is a prerequisite to getting good outputs from RAG. If your search engine doesn't turn up something useful in the top 10 (which most search engines currently don't for many types of queries), then your LLM will just be summarizing the garbage that was turned up.

        Currently I do find that Perplexity works substantially better than Google for finding what I need, but it remains to be seen whether they're able to stay useful as a larger and larger portion of online content is just AI-generated garbage.

      • faizshah4 hours ago
        You just described the value proposition of RAG.
      • lottin5 hours ago
        Maybe it's just me but I have no interest in having a computer algorithm interpret data for me. That's a job that I want to do myself.
        • esafak1 hour ago
          Ranking the results is a prerequisite for any search engine, and that's interpretation, isn't it?
        • swyx5 hours ago
          then you are blissfully unaware of how much data is already being interpreted for you by computer algorithms, and how much you probably actually really like it.
          • orthecreedence4 hours ago
            > how much you probably actually really like it.

            This comes off as condescending. As things have gotten more algorithmic over the last two decades, I've noticed a matching decrease in the accuracy and relevance of the information I seek from the systems I interact with that employ these algorithms.

            Yes, you're right that there are processing algorithms behind the scenes interpreting the data for us. But you're wrong: I fucking hate it, it's made things worse, and layering more on top will not make things any better.

          • lottin5 hours ago
            What do you mean?
        • crakenzak1 hour ago
          Sure, then just do that. No one will hold you at gunpoint and force you to use any kind of tool.
    • fulafel8 hours ago
      The garbage-ness of search results is not binary; the right question is: can LLMs improve the quality of search results? But sure, it won't end the cat and mouse game.
      • cjf1017 hours ago
        I think that's the right broad question. Though LLMs' properties mean that in some number of cases they will either make the results worse or more confidently present wrong answers. This prompts the question: what do we mean by "quality" of results? Since the way current LLM interfaces tend to present results is quite different from traditional search.
      • kevin_thibedeau7 hours ago
        > it won't end the cat and mouse game.

        There is no way to SEO the entire corpus of human knowledge. ChatGPT is very good for gleaning facts that are hard to surface in today's garbage search engines.

      • startupsfail7 hours ago
        The question is what the business model is and who pays for it; that determines how much advertising you're getting. It is not clear if OpenAI could compete in ad-supported search. So maybe OpenAI is trying to do the basic research, outcompete the Bing research group at Microsoft, and then serve as an engine for Bing. Alternatively they could be just improving the ability of LLMs to do search, targeting future uses in agentic applications.
    • shellfishgene7 hours ago
      If I can pretty quickly tell a site is SEO spam, so should the LLM, no? Of course that would just start a new round in the SEO arms race, but could work for a while.
      • sangnoir7 hours ago
        > If I can pretty quickly tell a site is SEO spam, so should the LLM, no?

        Why would you assume that?

      • mplewis7 hours ago
        The LLM is not a human and cannot distinguish between spam and high quality content.
        • serjester6 hours ago
          It seems pretty trivial to fine tune a model to do this - this is really playing to LLM's strengths.
    • valval7 hours ago
      I’d be more cynical still and ask: where is correct information found in the first place? Humans of all shapes and sizes have biases. Most research is faulty, fabricated, or not reproducible. Missing information tells a greater story than existing information does.

      We don’t have a way of finding objective information, why would we be able to train a model to do so?

      • realusername6 hours ago
        Right now I basically can't find anything, the bar isn't "objective information" but "somewhat useful information". Google search quality became so bad we're past the debate of objective or subjective already, I'd be happy enough to get non-spam results.
  • niam8 hours ago
    Genuine question: is there a present or planned value proposition for people like me who already have decent search skills? Or are these really for children/elders who (without making any normative claim about whether this is a good thing or not) can't be arsed to perform searches themselves?

    Does someone else have good search skills but mingle traditional search engines with LLMs anyways? Why?

    I use LLMs every day but wouldn't trust one to perform searches for me yet. I feel like you have to type more for a result that's slower and wordier, and that might stop early when it amasses what it thinks are answers from low effort SEO farms.

    • Willamin8 hours ago
      I find myself being unable to search for more complex subjects when I don't know the keywords, specialized terminology, or even the title of a work, yet I have a broad understanding of what I'd like to find. Traditional search engines (I'll jump between Kagi, DuckDuckGo, and Google) haven't proved as useful at pointing me in the right direction when I find that I need to spend a few sentences describing what I'm looking for.

      LLMs on the other hand (free ChatGPT is the only one I've used for this, not sure which models) give me an opportunity to describe in detail what I'm looking for, and I can provide extra context if the LLM doesn't immediately give me an answer. Given LLM's propensity for hallucinations, I don't take its answers as solid truth, but I'll use the keywords, terms, and phrases in what it gives me to leverage traditional search engines to find a more authoritative source of information.

      ---

      Separately, I'll also use LLMs to search for what I suspect is obscure-enough knowledge that it would prove difficult to wade through more popular sites in traditional search engine results pages.

      • layer88 hours ago
        > I find myself being unable to search for more complex subjects when I don't know the keywords, specialized terminology, or even the title of a work, yet I have a broad understanding of what I'd like to find.

        For me this is typically a multi-step process. The results of a first search give me more ideas of terms to search for, and after some iteration I usually find the right terms. It’s a bit of an art to search for content that maybe isn’t your end goal, but will help you search for what you actually seek.

        LLMs can be useful for that first step, but I always revert to Google for the final search.

        Also, Google Verbatim search is essential.

      • 8 hours ago
        undefined
      • erosivesoul8 hours ago
        I also find some use for this. Or I often ask if there's a specific term for a thing that I only know generally, which usually yields better search results, especially for obscure science and technology things. The newer GPTs are also decent at math, but I still use Wolfram Alpha for most of that stuff just because I don't have to double check it for hallucinations.
      • Lws8038 hours ago
        You might like what we're building in that sense :D (full disclosure, I'm the founder of Beloga). We're building a new way to search with programmable knowledge. You can call on search from Google, Perplexity, and other search engines by specifying them as @ mentions together with your detailed query.
    • jakub_g8 hours ago
      I don't overuse LLMs for now; however, when I have a complex problem that would require multiple searches, dozens of open tabs, and reading through very long docs, asking an LLM lets me iterate an order of magnitude faster.

      Things that were previously "log a jira and think about it when I have a full uninterrupted day" can now be approached with half an hour to spare. This is a game changer because "have a full day uninterrupted" almost never happens.

      It's like having a very senior coworker who knows a lot of stuff and booking a 30m meeting to brainstorm with them and quickly reject useless paths vs dig more into promising ones, vs. sitting all day researching on your own.

      The ideas simply flow much faster with this approach.

      I use it to get a high level familiarity with what's likely possible vs what's not, and then confirm with normal search.

      I also use LLMs for non-work things, like getting a high-level understanding of taxation, inheritance, and similar laws in the country I moved to, as a starting point for further research.

      • itissid8 hours ago
        This. Not having to open two dozen tabs and read through so much is a gamechanger, especially for someone who has had trouble focusing with so much open. This is especially true when learning a new technology.
    • adamc8 hours ago
      I dunno, I'm not exactly on the AI bandwagon, but search is the one place where I use (and see others using) chatgpt all the time. The fact that Google search has been getting worse for a decade probably helps, but better search -- consistently done, without ads or cruft -- would be worth a few bucks every month for me.

      I agree that you can't TRUST them, but half the links regular search turns up are also garbage, so that's not really worse, per se.

      • davidee7 hours ago
        Same, but, until recently, I've been using Microsoft's Co-Pilot because for the longest time it did exactly what this new "search" feature added to ChatGPT: it produced a list of source material and links to reference the LLM's output against. It was often instrumental for me and I did begin to use it as a search engine considering how polluted a lot of first-search results have become with spam and empty, generated content.

        Oddly, Microsoft recently changed the search version of Copilot to remove all the links to source material. Now it's like talking to an annoying growth-stage-startup middle manager in every way, including the inability to back up their assertions and a propensity to use phrases like "anyway, let's try to keep things moving".

        Happy to see this feature set added into ChatGPT – particularly when I'm looking for academic research in/on a subject I'm not familiar with.

    • spunker5408 hours ago
      I think it’s pretty clear that LLMs can process a document/article/web page faster than any human in order to answer a given question. (And it can be parallelized across multiple pages at once too).

      The main hard part of searching isn’t formulating queries to write in the Google search bar, it’s clicking on links, and reading/skimming until you find the specific answer you want.

      Getting one sentence direct answers is a much superior UX compared to getting 10 links you have to read through yourself.
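
      A minimal sketch of that parallel read-and-answer step, assuming the candidate URLs already came back from a conventional search (uses requests, a thread pool, and the OpenAI chat API; the model name is just illustrative):

        import concurrent.futures
        import requests
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def fetch(url: str) -> str:
            # Grab raw page text; a real pipeline would strip HTML and truncate more carefully.
            return requests.get(url, timeout=10).text[:8000]

        def answer_from_pages(question: str, urls: list[str]) -> str:
            # Fetch candidate pages in parallel rather than reading them one by one.
            with concurrent.futures.ThreadPoolExecutor() as pool:
                pages = list(pool.map(fetch, urls))
            context = "\n\n".join(pages)
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model name
                messages=[
                    {"role": "system", "content": "Answer only from the provided pages, citing which one."},
                    {"role": "user", "content": f"{question}\n\nPages:\n{context}"},
                ],
            )
            return resp.choices[0].message.content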

      • tempusalaria8 hours ago
        Only if it is reliably correct.

        Google does offer an AI summary for factual searches and I ignore it as it often hallucinates. Perplexity has the same problem. OpenAI would need to solve that for this to be truly useful

        • astrange4 hours ago
          IME Google's summary is not actually hallucinating, the problem is they are forcing it to quote the search results, but they're surfacing bad/irrelevant search results because Google's actual search hasn't worked in years. It's a RAG failure.

          For instance I searched for the number to dial to set call forwarding on carrier X the other day, and it gave wrong results because it returned carrier Y.
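
          For anyone unfamiliar, the RAG shape being described is roughly: retrieve snippets first, then force the model to answer only from them, so bad retrieval yields a confidently wrong answer no matter how good the model is. A minimal sketch (retrieve() is a hypothetical stand-in for the search index, and the model name is illustrative):

            from openai import OpenAI

            client = OpenAI()

            def retrieve(query: str) -> list[str]:
                # Stand-in for the search index; in the failure mode described above,
                # this is where the wrong carrier's page comes back.
                raise NotImplementedError

            def grounded_answer(query: str) -> str:
                snippets = retrieve(query)
                prompt = "Answer using only these snippets:\n" + "\n".join(snippets) + f"\n\nQuestion: {query}"
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",  # illustrative
                    messages=[{"role": "user", "content": prompt}],
                )
                # If retrieve() surfaced the wrong page, this will faithfully quote the wrong page.
                return resp.choices[0].message.content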

        • vel0city7 hours ago
          This is why my most-used LLM after code suggestions is Bing. I like that it has lots of references for the things I ask it about, so I can double-check and read more, but at the same time it can help me dig deeper into a subject rapidly, better formulate the exact question I'm trying to ask, and give me a link to the actual data it's getting its info from.
        • Lws8038 hours ago
          Agreed, hallucinations can be pretty bad and can hurt trust a great deal.
      • lottin5 hours ago
        > Getting one sentence direct answers is a much superior UX compared to getting 10 links you have to read through yourself.

        If we assume that people want a 'direct answer', then of course a direct answer is better. But maybe some of us don't want a 'direct answer'? I want to know who's saying what, and in which context, so I can draw my own conclusions.

    • awongh7 hours ago
      I use LLMs as a kind of search that is slightly less structured. There are two broad cases:

      1) I know a little bit about something, but I need to be able to look up the knowledge tree for more context: `What are the opposing viewpoints to Adam Smith's thesis on economics?` `Describe the different categories of compilers.`

      2) I have a very specific search in mind but it's in a domain that has a lot of specific terminology that doesn't surface easily in a google search unless you use that specific terminology: `Name the different kinds of music chords and explain each one.`

      LLMs are great when a search engine would only surface knowledge that's either too general or too specific and the search engine can't tell the semantic difference between the two.

      Sometimes when I'm searching I need to be able to search at different levels of understanding to move forward.

    • pflenker8 hours ago
      I find that my search skills matter less and less because search engines try to be smarter than me. Increasingly I am confronted with largely unrelated results (taking tweaked keywords or synonyms to my query as input apparently) as opposed to no results. So my conclusion is that the search engines increasingly see the need of search skills as an anti pattern they actively want to get rid of.
      • layer87 hours ago
        On the Google search results page, activate Search tools > All results > Verbatim. You can also create your own search provider bookmark with verbatim search as the default by adding “tbs=li:1” as a query parameter to the Google search URL (e.g. https://www.google.com/search?q=%s&tbs=li:1).
      • jdgoesmarching6 hours ago
        Completely agreed. At a certain point, “skills” became fighting a losing battle with Google incessantly pushing me towards whatever KPIs or ads they’re chasing. It’s a poor use of my effort and time to keep chasing what Google used to be.
    • hughesjj8 hours ago
      I think it's more about filling the niche that Google's self-immolation in the name of ad revenue opened up. Besides Kagi, there aren't really any solid search engines today (even DDG), and OpenAI has a reach far beyond anything Kagi could dream of without a billion dollars in marketing.
    • sebzim45008 hours ago
      Even if you are good at writing the queries, Google is so terrible that you end up getting some blogspam etc. in there (or at least I do). A model filtering that out is useful, which I find phind pretty good for. Hopefully this will be even better.
    • lighthazard8 hours ago
      LLMs make it really easy for me to quickly find documentation. Across a huge software project like MediaWiki, with so much legacy and so many caveats, I can have an LLM parse the docs and give me specific information, instead of hoping that someone on Stack Overflow already answered it or that I'm lucky enough to stumble across what I was looking for.
    • blixt8 hours ago
      What I really hope this helps solve is covering for the huge lag in knowledge cutoff. A recent example is where it went "oh you're using Go 1.23 which doesn't exist so that's clearly the problem in your Dockerfile, let me fix that".

      But I'm not keeping my hopes up, I doubt the model has been explicitly fine-tuned to double check its embedded knowledge of these types of facts, and conversely it probably hasn't even been successfully fine-tuned to only search when it truly doesn't know something (i.e. it will probably search in cases where it could've just answered without the search). At least the behavior I'm seeing now from some 15 minutes of testing indicates this, but time will tell.

      • ascorbic7 hours ago
        I asked it about the UK government budget which was announced a few hours ago and it gave me a good, accurate summary.
    • Lerc6 hours ago
      I think the skills required will change but more in an adaptation way rather than everything-you-knew-is-now-irrelevant.

      I feel like there is a mental architecture to searching where you try and isolate aspects of what you are searching for that are distinct within the broad category of similar but irrelevant things. That kind of mental model I would hope still works well.

      For instance, consider this query:

      "Which clothing outlets on AliExpress are most recommended in forum discussions for providing high quality clothes, favouring discussions where there is active engagement between multiple people."

      OpenAI search produces a list of candidate stores from this query. Are the results any good? It's going to be quite hard to tell for a while. I know searching for information like this on Google is close to worthless due to SEO pollution.

      It's possible that we have at least a brief golden-age of search where the rules have changed sufficiently that attempts to game the system are mitigated. It will be a hard fought battle to see if AI Search can filter out people trying to game AI search.

      I think we will need laws to say AI advice should be subject to similar constraints as legal, medical, and financial advice where there is an obligation to act in the interests of the person being advised. I don't want to have AI search delivering the results of the highest bidder.

    • melenaboija8 hours ago
      Any question that a few months ago I would have taken to Stack Exchange (or expected an answer to after a Google search), whether coding or quantitative, I now take to ChatGPT.

      I consider myself quite anti LLM hype and I have to admit it has been working amazingly well for me.

    • bigstrat20037 hours ago
      The entire tech industry for the last decade (if not more) has been aimed at people who can't be arsed to learn to use computers properly. I would be astonished if this time is somehow different.
    • layer88 hours ago
      For searches that remain inconclusive, I sometimes double-check with LLMs to see if I have missed anything. It rarely gives relevant new insights, but it’s good to get the confirmation I guess.
    • paul79868 hours ago
      I use GPT for things that would require multiple Google searches (research). Some examples..

      - I count calories... I always eat out, at somewhat healthy chains (Cava, Chipotle, etc). I tell GPT (via voice, while driving there or after eating) what I've eaten for half the day at those places, and then later for dinner. It calculates a calorie estimate for the first half of the day and then, at dinner, for the remainder. I have checked whether GPT is getting the right calories for things off websites, and it has.

      - I have hiking friends who live an hour or two away, and once a month we meet up somewhere within an hour's drive and hike at a new place. GPT suggests such hikes quickly (this used to take many Google searches). Our drives to these new hikes suggested by GPT have always been under an hour.

      So far the information with those examples has been accurate. Always enjoy hearing how others use LLMs... what research are you getting done in one or two queries which used to take MANY google searches?

      • kjellsbells7 hours ago
        GPT is proving useful for me where something is well documented, but not well explained.

        Case in point: Visual Basic for Applications (the Excel macro language). This language has a broad pool of reference material and of Stack Overflow answers. It doesn't have a lot of good explicatory material, because the early-2000s Internet material is aging out, being deleted as people retire or lose interest, etc.

        (To be frank, Microsoft would like nothing more than to kill this off completely, but VBA exists and is insanely more powerful than the current alternatives, so it lives on.)

      • timeon8 hours ago
        With eating out so much, try to ask it about sodium intake as well.
        • paul79868 hours ago
          Yeah, that is somewhat of a concern, and I have asked GPT for that info / to calculate that too (though only a few times).
    • kadomony7 hours ago
      I was skeptical of LLM search until I saw Arc Search in action with its "browse for me" functionality.
    • photochemsyn8 hours ago
      It seems good at finding relevant research papers. e.g.

      > "Can you provide a list of the ten most important recent publications related to high-temperature helium-cooled pebble-bed reactors and the specific characteristics of their graphite pebble fuel which address past problems in fuel disintegration and dust generation?"

      These were more focused and relevant results than a Google Scholar keyword-style search.

      However, it did rather poorly when asked for direct links to the documentation for a set of Python libraries. Gave some junk links or just failed entirely in 3/4 of the cases.

    • tempest_8 hours ago
      I think you need to define "decent search skills" since google will straight up ignore most boolean stuff or return ads.

      The LLMs are nice because they are not yet enshitified to the point of uselessness.

    • carabiner8 hours ago
      > Genuine question...

      When it starts with this you KNOW it's going to be maximum bad faith horsefuckery in the rest of the "question."

      • niam7 hours ago
        I know what you mean, but also don't know how it applies here. Not a hater, and not asking rhetorically to dunk on OpenAI. Just haven't found a use for this particular feature.

        Which is also exactly something a bad-faith commenter would say, but if I lose either way, I'd rather just ask the question ¯\_(ツ)_/¯

        • jdgoesmarching6 hours ago
          You’re not doing a great job of not rhetorically dunking on OpenAI when you imply that it must be for children, elders, or people who can’t be arsed to search.

          The comment was dripping with condescension towards the use of LLM search, and that’s coming from a huge OpenAI skeptic.

          • niam3 hours ago
            Ah fair.

            Replacing "really for" with "more for e.g" would have been closer to the intended comment. I'll take that L.

            Though if I can clarify: "can't be arsed to search" isn't a normative or judgemental claim against anyone, in the same way that "can't remember a phone number/directions" isn't. I'm speaking under the assumption there's a point between now and heat death where massaging search engine queries may literally not be as useful a 'skill' anymore. So there's less utility in young/old people taking the time to learn it.

            But I can see how it sounds when I try to squeeze that into a shorter message.

    • moralestapia8 hours ago
      Genuine answer: this was not made for you. There is a billion-to-trillion dollar addressable market, which you're not a part of. It was made for them.
    • tomjen34 hours ago
      I have used Perplexity (an AI search company) a lot and - well, I don't think you understand. This is not about it being too difficult to find the information. It's that a search in Google will give you a list of places to go that are relevant to your query. AI search will give you the information you want.

      This becomes even better if the information you want is in multiple different places. The canonical question for that used to be "what was the phase of the moon when John Lennon was shot?". There didn't used to be an answer to this in Google - but the AI search was able to break it down, find the date John Lennon was shot (easily available on Google), find the moon phase on that day (again, easily available on Google) and put them together to produce the new answer.

      For a more tech-relevant example: "what is the smallest AWS EC2 instance I can run a Tomcat server in?"

      You 100% can get this information yourself. It just takes much more time than having an AI do it.
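
      The decomposition is essentially two easy lookups composed together; a toy sketch, where search() is a hypothetical stand-in for one conventional search plus answer extraction:

        def search(query: str) -> str:
            # Hypothetical stand-in for a single web search plus answer extraction.
            raise NotImplementedError

        def moon_phase_when_lennon_was_shot() -> str:
            # Step 1: a fact that is easy to find on its own.
            date = search("On what date was John Lennon shot?")
            # Step 2: another easy lookup, parameterized by the first answer.
            return search(f"What was the phase of the moon on {date}?")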

    • MarcoZavala8 hours ago
      [dead]
    • carlesfe8 hours ago
      I think this is just the first step for a full-featured agent that not only does searches for you, but also executes whatever was your goal (e.g. a restaurant reservation, etc)
      • adamc8 hours ago
        To solve that problem you have to solve all the issues that make me not trust the results. As search, it's fine, since I am perusing and evaluating them. But as an agent, hallucinations and inaccurate answers have to disappear (or very close to disappear).
  • acyou7 minutes ago
    Google search doesn't even work inside our intranet. For example, a file's exact name entered as the search string will not return that file in the results in various places inside Google Workspace - Docs, Sheets, etc. That should tell you that it isn't SEO spam that is to blame; Google search just legitimately sucks all on its own.

    Definitely search is ripe for disruption.

  • marckrn6 hours ago
    OpenAI also provides a Chrome Extension (https://chromewebstore.google.com/detail/chatgpt-search/ejcf...) to trigger a search from the omnibar as the default search engine.

    If you don't like that (as I don't), you can also manually add it under Site Search using

    https://chatgpt.com/?q=%s&hints=search
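
    If you'd rather script it than click through browser settings, the same URL template works anywhere you can substitute a query string; a minimal sketch in Python (just URL-encoding the query and opening a browser tab):

      import urllib.parse
      import webbrowser

      def chatgpt_search(query: str) -> None:
          # Same template as the Site Search entry above, with the query URL-encoded.
          webbrowser.open("https://chatgpt.com/?q=" + urllib.parse.quote(query) + "&hints=search")

      chatgpt_search("latest stable Go release")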

    • modeless6 hours ago
      First impression: Far too slow to replace Google Search for this use. I frequently get 5+ seconds of waiting before the first word of a response shows up, vs. less than 1 for Google (which is not as fast as it used to be). OpenAI has a lot of work to do on latency.

      I can definitely see this new search feature being useful though. The old one was already useful because (if you asked) you could have it visit each result and pull some data out for you and integrate it all together, faster than you could do the same manually.

      It's often hobbled by robots.txt forbidding it to visit pages, though. What I really want is for it to use my browser to visit the pages instead of doing it server side, so it can use my logged-in accounts and ignore robots.txt.

    • serjester6 hours ago
      I tried hacking this together a month ago as an experiment and it was super painful. This seems like exactly what I wanted - props to OpenAI. Google should be on DEFCON 2.
    • 3 hours ago
      undefined
    • zurfer6 hours ago
      nice thank you. I just added that as a custom search to alfred (on mac). Works well!
  • GolfPopper21 minutes ago
    Every time I see one of these topics, I go ask chat GPT a question to which I know the answer on a topic where I would like to be able to get useful answers to similar questions that I do not know the answer to.

    This time it was, "Did Paul Edwin Zimmer write a fourth Dark Border novel?" (Real answer: Yes, Ingulf the Mad. You can find the answer on his Wikipedia page.[1])

    ChatGPT's[2] answer: "Yes, Paul Edwin Zimmer wrote a fourth novel in the Dark Border series titled "The Dark Border." This book was published after the original trilogy, which included "The Dark Border," "The Gilded Age," and "The Silver Sphere." If you're interested in the themes or plot, let me know!" (Note: these are not the titles of the 2nd and 3rd novels in the series. Also, it gave me the same name for the putative 1st and 4th books.)

    Pure hallucination.

    1. https://en.wikipedia.org/wiki/Paul_Edwin_Zimmer 2. https://chatgpt.com/

    • nashashmi32 seconds ago
      [delayed]
    • qilo11 minutes ago
      Here's what ChatGPT 4o mini (free) answered to me to the same question:

      No, Paul Edwin Zimmer did not write a fourth novel in the Dark Border series. The trilogy consists of "The Dark Border," "The Dark Border: The Return," and "The Dark Border: The Reckoning." After these, he focused on other projects and did not continue the series.

  • qwertox9 hours ago
    Makes me question why Google never bothered to create something like search sessions which could be enriched with comments/notes and would be located in a sidebar just like the chats in ChatGPT/Claude/Mistral are.

    They really had the potential to do something interesting, but were just focused on their ad metrics with the "good enough" search box. What have they been doing all the time?

    • Liquix8 hours ago
      The FAANG giants have been government assets for ~15+ years [0]. They don't have to turn a profit every quarter, innovate, or make their search any better, because they no longer play by the rules a normal business does. They are a critical "too big to fail" component of the state's global surveillance system.

      [0] https://static1.makeuseofimages.com/wordpress/wp-content/upl...

      • scq5 hours ago
        Linking the slide deck that caused Google to start encrypting the traffic between their own data centers running on their own fiber is perhaps not the most compelling argument that Google is a state asset.

        https://www.newyorker.com/news/amy-davidson/tech-companies-s...

      • onlyrealcuzzo5 hours ago
        > they don't have to turn a profit every quarter,

        And, yet, aside from Aramco, they are the most profitable companies in the history of the world.

      • unnouinceput8 hours ago
        OpenAI is Microsoft. Microsoft is a FAANG giant.
        • hidelooktropic4 hours ago
          Microsoft is their leading investor. They don't technically own OpenAI.
          • unnouinceput2 hours ago
            And technically Musk was not the wealthiest man in the world when he bought Twitter, because he didn't have that much cash lying around in his bank accounts. And yet, when he wanted to, or more precisely was put in a corner and had to, he still shelled out 50 billion and bought it.

            OpenAI is Microsoft, technical details don't matter here, only money.

        • astrange4 hours ago
          Microsoft is literally not in "FAANG". But they are in MANGA.
          • unnouinceput2 hours ago
            Before FAANG the acronym was MAFANG, but due to being too close to a slang word, the politically correct crowd changed it to FAANG. Also, just as a side note, Microsoft is worth a trillion more than the "G" in FAANG.
        • lucianbr8 hours ago
          How is that relevant? Microsoft bought OpenAI, didn't create it by R&D, so the assertion stands: giants don't do new things, for whatever reason.
          • SideQuark6 hours ago
            The number of fundamental breakthroughs at places like IBM, Bell, and Xerox begs to differ with you.

            In some fields of CS, places like MS research garner nearly 50% of all top conference publications.

      • DSingularity8 hours ago
        Not just surveillance. Power projection. I wonder what impact you could have on foreign economies by playing with the quality of these tech giants' outputs?
    • summerlight8 hours ago
      I guess now Google's search stack is too complicated and not many engineers understand what to do in order to introduce a novel, big feature integrated into the full stack vertically and horizontally. And those few capable of doing so are probably completely out of bandwidth, so some random ambitious PM cannot pull their hands into uncertain green field projects.
    • zelphirkalt9 hours ago
      Collecting people's data and making money from that.
      • astrange4 hours ago
        Google doesn't make money from "collecting people's data", they show you ads.

        If they're collecting data it doesn't even work; I make no effort to hide from them and none of their ads are targeted to me. Meta, though, they're good at it.

    • modeless6 hours ago
      Chrome did add a sidebar that shows search sessions (queries grouped with the pages you visited on that topic). Used to be called "Journeys". I don't think you can add notes. I never found it useful in the slightest and I doubt notes would have made it any better. Chrome has been adding random UI features like that over time, but I haven't found any of them at all useful in many years.
    • arromatic8 hours ago
      Can you tell me a bit more ? What do you mean by search session ?
      • qwertox8 hours ago
        Let's see, if I go to " ⋮ -> History -> Grouped History" on the top right of the Chrome browser, I see a "Search History" ( chrome://history/grouped ).

        For example `8 hours ago: "autohotkey hotkeys"` with 4 links to pages which I visited while searching.

        But this is a Chrome feature, not a Google Search feature. https://myactivity.google.com/myactivity does (sometimes? can't see it right now) have a grouping feature of all the searches made, but this is more of a search log than a search management feature.

        So chrome://history/grouped is the closest to what I mean, but I can't pin or manage these history groups, enrich them with comments or even files, like pdf's which could then get stored in Google Drive, as well as get indexed for better searches.

        • arromatic8 hours ago
          Oh, I thought you meant something like commenting under search result links, like under YouTube videos.

          I might be mistaken, but I think Firefox mobile does something similar by grouping search sessions.

  • blixt8 hours ago
    One thing that is quite unfortunate with the state of SEO and the web in general today is that when I asked "what are the latest versions of common programming languages and when were they released?" a large amount of the sources were "13 Tools You Should Learn Now" and the like. This might be a solvable problem within the search API they provide to the LLM, but for now I wouldn't trust current LLMs to be able to filter out these articles as less trustworthy than the official website of the programming language in question.
    • jsheard8 hours ago
      Given how many of those SEO spam sites are themselves generated by ChatGPT now, OpenAI can simply back-reference their own logs to find out which sites are probably SEO spam while everyone else is left guessing. That's vertical integration!
      • arromatic8 hours ago
        If they do that , That's a genius idea.
        • code518 hours ago
          So it'll turn to yet another arms race - similar to captcha, cybersecurity and nuclear weapons. SEO will use AI to fill in fluff inside AI-generated content (which is already done).

            It won't directly match ChatGPT logs, and OpenAI would just be pouring precious compute into a bottomless pit trying to partial-match.
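
            Partial matching doesn't have to mean brute-force text comparison, though; near-duplicate detection over shingled text is the usual cheap trick. A toy sketch of the idea (plain Jaccard similarity over word shingles; a real system would use MinHash/LSH so it doesn't compare every page against every logged completion):

              def shingles(text: str, k: int = 5) -> set:
                  # Overlapping k-word windows; robust to light paraphrasing and inserted fluff.
                  words = text.lower().split()
                  return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

              def jaccard(a: str, b: str) -> float:
                  sa, sb = shingles(a), shingles(b)
                  return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

              # A page whose shingles overlap heavily with logged completions is a likely candidate.
              print(jaccard("the quick brown fox jumps over the lazy dog today",
                            "the quick brown fox jumps over the lazy cat today"))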

          • jerjerjer6 hours ago
            Serve Claude-generated version to OpenAI bots. Serve OpenAI-generated version to Claude bots. Problem solved.

            Serve users a random version and A/B test along the way.

      • itissid7 hours ago
        Or offer two search results when they suspect one is spam and see which one a user likes and train off of that, just the way they do now with ChatGPT.
      • DSingularity8 hours ago
        I’m sure they will be more subtle than that otherwise it will get circumvented.

        I’m sure they will/are tackling this at the model level: train models both to generate good completions and to embed text in a way that performs well at separating generated text from human text.

        • sebzim45008 hours ago
          Would someone even want to circumvent it though? Most sites won't care very much about encouraging scrapers to include them in LLM training data, it's not like you get paid.
    • skydhash7 hours ago
      > when I asked "what are the latest versions of common programming languages and when were they released?"

      The issue is with the query itself. You're assuming that there's some oracle that will understand your question and surface the relevant information for you. Most likely, it will use the words themselves as part of the query, which SEO sites will exploit.

      A more pragmatic search workflow would be to just search for "most common programming languages used" [0], then use the Wikipedia page to get the relevant information [1]. Much more legwork, but with sources. And still quite fast.

      [0]: (Screenshot) https://ibb.co/ggBLy8G

      [1]: (Screenshot) https://ibb.co/H4g5bDf

    • notatoad8 hours ago
      >what are the latest versions of common programming languages and when were they released?

      is this a real question you needed an answer to, or a hypothetical you posed to test the quality of search results?

      of course you're going to get listicles for a query like that, because it sounds like a query specifically chosen to find low-quality listicles.

      • jerjerjer6 hours ago
        I honestly doubt there exists an actual reputable resource that has all of it on the same page. Each language tracks its own latest version(s). Wikipedia tracks latest versions for a variety of software, but on different pages.
    • sireat5 hours ago
      This was a nice counterexample to other queries that seemed quite decent.

      I got:

        Programming Language Latest Version Release Date
        Python 3.12.0 October 2, 2024 ???!!!!!
        Java SE 22 March 2024
        JavaScript ECMAScript 2024 June 2024
        C++ C++23 December 2023
        C# 12.0 September 2024
        Ruby 3.3.0 December 25, 2023
        PHP 8.3.0 November 2023
        Swift 6.0 September 2024
        Go 1.22 August 2024  !!!
        Rust 1.75 October 2024 !!!!
        Kotlin 2.0.0 May 21, 2024 !!!
      
      This is pretty bad. (???!!! added by me)

      However, my follow up query "Provide primary web page for each language listed above" was quite decent:

      Here are the primary websites for the programming languages mentioned:

        Python: https://www.python.org/
        Java: https://www.oracle.com/java/
        JavaScript: https://developer.mozilla.org/en-US/docs/Web/JavaScript
        C++: https://isocpp.org/
        C#: https://learn.microsoft.com/en-us/dotnet/csharp/
        Ruby: https://www.ruby-lang.org/en/
        PHP: https://www.php.net/
        Swift: https://www.swift.org/
        Go: https://go.dev/
        Rust: https://www.rust-lang.org/
        Kotlin: https://kotlinlang.org/
      
      The problem was with the 3rd query "Provide latest version for each language as mentioned on the primary website for that language."

      That brought back the first result basically unchanged.

      So certainly this is a work in progress but very promising.

    • 5 hours ago
      undefined
    • 5 hours ago
      undefined
    • inhumantsar8 hours ago
      This is why I pay for Kagi. Granted, those results still come up, but you can block specific domains from ever appearing in the results and configure how listicles are displayed.
      • arromatic8 hours ago
        How many can you block and filter manually? 10? 100? 10k? Who will test sites for the blocklist? The domain-block feature is great, but unless it's a collaborative list it's not gonna be super effective.
        • hmottestad8 hours ago
          It’s super effective for me because I just block stuff as things pop up that I don’t want. I’ve also added more weight to certain domains that I want more results from. I wouldn’t want anyone touching my config, it’s mine and it works great!
        • hughesjj8 hours ago
          .... Test sites for the blocklist? What?

          Also they do share the most blocked/raised/lowered etc sites: https://kagi.com/stats?stat=leaderboard

          We've had this problem of "good defaults" before with ad-tracker domain blocklists. I'm sure it'll be sooner rather than later when some community lists become popular and begin being followed en masse.

          • arromatic8 hours ago
            I meant that your average user can test a handful of sites to see whether they are SEO spam or good sites, but a single search returns 10+ results, and even more when a user searches for multiple things multiple times a day. The average user doesn't have the time to test that many websites.
        • speckx5 hours ago
          1,000
      • blharr8 hours ago
        Kagi is admittedly pretty great for this.
      • speedgoose8 hours ago
        As an alternative, ublacklist is free and open-source.
        • arromatic8 hours ago
          The average SERP has 10 results. What if all 10 match your blacklist? Not to mention you can't do anything if the engine doesn't search deeper.
          • speedgoose8 hours ago
            You probably have to browse to the next page or refine the search terms.
    • ben_w8 hours ago
      SEO spam is always going to focus on the biggest market, and by doing so they can be completely transparent and obvious to whoever they're not trying to fool.

      I'd assume right now the SEO target is still mainly Google rather than ChatGPT, but that's only an "I recon" not a citation.

      If and when ChatGPT does become the main target for SEO spam, then Googling may start giving good results again.

      • adamc8 hours ago
        Wouldn't it be "I reckon"? :-)
        • ben_w7 hours ago
          D'oh, yes. :)
    • benob8 hours ago
      This is the next step for SEO: be able to game ChatGPT prompts trying to filter out SEO crap...
      • joshdavham8 hours ago
        How do you think people will try to game AI-based search?
    • hmottestad8 hours ago
      For Java I got:

      As of October 31, 2024, the latest version of Java is Java 23, released on September 17, 2024. The most recent Long-Term Support (LTS) version is Java 21, released on September 19, 2023.

      Which all seems correct and accurate.

      • wstrange2 hours ago
        Googling for this query is fast, and instantly surfaces the download link. That seems pretty useful...
      • andrewinardeer8 hours ago
        This is because it is referencing and regurgitating Wikipedia articles.
        • hmottestad6 hours ago
          Nope. It had found an Oracle page for the LTS date and an OpenJDK page for the latest version.
      • blixt8 hours ago
        Yeah I did also find it to be mostly accurate. However, seeing the sources I felt like I kind of have to check all the languages just in case it picked up information from a random "X ways to do Y" article that might not have been prioritizing accuracy. And for this search query I did see several languages' actual websites, but I did another very similar query earlier where 9 out of 12 results were all numbered list articles clearly intended for SEO. 2 of them were actual official sites. And 1 was what appears to be a decent attempt at talking about programming languages (i.e. not SEO only).
  • light_triad10 minutes ago
    Search is a red herring. It was invented when folks didn't realize that links between pages were important. The next step is about disrupting what 'search' means for the internet, not just trying to do better than Google.
    • deelowe8 minutes ago
      Uhuh. And what does that mean exactly?
      • light_triad7 minutes ago
        The world is full of surprises, and innovation is fundamentally about replacement...aka 'disruption'
  • 1010088 hours ago
    Search on the internet worked because people wanted to generate content to attract visitors, whether to display ads or for any other reason - but they wanted to attract people.

    If my content is now going to be ingested and shown by an LLM or AI agent, what's the purpose of giving it away for free? I know it won't happen, but I would love it if these kinds of agents had to pay to show a summarization of another website. It's only fair when done at this scale.

    • amiantos6 hours ago
      I publish my content "for free" because I want to spread knowledge and information, or promote a topic or interest I enjoy, and monetization has never been a priority for me. I know that I am not alone. The urge to create does not depend on a need for money. I am happy for my websites to be picked up by Perplexity or ChatGPT because I want more people to see/learn/hear about the things I care about.

      If someone only creates for money, only publishes on the web to get people to look at advertisements, well... I think there are plenty of other people who don't feel that way that will fill the void left behind in their departure.

      To me it seems weird so many people think the internet only exists because advertising props it up. The internet existed and was a wonderful place before advertising became widespread, and most services and websites will continue to exist after advertising is gone (if that ever happens). What encourages people to believe in some sort of great collapse?

      • 1010084 hours ago
        I said "or any other reasons" because I was in a hurry. Internet existed way before ads, I agree, but even then you wanted people to visit your site to see what you wrote. Maybe to become an expert in a topic, maybe to feel better, but you want people to know who did it. That's why websites used to have a webmaster and info about that particular person.

        If people stop visiting websites because LLMs give them what they want, websites will stop existing. Don't believe me? Check how many "fansites" about topics exist now compared to ten years ago, when there weren't social networks. They have been replaced by influencers with huge followings on Instagram, TikTok, Twitter, and more. The same will happen.

      • submeta3 hours ago
        Absolutely this! Actually the internet was driven by people who wanted to share their ideas, not by those who wanted to sell their ideas.
    • juunpp5 hours ago
      Seems like a good thing to me.

      - The clickbait, SEO-optimized garbage that today fills 95% of search results could entirely disappear as a business model because they have nothing interesting to offer and the LLM company won't pay for low quality content.

      - The average Joe blogging on their website won't go anywhere because they aren't profiting from it to begin with. And the LLM linking back to the page with a reference would be a nice touch. Same logic applies to things like Open Libra and projects that are fundamentally about open information and not about driving ad revenue.

      But, on the other hand, I don't think LLM-based search will fundamentally change anything. Ad revenue will get in the way as always and the LLM-based search will start injecting advertisements in its results. How other companies manage to advertise on this new platform will be figured out. What LLM-based search does is give Microsoft and others the opportunity to take down Google as the canonical search engine. A paradigm shift, but not one that benefits the end user.

    • pradn8 hours ago
      Well the whole point of this product is to link back to websites. There’s no necessary link between the text and the links, which are chosen after the fact from an index. That’s different from traditional search engines, where links are directly retrieved from the index as part of ranking.
      • thund6 hours ago
        The answer though has so much info already that it reduces the need to visit the links.
    • trump20258 hours ago
      I think your comment highlights a very important shift in the market for ads, and you are right that, increasingly, the current atmosphere hints there will be little to no incentive to publish original creative work in the future if there is no compensation for it like Google used to provide.

      We've basically reached peak human-driven novelty (or, McKennaists will argue, it already happened around the mid-2010s), and we'll see AI-driven novelty, with the difference being that an even smaller group of people will be paid royalty fees.

      Once creative destruction reaches critical mass, we'll finally see billionaires and companies around the world succumb to demand for UBI.

      If you want to see the future just look at China. Billionaires are being hunted down and threatened to give up their offshore accounts.

  • Xcelerate9 hours ago
    How long until advertisements are subtly introduced? I didn’t notice any specific brand of limoncello recommended in their demo.
    • jsheard9 hours ago
      Probably not long, some users already got A/B'ed into testing "sponsored results"

      https://i.imgur.com/UpAptFL.png

      • FriedPickles9 hours ago
        The response on the left references specific products, but where's the evidence that it's sponsored?
        • alwa8 hours ago
          Aside from the marketing-ish tone and specific deeplinks to product purchase pages, the prominent Amazon logo and product description headline implied some degree of affiliation to my eyes. It seems like the evidence is that it would be foolish not to take the money for presenting such an obvious referral of a motivated buyer.

          Frankly the example they posted seems like a fairly happy one, where the user is explicitly implying that they’re seeking a specific physical product to introduce to their life. We’ve all seen where those monetization incentives lead over time though.

          But you’re right—not even so much as a tiny word “Ad” like Google does…

          • citizenpaul6 hours ago
            It could just be affiliate links? Wow billions of dollars poured into AI so it can serve up referral links. A boring dystopia indeed.
    • nuz9 hours ago
      It's already happened in a subtle way via who got to partner with them to be displayed in results vs not.
    • axus8 hours ago
      I'd be happy to have another Google clone, that doesn't have a login and is not a chat session. Go to https://search.ai , type my search query and look through the results, with ads on the side.
    • findthewords9 hours ago
      I hope very quickly. The sooner they start competing with Google for ads the better.
      • solfox9 hours ago
        Are ads what people want?
        • boweruk8 hours ago
          No but once ChatGPT starts threatening Google's revenue model, maybe they will start putting effort into improving their drastically deteriorating search engine.
          • Teever7 hours ago
            But why is that good for me?

            Why do I care if Google succeeds or dies?

            If anything I want them to die for ad infested they've made the internet. I don't want ads in either chatGPT or Google Search.

          • riku_iki8 hours ago
            They need to win search share to threaten Google's revenue model: take traffic away from google.com, where Google sells its ads. Getting into the ads business themselves is not necessary for this.
        • gk19 hours ago
          People want whatever they searched for. If the ads provide that, then sure. That's why Google and Meta are the size that they are...
          • croes9 hours ago
            Most of the time I don’t search for products so there is nothing I want to buy.
          • goatlover8 hours ago
            I don't want ads when I search.
          • sundaeofshock8 hours ago
            Google is the size it is due to monopoly power.
            • 8 hours ago
              undefined
        • kaonwarb8 hours ago
          I don't want ads. But I can't deny that ads are the only business model with a chance of scaling to compete with Google.

          If that's what they want to do in this space, which is not a given.

        • moralestapia8 hours ago
          Plenty of times the answer is yes.
    • luckydata2 hours ago
      It's absolutely coming. I'm curious to see what their ad units will look like. IMHO ads in an LLM search world will look more like Facebook ads than Google ads. Brand advertising will stay focused on YouTube while click to buy and click to download are probably the best fit for the medium.
    • schmidtleonard9 hours ago
      2 years for ads, 6 years to remove the yellow background.
      • M4v3R9 hours ago
        I think you’re being very generous with these 2 years.
        • schmidtleonard8 hours ago
          Yeah, I suppose OpenAI also speedran the "make noble promises to not become evil / become evil" pipeline too.
    • josefritzishere9 hours ago
      They might wait a whole week.
    • tomjen34 hours ago
      It is much more expensive to run a search query with AI. I don't think ads can pay for it.
    • TZubiri9 hours ago
      5 to 10 years
    • breck9 hours ago
      Why would they ever want to sell ads?

      They did not get addicted to selling ads, have billions in revenue from paying subscribers, and don't have to wean themselves off of ads (as Google and Meta would love to do).

      • disgruntledphd28 hours ago
        Because they are massively structurally unprofitable right now?
      • kredd8 hours ago
        Why make $1 when you can make $100?
      • 23B18 hours ago
        Because Sam Altman needs to buy another Greubel Forsey, of course.
    • BiteCode_dev9 hours ago
      Honestly, if I can disable ads by paying them, then I'm ok with it.

      Google will suck all your data even if you pay, and link the entire earth of services to your identity.

      For now, chatgpt doesn't care, and I already pay for what they provide.

      May they kill Google.

      20-year-old me would freak out hearing me say that; they used to be my heroes.

      • zelphirkalt9 hours ago
        You are thinking you can pay them to not use your data? Think again. They will sneakily use your data anyway. If not yours, then the data of people who do not change setting xyz. Oops, the last update must have reset that option for some users.
      • entropicdrifter9 hours ago
        You either die a hero or mumble mumble
      • arcticbull8 hours ago
        So the issue is if you let people opt out by paying you’re left with a low intent, likely lower net worth group of people to advertise to. As a result those eyeballs are worth less. The advertisers will turn to other platforms if you only let the worst people see their ads.

        Unless enough people all pay, the whole thing stops working. But there aren’t enough people who will pay because most people don’t care.

        Tldr: the ad supported business model fundamentally doesn’t work if you let all your best products (you) opt out by paying. It requires them to pay an amount far in excess of what they would be willing to pay for the system to work.

        • spearman8 hours ago
          There's some truth to that, but Netflix, YouTube, etc seem to be OK with both ad-supported and paid ad-free versions, so I think the logic you described does not always dominate the considerations.
          • arcticbull8 hours ago
            I think you’re right that it’s not universal - maybe something to do with medium and attention?
      • drilbo4 hours ago
        They're definitely still using your data though
      • croes9 hours ago
        You can pay to get fewer ads
      • swatcoder9 hours ago
        > Honestly, if I can disable ads by paying them, then I'm ok with it.

        The modern maxim is: any content platform large enough to host an ad sales department will sell ads

        Vanishingly few (valuable) consumers have zero tolerance for ads, so not selling ads means leaving huge sums of money on the table once you get to a certain scale. Large organizations have demonstrated that they can't resist that opportunity.

        The road out is to either convince everyone to have zero tolerance for ads (good luck), to just personally opt for disperse, smaller vendors that distinguish themselves in a niche by not indulging, or to just support and use adversarial ad blockers in order to take personal control. Hoping that the next behemoth that everybody wants to use will protect you from ads is a non-starter. Sooner or later, they're going to take your money and serve you ads, just like the others.

    • littlestymaar9 hours ago
      Not long before it's forbidden by law with rules like “if you say the name of one brand, you must name at least two competitors” I suspect.
      • KeplerBoy9 hours ago
        That'll be the European version.
        • littlestymaar9 hours ago
          Don't Americans also have rules about hidden advertising like that in regular media?
          • bandrami8 hours ago
            The American model prefers "sponsored material should be identified as such" though that's only active for broadcasting currently
            • 8 hours ago
              undefined
          • tiahura8 hours ago
            American law generally favors freedom of expression.
            • arcticbull8 hours ago
              There are several classes of restrictions on free speech in the US. These include: obscenity, fraud, speech integral to illegal conduct, speech that incites imminent lawless action, speech that violates intellectual property law, true threats, false statements of fact, and —- most relevant here -— commercial speech such as advertising.

              Advertising has far less protection than is ordinarily afforded to the kind of speech you might do as a person.

            • 8 hours ago
              undefined
  • trump20258 hours ago
    This might be unpopular opinion but it really isn't as big of a deal as OpenAI makes it out to be (like their previous announcements)

    The truth is, I haven't used ChatGPT at all since spring of this year. Claude's Sonnet 3.5 has replaced it. I pay very little attention to what OpenAI releases and simply waits for Anthropic to implement it.

    I also started using Gemini, which already outperforms Perplexity and this, and I will not switch.

    I think everybody is so constantly caught up in their infatuation with OpenAI and other characters that they don't realize Google and Anthropic are actually building a moat, which some, like Gary Marcus, keep insisting is impossible.

    I'm a realist and I can see that while Google has been slower to start, it reminds me of the search engine wars of 2000s, it is dominating and winning over users.

    • nomel5 hours ago
      > I also started using Gemini which already outperforms perplexity and this and will not switch.

      Related, I haven't paid for Gemini since about a month after release, but the morally corrupt query of "Show me articles from left and right leaning news sites about <headline topic>" would result in Gemini censoring right leaning urls with a "url removed" placeholder and belittling statements about the concerns of showing me right leaning content. Perplexity had no issue with such a dastardly prompts.

      I want a tool, not a curated experience, so Gemini is in my "will not use" list for the foreseeable future.

      I admit I haven't tried this lately, but I also have no desire to help fund that sort of behavior.

    • azinman26 hours ago
      OpenAI has the consumers. Anthropic has enterprise (or at least that's the majority of their customers). OpenAI is better positioned to do so. Eyeballs and brand are key.
  • steveoscaro16 minutes ago
    Google SEO experts and companies that have spent a ton of money on that stuff must be sweating today.

    (I sweat every day as I contemplate my web-dev self huddling in the tall shadow of AI approaching)

  • _andrei_6 hours ago
    Pretty underwhelming; I've been using Kagi's Assistant [0] for the past few months and it's much better. I can `!chat what i want to search for` in my address bar any time, and Kagi will do the search and then open a chat with the LLM of my choice (3.5 Sonnet) with the results in context. It can also do further searches.

    [0] https://kagi.com/assistant

    • cvburgess5 hours ago
      Second this, Assistant is a game-changer for me. It's the usefulness of AI with footnotes, so I know where information came from and whether I trust it. Especially for product reviews, being able to use lenses that filter out sites I don't trust and then run AI on top of it is pretty cool.
  • dmix26 minutes ago
    This is a great way to find what movies or concerts are playing at a theater/venue. It even lists how much each costs.

    You can even search “what [music genre] is playing in [city] this November” and it lists them.

    All of those would normally take multiple clicks/manual filtering or ad filled aggregator sites.

  • SethMLarson9 hours ago
    Hah, OpenAI is becoming an ads business too. So much for something new, same old funding model for every centralized platform on the web.
    • troymc9 hours ago
      OpenAI has ads? I thought it was mostly a freemium business model.
      • SethMLarson9 hours ago
        I'm saying that by moving towards explicit "search" and "linking to sources" they have set the stage for being able to charge to be recommended by their search features (ie, ads and pay-to-rank, same as Google search).

        There aren't any ads in their demo, we haven't seen the real deal yet, but I'll be watching HN for that day.

        • soheil9 hours ago
          Why, just because search has ads therefore anything that is a superset of that must also?
          • croes8 hours ago
            Because ads bring money and companies love money
          • littlestymaar9 hours ago
            Over the past two decades, ads have proven to be the only way to make money over the internet…
            • ascorbic4 hours ago
              They're making $300 million per month in revenue right now, with no ads.
              • opdahl2 hours ago
                That is what Google makes in less than half a day by selling ads. It’s a rounding error on their monthly revenue.
    • breck9 hours ago
      If they are making billions from subscriptions, why on earth would they want to switch to an ads business?
      • SethMLarson9 hours ago
        Making billions but spending trillions for no moat (GPUs and models aren't moats) means that the only moat they have are users. Users aren't paying enough to offset costs, the only way to get value from non-subscription users for their scale is through ads.
      • RodgerTheGreat9 hours ago
        Might have something to do with the fact that they're also still losing billions operating their services at a loss!
        • nomel5 hours ago
          Seems like the sane thing to do would be to put ads on those using the service for free.
      • croes8 hours ago
        To make more billions
      • short_sells_poo9 hours ago
        Because it is never enough. We see this time and time again. Once they are making billions, the people in charge will demand that they start making dozens of billions, and then hundreds. The growth must never cease, because the moment you stop growing, you can't sell the dream that supports ridiculous PE ratios anymore.

        Google was a very profitable business 10 years ago and the search was still decent. In the last decade they absolutely butchered their core product (and the internet along with it) in an effort to squeeze more ad dollars out, because it's not the level of profitability that they need to maintain, but the growth of that profitability.

        Microsoft was a ridiculously profitable company, but that is not enough, they must show growth. So they add increasingly user hostile features to their core product because the current crop of management needs to see geometric growth during their 5 year tenure. And then in 5 years, the next crop of goobers will need to show geometric growth as well to justify their bonuses.

        Think about this for a moment: the entire ecosystem is built on the (entirely preposterous) premise that there must be constant geometric growth. Nobody needs to make a decision or even accept that this is long term sustainable, every participant just wants the system to keep doing this during their particular 5-10 year tenure.

        It's an interesting showcase of essentially an evolutionary algorithm/swarm optimizer falling into a local optimum while a much better global optimum is out of reach because the real world is something like a Rastrigin function with copious amounts of noise with an unknowable but fat tailed distribution.

        <rant/> by a hedge fund professional.

        • drilbo4 hours ago
          >It's an interesting showcase of essentially an evolutionary algorithm/swarm optimizer falling into a local optimum while a much better global optimum is out of reach because the real world is something like a Rastrigin function with copious amounts of noise with an unknowable but fat tailed distribution.

          I've never heard it framed like this before, that's beautiful.

        • breck7 hours ago
          This is such a good rant, and I think you should develop it into an essay and I think there is an important catchy natural equation to mine here.
      • jajko8 hours ago
        You can ask the same of, e.g., Apple, where you pay a proper premium for products, yet their ads business keeps slowly growing to respectable proportions, and not by accident.
      • layer87 hours ago
        They get inspired by streaming services doing the same.
      • stocknoob5 hours ago
        Switch? Why not both?
      • insane_dreamer8 hours ago
        they're not making billions from subscriptions
      • soheil9 hours ago
        Slightly more accurate: they're raising billions making pennies.
  • ryzvonusef8 hours ago
    https://x.com/sahir2k/status/1852038475606036577

        > how tf is it reading private repos ?!
    • thrdbndndn7 hours ago
      It was also indexed by Bing.

      I usually assume good faith, but in this particular case, I believe the chance that this repo was public before and the author just changed it to private to bait attention is far higher than that Bing/ChatGPT can actually read private repo on GitHub.

    • numbers7 hours ago
      Not sure how it was able to read this user's repo name, but for me, it's optimistically saying "yes, they have a repo named X" for whatever I ask, and sometimes I do have that repo and sometimes I don't.

      I have a private repo named "portland-things" and I asked "does this user have a repo related to portland?" and it responded with "yes it's called 'pdx'" but that's not correct at all.

    • msoad7 hours ago
      Wow! This is the real news here!
      • gauge_field7 hours ago
        As one person pointed out in the thread, it also shows up on bing results, main repo, main.py file and releases page. But not on google. Edit: It also shows up on duckduckgo
  • ineedasername1 hour ago
    To test things out I searched for a neck brace. A few days ago I was trying, unsuccessfully, to find one that met certain criteria. Google didn't do a very good job, and not even as a result of countless ads -- there were surprisingly few. But the results still weren't helpful, so I went to Images to sort through countless photos for ones that looked promising and still didn't find something that quite met what I was looking for.

    I just did the same search with chatGPT and it gave 6 bullet-point options with a reasonable description, though that was likely based off marketing copy. Half were white-label rebrands of the same option but that's not really ChatGPT's fault, and even then it was the one that best met what I've been looking for.

  • anonu1 hour ago
    So this confirms my earlier belief that OpenAI will continue to release new features that will kill swaths of startups every few months. Perplexity may be on its deathbed. I see some comments suggesting Perplexity's output quality is much better -- but I think it's only a matter of time before OpenAI catches up.
  • illnewsthat9 hours ago
    Looks like this was timed to coincide with Google adding search grounding data to Gemini API: https://news.ycombinator.com/item?id=42008834 // https://developers.googleblog.com/en/gemini-api-and-ai-studi...
    • 7thpower9 hours ago
      They are taking a page out of Microsoft’s strategy of clouding out all sunlight.
  • summerlight5 hours ago
    I don't understand those people who hope that LLM-based search will reduce or eliminate ads in search results. That's not going to happen. You're conflating two orthogonal problems. In fact, LLMs are much more expensive, so you will need more aggressive ads to fund them. And ads in an LLM-based search engine will be harder to remove, since they're deeply ingrained in the generated result.

    The fundamental problem is that an ads-based business model is much more lucrative than a subscription-based one. It's even more extreme when you take a prospective view into account, since you have control over the ads shown, which gives you a large margin for future revenue improvements compared to rigid subscription models. Unless you have a way to change this dynamic, you're going to eventually see ads in search results, regardless of format.

  • grahamj9 hours ago
    To my mind one of the great benefits of LLMs is the possibility of searching without handing over some of the most personal information that exists - your search history.

    I’m happy OpenAI is advancing LLM-based search but I won’t be using it in earnest until it’s local.

  • jameslk7 hours ago
    What’s the benefit for websites to allow OpenAI/Microsoft to scrape and republish all their content without sending traffic back to them? It seems like these type of “search engines” will just get blocked in the short term.

    Longer term, it seems what will be left is “AI Optimized” content, which turns LLM search engines into shills for advertisers. Or these new search engines will have to compensate content producers somehow.

    • nomel5 hours ago
      This is probably powered by Bing indexes. I doubt many would intentionally exclude themselves from Bing. I suspect Gemini is the same, using Google's index.
  • rurban4 hours ago
    I just discovered Perplexity for proper AI search yesterday. My test question today was: which CBMC solver is multi-threaded?

    Perplexity gave me the correct and best answers, with links to the relevant arxiv papers.

    The new ChatGPT search gave me only cadical as answer, plus 2 irrelevant wrong answers (not multi-threaded), but missed all other multi-threaded solvers. => It's crap.

    Neither Google nor ddg gave me any relevant links. Couldn't try kagi, since my trial phase is over.

    Looks like the fellow who was invited to the Google funeral was right. Google search is dead.

    • Etheryte4 hours ago
      It's hard to cross check since you didn't mention what you consider are good answers, but the quick answer snippet on Kagi says:

      > The Yogar-CBMC and JCBMC solvers are notable multi-threaded variants of the CBMC (C Bounded Model Checker) framework: ...

      Followed by further details and references. The search results themselves look relevant and reasonable to me, but again, outside my area of expertise.

      • rurban4 hours ago
        Both are bad answers
        • Etheryte3 hours ago
          Okay and why is that?
    • Bromeo4 hours ago
      Kagi answers that there is "CBMC", which is single-threaded, but that there are extensions "Deagle" and "Yogar-CBMC" that provide multi-threading for CBMC. It gives links to the papers for all three, however some of them are closed access (or in other words, fact checked, unlike arxiv).
  • nextworddev9 hours ago
    I have turned bearish on Perplexity recently, this confirms it
    • keiferski9 hours ago
      Perplexity is a really terrible name for a product and that alone will hold it back from being a real competitor.
      • Me10009 hours ago
        It's not like ChatGPT (or ChatGTP as half of people call it) is much better.
        • DrBenCarson9 hours ago
          It became a good name once it became a watershed viral phenomenon. All else being equal, yeah, not a great name, but it defined a new hype cycle so it got a pass.
        • woadwarrior018 hours ago
          ChatGTP and other variants with a Levenshtein distance of 1 from ChatGPT have been typosquatted to death by subscriptionware wrappers on the App Store and the Play Store. Many of them seem to be quite successful.
        • keiferski9 hours ago
          ChatGPT isn't a great name, but it's easy to say, spell, and remember. And at this point, a lot of people just know them as OpenAI, which is a great name.

          Perplexity sounds like a parody startup name from the Silicon Valley TV show. Way too complicated and unnatural.

          • skybrian9 hours ago
            You’re saying that now, but getting the initials in “ChatGPT” in the right order took a while to learn, so I wouldn’t say it’s easy to remember, and it seems easy to stumble saying it, too?

            It’s all about familiarity. Once people learn it, it’s not hard.

            • keiferski8 hours ago
              But it didn't matter if they were in order or not, because ChatPTG or ChatTGP all go to the same place via Google, etc. It could have been called Chat + [Any 3 characters] and been fine.

              Perplexity is just a nonsensical word (for those unfamiliar with the concept) that is too long and hard to spell. They'd be better off just chopping it down to Lexity, or Lex, or Plexity, or Plex, etc.

              • esafak46 minutes ago
                It's not a nonsensical word. https://en.wikipedia.org/wiki/Perplexity
              • echoangle1 hour ago
                Isn't perplexity a normal English word? Does an average American not know what perplexity in the usual sense is? The word has a meaning outside of AI, too. Or would they only recognize "perplexed"?
            • SG-8 hours ago
              in French it translates to ChatFart when you read it out loud.
      • toephu25 hours ago
        Same with 'Claude 3.5 Sonnet'.

        At first I thought it was some piano piece like "Mazurkas, Op. 59" by Chopin, or had something to do with some French guy in the AI field.

      • currymj8 hours ago
        A good language model is one with low perplexity.

        https://en.wikipedia.org/wiki/Perplexity

        Reasonable name for a language model startup.

        • stocknoob5 hours ago
          Would False Positive be a good name for a medical screening service?
        • keiferski7 hours ago
          It would be a logical name if its customers were technicians familiar with LLMs, and not end businesses and consumers. Which is why Ford wasn't named Internal Combustion Engine, Apple wasn't named Graphics Processing Unit, etc.
          • staticman26 hours ago
            Perplexity sounds no weirder to me for a tech product than Google or Yahoo or Apple.
          • currymj7 hours ago
            Generative Pretrained Transformer is also a terrible brand name but it doesn’t seem to matter.
            • keiferski7 hours ago
              Because the name isn't Generative Pretrained Transformer, it's GPT.
      • woadwarrior018 hours ago
        I've always thought that the name is very ironic and perhaps "certitude" would've been a better name.
      • soheil9 hours ago
        Agreed, it's as if someone completely ignored the meaning of the word and just decided what sounds good for an AI app.
    • marban9 hours ago
      Perplexity was a cult in the first place.
      • beng-nl9 hours ago
        I’m surprised at all the negativity on perplexity. I think it’s a great approach (base answers on sources) and their product seems to deliver on the premise.

        That said, anecdotally, I find it's a bit hit-or-miss: if it's a hit, it's a huge improvement over Google (and a minor improvement over ChatGPT); if it's a miss, it's still good, but I get the feeling you won't get anywhere further by asking more questions.

    • joshdavham8 hours ago
      I actually like Perplexity a lot. It's really good for doing research. But if this new chatGPT search thing is better, I'm gonna switch.
    • veber-alex7 hours ago
      I have been using Perplexity with the AI engine set to Claude 3.5 Sonnet for a month now, mostly for programming related questions, and it has been amazing. I mostly stopped using google.
      • speckx5 hours ago
        I do the same but with Kagi.
    • forbiddenvoid9 hours ago
      It took me about 5 minutes to figure out that Perplexity wasn't the product I needed. I'm not sure this is either, but we'll try it out just the same.
      • 8 hours ago
        undefined
      • yungtriggz9 hours ago
        @forbiddenvoid what is the product you need?
    • rvz9 hours ago
      It's clear where Perplexity is eventually going, and it will likely get acquired by Amazon. Here's why: [0]

      [0] https://news.ycombinator.com/item?id=41121821

      • nextworddev9 hours ago
        One thing about Amazon is that I have never seen them overpay for an acquisition (as in they really penny pinch and negotiate hard). So Perplexity’s high price tag may turn Amazon off
  • og_kalu9 hours ago
    """Search will be available at chatgpt.com (opens in a new window), as well as on our desktop and mobile apps. All ChatGPT Plus and Team users, as well as SearchGPT waitlist users, will have access today. Enterprise and Edu users will get access in the next few weeks. We’ll roll out to all Free users over the coming months."""

    Can confirm that free users who signed up for the waitlist can use it right now (even if they didn't actually get in)

    • rty328 hours ago
      I signed up for the wait list as well and got the email. However there is no search button on the web interface (free tier). Is there anything I am missing?
    • qwertox9 hours ago
      Can confirm this as well. I switched to free around a month ago and got access to this today. I did join the waitlist some weeks ago.
  • marcusestes8 hours ago
    I like it. It's clean, fast, and the results seem solid. Google search has become so bloated with sponsored results that I've been hoping for a tool that could provide better results than DDG or Bing.

    I'm going to use this as my daily driver for a few weeks.

    The contemporary web is basically an epiphenomenon of Google, and they've failed to defend it. I hope OpenAI puts a huge dent in their market share.

  • alexawarrior45 hours ago
    Tried "Sailflow Galveston Bay", which for google search returns the Galveston Bay / Kemah wind/weather report for sailing in the area. For GPT Search, returned the Galveston Buoy as the first result, which is WRONG, as it is 10+nm off the coast of Galveston with often VERY different weather than Galveston Bay. The second GPT search result was just the Sailflow home page.

    So in this case at least, GPT Search is far inferior and dangerously incorrect were someone to rely on these search results for weather information.

    • rafram5 hours ago
      It returns the same result as Google for me. I'll take your word for it re: the importance.
  • GavCo8 hours ago
    "The search model is a fine-tuned version of GPT-4o, post-trained using novel synthetic data generation techniques, including distilling outputs from OpenAI o1-preview."

    More info on model distillation: https://openai.com/index/api-model-distillation/
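
    To make "distilling outputs" a bit more concrete: the public recipe is to have a strong teacher model answer a pile of search-style questions, then fine-tune a cheaper student on those (question, answer) pairs. A rough sketch against the public API is below; the model IDs, file name and questions are placeholders, and this obviously isn't OpenAI's internal pipeline:

        # Illustrative distillation sketch: generate synthetic answers with a
        # "teacher" model, then fine-tune a cheaper "student" model on them.
        import json
        from openai import OpenAI

        client = OpenAI()
        questions = ["who won the 2024 world series?", "weather in nyc this weekend?"]

        with open("distill.jsonl", "w") as f:
            for q in questions:
                teacher = client.chat.completions.create(
                    model="o1-preview",  # teacher (placeholder model ID)
                    messages=[{"role": "user", "content": q}],
                )
                pair = {"messages": [
                    {"role": "user", "content": q},
                    {"role": "assistant", "content": teacher.choices[0].message.content},
                ]}
                f.write(json.dumps(pair) + "\n")

        # Upload the synthetic pairs and kick off a fine-tune of the student model.
        training_file = client.files.create(file=open("distill.jsonl", "rb"), purpose="fine-tune")
        client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-4o-mini-2024-07-18")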

  • sidcool9 hours ago
    It will be fun to see how they stand up to Google and Perplexity. I feel they are a bit late in the search game, but excited to see what they cook
    • 7thpower9 hours ago
      I have learned to seriously question my instincts on when something is too late as there are many niches to fill and this is likely a building block for broader functionality.

      That being said, for all the talk about how bad google has become, I still prefer it to an unbroken bing.

    • toomuchtodo9 hours ago
      Anyone can compete as long as they have a sufficiently robust crawl dataset as a foundation, no?
      • baby_souffle9 hours ago
        > Anyone can compete as long as they have a sufficiently robust crawl dataset as a foundation, no?

        There's some sticking power/network-effect/sticky-defaults effects, too, though.

        It's _trivial_ to do a google search from anywhere on an android device with at most a tap or two. You can probably get close if a 3rd party has a well integrated native app but that'll require work on the user's behalf to make it the default (where possible).

        Same goes for the default search engine for browsers/operating systems ... etc.

        I will absolutely be firing off queries to google and GPTSearch in parallel and doing a quick comparison between the two. I am especially curious to see how well queries like "I need the PCI-e 4 10-gig SFP+ card that is best supported / most popular with the /r/homelab community" goes. Google struggles to do anything other than link to forums where people are already asking similar questions.

      • vineyardmike9 hours ago
        Anyone can compete as long as they have a functional URL and web page. Doesn’t make them good competition, and doesn’t mean users will use it.

        The issue is that “AI search” has been a hot topic for a while now. Google (the default everywhere) just rolled out their version to billions of users. Perplexity has been iterating and acquiring customers for a while. Obviously OpenAI has great potential and brand recognition, but are enough people still interested in switching that haven’t yet?

      • jsheard9 hours ago
        A fossilized snapshot will only get them so far, and sites are increasingly opting to block AI-related crawlers. Apparently about a quarter of the top 1000 sites already block GPTBot: https://originality.ai/ai-bot-blocking

        I guess they could be using Bing as their search backend, which would mostly get around the blocking issue (except for searching Reddit which blocks Bingbot now).
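
      (For reference, the opt-out those sites are shipping is just two lines in robots.txt, per OpenAI's published crawler documentation:)

            User-agent: GPTBot
            Disallow: /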

        • toomuchtodo9 hours ago
          Certainly, countermeasures against crawler blocking will be a necessary component of effective search corpus aggregation going forward. Otherwise, search will balkanize around who will pay the most for access to public content. Common Crawl is ~10PB; this is not insurmountable.

          Edit: I understand there is a freerider/economic issue here, unsure how to solve that as the balance between search engine/gen AI systems and content stores/providers becomes more adversarial.

          • jsheard9 hours ago
            AFAIK OpenAI currently respects robots.txt, so we'll have to see if they change that policy out of desperation at some point.
            • andrethegiant7 hours ago
              > AFAIK OpenAI currently respects robots.txt

              I wonder to what degree -- for example, do they respect the Crawl-delay directive? For example, HN itself has a 30-second crawl-delay (https://news.ycombinator.com/robots.txt), meaning that crawlers are supposed to wait 30 seconds before requesting the next page. I doubt ChatGPT will delay a user's search of HN by up to 30 seconds, even though that's what robots.txt instructs them to do.
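
              You can check what a site asks for with the stdlib parser; a quick sketch ("GPTBot" here is just an example user agent, since HN's delay applies to everyone via the "*" rule):

                  # Read HN's robots.txt and report the crawl-delay / fetch rules.
                  from urllib.robotparser import RobotFileParser

                  rp = RobotFileParser("https://news.ycombinator.com/robots.txt")
                  rp.read()
                  print(rp.crawl_delay("GPTBot"))  # 30, per the site-wide "*" entry
                  print(rp.can_fetch("GPTBot", "https://news.ycombinator.com/item?id=1"))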

              • echoangle1 hour ago
                Would ChatGPT when live interacting with a user even have to respect robots.txt? I would think the robots.txt only applies to automatic crawling. When directed by a user, one could argue that ChatGPT is basically the user agent the user is using to view the web. If you wanted to write a browser extension that shows the reading time for all search results on google, would you respect robots.txt when prefetching all pages from the results? I probably wouldn’t, because that’s not really automated crawling to me.
        • StableAlkyne9 hours ago
          If it ends up anywhere near as popular as Google, those sites will have a financial incentive to allow the crawlers.

          The average person just does not discover content without the search engine recommending it.

          • jsheard9 hours ago
            The whole issue that site owners have with these AI search engines is that there isn't a financial incentive for them to cooperate, since the summarization largely replaces the need for users to click through to the site the information came from. No click-through, no ad impressions, no possibility of the user being converted into a recurring visitor or paid subscriber, just pure freeloading by the search engine.
    • joshdavham8 hours ago
      > excited to see what they cook

      Me too! I've really started to dislike Google search recently and am super excited we now have more viable options!

  • davidclark4 hours ago
    > Ask a question in a more natural, conversational way

    I think this might actually be my main pain point with LLMs. Personally, I don’t want this.

    I understand it might be helpful for other people. But, I prefer highly specific, advanced search functionality, such as site: or filetype: in google/ddg searches.

    scryfall.com for magic the gathering cards is a great example. I’d much prefer typing a few brief flags such as “id=r” instead of “Get me all red identity cards.” And I know I’m getting all red identity cards with scryfall’s current search functionality.

    They are also composable, so I can add/drop ones easily instead of perfectly rephrasing a whole sentence because I wanted to change one clause.

    I’d need the same level of trust in the LLM’s filtering capabilities as I do in those boolean or regex matching field filters. An escape hatch to hard filters probably would be best for my experience searching things.
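
    (To make the composability point concrete, something like the following - add or drop one token and the query still means exactly what it says, assuming I've got the Scryfall operators right:)

        id<=r t:creature mv<=3
        id<=r t:creature mv<=3 o:haste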

  • grbsh9 hours ago
    Is this more than just ChatGPT with search API results concatenated to the prompt?

    It feels like it might be. It feels tasteful in the same way that Apple ecosystem integrations just work really nicely and intuitively. But then again, there is an art to keying and retrieving embeddings, and it might just be that.

  • yapyap1 hour ago
    So was the plan to fuck up Google search so badly that only AI can still search the internet, or what? Cause otherwise this is a nonsensical feature.
  • jameshiew9 hours ago
    The new web search icon appeared for me straightaway in the ChatGPT macOS desktop app, within an in-progress conversation, without even having to restart. Before I'd even seen this official launch announcement. Very smooth!
  • dserban2 hours ago
    Looking at this, it won't be long until OpenAI starts offering a version of this to ecommerce companies for their search backends. It's a space that is held back by corporate corruption, and it's ripe for disruption.
  • PittleyDunkin9 hours ago
    Is there any indication they're willing to improve on google in terms of e.g. excluding commercial results? If not it's not clear how this improves anything. Google has been excellent at semantic search for a long time; the issue has been the lack of controls to filter out the SEO bullshit and to remove the AI stuff from the top and the right of the results. It's been way too easy to game search with sufficient funding for well over a decade now and the AI-generated crap is a long way from production-ready (in terms of quality; obviously it generates something).
    • arromatic9 hours ago
      Yeah. It needs to filter out SEO-optimized articles first and search more niche sites, or it will be your average "ChatGPT with search" project from GitHub.
  • davedx8 hours ago
    I asked it to do its own DCF model of Paypal using current data and it did, using inputs from three different financial data sources.

    This is incredible and a direct threat to Google’s core biz.

    • airstrike5 hours ago
      I asked for a DCF model of NVDA and it was incorrect on basically every aspect, from its understanding of DCF valuation, to the sources it used (or lack thereof), to the result it claimed the script would have calculated ($150 vs. $359 if you actually ran it)

      As we used to say in the street "garbage in, garbage out": https://chatgpt.com/share/6723e865-d458-8011-b2ef-1a579026e6...
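
      (For anyone wondering what it botched: the mechanics of a bare-bones DCF are only a few lines. A sketch with made-up illustrative numbers, not real NVDA inputs:)

          # Bare-bones DCF: discount projected free cash flows plus a terminal value,
          # then divide by share count. Every number here is illustrative only.
          fcf = [30e9, 36e9, 43e9, 50e9, 58e9]       # projected free cash flow, years 1-5
          r, g = 0.10, 0.03                          # discount rate, terminal growth rate
          pv_fcf = sum(cf / (1 + r) ** t for t, cf in enumerate(fcf, start=1))
          terminal = fcf[-1] * (1 + g) / (r - g) / (1 + r) ** len(fcf)
          equity_value = pv_fcf + terminal           # ignoring net debt/cash for brevity
          shares_outstanding = 2.5e9                 # illustrative
          print(equity_value / shares_outstanding)   # implied value per share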

  • santiagobasulto6 hours ago
    This is great news. But the fact that I'll keep accumulating chats in that left bar, without any way to manage them or search through them, gives me anxiety.

    Today I was looking for an old (and useful) chat I had a few months ago and I had to export the whole chat history, wait for the zip file, and write a Python script to find what I was looking for :/
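
    (In case it saves someone the same detour, the whole script fits in a few lines. This assumes the export's conversations.json has a "title" and a "mapping" of message nodes, which is what my export looked like - the format may change:)

        # Grep an exported ChatGPT archive for a keyword and print matching chat titles.
        import json, sys

        needle = sys.argv[1].lower()
        for convo in json.load(open("conversations.json")):
            haystack = json.dumps(convo.get("mapping", {})).lower()
            if needle in convo.get("title", "").lower() or needle in haystack:
                print(convo.get("title", "(untitled)"))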

  • haolez2 hours ago
    This is just a hunch, but it feels like OpenAI has lost research power and is now focusing on engineering challenges, since that's what they can execute for now. The recent releases of o1 and Search don't seem to advance their core models.
  • gwbas1c5 hours ago
    Phind kinda-sorta did this. For a while I used it as my primary search, but limitations made me go back to Google.

    Hopefully ChatGPT's version works very well. Phind was more of a kludge to demonstrate what combining chat AI and search can do.

    • rushingcreek5 hours ago
      We're working on a complete redesign of Phind to make it more of a search competitor :)

      What were the limitations you ran into?

  • waynecochran1 hour ago
    As soon as they integrate a good Wolfram Alpha and let me solve math and physics problems with accuracy, I'll be excited.
  • AzzyHN1 hour ago
    This is just the next step of SEO. Or, I guess, the next step against SEO
  • impulser_4 hours ago
    Why would I use this over Google? This has the same problem as Perplexity.

    Google is instant, why would I wait for a bunch of text to generate just to get basic information.

    • esafak29 minutes ago
      Google instantly gives bad results, that's why.
  • randcraw9 hours ago
    I really don't see the value of summarizing/repackaging web search hits. Given that 99% of SEO-tuned web content is just shilling for vendors who don't want to be seen, LLM search summarization will just repackage those ads into a more palatable format that is LESS useful than the original, while more successfully hiding the obvious signatures that used to be a clear warning sign that... THE.FOLLOWING.CONTENT.IS.MANIPULATIVE.CRAP.
    • cloudking8 hours ago
      I think the value here is not in searching for SEO crap, but turning it on when you want to get references to the most current information relevant to your query.

      For example, if you ask LLMs to build code using the three.js library, nearly all of them will reference version r128, presumably because that version has the largest representation in the training data set. Now, you can turn this on and ask it to reference the latest version, and it will search the web and find r170 and the latest documentation to consider in its response.

      I was already doing this before by adding "search the web for the latest version first" in my prompts, now I can just click a button. That's useful.

    • snakeyjake9 hours ago
      People who think AI summarizations are useful suck at reading.

      So they probably wouldn't notice the warning signs anyways.

    • ghayes8 hours ago
      I tend to agree. If I ask ChatGPT what is the best way to make pasta, it will pull from every source it’s ever been trained on. If it decides to search the web, it will mostly cater to one or two sources.
    • arromatic8 hours ago
      I think if they improve the algorithm maybe they can actually present seo free results.
      • randcraw8 hours ago
        You don't think SEO-LLMs will evolve to redirect search-LLMs to 'see the world' the way the SEO-LLMs want it to? I foresee SEO-LLM-brinkmanship as the inevitable outcome. Soon THIS will be the catalyst for the real Skynet -- battling smart ad engines.
        • arromatic8 hours ago
          Only if OpenAI is willing to play along. If they follow Google, then SEO-LLMs won't even exist, because there will be no need for them.
    • lawn7 hours ago
      To me the combination of Kagi's good search results with their AI summarizer has been very useful.

      Of course, layering an LLM on top of garbage will still produce garbage.

  • mbreese5 hours ago
    I’ve been starting to see a shift in my use of ChatGPT. Over the past few months, I’ve started using it more for “knowledge” questions (eg: Can you give me a list of genes involved in disease X?). Where before I would have used Google for such a thing, LLMs have gotten good enough that for quick searches, they are “good enough”. Yes, you still need to validate the results, but it’s usually a very good start.

    Bonus points for then being able to ask for the results in a specific format.

    I'm looking forward to seeing how a fully built search engine starts to look.

  • dredmorbius40 minutes ago
    As with several others in this thread, I've been using Kagi's GPT-aided search, FastGPT, for much of this year, figuring I should probably dip my toes into what LLMs are making possible, as well as their limits.

    For someone who's used online keyword-based search since the 1980s (computerised library catalogues, at the time), it's jarring for me to get over the distinction of querying for documents (old school) as opposed to asking direct questions, but that's precisely what LLM-based GPTs facilitate.

    And as I'd noted this past June, it's a sea change in online search:

    [O]ne of the upsides of GPTs / LLMs is that they provide direct answers to questions, though those answers may be hallucinations or generalisations. Even then, the directness is refreshing, though I expect it to also get polluted rapidly through both AI SEO manipulation and advertising / general enshittification of AI engines.

    <https://toot.cat/@dredmorbius/112577405443953191>

    I generally rely on Kagi's LLM for various reasons, but foremost is that it relies on current Web search and cites its findings specifically, which makes validating responses and detecting hallucinations far easier. ChatGPT specifically would hallucinate not only its responses but the citations it provided when those were requested, which curbed my enthusiasm greatly. It'll be interesting to see how its search-oriented offering fares.

    I strongly agree with Temporal's excellent observation that this is, at least for the moment, a strong shift of the Web back to serving readers rather than advertisers (first and foremost) and publishers (distant second): <https://news.ycombinator.com/item?id=42011414>.

    What I strongly suspect is that any successful GPT search tools will be rapidly engrossed by existing search monopolies. For those who defend the "free market" on the basis that competitors can emerge, the countervailing force of mergers and acquisitions must be noted, as well as the fact that these almost always effectively destroy that competitive potential, at least over the past half century or so.

    • dredmorbius32 minutes ago
      And among those other uses, I've found that generic LLMs are surprisingly effective as translation engines though they were not specifically intended for this role. That includes "translating" to various dialects (which may be easier to proof or check).

      The results seem to be better for strongly represented languages (e.g., English, Spanish, German, French), less so for those which may be less prevalent online (e.g., Yoruba):

      <https://toot.cat/@dredmorbius/113283430170325203>

  • awb9 hours ago
    LLMs have the chance to cannibalize the web and become the primary interface for knowledge. But if websites remain the final destination, it’s good for content creators.

    The only other way to kill the web without killing LLMs in the process would be to create a way for people to upload structured public content directly into an LLM’s training. That would delay public content into release batches unless training can be sped up significantly.

    • nextworddev9 hours ago
      “Way to put public content directly into an LLM training” - sounds like Chatgpt
      • awb7 hours ago
        I’m imagining LLMs might eventually have an upload tool, similar to Google’s site map upload, for registering content proactively instead of needing to be discovered through crawling the web or training on chat data.
  • wiremine8 hours ago
    I played around with it a bit, here are some hot takes.

    For context, I first tried this procession of searches on the Mac OS app.

    1. "Who won the world series" 2. Who was the MVP?" 3. "Give me his bio"

    My observations:

    1. UX: The "search" button feels oddly placed, but I can't put my finger on it. But once I got it is a toggle, it wasn't a bit deal.

    2. The first result had 3 logos, headlines and timestamps delineated, and easy to read. The second and third ones included a "Sources" button that opened a fly-open menu. Clicking those opened a web link. The third result also included images in the fly-open.

    3. Citations were also inlined. The third result, for the bio, included a citation per paragraph.

    4. It wasn't as fast as google. Which makes sense, given it's going through the LLM. But it will take a while to rewire my brain to expect slower responses to search.

    5. Overall, I found the chat interface a very intuitive interface.

    The second search I asked was "Give me a plan for a Thanksgiving meal."

    I got a long response that felt like a weird mashup of LLM-generated content and search results:

    1. A list of menu selections

    2. Links to some recipes

    3. Preparation timeline

    4. Shopping list

    5. Additional tips

    There were 15 citations listed in the popup button, but only 3 inlined.

    This was... not great. A traditional list of search results feels better here.

    Overall, I like the direction. Innovation in search has been dead for close to 10 years, and this feels like I'd use it for certain inquiries.

    • rawgabbit1 hour ago
      I asked "Find me articles about the World Series MVP." ChatGPT returned snippets and URLs from MLB, Reuters, AP, NYPost, and FoxSports.

      I asked "Please find articles about planning a Thanksgiving meal for a family reunion." It returned links to GatheredAgain, FavFamilyRecipes, TastesBetterFromScratch, etc. I like that it is returning niche sites I do not know about.

  • snadal5 hours ago
    I would be happy if I could search my previous ChatGPT conversations! :)
  • epolanski4 hours ago
    How does it get around websites' scraping limitations?

    I've tried Gemini Flash: I've given it links to websites, but it claimed to have knowledge of them, or to be able to read them, only some of the time (queries of the kind "summarize https://foo-bar/news-1").

  • data_spy5 hours ago
    I did a simple test of "Chicago Velodrome", it provided something that has been closed since 2016, Google gave a better result of something that is actually around but in the Chicagoland area
  • kensai4 hours ago
    Meanwhile at Google HQ: I felt a great disturbance in the Force, as if millions of voices suddenly cried out in terror and were suddenly silenced…
  • pton_xd7 hours ago
    The social engineering possibilities with a tool like this are endless. Google already wields enormous visibility power but ultimately just provides a list of links to other sources.

    This can subtly (or not so subtly) rephrase and reshape the way we read about and think about every topic.

  • 6 hours ago
    undefined
  • teleforce2 hours ago
    Congrats ChatGPT team for the release.

    I can't believe that I'm saying this, but now after more than 20 years of using Google I'm finally paying for search.

  • seydor7 hours ago
    I don't see a way for websites to monetize giving this content to OpenAI. This is not 2001 anymore; people should not expect to give away this information for nothing. I would be blocking OpenAI bots ASAP.
  • 7thpower9 hours ago
    OpenAIs press release game is unreal. This totally overshadowed Google’s grounding release in my feeds.
  • ryukoposting2 hours ago
    Ah, so if we combine search engines that index SEO spam, and LLMs trained on SEO spam, then what we will get is better!
  • submeta3 hours ago
    Just compared it to Perplexity which offers web search, can use many models to process the results, and presents results more beautifully.

    So ChatGPT‘s search looks rather rudimentary compared with Perplexity.

  • svara5 hours ago
    What is currently considered the state of the art when building things like this?

    I've heard people find RAG not to work very well, is that accurate? Is it just about using the right embeddings?

    I suppose ideally you just put the sources in the context window, which becomes limiting with large amounts of text?
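
    (For what it's worth, the plain-vanilla version most people start from is tiny: embed the query and the candidate documents, take the nearest ones, and stuff them into the prompt. A sketch using the OpenAI embeddings/chat APIs - the model names are just current defaults, swap in whatever you use:)

        # Minimal RAG: rank documents by cosine similarity to the query embedding,
        # then answer from the top-ranked documents only.
        import numpy as np
        from openai import OpenAI

        client = OpenAI()
        docs = ["...search result snippet 1...", "...snippet 2...", "...snippet 3..."]
        question = "what does the user want to know?"

        def embed(texts):
            resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
            return np.array([d.embedding for d in resp.data])

        doc_vecs, q_vec = embed(docs), embed([question])[0]
        scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
        context = "\n\n".join(docs[i] for i in scores.argsort()[::-1][:2])

        answer = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user",
                       "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
        )
        print(answer.choices[0].message.content)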

  • itissid7 hours ago
    There should (or probably will) be a study on how long people take on Google etc. vs. search powered by ChatGPT to get non-trivial work done, controlling for obvious factors like age, gender, country, and industry.

    If there is a bias towards ChatGPT-like tools of even ~5%, it would be worth investigating why. My hunch is that the conversational aspect of describing things at a high level and finding answers, while avoiding the distraction of several dozen windows, is worth it.

  • benbristow5 hours ago
    I thought this would be more like an actual search engine (like Google with ChatGPT built in). Microsoft have been doing similar with Copilot since its inception, albeit they seem to have removed the web search functionality with the latest update.
  • stainablesteel2 hours ago
    It's slightly undermined by the fact that I can't type a query into my URL bar.
  • nextworddev3 hours ago
    What’s funny is that web search was one of the first “chatgpt plugins” in the market, which flopped terribly. This is basically a rebrand of that.
  • hintymad7 hours ago
    I'm curious how ChatGPT Search improves recall and reduces spam. I can see how an LLM helps find the most relevant content. However, it is still hard to find the right content, especially in a domain that has tons of spam. For instance, when I search for a product review on Google, I get back many review sites that look so legit that I have a hard time telling which ones are spammers.
  • janandonly4 hours ago
    Very silly question, no doubt, but why don't we train LLMs' language skills on internet fluff, and their actual knowledge only on reputable sources like Wikipedia and Britannica?
  • Pesthuf6 hours ago
    Bing search gets so much dumber when it searches the web that I've developed the muscle memory to append #nosearch to every prompt so that hopefully it won't. It just searches for unrelated stuff and then repeats random Stack Overflow answers to problems that have nothing to do with what I asked.
  • niek_pas6 hours ago
    I'm a little confused: I've searched the web with ChatGPT before. How is this different?
    • esafak19 minutes ago
      ChatGPT tells me it can't access the web?
    • chankstein385 hours ago
      Same here. I ask it to search all of the time. I guess maybe now it is required to provide sources? It usually did before but there were times it didn't so maybe the update is that it's required to?
    • kingstoned6 hours ago
      It's not that different, Google's Gemini already provides this functionality where they have source/links in their responses...
  • arealaccount5 hours ago
    Using language models to summarize search results seems like a great use for LLMs. Summarize all of the 50 pages of SEO spam and maybe you will get what you're looking for.
  • galvanizednuts5 hours ago
    Has there been any indication on whether or not adtech is mobilizing to integrate with LLMs to provide “promoted” responses? I’m shocked it isn’t already a burgeoning area for new startups

    It sucks but it’ll happen for sure

  • krick2 hours ago
    This is the right direction. The more I use ChatGPT, the more sure I am that I don't want to listen to its "answers". After all the improvements, it has only gotten better at making it less obvious that it's bullshitting me. It still makes up random stuff, and does it very convincingly. Still, on many occasions it seems so much easier to ask it than to do 10 searches myself and read 20 lengthy articles; it seems it should save me a lot of time... But then it usually ends up with me discovering that I've spent 2x the time, basically to find out this stupid thing misled me at the very beginning by making up some wrong detail or something.

    I think, I don't even want it to talk to me without searching the web first anymore. I want just sources and summaries. So I hope search will perform better.

  • Lws8038 hours ago
    I think generative search itself has room for disruption, and I'm not too sure a chat interface or a Perplexity-style one is necessarily the right way to go about it.

    I'd like to see search (or research in broader sense) a more controllable activity with the ability to specify context + sources easily in the form of apps, agents and content.

  • suyash5 hours ago
    We need this type of disruption for :

    1. Mobile apps: Don't want to see intrusive apps

    2. YouTube: Don't want to be interrupted with ads and no I don't want to buy premium service.

  • holtkam29 hours ago
    How is this different from RAG using a search API? I didn't get their blurb about this being a standalone fine-tuned model.
  • EcommerceFlow8 hours ago
    I wonder if this is their own web scraper, or if they're using Bing API?

    As a very experienced SEO, this is pretty exciting nonetheless, a new front in the online war opening up.

    If they're using their own scraper/search algorithms, it'll be interesting to see how they weigh the winners and losers compared to how Google does it.

  • aloer4 hours ago
    There is so much value in blue links turning purple and showing the same content on repeat visits.

    This kind of permanence is a huge loss

  • arromatic9 hours ago
    Can it find obscure sites like Marginalia does, or personal blogs posted on HN, or is it just another Bing + AI summarizer?
    • ColinHayhurst8 hours ago
      I asked for long tail blog posts about interesting places to visit in Paris. I got one result; from an obscure website called Vogue.
  • xnorswap9 hours ago
    "Introducing" without actually just linking the search page.

    That's not an introduction, that's a teaser trailer.

    If they want this to be a viable search it needs to be available quickly, and anonymously from something quick to type in.

    Google would have been annoying as shit if you had to go to google.com/search , let alone then log in.

    • solfox9 hours ago
      It's available today for Plus users at chatgpt.com.
      • timeon8 hours ago
        Using search with log-in is pretty big red flag for me.
    • posterman9 hours ago
      I mean, until there is an alternative in the space that offers (good) free, anonymous AI web search, I think we can probably assume you are confusing what you want with what is "viable".
  • nelox3 hours ago
    Will OpenAI pay Apple around USD$20bn per year to be the default search like Google?
    • esafak18 minutes ago
      With what money, Microsoft's? That would be funny.
  • swyx5 hours ago
    Notice the conspicuous lack of the New York Times. The partner-vs-lawsuit stakes of traditional media vs. OpenAI are playing out now.
  • jen729w1 hour ago
    I just tried:

    'beef vindaloo recipe from nagi [0], but then just strip all the shit out and give me the recipe'

    ...and got exactly what I wanted. The web's dead, baby.

    [0]: The lady behind recipetineats.com, a terrific recipe site. And she's an A+ human.

  • DSingularity8 hours ago
    Is this a move to try to set up a pathway for getting some data on realtime trends? In other words, something for quickly getting some model updates for hot prompts like "what happened in the debate".

    Or is this something they’ve already solved?

  • ChrisArchitect8 hours ago
    Not sure what my use case for this would be if it's expecting me to type full descriptive sentences to check something quick or find a picture, and then read a whole paragraph of a useless reply. No results. No Reddit posts (which aren't even what I want, but I understand a lot of content is buried in user-generated content). They seem to be implying this is the replacement for Google. This just isn't it.

    Edit: ohh, only Pro users? Right. ok. They made it seem like this was the big search launch and to go to chatgpt.com to get into it. Moving on.

  • niemandhier7 hours ago
    I just checked; there are a lot of topics it will refuse to generate search results for.

    Sure normal search is policed too, but usually not based on moral judgments but on legal necessities.

  • jmakov7 hours ago
    So basically what phind.com has been doing all the time?
  • cryptozeus9 hours ago
    I would be surprised if this doesn’t take share out of google’s pie
  • stephc_int135 hours ago
    The reason this is so appealing is because Google is almost useless these days.
  • maleldil7 hours ago
    If they make it possible to add this a search engine on Firefox like it's possible for Perplexity, I might drop Perplexity for good.
  • ionwake8 hours ago
    Can someone give an example of the type of search query that this Search Engine would excel at?

    ( I tried getting the top hackernews posts but it was 5 days old? )

  • jayanth-vijay8 hours ago
    Is there a new api model version available for search ?
  • jayanth-vijay8 hours ago
    Is there a new api model version for search available ?
    • lux5 hours ago
      Indeed. Search was working reliably for me for a while using gpt-4o via the API then yesterday suddenly stopped altogether right when I needed it. Hoping this fixes it soon.
  • rsync4 hours ago
    I maintain a little page titled "Misc. information I want search engines to find"[1] and so I asked ChatGPT things that are answered there:

    "How much tea tree oil by volume is in Dr Bronners tea tree oil liquid soap?"

    A. ... However, the exact volume or percentage of tea tree oil in the formulation is not publicly disclosed by the manufacturer.

    (which is incorrect, as the manufacturer disclosed it to me and I published it on the web)

    One conclusion is that the web indexing is relatively shallow.

    However ...

    "Where does the founder of rsync.net live?"

    A. "John Kozubik, the founder and CEO of rsync.net, resides in the San Francisco Bay Area."

    ... and the source is kozubik.com ... which means they did index my page but only retain, or weight, some of it ?

    Meanwhile ... ublock showing >3k denials during this five minute interaction. I guess we can conclude something about where they are directing their time and energy.

    [1] https://www.kozubik.com/personal/misc.txt

    • krackers1 hour ago
      Why should anyone trust your page though about Dr Bronners tea tree oil? "Not publicly disclosed" seems like the better answer compared to citing some random yahoo on the web who may be lying.
  • vladsanchez8 hours ago
    Perplexity does that already and more! shrug
  • _sys491526 hours ago
    Is this going to give me lyrics or recipe links, or will these categories continue to be nerfed by OpenAI? It appears to write over its response where it provided me a website link, and gives me an unclickable blue website name instead.
    • numbers6 hours ago
      Lyrics aren't possible due to copyright restrictions (according to ChatGPT). I wonder what else won't be easy to find on here.
  • zaptrem9 hours ago
    Could OpenAI make it impossible for startups to try to build AI search engines by signing all these paid agreements with publishers?
  • notkoalas7 hours ago
    A lot of people are focusing on its reliance on existing SEO - but they say right in the post that they're integrating directly with data brokers more. They're subtly arguing for the elimination of a certain kind of middle man.
  • bagels9 hours ago
    A new front has been opened in the SEO wars.
  • topaz05 hours ago
    The costs are too great. We can't go on like this.
  • ncrtower7 hours ago
    If you ask ChatGPT which search engine it uses, it will tell you Bing. And only Bing.
  • gsemyong4 hours ago
    It's creepy how fast you can find lots of information about any person using this. I think something should be done about that.
  • 9 hours ago
    undefined
  • wifipunk9 hours ago
    Looks like they've also enabled advanced voice mode on the windows desktop app.

    Does not support search for anyone wondering.

  • grbsh9 hours ago
    How will this be gamed for neo-seo spam?
  • amelius7 hours ago
    Ironic that Google caused their own demise by publishing that Transformers paper.
  • wg08 hours ago
    Isn't this almost what's available in Gemini on Android phones already? kind of?
  • agnosticmantis2 hours ago
    Now OpenAI only needs to figure out how to do Search profitably instead of burning VC money.

    Only then will the comparison with profitable search engines be sensible. Before enshittification, all VC-backed products are a delight to use, but after the honeymoon phase there'll be ads all over the results/responses.

  • jscottbee5 hours ago
    Is a web search worth the added power requirements? In the USA, limits are being placed on AI use in data centers due to the lack of generation capacity to cover it all.
  • asah4 hours ago
    I turned on "search the web" to compare vs. Google Search + AI labs, and ran a battery of typical tests. Google was instantaneous vs. gpt-4o, which wrote out each letter.

    Subjectively, I'm not switching for quick searches - Google remains lightning fast and is good enough. But I already use gpt/claude/etc for conceptual searches and deeper analysis.

    ---

    [used leica q3] ==> google (product listings and website; Chatgpt told me about the Leica q3 and mentioned ebay)

    [value of mac air m1] ==> neither!! (google was useless videos and crap; chatgpt gave me a price range and useful explanation... which made no sense - used was the same price or higher than new!)

    [vogue lyrics] ==> google wins (gave me the lyrics; Chatgpt whined about copyright restrictions and sent me to a youtube video)

    [weather in nyc] ==> tie (both provided correct, rich detail about the current weather)

    [root causes of ww1] ==> tie (both identified Militarism, Alliances, Imperialism, Nationalism, explained each and then mentioned the assassination of the Archduke as the triggering event)

    [bohemia to midtown] ==> equally bad (both figured out that it's a request for local directions, but neither just gave me directions until I gave a specific destination)

    [bohemia to penn station] ==> ??? (chatgpt correctly gave me bohemia ny where google picked some obscure local listing; otoh chatgpt wrote out directions where google gave me a nice map)

    [btc to usd] ==> tie (both got today's price)

    [what time is it in stockholm] ==> tie (both got it right)

    [iphone 16 vs 14] ==> chatgpt wins (nice comparison; google didn't pop search labs and just gave me websites)

    [ffmpeg to clip the last 3 secs of a video] ==> chatgpt ?! (I didn't love either answer TBH; see the one-liner at the end of this list)

    [456+789] ==> google (both gave the answer, but google included a nice calculator)

    ...and the stuff people really want:

    [porn] ==> google (gpt whined about policy violations; google gave pornhub and other "useful" results)
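
    For the ffmpeg item above, the answer I was fishing for is basically a one-liner (with the caveat that -c copy snaps to keyframes; drop it and re-encode if you need frame accuracy):

        ffmpeg -sseof -3 -i input.mp4 -c copy last3s.mp4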

  • andrewinardeer8 hours ago
    Doesn't search porn. DOA.
    • 7 hours ago
      undefined
  • neillyons5 hours ago
    Definitely "Code Red" for Google Search
  • shitter9 hours ago
    I asked it the current weather in my area and the temperature was off by 23 degrees F.
    • Maxion9 hours ago
      What the hell did you ask it / what was the source? I just did the same thing and it gave me the correct answer, using my country's best-known meteorological site.
      • shitter9 hours ago
        It used good sources but appeared not to extract the information correctly.

        Repeating the same query in the same chat session gave me an accurate answer.

  • guluarte7 hours ago
    I've been saying the biggest threat to Google is LLMs.
  • phreeza9 hours ago
    I gave it a quick spin and my initial impression is much worse than perplexity.
  • typon9 hours ago
    Asked it to generate code for a library that was released in the past year - GPT-4 couldn't do it and this one just did it flawlessly. I am super impressed.
    • RobinL9 hours ago
      Same! (asking it to write code for the foss lib I maintain). This is immediately very useful.
    • alanfranz9 hours ago
      Hello bot. This is a search functionality not a new model.
      • typon9 hours ago
        Maybe you don't understand how this works?

        It's able to query the relevant documentation, put it in its context and then use that to generate code. It's extremely relevant to giving existing models superior functionality.

        • alanfranz5 hours ago
          I haven't tried it yet. But that comment seemed extremely vague and misleading.
  • jdulay198 hours ago
    I wonder how long until it will offer reverse image search, too.
  • torginus8 hours ago
    Honestly I don't love this 'streaming LLM text' effect, or the wordiness with which ChatGPT 'chats' with me. I consider LLMs to be machines, not conversation partners, and frankly I find the notion of chatting with an artificial being a bit creepy (unless it's specifically what I want, as some sort of escapism). I wish they tried to be as terse as possible (and faster too).
  • pvo505558 hours ago
    Axel Springer! Now that's high-quality news sources...
  • 2OEH8eoCRo02 hours ago
    Where is the Cortana-like video chat they demoed awhile back?
  • vzaliva8 hours ago
    Is there a way to add it as a search engine to Firefox?
  • utilityhotbar5 hours ago
    I consider it very characteristic of Silicon Valley that nobody can think of a more immediately useful task for something supposedly capable of multi-stage intelligent planning and execution than "plan me a vacation to X location". Are all of their hundreds of millions of prospective customers software engineers with long holidays and too much disposable income? xD
  • okasaki6 hours ago
    I was hoping this would be a search engine to search across your chatgpt conversations, but no.
  • grayprog6 hours ago
    I searched for what the top stories on HN are today, and one of the results was that OpenAI released ChatGPT 5? WTF? Another was about Rust 2.0. Also, the sources listed were strange: not Hacker News, although it did show a Hacker News URL on top, just not in the sources. https://imgur.com/a/AoFiEQt
  • faragon8 hours ago
    What I find incredible is that Google has had the knowledge and resources to do this for at least five years, yet they're still milking the "old cow". It reminds me of Intel sitting on their money while a near-bankrupt AMD sped past them.
    • moralestapia8 hours ago
      Google has the web on its hands, but they also have Pichai, which is a -100x multiplier.
  • moralestapia8 hours ago
    This is great, I can't wait to get rid of Google and all the crap that comes with it.

    Hopefully this also provides a strong negative force against SEO and, again, all the crap that comes nowadays thanks to Google.

  • paul79869 hours ago
    • sunaookami8 hours ago
      And https://chatgpt.com/?hints=search&q=%s if you want to add a custom search engine to your browser
    • bityard9 hours ago
      That link just takes me to what looks like a normal chatgpt prompt. (I tried asking it the same things they showed in the article and I just get generic AI answers, not web search results.)
      • bhy9 hours ago
        Are you a Plus or Team user, and logged in? The link is to the normal ChatGPT prompt, but with the "Search" button enabled.
        • paul79868 hours ago
          I'm not logged into GPT (I am subscribed for this month, though, on my iPhone, but that's separate) and I'm able to do a web search and/or ask GPT a question.

          Actually, I am logged into iCloud on my MacBook, so I guess that's why I'm seeing the search on that device (I'm not seeing it on another where I'm not logged into iCloud).

      • DrBenCarson9 hours ago
        Only available for Plus subscribers
  • rvz9 hours ago
    AKA Bing Search in ChatGPT.

    So it is not its own search engine and is still using Bing for its results, just like the rest of them.

    • solfox9 hours ago
      Reference? This isn't mentioned anywhere and certainly is not implied.
    • sidcool9 hours ago
      Is it mentioned on that page? Didn't see it.
      • tredre39 hours ago

            To provide relevant responses to your questions, ChatGPT searches based on your prompts and may share disassociated search queries with third-party search providers such as Bing. For more information, see our Privacy Policy and Microsoft’s privacy policy. ChatGPT also collects general location information based on your IP address and may share it with third-party search providers to improve the accuracy of your results.
        
        https://help.openai.com/en/articles/9237897-chatgpt-search (TFA links to it in the How it works section)
    • DrBenCarson9 hours ago
      No, it's using the Bing index. The generated responses are OpenAI's.

      Many search engines use the Bing index but return different results

    • nerdponx9 hours ago
      That makes for a fair side-by-side comparison, then.
    • findthewords9 hours ago
      If it's better than Bing or Google in presenting the relevant result in a condensed way, it's still a win for the users.
      • arromatic9 hours ago
        Why would I want condensed results? Why do you think I would want a condensed version of this post [0], for example?

        [0] https://danluu.com/ballmer/

      • PittleyDunkin9 hours ago
        > If it's better than Bing or Google in presenting the relevant result in a condensed way

        This doesn't matter if the results are user-hostile, as both search engines are.

  • Marius_Manola3 hours ago
    So, should we expect Perplexity to post that "OpenAI killed my startup" kind of article? (just saying)
  • jorblumesea6 hours ago
    It's great we chose LLMs for this. You know, a model that is completely accurate and has no hallucination problems whatsoever.
  • surfingdino6 hours ago
    AdSense/AdWords-like network too? Really curious how it would work in ChatGPT Search.
  • taytus6 hours ago
    Maybe I'm alone on this, but I genuinely believe that SEO ruined the web.
  • stonethrowaway6 hours ago
    This time it’s different and it changes everything and it’s our best one yet.
  • sergiotapia7 hours ago
    It's crazy how much "vibes" affect perception of a product. OpenAI just always feels cold and alien to me, compared to Anthropic's and Perplexity's warmth.
  • AcerbicZero6 hours ago
    I don't mean this in a bad way, but seriously, about time. This was the best use case for LLMs on day one, imo.
  • kristofferR7 hours ago
    This is a way better UI than expected (I expected a totally separate search website, perhaps due to ignorance). I'm gonna use this a ton
  • soheil9 hours ago
    I made one very similar; it's basically a wrapper around DuckDuckGo: https://foxacid.ai
  • throw_pm236 hours ago
    ok, so you now have to type "What's the weather in Positano?", because typing "Positano weather" into Google was too difficult.
    • das_keyboard2 hours ago
      This! So much this!

      I don't understand how this point doesn't come up more often. I don't want to waste time formulating a question when I can just use 2-3 keywords to get a result with traditional search.

      Imagine having to type "What does this error message mean..." every time you paste an error message into Google/DDG/Kagi.

  • kernal6 hours ago
    what time is it?

    >Currently, I don’t have access to real-time data like time and date. You can check the time on your device or search "current time" online for the latest local time wherever you are.

    Oh dear, we're off to a bad and slow start already.

    • JSDevOps4 hours ago
      The world is literally on fire and we are setting fire to more forests because AI can’t tell the time and date
  • josefritzishere7 hours ago
    They're trying to find new places to cram their money sink of a product into, in the hope of pretending there is a revenue model.
  • some_furry9 hours ago
    So glad that we're boiling the ocean for this.
    • JSDevOps4 hours ago
      That’s what I thought
  • xyst6 hours ago
    I still don't like that ChatGPT doesn't want to provide sources for its answers.

    I gave it a query about a _somewhat_ common issue with Java apps and TLS:

    Query:

    > identify root cause of this Java exception: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

    Answer:

    > The exception you're encountering, PKIX path building failed, typically indicates an issue with SSL/TLS certificate validation. Specifically, the error unable to find valid certification path to requested target suggests that the Java application cannot find a trusted certificate for the server it is trying to connect to. Here are the common root causes and solutions:

    Missing Certificate in Truststore: The server's SSL certificate (or the certificate of its issuing authority) is not present in the Java truststore. Solution: Import the server's certificate into the Java truststore using the keytool command. You can obtain the certificate by using a browser or tools like openssl.

    Self-Signed Certificate: If the server is using a self-signed certificate, it won't be trusted by default. Solution: Again, import the self-signed certificate into the Java truststore.

    Incorrect Truststore Configuration: The application might be pointing to an incorrect truststore or not configured to use one at all. Solution: Ensure the application is configured to use the correct truststore and that the path is correctly set.

    Expired Certificate: The certificate used by the server might be expired. Solution: Check the server's certificate validity and renew it if necessary.

    Intermediate Certificates Missing: If the server's certificate chain is incomplete (i.e., missing intermediate certificates), it can lead to this error. Solution: Ensure that the complete certificate chain is presented by the server, or manually import any missing intermediate certificates.

    Network Issues: There may be issues with the network or firewalls blocking access to the certificate validation services. Solution: Check network configurations and ensure proper access.

    To troubleshoot further, you can enable Java SSL debugging by adding the following JVM argument:

        -Djavax.net.debug=ssl

    This will provide more detailed output regarding the SSL handshake and help identify the specific issue.

    It's not a bad suggestion; these are my usual go-tos when diagnosing the cause (concrete commands sketched below).
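
    For anyone hitting the same thing, the truststore fix it keeps pointing at boils down to something like this; a rough sketch, assuming the failing endpoint is example.com:443 (a stand-in for the real host) and the JVM's default cacerts truststore:

        # grab the server's certificate (assumes openssl is available)
        openssl s_client -connect example.com:443 -showcerts </dev/null 2>/dev/null \
          | openssl x509 -outform PEM > server.pem

        # import it into the default truststore (Java 9+ layout; Java 8 uses jre/lib/security/cacerts; default password is "changeit")
        keytool -importcert -alias example-com -file server.pem \
          -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit -noprompt

        # re-run the app with SSL debugging to confirm the handshake now succeeds (app.jar is a placeholder)
        java -Djavax.net.debug=ssl -jar app.jar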

    But then, when queried about the sources for this information, the answer is:

    > The information is based on common knowledge about Java security, SSL/TLS configurations, and best practices for handling certificates. It reflects general principles and troubleshooting steps recognized in Java development and security documentation. If you're looking for specific references, the Oracle documentation on Java security and the keytool command, as well as community forums and resources on SSL/TLS issues, often cover these topics.

    “It’s just common knowledge bro. RTFM”

  • taco_emoji8 hours ago
    whatever
  • AIFounder2 hours ago
    [dead]
  • stiltzkin6 hours ago
    [dead]
  • andychert5 hours ago
    [dead]
  • nige1236 hours ago
    OOF

    Smell that!? A large part of Google's search business is on fire right now!

    There are three types of search: informational, transactional and navigational.

    LLMs are competing hard and fast for informational search. Once upon a time we offered 2.5 keywords to the Google Gods, only to be ultimately passed along to Stack Overflow.

    That game is up. Google is losing it faster than you can say, "anti-competitive practices in the search engine industry."

    Transactional and navigational search remain.