For the bubble to burst, investors would need to stop believing in the potential of AI in the near-to-medium term. I don't think we're quite there yet and, if we are, the article doesn't really support that claim.
There's still room for other companies to innovate and create real value with LLMs. But innovation is usually better done by challengers who have nothing to lose than incumbents who are, as the article correctly points out, trying to signal "trust me, we can innovate" to their (non-VC) investors.
Meanwhile, research is cracking along. Jim Fan, a researcher at NVIDIA, says we are not far from AI being able to recursively improve itself without human help: https://www.reddit.com/r/singularity/comments/1i5ghkq/jim_fa...
These are paying customers. This isn't a case of "you're the product." Yet Google chooses not to be good stewards of their customer base. The level of contempt big tech has for users really is something.
As programmers we complain about the ~1% of cases where copilot-type models produce terrible code. It's annoying, but you can live with it.
For many other uses, a 1% error rate, with no bound on how bad the hallucinated error can be, may be unworkable.
For example, Air Canada ended up liable for a discount which its AI chatbot made up.
In my opinion, this AI development stalemate is more layered. Big companies set such broad targets in the race to catch up with OpenAI that they lose focus on real use cases. So the loudest voices, the people good at navigating internal politics, end up well positioned to push their own ambitions over actual customer needs or technical practicality. They set goals that sound just a bit more exciting than their peers', which pulls resources their way. But the focus shifts to chasing KPIs rather than drilling into real problems. Even when they know going smaller is smarter, knowing and doing are two different things.
It’s still a great time for small AI startups. My favorite kind is a team that quickly learns a business’s needs and iterates toward the right interaction points to help. By staying focused on solving a lot of small related problems very fast, you can create something that feels like a real solution.
So it's not that there aren't good models, but you need to look around to find a good one. And on iOS it's harder to do that.
Every new technology and medium (AI has features of both) goes through a period where people unsuccessfully try to apply it to old paradigms before discovering the new ones that make it shine. Motion picture cameras seemed like a goofy fad for decades before people finally understood the unique potential of the medium and stopped trying to just film vaudeville stage shows.
I thought this was basically the core point supporting the conclusion, but I don't think people really want to pay for anything, which is why everything is ad-supported. You can't say people don't want AI just because they don't want to pay $20/month or whatever for it.
But rather than going back to the drawing board to make it more useful/appealing, they increased everyone's base subscription and made Gemini "free"; you know, the feature that paying customers demonstrably didn't value enough to pay for.
People don't like subscriptions.
But do people want AI that's rigged to constantly recommend Shopping Like A Billionaire at Temu™, either? Because that's the alternative if people won't pay.
This, despite the other AI product (the one everyone talks about, i.e. the $200 ChatGPT plan) being too successful, in the sense that people use it so much that the price should be higher, or there should be more tiers.