A topic that has recently gained traction: AI vendors have already spent over $50 billion purchasing GPUs, yet the total revenue generated in 2023 adds up to only $3 billion. Is this a good business?
Or rather, is this a sign of a severe bubble?
Meanwhile, Google has reached a settlement in a class-action lawsuit regarding Chrome's data collection and will delete billions of "private" (Incognito mode) browsing data records. Earlier, Google announced the gradual phase-out of third-party cookies and has already enabled Tracking Protection for a small percentage of random users.
This is the "eye" symbol shown on the far right of the Chrome address bar; a slash through it indicates that third-party cookies are disabled (this is a screenshot from my own browser, though I am unsure how many users have had this feature rolled out to them so far).
Clearly, this "protection" will cut into advertising revenue, with the gray-market data industry hit the hardest.
Why discuss the "AI bubble" alongside Chrome's phase-out of third-party cookies?
If AI applications were already generating strong cash flow, all doubts about a "bubble" would vanish. However, judged by any mobile-internet-era metric, whether DAU/MAU or retention, AI applications underperform the apps of that era. So how will OpenAI challenge Google, the king of the previous era?
Many believe that in the mobile internet era, users traded rights such as data privacy for free, high-quality services, with third-party cookies serving as a crucial data source. Whether taken actively or under pressure, Google's move signals the gradual start of a new era. The rise of "data privacy" awareness, coupled with the emergence of generative AI, marks the beginning of a transition from recommendation-based systems to XX (I can't find the exact word; Copilot, Agents, none of these seems sufficient to define an era).
Commercially, new revenue models are still shrouded in fog, while the old ones already face massive challenges. The ideal of AGI is far from able to support most players' reckless investment.
As for my views, they are quite simple:
I agree with the judgment of an "AI bubble." However, we cannot predict whether GPT-5 (expected in roughly three months) or some other model will send the market into a frenzy again; after all, the new law of "a model upgrade every six to nine months" still holds. Nor can we call the ultimate tipping point; no one in history has managed to time such turns consistently.
I disagree with the VC/PE-style conclusion that "AI applications just need time, as the mobile internet did." On the contrary, I have long been fascinated by the concept of the "one-person company." Under this concept, "super apps" may never exist. I also firmly believe that the flip side of Model-as-a-Service is not the application but the person: the person who assembles the models. The combination of "Model + Human" has one primary goal: generalized content production.
If we believe the model is the most important part, then personal data should be used only for personal models (services). "User behavior and privacy" will no longer be mere rows in a centralized batch job, but fuel held in a personal vault, released only with explicit permission for a model's "ephemeral processing."
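To make the idea concrete, here is a minimal, purely illustrative sketch of what such a permission-gated vault could look like. Everything here (the `PersonalVault` class, `grant`, `ephemeral_process`) is a hypothetical design of mine, not an existing API: data stays local, a model function may read a record only after an explicit grant, and the grant is consumed immediately so nothing lingers.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalVault:
    """Hypothetical local store for personal data with single-use access grants."""
    _records: dict = field(default_factory=dict)
    _grants: set = field(default_factory=set)

    def store(self, key: str, value) -> None:
        # Personal data never leaves this object except via ephemeral_process.
        self._records[key] = value

    def grant(self, key: str) -> None:
        # The user explicitly permits one use of this record.
        self._grants.add(key)

    def ephemeral_process(self, key: str, model_fn):
        """Feed one record to a model function, then revoke access immediately."""
        if key not in self._grants:
            raise PermissionError(f"no grant for {key!r}")
        try:
            return model_fn(self._records[key])
        finally:
            self._grants.discard(key)  # single-use: the grant does not persist

# Usage: a toy "model" sees the data only once, under an explicit grant.
vault = PersonalVault()
vault.store("browsing_history", ["site_a", "site_b"])
vault.grant("browsing_history")
summary = vault.ephemeral_process("browsing_history", lambda data: len(data))
print(summary)  # 2
```

A second call without a fresh grant raises `PermissionError`, which is the whole point: access is an event the user authorizes, not a standing pipeline.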
I also believe the implicit barrier to entry for generative AI is the highest in history; if it weren't, "it" wouldn't be worth such massive investment.