It’s not surprising; this question has lingered in my mind for a long time. For only slightly less time, I’ve been hesitating over whether or not to write about it.
Now, there is no more hesitation; I am ready to write it down.
Where to begin? Let’s start with a question.
If no new "focus of attention" emerges, will the number of people still enthusiastically discussing OpenClaw a month from now exceed even one-third of the current number?
I am not the target audience for OpenClaw. Tasks like automated information collection, background jobs, mobile-PC synchronization, content production, and social media posting—I essentially automated every part of these workflows that could be automated by the middle of last year. After that, I didn't need it anymore.
As more and more people use AI to collect and publish information, spending fifteen minutes to half an hour scanning the major platforms and news sites myself has become more efficient than anything AI collects for me. The overall rise in efficiency across the web benefits everyone.
In fact, I don't have that many things that must be done by AI. About 99% of the needs I came up with for myself over the past three years didn't actually exist; the moment I finished implementing them, the need vanished for me. Today, what I still require includes:

- Deep Research, to quickly gather basic information on specific topics;
- NotebookLM and my visualization tools, to quickly browse that information, find new "focuses," and organize them;
- Google Docs, my daily writing tool (despite its many flaws, I've tried dozens of alternatives and couldn't find or build anything better, perhaps for lack of patience);
- Build and Antigravity as coding IDEs;
- Claude Code and Codex in the command line, where every model is like "rolling the dice," and the biggest advantage of having multiple models is the ability to "change my luck."
My monthly AI spending is actually decreasing: I downgraded ChatGPT from Pro to Plus, Claude from Max to Pro, and cancelled my Perplexity subscription (the daily free allowance happens to satisfy my curiosity for gossip I don't want Google to record). I control the image output volume of Nano Banana Pro so that monthly costs don't exceed the credit from my Google Code Assistant subscription... and to further control things, I kept only one bank credit card and cancelled all others. These costs didn't really bother me, but once you start consciously saving money, it becomes addictive—not just for tech budgets, but for daily life, food, and travel. Consequently, many friends have started to "look down" on me for having no ambition.
Speaking of money, there is a connection to my second question: who is actually paying for AI usage? The answer is becoming clearer: enterprises. The core logic of the "kill the software companies" narrative of the past several months is that AI replaces software, and enterprises shift from buying software services to buying model services. I’ve held the view that "AI will kill software, starting with software engineering" for three years now, and I never doubted this trend.
However, there is a fatal problem here. As the logic evolves, it has become: "A few hours of work by a model could potentially replace software with billions or tens of billions in revenue."
The real question should be: how much can we charge for those few hours of model work? The answer might be tens, hundreds, or thousands of dollars, at most tens of thousands.
In the face of this problem, discussing whether software has barriers, or how severe "large enterprise disease" is (to the point they can't transform), becomes meaningless.
Under this linear extrapolation, my view aligns closely with the recently viral "2028 Economic Collapse" article. I’ve written about the deflationary problem of AI several times. Where I differ is that AI replacing software won't improve the performance of existing companies; instead, it will lead to their rapid disruption. So under the same linear extrapolation, my outlook for the future is much more pessimistic than the one described in that article.
But I still don't believe such a linear extrapolation will happen:
If we believe AI intelligence will continue to accelerate the disruption of more industries and replace more jobs, we shouldn't simultaneously believe it will create more new jobs, as these two ideas are contradictory.
If we believe AI intelligence will continue to accelerate the disruption of more industries and replace more jobs, then humanity is closer to the point of "unplugging AI" than it is to the arrival of AGI.
If we believe AI is intelligent, then our linear extrapolation predictions might all be wrong.
If AI truly is intelligent, the needs we imagine now will turn out to be pseudo-needs, just like 99% of what I realized over the past three years.
The period from late 2025 to now has been a turning point in my perception (I’m not saying my current perception is more correct; it’s likely more incorrect, because I’m starting to amplify my own paranoia): when I saw the Spring Festival Gala oozing "tech-pre-prepared-meal" aesthetics (damn that multi-colored, information-crammed, sticker-style PPT look, which clashes with everything I pursue aesthetically), I began to rebel. It became clearer to me: what we call AI intelligence is blatant pandering to the majority-rules preferences of collective human choice.
If, by chance, my perception is becoming more correct, then that’s not a bad thing. AI remains within the scope of powerful tools—extremely powerful, even possessing all human knowledge—but ultimately staying within our range of understanding for a long time, as our insight into it deepens.
If, by chance, the above view is closer to the truth, it’s predictable that AI, alongside humans, will replace many software systems (actually, it should be called an upgrade) more efficiently and much more cheaply (not just a little cheaper, but orders of magnitude cheaper). As mentioned, disrupting many companies is also predictable.
Just like my conscious saving: faster, better, and cheaper is addictive.
However, how much is a model worth if it replaces at least hundreds of billions in revenue at one-tenth the cost? Of course, we could say it will penetrate ten, a hundred, or even a thousand times more in the future. But how long will that take?
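The "how long will that take" question is just compound-growth arithmetic. A minimal sketch; the 50% and 100% annual growth rates here are purely illustrative assumptions of mine, not figures from the text:

```python
import math

# Years needed for penetration to grow 10x, 100x, or 1000x at a constant
# annual growth rate. The growth rates are illustrative assumptions.
def years_to_multiple(multiple: float, annual_growth: float) -> float:
    return math.log(multiple) / math.log(1 + annual_growth)

for growth in (0.5, 1.0):                 # 50% and 100% per year
    for multiple in (10, 100, 1000):
        years = years_to_multiple(multiple, growth)
        print(f"{growth:.0%}/yr: {multiple}x takes {years:.1f} years")
```

Even at a sustained 100% annual growth rate, a 1000x expansion takes about ten years; at 50%, roughly seventeen. The gap between "today's software revenue looks unjustified" and "AI creates far more revenue in just a few years" is exactly this arithmetic.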
This is the second-to-last absurd point: the primary reason we view software companies as collapsing is that AI makes their high revenues seem unjustified. Yet, we expect AI to create even more revenue in just a few years.
Behind this lies the fact that the bulk of current revenue for these AI models actually comes from the very software companies they are set to disrupt. When you bypass your own customers to face your customers' customers, can you do better than your customers?
So, if your downstream is "eliminated" by you, will you remain intact?
This leads to the final absurd point: when the downstream players truly paying for computing power are in agony, can the upstream survive?
Writing this, if I followed my original intent, I should get more paranoid and say firmly: no. That should be the reason to go short on computing power. We’ve seen the "upstream inflation, downstream contraction" drama many times; in almost every industry chain, this negative feedback loop has only one outcome.
But I won't write that. I still 100% believe the "guidance" given by Nvidia's CFO at CES earlier this year—that annual revenue will reach $500 billion within a year or two. With a 60% net profit margin, that’s $300 billion in profit. This is likely to happen. What valuation should we give that?
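That last question is simple arithmetic, and worth writing out. A minimal sketch using the revenue and margin figures quoted above; the price/earnings multiples are arbitrary assumptions for comparison, not forecasts:

```python
# Back-of-envelope valuation from the guidance cited above.
revenue = 500e9                # $500B annual revenue
net_margin = 0.60              # 60% net profit margin
profit = revenue * net_margin  # $300B
print(f"implied profit: ${profit / 1e9:.0f}B")

# Hypothetical P/E multiples (assumptions, not forecasts):
for pe in (10, 20, 30):
    print(f"P/E {pe}: implied market cap ${profit * pe / 1e12:.0f}T")
```

The point of the exercise is that even the most conservative multiple on those figures implies a market cap in the trillions, which is why "what valuation should we give that?" is the live question rather than the profit itself.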
By the way, here are a few basic facts:
The core of computing power, the GPU, is subject to high depreciation, whether its lifespan is three, five, or six years, or longer.
This is an old refrain: if every year or two, the cost of new computing power increases by 30-50%, but the efficiency (cost of use) is only 1/5th to 1/10th of the previous generation, would you buy it? Would you buy more? How do you deal with a competitive landscape where there is a late-mover advantage?
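The refrain above can be made concrete with toy numbers. A minimal sketch; the 40% capex increase, the 1/7 cost of use, and the lifespans are illustrative picks from the ranges in the text, not real hardware data:

```python
# Compare an early buyer with a late mover one generation later.
# All figures are illustrative assumptions from the ranges above.
gen1_capex = 1.00                      # normalized up-front cost
gen1_opex_per_unit = 1.00              # normalized cost per unit of compute

gen2_capex = gen1_capex * 1.40         # new generation: 40% more up front
gen2_opex_per_unit = gen1_opex_per_unit / 7   # ~1/7 the cost of use

for lifespan in (3, 5, 6):             # depreciation horizons in years
    # Straight-line depreciation per year, before any operating costs:
    print(f"{lifespan}y lifespan: gen1 writes off {gen1_capex / lifespan:.2f}/yr, "
          f"gen2 {gen2_capex / lifespan:.2f}/yr")

# The late mover pays 40% more capex but runs 7x cheaper per unit:
print(f"late-mover cost-of-use advantage: "
      f"{gen1_opex_per_unit / gen2_opex_per_unit:.0f}x")
```

With numbers like these, the early buyer's hardware is still depreciating on the books while its per-unit economics are already several times worse, which is exactly the late-mover advantage the question points at.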
A second old refrain: in a non-monopolistic, cut-throat competitive landscape, won't most cost reductions translate into price cuts? While Jevons paradox occurs, it might also mean a decrease in valuation multiples (granted, this isn't a fact, just a guess).