The Clock Hand Points from Hardware to Software: Written After Apple's Perfectly-as-Expected Event


The Apple event took place early this morning. Thanks to pervasive social media, the market fully expected it to meet expectations exactly, so I didn't stay up to watch. After waking up, I spent only a few minutes browsing the highlights and used two model tools to generate the following image (Gemini read the video; Claude generated the page):

By now, interpretations and commentary are everywhere, and producing redundant information is pointless. But the long-term signal revealed by an event that merely met expectations is far from trivial.

Over the past year or so, perhaps the biggest joke generative AI played on the market was this: the model is clearly a software innovation, so why has hardware been the lone success story?

By this point, however, things seem about to change, and the change will likely start with Apple: the clock hand is swinging toward software, and generative AI will become a standard feature, entering our daily lives quickly through consumer electronics, with the smartphone leading the way:

  1. Undeniably, at the beginning of the year people still had big question marks over "is mobile computing power enough, and do chip designs need to change?" Today, in a context where even "ancient" E-ink readers can run a decent model locally, hardware capability has suddenly become a surplus.

  2. Text-based generative models differ from games: one is judged by soft metrics, the other by hard ones. Games drive demand for computing power with hard indicators such as resolution, frame rate, and ray tracing. For models, a 2-billion-parameter model today may have consumed more than ten times the training compute of its counterpart a year ago, with significantly improved capabilities, yet its hardware requirements for local inference are unchanged. So while AAA titles can keep stimulating hardware upgrades, running models locally increasingly cannot.

  3. Thanks to the tireless efforts of Meta, Google, Microsoft, and Apple, as well as a series of excellent domestic model companies such as Alibaba, Moonshot AI, and Zhipu AI, in training and open-sourcing models (especially Meta), we have basically reached the goal I wrote about at the end of last year: what if everyone owned a large model.

  4. Thus the calculus has changed. When the most resource-intensive pre-training phase is increasingly concentrated in a few companies, and a continuous open-source ecosystem delivers a more substantive "AI equality," society's overall resource consumption drops sharply. It may even be that the higher the pre-training compute cost, the greater the percentage saved during use.

  5. This summer, in discussions with many people, I sensed a clear awakening of "environmental awareness" grounded in the principle of "good enough." Since hardware needs are already met, there is no reason to replace a phone every year or two; every three or even four years is fine. The rapid spread of this mindset, combined with the lack of generational innovation in hardware, points almost inevitably to ever-longer replacement cycles.

  6. So we saw the unsurprising iPhone 16 and the expanding possibilities powered by AI. The clock hand is pointing from hardware to software.

  7. Actually, software is much harder than hardware, but a steep climbing curve also means a cliff beside the moat that is easy to defend and hard to attack: an ecosystem built on user mindshare is a "virus" that is difficult to dislodge.

  8. Right after the Apple event, Oracle's latest quarterly results exceeded market expectations, and the stock showed immediate "excitement" in after-hours trading.

  9. Well, the AI clock hand has moved from hardware to software and services.
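Point 2's claim, that local hardware requirements stay flat even as training compute balloons, comes down to simple arithmetic: inference memory is dominated by parameter count and quantization width, not by how much compute went into training. A minimal back-of-the-envelope sketch (the function name and figures are illustrative, not from the post, and activation/KV-cache overhead is ignored):

```python
# Rough weight-memory arithmetic for running a model locally.
# Assumption (not from the article): weight storage dominates;
# activations and other overhead are ignored for simplicity.

def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB at a given quantization width."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 2-billion-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"2B params @ {bits}-bit: ~{weights_gb(2, bits):.1f} GB")
# 16-bit: ~4 GB, 8-bit: ~2 GB, 4-bit: ~1 GB -- well within
# the RAM of a modern phone, regardless of training cost.
```

Whether the model cost 10x or 100x more training compute than last year's, a 2B-parameter checkpoint at 4-bit quantization still fits in roughly 1 GB, which is why a "decent model" now runs even on low-power devices.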
