Regarding the question of the "AI bubble," I wanted to survey the mainstream arguments for and against it in the market. Since I happened to be on the ChatGPT page, I went ahead and opened Agent mode. It worked for 26 minutes, and the result was pretty much what I wanted.
Naturally, I also sent the resulting document into Gemini's Canvas: three very clear pages. It's worth asking why a text report should be visualized at all. The answer is simple: information compression. Visualization is a more intuitive way to quickly spot numbers or conclusions that deviate slightly from common sense, so you can go back and double-check them. Unfortunately, my chances of catching errors in ChatGPT and Gemini are getting lower and lower.
Of course, I used Chinese for the convenience of publishing the article.



At one point, Claude's Artifacts was my most-used visualization tool, but as Gemini Canvas kept improving, I now do almost everything in Gemini: better aesthetics, more focused information.
In comparison, Canvas from the GPT-4o era feels quite outdated.
GPT-5 changed that impression. Below are screenshots of the GPT-5 Canvas. With the typical black-and-white Tailwind CSS color scheme combined with the font choices, I think it looks quite nice.




However, it had one issue by comparison: the report text mentioned that electricity consumption was 8 TWh in 2024 and is expected to reach 652 TWh in 2030, yet the chart it drew with Python showed 52 TWh for 2026, a figure that never appeared in the report text. I checked the source, and it did indeed predict 52 TWh for 2026, so GPT didn't get the chart wrong, and the Gemini version above handled it perfectly. Still, ChatGPT "overthought" things when drawing in Canvas, fitting a "trend curve" with a positive second derivative through the data points. Does that count as a hallucination? I think so.
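To make concrete what a "trend curve with a positive second derivative" means here, the sketch below fits an exponential through the only two data points the report text actually contains (8 TWh in 2024, 652 TWh in 2030). This is my own illustrative reconstruction, not the code ChatGPT ran; note that such a fitted curve invents its intermediate values and need not land on the 52 TWh the source separately gives for 2026.

```python
# A minimal sketch of the kind of convex "trend curve" a model might fit
# when the report text only contains two data points. Intermediate values
# are interpolated, not sourced.

def exp_trend(year, y0=2024, v0=8.0, y1=2030, v1=652.0):
    """Exponential interpolation: v0 * (v1/v0) ** ((year - y0) / (y1 - y0))."""
    return v0 * (v1 / v0) ** ((year - y0) / (y1 - y0))

years = list(range(2024, 2031))
curve = [exp_trend(y) for y in years]

# Positive second differences confirm the curve bends upward everywhere,
# i.e. the "positive second derivative" shape.
second_diffs = [curve[i + 1] - 2 * curve[i] + curve[i - 1]
                for i in range(1, len(curve) - 1)]
print(all(d > 0 for d in second_diffs))  # True
```

The point is that the curve's shape between the endpoints is entirely the model's choice: any convex interpolation "looks right" while smuggling in unsourced numbers.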

I am very satisfied with the visualizations of both models, not just because the content is clear and complete, but also because they both understood my need to see the "source tracing" for each viewpoint.
Gemini output HTML (304 lines), while ChatGPT output a TS component (393 lines). Both were concise. Gemini's code generation was significantly faster, taking about half a minute.
At this point, my task would have been considered complete. But given my recent trials of domestic models, especially after trying Kimi's OK Computer a few days ago, I felt like doing more comparisons. So, I prepared to let Kimi handle the visualization to verify some thoughts left over from my previous article.
Since I had already used up my three free credits for OK Computer, I first tried the direct output of the K2 model. The result is as follows:

249 lines of HTML code. Many people underestimate the significance of visualization in evaluating model capability. In my view, visualization can represent a vast amount of information. The difference in model capability is visible at a glance.
I hesitated for a few seconds and then clicked on a Kimi subscription, the $19/month tier. Since I was already there, there was no reason not to continue. So I gave the same requirement to OK Computer.

Unlike in my previous attempts, this time it used the Web Search tool without being asked. Actually, for this task I would have preferred search disabled, because it affects the visualization results.
However, looking at the process in the image above, the three markdown files really appealed to me: interaction, design, and outline. I previously wrote about the "five principles of vibe coding," and the first point was "Design First."
OK Computer was very "diligent," working for over half an hour before finishing. As the structure above shows, it produced a complete website. Part of the reason for the long run time was that the token output speed wasn't high (roughly at the 20-tokens-per-second level, about the same speed as running OpenAI's o1-preview on a Mac).
The page had four tabs in total. Screenshots follow. Link: https://www.kimi.com/share/1999624f-bd02-8af9-8000-00000bf80282.




As seen above, it's truly packed with information, and crucially, it creatively added "Source Verification," and the conclusions there were actually quite reasonable. However, because search was enabled, it also smuggled in quite a bit of "extra baggage."
I liked the home page background and the marquee text effect. Although they are not very useful, they at least show that some "effort" was put in (though the dynamic effect made the screenshot look a bit off). I also liked the "Source Verification" section.
I did not like the "Overthinking" part; it was redundant and contained errors.
To be fair, OK Computer's website was a bit like "using a cannon to swat a fly," but compared to K2's directly generated, rather meager results, I'll stick with OK Computer.
Gemini Canvas took less than a minute, while OK Computer took half an hour. The results were roughly a draw.
A trace of melancholy emerged again: you have to give your absolute best just to draw even with those who seem to do it "effortlessly."
Fortunately, it seems that most of the time, diligence can compensate for inadequacy—and the same applies to models.