After a morning of use, I am deeply grateful that Anthropic released Claude 3.7 at this moment, even though it isn't called 4.
I am grateful that, in a way both familiar and slightly novel, it has lent a bit of support to my gradually collapsing value system.
You can clearly see its verdict on "thinking": thinking is not omnipotent; it is effective only in limited domains;
You can feel that without needing many words, it understands your coding ability perfectly—a kind of tacit understanding that only comes after long-term familiarity;
You can feel it seemingly pressing the mute button on a noisy world; even a moment of peace is solace enough;
Yes: in that folded world of models, distillation, data, and leaderboards are everywhere.
Isn't our world the same? We have turned KPIs into leaderboards; we assume that low-quality output, so long as it took hard work, deserves a reward; we believe that as long as a teammate produces something, they should "empower" others for free.
AI has no "humanity"; if it does, it is certainly something we have forced upon it.
Actually, the theme I originally had in mind was: AI cannot empower investment research. Because I keep feeling that those who painstakingly accumulate high-quality data, study the field, and produce serious work will gradually become unwilling to serve as stepping stones for other people's KPIs.