Interview with OpenAI CEO Sam Altman: A Dialogue with Lex Fridman
OpenAI CEO Sam Altman recently sat down for an interview with Lex Fridman (who had previously interviewed Meta Chief AI Scientist Yann LeCun). Over the nearly two-hour conversation, they discussed several core issues surrounding AI and the company's future.
Summary Generated by Claude 3 (translated from Chinese)
The Power Struggle within OpenAI's Board
- The events of last November were the most painful of Sam's professional career, but he also felt immense support and love from others.
- The experience highlighted the necessity for a strong governance structure and robust processes at OpenAI.
OpenAI's Progress and Outlook
- While GPT-4 is remarkable, there is still a significant gap before reaching the ultimate goal.
- Highly powerful systems are expected to emerge around 2030.
- OpenAI will release more models this year in a gradual, incremental manner.
Computing Power and Energy Issues
- Computing power will become the most valuable commodity, with massive global demand.
- Energy, particularly nuclear fusion, is a core challenge; Helion is doing the best work in this field.
Views on Search Engines and Business Models
- There is no interest in building a "better Google"; the goal is to help people utilize information in entirely new ways.
- Sam favors simple business models and dislikes advertising.
AGI Competition and Safety
- Competition for AGI could trigger an arms race, which is a cause for concern.
- OpenAI is working hard toward safety and hopes competitors will prioritize it as well.
Sora's Progress and Impact
- Sora still has limitations but will improve with scaling.
- They are cautiously addressing potential harms like deepfakes.
- Artists will need to find new ways to create; OpenAI is exploring how to make Sora better understand users.
Perspectives on AGI Timelines and Control
- Judging the exact timing of AGI is difficult, but very powerful systems are expected by 2030.
- Even if OpenAI builds AGI first, it should not be fully controlled by any single individual.
- Strong governance is required, and governments should set the rules.
Current Top Concerns Regarding AI Risks
- The immediate worry is not "loss of control" but risks such as theft of model weights.
- As progress accelerates, OpenAI will consider safety from more comprehensive angles.
Reflections on Extraterrestrial Civilizations and Life
- Sam wants to believe in aliens but is puzzled by the Fermi Paradox.
- Humanity's past achievements make him hopeful for the future; he feels grateful for his life even if it were to end tomorrow.
Full Translation Excerpts by Gemini 1.5
Introduction
- 0:00 - I believe computing power will be the currency of the future and perhaps the most valuable commodity in the world. By the end of this decade, or even earlier, we will have extraordinarily powerful systems. The path to Artificial General Intelligence (AGI) will be a massive power struggle.
- 0:26 - Whoever builds AGI first will possess immense power.
- 0:32 - A conversation with Sam Altman, CEO of OpenAI.
The OpenAI Board Crisis
- 1:13 - That was the most painful professional experience of my life—chaos, shame, and frustration. However, seeing people say kind things about me provided incredible support.
- 2:30 - I'm glad it happened relatively early. It helped us build resilience and prepare for future challenges.
- 7:09 - The board should have significant power. In our non-profit structure, the board is powerful and not truly accountable to anyone. We want the board to be as accountable to the entire world as possible.
- 8:46 - Bret Taylor and Larry Summers have joined the new board. We need diverse expertise (non-profit, operations, legal, and governance).
About Ilya Sutskever
- 18:57 - I love Ilya and have immense respect for him. I hope we work together for the rest of my career.
- 19:30 - Contrary to the meme, Ilya hasn't "seen AGI," but he takes AGI and safety issues very seriously.
Elon Musk's Lawsuit
- 24:54 - Eight years ago, we started as a research lab and didn't know how the technology would evolve. Later, we needed more funding, and the structure changed accordingly. I don't understand Elon's true motivation.
- 27:44 - Elon wanted full control and to merge with Tesla because he thought OpenAI would fail. He chose to leave, and that's okay.
- 31:29 - It makes me sad. He might be one of the greatest builders in history; I miss the old Elon.
Sora and Video Generation
- 34:57 - The models understand the physical world more than people think. Seeing an object in a Sora video be occluded and then reappear shows good physical intuition. It will improve with scale.
- 40:20 - People who create valuable data should be compensated. Models are changing, but people must get paid.
- 41:40 - Just as artists were worried when photography emerged, new tools will appear, and people will use them in new ways.
GPT-4 and GPT-5
- 45:05 - I think GPT-4 "kind of sucks" (relative to where we need to get).
- 45:18 - I expect the gap between 5 and 4 to be the same as the gap between 4 and 3. We are on an exponential curve.
- 46:23 - I use it more as a brainstorming partner now.
- 53:58 - Hallucination issues will improve in upcoming versions, but they won't be fully solved this year.
- 1:06:36 - We will release a great model this year, whatever it is called. But before GPT-5, there are many other important things to release first.
Competition, Energy, and the Future
- 1:10:06 - Computing power is the currency of the future. Due to massive demand, the supply chain and energy (nuclear fusion) are the biggest bottlenecks.
- 1:18:33 - Building a better copy of Google Search is boring. What's interesting is helping people find, process, and synthesize information in a brand-new way.
- 1:20:26 - I hate ads. I prefer simple business models where users pay, ensuring answers aren't influenced by advertisers.
- 1:38:20 - I don't want super-voting control over OpenAI. No single person should control AGI; we need governments to set the rules of the road.
Conclusion
- 1:54:12 - I am grateful for my life. We are living in such a wonderful and interesting time.