Interesting insights on AI from Eric Schmidt's controversial Stanford talk
By now you must all have heard of Eric Schmidt’s controversial comments on why Google seems to have lost the initiative. He blamed Google’s emphasis on work-life balance and remote working. The remarks blew up across the news cycle, prompting Stanford to take down the video and Eric Schmidt to apologize and walk back his comments.
But everyone has ignored the really interesting perspectives from the talk, which is still available on GitHub and on some YouTube channels. Below are the most interesting parts of his talk, along with my comments.
Eric On CUDA
“I like to think of CUDA as the C programming language for GPUs. That's the way I like to think of it. It was founded in 2008. I always thought it was a terrible language and yet it's become dominant.”
Eric On Nvidia
“If $300 billion is all going to go to Nvidia, you know what to do in the stock market. Okay. That's not a stock recommendation.”
On the colossal amount of money being thrown into GenAI
“The amounts of money being thrown around are mind-boggling. And I've chosen, I essentially invest in everything because I can't figure out who's going to win. And the amounts of money that are following me are so large.”
<Naresh>
BTW, I cover the $300 billion investment into AI in this post (GenAI’s $600B hole)
</Naresh>
On the colossal costs for developing transformer models
“So at the moment, the gap between the frontier models, which they're now only three, I'll refute who they are, and everybody else, appears to me to be getting larger. Six months ago, I was convinced that the gap was getting smaller. So I invested lots of money in the little companies.
Now I'm not so sure. And I'm talking to the big companies and the big companies are telling me that they need 10 billion, 20 billion, 50 billion, 100 billion.
That's very, very hard. I talked to Sam Altman, who is a close friend. He believes that it's going to take about 300 billion, maybe more.
<Naresh>
This number was echoed by Anthropic’s CEO Dario Amodei in this interview:
<Dario> ”And so, today’s models cost of order $100 million to train, plus or minus factor two or three. The models that are in training now and that will come out at various times later this year or early next year are closer in cost to $1 billion. So that’s already happening. And then I think in 2025 and 2026, we’ll get more towards $5 or $10 billion.
Imagine a year or two out from that, if you see the same increase, that would be $10-ish billion. Then is it going to be $100 billion? I mean, very quickly, the financial artillery you need to create one of these is going to wall out anyone but the biggest players” </Dario>
</Naresh>
On the critical need for Electricity to train these models
“I went to the White House on Friday and told them that we need to become best friends with Canada because Canada has really nice people, helped invent AI, and lots of hydropower.
Because we as a country do not have enough power to do this.
The alternative is to have the Arabs fund it.
And I like the Arabs personally. I spent lots of time there, right?
But they're not going to adhere to our national security rules. “
On the post-transformer models that are coming
“there are very sophisticated new algorithms that are sort of post-transformers. My friend and long-time collaborator has invented a new non-transformer architecture. There's a group that I'm funding in Paris that claims to have done the same thing.
There's enormous invention there, a lot of things at Stanford.
And the final thing is that there is a belief in the market that the invention of intelligence has infinite return.”
On code-generation AI that’s only 1-2 years out
“So imagine a non-arrogant programmer that actually does what you want and you don't have to pay all that money to and there's infinite supply of these programs.
That's all within the next year or two.
Very soon.
Those three things, and I'm quite convinced it's the union of those three things that will happen in the next wave.”
<Naresh> The 3 things Eric mentions are:
Infinite context windows
Agents
Text to Action
Here’s his take on all 3
</Naresh>
On Context Windows, Agents & Text-to-Action
“In the next year, you're going to see very large context windows, agents and text-to-action.
When they are delivered at scale, it's going to have an impact on the world at a scale that no one understands yet.
Much bigger than the horrific impact we've had from social media, in my view.
Agents: With respect to agents, there are people who are now building essentially LLM agents and the way they do it is they read something like chemistry, they discover the principles of chemistry and then they test it and then they add that back into their understanding.”
Text to Action: “You understand how powerful that is.
If you can go from arbitrary language to arbitrary digital command, which is essentially what Python in this scenario is, imagine that each and every human on the planet has their own programmer that actually does what they want as opposed to the programmers that work for me who don't do what I ask, right?”
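"Language to digital command" can be pictured as a tiny dispatcher. A real text-to-action system would use an LLM to generate the Python; this lookup table is a hypothetical stand-in just to show the shape of the idea:

```python
# A toy sketch of "text to action": mapping a natural-language request onto
# a Python call. A real system would use an LLM to produce the code; this
# lookup table is a hypothetical stand-in to show the shape of the idea.

ACTIONS = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

def text_to_action(request):
    """Turn a request like 'add 2 and 3' into a concrete Python call."""
    words = request.lower().split()
    verb = words[0]                                   # which action to run
    numbers = [int(w) for w in words if w.isdigit()]  # its arguments
    return ACTIONS[verb](*numbers)
```

Replace the fixed table with a model that writes arbitrary Python, and you get the "personal programmer for every human" Eric is pointing at.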
There’s way more interesting stuff in this talk, on:
how he has been building drones for the Ukraine war (he calls himself an Arms Dealer)
the future of warfare
China vs USA great power competition
etc, etc.
Great talk! Eric is a computer scientist, a great businessman & a fantastic communicator. His presentations at Google were something to behold.
Watch the talk (while it’s still available) or read the transcripts!