Three observations from an interview with the Google CEO
In his BBC interview, Google CEO Sundar Pichai covers three fascinating topics informing his take on the 'AI bubble':
– Trust
– Energy
– Intellectual Property (IP)
First up, Trust.
Whether or not people trust artificial intelligence (AI) is probably less important than the over-confidence it risks unleashing.
Scientists at Finland's Aalto University (together with collaborators in Germany and Canada) have found that using AI all but erases the Dunning-Kruger effect, the cognitive bias whereby people over-estimate their abilities.
So when Pichai says, “Don’t blindly trust what these AI platforms say” we should listen.
As he says, “Look, we are working hard to ground it in real-world information… and we have empowered them with Google Search… but there are moments these AI models fundamentally have a technology where they are predicting what’s next, and they are prone to errors.”
But this is not going to stop continuous exploration. According to Pichai, AI has enabled Google DeepMind in London to do the equivalent of hundreds of millions of years of PhD research, work that would have been impossible a few years ago. It resulted in a Nobel Prize for the brains behind the work.
And there are numerous other human stories. As Dr Laura Gilbert, our recent speaker at With The Team, said, “Think of the risk of not doing this.” Not only do we miss out on human development, we miss out on getting ahead of the bad guys.
But all this technology takes Energy.
And Google is, Pichai says, investing to develop new sources of energy. From geothermal energy to purchasing nuclear power and making investments in solar, the AI industry is pouring millions into the development of new energy. The result: “I’m confident we will have an abundance of energy (for the future),” he says.
In short, moments like these – like all disruptive global events – result in huge bursts of invention.
According to Pichai, being a technology leader and a net-zero champion are not mutually exclusive targets. “Don’t restrain your economy based on energy concerns,” is his warning to governments. Embrace it and move on.
But hang on. That’s a big bet on hope. And what will this mean for consumer energy prices as data centres consume more resources? Which mouth do we feed first?
And we are embracing AI in a big way.
On the way to work this morning I overheard a college student say to her friend, “I used AI, but I got it done!”
She’s not alone. As Adobe reports, while only 33% of people think they use AI-enabled technology, 77% actually use an AI-powered service or device. Think Siri and Alexa as well as ChatGPT, and you see how AI is serving us content.
But what about the IP? Who owns it?
On IP, Pichai says it’s about making sure that content owners have the right to opt out of LLM training. Right now, LLMs are being trained on available content, but if a songwriter wants their work removed, or an illustrator doesn’t want to participate, they should be able to say no to Gemini using their content to serve answers back to users.
But hang on. Let’s think about that. AI is using Google Search to learn. Do we think content owners who opt out will simply see their content surface far less frequently in users’ results?
My Conclusion
In the interview Pichai is open and honest but also a little guarded. He addresses the issues but fails to explore their implications, and what they might mean for the public at large. For example:
- How is reliance on AI diminishing human intellect and driving over-confidence?
- How will the demand for energy affect prices and the cost of living?
- How will content creators turn down the advances of one of their biggest marketplaces when there is little competition?
In short, do the AI Tech Bros know what this means for people living in the real world? Are they consciously aware or is it a blind spot?