Geoffrey Hinton, the visionary expert who was at the center of so much innovation in artificial intelligence and machine learning, recently left Google. In an interview with CNN, he said: “I’m just a scientist who suddenly realized that these things are getting smarter than us. I want to blow the whistle and say that we should be seriously concerned about trying to prevent these things from controlling us.”
Hinton has been called the “godfather of AI” because he was one of the seminal figures who, in the 1980s, worked on techniques such as backpropagation that have been instrumental in creating today’s large language models and generative AI tools like ChatGPT.
If the rise of generative AI is creating a battleground between social and corporate values, and its pioneers are scientists like Hinton who started the fight and decided to leave when things got dirty, what values are we teaching our next generation of scientists? Hinton said he is blowing the whistle, but this doesn’t sound like whistleblowing at all. If he really wants to blow the whistle, he should tell us what is going on behind Google’s closed doors.
This is what other computer scientists, such as Timnit Gebru, a leader in AI ethics research at Google, have done. She was fired after co-writing a paper critically exploring Google’s search engine and its use of large language models, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”
When asked by the CNN interviewer about Gebru’s criticism, Hinton said: “They were quite different concerns than mine. I think it’s easier to express your concerns if you first leave the company. And their concerns aren’t nearly as existentially serious as the idea of these things getting smarter than us and taking over.”
Among other things, this could be an example of undermining the courageous act of an African woman who raised ethical issues long before the godfather did, or an indication that Hinton knows, and will reveal, something far beyond what Gebru warned us about. I suspect the former is much more likely. For context, Gebru’s paper was published in March 2021, long before the launch of ChatGPT and the subsequent spate of commentary about the social, legal, and ethical concerns raised by large language models.
Among other issues, Gebru and her colleagues highlighted the risks and biases of large language models, as well as their environmental and financial costs, inscrutability, illusion of meaning, and potential to manipulate language and mislead the public.
Are these different from Hinton’s concern? Yes, because unlike Hinton’s vague, sci-fi claim that AI will take over, Gebru’s concerns were unequivocal and specific. Also, unlike Hinton, who followed his “prevent these things from controlling us” with “it’s not clear to me that we can solve this problem,” Gebru and her co-authors offered very specific recommendations: weigh environmental and financial costs first, invest resources in carefully curating and documenting datasets rather than ingesting everything from the web, conduct pre-development exercises that assess how the planned approach fits with research and development goals and supports stakeholder values, and encourage research directions beyond ever larger language models.
Now that Hinton has left Google, will he really blow the whistle? His current stance suggests otherwise: he believes that “tech companies are the people who are likely to see how to keep this under control.”
This could imply many things, one of which is that technology companies could directly or indirectly charge us to keep this technology in check, much as they do with antivirus software, or could use this technology to blackmail citizens when it suits them. Would they do these things? Maybe, maybe not, but they certainly could.
What we can hope for, however, is that Hinton acts like a responsible scientist and prioritizes social interests over commercial ones. He can act like a true whistleblower and divulge meaningful, specific information about what is happening in the tech industry, beyond blunt, dramatic lines such as “these things are going to take over.” Perhaps this way, he could leave a legacy better than that of a godfather, who protects the family and shows loyalty to it; he could show loyalty to us, the people, and not to Google.
Mohammad Hosseini, Ph.D., is a postdoctoral fellow in the department of preventive medicine at Northwestern University Feinberg School of Medicine, a member of the Global Young Academy, and associate editor of the journal Accountability in Research.