After years of laying the foundation for AI technology, Geoffrey Hinton, a groundbreaking British computer scientist known as the “Godfather of AI,” is leaving his position at Google to join other specialists who are warning about the danger AI now presents. The seventy-five-year-old Hinton worked as a vice president and engineering fellow at Google in the field of artificial intelligence (AI).
Hinton was interviewed by The New York Times. “It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton noted when asked about current AI technology.
The March launch of GPT-4, the latest version of OpenAI’s chatbot technology, has brought about deep concern in the AI world. AI professionals signed an open letter written by the nonprofit Future of Life Institute (FLI), warning that the technology poses “profound risks to society and humanity.”
Speaking about the response to the open letter, FLI, a nonprofit group seeking to mitigate large-scale technology risks, wrote on its website: “The reaction has been intense.”
“We feel that it has given voice to a huge undercurrent of concern about the risks of high-powered AI systems not just at the public level, but top researchers in AI and other topics, business leaders, and policymakers.”
Those who have driven AI technology in recent years are now saying they are terrified by the implications of their work and what it could mean for the future. Hinton agrees, finding recent advancements in AI “scary.”
A major concern they share is that Microsoft has incorporated the technology into its Bing search engine. Google is pushing to follow suit, leaving the innovators concerned that Big Tech companies could drive AI tech too far in a bid to outdo one another.
Hinton clarified his reasons for quitting his job at Google, believing his interview with the NYT made it appear he was criticizing the company. “I left so that I could talk about the dangers of AI.”
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
The ability of AI to create false photos and text is a deep concern; computer scientists are warning that most people will be deceived by the technology. The average person will “not be able to know what is true anymore.”
Hinton also warned that AI could replace humans in the workforce and could quickly surpass human intelligence:
“The idea that this stuff could actually get smarter than people—a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Elon Musk has signed the open letter and has also warned about the threat AI poses to human life.
Musk said, “AI is far more dangerous than nukes.” He also said he worries that the scientists creating AI technology think they are smarter than they are, and that they should be afraid of what AI tech is capable of. He called for a public body to serve as an oversight committee to make sure everyone is developing AI safely. “It scares the hell out of me.”
Due to the tremendous risk to humanity, the Future of Life Institute’s letter is calling for a pause to allow for oversight:
“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

The letter warns that “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and should be developed with sufficient care and forethought. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”