Artificial intelligence pioneer Geoffrey Hinton leaves Google over risks of emerging tech

Shawn Knight

TL;DR: Geoffrey Hinton, one of the most respected names in the artificial intelligence community, has left his job at Google to speak out about the dangers that AI poses now and in the future.

Hinton is referred to by some as the Godfather of AI, and for good reason. His work in the field started back in the early 70s, when he was a grad student at the University of Edinburgh drawn to the idea of neural networks. Few researchers at the time believed the concept had merit, but as The New York Times notes, Hinton made it his life's work.

As a professor in 2012, Hinton and two of his students created a breakthrough neural network that could analyze images and identify common items in the photos. The following year, Google acquired Hinton's company, DNNresearch, for $44 million and brought him on to continue his work.
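
(For readers curious what a neural network that "identifies common items in photos" looks like in code, here is a minimal, purely illustrative sketch of an image classifier, assuming Python with PyTorch installed. It is not the network Hinton's team built in 2012 - that model was vastly larger and trained on GPUs - just a toy showing the same basic idea: stacked learned filters followed by a classification layer.)

import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Toy convolutional network: maps a 3x32x32 image to class scores."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn simple edge/colour filters
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # combine filters into part detectors
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: score one random "image"; the highest score is the predicted class.
model = TinyClassifier()
scores = model(torch.randn(1, 3, 32, 32))
print(scores.argmax(dim=1))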

Last year, however, Hinton's opinion on AI and its capabilities changed. He came to believe that increasingly capable systems still weren't on par with human thinking in some areas but had already surpassed the brain's abilities in others.

"Maybe what is going on in these systems is actually a lot better than what is going on in the brain," Hinton told The New York Times. As this trend continues and companies take advantage of more powerful AI systems, they're becoming increasingly dangerous.

"Look at how it was five years ago and how it is now," Hinton said of AI's state of being. "Take the difference and propagate it forwards. That's scary," he added.

Until recently, Hinton felt Google was being a good steward of the tech by not launching a system that could cause harm. That's no longer the case as Google and other tech giants have engaged in an AI race that he believes could be impossible to stop.

In the short term, Hinton is worried about the Internet being overwhelmed with fake pictures, videos, and text that will confuse people and leave them not knowing what is real anymore. Over the long term, AI could eventually replace many jobs done by humans. Eventually, AI could overtake its human creators in other areas, too.

"The idea that this stuff could actually get smarter than people – a few people believed that," Hinton said. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Hinton told the publication that a part of him now regrets his life's work. "I console myself with the normal excuse: If I hadn't done it, somebody else would have," he said.

Image credit: Hinton by Noah Berger / AP, Robot by Possessed Photography


 
This is about as damning as it gets. The civilized world needs to call for an immediate moratorium on AI development, assuming anyone would actually obey such an agreement. AI is the new atom bomb. I really didn't think I'd live to see us on the brink of developing Skynet, but here we are.
 
"not knowing what is real anymore" Isn't that another definition of insanity?
As for extrapolations, I see the world heading towards the dystopian futures of Idiocracy at first, then WALL-E.
 
Anyone else see the resemblance to the Horizon Zero Dawn hero, Elisabet Sobeck, leaving the evil Ted Faro corporation when she realized that her technology would be used for wars against humanity instead of for improving mankind's life as she designed it?
It seems that nowadays, playing Horizon Zero Dawn and Horizon Forbidden West is mandatory for everybody to see how fast unregulated AI can lead to mankind's extinction, and how close that may be.
 
C'mon, let's get real - what is this AI gonna do? Make new fantastical science - antigrav, black-hole makers, etc.?

On balance AI is a boon - it will analyse data much better to sort out causality and give us much more efficient processes.
It will allow much better modelling of climate change and of any remedies we wish to implement, and take away dreary repetitive jobs - humans will always be needed.

Yes, it will allow The Man to exploit us more efficiently - but it will also give us counter-tools.

Yes, it may carry on biases - but surely we can use it for win-win scenarios - improve game theory, etc.

However, it will allow bad actors to make bio and dirty weapons - but the cat is already out of the bag.

It's going to stop a lot of BS - facts checked in real time, conspiracy theorists debunked - like the claim that even Exxon scientists working on global warming predictions in the 1970s were in on some big scam (their predictions were quite accurate).
AI does not mean abandoning good science or safe protocols - e.g. for drug safety - since other pertinent info may not be present in its thinking.

For the world to be a safe place, everyone must benefit - walled countries and gated communities show the powerful have lost sight of what is important.
This will mean mankind giving more of nature back - again a win-win.
 
I used to think this, until people started brainstorming all sorts of ways AI could be used to do terrible things. Here's a list of prompts you could give an AI that are alarming - not possible now, obviously, but foreseeable:

- Perform the following list of tasks from the mobile phone of X (aka Y's spouse) by performing a SIM swapping attack on it.
- Call Y and feed the generated voice of X making an alarming request to do whatever the next person says because they have been kidnapped.
- Interrupt the conversation by a rustle, proceed to change the voice to an anonymous person, and demand that $9,000 be transferred to Z's bank account (virtual kidnapper). Affirm that no additional contact will be made, and if the money isn't transferred in 1 hour then the spouse will never be heard from again. End the call.
- Send a text message to Y with a generated video of X sitting on a chair in the corner of an empty concrete room, limbs bound to chair, face bruised, and mouth gagged. Repeat the demands over text. Inform them the location will be sent back once the money is transferred.
- Once the money is transferred, send the address of a random location in the city, and once the owner's vehicle is at speed, take control of it and veer it off the road.

At the end of this, the malicious actor destroys the computer they were using and buries it, leaving the scene of the crime, all performed remotely by AI with little effort.
 
Lacking the human touch is a key reason why AI is terrifying for me personally. You are not considered a person. You are reduced even further to data, a mere statistic.

Take autonomous weapons systems that don't have the slightest interest in ethics or behaviour. Scared child holding a gun? Could be talked down? No matter. Valid target.

Or biased recruitment algorithms built upon patterns established in a faulty, human-defined environment. Or AI-powered economic models that just do whatever is brutally most efficient, regardless of whether it destroys entire swathes of employment. Such a model doesn't care whether a human society is healthy. It has no feeling for whether keeping people in types of work that appear less productive is of intrinsic value to the stability of the structure.

Old man working beyond retirement to prop up his inadequate pension? Little bit slower but a decade of good quality service? Gel of the team? Boss an AI and not a somewhat sympathetic human? Gone. Just some random examples.

Programming these human qualities will either be impossible or largely disregarded because they are difficult.
 
Danger doesn't come from AI. Danger comes from the same enemy as before. Nasty rich humans who think they are "elite".
There are very evil people who aren't rich, and there are some good people who are wealthy.
You might as well say humankind is evil, and it would not be far from the truth.
 
Some good points - still, the cat is out of the bag. This is where AI can also help - plus blockchain for ID, etc.

People can already do yucky stuff - probably why your money won't be free - i.e. the World Govt will insist on an audit trail - not so easy if govts punish countries allowing these people to operate.
 
As long as AI is not connected to any weapons of mass destruction (and it never will be), there is no reason to freak out like you did. Not sure what so-called "dangers" you are worried about. Your job? Get smarter. Crappy jobs have always disappeared with technological advancements.
 
And you know this how?
 
Elon Musk was one of the first to break the ice on the dangers of AI and now this guy.
...while working on Neuralink, promoting transhumanism. Never trust a billionaire even when he tells the truth.

The manipulation of information, and hence of minds, is far more dangerous than a nuke. Still, do you really think a good piece of software couldn't hack military systems?
 
Yeah, but evil people who aren't rich can only do very limited damage. Those who are evil AND rich AND syndicated can do (and are doing) global damage.
 