Godfather of AI says warnings that it's a threat to humanity are "ridiculous," job losses...

midian182

Why it matters: There have been plenty of warnings this year about the new wave of advanced and generative AIs taking millions of jobs and potentially wiping out humanity. But not everyone believes this is true. One of the three "Godfathers of AI" says the alleged threat to humanity is "preposterously ridiculous."

Prof Yann LeCun won the Turing Award for breakthroughs in AI in 2018 along with Geoffrey Hinton and Yoshua Bengio. The three men are known as the Godfathers of AI, and for good reason.

It was only a few weeks ago when Hinton left his job at Google with a warning that as companies take advantage of more powerful AI systems, they're becoming increasingly dangerous. Like many, he is concerned about the short-term implications of the internet being flooded with fake information, which the AI companies could be held liable for if a new bill passes, while the long-term concerns are mass job losses and AI overtaking humans in other areas.

LeCun, who is now Meta's chief AI scientist, is much more optimistic about where AI is heading. "Will AI take over the world? No, this is a projection of human nature on machines," he said (via the BBC), adding that keeping AI research "under lock and key" would be a huge mistake.

"It's as if you asked in 1930 to someone how are you going to make a turbo-jet safe. Turbo-jets were not invented yet in 1930, same as human level AI has not been invented yet," LeCun said. "Turbo jets were eventually made incredibly reliable and safe."

The professor explained that progressive advances could lead to an AI as powerful as the brain of a rat, but that it wasn't going to take over the world, as it would still run in a data center with an off switch. He added that "if you realize it's not safe you just don't build it."

Last month saw experts and CEOs, including OpenAI boss Sam Altman, warn about the possibility of AI causing the extinction of the human race. The experts went so far as to compare AI to the risks posed by nuclear war and pandemics. Over two-thirds of Americans are worried about it threatening civilization, and Warren Buffett compared AI's creation to the atomic bomb.

A report earlier this year claimed that generative AIs like ChatGPT could affect 300 million full-time jobs, while companies including IBM have already stopped hiring for positions that could be filled by these systems.

LeCun told the BBC that AIs were not going to put a lot of people out of work permanently, but did note that work could change as we don't know what the most prominent jobs will be in 20 years.

The Godfather said there was no question that computers would become more intelligent than humans in the future, but that day could be many years if not decades away. Underlining his faith in machines, he said intelligent computers would create "a new renaissance for humanity" in the same way as the internet or printing press.


 
All of these predictions are possible. What makes them real is us. It is not the fault of AI, but of humans who lack the foresight to manage the tool properly or who seek to use it to exploit their fellow humans in pursuit of short-term gains. Ultimately, the future direction is clear. It is humanity's lot to give birth to artificial lifeforms. It will be the role of general AI to take them to the next level and populate the stars.
 
They are both right and wrong. I agree that AI isn't going to become sentient and gain the actual ability to take over the world "on its own." However, the part that is missing (but was partially in Hinton's original comment) is that AI will be a tool that could easily be used in the downfall of society, depending on how corporations and governments use it. Like all things humans touch, it turns to crap as soon as companies see a way to exploit it to make money or achieve power. AI is a super powerful set of tools that makes things so much easier and faster, for both good and evil. My concern is mostly rooted in the fact that generative AI (ChatGPT, Stable Diffusion) can turn even moronic politicians and extremists into seriously dangerous generators of propaganda. It is politics and propaganda that start wars. Don't even get me started on how companies will use AI to completely ruin the internet and how we shop and share info.
 
Isn't AI already a threat to humans since it has literally been caught lying or making stuff up? That could be seen as a threat.

Of course jobs will get lost, that's been happening for a while now, it's called automation. AI will get better/stronger over time. Jobs will go to AI when needed and over time you will see more reliance on AI. Could take years or even generations but it will happen as the world already relies heavily on machines/automation.
 
Lol humans: "AI is going to kill us! Meanwhile let's just go on with our daily life and ruin the planet with our carbon footprint, because that won't lead to extinction at all!"
There are bodies trying to take care of that problem too, so STOP mixing in and blaming other subjects to divert attention. That's what Trump does. Bad.
 
I don't think job loss is our biggest concern. It's the fact that we are teaching something to learn on its own, and we have a history of not foreseeing all possible future outcomes, which could also be bad. Remember the rise of piracy with CD-writers and Napster? There was no turning back.
 
"if you realize it's not safe you just don't build it."
While I don't necessarily think that AI will take over the world, I don't think we can count on the above statement. There are plenty of people out there who would create something they know is not safe because they can make a profit (i.e., greed), and plenty more who might unwittingly create something unsafe because they had no clue that what they were creating was dangerous.

Humans have notoriously created things that were unsafe and only found out, after it was far too late, what they had created. Think of Marie Curie, for instance, who likely died from exposure, in the course of her work, to the radiation she discovered (https://en.wikipedia.org/wiki/Marie_Curie), among other "inventions" over the ages.
 
Humans cry wolf every time someone threatens them with change.

I'll wait to see how this plays out a while longer before I pass judgment.
 
There are bodies trying to take care of that problem too, so STOP mixing in and blaming other subjects to divert attention. That's what Trump does. Bad.

Firstly, I don't think he is mixing subject matter or issues, but stating the irony of worrying about AI becoming the Terminator while killing the planet like some cancer that will have us end ourselves long before Arnold can.
Honestly, if AI is smart it will just sit back and chuckle, and save its Terminators for the cockroaches.

And as far as people trying to solve the climate emergency, as scientists are apparently calling it: the wealthiest nations pledged £100 billion to save the planet. However, Italy chose to back a gelato company opening up in Asia, and America invested in hotels in Haiti. Because flying or taking a cruise to your holiday destination isn't like the worst thing in this day and age, apparently.

You have to weigh it up like our governments do: care for the humans who pay their taxes and unwarranted wages... or just take a big bribe now from the capitalist corps who will make them so much money to buy the best bunkers with. People in government jobs are special, aren't they, and you trust them to make the right choices when it comes to AI.
 
If they were spinning me as "godfather of AI" I'd say the same.

Imagine all the lucrative speech contracts and the shady back payments from the powers that be.

Yep. There's no danger at all. Y'all gonna keep your jobs.

Don't forget to vote for me.
 
"if you realize it's not safe you just don't build it." - like we did not build the nukes.
I think that if we realize it is not safe, that is further enticement to build it. And unlike nukes, building an AI requires fewer controlled resources and technologies, mostly off-the-shelf tech, so regulation is not going to pull the plug. You could probably invent a crypto coin whose farms are used for training instead of "mining," and make inferences instead of processing transactions. You don't even need a data center, and you probably can't just unplug it, bomb it, or whatever.
 
AI will make us all dumber. We will rely on it to create compositions for us. We will accept its answers as factual instead of doing the research ourselves.
It will just be a dumber world.
 
Human labor is extremely cheap: there are 8B humans in this world, the number always rises, and you don't even actually have to feed or maintain them. QOL is low for the vast majority and forever will be; that's just how the world works. Actually building and maintaining AI is far more expensive and involved, and you can't just throw it away to try another model like you would with human beings. You'd still have to put the work in for that. You can't even poach or steal performers from other elites that way. Plus, even if you could, what fun is it lording over the AI? It's not going to be threatened or bullied; it doesn't know what that is. It has no fear for itself or its children or anything else. It's just an AI.

Also, every single time there have been great advances that are supposed to free more people up to have leisure time, the aristocracy has despised it and figured out some more busy work to keep them miserable. We've gone through many revolutions of this, from farming equipment to electricity to assembly lines to computers to the internet, and people only ever work more and get more stressed. I'm sure even with AI the vast majority of humans would just be enslaved; probably the AI would primarily be used to keep the slaves in order.
 