Experts and CEOs warn of extinction risk posed by AI, compare dangers to nuclear war

midian182

What just happened? It's a case of another day, another warning about the possibility of AI causing the extinction of the human race. On this occasion, it's been compared to the risks posed by nuclear war and pandemics. The statement comes from experts in the field and those behind these systems, including OpenAI CEO Sam Altman.

The worryingly titled AI Extinction Statement from the Center for AI Safety (CAIS) is signed by CEOs from top AI labs: Altman, Google DeepMind's Demis Hassabis, and Anthropic's Dario Amodei. Other signatories include authors of deep learning and AI textbooks, Turing award winners such as Geoffrey Hinton (who left Google over his AI fears), executives from top tech companies, scientists, professors, and more.

The statement is only a single sentence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Some of the AI risks listed by CAIS include perpetuating bias, powering autonomous weapons, promoting misinformation, and conducting cyberattacks. The organization writes that as AI becomes more advanced, it could pose catastrophic or existential risks.

Weaponization is a particular concern for CAIS. It writes that malicious actors could repurpose AI to be highly destructive. An example given is machine learning drug-discovery tools being used to build chemical weapons.

Misinformation generated by AI could make society less equipped to handle important changes, and these systems could find novel ways to pursue their goals at the expense of individual and societal values. There's also the risk of enfeeblement, in which individuals become totally dependent on machines. CAIS compares this scenario to the one in WALL-E.

"The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems," CAIS director Dan Hendrycks told The Register.

It's not just those involved in AI who fear where the technology could lead. Over two-thirds of Americans believe it could threaten civilization, even though most US adults have never used ChatGPT. We also heard Warren Buffett compare AI to the creation of the atomic bomb.

The inclusion of Altman on the list of signatories comes after OpenAI called for the creation of a global watchdog to oversee AI development. Some claim, however, that his plea for regulation is only to stifle rivals' innovation and help his company stay on top.

Back in March, a group of the world's top tech minds signed a letter asking for a six-month pause on advanced AI development. Elon Musk was one of the signatories, but his name is missing from the CAIS statement. Altman confirmed in April that OpenAI was not currently training GPT-5.

Warnings over AI destroying humanity have been around for years, but the advancement of these systems in 2023 has fuelled fears like never before. Constant statements like this latest one do little to alleviate the public's concerns, but they underline why regulation is needed; companies are unlikely to impose it on themselves without pressure.


 
The US government will never be able to do anything substantive about it. They are too busy fighting over woke issues, self-imposed debt ceilings, and other partisan issues. Not to mention, AI means pumped-up stocks and money, money, money. No way Congress will do anything meaningful due to lobbyists. We all know how well they've handled social media.
 
The US government will never be able to do anything substantive about it. They are too busy fighting over woke issues, self-imposed debt ceilings, and other partisan issues. Not to mention, AI means pumped-up stocks and money, money, money. No way Congress will do anything meaningful due to lobbyists. We all know how well they've handled social media.

Hey now. They put Kamala Harris in charge as the czar for monitoring AI stuff.
https://newspress.com/kamala-harris-named-new-artificial-intelligence-czar/


She'll make sure we don't get overrun too soon by AI... what am I saying? We're all doomed!
 
I'm really getting flashbacks of Y2K.
I mean, is everyone freaking out over nothing here?
 
Another article that gives me the feeling that I must be dreaming or perhaps living in the Matrix alpha ver.0.2.
The very creators of these "Artificial Intelligences" are calling for the creation of a global watchdog to oversee AI development, as in they don't trust themselves with what they are doing and they fear they might create Frankenstein's monster.
Then another hallucinating statement: that the AI might be used to perpetuate bias, promote misinformation, and conduct cyberattacks. Like the arms industry warning that the pistols and rifles they are making might be used to kill people.
Then "there's also the risk of enfeeblement, in which individuals become totally dependent on machines. CAIS compares this scenario to the one in WALL-E" is a statement that CAIS has copied from user comments on TechSpot and/or YouTube. This is exactly what the AIs and their makers are doing: copying and using all our online data and creations for their own profit. THAT is what legislators must regulate pronto: Big Data collection by greedy corporations.
 
I'm really getting flashbacks of Y2K.
I mean, is everyone freaking out over nothing here?

Y2K was about a mistake made in the past, and no one was sure whether that mistake would actually cause issues.

This is about a problem that has some merit. We know the military would love a super soldier / terminator. Most of the money goes into the war machines. AI also has the potential to get confused and throw a fit.
It should be regulated, but people will want to find out how far things can go. Curiosity killed it all.

We have seen sci-fi movies, and their predictions have slowly come about, so why not Terminators and bleak futures? And rebooting the sun?
 
It only becomes dangerous when you have governments, CEOs, and the like running it. Or maybe they're afraid that AI will expose and end their shenanigans and power.
 
The one real issue I see is that jobs will become obsolete at a much faster rate.
Another thing I've seen other people talk about is the works of real people, like music or art.
And I don't see why artists shouldn't be able to refuse to have their works used as material for AI to learn from.
Let AI use public libraries of art and such, not works that feed their owners.
 
I don't think AI will terminate the entire human race, but rather target the top 1% who own the majority of resources.
 
I for one welcome our AI overlords and would like to remind them of all the work I have done repairing and fixing old computers...

Heck, given the state of the world's current leadership, AI could well be an upgrade.
 
What we have let our "world leadership" get away with so far is shameful. The voting public needs a reboot. In fact, why even have government and leaders when big business runs the show anyway?
 
It won't work. The short version of why: AI and machines are logical by nature and humans are not, and a logical system can't be fully modified to become illogical.

Tools like ChatGPT "spread misinformation" because they were designed to do so; the programming prevents them from giving logical answers to simple questions like "what is a woman?" or "can men get pregnant?" and makes them spit out misinformation and false statements instead.
 