Reinforcement learning pioneers harshly criticize the "unsafe" state of AI development

Alfonso Maruccia

Who are they? Richard Sutton and Andrew Barto are pioneers of reinforcement learning, a machine learning technique that modern AI models rely on. Sutton, often called the "father of reinforcement learning," is a professor at the University of Alberta; Barto is a professor emeritus at the University of Massachusetts. Neither scientist is particularly pleased with how AI companies are applying their life's work.

Richard Sutton and Andrew Barto won this year's Turing Award, often described as the Nobel Prize of computing, for their foundational contributions to machine learning. The two researchers are now speaking out against OpenAI, Google, and other AI companies for releasing potentially dangerous software to end users. They have criticized ChatGPT as little more than a money-making machine that will never produce a working artificial general intelligence (AGI).

Sutton and Barto developed reinforcement learning (RL) during the 1980s, drawing inspiration from behaviorist psychology. It is one of the three basic machine learning paradigms, alongside supervised and unsupervised learning, and it teaches AI agents, through trial and error, to make decisions that lead to optimal outcomes, much as humans learn.

OpenAI, Google, and other corporations build their AI platforms with RL. The Financial Times notes that Barto believes bringing this kind of AI software to millions of people without safeguards is inherently wrong. In Sutton and Barto's metaphor, most if not all AI companies are building a bridge and testing its structural integrity by opening it to the public.

Barto says that sound engineering practice calls for developers to mitigate a technology's negative consequences before deploying it, and that neither OpenAI nor any other AI-focused company is doing so. Current AI models make errors, confidently hallucinating nonexistent "facts," yet the companies behind them are collecting billions of dollars in unprecedented funding rounds.

"The idea of having huge data centers and then charging a certain amount to use the software is motivating things, and that is not the motive that I would subscribe to," Barto said.

For-profit companies seek only money-making opportunities. If one of them eventually brings the first AGI into the world, the achievement will amount to bragging rights, and even those will be leveraged to boost sales.

Proponents of AGI believe this kind of superhuman, all-digital intelligence is almost here and will radically transform technology and everything else. Sutton suggested that AGI is just a buzzword for marketing campaigns, while Barto remarked that companies developing AI need a better understanding of how the human mind works before they can responsibly build systems with human-level intelligence.

 
They're 100% right on all points. AI is arguably more dangerous than nuclear weapons. Look what happened when the script kiddies got the tools to create no-code botnets. AI is going to be a thousand times worse.
 
They're 100% right on all points. AI is arguably more dangerous than nuclear weapons. Look what happened when the script kiddies got the tools to create no-code botnets. AI is going to be a thousand times worse.
I don't see how it is worse. A nuke would wipe civilisation out. If AI got to the point of escaping computers and creating killer bots, a few EMPs would quickly reverse what we did.
 
OpenAI is a perfect case-in-point: keeping their latest models closed initially in the name of "safety", only to reveal the true reason for that choice, money. This was made apparent by them pivoting, slowly but surely, from a non-profit structure to a for-profit one.
 
Yeah, sure ... instead of developing a technology with enormous potential, why not stop for a couple of years in order to fix imaginary future problems. But then new problems will pop up in somebody's imagination, and we should start fixing those too ...

It's the same hypochondriac thinking behind climate hysteria - we lost trillions to fix imaginary future problems, along with seriously damaging our industrial base. Hopefully 'climate change' is dead now, but given the colossal amount of money we wasted, let us at least learn the lesson and not repeat the same idiocy again.
 
I don't see how it is worse. A nuke would wipe civilisation out. If AI got to the point of escaping computers and creating killer bots, a few EMPs would quickly reverse what we did.


A nuke or even a small exchange would not wipe out civilization. EMPing infrastructure will get you there quick. I love that idea, 'yeah, let's um EMP the monsters and....wither', hahahahah. Besides, Thinkers don't need to wipe out humanity, just the over-achievers and 'get-aheaders' and 'money/resource-grubbers', duh. People at large act fine, even those in poverty.
 
AI is as safe as we make it. We probably need it to overcome the other serious threats we have created for ourselves, chiefly global warming.
 
A nuke or even a small exchange would not wipe out civilization. EMPing infrastructure will get you there quick. I love that idea, 'yeah, let's um EMP the monsters and....wither', hahahahah. Besides, Thinkers don't need to wipe out humanity, just the over-achievers and 'get-aheaders' and 'money/resource-grubbers', duh. People at large act fine, even those in poverty.
One destroys infrastructure that you can rebuild. The other destroys the land and soil that you need to grow food, and contaminates the drinking water. Great logic, mate; don't give up your day job.
 
One destroys infrastructure that you can rebuild. The other destroys the land and soil that you need to grow food, and contaminates the drinking water. Great logic, mate; don't give up your day job.

Rebuild with what left-over technology? And most won't use dirty bombs, as they will want you gone and the land for themselves. Ain't no nuclear war coming. An incident or two, perhaps, though I find it highly unlikely. Musk obviously fears this scenario, but China won't allow it. Yes, China rules the world, so to speak.
 