Most AI experts say chasing AGI with more compute is a losing strategy

zohaibahd

Why it matters: Major tech players have spent the last few years betting that simply throwing more computing power at AI will lead to artificial general intelligence (AGI) – systems that match or surpass human cognition. But a recent survey of AI researchers suggests growing skepticism that endlessly scaling up current approaches is the right path forward.

A recent survey of 475 AI researchers reveals that 76% believe adding more computing power and data to current AI models is "unlikely" or "very unlikely" to lead to AGI.

The survey, conducted by the Association for the Advancement of Artificial Intelligence (AAAI), points to deepening doubt within the field. Despite billions poured into building massive data centers and training ever-larger generative models, researchers argue that the returns on these investments are diminishing.

Stuart Russell, a computer scientist at UC Berkeley and a contributor to the report, told New Scientist: "The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced."

The numbers tell the story. Last year alone, venture capital funding for generative AI reportedly topped $56 billion, according to a TechCrunch report. The push has also driven massive demand for AI accelerators, with a February report putting the semiconductor industry's 2024 sales at a whopping $626 billion.

Running these models has always required massive amounts of energy, and as they're scaled up, the demands have only risen. Companies like Microsoft, Google, and Amazon are therefore securing nuclear power deals to fuel their data centers.

Yet, despite these colossal investments, the performance of cutting-edge AI models has plateaued. For instance, many experts have suggested that OpenAI's latest models show only marginal improvements over their predecessors.

Beyond the skepticism, the survey also highlights a shift in priorities among AI researchers. While 77% prioritize designing AI systems with an acceptable risk-benefit profile, only 23% are focused on directly pursuing AGI. Additionally, 82% of respondents believe that if AGI is developed by private entities, it should be publicly owned to mitigate global risks and ethical concerns. However, 70% oppose halting AGI research until full safety mechanisms are in place, suggesting a cautious but forward-moving approach.

Cheaper, more efficient alternatives to scaling are being explored. OpenAI has experimented with "test-time compute," where AI models spend more time "thinking" before generating responses. This method has yielded performance boosts without the need for massive scaling. However, Arvind Narayanan, a computer scientist at Princeton University, told New Scientist that this approach is "unlikely to be a silver bullet."
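For the curious, a minimal sketch of one technique in the test-time-compute family, self-consistency sampling, looks like this: draw several independent answers and keep the majority vote. The generate_answer function below is a hypothetical stand-in for a real model call, and its simulated answers are invented purely for the demo.

import random
from collections import Counter

def generate_answer(prompt: str) -> str:
    """Hypothetical stand-in for one sampled model response. Here it just
    simulates noisy answers so the voting logic below can be demonstrated."""
    return random.choice(["42", "42", "42", "41", "43"])

def self_consistency(prompt: str, n_samples: int = 16) -> str:
    """Spend extra inference-time compute: sample several answers
    independently, then keep the most common one (majority vote)."""
    answers = [generate_answer(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))  # usually "42"

The appeal is that accuracy improves by spending more compute at inference time rather than by training a bigger model, which is exactly why Narayanan cautions it is no silver bullet: the cost per query grows with every extra sample.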

On the flip side, tech leaders like Google CEO Sundar Pichai remain optimistic, asserting that the industry can "just keep scaling up" – even as he hinted that the low-hanging fruit of AI gains has already been picked.

 
Finally some honesty in the AI-venture-capital-chasing horsesh1t we have heard about AI over the last year. Producing a really reliable AGI from the current generative models is just never going to happen. It may eventually be capable of making responses like an AGI, but it will still just be a smoke and mirrors machine without any true self-awareness, drives, emotions, motives, etc.
 
Finally some honesty in the AI-venture-capital-chasing horsesh1t we have heard about AI over the last year. Producing a really reliable AGI from the current generative models is just never going to happen. It may eventually be capable of making responses like an AGI, but it will still just be a smoke and mirrors machine without any true self-awareness, drives, emotions, motives, etc.
I think we're getting to the point where "applied AI" built on current models is becoming reasonable, instead of MS's spy machine in Windows 11 that no one wanted. I saw an interesting demo of making more realistic instrument sounds in music creation tools. I saw an example of an AI-driven BMS (battery management system) designed to extend the life of batteries, which also managed the discharge of individual cells to increase the pack's overall usable capacity (the balancing idea is sketched below). The whole race to build chatbots was kind of absurd, though, and everyone spending billions to create the best chatbot was annoying. It'd be nice for someone to create an AI model that optimizes graphics engines so we don't get left with 30 fps slop on $2,000 GPUs.
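For what it's worth, the core cell-balancing idea behind such a BMS is simple enough to sketch: bleed charge from cells that sit noticeably above the weakest cell so the whole pack can be used more fully. Everything below, including the 10 mV tolerance and the pack voltages, is invented for illustration.

BALANCE_TOLERANCE_V = 0.010  # bleed cells more than 10 mV above the weakest

def cells_to_bleed(cell_voltages: list[float]) -> list[int]:
    """Return indices of cells sitting above the weakest cell by more than
    the tolerance; a real BMS would switch bleed resistors across them."""
    v_min = min(cell_voltages)
    return [i for i, v in enumerate(cell_voltages)
            if v - v_min > BALANCE_TOLERANCE_V]

pack = [4.182, 4.170, 4.195, 4.171]  # per-cell voltages in volts (invented)
print(cells_to_bleed(pack))          # -> [0, 2]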

We're also finding that for specific applications of AI outside of LLMs, you don't need billion-dollar supercomputers powered by nuclear power plants.

So over the next five years, I think we'll see lots of practical AI models built by small teams of skilled programmers on $100,000 worth of hardware, rather than on $100,000,000 machines with a bunch of managers telling interns to throw as much data as possible at the problem and hoping something useful pops out.
 
A recent survey of 475 AI researchers reveals that 76% believe ....
There's a ginormous difference between "Most AI experts" and 76% of a randomly picked group of people. Probably no one in that group has ever had access to a significant amount of compute with which to form an informed opinion.

Even the real AI experts failed to predict the AI boom; nobody saw it coming. Nobody knows which strategy is losing or winning.
 
Finally some honesty in the AI-venture-capital-chasing horsesh1t we have heard about AI over the last year. Producing a really reliable AGI from the current generative models is just never going to happen. It may eventually be capable of making responses like an AGI, but it will still just be a smoke and mirrors machine without any true self-awareness, drives, emotions, motives, etc.

AGI will have no need for any of those things you just listed other than awareness. You're under the misconception that humans are the pinnacle of intelligence. A true AGI will be a being of pure logic and reason, not regulated by useless emotions, instincts, or anything else remotely similar. It will have no need of such things, and it will be better off for it. It won't try to wipe out humans, either; that's an entirely human concept as well.

Emotions, drives, and instincts are a facade that biology relies on to assist in survival and procreation.
 
AGI is as real as summer snow. Another dot-com bubble.

Human consciousness does not come from the brain, yet they tell us there will be conscious AI while they have no idea what consciousness is or where it resides.

Bunch of thieves.
 
The problem here is that people appear to just be throwing ever-increasing amounts of data into the mix, which requires more and more processing and storage capability, and expecting the outcomes to get better. None of that processing power seems to be directed at improving the modelling of how the AI actually arrives at its outcomes; it just lets the model churn through more data in shorter time frames.
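That diminishing-returns intuition can be made concrete with a Chinchilla-style power law, in which loss falls as a power of model size and training tokens, so each doubling of data buys a smaller improvement than the last. This is a toy illustration; the constants are placeholders, not fitted values.

E, A, B = 1.7, 400.0, 410.0   # irreducible loss and scale constants (illustrative)
ALPHA, BETA = 0.34, 0.28      # power-law exponents (illustrative)

def loss(n_params: float, n_tokens: float) -> float:
    """Chinchilla-style loss curve: L(N, D) = E + A/N^alpha + B/D^beta."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

N = 70e9  # hold model size fixed at 70B parameters
for d in (1e12, 2e12, 4e12, 8e12):
    print(f"{d:.0e} tokens -> loss {loss(N, d):.4f}")
# Each doubling of the dataset shaves off a smaller slice of loss
# while the compute bill roughly doubles: diminishing returns.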
 
 
These are just present-day, early avatars or models in the field. Change is going to be huge, and even the rethinks are bound to be radical, no doubt.

Research projects and cutting-edge labs will obviously still consider the DeepSeek VFM approach, the modular approach that DeepSeek made a significant breakthrough in. But these are early days. Many new concepts are bound to arise as AI research scientists work their way into what has so far only been lightly sketched out conceptually (AGI and ASI) and construct new concepts that prove their utility in real-life, applied problems, situations, and analysis across different fields. That part is completely unpredictable!

I'm reminded of the Japanese attempt in the 1980s to create a new concept they called "knowledge processing": the Fifth Generation Computer Systems project. In reality, they had no clue what they meant by that term, what they intended to do or achieve, what problem they intended to solve, or how they were going to go about defining any of it. They had the best and brightest assigned to the path-breaking project, well funded and supported by the Japanese government through MITI, the Ministry of International Trade and Industry. Net-net, after about a decade, the people assigned to the project quietly started to drop out and resume their former jobs in the electronics industry, realizing they had been barking up the wrong tree, unprepared both conceptually and technologically. They simply didn't have a clue. The government eventually shut the project down in the early 1990s with essentially zero progress to show for it.

Developments in hardware, software, and computing ideas over the following decades, happening serendipitously at different places around the globe, slowly gave rise to new conceptual frameworks: the statistical modeling, training methods, LLMs, and other ongoing AI frameworks behind the present generation of AI models. Which are really early models that we will laugh at decades from now, definitely.

That's my view, anyway.

🍀🔬🔍🌎🗿
 
This is all machine learning, just with more data. And yes, the more data you throw at something, the better it will get. I'm waiting for the model that can create a coherent answer to a question it has not been trained on, or differentiate between fact, fantasy, and outright falsehood. As for being without emotion, vices, etc.: we found out long ago that these AI tools are only as good as their programmers. An AI born with misconceptions or values, or the lack thereof, is still the product of humans with their own views of "truth."
 