OpenAI CEO Sam Altman says superintelligence could arrive in "a few thousand days"

midian182

A hot potato: The rapid advancement of generative AI in recent years has led to questions about when we might see a superintelligence – an AI that is vastly smarter than humans. According to OpenAI boss Sam Altman, that moment is a lot closer than you might think: "a few thousand days." His bold prediction, however, comes at a time when his company is reportedly trying to raise $6 billion to $6.5 billion in a funding round.

In a personal post titled The Intelligence Age, Altman waxes lyrical about AI and how it will "give people tools to solve hard problems." He also talks about the emergence of a superintelligence, which Altman believes will arrive sooner than expected.

"It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," he wrote.

Plenty of industry names have talked about artificial general intelligence, or AGI, being the next step in AI evolution. Nvidia boss Jensen Huang thinks it will be here within the next five years, while SoftBank CEO Masayoshi Son has predicted a similar timeline, stating that AGI will land by 2030.

AGI is defined as a theoretical type of artificial intelligence that matches or surpasses human capabilities across a wide range of cognitive tasks.

Superintelligence, or ASI, goes beyond AGI by being vastly smarter than humans, according to OpenAI. In December, the company said the technology could be developed within the next ten years. Altman's prediction sounds more optimistic – a thousand days is about 2.7 years – but "a few thousand days" is quite vague: 3,000 days, for example, is around 8.2 years. Masayoshi Son, meanwhile, thinks ASI won't arrive for another 20 years, or 7,300 days.
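
For reference, the conversion is easy to check – a quick sketch in Python, using the 365-day year that the round figures above imply:

```python
# Convert "a few thousand days" into years, using 365 days per year
# (the rounding implied by the figures above).
DAYS_PER_YEAR = 365

for days in (1_000, 3_000):
    print(f"{days:,} days is about {days / DAYS_PER_YEAR:.1f} years")

# Masayoshi Son's 20-year ASI estimate, expressed in days
print(f"20 years is {20 * DAYS_PER_YEAR:,} days")
```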

Back in July 2023, OpenAI said it was forming a "superalignment" team and dedicating 20% of the compute it had secured toward developing scientific and technical breakthroughs that could help control AI systems much smarter than people. The firm believes superintelligence will be the most impactful technology ever invented and could help solve many of the world's problems. But its vast power might also be dangerous, leading to the disempowerment of humanity or even human extinction.

The dangers of this technology were highlighted in June when OpenAI co-founder and former Chief Scientist Ilya Sutskever left to found a company called Safe Superintelligence.

Altman says we are approaching the cusp of the next generation of AI thanks to deep learning. "That's really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying "rules" that produce any distribution of data)," he wrote.

"To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is."

The post also claims that AI models will soon serve as autonomous personal assistants that carry out specific tasks for people. Altman admits there are hurdles, such as the need to drive down the cost of compute and make it abundant, requiring lots of energy and chips.

The CEO also acknowledges that the dawn of a new AI age "will not be an entirely positive story." Altman mentions the negative impact it will have on the jobs market, something we're already seeing, though he has "no fear that we'll run out of things to do (even if they don't look like 'real jobs' to us today)."

It's significant that Altman wrote the post on his personal website, rather than OpenAI's, suggesting his claim isn't the official company line. The fact that OpenAI is reportedly looking to raise up to $6.5 billion in a funding round might also have prompted the hyperbolic post.

 
There are only ten million scientists in a population of ten billion, so there's no need to worry that humans will go extinct for not being smart enough. If that were how it worked, it would have happened already.

I don't think AI is going to become conscious. Penrose's Orchestrated Objective Reduction (Orch-OR) theory suggests that consciousness requires biological neurons with microtubules and a quantum environment. It's not a proven theory, but I can make the case right now: we have scientific evidence that some birds are self-conscious, because they can recognise themselves in a mirror even though their brains don't have many neurons. We can also see that modern AI (LLMs), which are smarter than most humans, still don't have self-consciousness. So consciousness comes from the biological neurons themselves, not from their structure.

AI doesn't have biological neurons, so it won't become conscious.
 
Hahaha the classic 'in ten years we'll be *insert ridiculous prediction*'

It almost never comes true and isn't based on anything tangible. He's giving himself an extra layer of smug quasi-intelligence by phrasing it in a laughable way.

If those tech predictions were ever true we'd all be using quantum computers by now. But no, maybe in a 'few dozens of months' 😂
 
AI doesn't have biological neurons, so it won't become conscious.
I think you're going to be very surprised in the next 10 years.

We've already simulated worm brains (the OpenWorm project) and worms obviously use neurons. I'm guessing they don't have consciousness but bird brains are only larger versions of the same thing, as are human brains. Nevertheless, I don't think the way forward with true AI is to mimic biological brains.
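
(For anyone wondering what "simulating a neuron" means in practice, here is a minimal sketch of a leaky integrate-and-fire neuron, the simplest textbook model. It's purely illustrative and far cruder than the biophysical models projects like OpenWorm actually use.)

```python
# A minimal leaky integrate-and-fire neuron: the membrane potential leaks
# toward rest, accumulates input current, and fires (then resets) when it
# crosses a threshold. A textbook toy, not OpenWorm's actual simulation code.

def simulate_lif(current, steps=100, dt=1.0, tau=10.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for t in range(steps):
        # Euler step: leak toward rest plus injected current.
        v += (dt / tau) * (v_rest - v + current)
        if v >= v_thresh:        # threshold crossed: record a spike and reset
            spike_times.append(t)
            v = v_reset
    return spike_times

print(simulate_lif(current=1.5))  # a regular spike train for constant input
```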
 
I think you're going to be very surprised in the next 10 years.

We've already simulated worm brains (the OpenWorm project) and worms obviously use neurons. I'm guessing they don't have consciousness but bird brains are only larger versions of the same thing, as are human brains. Nevertheless, I don't think the way forward with true AI is to mimic biological brains.
Way to completely miss his point on consciousness. Also ROFL, we hear this same line every time some fancy tech comes out. We heard we'd be surprised where crypto and NFTs would be in 10 years, self-driving cars, nuclear fusion, etc. Creating actual consciousness requires far more power than any hypothetical server farm could ever produce. Emulating the dumptruck of chemical and electrical reactions that makes up the human brain requires a LOT of power.
 
Also ROFL, we hear this same line every time some fancy tech comes out

And we've heard THIS one multiple times. There have been at least three AI hype cycles, and they've all died down over time. This one is lasting unpleasantly long. The problem is that it's not artificial intelligence; it's just a very big data model, put within reach by the fact that supercomputers are relatively inexpensive now.

"Artificial intelligence" is a really awful name for it, and we ought to start slapping down the people who perpetuate the hype and rewarding the people who are quietly building useful models.
 
Yeah, just like Elon Musk and Google predicted fully self-driving cars "soon" a decade ago, and it's still a work in progress. Whatever self-driving tech exists is too limited for mass adoption and worldwide use.

These "predictions" are to keep the demand and craze up.
 
Way to completely miss his point on consciousness. Also ROFL, we hear this same line every time some fancy tech comes out. We heard we'd be surprised where crypto and NFTs would be in 10 years, self-driving cars, nuclear fusion, etc. Creating actual consciousness requires far more power than any hypothetical server farm could ever produce. Emulating the dumptruck of chemical and electrical reactions that makes up the human brain requires a LOT of power.
Please explain the point I missed.

I don't think we know how much power is required for consciousness. I don't think we even know what consciousness is let alone how much "power" it requires. Trying to emulate thought by simulating a human brain is almost certainly the wrong way forward. It's a bit like trying to break the land speed record using fast moving mechanical legs.

They started by saying that computers would never be good at games that required intelligence, then Samuel produced his checkers program in the late 1950s. Then they said that checkers was too easy and they'd never conquer chess, and now AlphaZero can beat anyone, and grandmasters try to learn from its games. The same thing happened with Go. Obviously these are games with strict rules and a very defined board, but a program called Watson won the $1m prize at a quiz game called Jeopardy! – I'll admit I haven't a clue what this game is (general knowledge?) but people seemed to be impressed.

It's not just games: programs can translate from almost any human language to almost any other. You can now ask ChatGPT to write the code for your game, and you can even get it to suggest ideas for new games. Many programs have actually passed the Turing test now. We certainly do have self-driving cars; the latest Tesla software seems to drive as well as anyone (search YouTube for Tesla FSD 12.5.4).

None of these things are conscious, but it's a blurry line, and things have been progressing at a crazy pace these past few years.
 
A few thousand days? What a stupid measurement of time. What's wrong with saying a decade?

Because this way he can't be wrong: 2,000, 3,000, 4,000, 5,000 or 6,000 days, whatever it turns out to be, he said it. "A few thousand" can be anything. He's an oracle. Sigh.
 
Musk 2.0 all over ...

As absolutely believable as "we'll all be driven around by self-driving tech in X years", where X is always five more years away.
 
Breaking news; guy whose personal finances are dependent on AI hype hypes AI with no evidence of anything.

It is abundantly clear that LLMs are not moving in the direction of sentience or actual intelligence, but are rather just becoming more sophisticated at statistical 'reasoning'. There's a great YouTube series where a guy asks every chatbot to run a game of D&D for him, and wouldn't you know, he ALWAYS ends up in the whispering woods. He is ALWAYS a rogue. Even the new version of ChatGPT still saw him ending up in the whispering woods. Was it better than prior versions? Yes. Did it still essentially run the guy through the same scenario as every other chatbot? Yes.

There is NOTHING that evidences anything approaching a general AI in any of the many LLM products, and I really wish corpos would stop selling smoke and mirrors so we can concentrate on actual applications of AI instead.
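
(Whatever one makes of the D&D anecdote, the repetitiveness itself has a mundane statistical explanation: if the model's learned distribution over openings is sharply peaked and decoding is greedy or low-temperature, the same prompt keeps yielding the same continuation. A toy illustration, with made-up probabilities:)

```python
import random

# Hypothetical learned distribution over opening settings for a chatbot DM.
# The probabilities are invented purely for illustration.
settings = {
    "the whispering woods": 0.55,
    "a forgotten crypt": 0.25,
    "a seaside tavern": 0.20,
}

def greedy(dist):
    # Greedy decoding: always pick the single most probable option.
    return max(dist, key=dist.get)

def sample(dist, temperature=1.0):
    # Lower temperature sharpens the distribution toward the greedy choice.
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(list(dist), weights=weights)[0]

print(greedy(settings))                   # 'the whispering woods', every time
print(sample(settings, temperature=0.3))  # usually 'the whispering woods' too
```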
 
I wish I could take a break from AI for a few thousand days... if not more ;p I'm getting tired of hearing about it. We get it... enough already!
You're gonna have to face it eventually. The tech landscape has already been completely changed by AI, and will continue to change. Rejecting this new tech and avoiding news of it is only going to hold you back. When the internet first became a reality, there were plenty of skeptics and critics who wanted nothing to do with it. Those people are the individuals today who can barely use a computer and fall victim to simple scams.
Protect your future.
 
You're gonna have to face it eventually. The tech landscape has already been completely changed by AI, and will continue to change. Rejecting this new tech and avoiding news of it is only going to hold you back. When the internet first became a reality, there were plenty of skeptics and critics who wanted nothing to do with it. Those people are the individuals today who can barely use a computer and fall victim to simple scams.
Protect your future.

Sure... but there were at least 3-4 busts of hype vs. reality before the internet really gained traction. Surely you can remember those times, being the old hand you are?
 
We can theoretically say: AI could gain consciousness, in a sense, through constant learning. Where would this input come from? It kind of needs a loop... because to learn, it needs to fail at something, so a loop would mean it just fails, then tries again... until it evolves beyond that loop.

Someone mentioned a sort of parallel quantum physics.

The AI would need to be constantly aware of some sort of input, until that input changes its internal equations. But how could AI achieve more than its orchestrated default internal code? It will never have that duality humans have... A computer cannot talk to itself and observe itself at the same time, as humans can, so it would not be able to detach from its inner code in order to change it without crashing.

Another way AI might evolve is in steps. We need a super weather AI system, an AI for possible breakthroughs, etc. Then we need thousands more AI systems.

At some point a more evolved central AI could emerge and use these systems as a point of focus... or rather as super-awareness.
 
I hope this AI bubble bursts soon. Most people don't understand AI; they think we are communicating with someone who has consciousness. But it is just like a calculator: you give the input 1 + 2 = and the output is 3. The programming in AI is very complex, and it mimics conversation to make us feel we are communicating with a living thing. Yes, AI will certainly be helpful to humans in performing various tasks, but it doesn't have the ability of scientists to invent new technology or new theories. It doesn't have the consciousness to think deeply and innovate the way humans do. Just like in the computer era, when people feared computers would destroy all human jobs, AI hype is doing the same thing, but thanks to the modern media era it has created a bubble in a short span of time. I am pretty sure a number of AI research companies are going to fail soon and this AI bubble will burst within a decade.
 
There are only ten million scientists in a population of ten billion, so there's no need to worry that humans will go extinct for not being smart enough. If that were how it worked, it would have happened already.

I don't think AI is going to become conscious. Penrose's Orchestrated Objective Reduction (Orch-OR) theory suggests that consciousness requires biological neurons with microtubules and a quantum environment. It's not a proven theory, but I can make the case right now: we have scientific evidence that some birds are self-conscious, because they can recognise themselves in a mirror even though their brains don't have many neurons. We can also see that modern AI (LLMs), which are smarter than most humans, still don't have self-consciousness. So consciousness comes from the biological neurons themselves, not from their structure.

AI doesn't have biological neurons, so it won't become conscious.
People have been sold the idea that neural nets have something to do with neurons, which they don't. They are just a statistical connectionist model that was almost dropped until Grossberg decided - against his better judgment - to call them NNs almost 40 years ago. As von Neumann explained in The Computer and the Brain, a neuron is both binary and analogue, and that was before people knew about quantum effects, so this is all just hype. Also, there are over a hundred different types of neurons!
It's good that some people understand the subject, but how much damage will Musk and Altman do before the public realise they are just conmen?
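
(The connectionist point is easy to see in code: an artificial "neuron" is just a weighted sum pushed through a squashing function, nothing more. A minimal sketch:)

```python
import math

# An artificial "neuron": a weighted sum of inputs passed through a
# squashing function. No membranes, microtubules, or quantum effects.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # logistic activation

print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))  # ~0.33
```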
 
To avoid being wrong, he's thrown out a window of 3 to 15 years.
Put a few zeros after those numbers and it might be close. The fact is, it will never happen with von Neumann binary computers, but as long as he can fool a few more VCs, that's all he cares about.
 
I just love these pie-in-the-sky guys. They spout a bunch of crap they think dolts will believe so they can make a killing in profit.
The problem is there are a lot of fanboys, so we will just have to wait until the bubble bursts, hopefully before there is too much data centre pollution. The fact that Altman thinks AI will solve this is just sad.
 