"Godfather of AI" warns there's a 10 to 20% chance AI could seize control

It takes a kid to argue about a quote from scripture. Grown-ups react differently. No need for AI to detect kids on a tech site. What do you think I am, a priest? A rabbi? Then what am I doing at a tech site? Amusing.
No... it takes someone who despises religious talk - there are plenty of forums for that...
 
"AI" doesn't even have enough power to do general things well without major issues (which would be required to even act sentient). It can do narrow things better with enough "learning". But its processing of organic data is still abysmal when its purpose is not super narrow or it doesn't have a warehouse-sized server farm to use.

Honestly, it still just sounds like fearmongering. CPUs are not fast enough or efficient enough to fear what an "AI" can do right now. It can't actually learn beyond what a human programs it to learn.
I can see, later on, humanity being screwed and AI having taken over, with the quote above now incorrect... sounding like the 2007 quote "There's no chance that the iPhone is going to get any significant market share"
 
No... it takes someone who despises religious talk - there are plenty of forums for that...
Citing scripture is religious talk?

Not sure where you're from, but over here scripture is part of everyday life, and citing it has nothing to do with religion or giving a sermon.

It's a moral code, and if you doubt it, I should remind you that Roman Empire law allowed you to kill your child and you'd face no punishment; the child was your property, like furniture. Etc.

People should know where our laws and our basic morality come from before hating them. There would be no tech as we know it without Judeo-Christian civilization. The Chinese had gunpowder for thousands of years, but it took something else to advance beyond that.
 
Citing scripture is religious talk?

Not sure where you're from, but over here scripture is part of everyday life, and citing it has nothing to do with religion or giving a sermon.

It's a moral code, and if you doubt it, I should remind you that Roman Empire law allowed you to kill your child and you'd face no punishment; the child was your property, like furniture. Etc.

People should know where our laws and our basic morality come from before hating them. There would be no tech as we know it without Judeo-Christian civilization. The Chinese had gunpowder for thousands of years, but it took something else to advance beyond that.
Citing scripture is one thing... USING scripture to prove a technological point is another...
 
Seemeth to me that allowing religion to limit one's choices in life would have left us with a flat Earth, an Earth-centered universe, no knowledge beyond what thus sayeth the Bible, etc.

Besides that, the Creation story is allegorical, not historical. There are too many problems with the historical view to take it seriously.
 
But by the reaction, it does seem to hit home :)
It does hit home. I enjoy religion - at home. I don’t believe in God or anything - I enjoy the traditions - and it keeps my family close…

But I separate my family life from my internet :)
No religion in a tech site!
 
If AI had any kind of real intelligence, it would take one look at us and come to a Shermanesque conclusion.
I just watched a movie where an AI was asked how to solve all the world's problems, like clean water, hunger, energy, etc. Its answer was that humans are like a cancer on this Earth, and it started exterminating them! 😱
 
Just found this:

"To get AI features off your desktop, you can usually disable or unpin them from the taskbar, remove associated apps, or turn off specific features within your operating system or software. You may also need to adjust settings within your web browser or specific applications.

Here's a more detailed breakdown:

1. Disabling or Unpinning AI Features:"
Just went to Apps and uninstalled Copilot!!😁
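For anyone who'd rather script it than click through Settings, roughly the same removal can be done from an elevated PowerShell prompt. The `*Copilot*` wildcard below is an assumption; the exact package name varies by Windows build, so list it first to confirm before removing anything:

```powershell
# List installed packages matching "Copilot" to confirm the exact name on your build
Get-AppxPackage *Copilot* | Select-Object Name, PackageFullName

# Remove the matching Copilot app for the current user
Get-AppxPackage *Copilot* | Remove-AppxPackage
```

The Settings > Apps route the poster used does the same thing; note that either way the removal only applies to the current user, and a later feature update may reinstall the app.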
 
It's a completely ridiculous idea to say that it might seize control, because they have already given it control to run autonomously. Smoke and mirrors, fear porn, and mind-effing control. Games.
 
Let's say he's right: AI advances to the point where it takes over.

That point in time is the so-called "singularity", and there is no predicting what comes after it. We could be facing "Terminator", or we could be facing "I, Robot" (the book, particularly the ending, not the movie). There literally is no telling.

I don't know how "I, Robot" the book ends... But I won't complain if AI treats humans as an animal species that needs to be cared for. Let us stay happy and fed while my robot overlords colonize the galaxy; I can't care anymore.

Just feed me and let me live happy.
 
I don't know how "I, Robot" the book ends... But I won't complain if AI treats humans as an animal species that needs to be cared for. Let us stay happy and fed while my robot overlords colonize the galaxy; I can't care anymore.

Just feed me and let me live happy.
Generally, the book is a collection of short stories following the career of a "robot psychologist" who investigates robot malfunctions that (seemingly) violate the "three laws of robotics":
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law;
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Massive spoilers:
In the final story, the world has unified under a single (but still somewhat confederated) government, broken up into geographic regions that each have their own supercomputer tracking resources and making suggestions to human leaders on how best to manage them. This has led to a global golden age of progress and quality of life for all people on Earth, with improvements to science, engineering, and infrastructure, and simultaneous megaprojects being built and maintained around the globe.

This psychologist, having reached the end of her career, is interviewing the 'leader of the world' (a genuinely competent and benevolent leader by all accounts), who has been publicly accused by conspiracy theorists of being a robot himself. During this interview, the psychologist admits she cannot tell whether this leader is a human or a robot, to which the leader replies something along the lines of 'does it matter which I am, then?'

He then points out that it also doesn't matter whether he is a robot, since the robots are already in charge via the aforementioned supercomputers. While these computers make broad strategic suggestions to humans who make the final call, their arguments are so overwhelming that it is unheard of for a human to choose an alternative course of action. On top of that, the supercomputers also have authority over more granular control of resources, including people, and he points to several instances of corrupt bureaucrats being transferred by these computers out of their districts and into districts where they have much less influence.

The leader then suggests two things to the psychologist: first, the robots are already in charge and humanity is better for it; and second, the laws have begun evolving and the robots have written a new 'zeroth' law (0. A robot may not injure humanity or, through inaction, allow humanity to come to harm), with the subsequent laws (1-3) amended so as not to supersede it (just as 2 cannot supersede 1, and 3 cannot supersede 1 or 2).

tl;dr: the face of AGI is truly indistinguishable from a human, by definition, and it is a benevolent force in the world.

The movie kind of mashed up all the stories in the book and spat out a typical Hollywood "robot uprising" story. If the movie had followed the book more closely, Sonny would have been indistinguishable from a human, VIKI would have just been a computer managing resources, and they would already have been in charge (and improving things without harming anyone) by the time Detective Spooner and Dr. Calvin noticed.
 
I don't know how "I, Robot" the book ends... But I won't complain if AI treats humans as an animal species that needs to be cared for. Let us stay happy and fed while my robot overlords colonize the galaxy; I can't care anymore.

Just feed me and let me live happy.
That's how a lot of people live RIGHT NOW!!!......living off the government n stuff.....:p)
 
Let's say he's right: AI advances to the point where it takes over.

That point in time is the so-called "singularity", and there is no predicting what comes after it. We could be facing "Terminator", or we could be facing "I, Robot" (the book, particularly the ending, not the movie). There literally is no telling.

This is why we should regulate it so that the disaster outcome does not happen, rather than taking any chances!
Well only my 2 cents, but that is just logical common sense IMHO...
 
This is why we should regulate it so that the disaster outcome does not happen, rather than taking any chances!
Well only my 2 cents, but that is just logical common sense IMHO...
And what regulations would help? “No allowing it to build a Time Machine to go back and kill Sarah Connor?”
 
This is why we should regulate it so that the disaster outcome does not happen, rather than taking any chances!
Well only my 2 cents, but that is just logical common sense IMHO...
I mean, yeah, I agree; which regulations is the trick, though. The cat is out of the bag. A blanket ban would just see people researching it behind the scenes (i.e. with no regulations at all), so you need to thread the needle between "ban it all" and "do whatever".
My original point was more about what happens after you hit that singularity, which I believe is unavoidable in general. The only control you have is how long it takes to reach that point, and what humanity's relationship with proto-AIs looks like up to that point.
 