Putin says country that becomes AI leader will rule the world; Elon Musk warns it will...

midian182


We’re used to hearing Elon Musk warn us about the terrifying dangers of AI, and now it seems one world leader agrees with him: Vladimir Putin. Not only did the Russian president say that artificial intelligence comes with “threats,” he added that whichever country becomes the leader in the field of AI will “rule the world.”

Speaking to students at a career guidance forum, Putin said the “future belongs to artificial intelligence.” While Russia intends to be a leader when it comes to AI development, he added that "it would be strongly undesirable if someone wins a monopolist position." The president said that Russia would be willing to share its knowledge on the subject with the “entire world.” Whether it would be happy to share everything it knows about the subject is another matter.

AI is "the future, not only for Russia but for all humankind," said Putin. "It comes with colossal opportunities, but also threats that are difficult to predict."

"Whoever becomes the leader in this sphere will become the ruler of the world."

Musk was quick to respond to Putin’s comments, warning that the race to perfect AI will most likely cause a third world war. The Tesla boss has previously stated that the technology is more of a threat to humanity’s existence than North Korea.

In a series of tweets, he explained that while Kim Jong-un risks invasion and the collapse of his leadership if he launches a nuclear weapon, an AI wouldn't hesitate to launch a nuke if it believed that to be the best course of action.

Last month, Musk joined 116 experts calling for a ban on lethal autonomous weapons, which they say could become the “third revolution in warfare.”

Speaking at a Senate Armed Services Committee hearing in July, Gen. Paul Selva – the second-highest-ranking general in the US military – said: "I don't think it's reasonable for us to put robots in charge of whether or not we take a human life."


 
"I don't think it's reasonable for us to put robots in charge of whether or not we take a human life."

Fair enough, but do you actually believe this can be prevented?

Is there any way to keep unreasonable people (alt-this, alt-that, fanatics, dictators, etc.) from this kind of power? And when they do get it, do you think it would be reasonable to accept that?

I do not think there are enough "reasonable" leaders to make this happen.

Maybe there ought to be.

More likely, the survivors of the next cataclysm will sort this out.

Have a nice day.

P.S. Where is Adam Selene when you really need him? TANSTAAFL!
 
AI is dangerous, but it's still not at the stage we all fear. Nukes, however, are already here and possibly ready for deployment.
 
Based on the current nature of the internet, I assume that the artificial sex slave industry will conquer AI before the military does.... :) That might help in preventing WW3....
 
The country will be the leader of the world for all of about 4 seconds before the AI takes over and turns us all into batteries.
 
Technology is developed to be the servant of mankind... but:

technology + greed = disaster

technology + hostility = disaster

AI is no different. Let's set goals and policies to prevent disaster. As far as AI causing World War III -- no one is going to be crazy enough to put computers in charge of launching nukes.
 
As far as AI causing World War III -- no one is going to be crazy enough to put computers in charge of launching nukes.
I wouldn't go so far as to say that; people in general have a tendency to be arrogant and naive. Besides, who said anything about authority being given? An AI designed with free will would have the option to take authority. After designing an AI that can think for itself, giving it free will is the next step. It would be a catastrophic step, but one that someone would take. Why? Because, as I said, we in general are arrogant and naive. Especially those in power, who think they can control everything.
 
Elon Musk is fake, AI isn't real yet, and nukes aren't real. Putin is the only solid, tangible subject in this entire post.

Fearism is terrorism. Give up your boogeymen, people. Wake up.
 
Elon thinks the 'most likely' result of competition for AI is WW3? We are GOING to have competition for AI - we have competition for everything. If China is involved they'll just hack into companies' networks and steal it - they aren't going to kill people for it - if they kill us, who is going to buy all the cool AI stuff they build?

We just had a technological revolution with the integrated circuit and communication - it didn't even cause a whiff of a world war. Over the last 100 years we've gone from running hundreds of men across a field toward a machine gun to being able to destroy an entire city from the other side of the world with the push of a button. That's an increase in killing power on the order of millions of times - and it hasn't caused WW3.

If anything - making war scarier helps us work harder to prevent it.

Elon Musk is the dumbest genius alive today.
 
I agree, but then I read this article about computers teaching computers:

Isn't that how Skynet started?
OK, AFAIK "Skynet" hasn't started. But if you insist on flogging that long-dead meme, at least apologize for using it in the prologue to your post.

I know I always do.
 
Elon Musk is fake, AI isn't real yet, and nukes aren't real. Putin is the only solid, tangible subject in this entire post.

Fearism is terrorism. Give up your boogeymen, people. Wake up.
Wait, what? Nukes aren't real? Are you being literal, or just hyperbolic in a weird, metaphorical kind of way? Elon Musk doesn't exist? Someone should tell Tesla, that sounds like information they need to know.
 
Wait, what? Nukes aren't real? Are you being literal, or just hyperbolic in a weird, metaphorical kind of way? Elon Musk doesn't exist? Someone should tell Tesla, that sounds like information they need to know.
Unfortunately, Elon Musk does exist, in spite of the fact that the world would be much better off without him.

He's a bigger camera wh*re than Al Sharpton, which I'm certain left him with the irrepressible urge to get his name and opinion into mainstream news on this issue.

I personally think he should go to Bangladesh and try to get them to stop creating such an enormous carbon footprint. Hey, maybe while he's there he could sell the entire population one each of his $160,000+ Teslas...!
 
I like reading posts from people who obviously haven't listened to experts in the field talk about AI. While there's not much danger in our current Roombas... the AI that's dangerous is on a whole other level of consciousness from the simple AIs we have working right now. Two wholly different things. If you don't understand the dangers of a real, true AI (conscious, self-aware, learning)... you haven't been paying attention.
 
I like reading posts from people who obviously haven't listened to experts in the field talk about AI. While there's not much danger in our current Roombas... the AI that's dangerous is on a whole other level of consciousness from the simple AIs we have working right now. Two wholly different things. If you don't understand the dangers of a real, true AI (conscious, self-aware, learning)... you haven't been paying attention.

Conscious? Self-aware? You gotta be kidding. Your self-awareness is based on your fear. You're an organic life form that has relationships to other humans, and that's why you function in self-preservation. The AI is not even a computer, not even a material object. It's just software. It's disembodied. It's abstraction. It can acquire any amount of data, but it will never relate the data to a "self".
 
"I don't think it's reasonable for us to put robots in charge of whether or not we take a human life."

Fair enough, but do you actually believe this can be prevented?

Is there any way to keep unreasonable people (alt-this, alt-that, fanatics, dictators, etc.) from this kind of power? And when they do get it, do you think it would be reasonable to accept that?

I do not think there are enough "reasonable" leaders to make this happen.

Maybe there ought to be.

More likely, the survivors of the next cataclysm will sort this out.

Have a nice day.

P.S. Where is Adam Selene when you really need him? TANSTAAFL!

"Reasonable" leaders will try to make it the first because AI is absolute power and they have resources to do that. No one needs to spend money for education as example or for benefits or medicine anymore. With each new stage of AI you can invest less and less money in people. It's addictive and unstoppable.
 
Based on the current nature of the internet, I assume that the artificial sex slave industry will conquer AI before the military does.... :) That might help in preventing WW3....
It will also solve the overpopulation problem, once people would rather play with dolls than with real humans.
 
Conscious? Self-aware? You gotta be kidding. Your self-awareness is based on your fear. You're an organic life form that has relationships to other humans, and that's why you function in self-preservation. The AI is not even a computer, not even a material object. It's just software. It's disembodied. It's abstraction. It can acquire any amount of data, but it will never relate the data to a "self".

This is what I'm saying... you don't seem to understand that *your* consciousness is software. True AI will, out of necessity from the term, have some form of "consciousness" that we likely couldn't comprehend, due to said "software's" superior "hardware." I don't think you've thought this through. It seems you are on the path to understanding AI (as are all of us on such a website, I'm sure), but don't quite have a grasp on the potential dangers. Anyways... thanks for being an example of what I was talking about.
 
This is what I'm saying... you don't seem to understand that *your* consciousness is software. True AI will, out of necessity from the term, have some form of "consciousness" that we likely couldn't comprehend, due to said "software's" superior "hardware." I don't think you've thought this through. It seems you are on the path to understanding AI (as are all of us on such a website, I'm sure), but don't quite have a grasp on the potential dangers. Anyways... thanks for being an example of what I was talking about.
No, my consciousness is not software. My consciousness is hardware: its nature and its relationship to other entities. A CPU is a piece of metal. It has about as much consciousness as any other piece of metal. It doesn't give the slightest damn about anything. It can perform computations faster than a human can, but the computations only alter it to the degree that heat alters silicon. Every experience you have alters your physiology and personality profoundly.

But I know you're talking about software here, and how it's becoming increasingly complex and powerful. And, yes, that poses dangers, both foreseen and unforeseen. I wasn't disputing that, only your assertion that the software would acquire consciousness and self-awareness.

Soon (if not already) AI will be able to drive a car better than a human can. But the AI will never, ever, be able to make an executive decision. It will never have any necessity to do so. I hope that those who are developing the AI leave open the ability of the passengers to issue immediate executive commands, such as "Take this next right." or "Pull over, I gotta get out and puke." or "Pull into this rest area so I can look at the rhododendrons." Otherwise the passengers will be as helpless as passengers on an airliner or train, a definite big step down from driving a car.
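As a toy illustration of the kind of passenger override being asked for, here is a minimal sketch; the AutonomousCar class and the command names are invented for the example, not any real vehicle API:

```python
from enum import Enum, auto

# Hypothetical command set -- invented for illustration, not a real car API.
class Override(Enum):
    TAKE_NEXT_RIGHT = auto()
    PULL_OVER = auto()
    STOP_AT_REST_AREA = auto()

class AutonomousCar:
    """Toy controller in which a passenger's command always outranks the planner."""

    def __init__(self):
        self.pending_override = None

    def request(self, cmd: Override) -> None:
        # An immediate executive command from a passenger preempts the route plan.
        self.pending_override = cmd

    def next_maneuver(self) -> str:
        if self.pending_override is not None:
            cmd, self.pending_override = self.pending_override, None
            return f"executing passenger override: {cmd.name}"
        return "following planned route"

car = AutonomousCar()
print(car.next_maneuver())    # following planned route
car.request(Override.PULL_OVER)
print(car.next_maneuver())    # executing passenger override: PULL_OVER
```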

You have to remember that these tech leaders live in a science fiction dream world. This dream world enables them to, on the one hand, promote world-changing new technologies. On the other hand, though, as part of the same culture, there are pronouncements and efforts that are truly preposterous. Elon Musk wants to send colonists to Mars! Sergey Brin (or the other Google guy) has spent over $100 million thus far to develop a flying car!

It's wise that there are people who are attempting to anticipate the dangers of AI. The more they focus on what's actually realistic, and the less they're influenced by science dogma and fiction, the more effective they will be.
 
No, my consciousness is not software. My consciousness is hardware: its nature and its relationship to other entities. A CPU is a piece of metal. It has about as much consciousness as any other piece of metal. It doesn't give the slightest damn about anything. It can perform computations faster than a human can, but the computations only alter it to the degree that heat alters silicon. Every experience you have alters your physiology and personality profoundly.

But I know you're talking about software here, and how it's becoming increasingly complex and powerful. And, yes, that poses dangers, both foreseen and unforeseen. I wasn't disputing that, only your assertion that the software would acquire consciousness and self-awareness.

Soon (if not already) AI will be able to drive a car better than a human can. But the AI will never, ever, be able to make an executive decision. It will never have any necessity to do so. I hope that those who are developing the AI leave open the ability of the passengers to issue immediate executive commands, such as "Take this next right." or "Pull over, I gotta get out and puke." or "Pull into this rest area so I can look at the rhododendrons." Otherwise the passengers will be as helpless as passengers on an airliner or train, a definite big step down from driving a car.

You have to remember that these tech leaders live in a science fiction dream world. This dream world enables them to, on the one hand, promote world-changing new technologies. On the other hand, though, as part of the same culture, there are pronouncements and efforts that are truly preposterous. Elon Musk wants to send colonists to Mars! Sergey Brin (or the other Google guy) has spent over $100 million thus far to develop a flying car!

It's wise that there are people who are attempting to anticipate the dangers of AI. The more they focus on what's actually realistic, and the less they're influenced by science dogma and fiction, the more effective they will be.

Ok...side note...How DARE you have a thought-out discussion with me without name calling and other such nonsense? Just who the hell do you think you are, daring to have an actual debate about issues on the internet without calling people political names? Jokes aside...it's been a while since I could have a discussion like this. Thanks.

Yes, a CPU is a piece of metal, just like a brain is a pile of mush. I'm not sure the atomic structure differing between systems of intelligence matters here. The brain is a processing unit, just like a CPU, along with other analogous structures (memory retention in a brain and in a computer is physically different, though functionally similar).

I think there's every reason to think that a real AI would change its processing upon learned experiences. That's in the definition of a true "AI." The only difference between us (an intelligence) and an AI is that the AI is artificial, or "man-made." There's no reason to think that a sufficiently intellectual being would not have a self-preservation aspect to its intellect. Cogito ergo sum, and the rest follows. With an AI, the rest would follow much quicker than we could imagine.

To finish that point clearly... I don't think the medium in which an intellectual being is "contained" matters. Flesh isn't necessary to have any kind of awareness, conceptually. In fact, I'd say that's the fundamental aspect of our disagreement. My idea of AI would be an intellect sufficiently aware that it passes the Turing test. Whatever electrochemical mechanisms give rise to that shouldn't matter, be it flesh or metal.

I think, and correct me if I'm wrong, you see AI as computer code wherein a computer can make some rudimentary, perhaps even complicated, decisions based around its programming. When I think of an AI, it's a being capable of understanding itself and its situation, one that can learn/code itself with such speed and complexity that it's in effect a "life" form, if you will. Like Data, from Star Trek. I gather this from your comment that an AI will never make an "executive" decision. Indeed, if my understanding is correct, I'd say what you are describing IS just computer code, but that wouldn't be an AI. Which is where our differences are coming from.

In terms of consciousness, I'd claim that neither you nor I have a sufficient understanding of the "reason" there is any concept of "consciousness." The brightest minds in the world discuss this fundamental idea at length, and there's no real understanding of what it means to be "conscious." I'd refer to Sam Harris' podcasts with multiple people to get you started in that area, if you have any interest (I do! Philosophy, ftw).

Anyways...fun discussion. One of the most interesting topics around these days, AI is.
 
Soon (if not already) AI will be able to drive a car better than a human can. But the AI will never, ever, be able to make an executive decision.
You don't know that. In fact, you have already been proven wrong.

https://www.techspot.com/news/70359-facebook-shuts-down-ai-system-after-invents-own.html

Any time a program creates something outside of its program, that's an executive decision. Who knows where this would have gone if it had not been shut down? And who knows, someone could still be working on the project.
 
Ok...side note...How DARE you have a thought-out discussion with me without name calling and other such nonsense? Just who the hell do you think you are, daring to have an actual debate about issues on the internet without calling people political names? Jokes aside...it's been a while since I could have a discussion like this. Thanks.

Yes, a CPU is a piece of metal, just like a brain is a pile of mush. I'm not sure the atomic structure differing between systems of intelligence matters here. The brain is a processing unit, just like a CPU, along with other analogous structures (memory retention in a brain and in a computer is physically different, though functionally similar).

I think there's every reason to think that a real AI would change its processing upon learned experiences. That's in the definition of a true "AI." The only difference between us (an intelligence) and an AI is that the AI is artificial, or "man-made." There's no reason to think that a sufficiently intellectual being would not have a self-preservation aspect to its intellect. Cogito ergo sum, and the rest follows. With an AI, the rest would follow much quicker than we could imagine.

To finish that point clearly... I don't think the medium in which an intellectual being is "contained" matters. Flesh isn't necessary to have any kind of awareness, conceptually. In fact, I'd say that's the fundamental aspect of our disagreement. My idea of AI would be an intellect sufficiently aware that it passes the Turing test. Whatever electrochemical mechanisms give rise to that shouldn't matter, be it flesh or metal.

I think, and correct me if I'm wrong, you see AI as computer code wherein a computer can make some rudimentary, perhaps even complicated, decisions based around its programming. When I think of an AI, it's a being capable of understanding itself and its situation, one that can learn/code itself with such speed and complexity that it's in effect a "life" form, if you will. Like Data, from Star Trek. I gather this from your comment that an AI will never make an "executive" decision. Indeed, if my understanding is correct, I'd say what you are describing IS just computer code, but that wouldn't be an AI. Which is where our differences are coming from.

In terms of consciousness, I'd claim that neither you nor I have a sufficient understanding of the "reason" there is any concept of "consciousness." The brightest minds in the world discuss this fundamental idea at length, and there's no real understanding of what it means to be "conscious." I'd refer to Sam Harris' podcasts with multiple people to get you started in that area, if you have any interest (I do! Philosophy, ftw).

Anyways...fun discussion. One of the most interesting topics around these days, AI is.
We've all read articles about scientists who try to quantify the abilities of the human brain in terms of computing specs. It's very, very silly -- they're so caught up in their aspirations and enthusiasm that they ignore the fact that human (and other animal) thought and memory differ in the most fundamental way from what computing does.

ALL human thought is imagery -- all of it. And by imagery I mean not just visual imagery but sound and other sensory experience as well. This is incomparable to the crude processes that electronic computing uses to work with data. A good way to explain this is to imagine that you were part of the famous Jeopardy game show competition that pitted the greatest human champions against a special IBM supercomputer.

Let's say a question was presented that asked you to remember the name of George Washington's home. Immediately a series of rapid-fire images would flash in your head, each one prompting the next. You'd see the guy with the wig and the serious facial expression, and pictures of his home you might have seen in books. You'd see the words "Mt. Vernon," but you'd see them as an image and also hear (internally) the pronunciation.

Watson (the computer), by contrast, runs through its entire huge database of numerically encoded text and matches the inputted question using clever algorithms created by humans. It's so much faster, but also so much cruder, than the way you do it (and AI still does it basically the same way).
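A minimal sketch of that brute-force matching idea, purely for flavor -- the three-entry "database" and the word-overlap scoring here are invented, and Watson's real pipeline was far more sophisticated:

```python
import re

# Toy "database" of text, invented for this example; a real system
# indexes millions of documents.
corpus = {
    "Mount Vernon": "Mount Vernon was the plantation home of George Washington.",
    "Monticello": "Monticello was the plantation home of Thomas Jefferson.",
    "The Hermitage": "The Hermitage was the plantation home of Andrew Jackson.",
}

def tokens(text: str) -> set:
    """Lowercase a string and split it into alphabetic word tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question: str) -> str:
    """Return the entry whose text shares the most word tokens with the question."""
    q = tokens(question)
    return max(corpus, key=lambda name: len(q & tokens(corpus[name])))

print(answer("What was the name of George Washington's home?"))  # Mount Vernon
```

Crude word counting like this obviously isn't Watson, but it captures the contrast being drawn: exhaustive symbol matching over stored text rather than associative imagery.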

The other difference is that human memory ISN'T STORED. If all the information that your senses have experienced in your entire life were encoded in your brain, your brain would have to be as large as the State of Nebraska. Instead, memory is triggered as a pattern of neuronal firing by adaptation. Most of what you experience you never recall again; you recall things only if and when they become relevant.

Computers, like all tools, enhance very specific human abilities, but it's a mistake to say that they are 'intelligent'.
 