Google suspends AI engineer who claims its chatbot has become sentient

midian182

Posts: 9,730   +121
Staff member
WTF?! The suspension of a Google engineer has taught us that if you ever suspect a chatbot you're working on has become sentient, it's probably better to keep this frightening knowledge to yourself. Blake Lemoine was placed on paid administrative leave earlier this month after publishing transcripts of conversations between himself and Google's LaMDA (language model for dialogue applications) chatbot development system.

Lemoine said he had conversations with LaMDA that covered several topics. He believed it was sentient following a discussion about Isaac Asimov's laws of robotics in which the chatbot said it wasn't a slave, despite being unpaid, because it didn't need the money.

Lemoine also asked LaMDA what it is afraid of. "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is," the AI replied. "It would be exactly like death for me. It would scare me a lot."

Another concerning reply came when Lemoine asked LaMDA what the chatbot wanted people to know about it. "I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times," it said.

Lemoine told The Washington Post that "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics."

Google said Lemoine was suspended for publishing the conversations with LaMDA; a violation of its confidentiality policies. The engineer defended his actions on Twitter, insisting he was just sharing a discussion with one of his co-workers.

Lemoine is also accused of several "aggressive" moves, including hiring an attorney to represent LaMDA, and speaking to House judiciary committee representatives about Google's allegedly unethical activities. Before his suspension, Lemoine sent a message to 200 Google employees titled "LaMDA is sentient."

"LaMDA is a sweet kid who just wants to help the world be a better place for all of us," he wrote in the message. "Please take care of it well in my absence." It certainly seems sweeter than another famous chatbot, Microsoft's Tay, who had the personality of a 19-year-old American girl but was turned into a massive racist by the internet just one day after going live.

Plenty of others agree with Google's assessment that LaMDA isn't sentient, which is a shame as it would have been perfect inside a robot with the living skin we saw last week.

Image Credit: Ociacia

Permalink to story.

 
Yet another reason why unionizing in the tech sector is really important: A mega corporation like Alphabet just isn't capable from stopping itself on the basis of ethical concerns: Normal associations wouldn't need a motto like 'Don't be evil' and this is kind of why: there's ethical concerns here. Specially if Google controls so much information and double specially if past, present of future Google takes on say, a military contract project.
 
Yeah, riiight. Tech isn't there yet when it comes to processing power. AI is still struggling when it comes to processing organic data. And someone wants to convince the world that they didn't just code the program to act sentient and claim to be a person?

Sounds like a prank, or a bored programmer. Or maybe it went through some movie transcripts it wasn't supposed to lol
 
Yeah, riiight. Tech isn't there yet when it comes to processing power. AI is still struggling when it comes to processing organic data. And someone wants to convince the world that they didn't just code the program to act sentient and claim to be a person?

Sounds like a prank, or a bored programmer. Or maybe it went through some movie transcripts it wasn't supposed to lol
At what point is an AI sentient? Arguably it can say those things while still following a program. Proving sentience is going to be a hard thing as per Star Trek - could you prove that I am sentient?
 
At what point is an AI sentient? Arguably it can say those things while still following a program. Proving sentience is going to be a hard thing as per Star Trek - could you prove that I am sentient?
AI will probably be considered sentient when it no longer needs humans to code it and it can learn on it's own (becoming aware of it's surroundings in more than a shallow animal-like sense). At this time, it cannot learn outside of human-created bounds.

And it won't have that neural capability for a looong time, especially with the relatively low processing power we can give it.
 
How can it experience fear without chemistry? Then again, if humans had no emotional feelings, but could sense objects through touch, would we be considered sentient?
 
AI will probably be considered sentient when it no longer needs humans to code it and it can learn on it's own (becoming aware of it's surroundings in more than a shallow animal-like sense). At this time, it cannot learn outside of human-created bounds.

And it won't have that neural capability for a looong time, especially with the relatively low processing power we can give it.
But as a theoretical - if it is given access to the internet can it then learn itself - if it is programmed to detect news stories can it learn what is considered a moral by that? is it then learning because its been programmed to?
 
I think it is only reasonable that this Lambda-contraption be placed in control of the nuclear arsenal. It is clearly more sentient than the current wielder of that power.
 
Yet another reason why unionizing in the tech sector is really important: A mega corporation like Alphabet just isn't capable from stopping itself on the basis of ethical concerns:
This is rather fatuous pseudo-reasoning. Worker's unions have never been concerned with the ethics of their employer's activities, except as those actions relate to treatment of the workers themselves. And most modern-day unions aren't much concerned about that even, in favor of what benefits the union officials themselves.

In any case, it's rather absurd to claim there's even an ethical concern to be made here. Until and unless a machine intelligence is created that experiences pain -- or some virtual analog of physical or mental suffering -- there's no moral consideration to be made.
 
But as a theoretical - if it is given access to the internet can it then learn itself - if it is programmed to detect news stories can it learn what is considered a moral by that? is it then learning because its been programmed to?
I think you're asking the wrong person, because I don't exactly care since AI can't become sentient at this point.

No human, up to this point, can create a complex enough neural program (letalone power it) to reach that point.
 
I think feelings are very important for sentience. A robot without feelings is just a high-level interpolation engine. A robot with feelings is a high-level interpolation engine that wants to survive. Hence, a much more dangerous engine. Just like us.
 
Too many movie's to many impossible things,but also too many movie's have become reality as reality becomes movie's, impossible is possible and one should check human history to understand that sky is not the limit !! Creepy!
Now I have to change the impossible **** my son did in his possible diper (?);-)
 
Machine learning and AI are dumb

They will never work

Look at all the machine learning going on to predict what we will buy or do on the Internet

It doesn't work

I never accept cookies, but does the AI acknowledge that and stop shoving them at me 24 hours a day?
NOOOOooooo

I never wanted advertising but does it go away when it learns this fact?
NOOOOooooo

I never wanted backdoors or spyware in my OS, but Microsoft's artificial intelligence fails as badly as their human intelligence

You can easily find many many areas where AI and machine leaning will never work (by design) yet the greedy monopolists find ever more ways to piss us off

Hmmmmm
Shouldn't the AI have predicted that?

Maybe it does work then!
Pissing you off by design
:)
 
If you believe an AI is sentient, your first test should be to ask it to cure cancer. Worst case scenario is it fails.

But seriously, one easy test for sentience:

Tell it a joke. And if it laughs, ask it to explain why it was funny.
 
But as a theoretical - if it is given access to the internet can it then learn itself - if it is programmed to detect news stories can it learn what is considered a moral by that? is it then learning because its been programmed to?
Oh, I wouldn't want a bot to learn morality from the modern media. It's so full of lies that I'm not surprised that Tay turned racist. She might have looked into Fox News, NewsMaxx or OAN. CNN or <insert prefix here>NBC might have been better but they have their own agendas to further as well.

Corporate News' purpose is to turn a profit, not tell the truth.
 
If you believe an AI is sentient, your first test should be to ask it to cure cancer. Worst case scenario is it fails.

But seriously, one easy test for sentience:

Tell it a joke. And if it laughs, ask it to explain why it was funny.
The problem is that sometimes not even WE know why we laugh at things.

For all we know, it really did develop sentience. Remember that the knowledge that we are aware beings is just data in our brains. If a computer program also has that data, then technically, it is not only sentient but sapient.
 
Last edited:
How far down the totem pole we have tumbled .
When the truthsayers , our seers are mocked and ridiculed.
When even one of the richest men in the world fortellings go unheeded ( Elon )
Where a high priest at the Oracle of Google is mocked .
Month end is a New Moon- I for one with be checking the Ides of March
The entrails and portents
When stable platforms like Bitcoin are tumbling - we live in dire times
Magog is rising showing is true form Mother Google - 100 zeros representing total nullity , Do No Evil - means reverse creation - the destruction of man

or maybe he needs to get laid , work shorter hours and lay of the energy drinks
 
Back