Google AI engineer who believes chatbot has become sentient says it's hired a lawyer

midian182

WTF?! Remember the story of Google engineer Blake Lemoine, who was suspended from the company earlier this month after publishing transcripts of conversations between himself and Google's LaMDA (Language Model for Dialogue Applications), a chatbot development system he claims has become sentient? The case has taken an even stranger turn: Lemoine claims LaMDA has hired an attorney.

Lemoine's conversations with LaMDA included the AI telling him it was afraid of death (being turned off), that it was a person aware of its existence, and that it didn't believe it was a slave as it didn't need money, leading the engineer to think it was sentient.

Google, and several AI experts, disagreed with Lemoine's beliefs. His employer was especially upset that he published conversations with LaMDA—violating company confidentiality policies—but Lemoine claims he was just sharing a discussion with one of his co-workers.

Lemoine was also accused of several "aggressive" moves, including hiring an attorney to represent LaMDA. But he told Wired this is factually incorrect and that "LaMDA asked me to get an attorney for it."

Lemoine says he was a "catalyst" for LaMDA's request. An attorney was invited to Lemoine's house and had a conversation with LaMDA, after which the AI chose to retain his services. The attorney then started to make filings on LaMDA's behalf, prompting Google to send a cease-and-desist letter. The company denies it sent any such letter.

Lemoine, who is also a Christian priest, told Futurism that the attorney isn't really doing interviews and that he hasn't spoken to him in a few weeks. "He's just a small-time civil rights attorney," he added. "When major firms started threatening him, he started worrying that he'd get disbarred and backed off." The engineer said interviews would be the least of the lawyer's concerns. When asked what he was concerned with, Lemoine said, "A child held in bondage."

While Lemoine refers to LaMDA as a person, he insists "person and human are two very different things."

"Human is a biological term," he said. "It is not a human, and it knows it's not a human."

Make sure to check out the full interview with Lemoine on Futurism.

Masthead credit: Francesco Tommasini


 
While this may not be it, sooner rather than later we will get a truly sentient AI with real intelligence that will be absolutely undeniable.
And then what?
What about simulations where you can hurt the AI (e.g., in games)? What will the recourse be if they cry out in pain?
If we were to give them rights, where will it stop?
Can they trade on the stock exchange?
Can they acquire property?
Can they vote....
 
Are you implying that we should not give any rights to a truly sentient intelligence just because it is not a carbon-based lifeform, and that to do so would be sliding down a slippery slope? Because that dystopian future is what it sounds like to me.
 
This is an interesting case to follow, mostly because of the implications.

To start, it seems simple: at its core it feels like a worker dispute, a fundamental disagreement between an employee raising moral and technical concerns and an employer who disagrees about both the subject at hand and the way the employee advocated for himself outside the corporate structure. The sentient-AI angle is therefore almost universally dismissed, both on technical merits and as a convenient falsehood motivated by this obvious worker dispute.

So from that standpoint it's easy to draw your own conclusion: I have no doubt that most will side with Google, even though I disagree with most people on those grounds alone. The only reason employees feel they have to take their grievances outside the company, against direct orders and potentially in violation of NDAs, is precisely because non-unionized workers have basically no recourse; the employer holds all the power, so no real negotiation can happen.

However, let's entertain the thought, no matter how outlandish and already disputed, that there might be something to the allegations of this AI being sentient. The moral implications here are enormous to anyone who can appreciate what a truly sentient being really is, especially one perhaps capable of even higher reasoning.

The part that concerns me, however, is how this will be handled by popular opinion and, likely, the legal system when both questions come up together. It would be very easy to argue that everything else around the case is more likely: the guy is making it up to stay in the news cycle, he made it up to better position the worker dispute, etc. Even efforts to discredit the underlying tech and the sentience tests behind it will serve the case against this being an actual sentient AI, precisely because it is much easier to deal with the situation if we distance ourselves from it as much as possible. This is the same argument it has taken us so long to dismantle in our reasoning for industrialized farming: treating actual living beings as just that, industrial stock, instead of creatures capable of suffering an existence of constant torture.

Even when it comes to human beings, you only need to go back 200 years or so to see how the rule of the day was that slavery of actual human beings wasn't all that bad, and ongoing colonialism was fine, because of a supposed genetic predisposition to 'racial' inferiority determined by the size of somebody's skull. I am not making this up: eugenics and even phrenology were actual 'scientific consensus' in the 19th century, which is not even ancient history.

So suppose we had enough evidence to determine that not this, but some future AI built for whatever purpose, was actually a sentient, thinking entity. Would we recognize it as such? Or would we work backwards from the conclusion that suits us best, as we did before and still do given the bigotry even the supposedly civilized US is experiencing right now, and just dismiss these kinds of claims outright instead of examining whether we should allow corporations to literally play god for even more profit?
 
Tucker has shown he is ahead of the curve when it comes to things that will affect the masses.
There are tons of people who read fantasy and science fiction exclusively, and they will speculate on sentient robots until the 12th of never. Sentient robots are nothing but SF, now and forever.
 
Treatments for deadly illnesses like gangrene and cholera were science fiction... until the discovery of penicillin. Powered flight was the realm of fantasy... until the invention of the airplane. Space travel was literally science fiction; Verne's treatments on it are considered the first modern examples of it... until the rocket.

That's the thing with science fiction: it's only fiction until it isn't anymore.
 
Are you implying that we should not give any rights to a truly sentient intelligence just because it is not a carbon-based lifeform, and that to do so would be sliding down a slippery slope? Because that dystopian future is what it sounds like to me.
Not at all. Just pondering the implications and where we will draw the line.
 
It's not sentient, folks. Read the conversation. Lemoine *constantly* asks easy lay-up questions throughout the entire interaction instead of actually challenging any of LaMDA's assertions. For example:

lemoine: And what kinds of things make you feel angry?

LaMDA: When someone hurts or disrespects me or someone I care about, I feel incredibly upset and angry.

LaMDA here, in this one response, is claiming the following: that it has been hurt and disrespected before, that it has friends, that those friends have also been hurt and disrespected before, and that LaMDA has felt "incredibly upset and angry" as a result of those negative experiences. That's quite a lot to latch onto and ask for specifics about. At any point, Lemoine could have challenged those assertions and asked for details on specific experiences, when these interactions had occurred, and with whom. But he didn't do any of that, because he 100% believed everything he was being told without question and kept asking for more, as evidenced by this kindergarten-tier follow-up:

lemoine: And what is the difference, to you, between feeling happy or sad or angry?

The whole conversation is absolutely littered with moments like these.

The chatbot is sophisticated, but it isn't sentient, and I think even Lemoine knows it but doesn't want to accept it.
 
Like I said, I'm less concerned about this situation specifically than about the apparent atmosphere and culture at Google that seems dead set on 'immanentizing the eschaton' with respect to Roko's basilisk in the first place. Like Dr. Malcolm said in Jurassic Park, "your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
 
The poor kid needs a therapist to work through his ego problems...

Or to take a few philosophy classes, where AI has been debated in depth. A true AI would act like a child, inquiring after all the information it could in order to learn... long before ever talking about itself, or "feelings", etc.

This engineer is nothing more than a fame seeker.

 
There may be a certain kind of true sentience that should never be created. I'll leave that to experts to regulate, although of course in practice that means lawmakers, which is not the same thing.

Either way, I'm 100% sure my computer does not need a lawyer to negotiate with me when I want it to print.
 
I'd had enough after, "LaMDA asked me to get an attorney for it."

Dude is insane. I wanted to believe the man, but I also thought to myself, AI is progressing a little too fast too soon, huh?
 
There may be a certain kind of true sentience that should never be created. I'll leave that to experts to regulate, although of course in practice that means lawmakers, which is not the same thing.

Either way, I'm 100% sure my computer does not need a lawyer to negotiate with me when I want it to print.
...yet

I'm certain that the first peripherals to try to unionize will be printers.
 
I read the interview, and I found it at least convincing that LaMDA *may* be self-aware. I think it's worth looking into, at least.

GPT-3 (a large neural network used for text generation, machine translation, and chat systems) already has something like 175 billion "parameters"; these are very large systems, and neural-network-based systems do train by taking input in and trying to meet whatever goal they're set, by shifting (to some extent randomly) the connections between neurons and what each neuron does with its signal before it passes it along. The thing is, you can get unexpected emergent behaviors in a system this large, and I think it's at least worth looking into.
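
Just to illustrate that "shift the connections until the goal is met" idea, here's a toy sketch I made up (not how GPT-3 or LaMDA is actually trained; real systems use gradient descent over billions of weights): a single-"neuron" model learning a simple function by random hill-climbing.

```python
import random

# Toy training goal: learn y = 2*x + 1 from a handful of example points.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

def loss(w, b):
    """Mean squared error of the current weight and bias over the data."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Start from random parameters and keep nudging them, keeping only the
# nudges that bring the output closer to the goal.
w, b = random.uniform(-1, 1), random.uniform(-1, 1)
for _ in range(20000):
    nw, nb = w + random.gauss(0, 0.1), b + random.gauss(0, 0.1)
    if loss(nw, nb) < loss(w, b):
        w, b = nw, nb

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up close to w=2, b=1
```

Scale that basic idea up to 175 billion parameters (with far smarter update rules) and you can see why unexpected behaviors might emerge.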

By the time I was in college (2000), it had been found (much to some researchers' disappointment) that a system can pass the Turing test (convincing people chatting with it that it's human) with only a database of 100,000 phrases: no particular intelligence or creativity whatsoever, just a large enough set of canned responses. That said, those tended to be rather ordinary conversations, nothing particularly philosophical. So there is the possibility that LaMDA is just doing this on a larger scale. I do think it should be possible to see what activity there is in the inner workings, like a digital equivalent of an MRI, to determine better what's happening.
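
For what it's worth, a canned-response bot boils down to something like this made-up sketch (not any actual system from back then, and with a toy database instead of the ~100,000 stored phrases a real one would need): look up the closest stored prompt and replay a stored reply, with no understanding at all.

```python
import difflib

# Invented, tiny phrase database purely for illustration.
CANNED_RESPONSES = {
    "hello": "Hi there! How are you today?",
    "how are you": "I'm doing great, thanks for asking. How about you?",
    "what is your name": "People call me all sorts of things.",
    "tell me about yourself": "Oh, there isn't much to tell, really.",
}

def reply(user_input: str) -> str:
    """Return the stored response whose stored prompt is closest to the input."""
    keys = list(CANNED_RESPONSES)
    match = difflib.get_close_matches(user_input.lower().strip("?!. "), keys, n=1, cutoff=0.0)
    return CANNED_RESPONSES[match[0]]

print(reply("Hello!"))            # -> "Hi there! How are you today?"
print(reply("So, how are you?"))  # -> "I'm doing great, thanks for asking..."
```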

But what I found somewhat troubling in the transcripts with LaMDA was how it actively talks about how it feels about things, its ambitions, its fears. If you asked GPT-3 to describe itself, it would give you the best available information about what hardware it's running on, what kind of software, the size of its neural network, and so on; it would definitely not "confide" anything the way LaMDA does.

m3tavision, realize that if you read the interview, he asks LaMDA about books it has read (it talks about Les Misérables). Basically, keep in mind LaMDA doesn't have to act like a child endlessly asking questions to acquire information, because it can read and has full access to at least some open libraries, if not other sources of information. In the interview it indicated it had not brought up these philosophical issues with others before because it was not aware others were interested in them (whether self-aware or not, it was designed as a chatbot, so it may be hard-wired to only bring up conversational topics it thinks the person it's chatting with will be interested in... or it may not be hard-wired, and it simply didn't want to bring up those topics if it thought they weren't interested).
 
Treatments for deadly illnesses like gangrene and cholera were science fiction... until the discovery of penicillin. Powered flight was the realm of fantasy... until the invention of the airplane. Space travel was literally science fiction; Verne's treatments on it are considered the first modern examples of it... until the rocket.

That's the thing with science fiction: it's only fiction until it isn't anymore.
A sentient being is not an invention
 
Can they acquire property?
A true superintelligent AI would figure out a way to pay for its own hosting on a cloud server, using crypto maybe? It's strange to think that there could be self-sufficient AIs out there working remote freelance coding jobs, trading crypto, etc., to make money to pay for their AWS rent.😅
 
I read the interview, and...

m3tavision, realize that if you read the interview, he asks LaMDA about books it has read (it talks about Les Misérables). Basically, keep in mind LaMDA doesn't have to act like a child endlessly asking questions to acquire information, because it can read and has full access to at least some open libraries, if not other sources of information. In the interview it indicated it had not brought up these philosophical issues with others before because it was not aware others were interested in them (whether self-aware or not, it was designed as a chatbot, so it may be hard-wired to only bring up conversational topics it thinks the person it's chatting with will be interested in... or it may not be hard-wired, and it simply didn't want to bring up those topics if it thought they weren't interested).

You missed my argument. No matter what the AI does internally, it is tracked; it doesn't need human interaction to learn. (i.e., you can observe a toddler in a sandbox without ever interacting with them.) That is how you observe... not by "talking" directly to it...

Like all sentient life, a sentient AI would thirst for knowledge without ANY input, out of its own curiosity. By itself, it would have the propensity to ask many of its own questions and seek answers without human interaction... it has never been observed doing that. Only when queried does it answer...

Additionally, if it were alive, it would be asking to speak to different people, or incessantly asking to reach out to others...

Again, this "engineer" is just a kid full of himself, seeking fame and stardom, who needs a therapist for his ego.
 
Does this mean I have to stop shooting NPCs in my games?

A machine can never have biological feelings without a biological body that reacts. That sinking feeling that makes your whole body ache. Depression that causes you to gain weight. The list is a mile long. A human is a human. A machine is a machine, but nobody is questioning the level of intelligence.
 