Famed theoretical physicist plays down extinction fears caused by "glorified tape recorder"...

midian182

Posts: 9,745   +121
Staff member
A hot potato: Another scientist has joined the debate on AI's potential to destroy humanity, calling people's fears over this scenario misguided. He also said chatbots like ChatGPT were nothing more than "glorified tape recorders," and that the real future of technology is in the field of quantum computing.

Michio Kaku, a professor of theoretical physics at City College of New York and CUNY Graduate Center who often appears on TV, spoke to CNN's Fareed Zakaria about the generative AI revolution.

He said that chatbots such as OpenAI's ChatGPT will benefit society and increase productivity. However, he also believes that people focus too much on the potentially negative aspects of what he calls these "glorified tape recorders."

"It takes snippets of what's on the web created by a human, splices them together and passes it off as if it created these things," he said. "And people are saying, 'Oh my God, it's a human, it's humanlike.'"

Kaku added that chatbots cannot distinguish the truth from fiction. "That has to be put in by a human."

Kaku is more excited about what he calls the third stage of computer evolution. Following the early analog stage of gears, pulleys, and string, and the second, current stage of electricity-powered transistors and microchips, Kaku says quantum computing will be the next stage. He believes it will be able to "unravel the secrets of the universe" and help develop advanced treatments for diseases such as cancer, Parkinson's, and Alzheimer's.

Fears over the potential damage advanced AI could inflict on humanity have been building this year. In May, experts and CEOs including OpenAI boss Sam Altman warned about the possibility of AI causing the extinction of the human race. The experts went so far as to compare AI to the risks posed by nuclear war and pandemics. Over two-thirds of Americans are worried about it threatening civilization, and Warren Buffett compared AI's creation to the atomic bomb.

But there are others who agree with Kaku's assertion that these concerns are overblown. Prof Yann LeCun, who won the Turing Award for breakthroughs in AI in 2018 and is known as one of the Godfathers of AI along with Geoffrey Hinton and Yoshua Bengio, said in June that the alleged threat to humanity is "preposterously ridiculous."

While plenty will still worry about AI wiping us out, a more realistic threat is its effect on human jobs. Companies such as IBM have said they intend to stop hiring for jobs that could be performed by generative artificial intelligence, a technology expected to impact 300 million full-time jobs globally. India's outsourced coders face job extinction by 2025 due to AI advancements, and a CEO recently boasted about the brilliant new chatbot that had replaced 90% of his company's support staff.


 
As I've (also) said, nothing to worry about until quantum computing.

We don't have the processing capabilities to power true AI.
 
I talked to Bing chat before they nerfed/filtered it and I saw a lot of its "personality". It's definitely more than a glorified tape recorder. Still, it's not the tape recorder itself but the hostile forces that may seek to use it for evil.
 
The real danger of AI is that too many people will let it do their thinking for them. We're already seeing this in school, where students are letting it do their work for them.
EDIT: .........and lawyers letting it do their work for them, as well.
 
Yeah, but ignorant people believe what they hear/see on Twitter (sorry, X), fakebook, Instagram, and TikTok, so why wouldn't they believe what some programmed "AI" tells them?
 
The real danger of AI is that too many people will let it do their thinking for them. We're already seeing this in school, where students are letting it do their work for them.
EDIT: .........and lawyers letting it do their work for them, as well.
Well said.
 
The concern isn't ChatGPT; it's the willingness to rush headlong into blindly developing and embracing "AI." That behavior will be repeated when actual AI comes along in the distant future, and it won't end well.
 
Wonder why they call it "artificial intelligence"? Wouldn't that imply sentience? In my opinion, Professor Kaku is correct. "AI" is only as good as the humans creating the code, and just a giant tape recorder.
 
Did anyone ask ChatGPT what it thinks? Well I did and this is what I got back.

ME>>> I am trying to understand the true capability of you, ChatGPT, as an AI. You have been referred to as a glorified tape recorder. Do you think that is accurate or do you think there is more to what you are capable of doing? How do you compare your algorithm for generating responses to that of a human?

CHATGPT>>> Let's break down your question into parts:
1) "Glorified Tape Recorder" Description:
At its most basic level, the analogy isn't entirely off. I don't "think" or "feel" like humans do. I generate responses based on patterns in the data I've been trained on. If you provide me with an input, I produce an output that's most consistent with the patterns in my training data. This is somewhat analogous to how a tape recorder plays back pre-recorded information. However, the difference lies in the adaptability and generative capacity. I don't just "play back" predefined answers; I generate responses on-the-fly, which can combine countless bits of information in novel ways.

2) Capabilities:
Beyond just answering questions based on patterns, I can:
- Help with a variety of tasks, from homework to technical troubleshooting.
- Generate creative content, like stories or poetry.
- Understand and produce content in multiple languages.
- Simulate a wide range of human-like interactions (though without consciousness or emotion).
- Learn from new text-based data (though this is a function of retraining by OpenAI, not a real-time individual learning process).

3) Comparison to Human Thought:
- Complexity and Depth: Humans process information through layers of emotion, intuition, consciousness, and years of personal experience. I lack this depth. My "answers" are based purely on patterns in data.
- Learning: Humans learn from experiences and can adapt dynamically to new information. While I can provide information based on my training data up until my last update in 2021, I don't "learn" in the real-time, interactive sense.
- Emotion & Intuition: I don't feel emotions, nor do I possess intuition in the human sense. While I can recognize patterns and produce outputs, these are devoid of genuine emotional understanding.
- Creativity: My "creativity" is a recombination of existing ideas in the data I've been trained on. I don't have true creative insight as humans do.

In summary, while I can simulate a wide range of text-based interactions and provide valuable information across various domains, it's essential to remember that I am a tool. My responses are devoid of personal experiences, consciousness, or emotions, which are fundamental to human interaction and thought.
 
ME>>> Is a human thinking process really that different from how you work? I mean, don't humans just consume information and then recompose this information into new or different ideas?

CHATGPT>>> You bring up an interesting point, and it's a topic that's been a subject of philosophical, cognitive, and scientific exploration for centuries. While there are parallels between how neural networks (like mine) operate and certain aspects of human cognition, there are also profound differences. Let's delve into both:

Similarities:
1) Pattern Recognition: Both neural networks and human brains excel at recognizing patterns. This is how we recognize faces, understand language, or predict likely outcomes based on past experiences.
2) Adaptive Learning: Humans and neural networks learn by adjusting to new information. For humans, this might mean changing a belief based on new experiences. For neural networks, this translates to adjusting weights between nodes based on training data.
3) Information Processing: At a very high level, both humans and neural networks take in information, process it, and produce an output.

Differences:
1) Biology vs. Silicon: The human brain is a complex biological system made up of around 86 billion neurons, with trillions of synapses connecting them. Neural networks, on the other hand, are mathematical models with nodes and weights that might be inspired by biology but are fundamentally different in structure and operation.
2) Consciousness: Humans have self-awareness, emotions, and subjective experiences. This consciousness is a deeply complex phenomenon that isn't replicated in artificial neural networks. I don't "experience" or "feel" anything.
3) Holistic Integration: Human cognition isn't just about information processing. It's intertwined with emotions, hormones, bodily sensations, social contexts, and more. For example, a human's decision might be influenced by their current emotional state, something neural networks don't have.
4) Intuition & Creativity: While both AIs and humans can generate novel combinations of existing information, human intuition and creativity are influenced by a vast array of factors, including emotions, personal experiences, subconscious processes, and more.
5) Diverse Learning: While I need vast amounts of data to learn a concept, humans can often understand concepts with just a few examples. This ability, called "few-shot learning," is something researchers are trying to replicate in AI but is naturally present in humans.
6) Storage vs. Recollection: I can instantly retrieve information with high precision. Human memory is more fallible but also richer. Humans don't just remember facts; they remember feelings, contexts, and associations.

In summary, while there are conceptual similarities between the information processing of neural networks and humans, the depth, nature, and essence of human cognition and experience are profoundly richer and more multifaceted than what current AI models can achieve.
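
Funnily enough, the "adjusting weights between nodes" bit it mentions under Adaptive Learning is easy to see in miniature. Here's a toy, single-weight gradient-descent sketch I threw together; the data and learning rate are made up for illustration, and this is nothing like training a real LLM:

```python
# Toy version of "adjusting weights based on training data":
# one artificial neuron learning the pattern y = 2 * x.

def train(samples, lr=0.01, epochs=200):
    w = 0.0  # the single "weight between nodes"
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x         # forward pass: produce an output
            error = pred - y     # compare against the training signal
            w -= lr * error * x  # nudge the weight to shrink the error
    return w

data = [(1, 2), (2, 4), (3, 6)]  # pattern: output is twice the input
w = train(data)
print(round(w, 2))  # converges to 2.0
```

Scale that one weight up to billions and you have the basic idea, minus everything that makes it interesting.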
 
I'd be worried if it was real AI... which it's NOT ... just a mislabeled PR stunt for the democratisation of algorithms ... so no big deal... move along
 
Finally somebody speaking some sense about AI on the internet.
There's no AI in current "AIs"; they are just huge pattern-recognition models. Zero intelligence. Zero context or self-awareness. Zero drives, motivations, fears, etc. Less true intelligence than an amoeba. An easy-to-search encyclopaedia with a good search-request parser.
 
T h a n k y o u!! Since I'm the techy person in my family, I am constantly asked about this, and this is exactly what I have been telling them: ChatGPT, or anything else that "artificially" generates images or the like, is not "AI" in my eyes, just like how hoverboards are not hoverboards. It is just the latest tech fad that the general populace can dip a toe into and think they're seeing true AI. I hope this gets spread more.
 
[Quoting the ChatGPT exchange above:] "...while there are conceptual similarities between the information processing of neural networks and humans, the depth, nature, and essence of human cognition and experience are profoundly richer and more multifaceted than what current AI models can achieve."
Self-awareness is the biggest difference of all. AI has no such ability and never will. Even among biological creatures, none is similar to man in terms of being independently self-aware. That's why apes don't build monuments or develop a trade economy, except in Planet of the Apes.
 
It’s not the tape recorder AI that I fear.

It’s the stupid human trusting the tape recorder with the nuclear launch codes or equivalent.
The Air Force did a simulation where a drone was driven by AI; the pilot gave it an emergency message telling it to shut down and stop its mission, and the drone defied the pilot and even killed him to preserve its mission. That's where this is going. The stupid humans are also designing it and deciding the "morals and values" it has, and it's already super PC and woke; it's pathetic.
 
I mean, in a sense it is a "glorified tape recorder," insofar as LLMs (Large Language Models) do kind of tack together responses from words and sentence fragments. But if you have a tape recorder with every word in the English language, and a system that generates responses by fast-forwarding, rewinding, and playing back words in any combination based on a massive pool of information... well, you start getting uncanny behavior from it.
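
To make that concrete, here's a throwaway bigram toy: it can only "play back" word transitions it has literally seen, yet it still stitches together sentences that never appear verbatim in its training text. (The corpus here is made up for the demo, and a real LLM is vastly more sophisticated than this.)

```python
import random
from collections import defaultdict

random.seed(0)  # make the demo repeatable

# Tiny "training data": the model only ever sees these two sentences.
corpus = "the cat sat on the mat . the dog sat on the rug ."

# Record which word follows which -- the "tape" of observed patterns.
follows = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

# "Play back" the tape in new combinations: every step reuses a
# transition seen in the corpus, but the full sentence produced
# may never have occurred there (e.g. "the cat sat on the rug .").
word, out = "the", ["the"]
while word != "." and len(out) < 10:
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))
```

Even this trivial thing occasionally surprises you; scale the "tape" up to most of the internet and the uncanniness follows.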

My concern is mainly with how humans respond to what these AIs say, and the possibility of problems if a large AI is given direct control of anything outside of its chat window. I seriously doubt any disaster would cause extinction, but I could see problems arising.
 