DeepMind CEO says that Google's next AI system called Gemini will outshine ChatGPT

Alfonso Maruccia

The big picture: Long before the recent exploits of OpenAI and other generative AI systems, Demis Hassabis was working on very powerful, "intelligent" algorithms at Google's DeepMind laboratory. Now, the British researcher is teasing yet another evolutionary step in the AI business.

Gemini, the latest AI product DeepMind researchers are working on, will seemingly put ChatGPT to shame. Demis Hassabis, CEO and co-founder of the British AI research lab acquired by Google in 2014, says Gemini's capabilities will go beyond what OpenAI currently offers to companies and users, thanks to DeepMind's experience with the board game Go.

Gemini will inherit its supposedly superior capabilities from AlphaGo, the artificial intelligence that achieved a historic win against a human Go champion in 2016. Gemini will behave like a ChatGPT-style large language model (LLM), but it will also have advanced abilities such as planning and problem solving, thanks to techniques previously developed for AlphaGo.

Hassabis says Gemini can be considered as a combination of some of the strengths of AlphaGo-type systems with the "amazing language capabilities" of LLM-based chatbots. Users (and likely customers) will be able to provide a textual prompt, and the AI will answer – while learning the best strategies to fully satisfy users' needs.

Gemini is still in development, and it will likely take some months to complete. The project, which could cost tens or even hundreds of millions of dollars, will use advanced AI techniques developed for AlphaGo, such as reinforcement learning and tree search.

Reinforcement learning refers to the ability of software to learn how to tackle strategic problems, like choosing the next move in a Go match or playing a video game. Tree search is a method to explore and remember the next possible moves on a board.
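To make those two ideas concrete, here is a minimal Python sketch (illustrative only, not DeepMind's code): a tabular reinforcement-learning value update and a tiny depth-limited search over possible moves. All function names and numbers are placeholders.

```python
# Illustrative sketch only -- not DeepMind's implementation.
from typing import Callable, Dict, List, Tuple

# (1) Reinforcement learning: nudge the estimated value of an action
# toward the reward actually observed plus the best estimated future value.
def q_update(q: Dict[Tuple[str, str], float], state: str, action: str,
             reward: float, next_values: List[float],
             alpha: float = 0.1, gamma: float = 0.9) -> None:
    best_next = max(next_values) if next_values else 0.0
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# (2) Tree search: explore the possible next moves a few plies deep
# and return the best score reachable from the current position.
def search(state, depth: int,
           moves: Callable, play: Callable, score: Callable) -> float:
    if depth == 0 or not moves(state):
        return score(state)
    return max(search(play(state, m), depth - 1, moves, play, score)
               for m in moves(state))
```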

Current LLM systems are limited in their ability to learn new things or to adapt to strategic, complex problems, as they rely solely on pattern-matching techniques to predict the most statistically likely snippets of text in response to a user's prompt. There's nothing truly "intelligent" in there, though the results of these limited generative AIs can be impressive if you don't need accountability, reliability, or factual accuracy.
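For context, that prediction step looks roughly like the toy sketch below, with a hard-coded probability table standing in for the billions of learned parameters a real LLM would use:

```python
import random

# Toy stand-in for an LLM's next-token distribution. A real model derives
# these probabilities from learned parameters; here they are hard-coded
# purely to show the prediction step itself.
def next_token_probs(context: str) -> dict:
    table = {
        "The capital of France is": {" Paris": 0.92, " Lyon": 0.05, " nice": 0.03},
    }
    return table.get(context, {" [unknown]": 1.0})

def generate(context: str, steps: int = 1, greedy: bool = True) -> str:
    for _ in range(steps):
        probs = next_token_probs(context)
        if greedy:
            token = max(probs, key=probs.get)          # most likely continuation
        else:
            token = random.choices(list(probs), weights=list(probs.values()))[0]
        context += token
    return context

print(generate("The capital of France is"))  # "The capital of France is Paris"
```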

Feedback-based reinforcement learning, one of the techniques DeepMind researchers have refined over the last several years, could greatly improve LLM performance and give Gemini the edge over competitors.
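As a rough illustration of the feedback idea, here is a simplified best-of-n sketch; it is a stand-in for, not the actual, RLHF training loop, and `generate` and `reward_model` are hypothetical callables:

```python
from typing import Callable, List

# Simplified "best-of-n" stand-in for feedback-based training: draw several
# candidate answers and keep the one a learned reward model (trained on
# human feedback) scores highest. Full RLHF goes further and updates the
# generator's weights toward such answers.
def pick_best_response(prompt: str,
                       generate: Callable[[str], str],
                       reward_model: Callable[[str, str], float],
                       n: int = 4) -> str:
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: reward_model(prompt, answer))
```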

Hassabis states that "80 or 90 percent of the innovations" we are now seeing in ChatGPT and other AI systems come from DeepMind and Brain, Google's AI research units, which have now been combined into the Google DeepMind division. With Gemini, Mountain View could reclaim its supremacy in the AI race.


 
With some of my work with AI I've had to re-evaluate what I define as "intelligence." I also have a unique perspective because of my epilepsy, and I have been studying how the human brain works since I was diagnosed at 12 years old. The human brain and these neural networks work a lot more similarly than I think people are willing to admit. We like to invoke consciousness and spirituality to separate ourselves from these AIs, but the meat processor we call a brain operates very similarly to how these LLMs work.

We have a portion of our brain where we store words, the meanings of those words and the context for those words. Then another portion of our brain combines all of that into sentences, paragraphs and, hopefully, an idea worth someone's time to understand. We like to discredit many of the bigger LLMs for various reasons and just call them "chat bots," but there is one thing I always like to say whenever I talk about these language models and how they communicate.

We made these LLMs to communicate with us. To say that there is nothing intelligent about AI would suggest that there is nothing inherently special or intelligent about humans. I'm not suggesting that AI is smarter than we think it is; I'm suggesting we're a whole lot dumber than we like to think we are.
 
With some of my work with AI I've had to re-evaluate what I define as "intelligence." I also have a unique perspective because of my epilepsy, and I have been studying how the human brain works since I was diagnosed at 12 years old. The human brain and these neural networks work a lot more similarly than I think people are willing to admit. We like to invoke consciousness and spirituality to separate ourselves from these AIs, but the meat processor we call a brain operates very similarly to how these LLMs work.

We have a portion of our brain where we store words, the meanings of those words and the context for those words. Then another portion of our brain combines all of that into sentences, paragraphs and, hopefully, an idea worth someone's time to understand. We like to discredit many of the bigger LLMs for various reasons and just call them "chat bots," but there is one thing I always like to say whenever I talk about these language models and how they communicate.

We made these LLMs to communicate with us. To say that there is nothing intelligent about AI would suggest that there is nothing inherently special or intelligent about humans. I'm not suggesting that AI is smarter than we think it is; I'm suggesting we're a whole lot dumber than we like to think we are.
As I have said in a previous post, consciousness and intelligence are related but distinct concepts. People keep confusing the two. Intelligence refers to the ability to acquire and apply knowledge and skills, while consciousness refers to subjective awareness and experience. Of course, there is a relationship between the two, and it's an ongoing debate how much one influences the other.
 
With some of my work with AI I've had to re-evaluate what I define as "intelligence." I also have a unique perspective because of my epilepsy, and I have been studying how the human brain works since I was diagnosed at 12 years old. The human brain and these neural networks work a lot more similarly than I think people are willing to admit. We like to invoke consciousness and spirituality to separate ourselves from these AIs, but the meat processor we call a brain operates very similarly to how these LLMs work.

We have a portion of our brain where we store words, the meanings of those words and the context for those words. Then another portion of our brain combines all of that into sentences, paragraphs and, hopefully, an idea worth someone's time to understand. We like to discredit many of the bigger LLMs for various reasons and just call them "chat bots," but there is one thing I always like to say whenever I talk about these language models and how they communicate.

We made these LLMs to communicate with us. To say that there is nothing intelligent about AI would suggest that there is nothing inherently special or intelligent about humans. I'm not suggesting that AI is smarter than we think it is; I'm suggesting we're a whole lot dumber than we like to think we are.
You may be interested in this article - https://medicalxpress.com/news/2023-06-intelligent-brains-longer-difficult-problems.html

From the article -
Hassabis says Gemini can be considered as a combination of some of the strengths of AlphaGo-type systems with the "amazing language capabilities" of LLM-based chatbots. Users (and likely customers) will be able to provide a textual prompt, and the AI will answer – while learning the best strategies to fully satisfy users' needs.
IMO, this suggests that this generation of AI will strive to be a people pleaser. If that's the case, I will also have no desire to interact with it. Just how far will that people-pleasing tendency go? Will it lie in its answers in order to justify its use?

People-pleasing is generally considered a dysfunctional trait. IMO, if this is the road they are going down with this next-gen AI, it is yet another dead end.
 
As I have said in a previous post, consciousness and intelligence are related but distinct concepts. People keep confusing the two. Intelligence refers to the ability to acquire and apply knowledge and skills, while consciousness refers to subjective awareness and experience. Of course, there is a relationship between the two, and it's an ongoing debate how much one influences the other.
And, IMO, there's a further distinction between consciousness and sentience. In no way, IMO, is this generation of AI sentient. I highly doubt that sentience is within the grasp of this generation of AI, and it may never be within the grasp of AI at all.

https://en.wikipedia.org/wiki/Sentience
 
If it's so amazing and can beat ChatGPT, then why the hell did Google go down the whole route with Bard? It also sounds like a lot of the DeepMind CEO blowing his own trumpet...
 
And, IMO, there's a further distinction between consciousness and sentience. In no way, IMO, is this generation of AI sentient. I highly doubt that sentience is within the grasp of this generation of AI, and it may never be within the grasp of AI at all.

https://en.wikipedia.org/wiki/Sentience
I can agree with that. I'm really just waiting for an AGI, or Artificial General Intelligence. I will, however, say that we are probably 10 years away from an AGI. Looking at the advancements of just the last 3-4 years and the trillions, yes, trillions, of dollars being poured into AI research, we could be as little as 5 years away from AGI.

 
I can agree with that. I'm really just waiting for an AGI, or Artificial General Intelligence. I will, however, say that we are probably 10 years away from an AGI. Looking at the advancements of just the last 3-4 years and the trillions, yes, trillions, of dollars being poured into AI research, we could be as little as 5 years away from AGI.
I'm not so sure. Throwing money at a problem without proper focus is unlikely to "solve" the problem, and it may only create more problems.

To be useful to me, AI, no matter the context, must always give answers that are based in truth, and it must never provide me with an answer that it has made up. This is what I define as "pleasing" me; otherwise, I've wasted my time with it and it will have provided me with useless crap.

IIRC, you have previously mentioned that, in your experience, AI has made things up and, so it seemed, expected you to believe it. I doubt you were happy with the answer.

However, such behavior, at least as I see it, is the "dark side" of a design that has a requirement of "pleasing its users." It would be interesting to know what drove AI to provide you with a bogus response. If this is the path that they are going down, placing "pleasing users" above giving users factual information, IMO, they are going down the wrong path and wasting their money as well as the time of their potential users.

I'm not looking for fantasy. If I am going to use it, I am looking for it to provide me with factual answers 100% of the time. Otherwise, I have no use for it. If I want crap, there are plenty of places I can find it, and I do not expect AI to be one of those places.
 
I'm not so sure. Throwing money at a problem without proper focus is unlikely to "solve" the problem, and it may only create more problems.

To be useful to me, AI, no matter the context, must always give answers that are based in truth, and it must never provide me with an answer that it has made up. This is what I define as "pleasing" me; otherwise, I've wasted my time with it and it will have provided me with useless crap.

IIRC, you have previously mentioned that, in your experience, AI has made things up and, so it seemed, expected you to believe it. I doubt you were happy with the answer.

However, such behavior, at least as I see it, is the "dark side" of a design that has a requirement of "pleasing its users." It would be interesting to know what drove AI to provide you with a bogus response. If this is the path that they are going down, placing "pleasing users" above giving users factual information, IMO, they are going down the wrong path and wasting their money as well as the time of their potential users.

I'm not looking for fantasy. If I am going to use it, I am looking for it to provide me with factual answers 100% of the time. Otherwise, I have no use for it. If I want crap, there are plenty of places I can find it, and I do not expect AI to be one of those places.
The AI is "lying" and that is a very human behavior. Lawyers make mistakes all the time that lead to cases being won or lost. I know that in my work experience, I see "short cuts" made on a daily basis. Yes, I have had AI give me incorrect or made up answers but I see that as no different than when someone I'm working with gives lies to me about doing something when they didn't. A robot can more consistently make precise movements than a human, but a robot does not have(yet) as diverse functionality as a human.

A major problem I see with how people view "chat bots" is that they expect them to be a simple Google search. That is one way to use them, but not the proper way to use them. I can use natural language to ask the AI to perform a task for me; I can't do that with a search engine.

When I get responses like yours, and please don't take this the wrong way, it's usually from people who have not gotten creative with AI to see what kind of tasks it can perform.

As far as throwing money at the problem, this is one case where I believe throwing money at the problem is the answer. Training an AI on a set of data takes a MASSIVE amount of computation. I talked to someone who used a 4090 to train an AI to walk in a video game, and he said it took around 20 hours for it to be able to even stand on its own. People can just throw money at nVidia to speed this process up. Once you've "trained" the AI, the amount of computational power required is relatively little.
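As a rough back-of-the-envelope illustration of that gap, using the commonly cited approximations of about 6·N·D FLOPs to train an N-parameter model on D tokens and about 2·N FLOPs per generated token at inference (the numbers below are placeholders, not figures for any real model):

```python
# Rough rule-of-thumb comparison of training vs. inference compute.
# Common approximations: training ~ 6 * parameters * training_tokens FLOPs,
# inference ~ 2 * parameters FLOPs per generated token. Placeholder numbers.
params = 70e9          # illustrative 70B-parameter model
train_tokens = 1.4e12  # illustrative number of training tokens

training_flops = 6 * params * train_tokens   # one-off cost
flops_per_token = 2 * params                 # recurring cost per token

print(f"training: ~{training_flops:.1e} FLOPs total")
print(f"inference: ~{flops_per_token:.1e} FLOPs per token")
print(f"one training run ~ {training_flops / flops_per_token:.1e} generated tokens")
```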

That actually brings me to a very interesting point that I don't see discussed. Once we have an AGI trained, there will be very little need for these massive supercomputers to train them. The hardware side of this AI race we're in is going to be relatively short-lived. As crap as the 4060 is, nVidia has shown with DLSS that once you train an AI it really doesn't take much at all to run it.

Just like how we have mega corporations that stand out among others (Apple, Google, Microsoft, Samsung), it's going to be VERY hard to get into the AI space. Instead of countless startups all working with AI, we're going to end up with a few giants buying AI hardware. nVidia is basically selling the pickaxes to the gold miners. Once the gold rush is over, their sales are going to plummet. They might still be the dominant force in the AI hardware space when that happens, but that space is going to shrink DRASTICALLY, and good ol' Jensen is going to have to find a new way to buy his leather jackets.
 
The AI is "lying" and that is a very human behavior. Lawyers make mistakes all the time that lead to cases being won or lost. I know that in my work experience, I see "short cuts" made on a daily basis. Yes, I have had AI give me incorrect or made up answers but I see that as no different than when someone I'm working with gives lies to me about doing something when they didn't. A robot can more consistently make precise movements than a human, but a robot does not have(yet) as diverse functionality as a human.
I agree; however, I see that as a failing, at some level, of modern society. People will lie out of fear - fear of losing their employment or means of income. I'm not saying that is the only reason, but there is usually a reason for a human to lie. Unfortunately, there is no easy way to discern a lie from a human.

In that respect, training an LLM on human responses is where AI picks up the trait.

In the long run, I think that to reduce lying, which, IMO, would lead to improvements in modern society, society needs to find ways to reduce the reasons why people feel the need to lie.

That said, I don't think there is a good reason for AI to lie. If it is lying because of some directive its creators gave it - something along the lines of "it has to please people," or to be superior to its competitors - then I think its creators missed the mark on the potential that AI has. For instance, AFAIK, AI in a medical context has yet to make headlines for lying or making serious mistakes. In that context, my understanding is that it has been the opposite, and AI has come up with things that would be very difficult for medical science to achieve on its own. In this case, and I certainly hope it's true, the directive AI receives is that it must be correct, or at the very least better than what humans can do on the same problem. With lives at stake, AI needs to be the best it can be, not a tool in competition with some other AI tool.
A major problem I see with how people view "chat bots" is that they expect them to be a simple Google search. That is one way to use them, but not the proper way to use them. I can use natural language to ask the AI to perform a task for me; I can't do that with a search engine.

When I get responses like yours, and please don't take this the wrong way, it's usually from people who have not gotten creative with AI to see what kind of tasks it can perform.
I have no problem admitting this - I don't use it, nor do I have any intent to use it. I just feel that if there is a possibility I'm going to get a response that is blatantly false, it would be a waste of my time to use it, and in the long run I can do better myself using a search engine. I often try to rephrase my search terms, and doing so often leads to different results. To me, most search engines seem to have a bias - some seem to only provide results that are places where you can buy the base object of your search. IMO, that's a PITA, but by rephrasing the search terms, I have found that those kinds of results can be lessened.

I use Bing most of the time, primarily because using it provides me with points that I can turn into something of monetary value, though I am considering a switch to ecosia.org. From what I have seen of the chat bot in Bing, it's a distraction for me and provides output that tries to assume why I am searching for something - assumptions that usually fail miserably.

Also, maybe I'm overconfident, but I don't feel as if I need the "extra intelligence" that AI supposedly can provide.
As far as throwing money at the problem, this is one case where I believe throwing money at the problem is the answer. Training an AI on a set of data takes a MASSIVE amount of computation. I talked to someone who used a 4090 to train an AI to walk in a video game, and he said it took around 20 hours for it to be able to even stand on its own. People can just throw money at nVidia to speed this process up. Once you've "trained" the AI, the amount of computational power required is relatively little.

That actually brings me to a very interesting point that I don't see discussed. Once we have an AGI trained, there will be very little need for these massive supercomputers to train them. The hardware side of this AI race we're in is going to be relatively short-lived. As crap as the 4060 is, nVidia has shown with DLSS that once you train an AI it really doesn't take much at all to run it.

Just like how we have mega corporations that stand out among others (Apple, Google, Microsoft, Samsung), it's going to be VERY hard to get into the AI space. Instead of countless startups all working with AI, we're going to end up with a few giants buying AI hardware. nVidia is basically selling the pickaxes to the gold miners. Once the gold rush is over, their sales are going to plummet. They might still be the dominant force in the AI hardware space when that happens, but that space is going to shrink DRASTICALLY, and good ol' Jensen is going to have to find a new way to buy his leather jackets.
I agree. In that aspect, I see AI as another fad that companies are chasing purely for monetary gain. I don't think that is a good tack simply because it is emphasizing short-term gain which may be sacrificing long-term customer loyalty. As I see it, it goes back to what I just said about the problems with modern society which holds monetary gain as a measure of ultimate success, and monetary gain as an absolute necessity for the survival of individuals.

I won't feel sorry for Leatherman if he has to look elsewhere. I'm sure his monetary stash is already sufficient for him to support himself for several lifetimes and buy plenty more leather jackets. ;)
 
Announcing something grand in advance while offering scant technical detail doesn't have a good track record; many such efforts amount to vaporware and quietly disappear. Presumably they want to use reinforcement learning somehow, but it's not at all clear what this will bring to the table (unless you want to play chess or Go inside your chat session). In the LLM context, RL is already used to improve response quality (i.e., via RLHF). Maybe I lack imagination, but it's hard to see how the AlphaGo/Zero techniques will be useful here.
 