Forward-looking: What if you could talk to a chatbot that knew you? Not one that simply references your past statements and questions, but one that draws on an intimate profile of who you are to chat with you and answer questions on a deeper level.
While Google is busy integrating its generative AI model Gemini into Pixel, Bard, and other existing products, developers are proposing an entirely new one, codenamed Project Ellmann. Ellmann aims to take a "bird's-eye view" of users' lives by analyzing their photos and web-browsing habits, then provide more personalized chat. The project takes its name from biographer Richard David Ellmann.
Google has not officially announced the project, which is not yet in development. The news comes from an internal summit presentation leaked to CNBC. The proposed software would use Gemini to analyze a user's Google Photos library and search history, then build an overview (profile) of that person's life.
The presentation used the example, "Imagine ChatGPT, but it already knows everything about your life."
Since the software would have many pertinent details surrounding users, it could respond to questions with more specific and accurate answers. For example, someone could ask, "What should I get my husband for Christmas?" Since Ellmann might know from pictures that the husband likes to fish, the algorithm could respond with related suggestions rather than offering a generic list of items.
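To make the idea concrete, here is a minimal sketch of how profile-conditioned suggestions could work in principle. Nothing here comes from Google; the profile format, function names, and matching logic are all illustrative assumptions.

```python
# Hypothetical sketch: narrow generic suggestions using an inferred interest
# profile built from photo tags and search terms. Purely illustrative; this is
# not how Project Ellmann or Gemini actually works.

GENERIC_GIFTS = ["gift card", "sweater", "coffee mug", "fishing rod", "tackle box"]

def build_profile(photo_tags, search_terms):
    """Derive a crude interest profile from photo tags and search history."""
    return set(photo_tags) | set(search_terms)

def suggest_gifts(profile, candidates):
    """Prefer gifts that match an inferred interest; fall back to the generic list."""
    matches = [gift for gift in candidates
               if any(interest in gift for interest in profile)]
    return matches or candidates

profile = build_profile(photo_tags=["fishing", "lake"], search_terms=["bass lures"])
print(suggest_gifts(profile, GENERIC_GIFTS))  # → ['fishing rod']
```

A real system would of course use an LLM over far richer signals, but the shape is the same: personal context filters a generic answer into a specific one.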
Of course, the project raises the question of whether people will trust such an invasive technology. Chances are, many people won't like the idea of an AI going through their search histories and photos to get more relevant answers to a query. Ads already try to do that, and most people don't care for that level of scrutiny.
A Google spokesperson told CNBC:
"Google Photos has always used AI to help people search their photos and videos, and we're excited about the potential of LLMs to unlock even more helpful experiences. This was an early internal exploration and, as always, should we decide to roll out new features, we would take the time needed to ensure they were helpful to people, and designed to protect users' privacy and safety as our top priority."
There is also the fact that Google kills more ideas than it completes. Just look at the Google Graveyard to see that. Some projects never get off the ground, and we never hear about them, but everything in the graveyard was a released product at some point. Whether Project Ellmann gets a green light remains to be seen, but even if it reaches the public, the odds of Google eventually killing it are pretty high.
Naysaying aside, it is an intriguing application of current generative models and LLMs. We will likely see similar efforts from other companies wanting to beef up their assistants with advanced AI. Microsoft has already integrated OpenAI's GPT models into Bing and Edge. But everything comes with a cost. Sometimes that's your privacy.
On the one hand, having an AI assistant that could book a haircut without asking which salon you prefer would be convenient. On the other, I'm not sure how many people would willingly hand over access to their personal lives for that convenience, especially to companies that might not have the best reputation regarding user privacy (don't worry, I won't name names, Meta). A high level of transparency in how these models function is needed to move the technology forward positively. Otherwise, it just becomes another evil corporate weapon.