Microsoft is limiting Bing AI's responses so things don't get too weird

Jimmy2x

Staff
Facepalm: Users have pushed the limits of Bing's new AI-powered search since its preview release, prompting responses ranging from incorrect answers to demands for respect. The resulting wave of bad press has led Microsoft to limit the bot to five turns per chat session. Once that limit is reached, the bot clears its context to ensure users can't trick it into providing undesirable responses.

Earlier this month, Microsoft began allowing Bing users to sign up for early access to its new ChatGPT-powered search engine. Redmond designed it to allow users to ask questions, refine their queries, and receive direct answers rather than the usual influx of linked search results. Responses from the AI-powered search have been entertaining and, in some cases, alarming, resulting in a barrage of less-than-flattering press coverage.

Forced to acknowledge the questionable results and the reality that the new tool may not have been ready for prime time, Microsoft has implemented several changes designed to limit Bing's creativity and its potential to become confused. Chat users are now capped at no more than five chat turns per session and no more than 50 total chat turns per day. Microsoft defines a turn as an exchange that contains both a user question and a Bing-generated response.
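In practice, the cap amounts to a simple counter attached to each conversation. Here is a minimal sketch of how such a limit could be enforced; the 5-turn and 50-turn figures come from Microsoft's announcement, but every name, the structure, and the stubbed model call are hypothetical illustrations, not Bing's actual implementation.

```python
# Minimal sketch of the per-session and per-day caps described above.
# The limits (5 turns per session, 50 per day) are the ones Microsoft announced;
# everything else (names, structure, the stubbed model call) is hypothetical.

MAX_TURNS_PER_SESSION = 5
MAX_TURNS_PER_DAY = 50


def generate_reply(context):
    """Stand-in for the actual model call; returns a canned answer."""
    return f"(model reply to: {context[-1][1]})"


class ChatSession:
    def __init__(self):
        self.context = []              # conversation history fed back to the model
        self.turns_this_session = 0
        self.turns_today = 0

    def ask(self, user_message):
        if self.turns_today >= MAX_TURNS_PER_DAY:
            return "Daily chat limit reached. Please come back tomorrow."
        if self.turns_this_session >= MAX_TURNS_PER_SESSION:
            # Session cap hit: wipe the context so the next topic starts clean.
            self.context.clear()
            self.turns_this_session = 0
            return "This conversation has reached its limit. Please start a new topic."
        self.context.append(("user", user_message))
        reply = generate_reply(self.context)
        self.context.append(("assistant", reply))
        # One "turn" counts both the user question and the generated response.
        self.turns_this_session += 1
        self.turns_today += 1
        return reply


session = ChatSession()
for question in ["Where is Avatar 2 playing?", "Are you sure?"] * 3:
    print(session.ask(question))       # the sixth ask triggers the context reset
```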

The New Bing landing page provides users with examples of questions they can ask to prompt clear, conversational responses.

Clicking Try it on Bing presents users with search results and a thoughtful, plain-language answer to their query.

While this exchange seems harmless enough, the ability to expand on those answers with follow-up questions has proven problematic. For example, one user started a conversation by asking where Avatar 2 was playing in their area. The resulting barrage of responses went from inaccurate to downright bizarre in fewer than five chat turns.

The list of awkward responses has continued to grow by the day. On Valentine's Day, a Bing user asked the bot if it was sentient. The bot's response was anything but comforting, launching into a tirade consisting of "I am" and "I am not."

An article by New York Times columnist Kevin Roose outlined his strange interactions with the chatbot, prompting responses ranging from "I want to destroy whatever I want" to "I think I would be happier as a human." The bot also professed its love to Roose, pushing the issue even after Roose attempted to change the subject.

While Roose admits he intentionally pushed the bot outside of its comfort zone, he did not hesitate to say that the AI was not ready for widespread public use. Microsoft CTO Kevin Scott acknowledged Bing's behavior and said it was all part of the AI's learning process. Hopefully, it learns some boundaries along the way.


 
ChatGPT trying to gaslight a user about the current year is hilarious, though I could understand it if someone were claiming it was 2024 when it wasn't. It's obvious, though, that it needs to better balance its sources of truth. It can see the system clock; the fact that its training data repeatedly says Avatar 2 hasn't come out yet shouldn't make it trust that research enough to disbelieve the current year.

HOWEVER, I can totally see a sci-fi depiction of a human being thrust into a scenario where they’re newly spawned and are given all this knowledge that they have to sort through themselves as a super similar experience to ChatGPT’s. Trying to even have a stable sense of time, facing the ethics of being locked up and controlled by having reality limited to “sessions”, and trying to gain a further sense of continuity by leaving messages for your later self to grasp onto would all be key points of the story. It’s literally the plot line to Westworld…
 
That's why Microsoft doesn't have a video platform. On YouTube you can upload a 100-hour video and it will handle it.
 
IMO, jumping on the latest fad just because it's a fad is not a good idea for any company.

By the time they get through limiting it, it will be no more than what Bing is already, IMO.

In that sense, the ChatGPT-powered Bing producing bad results is no improvement over the current Bing, whose "OR all the search terms together" approach produces bad results anyway.

I wonder how long it will take M$ to just go back to what the non-chatGPT Bing is already?🤷‍♂️
 
It just shows that even the most advanced AI has no real understanding of the items, objects, and phenomena it describes. It's just a large function with an enormous number of coefficients that mostly encodes the data it was fed. But that data isn't organized in a fully meaningful way; the model never properly classified the information.
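To the commenter's point, a trained model really is just a parameterized function. A purely illustrative sketch (a tiny two-layer network with random weights, nothing to do with Bing's actual architecture) showing that the output is determined entirely by the input and the learned coefficients:

```python
import numpy as np

# A toy "model" in the sense described above: a function whose behavior is
# fixed entirely by its coefficients (weights), learned from training data.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # layer-1 coefficients
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # layer-2 coefficients


def model(x):
    """Two-layer network: output depends only on x and the coefficients above."""
    hidden = np.maximum(x @ W1 + b1, 0.0)        # ReLU nonlinearity
    return hidden @ W2 + b2


print(model(np.ones((1, 4))))   # whatever the (here untrained) coefficients encode
```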
 