When people actually start to realize that everything about "AI" is fake, I bet the current AI will go the way of Clippy and Cortana - both died slow deaths until M$ realized that people don't want crap. It appears that it is going to take M$ and the other "AI" purveyors - or, as I have started to put it, AHI (a-hole intelligence) purveyors - a long time to learn that you can't keep producing the same crap over and over and expect different results.
If I can't trust AI to give me valid factual results, which I cannot - as other TS users, in particular @yRaz, have said, it will give you completely made-up answers - I have less than no use for it. In Bing, I've taken to using uBlock Origin's element hiding helper to eliminate it from my Bing experience. Bing and any search engine results are bad enough; I don't need AHI to make the answers worse. AHI is lipstick on a pig.
So I've been doing A LOT of work with AI recently and am even working on making my own. I also have epilepsy, so I've been learning about how the brain works my entire life, although I won't pretend to be a neurologist. The brain is not the 'CPU' of the body, as it's often been thought of, and we're still struggling with that idea around 40 years after it was proposed. The brain does not have an operating system or instruction sets, but it does a very good job of looking for patterns and making connections with things it has learned in the past, i.e., large sets of data. Is this starting to sound familiar?
The main argument I see for why AI isn't real is that we know the mechanisms behind how it does what it does. That actually isn't true; most people describe AI creating content as "hallucinating," and then you get your results from that hallucination. We know a lot more about how AI works than about how the brain works, but don't let that confuse you: we don't actually know how either one works. So what I take away from people's arguments that AI doesn't work like a real brain is that there is so much uncertainty in how the brain works relative to how AI works that we say "AI isn't real." It's a VERY weak argument, and one that I feel may be entirely false.
The thing is, humans are also really good at making up stories. We have buildings filled with them; they're called libraries. And talking about books brings me to an interesting point: what happens in your head when you read a book? Many people who read a lot, myself included, experience a phenomenon where they stop seeing the words and start vividly creating something akin to a movie in their head. Is this starting to sound familiar?
Humans make stuff up all the time, often unintentionally. Maybe I misheard you, or maybe I heard you correctly but the "speech to text" center of my brain interpreted it wrong, so I give a wrong response. If I have learned anything from my work with AI, it is that it is important to make sure your inputs have as little room for interpretation as possible. That's just a good communication rule in general. When I start a project with ChatGPT, I talk it through the subject I'm working on until I'm confident it is familiar with it.
The thing is, to get AI to do its job appropriately and lower its error rate, you have to treat it differently than a Google search. People want to ask AI a question the way they'd type one into Google, but that isn't how to get the best answers out of it. You need to know how to set limits and give proper instructions, and doing that is more abstract than writing code.
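To make that concrete, here's a minimal sketch using the OpenAI Python client of the same question asked two ways. The model name, the prompts, and the Postgres scenario are all placeholders I made up for illustration; the point is the contrast between the bare question and the constrained one:

```python
# Minimal sketch (Python, openai>=1.0): the same question asked two ways.
# Model name and prompts are placeholders, not a recommended setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompts = {
    # Search-style: short, ambiguous, lots of room for interpretation.
    "bare": "What's a good backup retention policy?",
    # Constrained: context, assumptions, limits, and a way to say "unknown".
    "constrained": (
        "You are helping me set a backup retention policy for a small "
        "Postgres server. Assume roughly 5 GB of new data per day and a "
        "30-day compliance requirement. Recommend a retention window in "
        "days, briefly justify it, and answer 'not enough information' "
        "if my assumptions don't determine an answer."
    ),
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

The constrained version gives the model far less room to fill gaps with guesses, which is exactly the "little room for interpretation" rule applied to a machine instead of a person.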
Lowering the error rate with answers from AI is very similar to lowering the error rate when interacting with humans. Humans make mistakes constantly, far more than we're willing to admit to ourselves. We make stuff up and hallucinate constantly, whether we realize it or not. And as with everything, we need to check the AI's work, the same way I would need to check someone else's work if I had them perform a task for me. There is an idea that AI should be infallible, but it is that very human-like, fallible nature that makes it such a strong assistant.
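One cheap way to build that checking into code is a second pass: get an answer, then have a separate call critique it. A rough sketch under the same placeholder assumptions as above; a human still has to read the critique, just like reviewing a coworker's work:

```python
# Rough sketch of checking the model's work with a second, independent call.
# Model name and prompts are placeholders; this reduces errors, not removes them.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name


def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the text of the reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


question = "In what year was the transistor invented, and by whom?"
answer = ask(question)

# Second pass: ask the model to audit the first answer.
critique = ask(
    f"Question: {question}\n"
    f"Proposed answer: {answer}\n"
    "List any factual errors or unsupported claims in the proposed answer. "
    "If it looks correct, reply 'no issues found'."
)

print("Answer:", answer)
print("Critique:", critique)  # still needs a human reader
```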
I find it annoying that people think we should intentionally make it worse so it doesn't take people's jobs; that is entirely the wrong approach. If someone's job can be easily replaced by these early AIs, then how good are they really at that job? Are we to ignore these tools and impair them because we're worried about our jobs? How important are "jobs"? We seem to think that fast food workers aren't important, so we replace them with kiosks and robots, but those are just tools to streamline a job and make it more efficient. There is an idea that somehow going to college and becoming educated makes a job more important; it doesn't. This is inherent to the ideas of capitalism, and I'm glad people are finally talking about the whole notion that "your job isn't important because X, Y and Z."
Our economy is going to start being filled with AI. High-level jobs are going to be replaced by AI, and the world will be better for it. We do not need to stifle innovation to save someone's "job"; what we need to do is re-evaluate how our economy works. Capitalism is not needed in an AI-dominated economy, and I find it infinitely ironic that capitalists are the ones pushing AI research right now. And just a reminder: no one's job is important, and in the eyes of a company, everyone is replaceable.