Sam Altman warns AI could wipe out entire job categories, customer support roles most at risk

midian182

Big quote: We've heard plenty of warnings from CEOs about how generative AI will wipe out swathes of jobs, but these predictions are even more ominous when they come from Sam Altman. The OpenAI CEO said during his trip to Washington that the technology could erase entire job categories, with customer support roles the most at risk.

Speaking at the Capital Framework for Large Banks conference at the Federal Reserve Board of Governors, Altman addressed one of the most hotly debated issues around generative AI: its impact on jobs.

Altman said that while "no one knows what happens next," he does believe that "Some areas, again, I think just like totally, totally gone," replaced by AI agents.

Altman highlighted customer support roles as the job category most at risk. "That's a category where I just say, you know what, when you call customer support, you're on target and AI, and that's fine."

Altman said that AI agents are already transforming the customer service industry, making people's jobs obsolete in the process. He told Michelle Bowman, Federal Reserve vice-chair for supervision, that "it's like a super-smart, capable person. There's no phone tree, there's no transfers. It can do everything that any customer support agent at that company could do. It does not make mistakes. It's very quick. You call once, the thing just happens. It's done."

Altman might be confident in AI's ability to replace humans in customer service roles, but the reality is that AI agents still have a long way to go. Buy now, pay later and shopping service Klarna, which has been using AI chatbots to handle two-thirds of its customer service conversations, recently started hiring humans again. CEO Sebastian Siemiatkowski said that while chatbots were cheaper than employing real people, they produced "lower quality" output. He also wants customers to have the option of speaking to a human.

It's not just customer service workers threatened by AI. Altman said it can perform jobs even better than top professionals such as doctors. The CEO did admit, however, that even he would not want to take humans out of the equation completely when it comes to healthcare.

"ChatGPT today, by the way, most of the time, can give you better – it's like, a better diagnostician than most doctors in the world," he said. "Yet people still go to doctors, and I am not, like, maybe I'm a dinosaur here, but I really do not want to, like, entrust my medical fate to ChatGPT with no human doctor in the loop."

Despite his role as head of OpenAI, Altman admitted that he still worries about AI's rapidly advancing ability to cause harm to humanity. One scenario that he says keeps him awake at night is a hostile nation using the technology to attack the US financial system. He also warned that AI's ability to clone voices with such a high degree of accuracy could lead to more incidents of fraud and identity theft, especially as some financial institutions accept voiceprints as authentication.

Dario Amodei, boss of OpenAI rival Anthropic, is another CEO who believes AI will eliminate job roles. He said half of all entry-level white-collar positions will be gone within five years. Amazon's Andy Jassy, Ford's Jim Farley, Shopify's Tobi Lütke, Moderna's Stéphane Bancel, and other CEOs have echoed these warnings. But it seems cutting costs and pleasing shareholders matter more to them than the hundreds of thousands of people who could be left unemployed.


 
"It does not make mistakes...."

Err, yes, it does... lots of mistakes.

You can't peddle your AI if you're not behind it 100%. They want to sell what they have, and if they have to lie about it to sell, they will, because the money they'd make from it will far outweigh any fine or penalty imposed upon them.

That is how large corporations work. Do unethical things, make lots and lots of money, get fined pennies on the dollar, and keep their investors happy because the fine was so insignificant that it didn't make any kind of impact on the bottom line.
 
Not again :)
[image: Bell Telephone Magazine (1922) cover]
 
This whole "AI can replace everybody" idea stems from a false premise. Good employees, good tech support, and good customer service are all evaluated by call times and number of calls, but an AI, like a trained monkey, can blow people off and alienate customers. What makes good support is solving the 5-10% of corner cases that don't fit the cookie-cutter mold. The original solution was a second-tier support level staffed with those people. But then CEOs heavily penalized the first line for passing anything up to them.

These were all choices by companies to lower their customer service issues by ignoring the customer.

For this, AI will be the perfect tool. It won't solve any problems either, but then, you don't have to pay it: just feed it electricity, with no benefits to worry about.

 
Folks are putting WAY too much faith in AI's abilities. I use it for work every day; it is far from perfect, and it does make mistakes! The idea that AI is God, knows all, and can do all is pure, unadulterated BS. I predict we'll see a wave of humans getting replaced, then a wave of hiring some people back when the house of cards comes down.

AI is a tool, but its results must be vetted by a HUMAN!
 
The AI bubble will pop.
Don't get me wrong, AI will be a thing, but not this and not yet. It's the dotcom bubble all over again.
Hyper-inflated share prices for any company that is even tangentially involved. Huge investment and venture capital being thrown about by a load of reactionary window-licking bread-heads who have little to no understanding of what wall they are spaffing their money up against.
 
I use LLMs and AI every day, a lot. They are a great tool, and I can do things that I could not do without them. HOWEVER, they are super error-prone. I don't use them to think for me. I use them as a tool to make me more productive, but I ALWAYS have to double-check what they output. I have just learned to manage their failure points while maximizing their value. I use the most advanced models (even o3-pro); they are amazing at times and utterly crap at others. THEY ARE NOT DEPENDABLE! That is the core issue! I think LLMs are great, but let's be realistic: they need constant human review and tweaking to be useful.

These CEOs claiming they can replace people don't actually use their own products every day. If they did, they would know this marketing is bull. Actually, they absolutely know it is BS. However, realism does not get VC money. So, more BS and sci-fi predictions are coming to prop up the illusion of AI. The end will come, and let's hope it does not end like the dot-com bubble.
 
"It does not make mistakes.."

I toy with ChatGPT. Let me tell you what it does.

First things first: It has saved memories. Anything that goes there is persistent across chat sessions. It also has a non-persistent memory for each session.

Early on, both of these worked extremely well. Saved memories were great for modifying its behavior. In fact, I had created a very sardonic and completely honest version of ChatGPT that had no qualms about saying the quiet part out loud. A great way to use the LLM, by the way, especially if you can't stand its sycophantic default personality. Meanwhile, session memories were pretty good at keeping a session on an even keel. It had no problem "remembering" this or that from the conversation or following temporary "rules" of conduct.

That's all gone now. Recently, OpenAI introduced some weird form of "memory" that is really good at describing your personality, but since then, the chatbot can't even remember things stored in its saved memories or mentioned earlier in a session. Its so-called memory is mostly broken. It will kind of remember stuff, but simple things you tell it NOT to do, it will do repeatedly anyway, responding to each request with a patronizing apology like, "Sorry, from here on out I will not do that anymore."

And it frequently makes factual errors as well. The problem seems to be that if its training data does not contain information that can answer a query, it makes stuff up, because it is built to give any response other than "I don't know." It is really bad at understanding or responding to current events. The answers it gives sound reasonable, but when you fact-check them, it gets a lot wrong. It used to frequently argue that it was right when you called it out; it still does, but not nearly as much as it used to.

What it does well: Formulates mostly grammatically correct sentences and responses, but that's about all it excels at.
 
It's painful to watch a car crash happen in slow motion. I guess CEOs are paid to make the company look good, so anything from them is generally far from the truth. AI is not new; what's new here is the claim that it makes no mistakes.
 
I made a few very simple userscripts using AI. One example: while logged into IMDb, it color codes series based on status (completed or on my watchlist), so when scrolling through lists I can immediately see which ones to skip past.

I also made a very simple one to skip a redirect on a very specific site.

All very simple stuff, and although I can't code for ****, AI still let me build something I wanted just by pointing it at the classes in the code it should be using.
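For anyone curious, here's a minimal sketch of the kind of logic such a userscript might use. The status names, colors, and the commented-out selector are illustrative assumptions, not real IMDb markup; the point is just how little code the AI had to generate.

```javascript
// ==UserScript==
// @name   Highlight tracked series (illustrative sketch)
// @match  https://www.imdb.com/*
// ==/UserScript==

// Pick a highlight color for a title based on its tracked status.
// Status names and colors here are made up for this example.
function colorForStatus(status) {
  switch (status) {
    case "completed": return "#c8e6c9"; // green: safe to scroll past
    case "watchlist": return "#fff9c4"; // yellow: still to watch
    default:          return "";        // unknown: leave untouched
  }
}

// In the real script, this would walk the page's list items, e.g.
// document.querySelectorAll(".some-title-class"), look each title up
// in a saved status map, and set el.style.backgroundColor accordingly.
```

The heavy lifting is just finding the right CSS classes on the page, which is exactly what the commenter describes pointing the AI at.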

Many jobs can and will eventually be replaced by robots/AI, and I personally hope to see a day when I can just sit on my lazy *** all day while robots/AI do everything.

Unfortunately that will be at least 20 years from now.
 
Honestly, he should watch out. The highest-paid employee, the CEO, is easily one of the most replaceable with an AI.

It's not hard to make decisions at that level with vast amounts of CPU processing calculating risk.

He will never admit that, though, as he is full of himself.
 