IBM CEO says AI will boost programmers, not replace them

Skye Jacobs

A hot potato: The role of AI in the future of programming has become a hot topic among tech industry leaders. During a recent interview at the SXSW conference, IBM CEO Arvind Krishna weighed in on the debate, asserting that AI will not replace programmers anytime soon. Instead, he believes AI will serve as a powerful tool to enhance their productivity, enabling developers to work more efficiently.

Krishna estimates that AI could write 20 to 30 percent of code but emphasizes that its role in more complex tasks will remain minimal.

His views contrast with more optimistic predictions from other industry leaders. Dario Amodei, CEO of Anthropic, has forecast that AI could generate up to 90 percent of code within the next three to six months. Meanwhile, Salesforce CEO Marc Benioff has suggested that his company may stop hiring traditional engineers in 2025 due to AI-driven productivity gains.

However, Benioff also underscores the importance of human expertise and is actively reskilling Salesforce's workforce to collaborate effectively with AI tools.

Krishna's perspective aligns more closely with Benioff's, emphasizing that AI will enhance productivity rather than eliminate programming jobs. He explained, "If you can produce 30 percent more code with the same number of people, are you going to get more code written or less?" Krishna believes this increased efficiency will drive market share gains and fuel the development of new products.

As a company deeply invested in AI-powered solutions, IBM has positioned AI as a complementary tool rather than a replacement for human programmers. While Krishna has previously mentioned pausing hiring in back-office roles that AI could automate, he now underscores AI's augmentative role in creative and technical fields.

Drawing historical parallels, Krishna compares today's AI debates to past concerns over calculators replacing mathematicians or Photoshop making artists obsolete. He acknowledges unresolved challenges, such as intellectual property issues surrounding AI training and outputs, but sees AI as a net positive, improving product quality across industries.

"It's a tool," Krishna stated. "If the quality that everybody produces becomes better using these tools, then even for the consumer, now you're consuming better-quality [products]."

Krishna also predicts that AI will become significantly more energy-efficient, citing emerging techniques from companies like Chinese AI startup DeepSeek. He envisions a future where AI consumes "less than one percent of the energy it's using today," making it more accessible and cost-effective.

However, Krishna remains skeptical about AI's potential to drive groundbreaking scientific discoveries or generate entirely new knowledge. Instead, he argues that quantum computing – a field in which IBM is heavily invested – will be the key to advancing scientific understanding. "AI is learning from already-produced knowledge, literature, graphics, and so on," Krishna explained. "It is not trying to figure out what is going to come next."

His stance contrasts sharply with that of OpenAI CEO Sam Altman, who has suggested that "superintelligent" AI could emerge in the near future and play a crucial role in accelerating innovation.


 
While accurate, that's a bit misleading. For a couple of years now, the preferred model whenever you need programming done has been to keep the 'Senior' devs and management located where it's easy to find (and, inevitably, eventually replace) the talent, while the bulk of the more menial programming work goes to 'Junior' dev positions. And guess what? Those positions are almost always fully remote, in overseas locations if companies can help it, because, well, why pay a Junior dev in the US 50-60k for an entry-level job if you can get away with paying someone in another country 15k, 12k, or at times even less than that?

Sure, inexperienced Juniors need lots of coaching, training, and supervision to make sure they deliver code on time and in a functional state, but guess what? Why have a team of 3-4 in-house Senior devs supervising 15 Junior devs abroad if you can cut it down to 10 Juniors, a single Senior, and very liberal use of plagiarism, a.k.a. ChatGPT and other AI tools?

It's never going to be good enough to produce production-ready code, but I have seen it cut down a lot of time on debugging and on explaining the basic functionality of existing code to Junior devs, for example.

So, tl;dr: Yes, coders will still be needed, but most of the audience here, and most people reading about this in Western English-language publications, will probably lose their jobs in the immediate to not-too-distant future.
 
I've been coding with LLMs since ChatGPT launched, and let me be clear: even OpenAI’s best model isn’t replacing software engineers anytime soon. These models are fantastic tools, but that’s all they are: tools. They eliminate about 80% of boilerplate coding and handle obscure language features way better than most humans. Huge efficiency boost! They’re also amazing at adding comments and documentation (arguably their best use).

I love using LLMs to bring ideas to life. They let me focus on algorithms and logic instead of wrestling with syntax. But they also make dumb mistakes and don’t really "understand" what I’m building. With 25 years of experience to lean on, I constantly have to fix errors, from small annoyances to major logic fails. And forget about them keeping track of a large codebase. Context limitations force me to spoon-feed chunks one at a time.
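
To make that concrete, here's a rough Python sketch of what the spoon-feeding workflow looks like: split a source file into line-aligned chunks that fit under the model's context budget. The file name and the characters-divided-by-four token estimate are purely illustrative, not anything exact.

```python
# Rough sketch of the "spoon-feeding" workflow: split a source file into
# chunks that fit a model's context window. Token counts are approximated
# as len(text) / 4, a common rule of thumb, not an exact count.

from pathlib import Path

MAX_TOKENS = 3000  # leave headroom below the model's real context limit


def rough_token_count(text: str) -> int:
    return len(text) // 4


def chunk_source(path: str, max_tokens: int = MAX_TOKENS):
    """Yield consecutive line-aligned chunks, each under the token budget."""
    lines = Path(path).read_text().splitlines(keepends=True)
    chunk, size = [], 0
    for line in lines:
        cost = rough_token_count(line)
        if chunk and size + cost > max_tokens:
            yield "".join(chunk)
            chunk, size = [], 0
        chunk.append(line)
        size += cost
    if chunk:
        yield "".join(chunk)


# Each piece would be pasted into the model along with a short reminder
# of what the previous chunks contained.
for i, piece in enumerate(chunk_source("big_module.py")):
    print(f"chunk {i}: ~{rough_token_count(piece)} tokens")
```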

Bottom line? LLMs do the grunt work while I handle the thinking. I think of it like "pair programming." But when they go off the rails (which happens a lot), I sometimes wonder if I’d be faster just coding it myself.
 
In my experience working with AI on code development, it has been a helpful tool. It's not perfect by any means; some of the suggestions it has given lead to dead ends.

So like any other tool, it comes down to knowing when to use it and for what.
I'm with you there! I typically don't write many tools at work because I know it would take me too long. AI tools have helped me accomplish the same goals in 30 minutes or less. I'm no longer spending several days reading up on how to use different modules, APIs, and the like. It's been a wonderful tool for improving my productivity, as far as entry-level code is concerned.
 
As long as you understand the fundamentals. My concern with people new to development is that they become too dependent on AI to write all the code, and then they won't know when it's wrong.
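
To illustrate the kind of error I mean, here's a classic Python pitfall (my own toy example, not output from any specific model) that generated code still occasionally contains, and that a newcomer can easily wave through in review:

```python
# Subtle bug: a mutable default argument is created once at function
# definition, not on every call, so state leaks between calls.

def add_tag(tag, tags=[]):          # buggy: the list persists across calls
    tags.append(tag)
    return tags

print(add_tag("a"))  # ['a']
print(add_tag("b"))  # ['a', 'b']  <- surprise: not a fresh list


def add_tag_fixed(tag, tags=None):  # idiomatic fix: create the list per call
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(add_tag_fixed("a"))  # ['a']
print(add_tag_fixed("b"))  # ['b']
```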
 
Funny thing is, if the model hasn't seen an example, it's unlikely to produce the code you want. Most of the time, AI models behave like large Q&A databases queried on the fly. If they were as good as some of our expert programmers, they would have taken our jobs already 😂
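
That retrieval framing is of course a simplification, but the failure mode it predicts is easy to demonstrate. Here's a toy sketch (the stored examples and threshold are invented) where any query without a similar stored example gets a useless answer:

```python
# Toy "retrieval database" model: answer a query by returning the stored
# answer whose question overlaps most with it. Real models do far more,
# but the no-similar-example -> no-useful-output behavior falls out here.

EXAMPLES = {
    "how do i reverse a list in python": "use list(reversed(xs)) or xs[::-1]",
    "how do i read a file in python": "use pathlib.Path(name).read_text()",
}


def overlap(a: str, b: str) -> float:
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)


def answer(query: str, threshold: float = 0.3) -> str:
    best_q = max(EXAMPLES, key=lambda q: overlap(query.lower(), q))
    if overlap(query.lower(), best_q) < threshold:
        return "no similar example seen; output is likely to be useless"
    return EXAMPLES[best_q]


print(answer("How do I reverse a list in Python?"))  # finds a stored answer
print(answer("Design a novel lock-free allocator"))  # nothing similar stored
```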
 
We don't use it for AI codegen (in fact, it could get you fired). We have real concerns that the inputs and outputs of AI code generation could reveal proprietary, company-private, or other sensitive data to outside sources. It would only be of interest to us if we could host it on an internal network. There is some work in that direction for AI-generated documents and emails.
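
For what it's worth, the internal-hosting setup doesn't have to be exotic. Here's a minimal sketch assuming a self-hosted, OpenAI-compatible endpoint (servers like vLLM, llama.cpp, and Ollama all expose one) reachable only on the company network; the host and model names below are made up:

```python
# Minimal sketch of an internally hosted LLM call: an OpenAI-compatible
# endpoint on the company network, so prompts and completions never
# leave the building. Host and model names are hypothetical.

import json
import urllib.request

INTERNAL_ENDPOINT = "http://llm.corp.internal:8000/v1/chat/completions"

payload = {
    "model": "local-code-model",
    "messages": [
        {"role": "user", "content": "Summarize this design doc section..."}
    ],
}

req = urllib.request.Request(
    INTERNAL_ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```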
 
What is the risk of reinventing and reimplementing the wheel 10,000 times with AI instead of creating a universally used method? In the former case, the codebase quickly becomes an unmaintainable mess.
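
A miniature example of that risk, with everything invented for illustration: the same "split a list into batches" helper regenerated independently in two files, with subtly different edge-case behavior, versus one shared utility that everyone imports.

```python
# Two independently "AI-generated" copies of the same helper, with
# different edge-case behavior, versus a single shared utility.

def batches_a(xs, n):                # generated in module A
    return [xs[i:i + n] for i in range(0, len(xs), n)]


def batches_b(xs, n):                # generated in module B: silently
    out = []                         # drops a short trailing batch
    for i in range(0, len(xs) - n + 1, n):
        out.append(xs[i:i + n])
    return out


print(batches_a([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
print(batches_b([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4]]  <- 5 is lost


# The "universally used method": write it once (or use a library helper
# such as more_itertools.chunked) and import it everywhere.
def chunked(xs, n):
    return [xs[i:i + n] for i in range(0, len(xs), n)]
```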
 
Take no notice of Anthropic; it's all hype. They only advertise for script kiddies: no jobs for real programmers who might expose the truth behind this bogus company.
 