John Carmack suggests the world could run on older hardware - if we optimized software better

I'd be perfectly OK with less software that is of a higher quality than the deluge of absolute trash we have today.

Strangely, the lack of modern coding languages and lesser hardware did NOT stop companies and individuals from putting out metric TONS of software in the 80s and 90s. All the "productivity boosting" has resulted in is ever-inflating lead times, development delays, and wasted productivity on garbage like microtransaction stores or worthless DRM that doesn't work.
I have fond memories of the games I played on the Z80. Recently, I revisited some of those old titles and they were still fun, but the experience just doesn’t compare to modern games. I also replayed NFS: Porsche Unleashed (which I really enjoyed back in the day). I remembered it having stunning visuals and great handling, at least by NFS standards. But replaying it now, it felt far less impressive, mostly because I'm comparing it to today's games. This just goes to show how gradually software evolves. It can feel like progress has stalled, but when you look back far enough, the difference is like night and day.
 
He is not wrong. For no good reason, the world thinks the wrong way around: make something heavier, then make something stronger to deal with it. The truth is that if you optimise the software, you can run on older hardware for many, many years longer; the only real gains with new hardware are power efficiency and density.
 
Nothing new, and this is just best practice: if the software is better designed and coded, both the user interface and the way it works improve, and it uses less CPU and memory. My old friend in the US (he should be 77 now) wrote his great DOS UltraDMA driver plus cache in less than 8K, so why does a simple printer driver download more than a gigabyte?
 
As someone who learnt to code in machine language before moving on to assembly, the likes of FORTH, and then compiled code, I always saw how much larger, and potentially less efficient, programs were becoming, though of course tremendously easier to write.
 
You don't need to write assembly; good old C++ will do the magic.
People have no idea how bad the code that runs in today's games is.

It's not so much that the code is *bad* per se, it's that the fundamental design is often lacking.

One trend I've seen across the industry over the past two decades is an over-reliance on designing software to conform to OOP principles... even when it doesn't make sense. I've seen plenty of over-engineered software that is almost impossible to debug or understand because it has been so heavily abstracted that no one really understands how it works. There's also an inherent performance penalty with this approach (to what extent depends a lot on what you're doing). It isn't that the code is "bad", it's just over-engineered.
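To make that penalty concrete, here's a contrived little benchmark (everything in it is made up for illustration; the actual cost depends heavily on the workload and compiler): the same sum computed through a per-element interface with virtual dispatch versus a flat contiguous array.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <memory>
#include <vector>

// "Enterprise" version: every element hides behind an interface, so each
// access costs a heap allocation up front and a virtual call + pointer chase later.
struct IValue {
    virtual ~IValue() = default;
    virtual double get() const = 0;
};
struct Value : IValue {
    double v;
    explicit Value(double v) : v(v) {}
    double get() const override { return v; }
};

int main() {
    constexpr std::size_t n = 5'000'000;

    std::vector<std::unique_ptr<IValue>> abstracted;
    abstracted.reserve(n);
    std::vector<double> plain(n);
    for (std::size_t i = 0; i < n; ++i) {
        abstracted.emplace_back(std::make_unique<Value>(double(i)));
        plain[i] = double(i);
    }

    auto time_it = [](const char* label, auto&& fn) {
        auto t0 = std::chrono::steady_clock::now();
        double sum = fn();
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("%-12s sum=%.0f  %.2f ms\n", label, sum, ms);
    };

    // Indirect, cache-unfriendly traversal through the interface.
    time_it("abstracted", [&] {
        double s = 0;
        for (const auto& p : abstracted) s += p->get();
        return s;
    });

    // Plain loop over contiguous data: same answer, far friendlier to the CPU.
    time_it("plain", [&] {
        double s = 0;
        for (double v : plain) s += v;
        return s;
    });
}
```

The gap between the two loops is the abstraction tax; whether it matters depends entirely on whether that code is on a hot path.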
 
Amen to that, brother. Look at Linux, how nicely it runs with no crapware and bloatware in it. Microshait wants YOU to pay for the shait they deliver. They could optimize, but why would they?
 
Nothing new, and this is just best practice: if the software is better designed and coded, both the user interface and the way it works improve, and it uses less CPU and memory. My old friend in the US (he should be 77 now) wrote his great DOS UltraDMA driver plus cache in less than 8K, so why does a simple printer driver download more than a gigabyte?

Tbf, that driver likely supports multiple families of printers, each with their own differences in internal processing, and almost certainly contains code paths for multiple OSes (even if only one path is active on any specific OS). It's much easier for both the developer and the end user to package it this way.
 
When hardware has limits, software gets optimized. We saw that with old computers like the C64 and ZX Spectrum, with developers creating genuine miracles with minimal hardware capabilities. Consoles were the same... if the lifecycle of a console was 5 years, then the games would be highly-optimized for that platform.
 
attempt to mandate hardware DRM locks via TPM at worst.

That's my take. M$ ignored the potential of smartphones and the stores that came with them. Apple and Google are making money hand over fist due to the exclusive nature of their respective stores. So now M$ wants the same thing (always late to the party), and the only way is to lock Windows down so we have to use their store for software. TPM 2.0 will let them do just that, and IMHO that's been the long-term goal from Win8 onward.
 
Oh yeah, I'm fully guilty of this. I've got some financial software I've written (autotrading and backtesting). It's in Python, and it's not well optimized; I would guess that if I went through the entire thing and rewrote it purely for performance, I could double the speed. It's written to be simple and failsafe. In this scenario, I'd also (most likely) have to rewrite the whole spiel in C, and in some cases (some of the data sets are a bit large) probably further optimize for memory use (if you're going to regress to older CPUs, you're also going to have that older motherboard that won't take 16+ GB of RAM. Of course, there are still some "garbage tier" notebooks with 4GB in them, and many with 8GB, so ideally one would already be used to giving at least some care to RAM usage.)

The idea of forgetting how to make new CPUs is an interesting thought experiment. That isn't actually happening, but...

1) Moore's Law does seem to have dropped off around 2010 or so. In the Core 2 Duo era you were maybe not seeing quite a doubling every 18 months, but it was still maybe every 2 years. It's now completely typical to see only a 10% increase, if that (see Intel's recent issue of 14th gen showing near-zero speedup, and slowdowns in some workloads, compared to 13th gen).

2) You *can* put tons of RAM in a lot of systems, but so many ship with 4-8GB, so maybe double the 2-4GB they shipped with 20 years ago.

3) And with the move from hard drives to SSDs, well, again, systems were shipping with 250GB hard drives like 20 years ago, and I even purchased a 750GB *IDE* hard drive back in the day. 750GB-1TB storage was completely typical around 10-15 years ago. You can buy huge hard drives now, but with so many people doing "SSD or nothing", the amount of storage space in the typical computer has actually decreased compared to 10-15 years ago.

I find the third point the most disappointing -- SSDs above 1TB are still far too expensive, and notebook hard drives top out at 2TB because bigger ones are "too tall" (and, due to low demand, the HDD companies are not applying to 2.5" drives the new per-platter density tech that they do on 3.5" ones). I actually got a notebook with room for a SATA drive, so I have a 1TB SSD and a 1TB HDD in there; I'd LOVE to get far more storage in there. Damn the memory and SSD companies for running a cartel (SSD prices are fairly high partially due to collusion and price fixing: Samsung, Hynix, etc. get investigated for collusion... at government speed, so it takes 4 or so years... they're found guilty, fined, and keep colluding; then the US or EU etc. start their next investigation, which takes another 4-5 years, and they get fined again. Rinse and repeat for at least the last 25 years.)
 
WE have to optimize our code? How about Windows, Microsoft 365, Adobe Suite, etc., etc.? WE do not and cannot optimize the code delivered to us. It is the software developer's responsibility.

Having said that, code bloat is integral to the Microsoft DNA. Optimize the code? "Optimize" is not in the Microsoft lexicon. I am sure that when Microsoft software designers gather in a room to hash out the design of a product, the most complicated design always wins out.
 
In a way it makes sense, but getting the world to agree will be impossible. Imagine everyone being at a somewhat equal level in technology; of course we can't have that now, can we? No money to be made. But we have already seen this happen in China: when semiconductor sanctions were placed on them, China used old hardware to develop domestically.
 
Optimizing software is the perfect job for AI. There is already a working version of the software with known inputs and outputs. AI could iteratively optimize by using a large set of known inputs and outputs and benchmarking each modified routine for improvement against the original and other AI-produced versions.
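Roughly, the harness for that loop only needs two parts: a correctness check against the known inputs/outputs and a timing comparison against the original. A minimal sketch, assuming a toy routine and a made-up candidate rewrite (nothing here is a real system):

```cpp
#include <chrono>
#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

// Hypothetical harness: given the trusted original routine and a candidate
// rewrite (e.g. AI-generated), verify the candidate against known inputs and
// expected outputs, then compare timings. Only a correct-and-faster candidate
// would be kept.
using Routine = std::function<double(double)>;

struct Verdict {
    bool correct;
    double original_ms;
    double candidate_ms;
};

static double time_ms(const Routine& f, const std::vector<double>& inputs) {
    auto t0 = std::chrono::steady_clock::now();
    volatile double sink = 0;                 // keep the calls from being optimized away
    for (double x : inputs) sink = sink + f(x);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

Verdict evaluate(const Routine& original, const Routine& candidate,
                 const std::vector<double>& inputs, double tolerance = 1e-9) {
    // Correctness first: the candidate must reproduce the original's outputs.
    for (double x : inputs)
        if (std::fabs(original(x) - candidate(x)) > tolerance)
            return {false, 0.0, 0.0};
    // Then speed: benchmark both over the same input set.
    return {true, time_ms(original, inputs), time_ms(candidate, inputs)};
}

int main() {
    std::vector<double> inputs;
    for (int i = 1; i <= 200000; ++i) inputs.push_back(i * 0.001);

    Routine original  = [](double x) { return std::pow(x, 3.0); }; // existing routine
    Routine candidate = [](double x) { return x * x * x; };        // proposed rewrite

    Verdict v = evaluate(original, candidate, inputs);
    std::printf("correct=%d  original=%.2f ms  candidate=%.2f ms\n",
                v.correct, v.original_ms, v.candidate_ms);
}
```

An AI would sit in the loop generating new candidate routines; the harness just decides which ones survive.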
 
I would have to agree with John here. Actually, the software of the current age is a set of very difficult and detailed logic bundles which have survived the ages, so in a sense we are still using the same software, just a matured version of the identical core set. Building upon it in a zig-zag manner from all directions has made it a bit chaotic. However, going back to the formula is a daunting task, one I might attempt at some point later.
 
Optimizing software is the perfect job for AI. There is already a working version of the software with known inputs and outputs. AI could iteratively optimize by using a large set of known inputs and outputs and benchmarking each modified routine for improvement against the original and other AI-produced versions.

You just described unit testing, which we already do. Granted, the AI could do the optimising faster or explore more areas, but it would be an expansion of existing work. I use AI to write unit tests, as they are most often just a lot of tedious boilerplate.
 
I'm on my work laptop now and just checked the highest memory users.

Firefox (6GB), Visual Studio (1.5GB) and Edge (1.5GB) are by far the highest, so if you want to talk about system-wide optimising, we need to start with browsers and websites.
 
No duh? People have been calling out the laziness of software developers for eons now.
At least in this regard, you have no idea how things work. If the devs were lazy and it affected the product and the money spent, they would be fired and replaced. The devs cannot afford to be lazy in this savage capitalistic society. It's the publishers and shareholders that dictate the quality of the product, because they pour money into it, so they decide. The devs are just the hired guys for a given job, and they have almost no say in it.
 
Carmack is right, but it’s not as black-and-white as “coders just need to try harder.”

Performance isn’t always the top priority in a real-world dev environment. Often, you're juggling legacy constraints, tight deadlines, business-driven feature work, or dealing with inherited technical debt. Optimization becomes a luxury unless it's directly affecting user experience or bottom-line metrics.

That said, I think Carmack’s point is more philosophical: we’ve normalized inefficiency to the point where bloated, wasteful code is accepted as the default. He’s pushing for a shift in mindset, not just blaming individuals.

We need more support from leadership and better tooling to make performance part of the development culture — not just something a few devs squeeze in during code reviews or weekends.
 
You just described unit testing, which we already do. Granted, the AI could do the optimising faster or explore more areas, but it would be an expansion of existing work. I use AI to write unit tests, as they are most often just a lot of tedious boilerplate.
I do unit testing too, and that is not what I am suggesting. I am suggesting that AI iteratively generate new code based on the code it tested, in order to optimize it, and then unit test the new AI-generated code to see if it is an improvement. Experiments have already been done with AI optimizing FFTs (AI took C code and crafted assembly code), and AI has written optimized code humans likely could never write.

Most unit tests validate the outputs from the same inputs and don't usually involve a code-optimization phase (in most cases it's a human-in-the-loop phase as opposed to generative AI).

Subscription-based AI cloud services do more than just boilerplate coding.
 
I do unit testing too, and that is not what I am suggesting. I am suggesting that AI iteratively generate new code based on the code it tested, in order to optimize it, and then unit test the new AI-generated code to see if it is an improvement. Experiments have already been done with AI optimizing FFTs (AI took C code and crafted assembly code), and AI has written optimized code humans likely could never write.

Most unit tests validate the outputs from the same inputs and don't usually involve a code-optimization phase (in most cases it's a human-in-the-loop phase as opposed to generative AI).

Subscription-based AI cloud services do more than just boilerplate coding.

Where I work we do as you say: we have timings on some unit tests, and we optimise with both unit tests and continuous integration systems, making sure it's still working afterwards and seeing whether the timings go down (or sometimes not :)). I see what you mean about the AI trying novel approaches, but I'm not sure it's quite ready to be let loose yet on any maintainable codebase (like ours). With human supervision and inspection of the end result, though, it could work. It's food for thought :) Not sure I would want it to change the language to a lower level though; our code is in C and C++, which should be fast enough.
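For anyone unfamiliar with the idea, a timed test of that sort is conceptually simple: check the output, then check a runtime budget so CI flags a regression. A toy sketch (made-up function and an arbitrary 50 ms budget, not our real suite):

```cpp
#include <chrono>
#include <cstdio>
#include <numeric>
#include <vector>

// Made-up function under test; stands in for whatever routine the real suite exercises.
long long sum_of_squares(const std::vector<int>& xs) {
    return std::accumulate(xs.begin(), xs.end(), 0LL,
                           [](long long acc, int x) { return acc + (long long)x * x; });
}

int main() {
    std::vector<int> input(1'000'000);
    std::iota(input.begin(), input.end(), 1);   // 1, 2, ..., 1'000'000

    auto t0 = std::chrono::steady_clock::now();
    long long result = sum_of_squares(input);
    auto t1 = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();

    // 1) Functional check: an "optimised" version must still give the same answer.
    if (result != 333333833333500000LL) {
        std::fprintf(stderr, "FAIL: wrong result %lld\n", result);
        return 1;
    }
    // 2) Timing check: fail the build if the routine regresses past a budget
    //    (50 ms is arbitrary here; in real CI it would come from historical timings).
    if (ms > 50.0) {
        std::fprintf(stderr, "FAIL: too slow (%.2f ms)\n", ms);
        return 1;
    }
    std::printf("PASS: sum_of_squares correct in %.2f ms\n", ms);
    return 0;
}
```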

I am overly cautious, as I have to maintain and modify this code for a long time. Some of it is from the early '90s and is still working, so somebody has to be able to understand it for upgrades and changes later.

Prompted by what you said, I am going to try a few things with Copilot and see how it performs on this kind of work. :)
 
Pretty rubbish nostalgia really; only Carmack has a point. Our coding becomes worse and worse as we get more processing power. AI also accelerates the... enshittification, with most companies investing in dubious endeavours instead of precision and usability.
 