It is true that we're approaching the theoretical limit of current technology. We have been for a while, which is why Nvidia is scrambling to introduce software workarounds (DLSS, MFG, etc.) so it can seemingly run prettier graphics "as fast as" before. I believe a total redesign of how graphics are rendered will be crucial for the future. I liked what I saw from neural rendering, but it's still impossible to know how it will hold up once you introduce factors other than pure geometry.
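To put rough numbers on the "as fast as before" trick: upscalers render internally at a lower resolution and reconstruct the output, and since shading cost scales roughly with pixel count, the savings are easy to estimate. A toy calculation (the resolutions are just the usual 4K/1440p/1080p figures; the mode labels are for flavor, not an exact DLSS spec):

```python
# Back-of-the-envelope: shaded-pixel savings from internal upscaling.
# Shading cost scales roughly with pixel count; geometry, simulation,
# and the upscaler's own cost are ignored in this toy model.
RESOLUTIONS = {
    "4K native":   (3840, 2160),
    "1440p -> 4K": (2560, 1440),  # roughly a "Quality"-style internal res
    "1080p -> 4K": (1920, 1080),  # roughly a "Performance"-style internal res
}

native = 3840 * 2160
for name, (w, h) in RESOLUTIONS.items():
    pixels = w * h
    print(f"{name:12s} {pixels:>9,d} px  ~{native / pixels:.2f}x fewer shaded pixels")
```

Shading a quarter of the pixels and hallucinating the rest back is a real win on paper; the argument is about whether it should count as a generational performance gain.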
We've been going down this route since the mid-2010s. Very few new graphical features; instead we've gotten new AA modes (TAA, etc.), VRR, HDR, 4K, and now upscaling.
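For anyone who missed the AA arms race: at its core, TAA is just an exponential moving average over jittered frames. A toy sketch of that accumulation step (real implementations also reproject the history buffer with motion vectors and clamp it against the current frame to limit ghosting):

```python
import numpy as np

def taa_accumulate(current, history, alpha=0.1):
    """One TAA resolve step: exponential moving average over frames.

    Toy version; production TAA also reprojects `history` with per-pixel
    motion vectors and clamps it to the current frame's local color range
    to limit ghosting on moving objects.
    """
    return alpha * current + (1.0 - alpha) * history

# Simulate a static scene rendered with per-frame sub-pixel jitter:
# averaging the noisy frames converges toward the true signal, which is
# how TAA smooths edges without taking extra samples per frame.
rng = np.random.default_rng(0)
truth = rng.random((4, 4))                 # stand-in for the ideal image
history = np.zeros_like(truth)
for _ in range(60):
    jittered = truth + rng.normal(0, 0.05, truth.shape)  # one noisy frame
    history = taa_accumulate(jittered, history)
print(np.abs(history - truth).mean())      # small residual error
```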
The main issue is that with our current rendering technology, we've done about all we can. Further enhancements get ridiculously expensive to compute (for example, accurate light diffusion through water, reflections/refractions, etc.) and aren't even considered. Combine that with the fact that we're running out of ways to increase performance through die shrinks alone, and is it any shock that the entire market is, frankly, stagnant?
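To make "ridiculously expensive" concrete: even the cheapest piece of water rendering, a single refraction event, is a per-ray Snell's-law evaluation, and each hit also spawns a Fresnel-weighted reflection ray, so the ray count multiplies at every bounce. A minimal sketch of just the refraction step (standard vector form of Snell's law, nothing engine-specific):

```python
import numpy as np

def refract(incident, normal, n1=1.0, n2=1.33):
    """Refract a unit direction vector at a surface (Snell's law).

    n1/n2 are refractive indices (1.0 air, ~1.33 water). Returns None on
    total internal reflection. A path tracer runs this, plus a
    Fresnel-weighted reflection ray, for every ray that hits water --
    and again at every later bounce, which is where the cost explodes.
    """
    eta = n1 / n2
    cos_i = -np.dot(incident, normal)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: only a reflected ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

# A ray hitting calm water at 45 degrees bends toward the normal:
d = np.array([np.sqrt(0.5), -np.sqrt(0.5), 0.0])  # unit incident direction
n = np.array([0.0, 1.0, 0.0])                     # surface normal (up)
print(refract(d, n))
```

The math itself is trivial; the expense is that doing it *accurately* means tracing both child rays per interaction, per pixel, per frame, instead of faking the water with a screen-space approximation.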
Let's face it: look at a game from 2015 at maximum graphics settings. Now take a similar game from 2025 and put them side by side. At a glance, can you honestly tell the difference? Because I sure can't, and neither can 95% of the market.
Anyway, back to the article itself: the simple solution to both this and the power distribution problems (guys, 3x 8-pin connectors are *not* a valid solution) is to go back and make a dedicated GPU slot again, something that can actually power most GPUs without external connectors and is reinforced enough to handle the behemoths we now have.
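Some back-of-the-envelope numbers on why the connector situation is a mess (the slot and 8-pin limits below are the published spec ratings; the board-power figure is an illustrative flagship-class number, not a quote from the article):

```python
# Spec power limits: a PCIe x16 slot is rated for 75 W, and each
# 8-pin PCIe power connector for 150 W.
SLOT_W = 75
PIN8_W = 150

board_power_w = 575               # illustrative flagship-class board power
budget_w = SLOT_W + 3 * PIN8_W    # slot + 3x 8-pin = 525 W

print(f"budget: {budget_w} W, card: {board_power_w} W, "
      f"deficit: {board_power_w - budget_w} W")
# -> budget: 525 W, card: 575 W, deficit: 50 W
# Which is why vendors moved to the 600 W 12VHPWR/12V-2x6 connector --
# and why a slot that powers cards like these would have to carry
# hundreds of watts through the motherboard itself.
```

That last point is the catch with the dedicated-slot idea: the watts don't disappear, they just move from cables into motherboard power planes, which is its own engineering problem.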