China's first gaming GPU, the Lisuan G100, performs like a 13-year-old Nvidia GTX 660 Ti

Rather minor, as in it wasn't widespread.
I consider it major when it starts showing up on tech sites / tech YouTube channels.

Issues affecting only a few people, or that just aren't severe, happen on both sides of the fence regularly imo.


Who is? Not me. I said major issues (as in widespread and severe). I fully agree that both sides have issues, but the major ones most recently have been on NVIDIA's side. AMD has gotten all the driver bashing in the comments section of every single site I visit for like 3 decades now, while in actuality it isn't much worse or better than NVIDIA.

AMD's often slower at fixing things. But considering NVIDIA is valued at 3 trillion USD and AMD at 0.23 trillion, it's a miracle they even compete in the same space at roughly the same level.

I'm sure NVIDIA will fix the current(?) black screen issues. Personally, for drivers, AMD and NVIDIA have been a wash for me for a long time now. I keep flip-flopping between the brands (R9 290X, GTX 960, GTX 970, R9 Fury, RTX 3060, GTX 970 Ti, RX 580, RX 6700 XT etc etc. My GPU history is diverse and mostly driven by value and good deals).
Heck, ages ago I used both at the same time and even that experience was surprisingly good and stable. I had an NVIDIA GT 520 hooked up over the ExpressCard slot on a laptop with AMD 7850M graphics, so I could get a WQHD signal out of the laptop for my external monitor.
Most of Nvidia's latest issues are far overblown; people act as if houses are burning down and every single 4090/5090 has a melting connector.

Yes, the connector sucks, but we're talking about a handful of people who will actually experience issues with it.

You say the driver issues are minor, but to the people experiencing these issues with their devices they aren't, especially if they don't realize the cause. You don't want to know how much time I spent on my AMD Ryzen 9 9950X3D freezing my entire computer during gaming. I spent days trying to find a solution. Do you know how frustrating it is to have a newly bought computer be an expensive paperweight because no typical solution seems to work? Only to find out the CPU "malfunctions" for some reason if you turn off the iGPU, despite having a dedicated GPU? That one innocuous tweak, which I had applied to every single Intel CPU I owned with zero negative side effects, had me on a goose chase for days, and I came close to RMAing the entire machine. The irony being that I bought this PC to avoid being without a computer for weeks when my i9-13900K eventually bites the dust. I had zero issues with my i9 in the years I had it, and the replacement caused me headaches and frustration from day one.

It's not an issue like the 13th/14th-gen Intel CPU degradation; that's a widespread issue and will probably affect many people using those CPUs.

Most issues from both AMD and Nvidia are overblown, but both have had their fair share of hardware and software issues over the years. Perhaps they seem minor on AMD's side because AMD has fewer sales and also takes zero risks with its GPUs, afraid of a single bad chip happening.

It's less of a deal when a 700 bucks card dies than a 2k one. Do you think that if there were widespread issues with the 5050/5060 cards, it would be as overblown as it is with the 4090/5090s? Most sites would report it once or twice and not care about it after.
 
I think the news would have been even more prominent ;)
It's because, in that particular case, things melting and permanently damaging themselves make for spectacular headlines, and people remember it because it's a potential fire hazard.

Cheaper cards having a hardware problem that only affects a small subset can still make for prominent news: I remember the RX 480 drawing more power through the PCI-E slot than the spec allows, which made for lots and lots of articles and videos. It didn't affect many people, afaik nothing was permanently damaged by it, and it was resolved through a driver update.

But yes, on average the expectation is that the more a product costs, the fewer flaws are tolerated, which imo is a fair stance. It's why NVIDIA dropping 32-bit PhysX support was a stupid choice imo. Those cards cost good money; how much can it really cost to keep PhysX properly supported?
At least they open-sourced it now, I suppose.