Graphics card makers respond to RTX 30-series capacitor controversy (updated)

Well, one thing is certain: the nVidia reference spec has five POSCAPs and one MLCC array. Therefore, cards with six POSCAPs are BELOW nVidia's spec, and that IS the AIB partner's fault.

You are thinking of Igor's Lab's image:

https://www.igorslab.de/wp-content/uploads/2020/09/Bottom-POSCAP-vs-MLCC.jpg

The image was made to show the possible choices. If you look at the other capacitor locations, you can see the outline for either POSCAPs or an MLCC array at all six.

If you read the article, it doesn't state that one POSCAP is required. It simply states that either can be used in any of the six locations.

https://www.igorslab.de/en/what-rea...tabilities-of-the-force-rtx-3080-andrtx-3090/

TechPowerUp comes to the same conclusion:

"Another reason for this, according to Igor, is the actual "reference board" PG132 design, which is used as a reference, "Base Design" for partners to architecture their custom cards around. The thing here is that apparently NVIDIA's BOM left open choices in terms of power cleanup and regulation in the mounted capacitors. The Base Design features six mandatory capacitors for filtering high frequencies on the voltage rails (NVVDD and MSVDD). There are a number of choices for capacitors to be installed here, with varying levels of capability. POSCAPs (Conductive Polymer Tantalum Solid Capacitors) are generally worse than SP-CAPs (Conductive Polymer-Aluminium-Electrolytic-Capacitors) which are superseded in quality by MLCCs (Multilayer Ceramic Chip Capacitor, which have to be deployed in groups)."

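If it helps to picture the design space Igor and TechPowerUp describe, here is a small Python sketch: six capacitor groups behind the GPU, each of which may be populated with either a POSCAP or an MLCC array, giving 64 permitted combinations. The labels and the "claimed_reference" tuple are illustrative assumptions, not NVIDIA's actual BOM.

```python
from itertools import product

# Toy enumeration of the design space described above: six capacitor
# groups, each populated with either a POSCAP or an MLCC array.
# Labels are illustrative, not taken from NVIDIA's BOM.
CHOICES = ("POSCAP", "MLCC_ARRAY")

def mlcc_count(config):
    """How many of the six groups use an MLCC array."""
    return sum(1 for group in config if group == "MLCC_ARRAY")

# Every one of the 2**6 = 64 combinations is permitted by the base design.
configs = list(product(CHOICES, repeat=6))
print(len(configs))  # 64

# The "five POSCAPs plus one MLCC array" layout from the first post is
# just one point in that space, not a mandated minimum.
claimed_reference = ("POSCAP",) * 5 + ("MLCC_ARRAY",)
print(mlcc_count(claimed_reference))  # 1
```
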
Mind you, even if we didn't have six-POSCAP designs, this issue is still occurring on cards with only one or even two POSCAPs. According to Igor, Nvidia did not give AIBs enough time to test or bin their cards, so they were going into this pretty blind.
 
And in my opinion they created the problem in the first place by grossly under-speccing their batteries. iPhones have absurdly small battery capacities compared to equivalent Android phones, and the peak power delivery of a lithium rechargeable cell is directly proportional to its capacity. Android phones can therefore afford to lose far more capacity than iPhones before power instability even becomes a problem. IMHO they were technically correct that throttling was the right move, but customers still have every right to be infuriated, both because Apple hid it from them and because it was entirely preventable by putting a decently sized battery in the damn thing in the first place.
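
For what it's worth, the proportionality claim follows from the usual C-rate model of a cell: maximum discharge current is the rated C-rate times capacity, so at a fixed voltage, peak power scales linearly with capacity. A minimal sketch in Python, assuming a round C-rate and nominal voltage (neither is any phone's real spec):

```python
# Back-of-the-envelope for "peak power is proportional to capacity".
# C_RATE and NOMINAL_V are assumed round numbers, not real phone specs.
C_RATE = 2.0      # assumed max continuous discharge, in multiples of capacity per hour
NOMINAL_V = 3.8   # typical Li-ion nominal cell voltage

def peak_power_w(capacity_mah: float) -> float:
    """Peak power = V * I_max, where I_max = C-rate * capacity."""
    i_max_a = C_RATE * capacity_mah / 1000.0  # mAh -> Ah, scaled by C-rate
    return NOMINAL_V * i_max_a

# Illustrative cell sizes: a small iPhone-class cell vs. a large Android-class cell.
print(peak_power_w(1800))  # ~13.7 W of headroom
print(peak_power_w(4000))  # ~30.4 W of headroom
```

On this model, a cell that has lost 20% of its capacity has also lost 20% of its peak-power headroom, so the smaller cell hits the instability threshold much sooner, which is the preventable part of the argument above.
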
iPhone batteries are smaller than their Android counterparts, yet the phones always last far longer. That's because they are far more efficient, which has the added bonus of faster charging. They do this while absolutely running rings around Android phones performance-wise too. The cheapest iPhone, the SE, smokes even a Note20 in every test.
 
You mean because Nvidia set the minimum spec too low. AIBs were following Nvidia's guidance. There are always going to be budget models regardless of the card; not everyone needs LN2-capable cards. It was up to Nvidia to set the floor, and they messed up.

So which cards are going to be affected by this driver performance reduction, and what solution is being provided to owners of affected cards (a clock reduction is not a solution)?

I believe we will also need a retest with the updated driver to see the impact on performance. If they reduce clocks by 5%, that would be significant.
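
As a rough first-order check, in GPU-bound scenes frame rate scales at most linearly with core clock, so a 5% clock cut costs up to about 5% performance. A sketch of that estimate, where the `scaling` factor is an assumed parameter since real-world scaling is usually sublinear:

```python
# First-order estimate of the cost of a driver-imposed clock reduction.
# In GPU-bound scenes, frame rate scales at most linearly with core clock;
# `scaling` < 1.0 models the usual sublinear behaviour. Numbers are illustrative.
def estimated_fps(base_fps: float, clock_scale: float, scaling: float = 1.0) -> float:
    """clock_scale = new clock / old clock; scaling = sensitivity to clock."""
    return base_fps * (1.0 - scaling * (1.0 - clock_scale))

print(estimated_fps(100.0, 0.95))               # 95.0 fps if perfectly clock-bound
print(estimated_fps(100.0, 0.95, scaling=0.7))  # 96.5 fps with weaker scaling
```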

The results are not representative anyway, since the AMD CPU used in the test bench heavily bottlenecks performance.
 