AMD Radeon RX 7900 XT vs. Radeon RX 6800 XT

I would like everyone to step back and remember that you can already get a 7900XT for $500. In this special promotion they throw in a free motherboard, CPU, RAM, storage, case, power supply, and controller.

This special deal is called an Xbox Series X
 
lol the gpu in the Xbox Series X = 7900XT since when?
 
It delivers similar frame rates when comparing the same games, resolution, and quality settings. This has been known for quite some time.
It turns out that Windows and PC drivers still take a pretty big bite out of performance. While it took the 3070 to finally generate more teraflops than a Series X, the efficiencies of being able to create games for a unified platform kept the consoles on top until the 4080/4090 came around.
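
For reference, this is roughly how those teraflop figures are usually worked out. The shader counts and boost clocks below are the published specs, and keep in mind that Ampere's shader count includes the shared FP32/INT32 lanes, which flatters its paper number:

# FP32 throughput in TFLOPS = shaders * 2 ops/clock (FMA) * clock in GHz / 1000
def tflops(shaders, boost_ghz):
    return shaders * 2 * boost_ghz / 1000

print(f"Xbox Series X: {tflops(3328, 1.825):.1f} TFLOPS")  # 52 CUs x 64 shaders
print(f"RTX 3070:      {tflops(5888, 1.725):.1f} TFLOPS")  # counts the shared FP32/INT32 lanes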
 
AMD will be reluctant to do this -- the operating margin in its Gaming sector isn't great (16% in Q4 2022, 14% for the whole year). The bulk of that sector's revenue comes from PS5, Steam Deck, and Xbox Series X/S SoC sales, but given that these are already low margin products, significantly dropping the prices on dGPUs will only hurt that sector even more.

That said, at least it made some operating income in Q4 -- the Client sector (i.e. Ryzen) actually posted a loss.

The polls make for interesting reading and I feel that a reasonable portion of people have unrealistic expectations concerning performance improvements with new GPUs. An increase of 50 to 60% would require substantial changes in architecture design and chip size.
The poll doesn't specify that the upgrade needs to be just one generation newer than the current card. Most gamers wait two generations.

I had a 1080 Ti and skipped the 2xxx series. Now I have a 3080 (12GB) and I definitely got at least a 60% increase!
 
The 1080 Ti was six years ago and was a $600-ish card. You probably got four years of use out of it before the 3080. Was it a whole new PC, or did you put the 3080 into an 8th-gen Intel / first-gen Ryzen system?
 
Very good article. Steve is right that if you're financially limited, then the 6800xt makes more sense.

But there are other considerations. I bought a 7900xt last month. Here's why:

I needed a card that a 650 W PSU could drive. It had to fit in my cherished Lian Li PC-V700, which has problems with wide or tall cards. It also needed to drive a 4K monitor at 60 Hz+. And I wanted it to be quiet.

The 7900xt hit all points. So far it's been a good choice.



 
The 1080 Ti was six years ago and was a $600-ish card. You probably got four years of use out of it before the 3080. Was it a whole new PC, or did you put the 3080 into an 8th-gen Intel / first-gen Ryzen system?
New PC, a 9700K (OC'd to 5 GHz across all 8 cores), in 2019. The 3080 is doing quite well at 3440x1440.

I bought the Alienware AW3423DW QD-OLED monitor, which sports a 175 Hz refresh rate. I needed more horsepower and pulled the trigger on a Newegg flash sale for the 3080 12GB model at $810. Sold my EVGA AIO 1080 Ti on eBay for $350, so it worked out well.

The 1080Ti served me well for many years.
 
The 6800 XT, 7900 XT, and 7900 XTX are the only cards worth buying at the high end right now, unless you really want RT -- at <$600, $800, and $1,000 respectively. Thanks for the comparison, TS.
I believe the 6950 XT is also a great card that can be had today for below $599 with a CPU combo,

and it comes within a 7% performance delta:
https://videocardz.com/newz/family-...to-7-faster-than-rx-6950-xt-but-costs-28-more
https://www.microcenter.com/site/content/bundle-and-save.aspx
IMO the 6950 XT will outlast any 12 GB VRAM card in a few years.
 
A cost breakdown for what? Dies cost cents to make. Millions are produced and whatever is good in the batch gets used. However, more dies come out bad than good, so if you want to break down the cost of a card, it's probably around $25-50 to produce and ship a single card. What you don't see is all the dies that were bad, the employees on the testing line who need to be paid, or the engineers who develop the cooling and the look of the card. So yes, card prices need to be increased to make up for those losses and still turn a profit. You can't look at just the cost of the card, but at what actually goes into making it!
You must have lost your sanity if you believe that a modern GPU die on a recent manufacturing process costs pennies. Such a high-end chip costs hundreds of dollars.

A 5 nm wafer should already cost close to $20k.
 
The list price for 5nm is $17k. Big players (Apple, AMD, Nvidia...) will get a big discount. For the 4080 Nvidia is getting about 180 chips off of each wafer. I suspect Nvidia pays about $10k a wafer due to their relationship, volume, and future plans to produce at 3nm and beyond. Let's also say that only 90% of chips are functional for a 4080. That would put each GPU chip at around $60 each. The RAM, board, cooler, packaging, assembly, and shipping probably push the total cost of goods up to $300 for a 4080. That leaves $900+ in pure profit to distribute between Nvidia, board partner, and retailer.
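
A quick back-of-the-envelope version of that math, with every number here being the guess from the post above rather than anything confirmed:

# Rough per-die cost estimate; all inputs are this thread's assumptions, not confirmed figures.
wafer_price = 10_000        # assumed discounted 5 nm wafer price (USD)
dies_per_wafer = 180        # assumed AD103 candidates per wafer
yield_rate = 0.90           # assumed share of dies good enough for a 4080

good_dies = dies_per_wafer * yield_rate          # ~162 usable dies per wafer
cost_per_die = wafer_price / good_dies           # ~$62 per die

card_price = 1200           # 4080 launch MSRP
other_costs = 240           # assumed RAM, board, cooler, assembly, shipping
print(f"Cost per good die: ${cost_per_die:.0f}")
print(f"Left to split:     ${card_price - (cost_per_die + other_costs):.0f}")  # ~$900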
 
The list price for 5nm is $17k. Big players (Apple, AMD, Nvidia...) will get a big discount.
There is no 'list price' -- the $17k figure that one can find on the Internet comes primarily from this source, which is a well-reasoned estimate, but an estimate nonetheless. TSMC may well offer a discount, but that's likely to be offset by factors mentioned by Nvidia:

"We have placed non-cancellable inventory orders for certain products in advance of our historical lead times, paid premiums and provided deposits to secure future supply and capacity and may need to continue to do so in the future." [Source]

Let's also say that only 90% of chips are functional for a 4080. That would put each GPU chip at around $60 each.
The AD103 in the 4080 just has two TPCs disabled -- the rest of the die is fully functional. To have a 90% yield like that would be nothing short of miraculous, but if it is that high, then Nvidia wouldn't need to pre-order as many wafers as it would if the yield was, say, 70% for a 4080-die and thus wouldn't get much, if any, of a discount.
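
To put a rough number on that wafer-ordering point, here's a quick sketch -- the production volume is made up purely for illustration, and the dies-per-wafer figure is the estimate quoted above:

# Wafers needed for a hypothetical production volume at two different yields.
target_dies = 1_000_000     # hypothetical number of 4080-grade dies required
dies_per_wafer = 180        # candidate dies per wafer, as estimated above

for yield_rate in (0.90, 0.70):
    wafers = target_dies / (dies_per_wafer * yield_rate)
    print(f"Yield {yield_rate:.0%}: ~{wafers:,.0f} wafers")

# 90% yield -> ~6,173 wafers; 70% yield -> ~7,937 wafers (about 29% more to pre-order)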

The RAM, board, cooler, packaging, assembly, and shipping probably push the total cost of goods up to $300 for a 4080. That leaves $900+ in pure profit to distribute between Nvidia, board partner, and retailer.
The GPU dies are initially assembled, packaged, and tested by contracted companies, such as Amkor Technology. I mention this particular company because it's one of the few to provide a decent amount of financial data -- for FY2022, its revenue was $7 billion, 86% of which came from packaging and the rest from testing, with an overall gross margin of 19%. The cost of the work that Amkor and other packaging & testing companies do needs to be added to TSMC's wafer costs. (I only mentioned Amkor, but the other firms Nvidia uses report similar revenue figures, though rarely anything about margins.)

So even if a single 4080-die from TSMC costs Nvidia just $60, the packaging, testing, & distribution will add to that. How much that adds is too hard to guess, as Amkor doesn't provide sufficient information about how much of its sales can be attributed to GPUs, other than Consumer (AR & gaming, connected home, home electronics, wearables) forms 22% of its end user distribution.

One has to add on the fact that Nvidia does some final QC work itself, but all of its GPUs are stored in the contracted distribution centers -- so all that needs to be added too. Then there's the cost of the die trays being shipped to Foxconn, Gigabyte, et al for final graphics card manufacturing. So the finished GPUs do a fair amount of traveling before they get anywhere near a final distribution hub for retailers, and again, that adds to the final cost.

AMD and Nvidia obviously make a decent profit on their GPUs (they wouldn't be in the business if they didn't). High-end cards sell in relatively low volumes, compared to other semiconductor products, and looking at the gross margins that Nvidia reports I would suggest that the profit is more likely to be in the region of $500, taking all of the above into account.
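
Stacking those components up with placeholder figures gives something like the sketch below. None of these line items are disclosed by Nvidia, TSMC, or Amkor, so treat it purely as an illustration of the accounting rather than real numbers:

# Illustrative cost stack for a $1,200 card; every line item is a placeholder guess.
cost_stack = {
    "die (share of TSMC wafer)":   60,
    "packaging & testing":         40,    # guess, loosely informed by Amkor's ~19% gross margin
    "memory, board, cooler":      200,
    "assembly, QC, logistics":     60,
    "board partner & retail cut": 340,    # guess, chosen so the remainder lands near $500
}

card_price = 1200
print(f"Implied Nvidia profit: ~${card_price - sum(cost_stack.values())}")   # ~$500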
 
The list price for 5nm is $17k. Big players (Apple, AMD, Nvidia...) will get a big discount. For the 4080 Nvidia is getting about 180 chips off of each wafer. I suspect Nvidia pays about $10k a wafer due to their relationship, volume, and future plans to produce at 3nm and beyond. Let's also say that only 90% of chips are functional for a 4080. That would put each GPU chip at around $60 each. The RAM, board, cooler, packaging, assembly, and shipping probably push the total cost of goods up to $300 for a 4080. That leaves $900+ in pure profit to distribute between Nvidia, board partner, and retailer.

Do you believe what you are writing? The discounts were something like 3%, lol.

Since that price list was revealed, TSMC has said on two or three occasions that it will increase prices, in addition to ending the discount offered to companies that buy in high volume. GPUs are not CPUs: you sell the hardware and then have to pay a huge software team to support that line of GPUs for many years. Margins are pretty thin on products for the average consumer (gamers).

Nvidia's margins should be at least double AMD's at this point. Note that the 4070 has only 12 GB of VRAM and uses a small chip compared to the competitor's.
 
$750, really? More like $600. Seriously. I wish there was a cost breakdown per card of what it costs AMD to produce one. You would be utterly shocked.

I know there is a lot of overhead involved with that -- staff, R&D, everything. But the more cards they sell, the less it costs per card.
IMO it's a $500 card at best. The prices are so damn inflated that we are normalizing these crazy prices.
 
Nvidia's margins should be at least double AMD's at this point. Note that the 4070 has only 12 GB of VRAM and uses a small chip compared to the competitor's.
Except the 7900 XTs are the defective dies from producing the big Navi 31 for the XTX line -- so in many ways they are free. One of the side benefits of chipletting GPUs.
 
Nothing to do with chiplets -- that's simply chip binning and it's been done with GPUs since the mid-90s, though defect dies were just run at lower clock speeds at that time (disabling defective sections for other products didn't really start until the early 2000s). Given that the manufacturing, packaging, testing, and distribution costs for every die from a wafer are exactly the same, it's somewhat of a flawed argument to suggest that they're free in some ways.
 
Nothing to do with chiplets
The use of chiplets breaks the GPU up into a bunch of small chiplets. One tiny flaw on the wafer only impacts a single chiplet, so only 20-60 mm² of wafer is lost instead of a whole 300 or 600 mm² when the die is one big rectangle on the wafer.
 
The Navi 31 comprises two types of chiplets, only one of which is tiny -- a single Graphics Compute Die, which is 304 mm² in area, and multiple Memory Controller Dies, which are 37 mm² each. The 7900 XT has a partially disabled GCD (12 CUs are non-functional) and, out of the six available placements for MCDs, five are filled and the sixth is fitted with a blank die for heatsink load distribution. The GCD is reasonably large, roughly the same size as the Navi 22, but a lot smaller than the Navi 21, because the L3 cache and memory controllers sit in separate chiplets.

However, AMD has, so far, only released two products using this chip design (7900 XT and XTX) -- one with a fully enabled GCD and the other with 88% of the compute ability, operating 4% slower. Truthfully, that's not enough die usage to determine how effective AMD's design is, compared to previous ones, but the fully monolithic Navi 21 generated 9 products, from 3 primary chip bins (100%, 90%, and 75% compute ability) and up to 4 clock domains per bin.

So far, there's nothing to suggest that the chiplet design has made much of a difference and if it has, it's to AMD's benefit only.
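
For a feel of the scale being argued over here, a simple Poisson yield model (yield = exp(-defect density x die area)) can be run against those die sizes. The defect density below is an assumed figure, not anything published by TSMC or AMD, so only the relative gap between the dies matters:

import math

# Poisson yield model: probability a die has zero defects = exp(-D0 * area).
D0 = 0.001      # assumed defect density in defects per mm^2 (0.1 per cm^2) -- illustrative only

def defect_free(area_mm2, d0=D0):
    return math.exp(-d0 * area_mm2)

print(f"Navi 21 (monolithic, ~520 mm^2): {defect_free(520):.0%}")
print(f"Navi 31 GCD (304 mm^2):          {defect_free(304):.0%}")
print(f"Navi 31 MCD (37 mm^2):           {defect_free(37):.0%}")
# A flawed GCD can still be binned as a cut-down part (e.g. the 7900 XT),
# and a flawed MCD wastes only 37 mm^2 of wafer rather than a whole large die.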
 
This is performance stagnation. If the 7900 XT had been named correctly as the 7800 XT, it would have been around $700 and a pretty reasonable purchase. At its current price, there's no real increase in performance per dollar.
 
"Decent value at $800" @Steve -- that's astounding, and humorous. It's a shame that GPU pricing has pushed us to declare that. I still see prices hovering around $760-$799 for the 7900 XT. I was looking to upgrade, but it made more sense for me to pick up a 6950 XT with a free copy of Starfield for $575 yesterday... a 50% reduction from MSRP! Someone is making huge profits on GPUs...
 