AMD Radeon 7800 XT emerges in Geekbench, possibly hinting at imminent unveiling

Daniel Sims

Something to look forward to: As Team Red teases the remainder of the RDNA 3 lineup, analysts and customers are speculating about which models will be released first to compete against Nvidia's mid-range RTX 4000-series cards, some of which have yet to arrive. The appearance of the Radeon RX 7800 XT on Geekbench might offer a clue, even if the result isn't a true indication of the card's likely performance.

A Wednesday entry on Geekbench lists the as-yet-unannounced AMD Radeon RX 7800 XT in a Vulkan benchmark. The score likely falls far short of what the finished card will deliver, but its appearance on the site suggests that Team Red could reveal it soon.

AMD CEO Lisa Su recently confirmed that "mainstream" Radeon RX 7000 graphics cards would hit the market this quarter, but did not specify which particular cards. Around the same time, a now-deleted GitHub pull request listed 12 GPUs in the series, both released and unreleased, including the 7800 XT.

Moreover, previous rumors suggest that the 7800 XT, 7700 XT, and 7600 XT will debut at Computex in late May or early June. The more affordable Radeon RX 7600 is likely to emerge first, as Navi 32 and 31 yield issues could delay the release of the 7700 XT and 7800 XT. This situation adds intrigue and anticipation to the 7800 XT's presence on Geekbench.

The card scored 113,819 in a Vulkan test while paired with a 6GHz Ryzen 9 7950X and 58GB of system RAM running Debian Linux. The score is similar to listings for the RTX 3070 or RX 6700 XT and much lower than the card's likely real-world performance.

The system is likely an early internal AMD engineering sample. With the 7800 XT probably still weeks from launch, the drivers are nowhere near ready, and power constraints may also be engaged.

Geekbench doesn't reveal anything about the GPU's specs, but previous information suggests it will use the full Navi 32 die, giving it 60 compute units (7,680 shader units). It's also expected to feature 64MB of Infinity Cache across four memory dies and 16GB of 21 Gbps GDDR6 VRAM with a 250-to-285-watt TBP. The 7800 XT would likely compete with the RTX 4070 Ti.

Whichever new RDNA 3 GPUs Team Red releases first and regardless of their performance, they are expected to invigorate the market for new mainstream graphics cards, following customers' lukewarm response to Team Green's RTX 4070. Nvidia could limit that card's supply due to slow sales, and the RTX 4060 might not impress users either.


 
I think the rumours are claiming 7800XT is slower than 4070 Ti but faster than 4070. It'll need to be $599 max if that's true.
 
I think the rumours are claiming 7800XT is slower than 4070 Ti but faster than 4070. It'll need to be $599 max if that's true.

If they can do $600, it'll sell like hotcakes. Of course, I said the same thing about the 4070, and ate my words. Unfortunately, I think it'll release just above that with limited availability after Nvidia's misstep to see what the market does before flooding it with cards.
 
This whole generation of GPUs is boring as sht.
Just out of curiosity, and meant as a genuine question, what would you want to see in the next generation of AMD, Intel, and Nvidia GPUs?
 
This whole generation of GPUs is boring as sht.
It is, except for the 4090. Ignoring prices for a moment, it's one of the biggest leaps in performance we've seen in a while. Adding pricing back in, though, yeah, it's far more boring.
 
This whole generation of GPUs is boring as sht.
Boring because they're stealing money from the end-users who always supported them, which is completely wrong of them.
I won't buy another GPU until I feel they have felt the pain they are dishing out.
 
This whole generation of GPUs is boring as sht.
Yes, horrible prices and ridiculously low amounts of memory on many Nvidia cards, plus mediocre products from AMD so far. Next gen will need better prices or I will pass again. Power consumption on the 4090 is a scandal too.
 
Just out of curiosity, and meant as a genuine question, what would you want to see in the next generation of AMD, Intel, and Nvidia GPUs?
I would like to see more focus on the mid-range: essentially more raw performance without any upscaling, with ray tracing as a possible alternative in that segment; tighter competition from AMD, who need to match Nvidia rather than just be the budget option; and a better price/performance ratio.
 
I would like to see more focus on the mid-range: essentially more raw performance without any upscaling, with ray tracing as a possible alternative in that segment; tighter competition from AMD, who need to match Nvidia rather than just be the budget option; and a better price/performance ratio.
Probably going to be a while, if ever, before we see a significant improvement in the mid-range. Fab costs have risen massively and aren't going to get any cheaper for the foreseeable future. The GCD in Navi 32 is likely to be the same size as the entire Navi 23 die, yet pack literally double the hardware (twice as many CUs and ROPs, and twice the cache), but that's only been achieved by using TSMC's N5 node, which is a fair bit more expensive than N7.

AMD, or any GPU vendor, is going to be reluctant to field that kind of chip in a low-margin product, which is why Navi 32 is going to be an RX 7800-class die, whereas Navi 33 (same internals as Navi 23) will be N6-fabbed (to make it smaller and improve yields) and stuffed into 7600/7500 models.

There are only two ways of improving raw performance -- higher clocks and more shaders, TMUs, ROPs, cache, etc. In previous years, TSMC and the like only raised their fabrication prices in a linear manner, with a relatively low gradient. Now that the price curve looks more like the ********** of K2, the only way to continue improving GPUs without eating into the profit margin is either to raise prices or to release products that are only marginally better than the previous generation (or do both!).
 
There are only two ways of improving raw performance -- higher clocks and more shaders, TMUs, ROPs, cache, etc. In previous years...
In previous years, Nvidia being my example here, they would use the same fabrication process (28nm) but still get a leap in performance by making substantial architecture and design changes.

Are the days of that happening gone? Is it now just a case of using the same GPU architecture and design but simply adding more of everything (cache, CUs, ROPs, etc.)?
 
In previous years, Nvidia being my example here, they would use the same fabrication process (28nm) but still get a leap in performance by making substantial architecture and design changes.

Are the days of that happening gone? Is it now just a case of using the same GPU architecture and design but simply adding more of everything (cache, CUs, ROPs, etc.)?
To a large degree, yes -- that era is over now, for gaming GPUs at least. If one compares the innards of an AMD GPU to an Nvidia one, they're virtually identical, a far cry from a decade ago. It probably won't be long before the core structures are exactly the same and the only differences are in the ancillaries, such as matrix and ray tracing units.
 
To a large degree, yes -- that era is over now, for gaming GPUs at least. If one compares the innards of an AMD GPU to an Nvidia one, they're virtually identical, a far cry from a decade ago. It probably won't be long before the core structures are exactly the same and the only differences are in the ancillaries, such as matrix and ray tracing units.
Explains why they go after the latest and greatest fabrication nodes, then: if your only option is to stuff more of everything in, you need the better fab to tame die size, power, heat, etc.
 