Intel Battlemage desktop GPU hits Geekbench with 12GB of VRAM, 2,850 MHz boost

Daniel Sims

What just happened? Although Intel's second-generation Xe2 Battlemage graphics architecture recently debuted in the integrated graphics of Lunar Lake laptops, details about the dedicated desktop variants remain scarce. A potential mid-range model from the lineup has surfaced on Geekbench, but questions about pricing and driver support linger.

An Intel graphics card from the upcoming Arc Battlemage series has appeared on Geekbench for the first time. Although its performance score aligns with mid-range GPUs, its VRAM capacity, exceptionally high clock speed, and lack of detailed specifications raise significant questions.

Listed under the generic "Intel Xe Graphics RI" designation, the unnamed card achieved a 97,943 OpenCL score, placing it in close competition with the Nvidia GeForce RTX 4060 and Intel Arc A770.

However, the mysterious Battlemage GPU recorded a maximum clock speed of 2,850 MHz, significantly higher than the boost clock of any card currently on the market. It's still unclear what role drivers may have played in its performance. Drivers were a primary weakness of Battlemage's predecessor, Alchemist, but Intel has made considerable progress in this area since entering the discrete GPU market.

Additional details suggest that this new card might fall into the upper budget tier of the Battlemage lineup or the lower end of its mid-range segment. It could be a successor to the A380 or an entirely new product.

The card's 160 execution units could represent 20 Xe2 cores, which is a substantial increase from the A380's eight cores, but slightly below the 24 cores found in the next-tier A580. Moreover, its 12 GB of VRAM places it between the A580's 8 GB and the A770's 16 GB.
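For readers who want to sanity-check that core-count math, the short sketch below reproduces it under the common assumption that each Xe2 core contains eight of the vector engines Geekbench reports as execution units; the helper name and comparison table are illustrative, not Intel specifications.

```python
# Minimal sketch: converting the Geekbench-reported execution-unit (EU) count
# into an Xe2 core count, assuming 8 vector engines per Xe2 core.
EUS_PER_XE2_CORE = 8  # assumption based on Intel's Xe core layout

def xe2_cores_from_eus(execution_units: int) -> int:
    """Return the implied number of Xe2 cores for a given EU count."""
    return execution_units // EUS_PER_XE2_CORE

reported_eus = 160  # from the Geekbench listing
print(f"{reported_eus} EUs -> {xe2_cores_from_eus(reported_eus)} Xe2 cores")
# Output: 160 EUs -> 20 Xe2 cores

# For comparison, the Alchemist parts mentioned above (Xe core counts):
alchemist = {"Arc A380": 8, "Arc A580": 24}
for name, cores in alchemist.items():
    print(f"{name}: {cores} Xe cores")
```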

Furthermore, Benchleaks identified the GPU by its PCI ID (vendor 8086), which had previously appeared in a Linux kernel driver database as a product based on the Battlemage G21 graphics processor. Reports from established leakers suggest that the G21 is the most powerful GPU Intel plans to release in its dedicated Battlemage series, likely supporting multiple products.

The PC used for the Geekbench test featured an Intel Core i5-13600K CPU running at 3.5 GHz with a balanced power plan, along with 32 GB of DDR5 RAM clocked at 4,788 MT/s.

Intel is expected to unveil discrete Battlemage cards later this year. If the company avoids the shipping delays that plagued Alchemist, the new cards could hit the market before next-generation GPUs from AMD and Nvidia, which are expected to debut at CES 2025 in January.

With a few weeks' head start, the right price, and good drivers, a new mid-range GPU could help Intel claw back the discrete graphics market share it has all but lost.

 
Terahertz ICs have already been made, so yes, he could, because others already have. He's also correct that it wouldn't do much, because such a fast chip has to be severely limited in function.
No, he couldn't. I'll bet you a million dollars he himself could not do it. I'm not arguing that high-frequency chips don't exist.
 
We've known for decades that clock speed means nothing when comparing different architectures. I could make a chip that runs at terahertz speeds; that doesn't mean it would be able to do anything.

I know what you mean, but you can't hide the sun with your thumb. There is no graphics card with that clock, and whatever gets close is high-end, not mid-range or low-end. It doesn't mean this will be the fastest card, but yes, it does mean quite a lot.
 
The battle to keep VRAM away from the poors continues.

Meanwhile Nvidia 8GB cards perform like AMD 16GB cards in many new games:

https://www.techpowerup.com/review/god-of-war-ragnarok-fps-performance-benchmark/5.html

6800 XT at 3070 / 3070 Ti level...
Yeah, VRAM really future-proofs a card, I see :D

However, this Intel 12GB card is not their best SKU. Top Battlemage will get 16GB.

99.9% of PC gamers don't need the 20, 24, or even 32GB of VRAM the 5090 is rumoured to get...

12-16GB is plenty, especially if you don't care about Ray Tracing or Path Tracing, which most don't. RT uses far more VRAM than raster. What's funny here is that AMD can't do RT properly anyway, meaning 20-24GB of VRAM is pure waste.

Going all out on VRAM is pointless if the GPU lacks power to begin with. Just look at the 6700 XT today: beaten by the 3070 in 9 out of 10 games. Launch price was the same for both cards. People praised the 6700 XT for having 12GB and said it would age like wine - it aged like milk instead, due to lacking GPU power and RT capabilities (many games have forced RT these days, but it can be reduced).

What matters more for longevity is upscaling, and DLSS beats FSR with ease.

I will take superior upscaling (with built-in AA and sharpening, replacing any other third-party AA solution) over "just VRAM" any day. DLSS Quality looks GREAT at 1440p. FSR at 1440p, not so much...
 
What about VR? I know the main thing for VR is bus capacity, but I wonder if VRAM also helps?
 
Intel is floundering overall. Their foray into the discrete graphics market is too little, too late.
Absolutely not; the timing was right, since DX12 has become the standard for most games, and DX12/Vulkan generally run very well on Intel GPUs.

Intel's biggest issues have been in DX9-10 titles. Huge improvements have been made already. DX11 performance is mostly fine. Eventually, it won't matter much.

Intel is not leaving GPUs behind now. They don't do it solely for gaming market share; they are into AI as well. Intel can't stay focused on x86 only.

Intel needs GPUs going forward: SoCs, iGPUs, AI in general.

Gaudi 3 looks great. Intel is much closer to AMD in the AI market than AMD is to Nvidia.

2024 Revisit from Gamers Nexus:

"As for conclusions, they’re simple: When Intel Arc works, it works exceptionally well for the price class that it’s in. This is a stark change from initial launch when several games just simply didn’t launch or stuttered in unplayable ways. Intel has made remarkable improvements in its drivers."

Also, Xe2 looks very impressive - look at Lunar Lake's performance. It beats both AMD and Qualcomm (ARM) in performance per watt.
 
They launched at least one generation too late, and it took almost another generation to get the drivers into sufficiently working condition.
 
Meanwhile Nvidia 8GB cards perform like AMD 16GB cards in many new games:

https://www.techpowerup.com/review/god-of-war-ragnarok-fps-performance-benchmark/5.html

6800 XT at 3070 / 3070 Ti level...
Yeah, VRAM really future-proofs a card, I see :D

However, this Intel 12GB card is not their best SKU. Top Battlemage will get 16GB.

99.9% of PC gamers don't need the 20, 24, or even 32GB of VRAM the 5090 is rumoured to get...

12-16GB is plenty, especially if you don't care about Ray Tracing or Path Tracing, which most don't. RT uses far more VRAM than raster. What's funny here is that AMD can't do RT properly anyway, meaning 20-24GB of VRAM is pure waste.

Going all out on VRAM is pointless if the GPU lacks power to begin with. Just look at the 6700 XT today: beaten by the 3070 in 9 out of 10 games. Launch price was the same for both cards. People praised the 6700 XT for having 12GB and said it would age like wine - it aged like milk instead, due to lacking GPU power and RT capabilities (many games have forced RT these days, but it can be reduced).

What matters more for longevity is upscaling, and DLSS beats FSR with ease.

I will take superior upscaling (with built-in AA and sharpening, replacing any other third-party AA solution) over "just VRAM" any day. DLSS Quality looks GREAT at 1440p. FSR at 1440p, not so much...


How convenient of you to forget all the RTX 3070 stuttering and texture-loading problems 🙈 Maybe don't pick a card that has had demonstrable issues with memory capacity in real-world gaming, Mr. Fanboy 🫣
 
You are the fanboy spreading misinformation; I could not care less which brand I use. Luckily I use a 4090, meaning AMD doesn't have anything for me and won't for years, since they left the high end.

The reality, though, is that AMD's 16GB cards are beaten by an 8GB card, and there are zero issues with textures or stuttering - those would show up in the minimum-fps testing as usual. That is the whole point of minimum-fps testing ;)

The 3070 mostly had problems in The Last of Us, which was AMD-sponsored. The game had issues on many AMD cards as well, and it was fixed long ago. It was a rushed console port, nothing else - not a baseline for PC gaming at all.

This is not the first game where 16GB AMD cards are close to 8GB Nvidia ones, or even beaten:

The 3070 8GB beats the Radeon 6800 16GB even at 4K/UHD at Ultra, minimum fps included.

Sad but true. AMD is falling behind in more and more new games, due to an inferior architecture and missing features. This is not fanboyism; it is reality.

I was a 6800 XT owner before I got my 4090. I know exactly what I am talking about. My 6800 XT died on me, had 110°C hotspot temps since day one (AMD says it's fine; look at their official forums if you don't believe me), and gave me tons of issues in all kinds of games. FSR was horrible as well.

With my 4090 I am getting 60°C load temps, 85°C hotspot peaks, silent operation, zero issues in any games, DLSS/DLAA (which is awesome), Frame Gen that actually works, and the option for RT and even Path Tracing, which AMD can't do at all.

It is not a coincidence that AMD will soon drop below 10% dGPU market share ;)
 
AMD has many integrated GPU solutions for PC.

Intel has many integrated GPU solutions for PC.

Nvidia has how many?

No one cares about how many GT 710s Nvidia sells.
 

Nintendo uses Nvidia in the Switch, with 150+ million units sold. AMD lost the Switch 2 deal because its solution used far more power and because FSR is beaten by DLSS. FSR is generally really bad at low resolutions, where DLSS does fine.

Sony did not use FSR in the PS5 Pro but PSSR, its own in-house solution, which beats FSR as well. Even XeSS beats FSR. You see, AMD doesn't really have the R&D funds to compete.

Even Nvidia doesn't care how many GT 710s Nvidia sells. They look at profitable products only. There is no money in the low-end GPU market; people with low budgets buy second-hand.

Anything else?
 
Wow, a 97K OpenCL score on Geekbench with a max clock speed of 2,850 MHz is pretty rubbish if you ask me, considering my RX 7800 XT gets an OpenCL score of 147,406 at a max clock of 2,250 MHz.
That makes that Intel GPU pretty inefficient at OpenCL, with a 600 MHz higher clock speed and a 49,463-point deficit.
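For anyone who wants to reproduce that comparison, here is a quick back-of-the-envelope sketch using only the figures quoted in this post; points-per-MHz is a crude metric, since drivers, memory bandwidth, and architecture matter far more than clock speed alone, and the labels are illustrative.

```python
# Rough points-per-MHz comparison using only the numbers quoted above.
# Geekbench OpenCL is not a gaming benchmark; this is just the arithmetic.
cards = {
    "Leaked Intel Battlemage": {"opencl_score": 97_943, "max_clock_mhz": 2850},
    "Radeon RX 7800 XT": {"opencl_score": 147_406, "max_clock_mhz": 2250},
}

for name, c in cards.items():
    print(f"{name}: {c['opencl_score'] / c['max_clock_mhz']:.1f} points/MHz")

print(f"Score deficit: {147_406 - 97_943}")  # 49,463 points, as noted above
```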
 

This is a $250-300 SKU; the 7800 XT is $500.

Since when did Geekbench matter for gaming performance, though? It's a pretty terrible benchmark for this. Nvidia wins easily.

The 7800 XT is at 3070 level.
 
I don't care if it clocks to a gazillion or 5 MHz; what is its performance like? Is Intel going to catch up to the high end this time? Will they have a 4070-tier card, maybe a 4080? Or are we going to be stuck with more x6x-class cards for another generation? Because only launching low-end and lower-midrange cards isn't working well for them.
 
AMD will be fighting against the 5060 series soon with RDNA4, so what is the difference here exactly?

AMD has officially left the high-end gaming GPU market. The top RDNA4 SKU will deliver 7900 GRE / 4070-series performance, with the focus on improved RT performance. Raster performance is not really going up - it is actually going down versus the Radeon 7900 XTX and 7900 XT... Hopefully efficiency is improved as well.

Intel is coming for AMD's prime market share, which is the low to mid range, and Xe2 looks very good so far. AMD should be worried, considering XeSS beats FSR with ease as well. Upscaling is here to stay; it replaces AA in pretty much all new games and is listed in the system requirements of most new games too.

Nvidia has the high end to itself, and neither AMD nor Intel will change anything about this. They are both years behind.

AMD has been in the GPU market for almost 20 years now; Intel is a new player and can already match AMD's mid-range products with Battlemage, with drivers that have improved immensely in just a few years. AMD should be worried.

When Intel starts turning out its own GPUs on 18A and better, it can undercut AMD with ease, because AMD is stuck with TSMC, which has raised prices a lot over the last five years. AMD can't even afford to use TSMC's prime nodes anymore. Apple is always first in line at TSMC, but now both Intel and Nvidia have priority over AMD too; both are using TSMC N3 long before AMD will.

Let's see if AMD will come crawling to Samsung for cheap chips; performance will suffer, though. I know AMD has been in talks with Samsung before.

You generally don't need top-tier nodes to compete in the low to mid range, so it would make sense for AMD to go cheaper. Margins would probably be higher then.
 