Intel Arc A770 desktop GPU debuts in the Geekbench database

mongeese

Forward-looking: Intel has finally started releasing its mobile Arc GPUs and is poised to release its first desktop Arc GPUs later this summer. Somewhere near the top of the pile will be the Arc A770, a fully equipped part with 512 EUs, a 2-2.5 GHz boost clock, and 16 GB of memory.

Intel first announced the flagship Arc hardware as the ACM-G10 GPU and then paper-launched it as the A770M for mobile. Laptops with the A770M should appear on shelves in the coming months. Its unannounced desktop equivalent has now been found in the Geekbench OpenCL database.

It's not a groundbreaking leak, since the hardware itself has surfaced many times before, including once previously in the Geekbench database. It is, however, the first time the card has appeared under its full name: Intel Arc A770 Graphics.

It's also another dot on the clock speed map: 2.4 GHz. Late last year, we talked about rumors of 2.5 GHz; then, in February, we saw 2.4 GHz for the first time. Intel itself teased 2.25 GHz last month, but it was perhaps referring to a different model.

And the critical bit: aligning with past leaks, the A770 achieved an OpenCL score of 85,585 points. That's slightly more than the RX 6600 XT, about the same as the RTX 2060 Super and RTX 2070, and a good bit less than the RTX 3060. In other words, thoroughly midrange.

But that isn't the whole story. Geekbench breaks down the OpenCL score into its 11 component categories, and some architectures fare better in some categories than others. For example, the 6600 XT is about 35% faster than the 3060 in the Gaussian blur test, despite being slower in eight categories and having a worse overall score.
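To make that kind of comparison concrete, here is a minimal sketch of how the per-category breakdown can be read. The category names are real Geekbench OpenCL sub-tests mentioned in this article, but the scores below are made-up placeholders, not the leaked results.

```python
# Illustrative comparison of per-category Geekbench OpenCL sub-scores for two GPUs.
# The numbers are placeholders for illustration only, not the leaked A770 results.
gpu_a = {"Gaussian Blur": 1350, "Particle Physics": 1200, "Sobel": 700}
gpu_b = {"Gaussian Blur": 1000, "Particle Physics": 1100, "Sobel": 900}

for category in gpu_a:
    delta = (gpu_a[category] / gpu_b[category] - 1) * 100
    leader = "GPU A leads" if delta > 0 else "GPU B leads"
    print(f"{category}: {leader} by {abs(delta):.0f}%")
```

A card can therefore lose the overall score while still winning individual categories by wide margins, which is exactly the situation described below.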

Although the 3060 bests the A770 in more than half the categories, the A770 takes a significant lead in the particle physics and Gaussian blur tests. It loses by the largest margins in the Sobel, histogram equalization, horizon detection, and Canny tests, all of which are based on computer vision.

From this, you can see a bit of a pattern: the A770 does poorly in tests sensitive to memory but is otherwise computationally powerful.

That's a surprising result given the on-paper specs of the A770M, which the desktop A770 presumably shares: 16 GB of GDDR6 clocked at 17.5 Gbps and connected via a 256-bit bus. That's not a bad memory subsystem, so perhaps this is a quirk of OpenCL, or maybe the Alchemist architecture has a memory bottleneck.
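As a rough sanity check on that claim, the peak bandwidth implied by those leaked numbers is easy to estimate. The sketch below is back-of-envelope arithmetic using the standard GDDR bandwidth formula, not an official Intel figure.

```python
# Back-of-envelope peak memory bandwidth for the leaked configuration.
# bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte
data_rate_gbps = 17.5   # GDDR6 per-pin data rate from the leak
bus_width_bits = 256    # memory bus width from the leak

bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
print(f"Peak bandwidth: {bandwidth_gbs:.0f} GB/s")  # prints ~560 GB/s
```

Roughly 560 GB/s of raw bandwidth supports the point above: on paper, the memory subsystem looks healthy, so any bottleneck would likely sit elsewhere in the architecture.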

Realistically, there wouldn't be significant ramifications if it did. At worst, it might make the architecture better suited to lower resolutions than higher ones and reduce its longevity. Nevertheless, it's interesting to see what makes the Alchemist architecture different from Ampere and Navi.

In terms of gaming performance, the OpenCL scores don't tell us much. As noted above, the 6600 XT is more than 10,000 points behind the 3060, but in our review, we found it to be faster in most games at 1080p and 1440p. Leaked benchmarks from before we knew its name put the A770 ahead of the 3060 and in the realm of the 3070 Ti.


 
Don't be surprised to see some OEMs gorging on this mediocre product. Intel has done many under-the-table deals like that in the past. And they want this product out there badly, pulling out every last rabbit if they have to.

Do we want this Arc thing or not? - is an "Arkanoid" problem :)
 
Don't be surprised to see some OEMs gorging on this mediocre product. Intel has done many under-the-table deals like that in the past. And they want this product out there badly, pulling out every last rabbit if they have to.

Do we want this Arc thing or not? - is an "Arkanoid" problem :)
We don’t know if it’s mediocre yet; we don’t have anywhere near enough information to reach that conclusion. For all we know, it could be really good. It could cost $200 and beat a 6600 XT for all we know.

I’m hoping it’s better than what Nvidia and AMD make now. I don’t understand why anyone wouldn’t want that…
 
I'll take whatever they bring to market if it fills a consumer demand. I don't expect fireworks performance-wise; it's totally fine for a first-generation effort to be solidly midrange. If priced correctly, there is plenty of market to go around. It'll be the second- and third-generation iterations that will hopefully start to nudge the leaders. If they help banish the discrete GPU shortages the world has suffered into the trashcan of history forever, they're welcome.
 
You'd swear people want this to fail. Which is weird. Because even if Intel only shows up in the low-to-mid-end discrete segments, it will eventually become a drag on GPU pricing across the board. If they choose to undercut the incumbent competitors, it could trigger some of the biggest GPU price drops in years.
 
Don't be surprised to see some OEMs gorging on this mediocre product. Intel has done many under-the-table deals like that in the past. And they want this product out there badly, pulling out every last rabbit if they have to.

Do we want this Arc thing or not? - is an "Arkanoid" problem :)

Considering the way AMD and Nvidia have manipulated the GPU market over the years, I wouldn't have a whole lot of sympathy for them. You reap what you sow.
 
We don’t know if it’s mediocre yet; we don’t have anywhere near enough information to reach that conclusion. For all we know, it could be really good. It could cost $200 and beat a 6600 XT for all we know.

I’m hoping it’s better than what Nvidia and AMD make now. I don’t understand why anyone wouldn’t want that…
Intel and cheap don't fit in the same sentence. So I doubt it's gonna be cheaper than its equivalents from AMD and Nvidia, even less so with the lack of GPUs.
 
Whatever it really turns out to be, most will pan it.
No matter how bad it is, most commenters will meatshield it because "muh competition!"
You'd swear people want this to fail. Which is weird. Because even if Intel only shows up in the low-to-mid-end discrete segments, it will eventually become a drag on GPU pricing across the board. If they choose to undercut the incumbent competitors, it could trigger some of the biggest GPU price drops in years.
Case in point.
 
Based on this, maybe I am not way off in my assumption that these will be failures pretty much immediately for gaming but might actually find a rather solid standing in workstation and data center use?
 
We don’t know if it’s mediocre yet; we don’t have anywhere near enough information to reach that conclusion. For all we know, it could be really good. It could cost $200 and beat a 6600 XT for all we know.

I’m hoping it’s better than what Nvidia and AMD make now. I don’t understand why anyone wouldn’t want that…

The key for me will be the FP32/FP64 ratio, as AMD has had a huge advantage over Nvidia in this for a long time, with the latest AMD releases being more and more similar to the Nvidia ratios.
 
I think Intel's biggest issue will be graphics drivers and optimising game-by-game performance. I think Arc could be good in 1-2 gens' time, but first-gen products always have teething issues.
 
You'd swear people want this to fail. Which is weird. Because even if Intel only shows up in the low-to-mid-end discrete segments, it will eventually become a drag on GPU pricing across the board. If they choose to undercut the incumbent competitors, it could trigger some of the biggest GPU price drops in years.

I couldn't care less either way. But I also consider how difficult bringing a new discrete GPU to market can be, and Intel's past performances. Playing in the enthusiast market space is totally different from the CPU iGPU one. Those users are much less demanding and don't stress the hardware nearly as much as someone playing an intense AAA game on a high- or mid-level GPU does. It's not that I think Intel can't do it, but like with AMD and Ryzen catching up to the Core i CPUs, it's not going to be easy or happen overnight IMHO.
 
Until Intel has these cards on the shelf of my local Micro Center, I couldn't care less.
 