Intel's Arc A770 graphics card goes on sale October 12, starting at $329

AlphaX

Something to look forward to: After recently taking the plunge into the discrete GPU market with the Arc A380, Intel has decided to try its hand at the mid-range segment. During the Intel Innovation keynote, CEO Pat Gelsinger officially announced the upcoming Arc A770's MSRP and release date.

Intel took the opportunity to announce its upcoming products at its Innovation keynote on Tuesday. Alongside the Raptor Lake processors, Gelsinger also revealed Intel's flagship Arc A-Series graphics card, the Intel Arc A770.

The A770 follows up on Intel's Arc A380, which consumers and reviewers panned on release. Driver issues and poor performance in many AAA games plagued the graphics card's launch. Thankfully, Intel's Arc A770 packs a much bigger punch, battling alongside GPUs like Nvidia's RTX 3060 and AMD's RX 6600 XT.

Intel has previously claimed that the Arc A770's raster performance sits slightly ahead of the RTX 3060's, and its specs show why. The A770 comes equipped with 4,096 shading units, 32 Xe cores, 512 Intel XMX engines, and a 256-bit memory bus supporting up to 560 GB/s of bandwidth. That is a significant improvement over the A380, a card that sometimes struggled to outperform the RX 6400.
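For context, that 560 GB/s figure lines up with the 256-bit bus if the card uses GDDR6 at an effective 17.5 Gbps per pin. The memory speed is an assumption on our part rather than something stated here, but the arithmetic is a simple sanity check:

```python
# Peak memory bandwidth = bus width (bits) / 8 x effective data rate per pin (Gbps).
# The 17.5 Gbps GDDR6 speed is an assumption chosen to match the quoted 560 GB/s.
bus_width_bits = 256
data_rate_gbps = 17.5  # assumed effective GDDR6 speed per pin

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"Peak bandwidth: {bandwidth_gbs:.0f} GB/s")  # Peak bandwidth: 560 GB/s
```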

Gelsinger also claims the A770 can perform up to 65% better than the competition in ray tracing. Unfortunately, he didn't cite specific graphics cards or benchmarks for this claim. Intel has previously released videos comparing the A770's ray-tracing performance to the RTX 3060's, so it is fair to assume that this is the "competition" Gelsinger means.

Gelsinger confirmed that Arc A770 samples are on their way to reviewers and other companies, which supports recent rumors that review embargoes will lift in the coming days. He closed the presentation with the official MSRP and release date.

The Arc A770 launches on October 12, starting at $329. It is worth noting that the A770 comes in two models, with either 8 GB or 16 GB of VRAM; expect a minor price bump for the 16 GB version. Thankfully, Intel has not pulled any trickery by selling two very different cards under the same name: buyers who opt for the 8 GB model will not lose any significant performance, with the only downgrade being a slightly lower memory speed.
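Using the same bus-width arithmetic as above, here is a rough sketch of what that memory-speed gap might mean for bandwidth, assuming the commonly reported 16 Gbps GDDR6 on the 8 GB card versus 17.5 Gbps on the 16 GB card; those per-model speeds are our assumption, not figures from the announcement:

```python
# Hypothetical bandwidth comparison for the two A770 models on the shared 256-bit bus.
# The per-model memory speeds are assumptions, not official figures from the keynote.
bus_width_bits = 256
assumed_speeds_gbps = {"A770 8 GB": 16.0, "A770 16 GB": 17.5}

for model, gbps in assumed_speeds_gbps.items():
    print(f"{model}: {bus_width_bits / 8 * gbps:.0f} GB/s")
# A770 8 GB: 512 GB/s
# A770 16 GB: 560 GB/s
```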


 
The 16 GB variant could be a winner for 6K-8K video editors and other indie producers at this price, as long as solid support can be established in software such as Premiere Pro, Resolve, etc.
 
This time it seems like Intel is doing much better by targeting the entry-level GPU market. Nice choice. Hopefully Intel can surprise us as a good new competitor that makes NVIDIA and AMD slow down on raising their prices.
 
The odds are against them going the distance, but at the very least it will be nice if Intel can shake things up for the moment. That said, everyone should want them to win, because three big players in the discrete graphics market will surely make things more interesting for the consumer.
 
Looking forward to the launch! Ever since I've been interested in PCs, the GPU wars have only been fought between NVIDIA and AMD. Fingers crossed that Intel can shake things up.
 
Did they manage to patch the drivers and software? It looks that way if they've set a price and launch date.
Same here, waiting for reviews and maybe some price cuts on entry/mid-range cards.
The card design is better than the four-slot behemoths these days.
 
My biggest questions (and perhaps they've already been answered) are related to drivers and continued support. Is Intel good at updates for its non-GPU products? If so, should we expect them to be good with GPU drivers too? Should we anticipate day-one drivers for the big AAA titles, like Nvidia and AMD provide?

Don't get me wrong, I'm excited to see how these perform, just a lot of unknowns beyond performance imo.
 
That price looks very competitive. Seems good against the 3060 Ti, 3070, or even the 6600 XT? Or am I wrong?
Lol, this is Intel we are talking about. If the price is competitive, it's because the performance is not. That doesn't make these GPUs a bad choice, but don't get fooled into thinking they are a no-brainer purchase.

The day an Intel top-tier GPU unambiguously beats a Nvidia or AMD top-tier GPU, expect prices to exceed the sky-high performance.
 
So, on paper, the A770 is better than the GeForce RTX 3060 Ti -- a card that shares the same MSRP. FP32 throughput is about the same, but the A770 has far better texture and pixel rates, four times the L2 cache, and more memory bandwidth. It even has better theoretical stats than the Radeon RX 6800 (though paper figures can't account for the massive benefit Infinity Cache brings), a card with an MSRP $160 higher.

But we all know how the A380 turned out, and that should have categorically trounced the RX 6400...
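As a rough illustration of the "FP32 is about the same" point, here is a back-of-the-envelope comparison. The boost clocks are assumed figures used purely for illustration, not specs quoted in this thread:

```python
# Theoretical FP32 throughput = shader count x 2 FMA ops per clock x clock speed (GHz).
# Clock speeds below are assumed boost figures, used only for illustration.
cards = {
    "Arc A770":    {"shaders": 4096, "clock_ghz": 2.100},
    "RTX 3060 Ti": {"shaders": 4864, "clock_ghz": 1.665},
}

for name, spec in cards.items():
    tflops = spec["shaders"] * 2 * spec["clock_ghz"] / 1000
    print(f"{name}: ~{tflops:.1f} TFLOPS FP32")
# Arc A770: ~17.2 TFLOPS FP32
# RTX 3060 Ti: ~16.2 TFLOPS FP32
```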
 
So, on paper, the A770 is better than the GeForce RTX 3060 Ti -- a card that shares the same MSRP. FP32 throughput is about the same, but the A770 has far better texture and pixel rates, four times the L2 cache, and more memory bandwidth. It even has better theoretical stats than the Radeon RX 6800 (though paper figures can't account for the massive benefit Infinity Cache brings), a card with an MSRP $160 higher.

But we all know how the A380 turned out, and that should have categorically trounced the RX 6400...
As Steve Ballmer once said in an alternate reality where Microsoft entered the GPU market, "Drivers, Drivers, Drivers!"
 
Regardless of the fact that the A770 will not make a big market splash and will likely be frustrating for early adopters due to early drivers, I am glad to see Intel finally getting these cards to market. Also, XeSS seems to be a true competitor to DLSS 2 and apparently doesn't need tensor cores to function. The reviews have been very positive, rating it only slightly inferior to DLSS 2, mostly because of worse ghosting artifacts. But even DLSS 2 has had several updates to clean that up a bit. Nvidia's announcement of DLSS 3 helps them justify the tensor cores a little longer, I guess, but XeSS proves that DLSS 2, or at least something very similar, can be done on all modern GPUs without the need for tensor cores.
 
"Gelsinger also claims the A770 can perform up to 65% better than the competition in ray-tracing performance. Unfortunately, he didn't cite specific graphics cards and benchmarks for this claim."

That's because he's full of it. The hardest thing in the world to prove is a lie.
 
Regardless of the fact that the A770 will not make a big market splash and will likely be frustrating for early adopters due to early drivers, I am glad to see Intel finally getting these cards to market. Also, XeSS seems to be a true competitor to DLSS 2 and apparently doesn't need tensor cores to function. The reviews have been very positive, rating it only slightly inferior to DLSS 2, mostly because of worse ghosting artifacts. But even DLSS 2 has had several updates to clean that up a bit. Nvidia's announcement of DLSS 3 helps them justify the tensor cores a little longer, I guess, but XeSS proves that DLSS 2, or at least something very similar, can be done on all modern GPUs without the need for tensor cores.
I'm pretty certain that FidelityFX FSR proved that before XeSS.
 
I don't need a video card that dims the lights if I turn the computer on. I just need a stable video card to do photo editing in Photoshop. That's the max I run on my computer. Gave up gaming around 2005-2010.
 
Also, XeSS seems to be a true competitor to DLSS 2 and apparently doesn't need tensor cores to function.
XeSS doesn’t require tensor cores but will make use of hardware that accelerates matrix operations, if it’s present. Both Intel and Nvidia have dedicated units for this (XMX engines and tensor cores, respectively).
 
XeSS doesn’t require tensor cores but will make use of hardware that accelerates matrix operations, if it’s present. Both Intel and Nvidia have dedicated units for this (XMX engines and tensor cores, respectively).
Yeah, it sounds like when reviewed on cards without tensor cores, it presents lower quality and a much lower uplift. So, in that regard, it is much more similar to DLSS than it is to FSR.
 
I'm pretty certain that FidelityFX FSR proved that before XeSS.
XeSS is much closer to DLSS than FSR. That being said, cards without tensor cores are apparently not seeing anywhere close to the same performance uplifts, so tensor cores do seem to be necessary for DLSS/XeSS, at least for worthwhile FPS uplifts at quality settings.
 
XeSS and DLSS are very similar, as both use a DNN to calculate the pixel values. FSR uses no deep learning mechanism at all - it's Lanczos resampling, across three buffers, plus some after-tweaks.

All three can be carried out using standard shader cores, but given the nature of the calculations involved with DNNs, dedicated matrix multiplication units will always provide a much better performance uplift.

Each of these methods adds extra time to the whole frame processing pipeline, so if it's taking forever to do this, then there's little point in reducing frame resolution just to lose the gains to the upscaling algorithm.
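To make that frame-time trade-off concrete, here is a toy calculation; every timing in it is invented for illustration and is not a measurement of any of these cards or upscalers:

```python
# Toy frame-time budget: upscaling only pays off if the time saved by rendering at a
# lower internal resolution exceeds the cost of the upscaling pass itself.
# All timings below are invented for illustration.
native_4k_ms = 25.0        # hypothetical cost of rendering a frame natively at 4K
internal_1440p_ms = 12.0   # hypothetical cost of rendering the same frame at 1440p

for upscale_cost_ms in (2.0, 6.0, 12.0):
    total_ms = internal_1440p_ms + upscale_cost_ms
    saving_ms = native_4k_ms - total_ms
    verdict = "saves" if saving_ms > 0 else "loses"
    print(f"Upscale pass {upscale_cost_ms:4.1f} ms -> frame {total_ms:4.1f} ms "
          f"({verdict} {abs(saving_ms):.1f} ms vs native 4K)")
```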
 
XeSS and DLSS are very similar, as both use a DNN to calculate the pixel values. FSR uses no deep learning mechanism at all - it's Lanczos resampling, across three buffers, plus some after-tweaks.

All three can be carried out using standard shader cores, but given the nature of the calculations involved with DNNs, dedicated matrix multiplication units will always provide a much better performance uplift.

Each of these methods adds extra time to the whole frame processing pipeline, so if it's taking forever to do this, then there's little point in reducing frame resolution just to lose the gains to the upscaling algorithm.
Still, what Intel has done is provide an open-source solution that Nvidia and Intel cards can take advantage of, and perhaps even future AMD cards, as there are rumors that tensor/matrix cores are making it into RDNA3. Obviously, it is on Nvidia whether or not to open up DLSS to other cards with tensor cores. I hope they do, but performance may vary depending on the differences between the cores. If tensor cores do come to RDNA3, though, I would imagine AMD will have its own solution (FSR 3.0?). Either way, at this point it looks like tensor/matrix cores are part of GPUs for the foreseeable future; Nvidia has proven the value of the technology. That being said, I stand corrected on my original statement about tensor cores: they are necessary for DLSS/XeSS to actually be worthwhile. The initial XeSS reviews I watched did not really make that clear, but some subsequent testing with non-tensor cards shows that tensor cores are required for meaningful uplifts at quality/balanced settings. The performance setting did provide a meaningful uplift, but FSR 2 remains a better option for cards without tensor cores.
 