Intel Arc A580 Review: A new $180 GPU

The way the author is computing averages is mathematically problematic. Let's say we're comparing two graphics cards in six games. The results in FPS for the first card are 400, 25, 25, 25, 25, 25 and the second card are 200, 50, 50, 50, 50, 50. The average for the first card is 87.5, much better than the average of 75 for the second card. Yet, it's clear that the second card is better on average.

Instead you should use relative performance. Set one card as the reference at 100% and scale everything to that card. In the example above, if we used the first card as the reference, the relative speeds for the second one would be 50%, 200%, 200%, 200%, 200%, 200%, and the average 175%.
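
To make the arithmetic concrete, here's a quick Python sketch using the same made-up FPS numbers (illustrative only, not benchmark data):

```python
# FPS results for six games (the made-up numbers from the example above)
card_a = [400, 25, 25, 25, 25, 25]
card_b = [200, 50, 50, 50, 50, 50]

# Plain arithmetic mean of FPS makes card A look better: 87.5 vs 75.0
avg_a = sum(card_a) / len(card_a)
avg_b = sum(card_b) / len(card_b)
print(avg_a, avg_b)  # 87.5 75.0

# Relative performance with card A as the 100% reference:
# card B scores 50%, 200%, 200%, 200%, 200%, 200%, averaging 175%
relative_b = [b / a for a, b in zip(card_a, card_b)]
avg_relative_b = sum(relative_b) / len(relative_b)
print(f"{avg_relative_b:.0%}")  # 175%
```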
 
TPU's review of the A580 was good. It shows the card pretty much walks all over Nvidia's 3050 and gives the RX 6600 a run for its money.

The only real glaring issue I see is the power consumption compared to the RX 6600. TPU shows the A580 idling at nearly 40 W. That's not good.

If Intel can rein in the high power draw, the card would be a much better buy.

If you can afford an extra $30-50, the RX 6600 may be the better option, but if you can't stretch your funds that far, the A580 stands to be a decent buy for someone looking to build an entry-level 1080p gaming PC.
 
The way the author is computing averages is mathematically problematic. Let's say we're comparing two graphics cards in six games. The results in FPS for the first card are 400, 25, 25, 25, 25, 25 and the second card are 200, 50, 50, 50, 50, 50. The average for the first card is 87.5, much better than the average of 75 for the second card. Yet, it's clear that the second card is better on average.

Instead you should use relative performance. Set one card as the reference at 100% and scale everything to that card. In the example above, if we used the first card as the reference, the relative speeds for the second one would be 50%, 200%, 200%, 200%, 200%, 200%, and the average 175%.
I made the same comment in the past. The response that I got was: the authors use the geometric mean instead of the arithmetic mean. I am not sold that it completely resolves the issue that you are pointing out, but it does minimize it. I don't know why they don't simply state they use the geometric mean in their discussion. It would be the right thing to do IMO.

arithmetic-mean-vs-geometric-mean
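
For what it's worth, the geometric mean does flip the ranking in the example above, and it doesn't depend on which card is chosen as the 100% reference. A quick sketch with the same illustrative numbers:

```python
import math

card_a = [400, 25, 25, 25, 25, 25]
card_b = [200, 50, 50, 50, 50, 50]

def geomean(values):
    # nth root of the product, computed via logs for numerical stability
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Geometric means: roughly 39.7 FPS for card A vs 63.0 FPS for card B,
# so card B comes out ahead, matching the relative-performance result.
print(geomean(card_a), geomean(card_b))

# The geometric mean of the per-game ratios equals the ratio of the
# geometric means, so the ranking does not depend on the reference card.
ratios = [b / a for a, b in zip(card_a, card_b)]
print(geomean(ratios), geomean(card_b) / geomean(card_a))  # both ~1.587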
 
"the A580 is generally slower while consuming significantly more power"
That's been Intel for many years now, nothing new.
Nevertheless:

- it's great to have a new player to create more competition and (eventually) lower prices
- Intel has many dark (or should I say blue?) agreements with the big hardware players, so it's easy to imagine these graphics cards showing up in plenty of OEM systems, which will make Intel GPUs very common
- in the future, Intel will probably target the low-end and mid-range market and stay there, with a special focus on AI

For us customers, it may lead to better prices and better iGPUs. Nevertheless, it's difficult to know where Intel is headed.
 
Why bother having a naming/numbering scheme when you can just regurgitate spaghetti-Os onto a napkin and write down whatever comes up?
 
Regardless of the A580's performance, these results show that Intel is moving in the right direction to become a contender in the Nvidia/AMD-dominated world.

I'll be eagerly awaiting the reviews of their later offerings in a few years. I doubt I'll ever be a buyer - but who knows? I'm enjoying watching these baby steps, and maybe they'll eventually become a sprinter...
 
Even more concerning is the A580 drawing 4 kW at idle! (Power consumption typos in the 6th chart from the end.)
 
Performance will improve as the drivers mature, and I expect the price will come down a bit as well. It looks like a competitive option for mainstream buyers, not to mention a solid reality check for AMD's and Nvidia's 'mainstream' offerings. I wouldn't say it's targeting the "spend a little more, get a little more" 1080p enthusiast crowd.
 
The way the author is computing averages is mathematically problematic. Let's say we're comparing two graphics cards in six games. The results in FPS for the first card are 400, 25, 25, 25, 25, 25 and the second card are 200, 50, 50, 50, 50, 50. The average for the first card is 87.5, much better than the average of 75 for the second card. Yet, it's clear that the second card is better on average.

Instead you should use relative performance. Set one card as the reference at 100% and scale everything to that card. In the example above, if we used the first card as the reference, the relative speeds for the second one would be 50%, 200%, 200%, 200%, 200%, 200%, and the average 175%.

You complain about using the mean because it is sensitive to outliers, and in order to show it, you create an example that is unrealistic and is an outlier itself.

I don't agree that you cannot use the standard mean, or that you need to use a relative calculation. If outliers bother you, just report TWO different means, one with outliers removed, and one with them included. If they are close enough to each other, then you can ignore the one with outliers removed.

That's what we mathematicians usually do: just remove the outliers. So your example would be

25, 25, 25, 25 versus
50, 50, 50, 50 (with the first and last result removed)
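
A minimal sketch of that "two means" idea with the same illustrative numbers; the trimming rule here (drop the single highest and lowest result from each card) is just one simple choice, which for these numbers matches removing the first and last result:

```python
# Report the plain mean next to a trimmed mean with the single highest
# and lowest result dropped.
card_a = [400, 25, 25, 25, 25, 25]
card_b = [200, 50, 50, 50, 50, 50]

def mean(values):
    return sum(values) / len(values)

def trimmed_mean(values):
    # Drop the highest and lowest result before averaging
    return mean(sorted(values)[1:-1])

# Card A: 87.5 with the outlier, 25.0 without; card B: 75.0 and 50.0.
# The two means disagreeing badly for card A is what flags the outlier.
for name, card in (("A", card_a), ("B", card_b)):
    print(name, mean(card), trimmed_mean(card))
```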
 
TPU's review of the A580 was good. It shows the card pretty much walks all over Nvidia's 3050 and gives the RX 6600 a run for its money.

The only real glaring issue I see is the power consumption compared to the RX 6600. TPU shows the A580 idling at nearly 40 W. That's not good.

If Intel can rein in the high power draw, the card would be a much better buy.

If you can afford an extra $30-50, the RX 6600 may be the better option, but if you can't stretch your funds that far, the A580 stands to be a decent buy for someone looking to build an entry-level 1080p gaming PC.

Your comment is actually a good illustration of why Intel is failing.

The RTX 3050 was the worst card Nvidia released last gen. Beating your competitor's worst product is not much of an achievement. If you want a real comparison, you should compare Intel's best against the RTX 4060.

The 4060 is 32 percent faster and, at $280 versus $180, about 56 percent more expensive. Case closed, right? No, because perf/dollar is misleading at the low end.

32 percent more expensive would be 1.32 * $180 = $238. So the Nvidia premium for a card that uses HALF the power and has modern features and better drivers is about $42.

Put it that way: if you are building a $1000 computer, is $42 worth it to get the same perf/dollar but an Nvidia card instead of Arc? Yes.

If you don't want to pay ANY premium at all, you just buy an RTX 3000-series card instead of a 4000-series card: an RTX 3060, 3060 Ti, or 3070 at current prices.
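
For anyone who wants to check the arithmetic a few paragraphs up, here's a small Python snippet that reproduces it (the 32 percent figure and the prices are taken from the comment above, not re-measured):

```python
# Reproducing the arithmetic above: what would "32 percent faster at the
# same perf/dollar" cost, and what premium does the RTX 4060 actually carry?
a580_price = 180
rtx_4060_price = 280
perf_advantage = 0.32  # RTX 4060 over the A580, per the comment above

same_perf_per_dollar_price = (1 + perf_advantage) * a580_price  # $237.60
premium = rtx_4060_price - same_perf_per_dollar_price           # about $42
print(round(same_perf_per_dollar_price, 2), round(premium, 2))
```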
 
IMO, the A580 is a good buy at $130 USD, nothing more. At that price it would solidly beat ALL cards in value, not just the RTX 3050 or 4060, but every RTX 3000 and Radeon 6000 card.

$180 is way too much.
 
Steven, will you update the results with new drivers? It seems the hardware can beat the RX 6600 and RTX 3050 with some ease, but we need Intel to commit to keeping guard over these products and their drivers. If it's a short stopgap with no support for the coming years, I'll pass, but if they commit and make it age like fine wine, I may consider getting in. The hardware has a nice, modern feature set and punches nicely, as shown by how it handles well-optimized games (Cyberpunk's Phantom Liberty surprised me, even taking the RT load really well), but their lack of care also shows in the performance in The Last of Us and Starfield (I'm pretty sure the hardware can do better there), among other AAA titles.
 
Would be interesting to see this revisited following the release of the fixed driver for Starfield.
Hopefully with every game they fix, future games will have fewer issues?

It has always seemed like really nasty driver architecture that Nvidia and AMD (and now Intel) have to add custom code to their drivers for specific games to make them run properly and get the best performance from the hardware. It goes against everything 30 years of coding and design taught you to work towards. Since they are so good with AI now, maybe add some to the driver itself? 'Ah, this game keeps trying to render in this way; internally use this code path to get optimum behaviour here.'
 
Quite curious to hear the writer saying that Radeon drivers are superior to Nvidia drivers now. If that's the case, then Radeon has come an awfully long way. Drivers were my only gripe with Radeon when I used it. They weren't unstable, I never had any crashes or anything, but new games often went unoptimized for weeks, sometimes months, back in the day (Radeon 5700 XT). I used to get weird bugs, black screens and things. When I bought my 3070 Ti, all these issues seemed to disappear.
 
Quite curious to hear the writer saying that Radeon drivers are superior to Nvidia drivers now. If that's the case, then Radeon has come an awfully long way. Drivers were my only gripe with Radeon when I used it. They weren't unstable, I never had any crashes or anything, but new games often went unoptimized for weeks, sometimes months, back in the day (Radeon 5700 XT). I used to get weird bugs, black screens and things. When I bought my 3070 Ti, all these issues seemed to disappear.
The 5700 XT was an odd duck. Some people had zero issues and others had a plethora of them. I think the 5700 XT itself was the problem, because those types of issues were few and far between on AMD's other cards from that generation.
 