The Arc A580 is Intel's latest budget graphics card. Essentially, you get all modern features such as AV1 encoding, XeSS support, and ray tracing, at a more affordable price point, and that's cool.
I made the same comment in the past. The response I got was that the authors use the geometric mean instead of the arithmetic mean. I'm not sold that it completely resolves the issue you're pointing out, but it does minimize it. I don't know why they don't simply state that they use the geometric mean in their discussion. It would be the right thing to do, IMO.
The way the author is computing averages is mathematically problematic. Let's say we're comparing two graphics cards in six games. The results in FPS for the first card are 400, 25, 25, 25, 25, 25 and the second card are 200, 50, 50, 50, 50, 50. The average for the first card is 87.5, much better than the average of 75 for the second card. Yet, it's clear that the second card is better on average.
Instead you should use relative performance. Set one card to be the reference 100% and scale everything to that card. In the example above, if we used the first card as the reference, the relative speeds for the second card would be 50%, 200%, 200%, 200%, 200%, 200%, and the average 175%.
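To make the comparison concrete, here's a small sketch using the six-game FPS numbers from the example above. It shows the arithmetic mean of raw FPS ranking the first card higher, while both the geometric mean and the relative-performance average rank the second card higher:

```python
from statistics import fmean, geometric_mean

card1 = [400, 25, 25, 25, 25, 25]
card2 = [200, 50, 50, 50, 50, 50]

# Arithmetic mean of raw FPS: card1 "wins" despite losing 5 of 6 games.
print(fmean(card1))  # 87.5
print(fmean(card2))  # 75.0

# Geometric mean ranks card2 ahead, as expected.
print(round(geometric_mean(card1), 1))  # 39.7
print(round(geometric_mean(card2), 1))  # 63.0

# Relative performance: card2 scaled against card1 as the 100% baseline.
rel = [b / a * 100 for a, b in zip(card1, card2)]
print(rel)         # [50.0, 200.0, 200.0, 200.0, 200.0, 200.0]
print(fmean(rel))  # 175.0
```

One outlier result (the 400 FPS game) dominates the arithmetic mean, which is exactly why review sites tend to use the geometric mean or normalized relative performance.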
TPU's review of the A580 was good. It shows the card pretty much walks all over Nvidia's 3050 and gives the RX 6600 a run for its money.
The only real glaring issue I see is power consumption compared to the RX 6600. TPU shows the A580 idling at nearly 40W. That's not good.
If Intel can rein in the high power draw, the card would be a much better buy.
If you can afford an extra $30-50, the RX 6600 may be the better option, but if you're unable to stretch your funds that far, the A580 stands to be a decent buy for someone looking to build an entry-level 1080p gaming PC.
Quite curious to hear the writer saying that Radeon drivers are superior to Nvidia drivers now. If that's the case, then Radeon has come an awfully long way. The drivers were my only gripe with Radeon when I used it. They weren't unstable, and I never had any crashes or anything, but new games often went unoptimized for weeks, sometimes months, back in the day (Radeon 5700XT). I used to get weird bugs, black screens, and the like. When I bought my 3070ti, all these issues seemed to disappear.

The 5700XT was an odd duck. Some people had zero issues and others had a plethora of them. I think the 5700XT itself was the problem, because those types of issues were few and far between on the other model cards from AMD that generation.