AMD A8-7600 Review: 'Kaveri' APU Put to the Test

January 22, 2014, 11:16 PM

As the successor to last year's Richland APUs, Kaveri has been updated with new CPU cores based on AMD's Steamroller architecture. The Radeon R7 series GPU has also been integrated, though the 384-SPU version found on most Kaveri APUs isn't much different from the Radeon HD 8670D of the A10-6700 and A10-6800. Kaveri is AMD's fourth-generation APU, while Steamroller is third-generation CPU technology that is supposedly 10% faster per clock, per core than Piledriver.

AMD really is focused on gaming performance with Kaveri and believes this is where its latest APUs have a serious advantage over the competition. The company's latest processors are being pushed as budget solutions for modern 1080p gaming, though on paper the Radeon R7 doesn't look quite up to the task...

Read the complete review.




User Comments: 31

2 people like this | cliffordcooley, TechSpot Paladin, said:

Long story short, the conclusion:

The Core i3 is typically faster and it uses less power, so it's an obvious choice for non-gaming machines while AMD's A8-7600 is clearly the smarter pick if you want to enjoy 3D graphics without a discrete GPU.

wastedkill said:

I can see this APU going far if it's advertised as part of a sub-$300 budget gaming system, which in fact it is. APUs are gonna be the new consoles instead of actual consoles being developed xD

GeforcerFX said:

Good performance. The new full A10 chip is the one I really wanna see benchmarked; that chip packs a lot of graphics power into an overclockable CPU. The biggest thing I noticed in the benchmarks, at least, was that their single-core performance has gotten a lot better with Steamroller. But if AMD is going to kill off the FX series, or most of it at least (makes sense to me, just have 1 or 2 really high-end, really high-core-count chips for enthusiasts that work on FM2+), they are gonna need to get 6-8 core chips on APUs in the 95-watt range (for non-K variants). Otherwise, all they're ever going to compete with is the i3, and the i5 will continue its nice little dominance of the middle ground.

Puiu said:

I am extremely surprised by the power consumption numbers: 50% less under load and 80% less at idle compared to the previous generation. That's kinda insane. What did they do?

Guest said:

It's a bit weird to compare the old A10 against the new A8 and then call the A8 slower because the old A10 is faster. Aren't you supposed to compare the old and new Axx, not the old Axx and the new Ayy? Other than that, great review and thanks!

yukka, TechSpot Paladin, said:

I didn't read the whole review as I am at work, but it would be nice to see a comparison against some older discrete cards. How does this compare to a GTX 460? If I got one for the missus, would it put her at a disadvantage, and could she play Elder Scrolls Online on it?

Guest said:

hUMA will be really good for integrated graphics; there are decent speed boosts to be gained, and combined with Mantle it will push ahead of other integrated GPUs, so it's all good.

AnilD said:

So, where's the first Mantle game to put this APU to proper gaming use?

Guest said:

I just finished a new PC build with an A10-7850K. At first it was slower than expected. It turns out that the only display driver currently available is a half-gig AMD beta driver, which can be downloaded from the AMD website. With the driver installed, the machine is quite fast. Video is set to share 1GB of RAM (8GB installed, with Windows 8.1). Use the fastest RAM you can, of course, when you order your new gear.

Guest said:

AMD still has a ways to go in single-thread/single-core processing. These are still running at 4/5 the single-core speed of much lower-end i3 processors.

On the GPU side, they run circles around Intel. However, just to be competitive on the CPU side, they are having to throw in twice as many cores, which means most software is still going to suffer.

I am still crossing my fingers for AMD to be able to take on the i3/i5 processors and bring GPU performance that Intel can't currently touch.

Guest said:

With Mantle and TrueAudio I'm not particularly worried, and the HSA app benchmarks look pretty serious too. They might not be better for apps you get right this moment, but there's a lot to be said for it long term.

2 people like this | GhostRyder said:

I didn't read the whole review as I am at work, but it would be nice to see a comparison against some older discrete cards. How does this compare to a GTX 460? If I got one for the missus, would it put her at a disadvantage, and could she play Elder Scrolls Online on it?

In short, yes, the A8-7600 is not as powerful as a GTX 460 on the GPU side. I do not know how well it will run Elder Scrolls Online; however, I'll bet at a lower setting it would do just fine. At the top tier, the A10-7850K has performance similar to the HD 7750, minus a notch in clock speed and without GDDR5 RAM.

I guess the value of this chip is that it's pretty much similar to the A10-6800K from last gen, aside from a lower clock speed, but runs at a much lower voltage. That means it would be great in something really small, but it's definitely not going to be any G4M3R L33T chip.

Great review, as always.

Guest said:

With Mantle and TrueAudio I'm not particularly worried, and the HSA app benchmarks look pretty serious too. They might not be better for apps you get right this moment, but there's a lot to be said for it long term.

If you think a bit faster GPU access and offloading audio is going to compensate for a 20% computational difference, someone is misleading you.

If you look at gaming now, 90% of the titles are CPU bottlenecked up to 1080p, not GPU bottlenecked. Thus TrueAudio and Mantle will do nothing to help them.

As for threading out to the GPU (HSA), Windows 8 is already doing more of this than people realize.

(Even going back to Vista, OS-level software was using early DirectCompute on DX10 hardware; this has been extended in 7 and further in 8, where the OS takes older software code and shoves it through the GPU when it can.)

A lot of WinRT is using, or is set up to use, the GPU/CPU agnostically, and this may help AMD, but they are still going to need to meet Intel on per-core processing.

With Windows 9 (newer concepts brought over from XB1 development), more of this will happen, but by the time it and other software catch up to provide the necessary gains, Intel's GPU adaptations will be closer to what AMD is producing.

The other problem facing AMD is that NVidia is retooling for mobile and for mobile GPU additions to Intel devices. This could mean discrete NVidia GPUs in tablets running close to the power range of Intel HD GPUs.

When it comes down to base-level performance, single-thread speeds are crucial. Even if you have 64 cores, if executing a single thread on one core is 20% slower, that software is still going to run 20% faster on an i3/i5 CPU. Even when dealing with 4-8 threads, an i5-class processor is going to be 10-20% faster than the fastest 8-core AMD CPU.
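The core-count trade-off in that last paragraph can be sketched with back-of-the-envelope arithmetic. In the hypothetical sketch below, per-core speeds of 5 and 4 (arbitrary units) stand in for the 20% single-thread deficit; note that a core running at 80% speed actually takes 1/0.8 = 1.25x as long on a single-threaded task:

```python
# Hypothetical illustration of per-core speed vs. core count (arbitrary units,
# not measured figures): speed 5 stands for the faster core, speed 4 for a
# core that is 20% slower per clock.

def runtime(work_units, per_core_speed, usable_cores):
    """Time to finish a task that scales perfectly across `usable_cores`."""
    return work_units / (per_core_speed * usable_cores)

# Single-threaded software: only per-core speed matters; extra cores sit idle.
t_fast = runtime(100, per_core_speed=5, usable_cores=1)  # 20.0
t_slow = runtime(100, per_core_speed=4, usable_cores=1)  # 25.0
print(t_slow / t_fast)  # 1.25 -> the 20%-slower core takes 25% longer

# Well-threaded software: doubling the cores can more than offset slower cores.
t_fast_quad = runtime(100, per_core_speed=5, usable_cores=4)  # 5.0
t_slow_octo = runtime(100, per_core_speed=4, usable_cores=8)  # 3.125
print(t_slow_octo < t_fast_quad)  # True -> but only if the software scales
```

Most desktop software of this era does not scale past a few threads, which is why the single-threaded case dominates in practice.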

I'm not a fan of AMD or Intel on this subject. They have both done good and really disgusting things to the technology market.

Part of Intel's laziness in past years has been waiting out the ATI/AMD integration and 'just' staying ahead of them in core speeds. It is why their lower-end mobile 'Atom' technology stood still while Microsoft literally begged them to move it forward over 5 years ago.

AMD also let their CPU performance tank in favor of their GPU technologies. This has kept them a bit ahead of NVidia, but at the cost of remaining competitive with Intel. It shouldn't have needed to be a trade-off.

AMD had better SoC technology they got from Microsoft and started with better multi-core technology, and they have fallen behind Intel with both. There is also no reason AMD should have let up on advances in their APUs and let Intel get their HD 4xxx GPUs running as fast as AMD's integrated GPUs.

All these technologies you mention will help AMD, but they will also help Intel. I see people also mention hUMA as a savior, but they forget or missed that Microsoft created the software version of this technology back in 2005/2006, and it has been in Windows NT since Vista. (That is how the WDM/WDDM and full video subsystem work in Vista, and more specifically how the evolved version works in Windows 8 today. Notice that when AMD talks about hUMA, they are very careful to talk about it in 'hardware', because engineers will quickly point out that NT is already doing it in software.)

2 people like this | Steve, Staff, said:

I am extremely surprised by the power consumption numbers: 50% less under load and 80% less at idle compared to the previous generation. That's kinda insane. What did they do?

That wasn't the case in the clock-to-clock testing.

It's a bit weird to compare the old A10 against the new A8 and then call the A8 slower because the old A10 is faster. Aren't you supposed to compare the old and new Axx, not the old Axx and the new Ayy? Other than that, great review and thanks!

It's not weird at all; in fact, it makes perfect sense. On the CPU side of things, the A10 and A8 series are identical with the exception of clock speeds. What differs is the GPU spec, and the A10-6800K and A8-7600 feature very similar GPUs, so it made sense to compare them. As it turns out, the R7 and 8670D are very much alike, as we suspected.

If you think a bit faster GPU access and offloading audio is going to compensate for a 20% computational difference, someone is misleading you.

If you look at gaming now, 90% of the titles are CPU bottlenecked up to 1080p, not GPU bottlenecked. Thus TrueAudio and Mantle will do nothing to help them.

None of the games we tested were CPU bottlenecked even at 1280x800; installing a powerful discrete GPU significantly boosted gaming performance for low-end CPUs such as the A8-7600.

JC713 said:

I think if AMD went with the previous plan of putting GDDR5 right on the motherboard, Kaveri would be an absolute budget gaming beast. Too bad DDR3 pulls it back.

1 person liked this | Steve, Staff, said:

I think if AMD went with the previous plan of putting GDDR5 right on the motherboard, Kaveri would be an absolute budget gaming beast. Too bad DDR3 pulls it back.

Probably not; it would be a lot more expensive, so while it would be beastlier, it wouldn't be a budget beast.

The added cost would likely make it pointless; you might as well just buy a discrete graphics card.

1 person liked this | JC713 said:

Probably not; it would be a lot more expensive, so while it would be beastlier, it wouldn't be a budget beast.

The added cost would likely make it pointless; you might as well just buy a discrete graphics card.

True.

theBest11778 said:

I was really hoping the rumors of 896 GPU cores were true. Kaveri is good, but only a step ahead. AMD needs to jump generations to catch up. Unfortunately, die space was probably the main issue here, but that makes you wonder how they cram 1024 GPU cores into the PS4 APU...???

Puiu said:

True.

Think what this will mean for AMD with DDR4 coming soon.

1 person liked this | JC713 said:

Think what this will mean for AMD with DDR4 coming soon.

Definitely

1 person liked this | Guest said:

AFAIK, the "CPU bottleneck" is because of DX11's single-threaded dispatch approach to the render pipeline. With Mantle, there should be BIG differences in regard to this shortcoming!
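The dispatch difference this comment describes can be illustrated with a toy cost model (the per-call costs below are made-up numbers for illustration, not DX11 or Mantle measurements): under a single-threaded dispatch model every draw call pays its full CPU cost on one thread, while a Mantle-style command-buffer model lets worker threads record commands in parallel, leaving only a small serial submission cost.

```python
# Toy cost model of single-threaded vs. multi-threaded draw-call dispatch.
# All costs are hypothetical "units of CPU work", chosen only to illustrate
# why many draw calls bottleneck one core under a DX11-style model.

DRAW_CALLS = 10_000
COST_PER_CALL = 20   # driver/runtime work to validate and issue one call
SUBMIT_COST = 1      # residual serial work per call at final submission

def single_threaded_dispatch(calls):
    # One thread does all the work for every call, in sequence.
    return calls * COST_PER_CALL

def multi_threaded_dispatch(calls, cores):
    # Command recording spreads across cores; only submission stays serial.
    recording = calls * COST_PER_CALL // cores
    submission = calls * SUBMIT_COST
    return recording + submission

print(single_threaded_dispatch(DRAW_CALLS))    # 200000 units on one core
print(multi_threaded_dispatch(DRAW_CALLS, 4))  # 60000 units -> roughly 3.3x less
```

The serial submission term is why the gain is less than the 4x core count would suggest; the slower the per-core speed, the more that single-threaded dispatch cost hurts, which is exactly the APU's weak spot.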

1 person liked this | GhostRyder said:

I was really hoping the rumors of 896 GPU cores were true. Kaveri is good, but only a step ahead. AMD needs to jump generations to catch up. Unfortunately, die space was probably the main issue here, but that makes you wonder how they cram 1024 GPU cores into the PS4 APU...???

They can do it, but it's going to take some time before they are willing to make the necessary changes. Remember that the Jaguar APUs inside the XB1 and PS4 are on completely custom boards, so they have more freedom there than with something interchangeable.

I think if AMD went with the previous plan of putting GDDR5 right on the motherboard, Kaveri would be an absolute budget gaming beast. Too bad DDR3 pulls it back.

Yeah, I mean DDR4 will start to make things very nice for the APU later on; however, at this performance point I don't think it's really a necessity. I just built a friend a machine using an A10-7850K (he's on a tight budget) and some DDR3 2400 RAM, and it performs really quite well in gaming for the money spent. It's not a bad little machine; I put it on an ASRock M-ITX A88X board in a BitFenix Prodigy and it chugs along. I can't wait to see this APU in a laptop. Imagine a laptop with this much graphical power; it may not be high end, but man, for people on a $500-600 budget, getting an A10 like this would blow my mind for budget gaming on a laptop. Let's throw some DDR4 on that :P

WangDangDoodle said:

Will TechSpot do a review of the A10-7850K and dual-graphics mode? I'm wondering which discrete cards are compatible with--and how fast it can get in--dual-graphics mode.

I'm planning to build myself a Mini-ITX Steam machine. It doesn't need to be top shelf, but I want to be able to play games in 1080p with medium/high detail if possible. I'm not sure why you guys chose to review the lower end of the APU lineup, but I'm hoping there's a high-end review in the making.

1 person liked this | GhostRyder said:

Will TechSpot do a review of the A10-7850K and dual-graphics mode? I'm wondering which discrete cards are compatible with--and how fast it can get in--dual-graphics mode.

I'm planning to build myself a Mini-ITX Steam machine. It doesn't need to be top shelf, but I want to be able to play games in 1080p with medium/high detail if possible. I'm not sure why you guys chose to review the lower end of the APU lineup, but I'm hoping there's a high-end review in the making.

At the moment I have an A10-7850K M-ITX machine built for a friend (waiting on a replacement PSU), and from what I've seen it does a decent job, especially if you pair it with DDR3 2400 and overclock the GPU/CPU. The dual-graphics options, from what I've heard, are limited at the moment to the R7 240 and the R7 250, which are both sadly less powerful than the onboard GPU, but according to some leaked benchmarks and some posted by AMD, the performance can double (though a lot of this is just rumor and what AMD showcased, so who knows).

WangDangDoodle said:

Will TechSpot do a review of the A10-7850K and dual-graphics mode? I'm wondering which discrete cards are compatible with--and how fast it can get in--dual-graphics mode.

I'm planning to build myself a Mini-ITX Steam machine. It doesn't need to be top shelf, but I want to be able to play games in 1080p with medium/high detail if possible. I'm not sure why you guys chose to review the lower end of the APU lineup, but I'm hoping there's a high-end review in the making.

At the moment I have an A10-7850K M-ITX machine built for a friend (waiting on a replacement PSU), and from what I've seen it does a decent job, especially if you pair it with DDR3 2400 and overclock the GPU/CPU. The dual-graphics options, from what I've heard, are limited at the moment to the R7 240 and the R7 250, which are both sadly less powerful than the onboard GPU, but according to some leaked benchmarks and some posted by AMD, the performance can double (though a lot of this is just rumor and what AMD showcased, so who knows).

In most tests/reviews I've read, there's almost no difference between using DDR3 2400 and DDR3 1600 for gaming. Does this APU benefit more from higher memory clocks than a regular CPU + discrete GPU combo does? At first glance, it seems a bit overkill to use DDR3 2400 on what I'd consider a budget-friendly system, but I have no hands-on experience with APUs yet. Thanks for the info on dual-graphics compatibility, btw.

Steve, Staff, said:

In most tests/reviews I've read, there's almost no difference between using DDR3 2400 and DDR3 1600 for gaming. Does this APU benefit more from higher memory clocks than a regular CPU + discrete GPU combo does? At first glance, it seems a bit overkill to use DDR3 2400 on what I'd consider a budget-friendly system, but I have no hands-on experience with APUs yet. Thanks for the info on dual-graphics compatibility, btw.

Did you look at the A10-6800K results in this review? We tested it with both 1600MHz and 2400MHz memory.

WangDangDoodle said:

In most tests/reviews I've read, there's almost no difference between using DDR3 2400 and DDR3 1600 for gaming. Does this APU benefit more from higher memory clocks than a regular CPU + discrete GPU combo does? At first glance, it seems a bit overkill to use DDR3 2400 on what I'd consider a budget-friendly system, but I have no hands-on experience with APUs yet. Thanks for the info on dual-graphics compatibility, btw.

Did you look at the A10-6800K results in this review? We tested it with both 1600MHz and 2400MHz memory.

I actually only checked the game graphs on my first read, and you didn't test 1600MHz memory with games. But now I've checked some of the other graphs, and it seems the 1600MHz memory beats 2133MHz in some of them. I'm still curious to see the impact on games.

Steve, Staff, said:

I actually only checked the game graphs on my first read, and you didn't test 1600MHz memory with games. But now I've checked some of the other graphs, and it seems the 1600MHz memory beats 2133MHz in some of them. I'm still curious to see the impact on games.

The A10-6800K was indeed tested with 1600MHz and 2400MHz memory in the gaming benchmarks.

WangDangDoodle said:

The A10-6800K was indeed tested with 1600MHz and 2400MHz memory in the gaming benchmarks.

Ah, I see now; you were talking about the 6800K. My confusion was that I was only looking at the Kaveri results and ignoring the previous-generation chips.

1 person liked this | GhostRyder said:

Ah, I see now; you were talking about the 6800K. My confusion was that I was only looking at the Kaveri results and ignoring the previous-generation chips.

It does make a difference to have higher-speed RAM with an APU. One of my friends bought an A10-6800K with a Gigabyte A88X micro board, and for the first month, until he got another paycheck, had 8GB of DDR3 1333 in it (memory he had sitting around from an old machine). He used it to play League with me and a few friends, and it gave a decent amount of performance, though he could only run it on high with shadows on low to stay around 60FPS. He recently grabbed some DDR3 2133 (8GB), and now he can play on ultra at 1080p at 60FPS with shadows on medium (a few drops into the 50s in big team fights). There is some improvement to be had with higher-speed RAM when using the iGPU on an APU; it does make a difference at times (albeit depending on what RAM you're comparing it to), especially once you start going from something like 1600x900 to 1920x1080.

Without the dedicated 1GB of GDDR5 RAM like on the HD 7750 (which the 7850K most closely resembles in terms of performance), you get a loss in performance. So the faster the memory it has access to, the closer the performance will be to that 7750 (comparing clock-to-clock results). DDR3 2400MHz is not the GDDR5 the discrete GPU would get, but it's closer than DDR3 1600, which does make a noted difference in gaming tests. It's not like "ZOMG NIGHT AND DAY BABY", but if you want the best performance from it, that's how you're going to do it.
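The bandwidth gap being described here can be put in rough numbers. The sketch below uses peak theoretical figures (treat them as ballpark, not benchmarks): DDR3 at 8 bytes per channel in dual-channel mode, and the stock HD 7750's 128-bit GDDR5 at an effective 4.5 GT/s.

```python
# Peak theoretical memory bandwidth, in GB/s. An APU's iGPU shares the DDR3
# bus with the CPU, while a discrete HD 7750 gets its GDDR5 to itself.

def ddr3_bandwidth_gbs(mt_per_s, channels=2, channel_bytes=8):
    """Dual-channel DDR3: transfers/s x 8-byte channel width x channel count."""
    return mt_per_s * 1e6 * channel_bytes * channels / 1e9

def gddr5_bandwidth_gbs(gt_per_s, bus_bits):
    """GDDR5: effective transfer rate x bus width in bytes."""
    return gt_per_s * 1e9 * (bus_bits / 8) / 1e9

print(ddr3_bandwidth_gbs(1600))       # 25.6 GB/s, shared with the CPU
print(ddr3_bandwidth_gbs(2400))       # 38.4 GB/s -> why fast RAM helps an APU
print(gddr5_bandwidth_gbs(4.5, 128))  # 72.0 GB/s, dedicated to the HD 7750
```

Even DDR3 2400 closes only about half the gap to the 7750's dedicated memory, which matches the "closer, but not night and day" experience described above.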

WangDangDoodle said:

It does make a difference to have higher-speed RAM with an APU. One of my friends bought an A10-6800K with a Gigabyte A88X micro board, and for the first month, until he got another paycheck, had 8GB of DDR3 1333 in it (memory he had sitting around from an old machine). He used it to play League with me and a few friends, and it gave a decent amount of performance, though he could only run it on high with shadows on low to stay around 60FPS. He recently grabbed some DDR3 2133 (8GB), and now he can play on ultra at 1080p at 60FPS with shadows on medium (a few drops into the 50s in big team fights). There is some improvement to be had with higher-speed RAM when using the iGPU on an APU; it does make a difference at times (albeit depending on what RAM you're comparing it to), especially once you start going from something like 1600x900 to 1920x1080.

Without the dedicated 1GB of GDDR5 RAM like on the HD 7750 (which the 7850K most closely resembles in terms of performance), you get a loss in performance. So the faster the memory it has access to, the closer the performance will be to that 7750 (comparing clock-to-clock results). DDR3 2400MHz is not the GDDR5 the discrete GPU would get, but it's closer than DDR3 1600, which does make a noted difference in gaming tests. It's not like "ZOMG NIGHT AND DAY BABY", but if you want the best performance from it, that's how you're going to do it.

That's good info, thanks!
