Detailed Xbox 720 GPU specifications leak online

Shawn Knight


GPU specifications for Microsoft’s upcoming Xbox 720 are now online courtesy of tech site VGleaks, the same publication that revealed the console’s CPU specs a couple of weeks ago. If accurate, we now have a complete picture of what Microsoft’s next-generation gaming console will look like when it debuts in a few months.

In addition to an 800MHz GPU clock, the publication lists a wealth of other details about the graphics subsystem. Rather than type out every single specification here, we have included the full chart below.

[Chart: Xbox 720 GPU specifications]

We are told that the GPU has 32MB of fast embedded SRAM (eSRAM) which will be free of many of the restrictions that accompanied the eDRAM on the Xbox 360. The subsystem is capable of rendering to surfaces in main RAM, texturing from eSRAM and reading back from render targets without performing a resolve.

[Diagram: Xbox 720 GPU memory subsystem]

Furthermore, the Xbox 720 is expected to include a two-stage caching system. This consists of four 128KB L2 caches that generally act as write-back caches. Each shader core is paired with its own 16KB L1 cache that typically acts as a write-through cache. VGleaks also claims the GPU can support 2x, 4x and 8x MSAA.
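For readers unfamiliar with the terminology, here is a minimal illustrative sketch of the difference between write-through and write-back behavior. It does not model the actual Xbox 720 hardware; it only shows what the two terms mean.

```python
# Illustrative only: contrasts write-through and write-back cache behavior.
# This does not model the rumored Xbox 720 cache hardware.

class WriteThroughCache:
    """Every store updates both the cache line and backing memory."""
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value         # write goes straight through to memory


class WriteBackCache:
    """Stores only dirty the cache line; memory is updated on eviction."""
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}                   # addr -> (value, dirty)

    def write(self, addr, value):
        self.lines[addr] = (value, True)  # mark dirty, no memory traffic yet

    def evict(self, addr):
        value, dirty = self.lines.pop(addr)
        if dirty:
            self.memory[addr] = value     # write back only when the line is evicted


memory = {}
l1 = WriteThroughCache(memory)  # analogous to the rumored per-core 16KB L1s
l2 = WriteBackCache(memory)     # analogous to the rumored 128KB L2 slices
```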

We still anticipate Microsoft will debut the new console during the E3 expo in June. If you recall, Sony will be hosting a PlayStation media event on February 20 where we expect to see them announce the PlayStation 4.


 
So for those of us with a limited technical understanding of GPUs... is this good?
 
No, it doesn't look too impressive in raw numbers, but you can optimise a lot better for a console. Still, I would have loved to see much better raw numbers considering what PC GPUs are like.
 
I dunno, I wasn't really expecting much. To be quite honest, I don't see how they can make anything more powerful than a mid-range laptop GPU. There's just too much heat to be dispersed.
 
32MB of video memory? Even with gimmicks, it's still bad. My GTX 560 has 2GB of video memory and you can get it for $200.
 
You know, I wouldn't expect i7-3930K / GTX 690 type numbers, but these figures don't even add up to a low/mid-range system from two years ago. And yeah, I want to see this configuration run 8x MSAA. Fat chance without grinding the game down to about 15 fps.
 
32MB of video memory? Even with gimmicks, it's still bad. My GTX 560 has 2GB of video memory and you can get it for $200.
They are talking about memory built onto the GPU itself, not the entire card. If you look at the diagram you'll see there is a lane leaving the GPU that separates. When it separates you'll see there is one lane leading to RAM and the other for video decoding.
 
Honestly... this is kinda disappointing. The texture fill rate of this is 38.4 Gtexels/s; for reference, the GeForce GTX 650 Ti has a 59.3 Gtexel/s fill rate. Also, the GTX 650 Ti has an extra 200 GFLOPS of processing power. Essentially, the Xbox 360 successor is already outdone by Nvidia's lowest-end GTX GPU (the GTX 650 doesn't count).
 
This is way too underwhelming to be real. I bet this was a spec that was being considered by Microsoft; I guess we'll never know until the official announcement.

If these are real though, they could have upped the spec a bit more than that! I mean, jeez, I know they have power and heat limitations, but a laptop cooling solution can easily deal with that. I'm sure they could have gone for something a little higher...
 
If we are to believe the rumors, then the PS4 will have 1.8 TFLOPS while the Xbox will have 1.2 TFLOPS. The difference is pretty big.
But my opinion is that these consoles will be powerful enough to run games in 3D at true 1080p with some anti-aliasing. Some less demanding games may reach 4K.
Let's not forget that they now support more DX11 features, so we can see more focus on it and we'll finally start moving away from DX9.

For comparison, the Xbox 360 GPU has 240 GFLOPS, which makes the new console about 5 times more powerful without taking into account other improvements and optimizations. (The PS3 had even less, with 172 GFLOPS.)
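As a rough back-of-the-envelope check of those ratios, using the rumored ~1.2 TFLOPS figure (all of these are theoretical peaks, not measured game performance):

```python
# Rough ratio check using rumored/commonly cited theoretical peak figures only.
xbox_720_gflops = 1228.8   # rumored: 768 SPs x 2 ops x 0.8 GHz
xbox_360_gflops = 240.0    # Xenos
ps3_gflops      = 172.0    # RSX, figure cited above

print(round(xbox_720_gflops / xbox_360_gflops, 1))  # ~5.1x the Xbox 360
print(round(xbox_720_gflops / ps3_gflops, 1))       # ~7.1x the PS3
```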
 
And exactly which DX11 features will it be able to run on top of 1080p? There's no power left to run tessellation etc.
 
I don't think this information is accurate. Just my .02.
I see the new PS4 getting an Nvidia GPU like the PS3, only stronger obviously, with 2GB VRAM, and similar results for the new Xbox... I see the new 720 getting an AMD GPU like the 360 as well, with beefed-up power and AA/AF/MSAA support.
 
My memory is a bit fuzzy since it was so long ago but isn't 1.2TFLOPS roughly equivalent to a 9800GTX?
 
Holy cow, for a website that targets PC enthusiasts and writes such detailed, in-depth write-ups on GPUs, it appears most of you just skip over how GPU architectures work and only look at the FPS graphs.

"hahahanoobs said:
What the...? Doesn't Tegra 4 have more shader cores than this?"

No, the "SC" in this case is interchangeably used with the term Compute Unit. HD7000 is made up of CUs (or in this case they labelled it as a Shader Core Unit), each comprised of 64 Stream Processors. 12 CUs (or "SC") equates to 12x 64 = 768 Stream Processors. Clocked at 800mhz, that gives a theoretical floating point value of 1.23 Tflops. In comparison, HD7770 is made up of 10 CUs, or 640 SPs, clocked at 1Ghz, for 1.28 Tflops of floating point performance.

"St1ckM4n said: I dunno, I wasn't really expecting much. To be quite honest, I don't see how they can make anything more powerful than a mid-range laptop GPU. There's just too much heat to be dispersed."

They could. Wimbledon XT (HD7970M) is an 850MHz downclocked HD7870 with a 100W TDP.

http://www.notebookcheck.net/AMD-Radeon-HD-7970M.72675.0.html

"Fat" launch PS3 used roughly 200-240W of power in games (http://www.gamespot.com/features/green-gaming-playstation-3-6303944/)

That leaves plenty of room for a 65-95W TDP CPU, whether a quad-core A10-5700/6700 APU, or an FX8300 95W part, with reduced clocks (or a Jaguar 8-core if MS is cheap).

" Littleczr said: 32 MB of video Memory? Even with gimmicks, it's still bad. My gtx 560 has 2gb of video memory and you can get it for $200 dollars."

32MB of eSRAM has nothing to do with the overall video memory of a GPU. The comparison to your GTX 560 is meaningless. This eSRAM is used for color, alpha blending, Z/stencil buffering, and can also be used to reduce the performance hit of anti-aliasing. The Xbox 360's Xenos GPU used 10MB of eDRAM, which is completely separate from the 512MB of GDDR3 the GPU shared with the system. The Xbox 720 GPU is rumored to be using 8GB of shared DDR3 system memory as its actual VRAM.

http://en.wikipedia.org/wiki/EDRAM
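To put 32MB in perspective, here is a rough, simplified estimate of what an uncompressed 1080p render target costs; a real GPU's layout, compression and tiling will differ, so treat the numbers as illustrative only:

```python
# Rough render-target sizing (simplified; ignores compression, tiling, etc.)
WIDTH, HEIGHT = 1920, 1080
BYTES_COLOR   = 4            # 32-bit RGBA
BYTES_DEPTH   = 4            # 32-bit depth/stencil

def framebuffer_mb(msaa_samples):
    pixels = WIDTH * HEIGHT
    return pixels * (BYTES_COLOR + BYTES_DEPTH) * msaa_samples / (1024 ** 2)

print(round(framebuffer_mb(1), 1))  # ~15.8 MB -> fits comfortably in 32MB of eSRAM
print(round(framebuffer_mb(2), 1))  # ~31.6 MB -> only just fits
print(round(framebuffer_mb(4), 1))  # ~63.3 MB -> would need tiling or spilling to main RAM
```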

"TomSEA: You know, I wouldn't expect I-7 3930k / GTX 690 type numbers, but these figures don't even add up to a low/mid-range system from 2 years ago. "

Sure they do. 68 GB/sec of memory bandwidth and a 12 Compute Unit (768 SP) HD7000 GPU clocked at 800MHz is nearly a sister part to the 72 GB/sec, 10 CU (640 SP) HD7770 clocked at 1GHz. That's a modern low-end GPU, not one from two years ago.

http://www.gpureview.com/show_cards.php?card1=675&card2=

"TheinsanegamerN said: Honestly....this is kinda disappointing. texture fill rate of this is 38.4 Gtexles. for reference, the geforce gtx 650 ti has a 59.3 Gtexel fill rate. also, the gtx 650 ti has an extra 200 GFlops of processing power. essentially, the xbox 360 sucessor is already outdone by nvidia's lowest end GTX gpu"

You cannot directly compare texture fill rate (or anything theoretical, really) between AMD's and NV's GPUs. In fact, you can't even compare these metrics between AMD and AMD or NV and NV unless you are discussing the exact same architecture. For example, you cannot compare the theoretical pixel fill rate of the HD6970 to that of the HD7970. Both have 32 ROPs and the HD7970 only has a 5% GPU clock advantage, yet it shows nearly a 50% pixel fill rate advantage in the real world.

http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/26
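On paper the gap between those two cards is tiny; the quick calculation below (theoretical pixel fill rate = ROPs x core clock, assuming the reference clocks) shows why the much larger real-world difference in the linked review cannot be read off the spec sheet:

```python
# Theoretical pixel fill rate = ROPs x core clock (reference clocks assumed)
def gpixels_per_sec(rops, clock_mhz):
    return rops * clock_mhz / 1000.0

print(round(gpixels_per_sec(32, 880), 1))  # HD6970: ~28.2 GPixels/s on paper
print(round(gpixels_per_sec(32, 925), 1))  # HD7970: ~29.6 GPixels/s on paper, only ~5% more
```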

Similarly, the HD7970GE is only 10-11% faster than a GTX 680 despite a 50% memory bandwidth advantage. I am not going into the details of why this is, but you cannot compare theoretical numbers on paper between different GPU architectures, especially not across NV and AMD. At best you can do so with some accuracy across all HD7000 cards or across all GTX600 cards, but not interchangeably.

For example, the GTX 650 Ti has a nearly 50% texture fill rate advantage over the HD7770 (http://www.gpureview.com/show_cards.php?card1=675&card2=682), but in real-world games it is just 13% faster at 1080p:
http://www.computerbase.de/artikel/grafikkarten/2012/test-vtx3d-hd-7870-black-tahiti-le/4/

Based on the specs, a GTX 650 Ti would be slightly faster than the GPU in the Xbox 720. It's possible MS decided to keep costs down. Supposedly the GPU in the PS4 is a Liverpool-based HD7970M with 18 Compute Units and 1152 SPs, for 1.84 TFLOPS of floating point, which is roughly 50% faster than the rumored Xbox 720 GPU. If you want a more powerful console, pay more attention to the PS4's specs. Consoles won't have high-end GPUs due to high costs and the fact that something like a GTX 690 uses nearly 300W of power.

"St1ckM4n said: And exactly which DX11 features will be able to be run after a resolution of 1080p? There's no power left to run tessellation etc.."

You are not the target market for next generation consoles. If you want the best graphics with tessellation, stick to PCs :)

Considering the PS3/360 don't render most games beyond 1280x720 at 30 fps, moving to 1080p with some DX11 effects will be a huge step up for console gamers. Even games like Black Ops 2 or Uncharted 3 only run at 880x720 / 896x504 resolutions:

http://forum.beyond3d.com/showthread.php?t=46241
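In raw pixel terms the jump is large; this is nothing more than pixel-count arithmetic using the resolutions cited above:

```python
# Pixel-count comparison between cited current-gen render resolutions and 1080p.
def pixels(w, h):
    return w * h

target = pixels(1920, 1080)  # 2,073,600 pixels
for name, w, h in [("720p", 1280, 720),
                   ("Black Ops 2", 880, 720),
                   ("Uncharted 3", 896, 504)]:
    print(f"1080p has {target / pixels(w, h):.1f}x the pixels of {name} ({w}x{h})")
```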

"Amstech, TechSpot Enthusiast, said: I see the new PS4 getting a Nvidia GPU like the PS3, only stronger obviously with 2GB VRAM and similar results for the new xbox..."

Little to no chance. The PS4 is likely going to be running on Linux. As a result, developers will want the flexibility of using the fastest GPU architecture for DirectCompute, OpenCL and OpenGL. That leaves only one choice - AMD's HD7000 series (or an HD8000 series refresh). With games like BioShock Infinite and Tomb Raider joining Sleeping Dogs, Dirt Showdown, Sniper Elite V2 and Hitman Absolution in leaning on GPU compute, and with 20nm Maxwell and the HD9000 series focusing even more on compute, the industry is going to use more and more GPGPU functions to accelerate graphical effects such as HDAO, contact-hardening shadows, etc.
http://videocardz.com/39236/amd-nev...ith-tomb-rider-crysis-3-and-bioshock-infinite

More importantly, since MS/Sony are trying to keep costs down as the market is unlikely to bear $500-600 consoles as was the case with PS3, price/performance becomes a critical factor. Right now GTX680M is just 5% faster than HD7970M when tested with recent drivers but costs $330-400 more. Since Enduro vs. Optimus functionality is completely irrelevant for consoles, mobile AMD GPUs provide by far the superior price/performance for consoles:

"Thanks to the enormous lead in the last three titles, the Nvidia GPU is more or less 5 % ahead of the AMD card - a negligible difference. With regards to costs, however, the performance of the Radeon HD 7970M is truly impressive as the 680M can run users $400 USD more than the Radeon. Nvidia's high-end graphics card continues to have very poor value per dollar."

http://www.notebookcheck.net/Review-Update-Radeon-HD-7970M-vs-GeForce-GTX-680M.87744.0.html

That makes an NV GPU inside PS4 almost a non-starter assuming Sony wants to keep the MSRP close to $400-450.

"MrBungle said: My memory is a bit fuzzy since it was so long ago but isn't 1.2TFLOPS roughly equivalent to a 9800GTX?"

No, only 0.432 Tflops: http://www.gpureview.com/show_cards.php?card1=559&card2=

Regardless, you cannot compare theoretical TFLOPS between different GPU architectures and apply that to games. Case in point:

HD7970GE is about 10-11% faster than a GTX680 despite having a 33% floating point performance advantage: http://www.gpureview.com/show_cards.php?card1=667&card2=679

GTX680 has a 95% Tflops advantage over GTX580 but is only 35-40% faster on average:
http://www.gpureview.com/show_cards.php?card1=667&card2=637

The only useful information you can get from the above specs is to find a similar HD7000 card and then look up its performance in this chart:

http://alienbabeltech.com/abt/viewtopic.php?p=41174

The GPUs in PS3/360 are only at 12-14 VP level at best. If Xbox 720 has a GPU similar to HD7770, it will be at least 5x faster. If PS4 has a slightly downclocked/neutered HD7970M with 18 Compute Units, with performance around HD7850 2GB, then it will be 9-10x faster than PS3's RSX.
 
My memory is a bit fuzzy since it was so long ago but isn't 1.2TFLOPS roughly equivalent to a 9800GTX?
Nope. A 9800GTX would be around half that figure (648.192 GFlops: shader/core count * shader frequency * 3 instructions per clock).
1.2 TFlops would be roughly equal to a HD 7770- which is more fitting if the Xbox 720 uses GCN architecture.
"MrBungle said: My memory is a bit fuzzy since it was so long ago but isn't 1.2TFLOPS roughly equivalent to a 9800GTX?"
No, only 0.432 Tflops: http://www.gpureview.com/show_cards.php?card1=559&card2=
Those GPU Review guys are amusing as all hell - I could fill a couple of pages with their inaccuracies. Their numbers use 1 add + 1 multiply per cycle, and don't take into account the compute ability (1.1 spec in the G92's case) to execute a third op independent of the other two. All theoretical of course - but then I don't know of any CPU/GPU manufacturer that lists actual floating point performance in their literature.
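For reference, both figures fall out of the same inputs (128 shaders at a 1688MHz shader clock for the 9800 GTX); only the ops-per-clock assumption differs:

```python
# 9800 GTX theoretical shader throughput: 128 SPs x 1.688 GHz shader clock.
shaders, shader_clock_ghz = 128, 1.688

print(round(shaders * shader_clock_ghz * 2, 3))  # 432.128 GFLOPS (MAD only, gpureview's method)
print(round(shaders * shader_clock_ghz * 3, 3))  # 648.192 GFLOPS (MAD + the independent MUL)
```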
 
Maybe they'll pull another "Surface RT now, Surface Pro later" by releasing an Xbox 1080 a few months later which can up the graphics on the games. Unlikely, but would be cool.
 
BlueFalcon, excellent post and I hope more people read it so they know what's actually going on.

In regard to my comment about "heat being dispersed", you reply with TDP figures. That's all well and good, but a laptop running an HD7970M is pretty damn loud. The console itself is slightly bigger, yes, but noise levels must still be acceptable for the hardware used!
 
I love how everyone wants the new gen consoles to be all powerful but still be affordable.

Give both companies some slack, as I am sure they are trying very hard to please everyone.
 
I'm going to go out on a limb and say that this isn't even that bad, even if it is the final specification of the GPU. Modern gaming consoles are working with what basically equates to an underclocked 7800 card from 5 years ago and still push out games that look pretty good. Sure, they don't stack up against high-end GPUs like the 680 or 7970, but they still look alright. Try running today's games on a 7800 on PC and it would look a far cry worse than what you see from consoles. Just take a look at YouTube vids of people playing Skyrim on a 7800 GTX; it's not a pretty sight...

Basically what I'm saying is that it's a lot more difficult to compare apples to apples on gaming performance from PC to console games because, more often than not, the consoles are way more optimized and will run more fluidly than an 'equivalently' powered gaming PC.
 
My memory is a bit fuzzy since it was so long ago but isn't 1.2TFLOPS roughly equivalent to a 9800GTX?
Nope. A 9800GTX would be around half that figure (648.192 GFlops: shader/core count * shader frequency * 3 instructions per clock).
1.2 TFlops would be roughly equal to a HD 7770- which is more fitting if the Xbox 720 uses GCN architecture.
"MrBungle said: My memory is a bit fuzzy since it was so long ago but isn't 1.2TFLOPS roughly equivalent to a 9800GTX?"
No, only 0.432 Tflops: http://www.gpureview.com/show_cards.php?card1=559&card2=
Those GPU Review guys are amusing as all hell - I could fill a couple of pages with their inaccuracies. Their numbers use 1 add + 1 multiply per cycle, and don't take into account the compute ability (1.1 spec in the G92's case) to execute a third op independent of the other two. All theoretical of course - but then I don't know of any CPU/GPU manufacturer that lists actual floating point performance in their literature.

Yep, you're right. Looks like the new eXcrement Box will be just a bit faster than a GTX 280... better, but still horrible.
 