Intel details upcoming 'Ice Lake' Gen 11 integrated graphics architecture

Cal Jeffrey

Something to look forward to: On Thursday, Intel released design documents detailing its Gen11 SoC graphics architecture. The information posted on its website outlines what to expect from Gen11, which Intel announced last December would be twice as fast as the previous Gen9 integrated graphics engine.

The new architecture will make its first appearance in Intel’s upcoming 10nm Ice Lake processors. The chipmaker indicated that it is aiming for one teraflop of 32-bit and two teraflops of 16-bit floating point performance with the new engine.

According to Intel, the architecture is built on its 10nm process with third-generation FinFET technology, supports all common graphics APIs, and includes Adaptive Sync support. It also supports up to four 32-bit channels of LPDDR4/DDR4 memory.

Intel raised the number of sub-slices, each of which houses eight execution units (EUs), from three in Gen9 to eight in Gen11, for a total of 64 EUs. That is a considerable improvement over the 24 EUs in Skylake-era chips. The new engine will process two pixels per clock.
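Those EU counts make the teraflop targets plausible. As a rough back-of-the-envelope check, assuming each EU issues two SIMD4 FMA operations per clock (16 FP32 FLOPs, a common figure for Intel's EU design, not stated in the article) and a clock in the neighborhood of 1 GHz:

```python
# Back-of-the-envelope check of the 1 TFLOP FP32 target.
# Assumptions (not from the article): each EU issues two SIMD4 FMA
# operations per clock (16 FP32 FLOPs), and the GPU runs near 1 GHz.
EUS = 64
FLOPS_PER_EU_PER_CLOCK_FP32 = 16   # 2 pipes x SIMD4 x 2 (FMA = mul+add)
CLOCK_HZ = 1.0e9                   # hypothetical ~1 GHz clock

fp32_tflops = EUS * FLOPS_PER_EU_PER_CLOCK_FP32 * CLOCK_HZ / 1e12
fp16_tflops = 2 * fp32_tflops      # FP16 at double rate

print(f"FP32: {fp32_tflops:.2f} TFLOPs, FP16: {fp16_tflops:.2f} TFLOPs")
# ~1.02 and ~2.05 TFLOPs, in line with Intel's stated targets
```

That lands almost exactly on the one-and-two-teraflop figures Intel quoted, which suggests those targets describe peak theoretical throughput rather than sustained performance.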

Interestingly, the CPU and GPU will share a last-level cache (LLC). Intel explains that this improves effective memory bandwidth by eliminating data movement between the two units. Of course, this is all theoretical, so we’ll have to wait to see whether it bears out.

The Gen11 SoC will support coarse pixel shading (CPS) and position-only shading (POSH) to reduce power and bandwidth demands.

CPS reduces shading in portions of the screen where it is less noticeable. This method can improve frame rate while reducing rendering overhead.

“[We can use] this technique to lower the total overall power requirements or hit specific frame rate targets by decreasing the shading resolution while preserving the fidelity of the edges of geometry in the scene,” Intel said.

The method is best used on objects that are far from the camera, are in motion, or are on the visual periphery.
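The idea can be sketched as a simple rate-selection heuristic. This is purely illustrative (not Intel's actual algorithm), with hypothetical thresholds: the more of those three conditions an object meets, the coarser the shading rate it gets.

```python
# Illustrative heuristic (not Intel's actual algorithm) for picking a
# coarse pixel shading rate: shade fewer pixels for objects that are
# far away, moving fast, or near the edge of the screen.
def shading_rate(distance, speed, periphery):
    """Return pixels covered per shaded sample: 1 (full rate), 2, or 4."""
    score = 0
    if distance > 50.0:   # far from the camera (units are hypothetical)
        score += 1
    if speed > 10.0:      # fast-moving, so detail is blurred anyway
        score += 1
    if periphery > 0.8:   # near the visual periphery (0=center, 1=edge)
        score += 1
    return [1, 2, 2, 4][score]   # coarser rate as more conditions hold

print(shading_rate(5.0, 0.0, 0.1))    # close, static, central -> 1 (full rate)
print(shading_rate(80.0, 20.0, 0.9))  # far, fast, peripheral -> 4 (coarse)
```

Note that geometry edges are still resolved at full resolution in CPS; only the shading work inside covered pixels is reduced, which is why the fidelity of silhouettes is preserved.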

POSH is a tile-based rendering technology. It reduces the required bandwidth by dividing the image into a certain number of rectangular regions and then rendering them individually. Tiling helps stem the extra write bandwidth that comes with overdrawing pixels.
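The tiling step itself is straightforward to sketch. The tile size below is illustrative (the article doesn't give one); the point is that each rectangle can be rendered while its pixels stay in fast on-chip memory, so overdrawn pixels don't generate repeated DRAM writes.

```python
# A minimal sketch of the tiling idea behind tile-based rendering:
# split the framebuffer into fixed-size rectangular regions and render
# each independently, so a tile's pixels stay in fast local memory
# instead of repeatedly hitting DRAM when pixels are overdrawn.
def tiles(width, height, tile_w, tile_h):
    """Yield (x, y, w, h) rectangles covering a width x height image."""
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            yield (x, y, min(tile_w, width - x), min(tile_h, height - y))

# A 1920x1080 frame cut into 256x256 tiles (tile size is hypothetical):
regions = list(tiles(1920, 1080, 256, 256))
print(len(regions))  # 8 columns x 5 rows = 40 tiles
```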

Intel’s Gen11 integrated graphics should be a considerable step up from the previous generation. If you want to dig into all the gritty details, Intel has posted a white paper on its website.

Intel has Ice Lake Core processors slated for release around the 2019 holiday season.


 
I'm a level 3 IT Specialist for NY State and I have yet to play around with ANYTHING Thunderbolt. That being said, I am digging Intel's angle of packing increasingly impressive power into their integrated graphics architecture.
 
This is Intel rebranding bullshit.

This Gen11 GT2 is clearly the next step from the Gen9 GT3e (aka Iris Plus 655), not the Gen9 GT2. If Intel includes this iGPU in all its Ice Lake chips then great, I think that's a fantastic idea, but I doubt that will happen.

The previous-gen GT3e parts (Iris Plus 540, 640, 650, and 655) are reasonable low-end gaming iGPUs for the very casual gamer, roughly equivalent to the Vega 6 to Vega 8 iGPUs. That would bring some gaming performance to all Intel customers, which was previously limited to Intel's own NUCs and Apple's 13" MacBook Pros (which were rarely used for gaming, because Mac).

However, rebranding this next gen GT3e part down to GT2 and then comparing to the previous gen's lower end GT2 is IMO misleading. Though from a marketing perspective, I love the misdirection. Take advantage of everyone's ignorance and "Look at this great thing we did!"
 
A leaked AOTS bench has Gen 11 on par with Vega 10 integrated graphics. I'm not excited about any iGPUs, but I am definitely curious.
 
The problem with comparing iGPUs is the supported main memory speed they depend on. One reason (possibly the main reason) the Iris Plus 655 is a little faster than the 650 is that NUCs with the 655 support 2400 MHz RAM while the 650 NUCs only support 2133 MHz.

If Intel opens this spec out to 2666 or 3000 MHz in Gen 11 that can help close the gap with Vega 8 or 10 at 2800 or 3200 MHz. I've mostly moved on from iGPUs (I started gaming with a Broadwell NUC [Iris 6100] and then a Kaby Lake NUC [Iris Plus 650], now have a GTX 1080) but I'm still interested in the tech.
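The peak-bandwidth math behind that comparison pencils out like this, assuming the usual dual-channel, 64-bit-per-channel DDR4 layout (real-world throughput is lower than these theoretical peaks):

```python
# Rough theoretical peak bandwidth for the DDR4 speeds being compared.
# Assumes dual-channel, 64-bit-per-channel DDR4 (the common layout);
# actual sustained throughput is lower.
def ddr4_peak_gbs(mt_per_s, channels=2, bus_bits=64):
    """Peak bandwidth in GB/s: transfers/s x bytes per transfer."""
    return mt_per_s * 1e6 * channels * (bus_bits // 8) / 1e9

for speed in (2133, 2400, 2666, 3000):
    print(f"DDR4-{speed}: {ddr4_peak_gbs(speed):.1f} GB/s")
# DDR4-2133: 34.1, DDR4-2400: 38.4, DDR4-2666: 42.7, DDR4-3000: 48.0
```

So the 655's 2400 MHz support is only about a 13% bandwidth bump over 2133, while 2666 or 3000 support would be a 25-40% jump, which is why the supported memory spec matters so much at this performance tier.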
 
The unganged memory controllers will help here. AMD has been doing this for a while; it basically gives the iGPU its own memory controller, distinct from whatever the CPU is using. It should lower latencies all round.

When it comes to the iGPU, you're right that at this performance level they are heavily starved of bandwidth; anything that increases it with faster memory support is a big bonus. They have LPDDR4X support, which will presumably help on the slimmest implementations, and hopefully it'll support faster SODIMM speeds on the notebook end.
 
A welcome improvement if it’s rolled out across all the CPUs, not just, say, the i9s. It makes the iGPU useful for non-gaming stuff, but it's of no interest to me for real gaming, where I’m running 1440p and don’t do laptop gaming at all.
 