Intel now shipping seventh-gen Kaby Lake processors to manufacturing partners

Darth Shiv

TS Evangelist
I guess I did use clock speed, lol. I meant that more as a statement; you could replace it with the tiny IPC gains we've been seeing and the result would be the same.

Saying Hyper-Threading is a failure is like saying we will never overcome the inefficiencies of multiple CPU cores. If we can't find a way to utilize all the cores, we are going to have issues in the consumer market. If you can't get CPUs to divide the work among cores properly, it's going to reflect in the software as well, and vice versa. If it can be done via software, it can be done via hardware.
No, it's not the same. Hyper-threading tries to use the unused portions of a core, but the two threads don't get two complete cores, and that's the problem. Far too often, programs end up bottlenecked on exactly the resources that are already in use, so hyper-threading can't help, and in too many situations having it on actually costs performance. It's a nice idea in theory (use something that would otherwise sit idle), but in practice it simply doesn't work.

Granted, it doesn't cost Intel much to put it in, but it still has to deliver an actual performance benefit!
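The contention argument above can be sketched as a toy model (purely illustrative, not real hardware): a 2-way SMT core that can issue one op per execution resource per cycle. Two threads that lean on different resources share the core well; two identical threads fight over the same unit and gain nothing.

```python
# Toy model of 2-way SMT (not real hardware): one physical core exposes
# one unit of each execution resource per cycle. An op stalls for a cycle
# if the sibling thread has already claimed that resource.

def run_smt(thread_a, thread_b):
    """Return cycles needed to retire both op streams on one SMT core."""
    a, b = list(thread_a), list(thread_b)
    cycles = 0
    while a or b:
        cycles += 1
        used = set()                    # resources claimed this cycle
        for stream in (a, b):
            if stream and stream[0] not in used:
                used.add(stream.pop(0))  # op issues and retires
    return cycles

alu_heavy = ["alu"] * 8
mem_heavy = ["load"] * 8

print(run_smt(alu_heavy, mem_heavy))  # complementary threads: 8 cycles
print(run_smt(alu_heavy, alu_heavy))  # identical threads: 16 cycles, no gain
```

This is exactly the pattern the post describes: hyper-threading only pays off when the two threads happen to bottleneck on different resources.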
 

Evernessince

A Hellish Human Zoo
No, it's not the same. Hyper-threading tries to use the unused portions of a core, but the two threads don't get two complete cores, and that's the problem. Far too often, programs end up bottlenecked on exactly the resources that are already in use, so hyper-threading can't help, and in too many situations having it on actually costs performance. It's a nice idea in theory (use something that would otherwise sit idle), but in practice it simply doesn't work.

Granted, it doesn't cost Intel much to put it in, but it still has to deliver an actual performance benefit!
That's mostly an issue with the scheduler though, isn't it? I know that writing a scheduler for a parallel processor is a lot more difficult, but isn't it entirely possible to create one that does a better job of saturating all the cores? I know that a good chunk of Intel's IPC gains over the years have come from their improved scheduler.
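From the software side, "saturating all the cores" mostly means handing the OS scheduler at least as many runnable CPU-bound tasks as there are logical CPUs and letting it do the placement. A minimal Python sketch (illustrative only; real schedulers also weigh cache affinity and SMT sibling contention):

```python
# Give the OS scheduler one CPU-bound process per logical CPU and let it
# spread them across cores and SMT threads.
import os
from concurrent.futures import ProcessPoolExecutor

def busy_work(n):
    """A CPU-bound task the scheduler can place on any free logical CPU."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    workers = os.cpu_count() or 4   # logical CPUs = cores x SMT threads
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(busy_work, [100_000] * workers))
    print(len(results), "tasks completed across", workers, "logical CPUs")
```

Note the limit the thread keeps circling back to: the OS can keep every logical CPU busy with *processes*, but it cannot create instruction-level parallelism inside any one of them.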
 

Darth Shiv

TS Evangelist
Intel has actually done something pretty neat in Skylake to attack this resource contention/bottleneck problem: inverse hyper-threading. There are a few articles floating around, but the premise is that a lot of things are hard to write multithreaded, so you end up bottlenecked on one core's large resources. My understanding is that they let a core borrow resources from another core, a sort of resource pooling, to alleviate pipeline stalls when multiple instructions are competing for a long-latency execution resource. http://www.myce.com/news/skylake-cpus-have-inverse-hyper-threading-to-boost-single-thread-performance-77011/
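Taking the linked article's premise at face value (the actual mechanism is speculative, so this is only a back-of-the-envelope model): a single thread with independent long-latency ops finishes sooner if it can borrow a neighbouring core's execution unit instead of queuing everything on its own.

```python
# Toy model of the "resource pooling" premise (hypothetical; this is not
# how Skylake is documented to work): independent ops of equal latency
# spread across however many execution units the thread may use.

def cycles_to_finish(num_ops, op_latency, units):
    """Serial waves of `units` ops each; ignores pipelining for simplicity."""
    waves = -(-num_ops // units)      # ceiling division
    return waves * op_latency

# 8 independent divides at 10 cycles each:
print(cycles_to_finish(8, 10, units=1))  # 80: queued on one core's divider
print(cycles_to_finish(8, 10, units=2))  # 40: pooling a neighbour's divider
```

The catch, as the follow-up post argues, is that this only helps when the ops really are independent.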
 

Darth Shiv

TS Evangelist
That's mostly an issue with the scheduler though, isn't it? I know that writing a scheduler for a parallel processor is a lot more difficult, but isn't it entirely possible to create one that does a better job of saturating all the cores? I know that a good chunk of Intel's IPC gains over the years have come from their improved scheduler.
Well, if a core can't offload execution onto another core's resources, it's very hard to saturate those resources. Smart optimisations like out-of-order execution and better branch prediction can increase IPC a lot, and traditionally Intel has done plenty of that, but inverse hyper-threading has only recently come to x86 - I don't know why. Maybe they didn't bother because there weren't enough cores until now for it to be worth the cost of the change.

Hypothetically, how would a set of processes saturate, say, a 2-physical-core hyper-threaded machine? You would first need 4 processes. Each core has long-latency execution resources, so you essentially need 2 of the programs to a) use those long-running resources AND b) have independent operations that can be scheduled out of order onto other resources while the main work finishes. How often do you actually want the result of a long-running op soon or immediately after it completes? Almost always. On top of that, this assumes you have a perfect scheduler, AND your hyper-threaded processes somehow have to use whatever is left over. Programs like that just don't exist - the problem is inherently at odds with how we solve problems in code.
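The dependency problem described above shows up even inside a single loop. In this sketch, a running sum where every add consumes the previous result gives the hardware nothing independent to overlap, while splitting the work into two accumulators exposes independent operations (the classic trick compilers and hand-optimisers use to feed out-of-order execution):

```python
# Same result, different amounts of independent work for the hardware.

def serial_sum(xs):
    total = 0
    for x in xs:
        total += x          # each add depends on the previous add
    return total

def split_sum(xs):
    xs = list(xs)
    a = b = 0               # two independent dependency chains
    for i in range(0, len(xs) - 1, 2):
        a += xs[i]
        b += xs[i + 1]      # can overlap with the add into `a`
    if len(xs) % 2:
        a += xs[-1]         # leftover element for odd lengths
    return a + b

print(serial_sum(range(10)))  # 45
print(split_sum(range(10)))   # 45, but with two overlappable chains
```

Both return the same answer; the point is that only the second shape gives out-of-order hardware (or a pooled neighbouring unit) anything to do in parallel, and most code is written in the first shape.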
 

Evernessince

A Hellish Human Zoo
Well, if a core can't offload execution onto another core's resources, it's very hard to saturate those resources. Smart optimisations like out-of-order execution and better branch prediction can increase IPC a lot, and traditionally Intel has done plenty of that, but inverse hyper-threading has only recently come to x86 - I don't know why. Maybe they didn't bother because there weren't enough cores until now for it to be worth the cost of the change.

Hypothetically, how would a set of processes saturate, say, a 2-physical-core hyper-threaded machine? You would first need 4 processes. Each core has long-latency execution resources, so you essentially need 2 of the programs to a) use those long-running resources AND b) have independent operations that can be scheduled out of order onto other resources while the main work finishes. How often do you actually want the result of a long-running op soon or immediately after it completes? Almost always. On top of that, this assumes you have a perfect scheduler, AND your hyper-threaded processes somehow have to use whatever is left over. Programs like that just don't exist - the problem is inherently at odds with how we solve problems in code.
That's a really good point. Most modern desktops still only have 4 cores. Maybe once Intel releases Coffee Lake with mainstream 6-core processors we'll start to see better threading.
 

gponline

TS Member
AMD, wake up!!!

Did you hear that? Intel is launching something new every year, or even every half year. You'd better get Zen out by the end of the 4th quarter, or else...
 

jauffins

TS Enthusiast
...and Kaby Lake just around the bend, it may be advisable to hold off on buying a new computer for another few months to take advantage of Intel’s latest and greatest.
Meh, I bet it'll be 3% more CPU performance for 15% more money. If choosing between Haswell and Skylake, I'd still recommend Haswell to save money here and there... why would anyone with some sense wait for "better integrated graphics"? The CPU side of PCs has been boring for years now; my attention and excitement go to GPUs instead.

I just built a system with Haswell-E right after I saw Broadwell-E's price and performance, and only because my use case at home changed. When I first built my PC with a Core i5 three years ago, it was intended for gaming; then my brother decided to study audiovisuals, and it soon became evident that we needed more CPU power to handle me gaming and him rendering videos at the same time - yes, with concurrent users on the PC. Otherwise, I wouldn't have bothered replacing that i5 for gaming in the coming years, until a true CPU leap arrived.

The i5 Skylakes are pretty much exactly the same price as the Haswell parts. The i7's, on the other hand...