Intel unveils the first of its 8th generation Core CPUs

Only the boost speed is faster, & that's only 5% faster than the prior-generation chips. Yes, HyperThreading is not as efficient as physical cores, so a 4C/8T CPU that's only using 4 threads would normally outperform a 2C/4T CPU...but only if the clock speeds were at least identical (or if the 4C/8T CPU was running faster). In a laptop, it's a lot harder to get the cool temperatures needed to maximize your Turbo speed. Considering that these chips have double the cores on them, that's double the potential heat sources...& with a 15W TDP, I doubt you're going to get any kind of hefty cooling system in a laptop that really allows for full Turbo performance on these chips.
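To put rough numbers on that trade-off, here's a minimal sketch; the clocks and the ~30% HyperThreading uplift below are my own illustrative assumptions, not figures for any actual chip:

```python
# Rough throughput estimate: effective cores x clock.
# All numbers here are illustrative assumptions, not measured values.

def relative_throughput(cores, threads_used, clock_ghz, ht_uplift=0.30):
    """Threads beyond the physical core count ride on HyperThreading and
    are assumed to add only ~30% each, not a full core's worth."""
    on_cores = min(threads_used, cores)
    on_ht = max(0, threads_used - cores)
    return (on_cores + on_ht * ht_uplift) * clock_ghz

# A hypothetical 4-thread workload:
print(relative_throughput(2, 4, 3.5))  # 2C/4T at 3.5 GHz -> ~9.1
print(relative_throughput(4, 4, 3.5))  # 4C/8T at the same clock -> 14.0 (clearly faster)
print(relative_throughput(4, 4, 2.2))  # 4C/8T throttled to 2.2 GHz -> ~8.8 (now slower)
```

The 4C part only keeps its edge as long as the cooling lets it hold a comparable clock, which is exactly the question for a 15W laptop chip.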

That last sentence though....
 
They're not the kind of chips that typically attract my attention. In fact, if you hadn't responded to my post I would've already forgotten they even exist, or are going to exist. Kind of like entry-level smartphones: millions of different ones are available and millions of people buy them, but no one really takes any notice of them; they're innocuous.

So you have no interest, but still decided to comment. You do that a LOT.
 
I love the new Intel boxes. They're colorful in an upbeat, vibrant, modern, and tasteful way...(y)

AMD's packaging designers, on the other hand, displayed such poor judgment, and such a gaudy, tacky, lowbrow sense of style, that it would make even Kim Kardashian retch in its presence.

In fact, those "Threadripper" boxes are so ugly, they succeed in making Andy Warhol's work look elegant.

In fact, I heard Michelangelo's "David" asked for a fig leaf, not to cover his ding-dong, but to cover his eyes when he first saw those boxes. Which, by the way, are really only fit to package the $50.00 burner phones one would try to sell in Spanish Harlem!

Have a nice day kidz, and be nice to one another! If you don't, I'll come back and put up another of these infuriating sh!tposts. :p:cool:
 
Why participate in a topic about chips that will go in and out of your head as soon as you open the link and then close it? Doesn't that sound irrational?
Well, that's why they call them improved processes: they engineered a way to have the best of both worlds, adding performance while keeping the same TDP. That comes through efficiency gains, improved cooling solutions, and so on.

I'm still not getting why people are so stuck on not understanding how engineering works: there is a problem, so let's find a solution. I doubt they will ship chips that will burn out your computer... and I'm impressed that people believe this is how they work. The level of abstraction is incredible.

Because "engineering" isn't a magic wand that you can simply wave around & say, "viola! Every problem is now solved!" If that were the case, then Intel fans wouldn't have had a field day with the original FX processors, & AMD fans wouldn't have a field day with Intel's use of thermal paste in their newer CPUs. And if it were that simple to simply "engineer" items into existence, you wouldn't see Intel still stuck on their 14nm process, as they would have found it "simple" to move on to 10nm or even 7nm by now.

As for 'burning up' computers, you can't honestly be saying that you've never heard of a PC overheating, or having components (including CPUs) "frying" because they overheated? Especially with laptops? Guess all those companies that designed those passive & active laptop coolers must be kicking themselves for having wasted all of their time "engineering" a solution to a problem that doesn't exist, huh?

For those still complaining about the numbers, let's take a look at this in a non-tech way:
Scenario 1: You have 2 people carrying 2500 pieces each.
Scenario 2: You have 4 people carrying 1900 pieces each.
Which scenario will be completed first?

OK, let's go back to the tech way: sure, it's "slower clocked," but you have a better spread of tasks all around and you will definitely feel it running faster, because processes won't hog the complete processor. And then you have Turbo; you now have 4 cores instead of 2, so even if they were left at the same clocks, it would still be faster.

How are they handling it? Well, that's how they make money.
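For what it's worth, the naive math behind that analogy is just this (a throwaway sketch using the piece counts from the two scenarios above, with nothing else assumed to limit the workers):

```python
# Naive per-trip comparison from the carrying analogy above.
# Assumes nothing limits the workers besides their individual carrying capacity.
scenario_1 = 2 * 2500   # 2 people x 2,500 pieces = 5,000
scenario_2 = 4 * 1900   # 4 people x 1,900 pieces = 7,600
print(scenario_2 / scenario_1)  # ~1.52, i.e. scenario 2 moves ~52% more per trip
```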

You forgot to take into account any other limiters that might be in play. You assumed a) there's no limit to how many pieces the people can pick up at a time, b) there's a limit to how long you're going to collect the pieces, but you didn't specify that length of time, c) everyone is dipping into a common source for their pieces, when each person actually has their own designated channel, & d) you forgot to account for the "teamwork" equivalent of HyperThreading.

So what you have is more of a situation like this:
Group A:
  • Normally has 2 people [cores] that can carry 2,500 pieces each (5,000 pieces maximum).
  • If they both work extra hard, they can manage to carry 3,700 pieces each (7,400 pieces maximum); if only 1 of them is working, they can manage to get up to 4,000 pieces at a time.
  • Sometimes, they will work in 2 teams of 2 [HyperThreading]. Each team can then bring in 3,250 pieces (6,500 pieces maximum), or 4,810 pieces if they work really hard (9,620 pieces maximum); if only 1 team is working at their absolute fastest, they can bring in 5,200 pieces at a time [In this case, the assumption is that increasing from a 1C/1T to a 1C/2T situation improves the overall performance by 30%; this means, however, that the performance per thread technically drops, but overall performance can improve because you have more threads available].
Group B:
  • Normally has 4 people [cores] that can carry 1,900 pieces each (7,600 pieces maximum).
  • If they all work extra hard, they can maybe carry 3,900 pieces each (15,600 pieces maximum); if only 3 people are working hard, they can maybe carry 4,000 pieces each (12,000 pieces maximum); if only 1 or 2 people are working really, really hard, they can carry 4,200 pieces each (8,400 pieces maximum with 2 people).
  • They can also try working in teams, this time in 4 teams of 2 [HyperThreading]. Each team can bring in 2,470 pieces (9,880 pieces maximum), or maybe 5,070 pieces if they work really hard (20,280 pieces maximum); if only 3 teams work hard they can maybe manage 5,200 pieces each (15,600 pieces maximum); if only 1 or 2 teams work really hard they can carry 5,460 pieces each (10,920 pieces maximum with 2 teams) [same as with Group A].
Overall setup:
  • Pieces are stored in massive warehouses [SSDs, HDDs, flash drives, CD/DVD, network storage, etc.]. The pieces are transferred by conveyor belts [system RAM] to storage hoppers [L3 cache]. The hoppers feed individual bins [L2 cache]. Each person [Core] is assigned a separate bin, and cannot access pieces from another person/team's bin; if they work in teams, each member of the team gets their own bin [Thread], but because both bins have to fit into the same space on the processing floor they're limited to half the normal size [reflects the fact that 1 core cannot access the L2 cache of another core, not without the instructions in core #2's L2 cache first being read back to the L3 cache & then being sent to core #1's L2 cache].
  • The bins can be manually refilled at any time from the hopper; however, since refilling the bins takes time, for efficiency's sake the people/teams only refill their bins when they're completely or almost completely empty. While being refilled, the people/teams cannot pull any pieces from the bins [represents cache latency & that the cores generally can't read from the cache while that portion is being written. Plus, CPUs work more efficiently when they can process large chunks of data from their cache, rather than small bits here & there].
  • Depending on the pieces being worked on, management swaps out the bins, so sometimes the bins don't hold the same amount of pieces, & sometimes not every person/team gets a bin to pull pieces from [not all applications are created equal. Some of them are set up to use every single available core/thread in a CPU, while others by their nature can only use a few cores/threads. Some applications fully tax the capabilities of a CPU, while others may use every single thread but don't come anywhere near taxing its capabilities].
Scenarios:
  1. Group A has both bins available, Group B has all 4 bins available. Both groups just work normally, & each bin can hold 2,500 pieces at a time. Group A is able to process 5,000 pieces at a time, while Group B is able to process 7,600 pieces at a time. Group B's performance is 52% greater than Group A's [represents a situation where both CPUs use all of their cores, but neither CPU uses Turbo, & neither CPU uses HyperThreading. The chances of this happening are almost non-existent].
  2. As per Scenario 1, but Group A is allowed to work in teams; Group B's bins can still hold 2,500 pieces each, but Group A's bins are reduced to 1,625 pieces each. Group B's output is still 7,600 pieces, but Group A's output has increased to 6,500 pieces. Group B still produced more, but now their edge is reduced to 16.9% [represents a situation where an application uses 4 threads, so CPU A's HyperThreading kicks in, but CPU B's did not. Both CPUs are still not using Turbo].
  3. As per Scenario 1, but both Groups are able to work as teams. Group A's output increases to 6,500 pieces, but Group B's output increases to 9,880 pieces, giving them back the 52% productivity advantage [situation is an application that will use as many threads as it can find. Neither CPU's Turbo has kicked in].
  4. Both Groups A & B have only 2 bins available. Neither Group is allowed to use teams. Both are limited to bins of 2,500 pieces each. Group A is able to produce 5,000 pieces, but Group B is only able to produce 3,800 pieces unless they work extra hard; if they don't work extra hard, their productivity is 24% less than Group A's [represents a two-threaded application. In this case, unless CPU B's Turbo settings kick in, it cannot match CPU A's performance]
  5. As per Scenario 4, but the bin sizes are increased to 4,000 pieces, & each Group is limited to 1 bin. Both Group A & B decide to work really, really hard. Since each Group only had 1 bin available, however, & were not allowed to work in teams, both Group A & B produced the same amount (4,000 pieces), giving them identical performance [represents a single-threaded application. In this case, both CPUs are able to use Turbo, but both top out at the same frequency, giving them identical performance].
  6. As per Scenario 4, but the bin sizes are increased to 5,000 pieces. Both Groups decide to work really, really hard. Group A is limited by its people's performance, & is still only able to produce 7,400 pieces. Group B was able to work slightly faster, but only managed to produce 8,000 pieces, giving it a productivity margin of only 8.1% [represents a dual-threaded application where each CPU is able to hit its maximum Turbo setting].
  7. As per Scenario 6, but each Group is limited to 1 bin. Group A manages to produce 4,000 pieces, while Group B manages to produce 4,200 pieces; Group B's margin is only 5% in this case [represents a single-threaded application where both CPUs can use full Turbo].

Note that these are just estimates for show, & that the 30% figure I used is the generally accepted performance improvement when going from a 2C/2T CPU to a 2C/4T CPU. Actual performance of a 2C/4T CPU vs. a 4C/4T or even 4C/8T CPU varies quite a bit from application to application -- quite a few games, for example, perform just as well on a 2C/4T Core i3 as they do on a 4C/4T Core i5 (or even a 4C/8T Core i7). Conversely, even if HyperThreading only gives an estimated 30% improvement, & even if you assume that using physical cores vs. HT adds 100% performance, you don't always see a 4C/4T CPU (or even a 4C/8T CPU running quad-threaded applications) running 50-55% faster than a 2C/4T CPU.
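If it helps to see that arithmetic laid out, here's a minimal Python sketch of the toy model behind those scenarios. The per-person piece counts, the 30% HyperThreading uplift, and the bin caps are all the made-up numbers from above, not real CPU figures:

```python
# Toy model of the "pieces" analogy above. Every number is an illustrative
# assumption taken from the scenarios, not a measurement of any real CPU.
HT_UPLIFT = 1.30   # assumed ~30% gain from running 2 threads on 1 core

def group_output(per_person_pieces, workers, hyperthreading=False, bin_limit=None):
    """Pieces moved in one pass: per-person load times workers, with an
    optional HyperThreading-style uplift and an optional bin-size cap."""
    per_person = per_person_pieces * (HT_UPLIFT if hyperthreading else 1.0)
    if bin_limit is not None:
        per_person = min(per_person, bin_limit)
    return per_person * workers

# Scenario 1: base load, all workers, no teams (no Turbo, no HT).
a1 = group_output(2500, workers=2)                          # 5,000
b1 = group_output(1900, workers=4)                          # 7,600
print(f"Scenario 1: B leads by {b1 / a1 - 1:.0%}")          # ~52%

# Scenario 2: only Group A works in teams (HT); both still at base load.
a2 = group_output(2500, workers=2, hyperthreading=True)     # 6,500
print(f"Scenario 2: B leads by {b1 / a2 - 1:.0%}")          # ~17%

# Scenario 3: both groups work in teams.
b3 = group_output(1900, workers=4, hyperthreading=True)     # 9,880
print(f"Scenario 3: B leads by {b3 / a2 - 1:.0%}")          # ~52%

# Scenario 5: one worker each at full "Turbo", bins capped at 4,000 pieces.
a5 = group_output(4000, workers=1, bin_limit=4000)          # 4,000
b5 = group_output(4200, workers=1, bin_limit=4000)          # 4,000 -> dead heat
print(f"Scenario 5: B leads by {b5 / a5 - 1:.0%}")          # 0%
```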

Still, what you're going to find is that moving from a 2C/4T CPU to a 4C/8T CPU can only provide a big boost in performance if:
  1. You are consistently using applications that can take advantage of the extra 4 threads; or, with quad-threaded applications, you are able to feed more data into each core/thread because you have more L2 cache available;
  2. You have a significant increase in L3 cache available per core/thread to ensure that each core/thread isn't waiting for additional instructions to be fed into its L2 cache. This will only matter, however, if the cores were consistently idling to wait for additional data from the L3 cache to be written to the L2 caches, so the actual benefit will probably vary from application to application.
  3. You see significant increases in core clock frequencies over the older CPU, which also requires being able to run cool enough to squeeze as many Turbo steps as possible out of your CPU.
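As a rough way to tie those conditions together, here's a small estimator sketch; the sustained clocks and the 30% HyperThreading figure are hypothetical placeholders, and real results will depend entirely on the workload and the cooling:

```python
# Very rough estimator for moving from a 2C/4T part to a 4C/8T part,
# assuming performance scales with usable threads (HT threads counted at ~30%)
# multiplied by the sustained clock the chip can actually hold under load.
# All inputs below are made-up placeholders, not real CPU specs.

def effective_units(cores, threads, app_threads, ht_uplift=0.30):
    used = min(app_threads, threads)
    on_cores = min(used, cores)
    on_ht = used - on_cores
    return on_cores + on_ht * ht_uplift

def speedup(app_threads, old_clock, new_clock):
    old = effective_units(2, 4, app_threads) * old_clock   # 2C/4T predecessor
    new = effective_units(4, 8, app_threads) * new_clock   # 4C/8T newcomer
    return new / old

print(speedup(app_threads=1, old_clock=3.5, new_clock=3.4))  # ~0.97: extra cores don't help
print(speedup(app_threads=4, old_clock=3.0, new_clock=2.7))  # ~1.38: real cores beat HT
print(speedup(app_threads=8, old_clock=2.6, new_clock=2.2))  # ~1.69: only if it stays cool enough
```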
 
That's why they do this by improving the nm fabrication process. Where did we say anything about magically making it happen... there is nothing magical about engineering, and you are able to see the improvements being made.

Again, though, incremental adjustments on the same manufacturing process usually manifest as slight improvements to the CPU -- typically the same # of cores but slightly increased clock frequencies, like we saw in the small step from Skylake (6th-generation, 14nm fabrication) to Kaby Lake (7th-generation, 14nm fabrication): same fab, same instruction sets (so no improvement in IPC), & very small increases in clock speeds (Skylake-U models ranged from 2.2 to 2.6 GHz, Kaby Lake-U ranged from 2.4 to 2.8 GHz; increases were 7-9% tops).
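For anyone checking that 7-9% figure, it's just the ratio of the quoted clock ranges:

```python
# Generation-over-generation base-clock bump, using the ranges quoted above.
skylake_u  = (2.2, 2.6)   # GHz, low and high end of the -U range
kabylake_u = (2.4, 2.8)
for old, new in zip(skylake_u, kabylake_u):
    print(f"{new / old - 1:.1%}")   # ~9.1% and ~7.7% -- single-digit gains either way
```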

Yes, it's nice that they managed to squeeze 4 physical cores in, but they had to cut the speed of each core to do so. Having a 4C/8T CPU can be nice, if your applications can take advantage of the additional 4 threads; if not, then your 4C/8T CPU is being treated just like a 4C/4T CPU by the application -- or worse (from this perspective) like a 2C/4T CPU (just like its predecessor). In that kind of situation, the speed reduction for these chips could hinder their performance.

You keep bringing Turbo into the picture... you are obsessed with it. An application doesn't have to use all 4 cores for the computer to feel faster; the OS will distribute the load across the available resources. If you want synthetic performance, you won't be looking at a U-line processor anyway. The improvement is there, the extra performance is there on paper, so why keep banging your head against the wall? Let's wait for the benchmarks.

Maybe because that is part of the performance? Aside from the number of cores, these CPUs have no edge over their predecessors, & without Turbo steps they'll run slower.

But that's the problem: not only are they not designed for high-end laptops, they're going to go into designs that already have heat issues. Reddit's /r/Dell forum had so many threads about heat-related issues with the XPS 13 models (which use the i7-7560U listed above) -- & not just from gamers trying to use them, but from people hitting 50-60C at idle & 90-100C when surfing the Web or using office applications -- that they set up a "superthread" with suggestions on how to handle the non-gaming-related heat issues. If a 2C/4T, 15W TDP CPU has trouble with overheating under "light" usage, a 4C/8T, 15W TDP CPU is going to have problems too.
 
You do know that each manufacturer uses its own cooling solution and computer setup, right? Dell has been dropping the ball for a while now, from their top line down; I'm not going Dell again anytime soon, especially after all the issues I've seen firsthand.

Again... let's wait for benchmarks; we are just hitting our heads against a wall.
 