Intel 9th-gen desktop lineup is here, finally fleshes out the Coffee Lake Refresh

onetheycallEric

Staff
In brief: Intel's belated 9th-gen lineup, also known as Coffee Lake Refresh, brings up to eight cores to desktops and laptops. Support for denser DRAM, solder TIM, improved Wi-Fi, Turbo Boost for the i3 models, plus the new F and KF SKUs round out the notables. The chips should be compatible with current 300-series motherboards.

Intel's Coffee Lake Refresh, which makes up its 9th-gen processor line, has been a long time coming; Intel rolled out the i9-9900K and its i7 and i5 K-SKU brethren late last year, but the launch left out many of the mid- to low-end models. As of now, Intel's 9th-gen desktop lineup is complete, arriving alongside the new H-series mobile parts powering 6-core laptops from multiple vendors, with 8-core models soon to come.

Like the i9-9900K, the chips fleshing out Intel's 9th-gen lineup are built on its 14nm++ manufacturing process. Being another extension of Intel's Skylake architecture, the refresh doesn't bring a wealth of new features. Aside from the higher clocks within the same TDP envelope that come with refined silicon, there are things like solder TIM on the K-series. One point of interest is the inclusion of Turbo Boost support for Core i3 chips. This is undoubtedly meant to bolster Intel's defense of the budget segment, where AMD has been a considerable threat to Intel's market share with its Ryzen 3 and APU offerings.

Intel is presumably hoping its full 9th-gen family will help stave off an encroaching AMD and its looming 7nm Ryzen 3000 series. Alas, that's probably a job better suited for Intel's 10nm chips. Nevertheless, Intel's 9th-gen lineup will drop into existing 300-series boards (likely requiring a BIOS update) and will offer more 8-core models, support for up to 128GB of RAM, improved Wi-Fi, and Optane Memory support extended down to the entry-level Pentiums and Celerons.

The 9th-gen series sees Intel continue its well-known price and feature segmentation, as well as introduce its newest "F" and "KF" suffixes. These models are stripped of integrated graphics, a configuration Intel only recently began testing the waters with. Pentiums and Celerons are now the only dual-core models, as the i3 has gained two more cores in recent years, and Intel has extended Turbo Boost support to those value-minded chips as well.

Intel says the new processor family is available from retailers immediately; however, color us skeptical given Intel's persistent 14nm shortages. We'll be keeping an eye on pricing and availability.


 
When do we need better CPUs, GPUs and RAM for the i3-i9 9xxx series? When does Intel come out with PCIe 4.0-5.0 motherboards?
Can games and programs run better with PCIe 4.0-5.0, or do we have to wait years like we did with PCIe 1, 2 and 3? What games can benefit from that extra bandwidth? Can Windows 10 run from scratch on day one if we put in an AMD Ryzen 3000 motherboard with PCIe 4.0 and better SSD read speeds?
Will there be PCIe 5.0 this year?
 
When do we need better CPUs, GPUs and RAM for the i3-i9 9xxx series? When does Intel come out with PCIe 4.0-5.0 motherboards?
Can games and programs run better with PCIe 4.0-5.0, or do we have to wait years like we did with PCIe 1, 2 and 3? What games can benefit from that extra bandwidth? Can Windows 10 run from scratch on day one if we put in an AMD Ryzen 3000 motherboard with PCIe 4.0 and better SSD read speeds?
Will there be PCIe 5.0 this year?
More realistically, PCIe 4.0 needs to reach the mainstream first.
 
We're getting old wine in new bottles. Thanks, but no!
These are still the same bug-ridden processors. Intel benchmarks them without Spectre mitigations. At home, those mitigations will be installed automatically (note that not all of the bugs can be mitigated!) and your PC will become slower with each newly uncovered bug. Calling them bugs is already a lie, though, as Intel was simply cheating and has now been caught. Their response: keep cheating and deny it (just benchmark as if nothing had ever happened). This only works as long as the journalists (free trips and samples) play along...
 
We're getting old wine in new bottles. Thanks, but no!
These are still the same bug-ridden processors. Intel benchmarks them without Spectre mitigations. At home, those mitigations will be installed automatically (note that not all of the bugs can be mitigated!) and your PC will become slower with each newly uncovered bug. Calling them bugs is already a lie, though, as Intel was simply cheating and has now been caught. Their response: keep cheating and deny it (just benchmark as if nothing had ever happened). This only works as long as the journalists (free trips and samples) play along...
And yet, there are no known exploits using Spectre; I mean active ones in the wild, not researchers' proof-of-concept ones.
https://v8.dev/blog/spectre - V8 developers blog (V8 is used in Chrome)
So on one hand, a lot of bugs; on the other, no active exploits. Yet Intel CPUs suffer a performance penalty... but in which games? Is there a list of those? How big is the impact?
Oh, I know there is one in the server space (an impact, I mean), especially in tasks with a lot of I/O traffic and database scenarios with many small reads and writes, because context switching in the kernel is slower after the mitigations. But does that impact games? No, not really; some loading times may be a bit longer and network performance may suffer (on 10 Gbit and faster connections). That is all there is to it.
Frankly, even with all mitigations enabled, Intel CPUs are still faster than AMD's. Sure, they suffer in some scenarios, but a 5GHz Intel is still faster than a 4.35GHz 2700X (core count being equal) in any task presented. They cost more and are worse in perf/$, but if you have the need and the money to spend...
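For anyone who wants to check what is actually enabled on their own box rather than argue in the abstract, recent Linux kernels report per-vulnerability mitigation status through sysfs. A minimal sketch, assuming Linux with kernel 4.15 or newer (there is no equivalent path on Windows):

```python
#!/usr/bin/env python3
"""Print the kernel's reported CPU-vulnerability mitigation status (Linux only)."""
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def main() -> None:
    if not VULN_DIR.is_dir():
        print("sysfs vulnerability interface not found (old kernel, or not Linux)")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        # Each file holds a one-line status such as
        # "Mitigation: ..." / "Vulnerable" / "Not affected".
        status = entry.read_text().strip()
        print(f"{entry.name:20} {status}")

if __name__ == "__main__":
    main()
```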
 
And yet, there are no known exploits using Spectre
And yet, there are no known exploits using Spectre...? Except those published by the researchers themselves, of course.
The nasty thing about those bugs is that they make it nearly impossible to detect exploitation. If you were running your webpage on one of the big server farms and somebody on the same machine was "snooping" all your data and passwords, you wouldn't be able to detect it. Relying on someone going public with their "findings" is just as illusory.
 
The i9-9900T interests me as an always-on Plex server at home: 8 cores/16 threads with a TDP of 35 watts. Very tempting, just a shame about the price...
 
The i9-9900T interests me as an always-on Plex server at home: 8 cores/16 threads with a TDP of 35 watts. Very tempting, just a shame about the price...
There has already been a discussion about the "dishonesty" of that offer. If you actually happened to put them all to use (including "hyper-threading"), 35W wouldn't be nearly enough. Ergo, you get plenty of cores that are doomed to do nothing. In other words: they are playing you for suckers.
 
There has already been a discussion about the "dishonesty" of that offer. If you actually happened to put them all to use (including "hyper-threading"), 35W wouldn't be nearly enough. Ergo, you get plenty of cores that are doomed to do nothing. In other words: they are playing you for suckers.
OK, so you're saying AMD has something better to offer me? I don't mind the cores not boosting up massively; the threads are what count for workloads like video transcoding.
 
What is new or refreshed here that can actually perform better than the 9900K? And do any of them have a real in-silicon fix for Meltdown and Spectre yet? Intel has been disappointing on multiple levels for many years now, and its high prices are not justified, other than by the fact that it can't even produce enough chips to avoid surrendering market share.
 
What is new or refreshed here that can actually perform better than the 9900K? And do any of them have a real in-silicon fix for Meltdown and Spectre yet? Intel has been disappointing on multiple levels for many years now, and its high prices are not justified, other than by the fact that it can't even produce enough chips to avoid surrendering market share.
This is the rest of the 9000 lineup... the 9900 is the high end... obviously nothing here will beat it!
 
There has already been a discussion about the "dishonesty" of that offer. If you actually happened to put them all to use (including "hyper-threading"), 35W wouldn't be nearly enough. Ergo, you get plenty of cores that are doomed to do nothing. In other words: they are playing you for suckers.
OK, so you're saying AMD has something better to offer me? I don't mind the cores not boosting up massively; the threads are what count for workloads like video transcoding.
Video transcoding is preferably done in hardware. A dedicated engine can do it much faster while consuming only a fraction of the energy. If you insist on using "more flexible" hardware (for weird codecs), then the GPU would be the better choice.
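To make that trade-off concrete, here is a minimal Python sketch that shells out to ffmpeg twice: once with the libx264 software encoder and once with Intel's Quick Sync hardware encoder (h264_qsv). It assumes ffmpeg is on the PATH and built with both encoders, and input.mkv is a hypothetical source file; NVENC users would swap in h264_nvenc.

```python
import subprocess

SRC = "input.mkv"  # hypothetical source file

def transcode(codec: str, out: str, extra: list[str]) -> None:
    """Run a 1080p H.264 transcode with the given encoder."""
    cmd = [
        "ffmpeg", "-y", "-i", SRC,
        "-vf", "scale=-2:1080",   # scale to 1080p, keep aspect ratio
        "-c:v", codec, *extra,
        "-c:a", "copy",           # pass the audio through untouched
        out,
    ]
    subprocess.run(cmd, check=True)

# Software encode: best quality per bitrate, heavy CPU load.
transcode("libx264", "sw.mkv", ["-preset", "medium", "-crf", "20"])

# Hardware encode via Intel Quick Sync: far faster and lighter on the
# CPU, but typically worse quality at the same bitrate.
transcode("h264_qsv", "hw.mkv", ["-global_quality", "20"])
```

The software pass will peg all cores while the Quick Sync pass barely touches them, which is exactly the distinction being argued here.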
 
Video transcoding is preferably done in hardware. A dedicated engine can do it much faster while consuming only a fraction of the energy. If you insist on using "more flexible" hardware (for weird codecs), then the GPU would be the better choice.
And the software has to be coded to fully use the CPU/GPU!
 
Video transcoding is preferably done in hardware. A dedicated engine can do it much faster while consuming only a fraction of the energy. If you insist on using "more flexible" hardware (for weird codecs), then the GPU would be the better choice.
So rather than show me AMD's competing product, you instead tell me to just use different software?

FYI, I am specifically talking about Plex and playing back 4K HDR content with it. I've already been trying out hardware acceleration by sticking a 1030 into the PC running it, and it's hit and miss whether it uses the GPU for transcoding.

If you can show me the competing AMD product (45W TDP, 8 cores, 16 threads), that would be great, as I'm sure it's much cheaper than Intel's.
 
So rather than show me AMD's competing product, you instead tell me to just use different software?

FYI, I am specifically talking about Plex and playing back 4K HDR content with it. I've already been trying out hardware acceleration by sticking a 1030 into the PC running it, and it's hit and miss whether it uses the GPU for transcoding.

If you can show me the competing AMD product (45W TDP, 8 cores, 16 threads), that would be great, as I'm sure it's much cheaper than Intel's.

I think what you're looking for would be the Ryzen 2700E

https://www.amd.com/en/products/cpu/amd-ryzen-7-2700e

I think it sells for under $300... but prices and availability vary from place to place... and from time to time :)

Of course, the 3700E is probably on its way - might want to wait for that...
 
I think what you're looking for would be the Ryzen 2700E
I know; I was making a point that xxLCxx went off on one, calling this CPU pointless without comparing it to anything. Even after being called out on it, he still didn't link me to a competitor.
I think it sells for under $300... but prices and availability vary from place to place... and from time to time :)
I'm actually finding it nearly impossible to find one, to be honest. Amazon, Scan, all the usual places I shop: I can't find it at all. Not even listed.
Of course, the 3700E is probably on its way - might want to wait for that...
Definitely, I just have to pray my current Microserver holds out until then.
 
There has already been a discussion about the "dishonesty" of that offer. If you actually happened to put them all to use (including "hyper-threading"), 35W wouldn't be nearly enough. Ergo, you get plenty of cores that are doomed to do nothing. In other words: they are playing you for suckers.
OK, so you're saying AMD has something better to offer me? I don't mind the cores not boosting up massively; the threads are what count for workloads like video transcoding.
Video transcoding is preferably done in hardware. A dedicated engine can do it much faster while consuming only a fraction of the energy. If you insist on using "more flexible" hardware (for weird codecs), then the GPU would be the better choice.

The quality of the output of an ASIC video encoder used in consumer hardware is much worse than the output of a good software encoder. It's only good for things like gameplay streaming, when CPU and GPU resources are needed elsewhere.
Unfortunately, I haven't found any video encoder that uses the general-compute capabilities of modern GPUs with good results, so CPU encoding is still the best way to go when quality matters most.

Still, a 35W CPU isn't likely to be a transcoding beast...
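The quality gap being described can be measured rather than eyeballed: ffmpeg's libvmaf filter scores a transcode against its source. A sketch assuming an ffmpeg build with libvmaf enabled and the hypothetical hw.mkv/sw.mkv outputs from the earlier example (the expected input order of distorted vs. reference has varied between ffmpeg versions, so check your build's docs):

```python
import subprocess

def vmaf_line(distorted: str, reference: str) -> str:
    """Return ffmpeg's libvmaf summary line for a transcode vs. its source."""
    cmd = [
        "ffmpeg", "-i", distorted, "-i", reference,
        "-lavfi", "libvmaf",   # requires ffmpeg built with --enable-libvmaf
        "-f", "null", "-",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    # ffmpeg logs to stderr; the score appears as "VMAF score: NN.NN".
    return next((line for line in result.stderr.splitlines() if "VMAF" in line),
                "no VMAF line found; check that this ffmpeg build has libvmaf")

print(vmaf_line("hw.mkv", "input.mkv"))  # hardware encode
print(vmaf_line("sw.mkv", "input.mkv"))  # software encode
```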
 
The quality of the output of an ASIC video encoder used in consumer hardware is much worse than the output of a good software encoder. It's only good for things like gameplay streaming, when CPU and GPU resources are needed elsewhere.
Unfortunately, I haven't found any video encoder that uses the general-compute capabilities of modern GPUs with good results, so CPU encoding is still the best way to go when quality matters most.

Still, a 35W CPU isn't likely to be a transcoding beast...
Meh, my current Intel Xeon E3-1265L v2 can do 3-4 1080p transcodes at the same time before it all becomes a stuttery mess. I reckon a modern 8-core/16-thread chip could handle considerably more.
 
I know; I was making a point that xxLCxx went off on one, calling this CPU pointless without comparing it to anything. Even after being called out on it, he still didn't link me to a competitor.

I'm actually finding it nearly impossible to find one, to be honest. Amazon, Scan, all the usual places I shop: I can't find it at all. Not even listed.

Definitely, I just have to pray my current Microserver holds out until then.
Just buy the cheapest Ryzen 2700, downclock it to 2700E clocks and lower the voltages to match. They are the same silicon; maybe, just maybe, binned differently, but I doubt that. Even if you can't quite reach the power use of the "E" version, you will get within a few watts of it for certain.
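Voltage offsets still have to be set in the BIOS, but the clock half of that recipe can be scripted on Linux through the cpufreq interface. A minimal sketch, run as root; the 2.8GHz cap matching the 2700E's base clock is an assumption about the target you'd pick:

```python
#!/usr/bin/env python3
"""Cap the maximum CPU frequency via Linux cpufreq (run as root).

Voltage tuning still happens in the BIOS; this only handles the
clock side of a 2700 -> 2700E-style downclock.
"""
from pathlib import Path

TARGET_KHZ = 2_800_000  # assumed 2700E-like 2.8 GHz cap

for policy in Path("/sys/devices/system/cpu/cpufreq").glob("policy*"):
    (policy / "scaling_max_freq").write_text(str(TARGET_KHZ))
    print(f"{policy.name}: capped at {TARGET_KHZ // 1000} MHz")
```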
 