Intel Haswell CPU lineup leaked online, led by flagship Core i7-4770K

December 12, 2012, 1:30 PM

It has come to our attention that Intel will be releasing 14 different Haswell desktop processors spanning the Core i5 and Core i7 brands next year. The next-generation chips will use a new socket, so if you're looking to upgrade from your existing setup, know that you'll have to spring for a new motherboard with a Lynx Point chipset to do so.

VR-Zone got their hands on the Intel lineup, which includes six standard processors and eight low-power CPUs. The top-of-the-line chip will be the Intel Core i7-4770K, which features a clock speed of 3.5GHz on all four cores. The chip also supports Hyper-Threading, so expect eight processing threads in total. It's capable of ramping up to 3.9GHz using Turbo Boost and carries 8MB of cache.

Interestingly enough, the i7-4770K has a higher TDP than current-generation Ivy Bridge chips, 84 watts versus 77 watts, despite being designed to take full advantage of the power-saving benefits of the tri-gate transistor technology first used in Ivy Bridge.

Furthermore, since this is a “K” series SKU, users should expect to be able to overclock the chip without much effort. The only other K series chip in the lineup is the i5-4670K that’s clocked at 3.4GHz with 6MB of total cache on board. Only the Core i7 chips feature eight threads; all of the other i5 chips are configured as 4/4 or 2/4 (cores/threads).

Haswell is expected to debut sometime in the first half of 2013. It uses the same 22nm process that Intel introduced with Ivy Bridge but is built using a new microarchitecture that will improve performance.




User Comments: 42

slh28, TechSpot Paladin, said:

Same clock speed as the 3770K, same cores/threads, higher TDP, wow that really is disappointing. I don't know the exact percentage but very, very few people will care about the improved IGP on this chip.

1 person liked this | Per Hansson, TS Server Guru (Staff), said:

@slh28, well keep in mind that IPC improvements between generations can be large, just because the new CPU has higher TDP but the same number of cores and threads does not mean it will not be significantly faster.

And the number of end-users who actually use the IGP probably far outnumbers those who don't. Sure, those numbers are reversed for enthusiasts, but at least I'm glad Intel is finally pushing for good integrated graphics!

I'll probably wait for the successor to LGA 2011 (if there is one in the not-too-distant future?)

2 people like this | Blue Falcon said:

slh28,

I am not sure what you were expecting? With little competition from AMD, Intel will maximize their yields to reduce costs/improve profitability. Not ramping up clocks will help to achieve this goal. This is why it's so important that we have a competitive AMD. Otherwise we end up with a slowed pace of progress in the CPU space, as we have been seeing since Core i7 920 days frankly. There will still be an IPC increase of about 10%. Were you expecting a processor 20-30% faster than IVB 3770k? There also shouldn't be a material decrease in power consumption since Haswell uses a more complex GPU and is still manufactured on the same 22nm node as IVB is.

Speaking of the original first-generation Core i7: for a gamer with a single flagship GPU who overclocks it to 3.75GHz+, it's still not a bottleneck in modern games:

[link]

That means for anyone with a Core i7 920 @ 3.8-4.0ghz, there is little reason to upgrade outside of motherboard features and reduction of power consumption in overclocked states. That's not necessarily a bad thing as CPUs today last longer than ever. That $ can be spent on a larger/faster SSD or a GPU upgrade.

For anyone rocking i5-2500k/2600k/3570k/3770k @ 4.5ghz+, Haswell will obviously be a waste of $ performance wise. Of course if one resells his/her parts and minimizes the upgrade costs, it's still fun to play with new hardware

H3llion, TechSpot Paladin, said:

Will AMD ever recover? I'd like to see them push their CPUs more than they are doing now; they are really slacking and Intel will be/is already taking advantage of this ... I want more CPU breakthroughs, not these "little" improvements each year.

I wonder, in 10-15 years when other forms of computing arrive (optical/quantum), whether there will be much competition or whether Intel will still be the "top dog"...

captainawesome said:

I hope Haswell is terrible. But that's mainly because I just bought a 3570K.

Oh and AMD needs a break from this onslaught from Intel.

cliffordcooley, TechSpot Paladin, said:

For anyone rocking i5-2500k/2600k/3570k/3770k @ 4.5ghz+, Haswell will obviously be a waste of $ performance wise. Of course if one resells his/her parts and minimizes the upgrade costs, it's still fun to play with new hardware
Yep, I'm happy with my 2600K, but that does not mean I am not enthusiastic about building a new machine. Even if Intel has slowed down because of AMD's lack of performance, that doesn't irritate me in the least. And since everyone has praised AMD for their IGP performance, I can only imagine that Intel is taking this opportunity to close the gap between them and AMD.

slh28, TechSpot Paladin, said:

I am not sure what you were expecting? With little competition from AMD, Intel will maximize their yields to reduce costs/improve profitability.

Yup, totally agree with everything you said; it's just disappointing that Intel are holding back due to the lack of competition. I'm pretty sure this could have been a 6-core/12-thread 4GHz chip overclockable to 5GHz (with the IHS issue sorted).

Speaking of the original first-generation Core i7: for a gamer with a single flagship GPU who overclocks it to 3.75GHz+, it's still not a bottleneck in modern games:

[link]

That means for anyone with a Core i7 920 @ 3.8-4.0ghz, there is little reason to upgrade outside of motherboard features and reduction of power consumption in overclocked states.

I've held off for the last 2 generations, still looking for a decent reason to upgrade. Was expecting a bit more from Haswell, being a "tock" and all.

hahahanoobs said:

Blue Falcon said:

slh28,

I am not sure what you were expecting? With little competition from AMD, Intel will maximize their yields to reduce costs/improve profitability. Not ramping up clocks will help to achieve this goal.

LMAO. You actually think anything has changed at Intel with a chip that has been in development for the past 2 or so years, because AMD said they were bowing out less than 6 months ago? Oh god....

hahahanoobs said:

Slh28 said:

I'm pretty sure this could have been a 6 core/12 thread 4Ghz chip overclockable to 5Ghz (with the IHS issue sorted).

LOL

Darth Shiv said:

Blue Falcon said:

slh28,

I am not sure what you were expecting? With little competition from AMD, Intel will maximize their yields to reduce costs/improve profitability. Not ramping up clocks will help to achieve this goal.

LMAO. You actually think anything has changed at Intel with a chip that has been in development for the past 2 or so years, because AMD said they were bowing out less than 6 months ago? Oh god....

Do you think Intel has thought AMD has been a threat for the last 3+ years? Oh god...

amstech, TechSpot Enthusiast, said:

Wow.

Has Intel been milking it the last 3 generations, or was Bloomfield just that good?

EEatGDL said:

Slh28 said:

"I'm pretty sure this could have been a 6 core/12 thread 4Ghz chip overclockable to 5Ghz (with the IHS issue sorted)."

Again with the GHz myth? Dude, I thought we'd passed that stage; there are other parts of the microprocessor architecture that can be improved to gain performance beyond just adding more pipeline stages, higher clock speeds or more cores.

*Pentium's philosophy of higher and higher clock -> FAIL, power wall

*Intel's super-deep pipeline -> FAIL, non-linear program execution (lots of jumps) losing performance

*Putting more cores without other intercommunication improvements -> FAIL, potential bottleneck if not handled well

In most reviews here we've seen that overclocking in games will barely get you 1-3 FPS on average; and in general computing, DRAM and SSDs are still too slow for the processor to gain much just by improving the processor itself. A big cache helps, but with a big working set it fills up pretty quickly and seems smaller to the processor once it needs a new instruction or piece of data that's not in the cache. So they have to hit other areas rather than just wildly adding more and clocking faster.

dividebyzero, trainee n00b, said:

Same clock speed as the 3770K, same cores/threads, higher TDP, wow that really is disappointing. I don't know the exact percentage but very, very few people will care about the improved IGP on this chip.

The 3770K's TDP is 95 watts, so that's an 11 watt (or 11.5%) reduction in TDP while adding a sizeable bump in IGP performance. Whether people care about the IGP is really neither here nor there in this context, and if integrated graphics is something "very few people will care about", then someone should tell AMD they're heading down a cul-de-sac. There are plenty of examples of OEM prebuilts that ship with the IGP as either the default or sole graphics option... not everyone needs to play games at HD/HD+ resolutions with 8xMSAA enabled... assuming they game at all. As such, I'd note the Core i5-4570T (see the link in the above article), which manages to pack a 2.9GHz base clock with a 3.6GHz turbo into a 35 watt TDP. I'd call that reasonably impressive, all things considered.

In most reviews here we've seen that overclocking in games will barely get you 1-3 FPS on average; and in general computing, DRAM and SSDs are still too slow for the processor to gain much just by improving the processor itself. A big cache helps, but with a big working set it fills up pretty quickly and seems smaller to the processor once it needs a new instruction or piece of data that's not in the cache. So they have to hit other areas rather than just wildly adding more and clocking faster.

Yes and no. The lack of substantial increases seen in some games (not all, I might add) is generally down to poor/unoptimized coding - generally the CPU <---> RAM and/or CPU <---> GPU and/or GPU <---> VRAM subsystems. Core frequency generally scales better with code optimized for a particular setup. Remove graphics from the equation, and optimize for bandwidth, and you'll see why Power7 is already at 4+GHz (up to 5.2GHz for the z196) with an eye on 5.5GHz for the zNext variety.

St1ckM4n said:

Well, this is definitely disappointing as there is no real 'tock'. My memory is bad, but wasn't this supposed to be a die shrink..?

It's to be expected, though. No pressure, and no requirement for better CPUs. I believe when the next-gen consoles ship, we will see a bigger increase.

1 person liked this | cliffordcooley, TechSpot Paladin, said:

My memory is bad, but wasn't this supposed to be a die shrink..?
No, the die shrink was Ivy from Sandy. It's now time for the architectural change on the same process.

dividebyzero, trainee n00b, said:

My memory is bad, but wasn't this supposed to be a die shrink..?

Nope. The next die shrink is Broadwell (14nm). 22nm was introduced with Ivy Bridge. Even Intel aren't rich enough to justify a single CPU series per process node.

EDIT: cliffordcooley the ninja !

It's to be expected, though. No pressure, and no requirement for better CPUs.

Pretty much. The percentage of software that fully utilizes four threads on the currently available architectures isn't exactly staggering, and we're not quite overrun with programs that take advantage of the instruction sets introduced in the last couple of years (AVX, FMA3, XOP etc.)... at least gaming is looking forward with the predominance of D3D11... umm

Adhmuz, TechSpot Paladin, said:

This is where I get to laugh hysterically. I've been rocking a 920 @ 4.2GHz since the year it came out, I guess 2009? And I have yet to need to change anything; even my two 5870s are holding steady. I used to think of upgrading with every new generation of CPU, but that seems pointless now. It's almost as if the industry is at a point where the technology at hand isn't being fully utilized, so why make anything significantly faster? That's leaving aside scientific research and servers, which always need more processing power. The end user, however, unless you're a super enthusiast, won't even see the difference. Also worth bringing up: wasn't DDR4 supposed to be brought to the table with Haswell? Until then I still see no reason to upgrade. I think I said the same thing last year when the last "new" i series came out.

dividebyzero, trainee n00b, said:

Also worth bringing up: wasn't DDR4 supposed to be brought to the table with Haswell?

Haswell-E/-EN/-EP - the HEDT/Server parts. DDR4 for mainstream desktop isn't scheduled until Broadwell (Haswell's successor). DDR4 isn't that big a deal in any case when you have DDR3-2800/-3000 kits now available.

1 person liked this | cliffordcooley, TechSpot Paladin, said:

EDIT: cliffordcooley the ninja !
Sorry about that DBZ!!

Let's just say I was supporting your comment before it was made.

Darth Shiv said:

Haswell-E/-EN/-EP - the HEDT/Server parts. DDR4 for mainstream desktop isn't scheduled until Broadwell (Haswell's successor). DDR4 isn't that big a deal in any case when you have DDR3-2800/-3000 kits now available.

Would be nice if they made the stock memory speeds a bit higher then! 1600MHz is a bit dated... they could run 1333 memory at 1333 when it's installed, etc., like they do for slower modules at the moment anyway.

St1ckM4n said:

Thanks for the info guys. 14nm Broadwell with DDR4 is probably where I'll be upgrading, then. I'm still on an i7 920 too, and it's a beast.

Per Hansson, TS Server Guru (Staff), said:

Also worth bringing up: wasn't DDR4 supposed to be brought to the table with Haswell?

Haswell-E/-EN/-EP - the HEDT/Server parts. DDR4 for mainstream desktop isn't scheduled until Broadwell (Haswell's successor). DDR4 isn't that big a deal in any case when you have DDR3-2800/-3000 kits now available.

Is there any info available yet for a timeframe when the enthusiast parts in this range will be released? (Successor to LGA 2011)

1 person liked this | dividebyzero, trainee n00b, said:

Is there any info available yet for a timeframe when the enthusiast parts in this range will be released? (Successor to LGA 2011)

Mid 2014 seems to be the prevailing consensus so far.

And a [link] for those interested.

1 person liked this | Archean, TechSpot Paladin, said:

+1 to Per's comments on better IGP.

I think DBZ has already 'taken care' of the misunderstandings regarding the new architecture. Just to add to that, I believe Haswell is targeted more towards improving parallelism, hence the TSX implementation and improvements in speculative multithreading (SpMT uses multiple cores to accelerate a single thread by speculatively splitting it into multiple threads that can be executed in parallel), etc. Also add improvements to branch prediction into the equation, which probably means it will handle misses a lot faster. So to sum it up: for someone with an IVB/SB system, Haswell may not bring a huge performance boost, but it will be a reasonable one.

I think the addition of MOVBE (for storage and networking applications) to Haswell is interesting, probably meaning Intel is seriously aiming for the mobile sector with Haswell <=10W SoCs (e.g. in tablets). So with the overall improvements in core architecture + GPU, Haswell will be a far more potent solution for tablets when compared to ARM SoCs. Personally, I'd prefer performance over battery life any day, so a Haswell-based tablet with 5 hours of battery is a reasonable package IMHO.

2 people like this | technogiant said:

If Intel is slowing development simply because they have little competition from AMD, then they are shooting themselves in the foot by killing the upgrade market.

Sales are not only generated by beating your competitors but also by substantially beating your previous-generation product so people want to upgrade... With my 2700K @ 5.5GHz, it looks like I won't be giving Intel any more money for at least the next 5 years at this rate.

slh28, TechSpot Paladin, said:

Whether people care about IGP is really neither here nor there in this context, and if integrated graphics is something "very few people will care about" , then someone should tell AMD they're heading down a cul-de-sac.

I meant this chip in particular. People who buy a 4770K are either gamers who have a gaming GPU, or people who use the CPU for productivity purposes, who will have either a workstation GPU or will be fine with a basic IGP. Better IGPs on lower-end CPUs are definitely a good idea though.

1 person liked this | dividebyzero, trainee n00b, said:

@slh28

Unfortunately or not, all the 4C CPUs are exactly the same die, with exactly the same IGP. The only thing that separates a 4770K from a low-power i5 is the binning process... so you could say that the IGP is a zero-cost extra. If Intel had to utilize a second wafer line solely for K series CPUs with low/no IGP, I'm betting that the price would be passed on to the end user... and paying more for less generally doesn't play.

Don't use IGP myself, but if I was to ever need to sell an old discrete graphics card to fund a new one, at least Haswell's HD4600 would make needing a placeholder card redundant - if it can drive 4K displays, then a paltry 2560x1440 shouldn't represent a problem.

crazyboots said:

4 cores? What the... ahh, maybe it should be a 12-core or 16-core system.

Ravey said:

Well, this is kind of frustrating for me. I was going to slowly upgrade my rig over the first few months of the new year, starting with a Z77 motherboard. But if this new Haswell chip is killing off that socket, am I better off waiting?

I don't want to spend £1,000 (approx.) on new hardware in 2013 only to have to spend it all again in 2 years' time...

Also will this mean there will be DDR4 memory on the horizon as well?

Guest said:

Go ahead and buy one. The i7 920 is still an amazing chip. The 3570K is substantially faster, but no programs today can utilize that power. The i5 will probably be able to game for at least another half decade. I JUST upgraded from a Core 2 Quad, and only because the motherboard gave out.

1 person liked this | JC713 said:

Same clock speed as the 3770K, same cores/threads, higher TDP, wow that really is disappointing. I don't know the exact percentage but very, very few people will care about the improved IGP on this chip.

I don't find a need to upgrade from my current i7-740QM; I will just upgrade to Haswell since the graphics card in my laptop is lacking. I find it really disappointing that Intel can't even make an 8-core variant... when you see startups creating 50-core ARM chips.

JC713 said:

slh28,

I am not sure what you were expecting? With little competition from AMD, Intel will maximize their yields to reduce costs/improve profitability. Not ramping up clocks will help to achieve this goal. This is why it's so important that we have a competitive AMD. Otherwise we end up with a slowed pace of progress in the CPU space, as we have been seeing since Core i7 920 days frankly. There will still be an IPC increase of about 10%. Were you expecting a processor 20-30% faster than IVB 3770k? There also shouldn't be a material decrease in power consumption since Haswell uses a more complex GPU and is still manufactured on the same 22nm node as IVB is.

Speaking of the original first-generation Core i7: for a gamer with a single flagship GPU who overclocks it to 3.75GHz+, it's still not a bottleneck in modern games:

[link]

That means for anyone with a Core i7 920 @ 3.8-4.0ghz, there is little reason to upgrade outside of motherboard features and reduction of power consumption in overclocked states. That's not necessarily a bad thing as CPUs today last longer than ever. That $ can be spent on a larger/faster SSD or a GPU upgrade.

For anyone rocking i5-2500k/2600k/3570k/3770k @ 4.5ghz+, Haswell will obviously be a waste of $ performance wise. Of course if one resells his/her parts and minimizes the upgrade costs, it's still fun to play with new hardware

I disagree; I am running a 1st gen and the only bottleneck I have is my weak dedicated GPU.

Blue Falcon said:

UNKNOWN9122,

What do you disagree with? Hard real-world data? Most reviewers test a stock i7 920 @ 2.66GHz against an i7-3770K. Of course the latter would beat it in games. That's not how enthusiast gamers on this site use their parts, though. Many of us overclock. Take a Core i7 920 @ 3.8-4.2GHz and you'd be seriously hard pressed to tell the difference between that and an i7-3770K OC if you are only using 1 GPU. You'd need to start using 2 GPUs to really push the CPU to its limits. For most games today, you are going to be GPU bottlenecked by far - Hitman: Absolution, Sleeping Dogs, Far Cry 3, not to mention games like Metro: Last Light and Crysis 3 should be even more GPU demanding.

I mean, if you need the absolute fastest CPU for MMOs or Starcraft 2, where most of those titles are poorly threaded and benefit a lot from per-core IPC, then by all means upgrade. I will be upgrading from Sandy Bridge because I like to play with new parts, not because I will feel the difference in gaming speed.

dividebyzero,

"Don't use IGP myself, but if I was to ever need to sell an old discrete graphics card to fund a new one, at least Haswell's HD4600 would make needing a placeholder card redundant - if it can drive 4K displays, then a paltry 2560x1440 shouldn't represent a problem."

Ya, that's a good point. The problem is Intel is putting the GT2 series GPU into high-end desktop Haswells, with the much faster GT3 being reserved for mobile parts. Other than driving the displays, it's unlikely that Haswell's IGP on the desktop will amount to much. People who play less GPU-demanding titles like Civilization 5, Starcraft 2 or Minecraft are actually more likely to use an APU. It's not like someone with an i7-4770K is going to fire up BF3 or Crysis 3 on the IGP.

In those popular titles that many play on laptops, Intel's GPU actually performs the worst!

[link]

[link]

[link]

^ You can see that even if Intel's HD 4600 GPU is 2x faster than the HD 4000 series, it's still way too slow for modern gaming. Alternatively, you can extrapolate based on this chart:

[link]

For bare-minimum gaming of modern titles like FC3 or Sleeping Dogs, an HD 7750 is necessary. That GPU is 4.4x faster than the HD 4000 series.
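That extrapolation can be sketched in a couple of lines; the 2x and 4.4x figures are the claims made in this thread, not measured benchmark data:

```python
# Extrapolating from the thread's own figures (claims, not benchmarks):
# the HD 7750 is said to be 4.4x an HD 4000, and Haswell's HD 4600 is
# assumed (best case) to be 2x an HD 4000.
hd7750_vs_hd4000 = 4.4
hd4600_vs_hd4000 = 2.0

# Factor by which the HD 4600 would still trail the "bare minimum" card.
gap = hd7750_vs_hd4000 / hd4600_vs_hd4000
print(gap)
```

Even under that generous 2x assumption, the IGP would still land a bit over 2x short of the claimed bare-minimum discrete card.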

Support for 4K TVs on Haswell is pure marketing and has almost no practical application, since a 32-inch 4K LCD monitor for the PC costs $20,000+. Given how high the prices of 30-inch 2560x1600 monitors still are, it will be a miracle if 4K LCD monitors for the PC are affordable even for enthusiasts in the next 5 years. Most likely it will take longer.

[link]

4K is not at all a selling feature for this generation of CPU; maybe for a CPU that's out in 4-5 years it will be.

If someone really wanted a stop-gap budget card to drive their 30 inch 2560x1600 monitor, it's way cheaper to buy a $15 GT210 than to "upgrade to Haswell".

[link]

None of the points you provided are valid enough for upgrading to Haswell from modern i7s, especially from modern SB/IVB CPUs. You cited an example of an i5 that uses 35W, but even if a CPU uses 65W, the break-even on electricity cost will be something like 10-20 years compared with the cash outlay to upgrade.
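That break-even claim can be sanity-checked with rough arithmetic; the usage hours, electricity price, and upgrade cost below are illustrative assumptions, not figures from the thread:

```python
# Rough break-even estimate for a 30W TDP difference (65W vs 35W CPU).
# Usage, price, and upgrade cost are assumed values for illustration only.
watts_saved = 65 - 35        # W difference between the two CPUs
hours_per_day = 8            # assumed daily usage
price_per_kwh = 0.12         # assumed electricity price in USD
upgrade_cost = 300.0         # assumed cost of CPU + motherboard in USD

kwh_saved_per_year = watts_saved * hours_per_day * 365 / 1000
dollars_saved_per_year = kwh_saved_per_year * price_per_kwh
breakeven_years = upgrade_cost / dollars_saved_per_year
print(round(breakeven_years, 1))
```

Under these assumptions the payback works out to nearly three decades, supporting the point that electricity savings alone can't justify the outlay.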

Like I said, upgrading to Haswell will be mostly for those who are using very old CPUs like Core 2 Duo/Quad (65nm/45nm) or for gaming enthusiasts/overclockers who like playing with new parts and would have upgraded even if Haswell was only 15% faster.

Other features might be more important like Thunderbolt for external storage. But even that is stretching it.

Summer 2006 Intel gave us E6600 2.4ghz

Summer 2007 for the same price we got Q6600 2.4ghz (2x the cores)

Fall 2008 we got i7 920 that overclocked like mad and had a real tangible difference from Q6600.

Since then, it's been mostly stagnation on Intel side unless you ponied up for i7-3930X.

JC713 said:

UNKNOWN9122,

What do you disagree with hard real world data? Most reviewers test a stock i7 920 @ 2.66ghz against an i7-3770K. Of course the latter would beat it in games. That's not how enthusiast gamers on this site use their parts though. Many of us overclock. Take a Core i7 920 @ 3.8-4.2ghz and you'd be seriously hard pressed to tell the difference between that an an i7-3770K OC if you are only using 1 GPU. You'd need to start using 2 GPUs to really push the CPU to the limits. For most games today, you are going to be by far GPU bottlenecked - Hitman Absolution, Sleeping Dogs, Far Cry 3, not to mention games like Metro Last Light, Crysis 3 should be even more GPU demanding.

I mean if you need the absolutely faster CPU for MMOs or Starcraft 2 where most of those titles are poorly threaded and benefit a lot from IPC per core, then by all means upgrade. I will be upgrading from Sandy Bridge because I like to play with new parts, not because I will feel the difference in gaming speed.

hmm. something to think about lol(y)

3 people like this | dividebyzero, trainee n00b, said:

The problem is Intel is putting GT2 series GPU into high-end desktop Haswells, with the much faster GT3 being reserved for mobile parts. Other than driving the displays, it's unlikely that Haswell's IGP on the desktop will amount to much

1. I think Intel are well aware that for gamers their IGP is, and will continue to be, lagging behind discrete solutions, which is precisely why Intel offer Lucidlogix Virtu as an option. And Intel aren't known for their excellence in graphics driver implementation.

2. Enthusiast PC gamers are a niche market at best, and if consoles manage to drag their performance into the twenty-first century, that niche becomes smaller. More people are likely to use the IGP for video playback or to take advantage of the onboard encoder than for serious gaming;

People who play less GPU demanding titles like...[long...long...long rant about a point that no one else is disputing].. Support for 4K TVs on Haswell is pure marketing

Missed the point by a country mile.

If Haswell can run 4K displays, then anyone who needs display out capability during discrete card swap outs (or for troubleshooting discrete graphics hardware/driver issues) has an at hand solution. As I said before- a zero cost extra.

None of the points you provided are valid enough for upgrading to Haswell from modern i7s....[lengthy straw man argument]

Please point me to ANY statement ANYWHERE where I made that assertion. In fact, there isn't a single post by anyone in this thread taking that stance.

You sighted an example of an i5 that uses 35W but even if a CPU uses 65W, the break-even on electricity consumption cost will be like 10-20 years compared with the cash outlay to upgrade

Lateral thinking not your strong suit? Scenario #1: A 35W full-fat desktop CPU >>> passive cooling options >>> potent HTPC/SFF. Scenario #2: If Intel can fit a 4C desktop CPU + IGP into a 35W envelope, what are the chances of ULV mobile Haswell reaching the 10W target? What would be the bigger selling point: electricity draw from the wall, or the increased battery life from a 10W CPU+IGP?

Since then, it's been mostly stagnation on Intel side unless you ponied up for i7-3930X.

Personally I blame:

1. The other x86 licence holders for treating the business like an occasional hobby

2. 90%+ of the tech-buying public, who have no interest in what resides inside the hardware they buy

3. Software companies that have little interest in reworking code unless sponsored by an IHV

Intel have a lot of faults, but for all of them, they still offer competitive perf/$ across a vast range of options in a climate where their primary competitor has essentially been AWOL for six years. You are arguing for a quantum leap in computing performance with every Intel generation, so let's have a look at an alternate reality:

Intel releases Sandy Bridge/Ivy Bridge/Haswell at the clocks it can readily sustain rather than the anaemic 3-3.5GHz they launch with - let's say 4-4.5GHz. Intel also sells at the minuscule profit margin that AMD utilizes (so, take into account die size, yield, foundry capacity/amortization). Intel only offers unlocked CPUs (OEMs rely upon locked-down BIOS options). Intel offers 8C/16T HEDT parts (as opposed to server only) as the top-tier HEDT SKUs, and pricing is reduced accordingly down through the product stack. Intel sells chipsets for the same contract price as AMD. Intel removes the PCI-E lane restriction from mainstream CPUs and replicates the same on-die controller found on the server parts.

Taking those factors into consideration, you would see lower-binned Intel CPUs at ~<$30, and a stock 4+GHz 3770K at well under half price... with an Intel socket board costing no more than an AMD socket board, for a better feature set (PCI-E 3.0, native USB 3.0 (FM1/2 excepted), RST etc.)

Question: How long does AMD remain viable in any x86 market ?

Maybe you should be careful what you wish for.

JC713 said:

Like a boss!!!!!!

Archean, TechSpot Paladin, said:

@DBZ

I no longer believe AMD is a viable competitor for Intel; that place has been taken by ARM, due to huge demand for 'mobile' SoCs, and the game is a bit different, i.e. low margin/high volume. Intel has the performance lead by a huge margin and ARM has the power consumption advantage.

In the coming years, both sides will move towards the middle IMO, until they reach an optimum position where performance will be much improved without sacrificing too much in power consumption. Whoever reaches that stage first will be the winner, and keeping in mind the huge advantage in R&D and fabs, I'll put Intel in a slightly advantageous position. Besides, Intel with a mix of these two just needs to convey one thing to developers, that 'it is good enough', and the rest will take care of itself.

dividebyzero, trainee n00b, said:

Intel will be in the box seat while x86 remains the dominant ISA. I'd like to see how the parallelization and server initiatives shake out (industry adoption). Intel have x86 + GPGPU (Xeon Phi), while the ARM analogue would be Nvidia's Maxwell/Project Denver (GTX 800 series) ARMv8 + GPGPU for HPC (now confirmed for TSMC's 20nm node). The server/pro market - traditionally carrying inertia - seems to be making a reasonable push towards ARM, although competing against Intel in any presently x86-based ecosystem is going to be an uphill struggle for any vendor.

With Intel's S1200 recently released there seems no end of comparisons and crystal ball gazing...even AMD pretending they're at the grown-ups table. Interesting times ahead.

miluthui said:

Looks like I'll still stick with my Z77 upgrade then. I don't like the new socket 1150 at all; it sounds like such a step backward, and the Haswell CPU is not a big step up. Ivy Bridge is still the best choice in terms of performance and power efficiency.

gamoniac said:

...

Yup, totally agree with everything you said, it's just disappointing that Intel are holding back due to the lack of competition. I'm pretty sure this could have been a 6 core/12 thread 4Ghz chip overclockable to 5Ghz (with the IHS issue sorted)...

I am an AMD fan, but there is no evidence that Intel is slowing down innovation because of the lack of competition from AMD. Price-wise, it is bad news for consumers, but Intel continues to invest in R&D. In fact, at the current time, ARM poses a bigger threat to Intel than AMD does, at least in the mobile devices area. Keep in mind ARM is trying to enter the server market, too.

feriss said:

Same clock speed as the 3770K, same cores/threads, higher TDP, wow that really is disappointing. I don't know the exact percentage but very, very few people will care about the improved IGP on this chip.

As for me, I really care about the IGP. I do network research, so I need my CPU to be very fast to run my experiments. But for the GPU part, I only occasionally play light games, so I do not want a discrete GPU wasting electricity.

I would never pay for an i7 with a second-rate IGP. Maybe I should turn to AMD, but the best AMD CPUs are too slow to be comparable to i5 CPUs. I really hate Intel for ignoring PC users. Once there is a way, I will be the first to escape from Intel.

Come on AMD and ARM.
