Intel's third 10nm processor is reportedly called 'Tigerlake'

I'll take that bet. ARM's present situation doesn't look that great in perf, perf/watt, or feature set (Xeon-D versus X-Gene ARM, for example). By 2018, Xeons will be up to 28 cores/56 threads, have AVX512 to boost IPC, and be able to address 6TB of system RAM thanks to Apache Pass. Consumer use is always low-hanging fruit for processors.

It does happen periodically, or all of Microsoft's OSes would still have to support 8-bit and 16-bit processors. Sometimes the weight of legacy support far outweighs its usefulness, especially if the installed software base starts to dwindle.

"it's"? You are slipping.
You need not worry for too much longer. I hear CISC is on the way out. Bridge the divide with Android x86.

Let's say that Intel's Xeon processors do have that many cores by 2018; does it really make that big a difference if Intel's IPC is pretty much the same? If the consumer side isn't even getting the benefit of these extra cores, there's very little point in developing apps that use that many cores, which diminishes the marketplace for them. AVX512 is just an extension of current instruction types and will most likely have no impact on the consumer side and minimal impact on the business side for some time.
 
So if you don't agree with something, it's automatically irrelevant? That doesn't make it so. Crap-can all this "we" s***, and do your gossiping and back slapping around a stagnant water cooler somewhere in Canada. I don't need a "we contingent" to deal with the likes of you. Get the idea that because you say something it's automatically right, out of your head, and out of my face.

Anything you've posted so far sounds as if it was conceived by some brain-dead shill for M$.

Now, I've told you before, give the capital letters a rest. They don't give whatever you have to say more credibility. All they are is the forum equivalent of shouting, which isn't permitted.

M$ and Nadella screwed up so badly with Windows 8, they had to give Win 10 away. And now you turn around and tell me how "thankful" (no, wait, "THANKFUL") I should be that they're offering Win 10 as a free upgrade?

I didn't buy into that Windows 8 or 8.1 nonsense, and thus, I have nothing to be thankful for. Now run along to the water cooler, and talk about me behind my back with your little "we" clique.

Wait... so they're giving Win 10 away... but you say it isn't free? Make up your mind....

Clearly what is irrelevant is your opinion on anything on this website - you just like to be cranky.... maybe you should change your nickname to crankytroll?

And why on Earth would I waste my breath talking about you in real life?
 
Wait... so they're giving Win 10 away... but you say it isn't free? Make up your mind....
Well, if you're replacing either of the Win 8 variants, it amounts to being ripped off with another stinker of an M$ OS, then being bribed to keep your mouth shut. When 8 was released, the people who bought it were saying, "well, you can buy after-market add-ons that turn it into Windows 7", to justify their purchase. That's imbecilic. Windows 8, incidentally, summarily removed Windows Media Center and started charging for it as an extra. :eek:

Clearly what is irrelevant is your opinion on anything on this website
You have 340 posts here, and most of them seem to be advertisements for Windows 10. Anyone with a fixed, single-topic agenda such as yours, based on regurgitated propaganda, clearly needs to not be judging anyone's validity here, but rather reevaluating their own. You clearly like to argue as much as or more than I do.

you just like to be cranky.... maybe you should change your nickname to crankytroll?
Oh now, that's just ever so cutesy-pie. Maybe you could turn it into a children's song and teach it to your students. It's about on a 6-year-old's developmental level; they should love it.

And why on Earth would I waste my breath talking about you in real life?
When did anyone, much less me, suggest that you should? You're the creepy, back-slapping, gossipy, know-it-all, duplicitous slug hanging around the water cooler trying to start fights between coworkers. I'd rather take my chances befriending someone with bubonic plague. Yes, you're that repugnant to me.

And you keep running your mouth about Windows 10, yet this thread has little to nothing to do with Windows 10. In case you missed it, the topic of this thread is Intel's future processor release schedule. So all this crap you've been pumping out here is off topic. :D
 
Let's say that Intel's Xeon processors do have that many cores by 2018; does it really make that big a difference if Intel's IPC is pretty much the same?
That kind of depends upon whether IPC is a governing factor, and in most scenarios and consumer metrics it isn't. Governing factors are more likely to be performance-per-watt and overall efficiency, system latency (e.g. use of eDRAM in workloads that are memory sensitive), cache efficiency, and the like. Most consumer applications are too broad and unoptimized to benefit from IPC gains in general. Any quick browse of clock-for-clock/core-for-core benchmarks for gaming and office apps should bear that out.
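To put that in rough terms, a back-of-envelope model (my own simplification, not anything from Intel's documentation) is:

$$\text{throughput} \approx \text{IPC} \times f_{\text{clock}} \times N_{\text{cores}} \times U$$

where $U$ is the fraction of the chip the workload actually keeps busy. For typical consumer software, $U$ and the memory/cache behaviour buried inside the effective IPC term swing far more from one workload to the next than architectural IPC moves from one generation to the next.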
If the consumer side isn't even getting the benefit of these extra cores, there's very little point in developing apps that use that many cores, which diminishes the marketplace for them.
The same argument raged when quad cores first arrived. It is a truism that the ISA and hardware need to be in place before the software ecosystem can be established.
Intel (and every other processor manufacturer) can provide the tools, but unless there is significant input from software houses the capability remains untapped. The leading edge of adoption will always be research projects and time/core/thread/power-sensitive applications (e.g. bespoke software, often hand-tuned for the architecture).
AVX512 is just an extension of current instruction types and will most likely have no impact on the consumer side and minimal impact on the business side for some time.
On the consumer side there is always a lag between an ISA extension being implemented in hardware and software supporting it. Maybe you remember the slow uptake of SSE (SSE2 in particular). AVX512 is aimed squarely at HPC and datacenter workloads. It will gain more traction when Intel's OmniPath, HSA, and IBM/Nvidia/Mellanox's POWER9/NVLink become concrete solutions rather than the nebulous terms they currently are.
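Part of that lag is mechanical: every new extension needs a runtime check plus a fallback path before it can ship in general-purpose software. A minimal sketch of what that looks like (a hypothetical example assuming GCC/Clang builtins and the standard immintrin.h intrinsics, not taken from any shipping product):

```cpp
#include <immintrin.h>
#include <cstdio>

// AVX-512 path: 16 floats per iteration, with a scalar tail for the remainder.
__attribute__((target("avx512f")))
static void add_avx512(const float* a, const float* b, float* out, int n) {
    int i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
    }
    for (; i < n; ++i) out[i] = a[i] + b[i];
}

// Plain fallback: still required for every CPU that lacks the extension.
static void add_scalar(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

int main() {
    float a[64], b[64], out[64];
    for (int i = 0; i < 64; ++i) { a[i] = float(i); b[i] = 2.0f * i; }

    // Runtime dispatch: maintaining and testing both paths is the real cost.
    if (__builtin_cpu_supports("avx512f"))
        add_avx512(a, b, out, 64);
    else
        add_scalar(a, b, out, 64);

    std::printf("out[63] = %f\n", out[63]);
    return 0;
}
```

Until the installed hardware base is big enough to justify carrying both paths, most consumer software simply won't bother, which is the SSE/SSE2 story repeating itself.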
 
And you keep running your mouth about Windows 10, yet this thread has little to nothing to do with Windows 10. In case you missed it, the topic of this thread is Intel's future processor release schedule. So all this crap you've been pumping out here is off topic. :D

Lol... I was replying to a post mentioning the "Windows 10 debacle" - so actually, I was on topic... it was YOU who decided to leap on it and carry on this ridiculous argument that you don't have the brains to realize you've lost. I have 340 posts - glad to see you've read them all - they must make interesting reading. You have almost 12,000... I'm not going to claim I've read them all, as that would be pure torture and I'm not a masochist, but I'm going to assume the majority are just trolling... And with that many posts, you clearly have nothing better to do with your time...

So troll on, crankytroll. I'm glad I'm so repugnant to you, as I was beginning to worry that you had a secret crush on me...
 
Lol... I was replying to a post mentioning the "Windows 10 debacle" - so actually, I was on topic... it was YOU who decided to leap on it and carry on this ridiculous argument that you don't have the brains to realize you've lost. I have 340 posts - glad to see you've read them all - they must make interesting reading. You have almost 12,000... I'm not going to claim I've read them all, as that would be pure torture and I'm not a masochist, but I'm going to assume the majority are just trolling
Well, you know what they say: "assume makes an a** out of you and me". So you hang in there and assume whatever you like...
And with that many posts, you clearly have nothing better to do with your time..
Well, I have been here for 10 years. You clearly have nothing better to do than attempt to have the last word with me, so I'm sure your post count will be rolling up faster than a traveling salesman's frequent flyer miles. And really, don't flatter yourself; I read the post count under your name, and that's about it.

So troll on, crankytroll.
Wow, how childish and stupid is that? Redundant, stupid and childish.
I'm glad I'm so repugnant to you, as I was beginning to worry that you had a secret crush on me...
A crush, really? I can't abide the smell of squid. Now run along and squirt your ink around elsewhere, maybe it'll attract a mate for you.

EDIT: And before I forget, please don't sic your imaginary friends here on me....:eek:
 
That kind of depends upon whether IPC is a governing factor, and in most scenarios and consumer metrics it isn't. Governing factors are more likely to be performance-per-watt and overall efficiency, system latency (e.g. use of eDRAM in workloads that are memory sensitive), cache efficiency, and the like. Most consumer applications are too broad and unoptimized to benefit from IPC gains in general. Any quick browse of clock-for-clock/core-for-core benchmarks for gaming and office apps should bear that out.

The same argument raged when quad cores first arrived. It is a truism that the ISA and hardware need to be in place before the software ecosystem can be established.
Intel (and every other processor manufacturer) can provide the tools, but unless there is significant input from software houses the capability remains untapped. The leading edge of adoption will always be research projects and time/core/thread/power-sensitive applications (e.g. bespoke software, often hand-tuned for the architecture).

On the consumer side there is always a lag between an ISA extension being implemented in hardware and software supporting it. Maybe you remember the slow uptake of SSE (SSE2 in particular). AVX512 is aimed squarely at HPC and datacenter workloads. It will gain more traction when Intel's OmniPath, HSA, and IBM/Nvidia/Mellanox's POWER9/NVLink become concrete solutions rather than the nebulous terms they currently are.

Except that we still don't have good multi-threading in many applications years after multi-core CPUs have arrived. We are only really just now seeing efficient use of multiple CPU cores on the consumer side. Hell, Windows 10 could still use a lot more polish when it comes to using more CPU cores.

Performance-per-watt and efficiency do not factor into performance at all, so I don't know why they're mentioned here.

Any browse through benchmarks in gaming and office apps will tell you the bottleneck lies elsewhere in the system and not in the CPU, not that they do not benefit from IPC gains.

Intel hasn't really done anything with its architecture since AMD sank but shrink the die, add more transistors, and integrate more functions from the motherboard onto the CPU. Intel has literally been riding NetBurst, which introduced HT, as it's fundamentally the same as their current procs without the integrated GPU and motherboard parts. I find it hard to believe ARM won't surpass Intel unless they start investing billions more, because you know there's a ton of money going into mobile chip development.
 
Except that we still don't have good multi-threading in many applications years after multi-core CPUs have arrived. We are only really just now seeing efficient use of multiple CPU cores on the consumer side. Hell, Windows 10 could still use a lot more polish when it comes to using more CPU cores.
I kind of wonder if it's actually possible for a human programmer to properly envision parallel threads over some vast number of cores. Hence, possibly, the lack of "polish" or "optimization" present.

Performance-per-watt and efficiency do not factor into performance at all, so I don't know why they're mentioned here.
Perhaps to an individual at the desktop level, it doesn't. But picture yourself paying the electric bill for a server farm. I expect there it would matter a great deal.
Any browse through benchmarks in gaming and office apps will tell you the bottleneck lies elsewhere in the system and not in the CPU, not that they do not benefit from IPC gains.
The concept of a "bottleneck" has to be tempered by the presence of unrealistic expectations on the part of the user. It's pretty much a given that our nature will always want more than is available.

Intel hasn't really done anything with its architecture since AMD sank but shrink the die, add more transistors, and integrate more functions from the motherboard onto the CPU. Intel has literally been riding NetBurst, which introduced HT, as it's fundamentally the same as their current procs without the integrated GPU and motherboard parts. I find it hard to believe ARM won't surpass Intel unless they start investing billions more, because you know there's a ton of money going into mobile chip development.
I still haven't quite figured out if Intel is completely immersed in the consumer aspect of the business, or if desktop CPUs are simply hand-me-downs from their server chip research. For example, IBM is a big a** company, but you don't see anything in the home anymore with their logo on it. That said, it would be interesting to know if the motion picture industry's individual rendering workstations are Xeon or Core i based.
 
Except that we still don't have good multi-threading in many applications years after multi-core CPUs have arrived.
Again, that is on software vendors. There are plenty of true multi-core aware applications, but they require time and effort to implement - and more importantly, that time and effort has to be rewarded - either because the software reaches a wider user base than it would normally do because it is publicized as a benchmark, or the software is time-to-completion sensitive ( a professional application).
We are only really just now seeing efficient use of multiple CPU cores on the consumer side. Hell, Windows 10 could still use a lot more polish when it comes to using more CPU cores.
Would you say there are more multi-core aware software applications available now than there were five years ago? If the answer is yes, then that just demonstrates that we are heading in the right direction. Will we ever reach a nirvana of full multi-thread utilization? Absolutely not. Most software design studios could give a rat's a*s about thread efficiency, and 99.99+% of users probably don't even understand the term.
Performance-per-watt and efficiency do not factor into performance at all, so I don't know why they're mentioned here.
Well, that's easy. You put forward the view that a (or more rightly, the most - since you singled it out) significant processor metric for measuring advancement is IPC...
Let's say that Intel's Xeon processors do have that many cores by 2018; does it really make that big a difference if Intel's IPC is pretty much the same?
I disagree, and put quite a number of other factors ahead of IPC as more significant indicators of processor advancement...
That kind of depends upon whether IPC is a governing factor, and in most scenarios and consumer metrics it isn't. Governing factors are more likely to be performance-per-watt and overall efficiency, system latency (e.g. use of eDRAM in workloads that are memory sensitive), cache efficiency, and the like.
I actually thought that the industry's direction would have made this relatively clear. Instructions and ops per clock, and latency, in mission-critical applications, but performance/perf-per-watt/efficiency for the consumer. I've just Googled the subject, and it seems at least I'm not alone, praise the silicon gods!
Since then, despite the perseverance of (or soon to be mildly delayed) Moore’s Law, performance is measured differently. Efficiency, core count, integrated SIMD graphics, heterogeneous system architecture and specific instruction sets are now used due to the ever expanding and changing paradigm of user experience. Something that is fast for both compute and graphics, and then also uses near-zero power is the holy-grail in design. But let’s snap back to reality here – software is still designed in code one line at a time. The rate at which those lines are processed, particularly in response driven scenarios, is paramount. This is why the ‘instructions per clock/cycle’ metric, IPC, is still an important aspect of modern day computing.
Any browse through benchmarks in gaming and office apps will tell you the bottleneck lies elsewhere in the system and not in the CPU, not that they do not benefit from IPC gains.
That is only partly true. Bad coding is a great leveller, as are applications that aren't taxing on compute, while many professional consumer applications are cache sensitive - which is a big reason that Bulldozer ploughed its own grave thanks to its cache misprediction penalties.
Intel hasn't really done anything with its architecture since AMD sank but shrink the die, add more transistors, and integrate more functions from the motherboard onto the CPU. Intel has literally been riding NetBurst, which introduced HT, as it's fundamentally the same as their current procs without the integrated GPU and motherboard parts. I find it hard to believe ARM won't surpass Intel unless they start investing billions more, because you know there's a ton of money going into mobile chip development.
Mobile chips are all about efficiency - the factor I pointed out earlier, which is why Intel (and AMD for that matter) focus on getting down to ARM chip power envelopes. Meanwhile, ARM SoCs are steadily consuming more power and gaining more "fat" as they attempt to compete with x86 and move into more intensive workloads.
If your prediction comes true, one thing is certain, AMD will become a graphics and console chip design house in its entirety. Zen should just be on par with Skylake's IPC (best case scenario), so with "AMD's sinking" as you put it, they might want to book a bathyscaphe now.
I kind of wonder if it's actually possible for a human programmer to properly envision parallel threads over some vast number of cores. Hence, possibly, the lack of "polish" or "optimization" present.
It is a bespoke affair at present with a lot of hand-tuned (optimized) coding. Supercomputing time is expensive so down time tends to mean re-booking or organizing more time on the cluster.
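To make that concrete, the mechanical half of spreading work over many cores is the easy part. A minimal sketch (a hypothetical, plain standard C++ example, not drawn from any particular application) of fanning an embarrassingly parallel sum across however many hardware threads the machine reports:

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1u << 24;            // ~16.7M elements of independent work
    std::vector<double> data(n, 1.0);

    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(workers, 0.0); // one result slot per thread, no sharing
    std::vector<std::thread> pool;

    const std::size_t chunk = n / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end = (w + 1 == workers) ? n : begin + chunk;
        pool.emplace_back([&, w, begin, end] {
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& t : pool) t.join();             // wait for all workers, then combine

    std::cout << std::accumulate(partial.begin(), partial.end(), 0.0) << '\n';
}
```

Everything beyond this tidy case - shared state, ordering constraints, load balancing - is where the hand-tuning (and the missing "polish") actually lives.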
Perhaps to an individual at the desktop level, it doesn't. But picture yourself paying the electric bill for a server farm. I expect there it would matter a great deal.
Aye. Performance per watt is paramount in big iron deployments, both for the power used by the system and for cooling. The other metric tends to be performance-per-thread, since some software is licensed on a per-core basis.
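As a toy illustration of why per-core licensing pushes buyers toward performance-per-thread (numbers invented purely for the arithmetic): at $2,000 per licensed core, a 16-core part that matches a 24-core part on total throughput saves (24 - 16) x $2,000 = $16,000 per socket in licence fees alone, before power and cooling even enter the equation.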
I still haven't quite figured out if Intel is completely immersed in the consumer aspect of the business, or if desktop CPUs are simply hand-me-downs from their server chip research.
Processors are built with server workloads in mind. Consumer processors are either direct salvage parts or tailored towards OEM/ODM needs (laptops and prebuilts). Most consumer parts have a high degree of commonality, while the big-die -E/-EN/-EP/-EX parts can use 3-4 different chip layouts.
That said, it would be interesting to know if the motion picture industry's individual rendering workstations are Xeon or Core i based.
Intel's Xeon is the king of renderers. Time to completion, power usage, and memory addressing make it the choice du jour at present.
 
If your prediction comes true, one thing is certain, AMD will become a graphics and console chip design house in its entirety. Zen should just be on par with Skylake's IPC (best case scenario), so with "AMD's sinking" as you put it, they might want to book a bathyscaphe now.
Yeah, cause for sure they won't be able to hail one of these:
[Image: the bathyscaphe Trieste]

with the Uber app...:D
 
Yeah, cause for sure they won't be able to hail one of these:
[Image: the bathyscaphe Trieste]

with the Uber app...:D
Looks like the Trieste. If Evernessince's ARM prediction comes true I suspect AMD might need something out of Jules Verne. The Marianas Trench might just be the first stage of the descent!
 
Looks like the Trieste. If Evernessince's ARM prediction comes true I suspect AMD might need something out of Jules Verne. The Marianas Trench might just be the first stage of the descent!
Oh ye of little faith. Any day now they're going to rise from the ashes like the phoenix and reclaim their former glory. Possibly as soon as the, "not the day after tomorrow".

If I may be so bold as to suggest a processor name to lead them on the comeback trail, it would be "Rip-Snorter" (*)! Top that, Intel, with your sissy a** "lake this" and "bridge that"... :cool:

(*) Also in contention, "Nautilus".

Music to go deep sea diving by....
 
Again, that is on software vendors. There are plenty of true multi-core aware applications, but they require time and effort to implement - and more importantly, that time and effort has to be rewarded - either because the software reaches a wider user base than it would normally do because it is publicized as a benchmark, or the software is time-to-completion sensitive ( a professional application).

Would you say there are more multi-core aware software applications available now than there were five years ago? If the answer is yes, then that just demonstrates that we are heading in the right direction. Will we ever reach a nirvana of full multi-thread utilization? Absolutely not. Most software design studios could give a rat's a*s about thread efficiency, and 99.99+% of users probably don't even understand the term.

Well, that's easy. You put forward the view that a (or more rightly, the most - since you singled it out) significant processor metric for measuring advancement is IPC...

I disagree, and put quite a number of other factors ahead of IPC as more significant indicators of processor advancement...

I actually thought that the industry's direction would have made this relatively clear. Instructions and ops per clock, and latency, in mission-critical applications, but performance/perf-per-watt/efficiency for the consumer. I've just Googled the subject, and it seems at least I'm not alone, praise the silicon gods!


That is only partly true. Bad coding is a great leveller, as are applications that aren't taxing on compute, while many professional consumer applications are cache sensitive - which is a big reason that Bulldozer ploughed its own grave thanks to its cache misprediction penalties.

Mobile chips are all about efficiency - the factor I pointed out earlier, which is why Intel (and AMD for that matter) focus on getting down to ARM chip power envelopes. Meanwhile, ARM SoCs are steadily consuming more power and gaining more "fat" as they attempt to compete with x86 and move into more intensive workloads.
If your prediction comes true, one thing is certain, AMD will become a graphics and console chip design house in its entirety. Zen should just be on par with Skylake's IPC (best case scenario), so with "AMD's sinking" as you put it, they might want to book a bathyscaphe now.

It is a bespoke affair at present with a lot of hand-tuned (optimized) coding. Supercomputing time is expensive so down time tends to mean re-booking or organizing more time on the cluster.

Aye. Performance per watt is paramount in big iron deployments, both for the power used by the system and for cooling. The other metric tends to be performance-per-thread, since some software is licensed on a per-core basis.

Processors are built with server workloads in mind. Consumer processors are either direct salvage parts or tailored towards OEM/ODM needs (laptops and prebuilts). Most consumer parts have a high degree of commonality, while the big-die -E/-EN/-EP/-EX parts can use 3-4 different chip layouts.

Intel's Xeon is the king of renderers. Time to completion, power usage, and memory addressing make it the choice du jour at present.

I think that definition of performance is inherently skewed towards Intel, because it is the company that essentially inspired "more cores, more IPC, more instructions, and better efficiency." I'm no expert on ARM CPU architecture, so I cannot speak to what would net them the most in performance, but it's fairly safe to say it's not going to react to certain improvements in the same way x86 does.
 
The current microchip technology is reaching the silicon semiconductor capacity limits, where smaller transistors and higher densities are not viable anymore. Intel has been struggling to meet the Moore's Law standard in delivering its 10 nanometer and 7 nanometer microchips, as it is facing lithography problems at those transistor sizes. The industry as a whole is facing the obsolescence of the current 1945 von Neumann computer architecture model, which is based on the current silicon semiconductor transistor technology (CMOS). With no quantum computers in sight, the current computer technology will stagnate until a new computer architecture model is developed.

Intel is extending its ailing Moore's Law through its 10nm, 7nm, and subsequent microchip lithography by using what are called compound III-V semiconductors, where indium arsenide (for transistor n-channels) and indium gallium antimonide (for transistor p-channels) are going to be grown as the top active layer over a silicon substrate instead of silicon over silicon, since silicon's electrical capabilities have been tapped for the most part. That III-V semiconductor single-crystal layer has superior electrical properties and power efficiency over silicon at smaller sizes.

On a separate note, Intel is making a radical move by bringing its 72 core supercomputing microprocessor to PCs next year.
 
On a separate note, Intel is making a radical move by bringing its 72 core supercomputing microprocessor to PCs next year.

Make that "...this year."
 
The current microchip technology is reaching the silicon semiconductor capacity limits, where smaller transistors and higher densities are not viable anymore.
You seem to love reposting the same quote across the tech sites. Might pay to update it.

Anyhow, "not viable anymore" is a relative term. Current architectural and foundry processes will be good for the next 10-12 years, until sub-5nm nodes require a new paradigm. 10nm -> 7nm -> 5nm is already mapped out and being verified/validated. The hold-up (as usual) is less about InGaAs and other III-V semiconductors than it is about tooling availability (litho tool systems primarily, and metrology tools for verification testing) and fab retooling costs - you have to sell a lot of chips to recoup US$12 billion per fab. Intel aren't IBM or Xerox's PARC. Engaging in Pure (or Basic) Research is well behind Return on Investment in the pecking order at Santa Clara.

Considering Intel had the current FinFET architecture and roadmap of process ramp in place around the time that their 45nm Penryn architecture launched in 2007, it is probably safe to assume that they are already making plans for what comes next - as should the other major players in the foundry industry.
 
You seem to love reposting the same quote across the tech sites. Might pay to update it... [ ]....
So, all that wonderful and informative material is pinched from somewhere else? Or we've merely gotten it 5th hand from the original source?:D
 
Well, it's all in my very own words.

As for the issue of microchip nodes, besides the lithography problems involved, silicon transistors' electrical properties and power efficiency degrade quite significantly at future nanometer sizes, and, here is the kicker, in a silicon transistor one atom in size, electron behavior is unpredictable, rendering the transistor useless. Moreover, Moore's Law is forecast to expire by 2020. Given all that, I don't see a need to update my recurring post across tech sites discussing the subject matter.
 
So, all that wonderful and informative material is pinched from somewhere else? Or we've merely gotten it 5th hand from the original source?:D
I was actually referring to vic's cut and paste comment from here....and here....and here....and here etc... But the post is just basically a recapitulation of Dave Kanter's work at Real World Technologies. Kanter is usually on the ball regarding chip architectures, so using his articles as a base is understandable - I often point people in his direction when further reading or in depth analysis is called for.
 
Oddly, I've been disputing the continuing validity of "Moore's Law", on a strictly empirical basis for some time now.

My sentiment has always been that at best, it should have been called any of the following, "Moore's Guess", "Moore's Speculation", "Moore's Musing", or "Moore's Quotable Utterance".
 
Oddly, I've been disputing the continuing validity of "Moore's Law", on a strictly empirical basis for some time now.
My sentiment has always been that at best, it should have been called any of the following, "Moore's Guess", "Moore's Speculation", "Moore's Musing", or "Moore's Quotable Utterance".
Well, Gordon Moore and most people that have some working knowledge of the semiconductor industry have never considered it a "law". It was a prediction (originally for 10 years, later extended to 15) made in 1965 based on past and near future advancements in process technology. The fact that it has held up relatively well for fifty years is a testament to Moore's foresight, yet people seem very quick to deride the guy because the supposed "law" isn't immutable.

It's a pity the "law" isn't better understood (it only concerns itself with the viable commercial cost per IC based on transistor density), or his work more widely recognized. Even the pitiful Wikipedia page (the Twitter-age substitute for education) doesn't touch on his pioneering work with Jay Last and Jean Hoerni at Fairchild regarding IC transistor metallization and diffusion, or the implementation of reflow (doping metal oxides with impurities to lower melting points, eliminating brittleness in circuits caused by uneven melting points between the circuit's materials) at Intel that established the company's fabrication reputation and ensured the highest yields, and profits, in the industry.
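For anyone who does want the "law" in its usual modern form, it boils down to (my paraphrase, not Moore's 1965 wording):

$$N(t) \approx N_0 \cdot 2^{(t - t_0)/2}$$

i.e. transistor count per device doubling roughly every two years. Run that forward from the roughly 2,300 transistors of 1971's Intel 4004 and 22 doublings (44 years) later you land just shy of ten billion - the right ballpark for today's largest dies, hence "held up relatively well".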
 
Well, Gordon Moore and most people that have some working knowledge of the semiconductor industry have never considered it a "law". It was a prediction (originally for 10 years, later extended to 15) made in 1965 based on past and near future advancements in process technology. The fact that it has held up relatively well for fifty years is a testament to Moore's foresight, yet people seem very quick to deride the guy because the supposed "law" isn't immutable.

It's a pity the "law" isn't better understood (it only concerns itself with the viable commercial cost per IC based on transistor density), or his work more widely recognized. Even the pitiful Wikipedia page (the Twitter-age substitute for education) doesn't touch on his pioneering work with Jay Last and Jean Hoerni at Fairchild regarding IC transistor metallization and diffusion, or the implementation of reflow (doping metal oxides with impurities to lower melting points, eliminating brittleness in circuits caused by uneven melting points between the circuit's materials) at Intel that established the company's fabrication reputation and ensured the highest yields, and profits, in the industry.

I'd agree. It really is a shame that more of his work isn't known.

Wikipedia has many issues, and one of them is fundamental: the way it recruits editors is completely inane and very lacking when it comes to actual subject matter expertise. It's like a book with far too many authors, all of unspecified qualifications.
 
Well, Gordon Moore and most people that have some working knowledge of the semiconductor industry have never considered it a "law". It was a prediction (originally for 10 years, later extended to 15) made in 1965 based on past and near future advancements in process technology. The fact that it has held up relatively well for fifty years is a testament to Moore's foresight, yet people seem very quick to deride the guy because the supposed "law" isn't immutable....[ ]....
Actually, I expect that Mr. Moore himself probably considers the "law" his best highly educated guess. What does become tedious is having it repeated as an immutable fact by people a lot less educated and much further away from the issue. Some people have twisted the concept to perceive it as an entitlement from the industry.

I'm familiar with tolerance in much more "primitive pursuits", such as woodworking, and even the 180nm pathways of the P-2 amaze the hell out of me.

Given another folk "law", "anything which can go wrong, will go wrong", I can only imagine Intel's difficulties in trying to live up to their own hype. (Or "promises", if that term is more palatable).

...[ ]....Wikipedia has many issues, and one of them is fundamental: the way it recruits editors is completely inane and very lacking when it comes to actual subject matter expertise. It's like a book with far too many authors, all of unspecified qualifications.
Well, that's going to depend on the nature and depth of the topic. Wiki is a good source of info on any number of subjects. However, I seriously doubt the inner workings of the deepest, most highly technically advanced corner of the semiconductor industry is one of them.
 
Actually, I expect that Mr. Moore himself probably considers the "law" his best highly educated guess.
Quit while you're ahead, as Moore himself stated not so long ago...
One thing I've learned is that once you've made a successful prediction, avoid making another one
I'm familiar with tolerance in much more "primitive pursuits", such as woodworking, and even the 180nm pathways of the P-2 amaze the hell out of me.
Indeed. Back when Moore made his prediction, the industry was manufacturing on a 200μm process node (200,000nm in today's parlance).
Given another folk "law", "anything which can go wrong, will go wrong", I can only imagine Intel's difficulties in trying to live up to their own hype. (Or "promises", if that term is more palatable).
Given the jibber jabber about Intel having to invest in III-V semiconductors to avoid certain death, it seems apropos to point out that Intel saw the need for these and quantum well FETs (QWFETs) back in 2004. It's almost like Intel were planning ahead :eek:
 