Intel is setting aside over $2 billion to give to its employees

You were born in 2010?
Look, for the past decade AMD was putting out sludge. Its products bore the brunt of all the "space heater" remarks now being directed at Intel. In fact, they had to move out of Silicon Valley because they couldn't pay to keep the grass cut. AMD's reign ended when Intel released the Core 2 Duo E-6300.

Then the question arises: who is more responsible for AMD's resurgence, AMD or TSMC? As a matter of cold hard fact, AMD would be nowhere without TSMC.

I was born in the late 1940s. When were you born, the minute after AMD released "Ryzen"?

We have a member here who never watched football in his life (by his own admission). Then "his home team" was making a run at the Super Bowl, and at that point you couldn't make him shut up about them. You know, "instant super fan, just add victory". That's about the same opinion I have of AMD fanbois.
 
You can bet that 60-80% of that figure will go to executives, over and above their usual bonuses...
So, are you suggesting things would be different at other giant corporations?

Look at Amazon: their employees are always complaining about being "underpaid and overworked", while Jeff Bezos is the richest man in the world, or at the very least damned close to it.

Elon Musk's brother is on Tesla's board of directors, a situation you'll likely find in that echelon of management elsewhere.

So, what have we learned? IMO, that sociopathy, greed, narcissism and nepotism, are what make the corporate world go around.
 
Look, for the past decade AMD was putting out sludge. Its products bore the brunt of all the "space heater" remarks now being directed at Intel. In fact, they had to move out of Silicon Valley because they couldn't pay to keep the grass cut. AMD's reign ended when Intel released the Core 2 Duo E-6300.

Then the question arises: who is more responsible for AMD's resurgence, AMD or TSMC? As a matter of cold hard fact, AMD would be nowhere without TSMC.

I was born in the late 1940s. When were you born, the minute after AMD released "Ryzen"?

We have a member here who never watched football in his life (by his own admission). Then "his home team" was making a run at the Super Bowl, and at that point you couldn't make him shut up about them. You know, "instant super fan, just add victory". That's about the same opinion I have of AMD fanbois.
You said AMD chips have always been manufactured by someone else. In fact, AMD made its own chips until 2009, when it sold all of its manufacturing plants to GlobalFoundries. Core 2 was not as fast a CPU as the reviews made it out to be; there are reasons why Intel later decided to put an integrated memory controller into its CPUs, just as AMD had done. And the E6300 didn't have one.

AMD needs TSMC mainly because GlobalFoundries cancelled its 7nm process.

I was born long before AMD released the K5. The point was just that AMD made most of its own chips until 2009 (I'm not sure what the case was long before that). Nothing personal.
 
Core 2 was not as fast a CPU as the reviews made it out to be
The trick behind the first runs of Core 2 chips is that they were "factory underclocked". Many of the reviews I read about them went something like this: "my E-6300 wasn't all that impressive at stock speed, but what a beast it became when I clocked it to 3.1 GHz". (IIRC, stock was somewhere around 1.6 to 1.8 GHz.)


We obviously don't have anywhere near that amount of headroom with the stock speeds set as they are today.
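
For what it's worth, here's that headroom claim in numbers (a rough sketch using the figures quoted above, not measured data):

```python
# Rough arithmetic on the anecdote above: a ~1.8 GHz part pushed to 3.1 GHz
# is roughly a 72% overclock -- headroom today's stock speeds don't leave.
stock_ghz, overclocked_ghz = 1.8, 3.1
print(f"{(overclocked_ghz / stock_ghz - 1) * 100:.0f}% overclock")  # ~72%
```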

So, AMD used to make their own chips? My mistake, sorry (really).

I've always built with Intel. I make no claim as to their superiority or anything else. I bought them purely because of brand recognition, the same way you'd buy "Charmin" TP. I put the machines together, and they've been working ever since. So, all good. I'm not an Intel fanboi, or anything of the sort. In fact, I was thoroughly put off by Intel's "road map" BS, when they couldn't get to 10 nm within literally years of their projections. (Or didn't bother to continue trying, as the case may be.)

You have to admit though, when AMD had to sell their fabs, there was a feeling of gloom, doom, and desperation in the air. But now they are indeed dependent on TSMC, who I don't think gets enough credit for AMD's rebound.

Any decent CAD program can "draw" a CPU, and only a very select few can design one, but those big, expensive, extremely close-tolerance lithography machines still do most of the heavy lifting.
 
The trick behind the first runs of Core 2 chips is that they were "factory underclocked". Many of the reviews I read about them went something like this: "my E-6300 wasn't all that impressive at stock speed, but what a beast it became when I clocked it to 3.1 GHz".

We obviously don't have anywhere near that amount of headroom with the stock speeds set as they are today.

So, AMD used to make their own chips? My mistake, sorry (really).
Oh yeah, overclocking obviously made it much better. However, most people never overclock.

AMD made most of their chips until 2009, when they sold everything to GlobalFoundries. AMD also used at least IBM for some parts before that. But generally, AMD made its own chips prior to 2009, yes.
I've always built with Intel. I make no claim as to their superiority or anything else. I bought them purely because of brand recognition, the same way you'd buy "Charmin" TP. I put the machines together, and they've been working ever since. So, all good. I'm not an Intel fanboi, or anything of the sort. In fact, I was thoroughly put off by Intel's "road map" BS, when they couldn't get to 10 nm within literally years of their projections.

You have to admit though, when AMD had to sell their fabs, there was a feeling of gloom, doom, and desperation in the air. But now they are indeed dependent on TSMC, who I don't think gets enough credit for AMD's rebound.

Any decent CAD program can "draw" a CPU, and only a very select few can design one, but those big, expensive, extremely close-tolerance lithography machines still do most of the heavy lifting.
Intel just tried too aggressively to make 10nm a "super advanced" node. That didn't work, and the consequences are well known.

AMD had to sell its fabs because node development was getting too expensive. There used to be something like 20 semiconductor manufacturers; now there are far fewer, and only three still actually try to develop high-end nodes. The other idea would have been selling capacity to other companies, but that's basically what GlobalFoundries became. Because GF dumped 7nm, AMD is now dependent on TSMC, that's true. That was not AMD's original plan, but things happen.

I doubt a simple CAD program is enough. There are multiple layers on a CPU, and getting those layers to line up in 3D is a very hard task.
 
There are multiple layers on a CPU, and getting those layers to line up in 3D is a very hard task.
I think (again, IIRC) Intel locked the FSB down after the initial Core 2 Duo runs, so you couldn't overclock without going to the "K" model chips.

Way back in the day (it may have been here), someone posted a video on exactly how a CPU is "grown". It was truly fascinating. IIRC, something on the order of 30 layers (?) is necessary.

In fact, you've piqued my curiosity about the "3D" aspect, and I may have to do a search for that or a similar video. (y) (Y)
 
I think (again, IIRC) Intel locked the FSB down after the initial Core 2 Duo runs, so you couldn't overclock without going to the "K" model chips.

Way back in the day (it may have been here), someone posted a video on exactly how a CPU is "grown". It was truly fascinating. IIRC, something on the order of 30 layers (?) is necessary.

In fact, you've piqued my curiosity about the "3D" aspect, and I may have to do a search for that or a similar video. (y) (Y)

Here's a YouTube video for you, Cranky: it's Intel's close-up animation of how a chip is built. If you haven't been watching this channel, you've missed a little of our collective history, documented. 😂

 
Take note, Apple and Foxconn!
Apple is at least as devious as Intel regarding the treatment of its workers, but they are better at hiding it. Apple has its spiffy new "spaceship campus" here in the US, and they treat the workers there damned well. They also reap obscene profit margins by letting Foxconn grind its workers' minds and bodies to a pulp. And then they play stupid, pretending they didn't know what's going on over there in China.
 
You said AMD chips have always been manufactured by someone else. In fact, AMD made its own chips until 2009, when it sold all of its manufacturing plants to GlobalFoundries. Core 2 was not as fast a CPU as the reviews made it out to be; there are reasons why Intel later decided to put an integrated memory controller into its CPUs, just as AMD had done. And the E6300 didn't have one.

AMD needs TSMC mainly because GlobalFoundries cancelled its 7nm process.

I was born long before AMD released the K5. The point was just that AMD made most of its own chips until 2009 (I'm not sure what the case was long before that). Nothing personal.
https://www.anandtech.com/show/2051/6
That's just... wrong. The E6300, despite being a sub-2 GHz chip, regularly performed on the same level as the 3.73 GHz Pentium D 965 and, crucially, the 2.4 GHz Athlon 64 X2s. The E6600 at 2.4 GHz regularly outperformed the 2.8 and even the 3 GHz Athlon 64s while using a fraction of the power. None of that required any OCing.

The Core 2 was a massive leap in CPU performance; it wasn't just reviews and marketing fluff.
 
The Core 2 was a massive leap in CPU performance; it wasn't just reviews and marketing fluff.
Indeed.

Mr. Reset was justifiably upset by my assertion that "AMD never had their own fabs".

IIRC however, those old Athlons required several specific patches from M$ to work correctly with Windows. Which is pretty much the same thing today's AMD crowd would be crowing about, should an Intel CPU require a brand-specific patch today. The "tides of war", and all that.

I remember CPU shopping during the Prescott era. The top-of-the-line "Pentium 4 Extreme Edition" carried a price tag of $999.00! It's amazing how spoiled we've become, when you can buy a CPU today for $100.00 which would likely blow it away. Yet as Yoda would say, "complain we must".
 
https://www.anandtech.com/show/2051/6
That's just... wrong. The E6300, despite being a sub-2 GHz chip, regularly performed on the same level as the 3.73 GHz Pentium D 965 and, crucially, the 2.4 GHz Athlon 64 X2s. The E6600 at 2.4 GHz regularly outperformed the 2.8 and even the 3 GHz Athlon 64s while using a fraction of the power. None of that required any OCing.

The Core 2 was a massive leap in CPU performance; it wasn't just reviews and marketing fluff.
Nope. That's only when you're looking at a clean Windows install and running only one piece of software at a time, etc.

The problem is, real-world situations put much more pressure on the cache/memory system. Because Core 2 lacks an integrated memory controller, cache is everything. As you can see here: https://www.anandtech.com/show/2757/9

Compare E8200 vs E5300 https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=33909,35300

The differences? The E8200 has a higher bus clock, but it also has triple the amount of cache. Another problem with the Core 2 design is the shared cache between cores. It's not so unlikely that a process will crash and tie up all the CPU power of one core, and all the cache with it. Then what do you think the speed of the other core is when all the L2 is gone?

This is a good example of why you just shouldn't look at benchmarks. Core 2 is much slower in real-life situations than in benchmarks. Intel evidently agreed with me: Nehalem (2008) had an integrated memory controller.
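
To make the cache-pressure argument concrete, here is a minimal, hypothetical sketch (the sizes and the L2 figure are illustrative assumptions; Python's interpreter overhead blunts the effect, but the trend shows once a working set outgrows a Core 2-sized L2):

```python
# Chase dependent indices through working sets of different sizes. The data
# dependency defeats prefetching, so once the set no longer fits in L2
# (2-6 MB on Core 2), each hop pays a trip toward main memory -- the trip
# that hurt most on a CPU without an integrated memory controller.
import time

def chase(buf, steps=2_000_000):
    """Follow dependent indices through buf and return the elapsed time."""
    idx = 0
    start = time.perf_counter()
    for _ in range(steps):
        idx = buf[idx]
    return time.perf_counter() - start

for size_kb in (256, 4096, 65536):            # well under / around / well over L2
    n = size_kb * 1024 // 8                   # ~8 bytes per pointer-sized entry
    buf = [(i + 4097) % n for i in range(n)]  # odd stride, co-prime with n
    print(f"{size_kb:>6} KB working set: {chase(buf):.2f} s")
```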
 
Indeed.

Mr. Reset was justifiably upset by my assertion that "AMD never had their own fabs".

IIRC however, those old Athlons required several specific patches from M$ to work correctly with Windows. Which is pretty much the same thing today's AMD crowd would be crowing about, should an Intel CPU require a brand-specific patch today. The "tides of war", and all that.
Like I said above, Core 2 was only good for casual use. AMD put an integrated memory controller on the Athlon 64 for good reason; Core 2 was very vulnerable to cache pollution.

I can't remember that patch thing. Which CPUs were those?
I remember CPU shopping during the Prescott era. The top-of-the-line "Pentium 4 Extreme Edition" carried a price tag of $999.00! It's amazing how spoiled we've become, when you can buy a CPU today for $100.00 which would likely blow it away. Yet as Yoda would say, "complain we must".
AMD offered a faster, cooler CPU for the same price; Intel just couldn't admit AMD was better :p
 
I can't remember that patch thing. Which CPUs were those?
IIRC means exactly that. Do you honestly expect me to remember the specific AMD CPUs and KB numbers of patches from 15 years ago, considering I never owned one of the affected parts?

I always selectively installed updates, and never, ever, allowed M$ to summarily have its own way with my machines via "automatic updates". Point being, I read through the descriptions of what each patch was supposed to accomplish, and responded accordingly.

You're right though, all through the years AMD has been, and always will be, better than Intel. Those "Bulldozer" series chips were beasts. I was always so very jealous of people that owned them. Happy now?
 
IIRC means exactly that. Do you honestly expect me to remember the specific AMD CPUs and KB numbers of patches from 15 years ago, considering I never owned one of the affected parts?

I always selectively installed updates, and never, ever, allowed M$ to summarily have its own way with my machines via "automatic updates". Point being, I read through the descriptions of what each patch was supposed to accomplish, and responded accordingly.

You're right though, all through the years AMD has been, and always will be, better than Intel. Those "Bulldozer" series chips were beasts. I was always so very jealous of people that owned them. Happy now?
Of course I expect that, as there were many patches for Windows systems; serious problems, however, were quite rare. There were some serious ones for Intel too, like this one: https://www.extremetech.com/extreme/58686-update-some-pentium-iii-notebooks-may-hang-under-xp

That's nothing new. With Alder Lake, Intel needs an entirely new OS, since Microsoft has not promised that Windows 10 will ever get Thread Director. Even that's not news; MS never added hyper-threading support to Windows 2000. AMD needing patches is not much when Intel needs an entirely new OS. That's a much bigger thing.
 
Of course I expect that, as there were many patches for Windows systems; serious problems, however, were quite rare. There were some serious ones for Intel too, like this one: https://www.extremetech.com/extreme/58686-update-some-pentium-iii-notebooks-may-hang-under-xp

That's nothing new. With Alder Lake, Intel needs an entirely new OS, since Microsoft has not promised that Windows 10 will ever get Thread Director. Even that's not news; MS never added hyper-threading support to Windows 2000. AMD needing patches is not much when Intel needs an entirely new OS. That's a much bigger thing.
You're really getting f**king tedious, and I say that with the utmost love and respect.
M$, Intel, and yes, AMD, have likely conspired to introduce "planned obsolescence" into the desktop environment. AMD, being "only second, so they try harder", will get there too, I'm sure. Nothing beyond Z170 is "fully compatible with Windows 10", and I believe that's true of both AMD and Intel.

What's most annoying about AMD fanbois is that you're so full of double talk and hypocrisy, it's unfathomable.

By your own admission, AMD sold its fabs because "they couldn't afford to go to the 8 nm node". And yet, you run your mouths constantly about how backwards Intel was/is for sticking with the 14 nm process.

OK, big news, if it weren't for TSMC being able to advance to the smaller node, AMD would more than likely be out of business.

But then, you go right back to jawing about how AMD deserves all the credit. Please, have a heart, and spare me from your endless groupie crap.

And like I said before, a select few can design a CPU, but those big lithography machines do most of the heavy lifting. I'm sure Intel's engineers could have designed an 8 nm, or even a 5 nm chip; they just couldn't print it. Which, BTW, AMD can't do either.
 
By your own admission, AMD sold its fabs because "they couldn't afford to go to the 8 nm node". And yet, you run your mouths constantly about how backwards Intel was/is for sticking with the 14 nm process.
Really? Where did I say that?

AMD sold its fabs in 2009, when Intel's most advanced node was 45 nm ;)
OK, big news, if it weren't for TSMC being able to advance to the smaller node, AMD would more than likely be out of business.

But then, you go right back to jawing about how AMD deserves all the credit. Please, have a heart, and spare me from your endless groupie crap.
It was not AMD's fault that GF abandoned the even more advanced 7nm node that TSMC now has.
 
Right there. (y) (Y)
And where did I say anything about an 8nm node? AMD sold its fabs in 2009, when AMD had a 45 nm node and a 32 nm node was coming (licensed from IBM). In 2009, both Intel and AMD had 45nm.

GlobalFoundries (not AMD) ditched 7nm development in 2018 because they considered it too expensive.
 
So, are you suggesting things would be different at other giant corporations?

Look at Amazon: their employees are always complaining about being "underpaid and overworked", while Jeff Bezos is the richest man in the world, or at the very least damned close to it.

Elon Musk's brother is on Tesla's board of directors, a situation you'll likely find in that echelon of management elsewhere.

So, what have we learned? IMO, that sociopathy, greed, narcissism and nepotism, are what make the corporate world go around.

I know some Amazon engineers, and they all make bank. You start off at $120k, and when you become an SDE2, your comp is at least $200k. Basically, you're guaranteed to be a millionaire before you hit 35 if you do things right. Note that it's mostly stock options, but still.
 
GlobalFoundries (not AMD) ditched 7nm development in 2018 because they considered it too expensive.
Well, AMD sold GlobalFoundries (ostensibly) because they couldn't afford to keep it. Then (as you say), GlobalFoundries ditched 7 nm because it was too expensive. So, you're describing two roads that lead to the same place (TSMC), for the same reasons.
 
Well, AMD sold GlobalFoundries (ostensibly) because they couldn't afford to keep it. Then (as you say), GlobalFoundries ditched 7 nm because it was too expensive. So, you're describing two roads that lead to the same place (TSMC), for the same reasons.
AMD sold because it was not profitable to maintain such a small amount of production. The fabs had more capacity than AMD itself needed, and it's much easier to sell the excess capacity when the company is purely a manufacturing company.

Not all AMD roads lead to TSMC. GF ditched 7nm because they figured there was not enough demand for it (= too expensive). Previously, GF had said that AMD could use other foundries too for "7nm", because AMD needed more than GF could supply. At that time, TSMC had an oversupply of 7nm. Yes, as stupid as it sounds right now, that was the situation then.

When GF's 7nm production was months away, TSMC still had a 7nm oversupply and Intel's 10nm was coming. GF calculated that because 7nm was already in oversupply, the 5nm oversupply would be even bigger. So it wasn't profitable to even start 7nm production, because the 5nm situation would be even worse.

Alternative scenario: no TSMC 7nm oversupply > GF decides to start 7nm production > since TSMC has no spare capacity, AMD goes to Samsung for Epyc chiplets (low-clock, low-power, small-die-area parts, no problem) > AMD continues to use GF at 5nm and (perhaps) Samsung.
 
Not all AMD roads lead to TSMC. GF ditched 7nm because they figured there was not enough demand for it (= too expensive).
This would sort of indicate that GF didn't have the "if we build it, they will come" faith in AMD's ability to get rid of it.
Alternative scenario: no TSMC 7nm oversupply > GF decides to start 7nm production > since TSMC has no spare capacity, AMD goes to Samsung for Epyc chiplets (low-clock, low-power, small-die-area parts, no problem) > AMD continues to use GF at 5nm and (perhaps) Samsung.
Well, I have this perhaps naive, or "uninformed", concept of the expense and difficulty of reducing process width, based on some Jr. high math: the inverse square law. If you reduce the process width by half, the same circuitry should only cover 1/4 of the area. I can only guesstimate that the cost would at least match, if not surpass, that ratio.
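
A two-line sketch of that junior-high math (taking the node names at face value, which, as noted elsewhere in the thread, is itself a simplification):

```python
# Area scales with the square of the linear feature size, so halving the
# process width puts the same circuitry in one quarter of the area.
def area_ratio(old_nm: float, new_nm: float) -> float:
    return (new_nm / old_nm) ** 2

print(area_ratio(14, 7))   # 0.25 -> 1/4 the area, per the reasoning above
```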

Anyway, what node are Samsung's SSDs using? Because I've gotten some seriously good deals on their smaller (250 GB / 500 GB) EVO 870 SATA 3 drives in recent months. If it's anywhere near 8 nm, then Samsung obviously does have a fairly abundant supply.

If they're not, then they might not be a solid source. (At least at present).
 
This would sort of indicate that GF didn't have the "if we build it, they will come" faith in AMD's ability to get rid of it.
AMD alone was not enough demand to be GF's only 7/5/3nm customer. Now we know there would have been more than enough demand, but very few (none?) semiconductor companies predicted the current situation.
Well, I have this perhaps naive, or "uninformed", concept of the expense and difficulty of reducing process width, based on some Jr. high math: the inverse square law. If you reduce the process width by half, the same circuitry should only cover 1/4 of the area. I can only guesstimate that the cost would at least match, if not surpass, that ratio.
Yes, but current nm figures are mostly marketing. Also, wafer prices tend to go much higher with advanced processes. These are rough estimates, but they give a good idea:
Anyway, what node are Samsung's SSDs using? Because I've gotten some seriously good deals on their smaller (250 GB / 500 GB) EVO 870 SATA 3 drives in recent months. If it's anywhere near 8 nm, then Samsung obviously does have a fairly abundant supply.

If they're not, then they might not be a solid source. (At least at present).
Samsung does not seem to say what process they are using for the 870 Evo. "1x nm" was the best info I found; somewhere between 10 and 20 nm sounds right.

Samsung debuted 8nm (an improved 10nm) in 2018; Zen 2 launched in 2019, about a year later. Since Nvidia has no big problems with 8nm supply, and AMD's chiplets (say 90 mm² on a Samsung 8nm process) are much smaller than GeForce 3000-series dies (400-600 mm²), I would say AMD could have secured Samsung's 8nm production for Zen 2/Zen 3 chiplets. The GeForce 3000 series would then probably have been done on a different process.
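
For a rough sense of why small chiplets are so much easier to supply than big GPU dies, here is a hypothetical back-of-the-envelope estimate using a common dies-per-wafer approximation (the die areas are the thread's ballpark figures, not vendor data, and yield is ignored):

```python
# Common approximation: dies ~ pi*(d/2)^2/A - pi*d/sqrt(2*A), where d is the
# wafer diameter (mm) and A the die area (mm^2); the second term discounts
# partial dies lost at the wafer's edge.
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    d = wafer_diameter_mm
    gross = math.pi * (d / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * d / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

print(dies_per_wafer(90))    # ~715 chiplet candidates per 300 mm wafer
print(dies_per_wafer(500))   # ~111 big-GPU candidates per wafer
```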
 
Yes, but current nm figures are mostly marketing. Also, wafer prices tend to go much higher with advanced processes. These are rough estimates, but they give a good idea:
I perhaps didn't make quite clear what my invocation/interpretation of the inverse square law actually meant. What I should have said was: as the process width decreases, the price would likely go up (on the narrower process) in inverse ratio to the size decrease. So, for a wafer with chips 1/4 the area of the previous ones, the price would possibly be 4 times as much.

The price increase from 7 nm to 5 nm is almost double, so this would, to some extent, bear out my reasoning.
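
The square-law guess does line up with that "almost double" figure, at least if you treat the marketing node names as real feature sizes:

```python
# If wafer cost rose in inverse proportion to the idealized area shrink,
# the 7 nm -> 5 nm step would cost (7/5)^2 = 1.96x -- "almost double".
print((7 / 5) ** 2)   # 1.96
```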

There was one huge aberration in those cost figures at the widest end (90 nm), where the cost per chip is listed at about $2,500! Does that mean that if, for old time's sake ("Auld Lang Syne"), I wanted Samsung to cook me a batch of P4s, they'd cost me $2,500 a pop? :eek:

Throughout the summer (?), I did cop a bunch of the 250 GB and 500 GB 860s and 870s, for $35.00-$40.00 (250 GB) and $55.00-$60.00 (500 GB).

I just grabbed a 500 GB 870 from B&H for $60.00, $4.00 less than their asking price for the 250 GB ATM.

I know these drives aren't enough to impress the hard core here at Techspot, but they sure do pep up the archaic junkers I'm running. :laughing:
 