Explainer: What is Chip Binning?

So, you explained what binning is - which I don't think was that much of a mystery - but not how you sometimes end up with a chip that outperforms the model average.
I suppose I could have gone into the distributions within the bins (e.g. bins for the top-end models tend to follow chi-square distributions, whereas the lower-end ones are normal distributions), but the article was meant as a simple primer on chip binning. Lots of people, including yourself, already know what it refers to, but plenty don't, and the piece was for such folk.
 
So, you explained what binning is - which I don't think was that much of a mystery - but not how you sometimes end up with a chip that outperforms the model average.
Actually they do explain it... in this part:

Product demand can often outstrip production capability, hence why the 10 core wafers are used to help fill in orders. Sometimes, perfectly functional dies have sections switched off, just to ensure there is sufficient output from the factories. That does mean it's a game of silicon lottery as to what die you're actually getting, when buying a particular model.

So while regular binning is meant to work to the advantage of manufacturers (lesser chips are sold as cheaper models of the same CPU -- less waste), if there's too much demand for a given value/mainstream model, you may end up with a very capable chip that has been sold as a cheaper part.
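For anyone who likes to see the mechanism with numbers, here's a minimal Python sketch of that demand-driven down-binning. The cutoffs, demand figure and clock distribution are all invented purely for illustration:

```python
import random

# Toy silicon-lottery sketch: every die gets a maximum stable clock drawn from
# a normal distribution, dies are binned against fixed cutoffs, and good dies
# get down-binned whenever mid-range demand can't be met from the lesser ones.
# All of the numbers here are assumptions for illustration only.
random.seed(42)

TOP_CUTOFF = 5.0     # GHz needed to qualify for the top model (assumed)
MID_CUTOFF = 4.6     # GHz needed for the mid-range model (assumed)
MID_DEMAND = 700     # orders for the mid-range part that must be filled (assumed)

clocks = [random.gauss(4.8, 0.2) for _ in range(1000)]   # max stable clock per die

top_bin = sorted(c for c in clocks if c >= TOP_CUTOFF)
mid_bin = [c for c in clocks if MID_CUTOFF <= c < TOP_CUTOFF]

# Not enough "natural" mid-range dies? Move the slowest top-bin dies down a tier.
while len(mid_bin) < MID_DEMAND and top_bin:
    mid_bin.append(top_bin.pop(0))

lucky = sum(1 for c in mid_bin if c >= TOP_CUTOFF)
print(f"top bin: {len(top_bin)}, mid bin: {len(mid_bin)}, "
      f"down-binned 'golden' dies in the mid bin: {lucky}")
```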

In those instances, enthusiasts may recognize the good batches (as has happened many times in the past) and take advantage of that by overclocking, etc.
 
In those instances, enthusiasts may recognize the good batches (as has happened many times in the past) and take advantage of that by overclocking, etc.

The good old days of overclocking. The notorious Celeron 300A did 450 MHz no problem, but my personal favorite was the P4 Northwood 1.6A, which ran at 1.6 GHz yet was built off the 2.4 GHz Northwood stepping at the time. My Northwood 1.6A hit 2.7 GHz but was not stable; the stable 24/7 clock was, I think, 2.56 GHz.
 
On the Mac side of things, in the G3 days you could install a G3 upgrade card in your older Mac. Thanks to what I assume was increased demand for lower clocked parts, the G3/220MHz card I bought for my FIL actually had a CPU marked as G3/233 MHz in it, which then OC'd to 360 MHz. This CPU was known to regularly OC to 330-360MHz and was the Celeron 300A of the Mac side.
 
Ah, the Athlon 700. Mine ran at 900 MHz for years on end, and it was a sad day when it headed for silicon heaven (die fracture at one corner, from endless cooler swaps). My Q6600 wasn't as awesome, 3.2 GHz, but its replacement (4790K) was lovely: 4.5 GHz with just a minor voltage tweak. These days I'm happy to just take what the lottery gives me and eschew overclocking.
 
Ah, the Athlon 700. Mine ran at 900 MHz for years on end, and it was a sad day when it headed for silicon heaven (die fracture at one corner, from endless cooler swaps). My Q6600 wasn't as awesome, 3.2 GHz, but its replacement (4790K) was lovely: 4.5 GHz with just a minor voltage tweak. These days I'm happy to just take what the lottery gives me and eschew overclocking.

Didn't have much luck with my Q6600 but the E2180 before that sang nicely at 4.0 GHz.
 
The optics and exposure used during fabrication once favoured the centre of the wafer, much like a photo that is sharp in the middle but slightly blurry and darker towards the edges and corners, so the best chips came from the centre. Modern equipment uses very straight beams that don't diverge the way they used to, scanning across the wafer instead.
The patterns can have a slight blur or deviation from the design, so the affected spots behave differently. Higher-frequency radiation (UV and beyond) has a smaller wavelength and less blur, like painting with a thin brush instead of a wide one. If the variation (a rounded corner instead of a sharp one) increases resistance (limiting current) or slows a transistor, it limits maximum performance; too much leakage or too much resistance does the same.

The longest route, with the most transistors and distance to cover, is the critical path, and that is what limits the clock speed. Individual transistors can switch at around 100x the clock speed; the clock is slower because so many transistors sit in series along the critical path, waiting for the domino effect of one transistor into the next, the distance the signals have to travel, and for everything to be ready and settled before the next clock edge.
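To put rough numbers on that domino effect, here's a small sketch of how the summed delays along the critical path cap the clock; every picosecond figure below is invented for illustration:

```python
# Toy critical-path timing: the clock period must cover every delay in series
# along the worst-case path, which is why the whole chip runs far slower than
# an individual transistor can switch. All delay figures are assumed values.

gate_delay_ps = 8        # delay of one logic stage (assumed)
stages_in_path = 30      # logic stages in series on the critical path (assumed)
wire_delay_ps = 60       # time for signals to cover the distances involved (assumed)
settle_margin_ps = 20    # "ready & settled" margin before the next clock edge (assumed)

critical_path_ps = gate_delay_ps * stages_in_path + wire_delay_ps + settle_margin_ps
max_clock_ghz = 1000.0 / critical_path_ps   # 1000 ps per ns, so GHz = 1000 / period_ps

print(f"critical path: {critical_path_ps} ps -> max clock roughly {max_clock_ghz:.2f} GHz")
print(f"a single stage alone could toggle at roughly {1000.0 / gate_delay_ps:.0f} GHz")
```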

Current increases roughly linearly with voltage, but power is voltage x current, so power increases with the square of the voltage. The voltage is increased so transistors switch on quicker and pass the signal to the next transistor sooner, finishing the critical path faster for higher clock speeds. At the same voltage, halving the clock speed roughly halves the dynamic power consumption. Static power is the power used while doing nothing, mostly leakage; dynamic power is the energy used to switch transistors on (like charging capacitors) and then wasted turning them off. Higher temperatures also increase resistance and leakage, so cooler is better. Chasing MHz with voltage becomes a losing battle at the top end of the curve, while the minimum voltage needed to turn the transistors on at all limits the bottom end.
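And a quick back-of-the-envelope version of that voltage/frequency trade-off; the capacitance and leakage constants are invented, only the scaling behaviour matters:

```python
# Rough CPU power model: dynamic power scales with C * V^2 * f, while static
# power is mostly leakage (and also rises with temperature, ignored here).
# The constants are made up; the square-law scaling is the point.

def package_power_w(voltage_v, clock_ghz,
                    switched_cap_f=20e-9,    # effective switched capacitance (assumed)
                    leakage_w_per_v=8.0):    # crude leakage term (assumed)
    dynamic = switched_cap_f * voltage_v ** 2 * clock_ghz * 1e9   # C * V^2 * f
    static = leakage_w_per_v * voltage_v                          # mostly leakage
    return dynamic + static

print(f"4.0 GHz @ 1.00 V: {package_power_w(1.00, 4.0):5.1f} W")
print(f"4.6 GHz @ 1.20 V: {package_power_w(1.20, 4.6):5.1f} W  (chasing MHz with voltage)")
print(f"2.0 GHz @ 1.00 V: {package_power_w(1.00, 2.0):5.1f} W  (half the clock, ~half the dynamic power)")
```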

Chips were once binned for the standard cooler, poor PC internal temperatures, and maximum load. Now the TDP is set for lower average loads, and voltages and clock speeds are adjusted based on the instruction mix, per-core loads, and temperatures. Intel's implementations of AVX, AVX2, etc. increase the number of calculations per cycle but take longer, so clock speeds are slightly reduced to run them; or maybe it's just to reduce the power consumption and hot spots of those large AVX execution circuits. Chips are effectively overclocked from the factory for certain ideal situations, so users have less room to overclock them when non-ideal situations occur.
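A crude illustration of that "adjust the clock to the situation" idea; this is not any vendor's actual algorithm, just the general shape of it, with every offset and threshold invented:

```python
# Toy boost-clock picker: start from a base clock, boost when few cores are
# loaded and the chip is cool, and back off for heavy AVX work or high
# temperatures. Every number here is an assumption for illustration only.

def pick_clock_ghz(active_cores: int, temp_c: float, running_avx: bool) -> float:
    clock = 3.6                      # base clock (assumed)
    if active_cores <= 2:
        clock += 0.9                 # light load: large one/two-core boost
    elif active_cores <= 6:
        clock += 0.5
    else:
        clock += 0.3                 # all-core boost is smaller
    if running_avx:
        clock -= 0.4                 # wide vector units run hot, so clock down
    if temp_c > 85:
        clock -= 0.3                 # thermal headroom gone, back off further
    return round(clock, 1)

print(pick_clock_ghz(active_cores=1, temp_c=60, running_avx=False))   # 4.5
print(pick_clock_ghz(active_cores=8, temp_c=90, running_avx=True))    # 3.2
```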
 
The good old days of overclocking. The notorious Celeron 300A did 450 MHz no problem, but my personal favorite was the P4 Northwood 1.6A, which ran at 1.6 GHz yet was built off the 2.4 GHz Northwood stepping at the time. My Northwood 1.6A hit 2.7 GHz but was not stable; the stable 24/7 clock was, I think, 2.56 GHz.
The AMD Duron 600 MHz (K7 generation) could be OC'd up to 1.2 GHz.

I also remember the Athlon XP, where the process had become so mature that there was no real binning anymore; all the chips were pretty much excellent, just labelled differently for sale. You bought the cheapest part (the XP2500), applied a 33% bus speed overclock, and you had an XP3200 in two seconds.
 
My trusty Q6600 runs happily at 55°C under load with a 240 mm AIO. I have an X570 Strix and 16 GB of 3600 CL16 on the desk. I was hoping Zen 3 would be here shortly, but I think I'll buy a vanilla 3600 when the price drops. It's mainly used for games. Why X570? Future-proofing: I've got to get five years out of this new rig.
 
Your list of things that it could end up as isn't even complete. There are also the Comet Lake laptop parts, and there will probably be Celeron or Pentium models in the future that use the same die.
 
Your list of things that it could end up as isn't even complete. There are also the Comet Lake laptop parts, and there will probably be Celeron or Pentium models in the future that use the same die.
Yes, some of the dies from the 10 core wafers could end up as laptop parts, but Pentium/Celeron models are more likely to come from the 6 core wafers - being smaller chips means that more can be cut from a single wafer. The upshot of this is that there is a greater distribution of dies to select from, allowing them to go from i5 all the way down to a Celeron. This is also why Nvidia uses chips from ‘small die’ wafers for their low end and budget models.
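To show how much die size matters for that, here's the common dies-per-wafer approximation in Python; the die areas below are placeholders, not actual figures for any real product:

```python
import math

# Common dies-per-wafer approximation: wafer area divided by die area, minus a
# correction for the partial dies wasted around the circular edge.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Placeholder die areas -- not official figures for any particular CPU.
print("larger 10-core-class die:", dies_per_wafer(200))   # fewer candidates per wafer
print("smaller 6-core-class die:", dies_per_wafer(125))   # noticeably more per wafer
```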
 
I could destroy one of those fabrication plants in two seconds by just shaking my newly washed and dried socks out in there ;-)

Also... Why the hell don't they make those original silicon wafers square instead of circles?? Trying to cut square shapes on a circle and losing 5-25% of them seems crazy?
 
Also... Why the hell don't they make those silicon wafers square instead of circles?? Trying to cut square shapes on a circle and losing 5-25% of them seems crazy?
It's a little old now, but still relevant and answers your query:


Edit: Just realised that it doesn't directly answer why the wafers are round. They're circular because corners produce more internal stresses within the crystalline structure of the silicon, so you'd end up binning the dies from the corners, and those stresses could easily propagate through the rest of the wafer and ruin the whole lot.

The blob of molten silicon you can see in the video is essentially a single crystal - it's slowly rotated as it grows into the full ingot.
 
I wanted to learn to make chips when I was in high school.
If I'd known chip manufacturing was so heavily chemical, I would have chosen chemical engineering instead of electronic engineering for my college education.
The chemical engineering department also had far more girls (many of them the pretty ones) than the sad electronics department :( (only 9 girls out of 180 students).
At least part of my wish was fulfilled: I did the full logic design of a USB hub controller in VHDL for my final assignment.
 
I could destroy one of those fabrication plants in two seconds by just shaking my newly washed and dried socks out in there ;-)

Also... Why the hell don't they make those original silicon wafers square instead of circles?? Trying to cut square shapes on a circle and losing 5-25% of them seems crazy?
There's a spinning step in fabrication that grows the silicon ingot into a cylindrical shape; the cylinder is later sliced into circular wafers.
 
Very well written article. Now I can see that there is a touch of mystery in the whole process, which means that even the companies manufacturing these products are on as much of a journey as the people who buy them, particularly when it comes to what these things can or will do; I quite like that.
 
So, you explained what binning is - which I don't think was that much of a mystery - but not how you sometimes end up with a chip that outperforms the model average.
Really? If you understand that chip voltages and clocks can vary, it only makes sense that some will, not just can, perform better than required.

If you make a batch of cookies at home, some will look or taste better than others. The same applies to chip production.
 
The most pedestrian explanation of why some "binned" chips are better is that they were fabricated on the wafer of a much higher-end CPU, but were "downgraded" to a lower-level model due to factors other than overall speed or performance (e.g. one core underperformed, but the others still meet the wafer's originally intended speed).
 
I remember perusing long forum threads about which Athlons were better overclockers. Everyone was comparing production codes and debating which chip was the best to get a massive OC out of. Now I just buy a K model and call it a day. I could never get the golden chips anyway...
 
I had an i7 2600K that ran at 5.1 GHz all of its life and maxed out at 5.3 GHz. In fact, I still have it around in another system, and to this day it's happy at 5.1 GHz at 1.38 V. This chip is what they call a Golden Sample CPU. My wife is now running it in her system, so it's had an easy life for the last 6 months, and I'm retiring it on Christmas day when I give her a new setup consisting of a Ryzen 5 5600X, 32 GB of 3600 MHz DDR4, a Gigabyte Aorus X570S Pro AX board I got in the Black Friday sales, and a couple of 1 TB PCIe 4.0 NVMe drives.

Maybe I'll find another use for the old i7 2600K @ 5.1 GHz, or maybe it deserves a break now.. lol
 