Intel P67 Motherboard 5-Way Shootout

Julio Franco

Chipset delays aside, Sandy Bridge has been one of the most anticipated and impressive releases we've seen from Intel in recent years. The first wave of processors has demonstrated superior efficiency, sweeping aside considerably more expensive processors from previous generations while consuming less power. The Core i5 2500K and Core i7 2600K models have also delivered effortless overclocking potential without commanding excessive price premiums.

Read the full review at:
https://www.techspot.com/review/380-intel-p67-motherboard-roundup/

Please leave your feedback here.
 
Great review, Steve.


The P67A-UD7 supports up to ten USB 3.0 ports and provides more than 16 PCIe lanes for graphics cards with its four full-length PCIe x16 slots.

While true, I think this was supposed to be:

....and provides an additional 16 PCIe lanes for graphics cards with its four full-length PCIe x16 slots.
 
IMO, you're both wrong.
While true, I think this was supposed to be:

....and provides an additional 16 PCIe lanes for graphics cards with its four full-length PCIe x16 slots.
Although I can't for the life of me figure out why. It seems I'm used to the term PCIe only being applied to socketed (ostensibly VGA) headers, and not to internal controllers dedicated to other uses.
 
So, long story short: on the UD7 the PCIe slots all run at a full x16, rather than x16 on the first, x8 on the second and x4 on the third.
 
IMO, you're both wrong.
Although I can't for the life of me figure out why. It seems I'm used to the term PCIe only being applied to socketed (ostensibly VGA) headers, and not to internal controllers dedicated to other uses.
You may be right <<< Billy Joel... around 1981, I think. It's usually listed as something like "42 PCIe lanes, 36 available: (36) PCIe, (6) USB, etc." In any case, on this board the four PCIe headers run at x16/x16/x8/x8 when occupied, whereas most high-end boards run x8/x8/x8/x8 when all four are filled, so that would be an additional 16 lanes dedicated to the PCIe headers.
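For what it's worth, here's a quick back-of-the-envelope Python sketch of that lane arithmetic. The UD7-style widths are the ones mentioned above, and the "typical" x8/x8/x8/x8 configuration is just an assumed baseline for comparison:

```python
# Rough lane arithmetic for four occupied full-length PCIe x16 slots.
ud7_slots = [16, 16, 8, 8]      # UD7-style assignment, per the post above
typical_slots = [8, 8, 8, 8]    # assumed baseline for most high-end boards

extra = sum(ud7_slots) - sum(typical_slots)
print(f"UD7: {sum(ud7_slots)} lanes, typical: {sum(typical_slots)} lanes, extra: {extra}")
# -> UD7: 48 lanes, typical: 32 lanes, extra: 16
```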
 
You may be right <<< Billy Joel... around 1981, I think. It's usually listed as something like "42 PCIe lanes, 36 available: (36) PCIe, (6) USB, etc." In any case, on this board the four PCIe headers run at x16/x16/x8/x8 when occupied, whereas most high-end boards run x8/x8/x8/x8 when all four are filled, so that would be an additional 16 lanes dedicated to the PCIe headers.
Ah, a golden opportunity to show how little I know about video cards and processing.

Unless all four lanes are running at the same speed, information is being processed asymmetrically. To the untrained mind, it seems like the x16 lanes would just run out of information and have to wait for the x8 lanes to catch up. Is this another threading issue that would have to be addressed in programming in order to take advantage of the extra bus speed? So, how much is gained by a configuration such as you're describing? As you've pointed out, with multiple graphics card arrays, the fact that they're running at less than full speed doesn't really impact performance all that much. (??)
 
Nice review.
Coupla' points.
DFI is now DOA (no longer making ATX boards) and EVGA are probably reserving any press coverage for the FTW3, which was added to the lineup today; that might have some bearing on why these two board manufacturers failed to show.

Any chance of adding stability testing to the review regime?
Since most boards are going to offer (very) similar chipset performance, with features that will no doubt become near-identical as more functions move from traditional control hubs to the CPU itself, would it not be more of a differentiator to assess each board's overall stability? The ease of recovery from a failed overclock, whether the board handles a 24hr torture test, etc.
Unless all four lanes are running at the same speed, information is being processed asymmetrically. To the untrained mind, it seems like the x16 lanes would just run out of information and have to wait for the x8 lanes to catch up.
The analogy would be more in line with traffic on the interstate: light traffic on eight lanes moves just as well as light traffic on sixteen lanes. Present devices (graphics cards) in 99% of cases do not come close to utilising the bandwidth available to a PCIe x16 slot (whether it runs at x16, x8 or x4 electrically), mainly because the information flow across the PCIe bus is generally limited to CPU-intensive graphics tasks (physics and some compute functions being the prime ones). The majority of graphics tasks rely on the interconnect between the GPU and its VRAM, and thus never have to rely on the PCIe bus.

In the case of P67 (and P55/X48/P45) the lane assignments are x16 (@ x16 or x8), x16 (@ x0 or x8), and x16 or x4 (@ x4). The third slot shares PCIe lanes with the SATA/USB controllers, since those lanes are generated by the controller hub (southbridge). Disregarding the third slot: when the primary runs at x16 the secondary is not populated; once the secondary is populated, both slots run at x8. Adding a bridge chip (PLX or NF200) splits the available 16 lanes (at 2.0 spec) into 32 lanes (at 1.0 spec), and the lane assignments become (for a maximum of four PCIe x16 slots):
x16, x16, NC, NC or x16, x8, x8, NC or x8, x8, x8, x8 (NC = not connected)
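To put rough numbers on the bridge-chip trick, here's a minimal Python sketch, assuming the usual rule-of-thumb per-lane rates (~250 MB/s for PCIe 1.x and ~500 MB/s for 2.0, one direction) and the 16-into-32 split described above:

```python
# Approximate one-way throughput per lane in MB/s (rule-of-thumb figures)
PER_LANE_MBPS = {"1.x": 250, "2.0": 500}

def slot_bandwidth(lanes: int, gen: str) -> int:
    """Aggregate one-way bandwidth in MB/s for a given lane count and PCIe generation."""
    return lanes * PER_LANE_MBPS[gen]

# 16 native PCIe 2.0 lanes upstream vs. 32 bridged lanes shared downstream:
print(slot_bandwidth(16, "2.0"))  # 8000 MB/s total into the bridge
print(slot_bandwidth(32, "1.x"))  # 8000 MB/s total spread across the slots
# The bridge doesn't create bandwidth to the CPU; it only changes how the same total is divided up.
```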
 
Ah, a golden opportunity to show how little I know about video cards and processing.

Unless all four lanes are running at the same speed, information is being processed asymmetrically. To the untrained mind, it seems like the x16 lanes would just run out of information and have to wait for the x8 lanes to catch up. Is this another threading issue that would have to be addressed in programming in order to take advantage of the extra bus speed? So, how much is gained by a configuration such as you're describing? As you've pointed out, with multiple graphics card arrays, the fact that they're running at less than full speed doesn't really impact performance all that much. (??)


The bandwidth afforded by a x16 interface far exceeds what any card out there will saturate. However, when that point is theoretically reached, the card that saturates the bandwidth it has will slow down the card with more bandwidth, as the cards are 'load levelled'; that's the function of the ribbon bridges. It's basically like memory speed defaulting to the card with the slowest memory or memory setting. This is a great article on PCIe scaling:

http://www.techpowerup.com/reviews/AMD/HD_5870_PCI-Express_Scaling/1.html
From the conclusion of the article:

leaving only a 1/16th of the optimum bandwidth, it is still impressive that it can deliver 75% of its performance.

Running a 5870 with only a single lane of the x16 interface available, it still gives 75% of the performance it would deliver in a full 16-lane slot.

That's why I object when people (and some writers, see S|A's Lars-Göran Nilsson)
say that a x4 slot is useless for adding a second or third card. A x4 interface only gives up 4-9% versus a full x16 header (often less), and a x8 gives up virtually nothing.
There have been some published articles showing a bigger hit at x8 and x4, but I have run this test countless times and my results are commensurate with the above article. I am convinced that when performance drops, there is something else happening.
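If anyone wants the raw numbers behind that argument, here's a small Python sketch. The per-lane rate is the usual ~500 MB/s PCIe 2.0 rule of thumb, and the relative performance figures are roughly what the TechPowerUp article linked above reports, so treat them as illustrative:

```python
# Theoretical one-way PCIe 2.0 bandwidth per slot width, ~500 MB/s per lane.
MBPS_PER_LANE = 500

for name, lanes in {"x16": 16, "x8": 8, "x4": 4, "x1": 1}.items():
    print(f"{name}: {lanes * MBPS_PER_LANE} MB/s")
# -> x16: 8000, x8: 4000, x4: 2000, x1: 500 MB/s
#
# Per the TechPowerUp HD 5870 scaling article linked above (approximate):
#   x8 loses next to nothing vs. x16, x4 loses ~4-9%, and even x1 still
#   delivers ~75% of full performance, so bandwidth falls far faster than FPS does.
```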
 
That's why I object when people (and some writers, see S|A's Lars-Göran Nilsson)...
L-G N is more error-prone than Harold Lloyd's cinematic persona, and as such, one should view S|A for its unintentional comedic content.
I am convinced that when performance drops, there is something else happening.
Probably depends on the benchmark being used and the GPU(s). For the majority of gaming a x4 connection is more than sufficient. Once CPU utilisation for rendering (CPU physics and compute functions) increases, the impact of the reduced bandwidth is likely greater.
I would say that Civilization 5 (or Total War) would be a good benchmark to prove the point. Once the map fills up, I would think that CPU utilisation becomes more critical, although this could also be dependent on the game engine being used and the resources the coding can call upon (and possibly some differences in GPU architecture, Nvidia's (Fermi) compute ability for instance).
 
Probably depends on the benchmark being used and the GPU(s). For the majority of gaming a x4 connection is more than sufficient. Once CPU utilisation for rendering (CPU physics and compute functions) increases, the impact of the reduced bandwidth is likely greater.
I would say that Civilization 5 (or Total War) would be a good benchmark to prove the point. Once the map fills up, I would think that CPU utilisation becomes more critical, although this could also be dependent on the game engine being used and the resources the coding can call upon (and possibly some differences in GPU architecture, Nvidia's (Fermi) compute ability for instance).

That would make perfect sense. It's been my experience while running these scaling tests on my own that the 'anomalies' happen with the likes of Civ 5, Flight Simulator and GTA IV. When I tried this with an extremely shader-heavy game like Metro 2033, there was a grand total of 1.7% difference between the x16/x8/x4 interfaces. Within the margin of error, but the x8 came out on top.
 
Looks like ASUS/ASRock takes this one with the best value/performance ratio. The GB board sure is nice though.
 
I'm glad manufacturers are moving in the right direction when considering OEM builders. Pretty much pick your favorite color and that's the board to choose. Maybe consider the number and placement of PCIe lanes, but other than that, unless you're benching to the extreme, brand doesn't matter on this list.
 
Looks like ASUS/ASRock takes this one with the best value/performance ratio. The GB board sure is nice though.
I think I'd be more inclined to look at the Sabertooth P67. The dodgy colour scheme aside, it seems to be a consistent performer, very stable, priced well, OC's as well as any, and the five-year warranty might add some value in the resale market when the inevitable upgrade takes place.
 
I'm sorry, but the best bang for the buck in this group clearly goes to the MSI board. Take the $100+ you save over all of the others and bump your graphics card up a level.

Nobody runs a maximum overclock 24/7, so as long as they are all close, and they are, what's the big deal? I'd feel comfortable running any of these boards 24/7 at 4.4 GHz.

However, if you actually NEED some of the extra features (like a second GigE port, pfft) then one of the others would be necessary.
 
Which one did you land on, Leek?

I think it's real hard to ignore the ASRock's price and features, so that will become my next motherboard once I start putting together plans to replace everything that blew up.
 
Stopped reading after false info in just the second paragraph. EVGA doesn't even have a board ready yet for P67, but results and benching from the prototypes are online. Any basic Google search would reveal that.
 
Stopped reading after false info in just the second paragraph. EVGA doesn't even have a board ready yet for P67, but results and benching from the prototypes are online. Any basic Google search would reveal that.

Stopped reading your post after I found these...

http://canadacomputers.com/product_info.php?cPath=569_26_722&item_id=036160
http://www.costcentral.com/proddetail/EVGA_eVGA_P67_SLI/130SBE675KR/11284531/
http://www.directdial.com/130-SB-E675-KR.html
http://www.pcsuperstore.com/products/11284531-EVGA-130SBE675KR.html
http://www.pcnation.com/web/details.asp?item=HA9365
 
No SLI testing? It would be interesting to see non-NF200 vs. NF200. Also, you should probably elaborate on your overclocking section. Did you use the same voltages for each board? The same LLC? For it to be a fair comparison you would have to have the exact same load voltage, measured with a DMM. Those 60 extra MHz could easily come from differences in load voltage based on LLC.
 
How about somewhere it's actually available? FYI, EVGA has not sold a single P67 board yet because they don't exist! Just go to the EVGA forums for confirmation from EVGA themselves instead of making up lies. Why would you make up information about contacting EVGA? Makes me lose a lot of respect for TechSpot. Lazy reporting, and even worse, not willing to admit mistakes. Good luck...
 
How about somewhere it's actually available? FYI, EVGA has not sold a single P67 board yet because they don't exist! Just go to the EVGA forums for confirmation from EVGA themselves instead of making up lies. Why would you make up information about contacting EVGA? Makes me lose a lot of respect for TechSpot. Lazy reporting, and even worse, not willing to admit mistakes. Good luck...

I am not sure why you think you know what you are talking about, or why you think you know my business. I am sorry, but we did contact EVGA soon after the Sandy Bridge launch and discussed the future P67 roundup with them. Regardless of their current status, which was not known at the time and is not important anyway, they backed out of the article shortly after we gave them the competition lineup. The fact that they may or may not be selling boards is completely irrelevant and has no bearing on our comments.
 
EVGA might still not have "relaunched" after the P67 chipset recall. That is the most likely explanation for the product's absence from online stores. Makes me lose a lot of respect for random guests arguing and accusing people of lying for no real reason.
 