Discrete GPU on a Cheap OEM PC: Does it Make Sense?

I watched a YouTube video by Carey Holzman that detailed where to buy off-lease computers that are just a few years old. He was buying these, refurbishing them with SSDs and maybe a video card, and donating them to veterans or deserving people with low income.
 
In a world of Ray Tracing, I can't see ever even considering buying any card beneath the RTX 2060.
Yeah, but we're not in the world of ray-tracing and won't be for at least a couple of years - not until it gets cheaper and the consoles support it. The progress seems quite steady, though, so IMO it's just a matter of time.
 
I enjoyed the article! I see OEM towers at work, refurbished or just old, that stutter and are so slow when all they need is driver updates to run smoothly. Somehow a five-year-old network driver can really slow one down. I also got a $5 adhesive heatsink to put on my Dell SFF's VRM, along with another fan, and it runs a lot better (then I upgraded to a 7700K, and it really runs better).

Trigger's Broom... You buy a cheap PC and end up spending $400+ on various parts, but it's still the "cheap" PC you bought 5 years ago..... LOL

Who decides who the "deserving" are? On the Party list?
 
Yeah, but we're not in the world of ray-tracing and won't be for at least a couple of years - not until it gets cheaper and the consoles support it. The progress seems quite steady, though, so IMO it's just a matter of time.


I have a 2080Ti.

“We” may not be in the world of Ray Tracing.

But I am.
 
I have a 2080Ti.

“We” may not be in the world of Ray Tracing.

But I am.
And I have a VR HMD and a 4K screen, but I wouldn't say I'm living in a world of VR or 4K, because I am not the world.

It's more like "World With Some Limited Ray-Tracing Still In Its Infancy, But It'll Get Better". It's hard to call a world where almost no one uses it yet (on both the dev and player side) a world of ray-tracing. Just like there was no "world of smartphones" in 2007 when the iPhone came to market, unless by "world" you mean "a very small piece of the actual world, but with potential to grow". Then yeah, whatever floats your boat.
 
There is serious sampling distortion in using just the HP G1 as representative of the cheapO'crap machine. Granted, HP machines are fairly popular; nonetheless, the performance spread at this budget end of the spectrum is fairly wide, and the HP G1 alone doesn't seem an adequate representative.
 
The problem is: How do you know you have a cheap OEM machine or a slightly less cheap OEM machine? How is the consumer to know which is actually worth even a used 1050Ti?
 
There is serious sampling distortion in using just the HP G1 as representative of the cheapO'crap machine. Granted, HP machines are fairly popular; nonetheless, the performance spread at this budget end of the spectrum is fairly wide, and the HP G1 alone doesn't seem an adequate representative.

While that is true, based on the specifications for the machine (https://support.hp.com/us-en/document/c03808397; https://www.cnet.com/products/hp-elitedesk-800-g1-core-i5-4570-3-2-ghz-monitor-none-series/), the G1 would be near the high end (note that it uses the full Haswell i5-4570, not the dual-core 4570T or the low-power 4570S). Some even came with the Haswell i7-4770 (https://www.cnet.com/products/hp-elitedesk-800-g1-tower-core-i7-4770-3-4-ghz-8-gb-256-gb-g6a05usaba/) -- again, the full version, not the dual-core or low-power versions. Haswells should still be decent CPUs...provided they have a good GPU paired with them & aren't crippled by the motherboard or PSU.
 
The problem is: How do you know you have a cheap OEM machine or a slightly less cheap OEM machine? How is the consumer to know which is actually worth even a used 1050Ti?

You don't seem to understand: no matter how crappy the motherboard or PSU is, the PC still becomes a gaming-capable PC when you add a GPU to it. The article just proves there isn't much of a gain in replacing a 1050 Ti with a 1650.
 
OEM PSUs are hit or miss. In some of my Dell OEMs, the PSUs are good-quality 300W to 500W Gold-rated units made by Delta Electronics. Others have no-name or inferior Lite-On models with standard efficiency.
 
Just a question... if I replaced the original power supply with a 600W one and used a 24-pin to 6-pin adapter, would that solve the low-power issue, or is the problem the 6-pin socket itself providing a poor electrical feed or something?
 
I got an old HP ProDesk 600 G1 SFF that work was throwing out - it has an Intel i7-4770 in it. I decided to upgrade my Intel i5-760 desktop with it as a project, using the HP mobo, HSF and CPU. I already have 8GB of DDR3-1600 RAM and an Nvidia GTX 960 2GB video card with a 6-pin header.

I thoroughly dusted the entire existing system out, including inside the PSU and ran Cinebench R20 on it before continuing.

Well HP did everything in their power to make it difficult. Custom pinouts on every header, odd motherboard mounting holes, custom power pinouts as mentioned in this article. I bought a custom power cable adaptor from moddiy which works perfectly to convert from my Antec 700W PSU to the HP motherboard. All my devices are powered from my PSU, not the HP mobo, and my GTX 960 has the 6 pin power from the PSU as well.

I had to use small nuts to secure the HSF to the motherboard, as my case does not have risers to screw it into. Which means for now my tower is lying flat.

Have booted and everything works fine. Cinebench R20 reports roughly a 60% boost in performance. However, Intel XTU reports Current Limit Throttling while running the test, at roughly 52W package TDP, which is as high as it gets. This is with everything powered from the PSU except the motherboard, RAM and HSF fan, and I don't know if mobo power goes to the video card when it has its own 6-pin lead from the PSU - maybe some.

Still an upgrade for me, playing Sekiro and Metro: Exodus (much smoother). Grand total cost to me was roughly AU$30 for the PSU cable and some nuts.
 
Have booted and everything works fine. Cinebench R20 reports roughly a 60% boost in performance. However, Intel XTU reports Current Limit Throttling while running the test, at roughly 52W package TDP, which is as high as it gets. This is with everything powered from the PSU except the motherboard, RAM and HSF fan, and I don't know if mobo power goes to the video card when it has its own 6-pin lead from the PSU - maybe some.

Undervolt your processor using XTU (if the BIOS allows it) and you may be able to get full performance within that 52W power envelope. With 100% CPU use in Handbrake my i5-8400 will use 72W, but with a -0.07v offset it uses 62W, which is under the 65W default power limit.

I undervolt everything I own in an effort to use less power and generate less heat. Another good use for UV is on the GTX 1050 Ti or any other 75W slot-power-only GPU, using MSI Afterburner. With a 3DMark Fire Strike load it'll power limit right at 75W by default, but with an undervolted power curve I can get higher clocks and lower power draw (under 70W) and eliminate power throttling. Same goes for my PNY GTX 1080, which has a crap cooler on it. Undervolt that and I'm using 40W less power (140W vs 180W) with higher clocks, and the thing runs at 71°C instead of 80°C.
 
Undervolt your processor using XTU (if the BIOS allows it) and you may be able to get full performance within that 52W power envelope. With 100% CPU use in Handbrake my i5-8400 will use 72W, but with a -0.07v offset it uses 62W, which is under the 65W default power limit.

I undervolt everything I own in an effort to use less power and generate less heat. Another good use for UV is on the GTX 1050 Ti or any other 75W slot-power-only GPU, using MSI Afterburner. With a 3DMark Fire Strike load it'll power limit right at 75W by default, but with an undervolted power curve I can get higher clocks and lower power draw (under 70W) and eliminate power throttling. Same goes for my PNY GTX 1080, which has a crap cooler on it. Undervolt that and I'm using 40W less power (140W vs 180W) with higher clocks, and the thing runs at 71°C instead of 80°C.

That's the part of the setup I've yet to fully understand. What's the best guide to explain it? For the "Google it!" folks: I've done that many times, but between the massive pile of sites and sorting through all the BS and malware, I just end up giving up on it.
 
That's the part of the setup I've yet to fully understand. What's the best guide to explain it? For the "Google it!" folks: I've done that many times, but between the massive pile of sites and sorting through all the BS and malware, I just end up giving up on it.
I'll assume you're referring to undervolting the GPU and not the CPU, as I found the CPU undervolt to be pretty straightforward and the GPU UV to be nonsensical for months.

I used 10-series Nvidia GPUs for this (mostly 1080, some 1060 and 1050Ti) and I assume the AMD Radeons are similar but use different tools. I'm planning on getting an RX 570 soon for PC&Mac eGPU testing so I'll have my answer then.

Starting from scratch, install TechPowerUp's GPU-Z and MSI Afterburner and then run both. Keep GPU-Z on the Sensors tab, as that's all your monitoring information. Let's set a baseline: install and run Unigine's Valley (you can use Heaven or another looping windowed GPU load, though I don't like FurMark for this). If using Valley, set it to Extreme HD, uncheck Full Screen and set the resolution smaller than your screen so you can view it alongside GPU-Z and Afterburner.

Run it and it will loop continuously. In GPU-Z, look at your GPU Clock, GPU Temp, Power Consumption and VDDC (voltage). They will all flatten out eventually. Note the VDDC; for my GTX 1080 it's 1.050v.
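If you'd rather log those readings to a file than watch the GPU-Z window, something like this works too. It's just my own convenience sketch using nvidia-smi (which ships with the Nvidia driver); nvidia-smi can't see VDDC, so keep GPU-Z open for the voltage:

```python
# Poll nvidia-smi once a second and log core clock, power draw and temp.
# nvidia-smi does not expose core voltage (VDDC) - use GPU-Z for that.
import csv
import subprocess
import time

FIELDS = "clocks.gr,power.draw,temperature.gpu"

with open("gpu_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time_s", "core_mhz", "power_w", "temp_c"])
    start = time.time()
    while True:  # Ctrl-C to stop once the readings have flattened out
        out = subprocess.check_output(
            ["nvidia-smi", f"--query-gpu={FIELDS}",
             "--format=csv,noheader,nounits"],
            text=True)
        mhz, watts, temp = (v.strip() for v in out.split(","))
        writer.writerow([round(time.time() - start, 1), mhz, watts, temp])
        time.sleep(1)
```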

In Afterburner, bump up the Core Clock in 50 MHz or so increments and click Apply until Valley shows errors or crashes. Reduce by 20 MHz or so after the first crash and try again. If you make it to +150, maybe switch to 20 MHz increments after that. My GPUs don't really tolerate +200 MHz, and my 1080 likes +175 or so.

If your GPU and cooler are up to it (temps below 80C), increase the Power Limit (not an option on a 75W slot-power GPU like the 1050 Ti) to reduce that limiting factor. If the GPU is at 80C or so but the fans are not running close to 100%, I prefer to set a custom fan curve in Afterburner with 100% fans at 83C and ~20% fans at 40C (I prefer cooler to quieter). Click Settings, then the Fan tab, enable user-defined fan control and edit the graph. If the fan speed graph is a stairstep instead of diagonal lines, double-click anywhere on the graph to switch to diagonal. Click the points to select them (they're a royal pain to select) and drag, or hit Del to remove. Click Apply and OK when you're done. The Fan Speed (%) in Afterburner should be set to Auto now; click it and hit Apply if not.
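If it helps to picture what that fan curve looks like as numbers, it's just a straight line between those two points. A throwaway sketch (Afterburner does this interpolation for you; this is only to show the shape):

```python
def fan_percent(temp_c, low=(40, 20), high=(83, 100)):
    """My preferred curve: ~20% fan at 40C, 100% at 83C, clamped outside."""
    (t0, f0), (t1, f1) = low, high
    if temp_c <= t0:
        return f0
    if temp_c >= t1:
        return f1
    return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)

print(fan_percent(60))  # roughly 57% fan at 60C
```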

We've now established the max speed that your GPU will run at. Now we're going to back off from that and give up a bit of speed for a (hopefully) large voltage, wattage and heat reduction.

You can stop running Valley for now and give your GPU and fans a rest. Also let's save your settings. In Afterburner, click Save and then one of the numbers. Since this is theoretical max, I chose slot 5.

Now let's see what that Core Clock setting actually does and how we take advantage of it. In Afterburner, hit Ctrl-F. This is the table your GPU uses under load to match a GPU speed to a voltage level. It seems that in practice your GPU has a preferred *voltage* it can run at and then limits its clock speed to match. You noted a VDDC voltage in GPU-Z earlier when we ran the Valley loop before the overclock. Look that number up along the bottom of the chart and it should match the max clock speed you saw on the left of the chart.

You saved your setting as #5 above. I'll use +150 MHz as a reference and my values from now on; *plug in your numbers whenever you see these*. With the chart open, set your Core Clock back to +0 in Afterburner. Note that the chart values just jumped down, and at the same voltage (again 1.050v for mine) the MHz speed is lower. For my 1080 card, those values are:

@ +0 MHz: 1.050v 1911 MHz
@ +150 MHz: 1.050v 2061 MHz

This means that at the +0 MHz setting, we are using too much voltage to produce 1911 MHz. That's wasted power and heat. Let's go back to the +150 MHz Core Clock curve and read off the closest entry to 1911 MHz:

@ +150 MHz: 0.925v 1910 MHz

If we restrict the GPU to 1910MHz, we should only need 0.925v to run stably. That right there is a 0.125v undervolt waiting to happen.
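As a rough sanity check on why that 0.125v matters: core power scales roughly with voltage squared times frequency (ignoring leakage, memory and fans), so the expected saving is in this ballpark. This is a back-of-the-envelope estimate, not anything Afterburner reports:

```python
# Back-of-the-envelope: dynamic power ~ V^2 * f for the GPU core.
stock = (1.050, 1911)  # (volts, MHz) at the +0 MHz offset
uv    = (0.925, 1910)  # (volts, MHz) on the +150 MHz curve, capped

ratio = (uv[0] ** 2 * uv[1]) / (stock[0] ** 2 * stock[1])
print(f"core power ratio ~ {ratio:.2f}")  # ~0.78, i.e. roughly 20-25% less
```

Which lines up reasonably well with the 180W to 140W drop I mentioned for my 1080 earlier.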

Side note: you'll see much higher voltages and speeds listed towards the right of the chart, but you'll never reach those with your GPU, so ignore them. Below that to the left, though, in theory every voltage level should be stable at its matching frequency at your +150 MHz overclock.

Let's use 1910 MHz as our target, and we're going to edit the chart to prevent the GPU from using more than 0.925v and 1910 MHz. Set your Core Clock in Afterburner to +150 if it's not there already. Back in the chart, click on every data point to the right of the 925 setting and reduce the number displayed at the left of the chart to 1910. Yes, that's a lot of data points to edit, and selecting each point is the same royal pain as the fan curve. You may need to hit things around the house to vent your annoyance from time to time. I drag until the number is close and then use the arrow keys to fine-tune to 1910.

When you're done you should have a nice curve from the left up to 925/1910, and a flat line to the right. *Do not lose your work*:

Click somewhere neutral on the Afterburner window
Click Apply
Click Save
Choose a number, I use 3 for this intermediate setting.

Now you haven't lost your work. Note that the Core Clock setting now says Curve. Test it out back in Valley again. Note your voltage, it should be 0.925. Note your Power Consumption, it should be noticeably lower. Same for GPU temps and fan speed.

Note your GPU clock: it's a little lower than 1910. Mine settles around 25MHz below the setting I chose, at 1886MHz, sometimes a bit lower when running at max in a game.

You can try to compensate for this by targeting a slightly higher voltage and speed, but my GPU has a bit of a mind of its own about this. The higher up you cut off the curve (say with a much smaller undervolt to 1.00v, where my GPU should do 2025 MHz), the further it settles below the target, down to 1961 MHz in that case. IOW, the closer you get to the theoretical max MHz of your particular GPU in the silicon lottery, the less benefit you get from undervolting. In fact that's why you overvolt a CPU (and GPU, I assume): to get those max clocks and damn the power usage.

Random notes: When you click back into Afterburner at a later time and look at your now half-flat voltage/frequency curve, you may see a small kink in the flat display side which you didn't put in there. Ignore it, it seems to be a display error as I haven't seen it affect my undervolt settings in-game.

You can arbitrarily choose *any* place to cut off/flatten the top of your voltage frequency curve below the 1.05v cutoff and if you're really looking for power savings (like in a laptop GPU), you may want to target even lower clock speeds. I have 3 different presets depending on how GPU intensive the game is and the ambient temp in the room.

One last note: When testing with games that run on a potato, like Rocket League, Minecraft, even Tomb Raider (2013), don't use a curve. I just set my Core clock to minimum (-400MHz) but don't make a custom curve to cut off the higher voltages. It never reaches those higher frequencies during gameplay and seems to use a bit less power with the regular curve. Leave GPU-Z on while playing and check out your usage afterwards, it's fun.

If this actually makes sense to anyone, I'll be amazed. But it makes sense to me.
 
Thanks Lee. I'll have to read it a few times to get a full understanding. A prime example of why I haven't bothered with overclocks and underclocks, including the voltages.
 
Thanks Lee. I'll have to read it a few times to get a full understanding. A prime example of why I haven't bothered with overclocks and underclocks, including the voltages.

It will only start making sense when you go through the process of doing it. I started out with a lot of trial and error (sooo many errors...) before the theory behind the idea suddenly started making sense. Since I started following that theory, I have had 100% success.

The biggest concept that helped me was that once you find a stable overclock (+150 MHz in our example above), every point on that curve below your GPU's max VDDC (voltage) is a stable setting. At any point on that curve (0.95v, 0.90v, 0.80V, whichever) you can limit the voltage by flattening the curve above it, and know you are getting the maximum performance with the minimum power and heat your GPU can tolerate.
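If it helps to see that "flatten everything above your chosen point" idea outside of Afterburner's editor, here's a toy version of the same operation in code. The table values are made up apart from the three points I quoted earlier; this is only to illustrate what the curve edit does, Afterburner is still where you actually do it:

```python
# A few (voltage, MHz) points like the Ctrl-F curve after a stable +150 MHz offset.
# Only the 0.925/1910, 1.000/2025 and 1.050/2061 points are from my card;
# the rest are made up for illustration.
curve = [(0.800, 1750), (0.875, 1835), (0.925, 1910),
         (1.000, 2025), (1.050, 2061)]

def flatten(curve, v_cap, mhz_cap):
    """Cap every point above v_cap at mhz_cap - the code equivalent of
    dragging the right-hand side of the Afterburner curve down flat."""
    return [(v, mhz if v <= v_cap else mhz_cap) for v, mhz in curve]

undervolted = flatten(curve, v_cap=0.925, mhz_cap=1910)
# The GPU never boosts past 0.925v now, because no higher-voltage point
# offers more MHz than the 0.925v point does.
print(undervolted)
```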
 
It will only start making sense when you go through the process of doing it. I started out with a lot of trial and error (sooo many errors...) before the theory behind the idea suddenly started making sense. Since I started following that theory, I have had 100% success.

The biggest concept that helped me was that once you find a stable overclock (+150 MHz in our example above), every point on that curve below your GPU's max VDDC (voltage) is a stable setting. At any point on that curve (0.95v, 0.90v, 0.80V, whichever) you can limit the voltage by flattening the curve above it, and know you are getting the maximum performance with the minimum power and heat your GPU can tolerate.
The analogy that comes to mind is blood pressure at any given level activity. So are the chips being released with high blood pressure?
 
The analogy that comes to mind is blood pressure at any given level activity. So are the chips being released with high blood pressure?
Yes. Hmm, I like that. Continuing with that analogy:

You need a minimum blood pressure to keep your brain conscious, but of course there are small variations in everyone's system. So overall you need to start everyone with blood pressure that's a bit high to ensure that those variations all result in high enough blood pressure to keep everyone conscious.
Yes, a few people need that full pressure but most don't, so you can slowly lower the pressure of an individual until they pass out and then raise it back a bit to keep them conscious. So most people can get away with lower pressure but you need to spend the time testing to find everyone's individual limit.

That's exactly what I do with GPUs and CPUs, find that individual tolerance. I find it fun but I'll bet that most could not care less and find the whole process boring.
 
Elitedesk 800 G1 SFF - I7 4790, 16gb ram, 480gb ssd, GTX 1650 GDDR6, stock psu.

CINEBENCH R20 Score - 1675

Flashed a modded BIOS (no current throttling anymore) and the performance improved a little bit...

CINEBENCH R20 Score - 1850

My gaming scores are much higher than the Z97 Core i5 setup of this review... Both average and 1% low...



 
Elitedesk 800 G1 SFF - I7 4790, 16gb ram, 480gb ssd, GTX 1650 GDDR6, stock psu.

CINEBENCH R20 Score - 1675

Flashed a modded BIOS (no current throttling anymore) and the performance improved a little bit...

CINEBENCH R20 Score - 1850

My gaming scores are much higher than the Z97 Core i5 setup of this review... Both average and 1% low...


I want to use an HP 800 G1 mobo with an i5-4570 in a new PC case with a 550W PSU and a GTX 1660 Super.
I already bought the PSU-to-mobo adaptor and a USB 3.0 ribbon for the front panel.

Do you believe that will work?

It's an old topic, but I'm broke and can't afford to buy a new PC. My current PC looks like this:
Dell Precision 3500 with Win 10 in a new case, i7-860 (1st gen), 16GB DDR3, GTX 770 (2GB) and a 550W PSU.
I want to make a cheap upgrade using the HP mobo with the i5, but I'm not sure it's worth it.

A new AMD mobo (MSI B450M Mortar Max) would cost me £80, plus a Ryzen 3100 or 3300 for £100-130, but I'd have to upgrade the RAM to DDR4 and buy Windows 10 on top, so it's a £400 upgrade with no benefit for the old GPU.

If the HP option (which I got for free) works, then only the GPU will cost me anything.

So will it work?
 
I want to use an HP 800 G1 mobo with an i5-4570 in a new PC case with a 550W PSU and a GTX 1660 Super.
I already bought the PSU-to-mobo adaptor and a USB 3.0 ribbon for the front panel.

Do you believe that will work?
The original HP EliteDesk 800 G1 was designed to take the 84W Intel i7-4770 processor, regardless of which case form factor it came in, so an i5-4570 will work fine in its motherboard.
 
Hello, I have a 600 G1 TWR that I bought for 80 Euros, and I added a 75W GTX 1650 with no external power connector.
I use it for sim racing; I can play iRacing in VR at 75 FPS on an Oculus DK2, or ACC at 1080p on a TV at 80-100 FPS.
Because I can't play ACC in VR, I was thinking of replacing the GTX 1650 with a 6 GB GTX 1060, but after reading this article I have doubts about whether:
- the PSU will be able to power the GPU through a SATA-to-PCIe adapter
- the GPU will fit over the SATA and front USB connectors
- the CPU will avoid becoming a bottleneck

Currently, with the GTX 1650 running the Unigine benchmark, I measure 120W with a wattmeter, so it seems that either the CPU isn't being used much in the benchmark or it's throttled like the article says.
The GTX 1060 has a 120W TDP, so the PSU only needs to supply about 45W more ...
What do you think, has anybody fitted a long GPU into one of these?
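For what it's worth, here is the arithmetic behind that estimate spelled out; the 85% PSU efficiency figure is just an assumption for illustration:

```python
# Rough wall-power estimate for swapping a 75W GTX 1650 for a 120W GTX 1060.
measured_wall_w = 120          # measured with the GTX 1650 under Unigine
extra_gpu_dc_w  = 120 - 75     # worst-case extra DC load from the new GPU
psu_efficiency  = 0.85         # assumption, not a measured value

extra_wall_w = extra_gpu_dc_w / psu_efficiency
print(f"estimated wall draw: ~{measured_wall_w + extra_wall_w:.0f} W")  # ~173 W
```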
 