AMD might bring back the 16-core Threadripper with Genesis Peak

I'm thinking customers aren't happy spending $750-800 on a CPU (5950X) only to find out they spent all that extra money without receiving any of the HEDT benefits, as it's still on the mainstream boards. Imagine building a $4000 rig and having it limited to mainstream boards. The bottom-end HEDT CPU from AMD is $1400, which basically says AMD is defining an HEDT build's price at over $7000.

What are you putting in a $4000 rig?

$750 CPU
$350 Mobo
$400 64 GB RAM
$750 RTX 3080
$250 2 TB main SSD
$600 4 TB Steam SSD
$150 Case/PSU
=
$3250

(And the only "extra money" being spent is the $200 difference from the 5900X)
 
You may not like it, but they are not targeting the same customers. Ryzen targets people for whom the performance of their workstation CPU doesn't change their business's profitability (amateur streamers, gamers, etc.). Threadripper targets people who make more money from higher CPU performance (professional production studios, independent producers, engineers running simulations, etc.). Therefore both AMD and Intel will try to switch to value-based pricing, where they take a share of the extra profit buyers can make with their product's extra performance.

So what if it's $7k if you make an extra $50k profit in the two years you operate it?

In this scenario, as a chip maker you MUST limit the performance of the consumer line to make sure that most professionals can make more money using your pro line. If you didn't limit PCIe lanes, for example, a bunch of professionals could be happy with a 5950X, which is priced in the competitive consumer category.

I think you've conflated the Threadripper Pro lineup on the WRX80 socket platform, which is the workstation professional lineup you're describing, with the Threadripper lineup on the TRX40 socket platform, which is the prosumer (or HEDT per Intel) lineup.

AMD specifically markets Threadripper Pro as the workstation lineup, vs Threadripper being marketed towards those who are both looking to game and operate professional workloads (i.e. prosumer). Previously, despite what you've conflated, the pricing on Threadripper's 1st generation ranged from $550 to $1000 at release, and the 2nd generation ranged from $650 to $1800.

My complaint about them raising the entry pricing into the HEDT lineup from $650 to $1400 in 1 generation is a legit complaint, not covered by your response.
 
My complaint about them raising the entry pricing into the HEDT lineup from $650 to $1400 in 1 generation is a legit complaint, not covered by your response.

7nm Threadrippers offer 24 cores, and the IO die is the same as on Epyc (416 mm²). Ryzen uses only a 125 mm² IO die.

So considering the need for a much larger IO die and 50% more cores, there is nothing wrong with the pricing.
 
What are you putting in a $4000 rig?

$750 CPU
$350 Mobo
$400 64 GB RAM
$750 RTX 3080
$250 2 TB main SSD
$600 4 TB Steam SSD
$150 Case/PSU
=
$3250

(And the only "extra money" being spent is the $200 difference from the 5900X)
Your case/PSU is too low... that should be closer to $400... and your motherboard could be pricier... plus any peripherals (monitor, keyboard, mouse) and taxes... that easily puts you over $4000.
 
I remember all the debate about the 1900X (the 1st gen TR 8-core that was about $100 more than the 1800X) and this seems very similar. My opinion hasn't changed. If they launch such a product, I like that AMD offers the option of a part with all the PCIe lanes and memory channels without having to spend money on cores you don't need.

That all assumes that the PCIe and memory controllers aren't segmented, of course. If they are, then this would just be market segmentation the same as disabling SMT/Hyperthreading on lower end parts...
 
Your case/PSU is too low... that should be closer to $400... and your motherboard could be pricier... plus any peripherals (monitor, keyboard, mouse) and taxes... that easily puts you over $4000.
$400! Dang, my Fractal Design case was $125 and it's been years since I bought a PSU. (I basically listed my rig for this example)

I find it hard to believe that someone buying a $4000 rig doesn't already own a computer and ergo already has at least a mouse and keyboard.
 
$400! Dang, my Fractal Design case was $125 and it's been years since I bought a PSU. (I basically listed my rig for this example)

I find it hard to believe that someone buying a $4000 rig doesn't already own a computer and ergo already has at least a mouse and keyboard.
Yes... but this is a new rig... so one has to assume you need the peripherals... and a PSU that can run an HEDT won’t be cheap...
 
Yes... but this is a new rig... so one has to assume you need the peripherals... and a PSU that can run an HEDT won’t be cheap...
No, the HEDT was a $7000 rig in the OP. The $4000 rig was supposed to be someone who bought the 5950X and then was distraught by the differences between the consumer and workstation setup. (Which again suggests someone not brand new to owning computers.)
 
@Pete Flint
"In the first two Threadripper incarnations, the lowest tier HEDT part AMD sold was a 16-core, 32-thread chip, boasting more cores than the highest-tier, 8-core consumer parts. The following release cycle saw the 16-core, Ryzen 3950X enter the top of the consumer category, with the next best chip being Castle Peak’s 24-core, Threadripper 3960X."

I beg to differ, but the first 2 Threadripper iterations were the 19##X and 29##X. The lowest core count in an AMD Threadripper HEDT chip was the Threadripper 1900X - an 8-core, 16-thread chip. The 16-core 1950X chip was the TOP tier of the Threadripper line at the time. The 29##X series dropped the 8-core chip but added 24- and 32-core chips at the top end.
 
I think you've conflated the Threadripper Pro lineup on the WRX80 socket platform, which is the workstation professional lineup you're describing, with the Threadripper lineup on the TRX40 socket platform, which is the prosumer (or HEDT per Intel) lineup.

AMD specifically markets Threadripper Pro as the workstation lineup, vs Threadripper being marketed towards those who are both looking to game and operate professional workloads (i.e. prosumer). Previously, despite what you've conflated, the pricing on Threadripper's 1st generation ranged from $550 to $1000 at release, and the 2nd generation ranged from $650 to $1800.

My complaint about them raising the entry pricing into the HEDT lineup from $650 to $1400 in 1 generation is a legit complaint, not covered by your response.
Ok fair criticism, I didn’t consider the split, but it fits the approach even better.

I believe Threadripper is being positioned for individuals or businesses making some money from workstation performance (e.g. streamers and content creators with profitable YouTube channels, other professionals with modest budgets, etc.).

Threadripper Pro is positioned for companies making eye-watering amounts of money from performance (production studios, simulation businesses, etc.).

My argument is that they (sensibly) killed "prosumer": there is "consumer" and two tiers of "professional", and their pricing is trying to make the most money from the people who make the most money, while staying as competitive as possible with volume consumer parts.

Professionals aren't supposed to "like" the price; they should evaluate it using a cost-benefit analysis. If you don't make more money from your workstation performance, AMD wants you buying Ryzen. I bet they looked at successful creators on $600 Threadrippers, worked out how much time it saved them, and said "we'll do better if we take 20% of that market up to the Pro line and push the rest back to volume consumer parts".
 
No, the HEDT was a $7000 rig in the OP. The $4000 rig was supposed to be someone who bought the 5950X and then was distraught by the differences between the consumer and workstation setup. (Which again suggests someone not brand new to owning computers.)
Fair enough... but the pricing still stands... the 5950X needs a hefty PSU as well...
 
Let's use a 5800X and a 3080 for this example and use the MSRP. The idea of losing 2-3% performance after paying $450 for a CPU to power my $700 graphics card just because I have too many M.2 cards is absurd.

Maybe someone can explain this to me, but what on a CPU is physically limiting how many PCIe lanes you can have? If a 5600X can have the same amount as a 5950X then I say nothing. 24 PCIe lanes seems like an arbitrary number, and people paying big money for CPUs shouldn't be limited by it. For someone with a 5600X, 24 lanes might be fine. If I'm paying $1000 for a 16-core CPU then maybe 24 lanes isn't really enough.

If you ran your 3080 at PCIe 4.0 x8 you wouldn't lose any performance, because that is equivalent to PCIe 3.0 x16 in terms of bandwidth, which no single GPU fully saturates yet as far as I know. So technically there are 8 spare lanes if you're on a PCIe 4.0 platform with a new PCIe 4.0 GPU.
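To put rough numbers on that (just a back-of-the-envelope sketch using the commonly quoted ~1 GB/s per PCIe 3.0 lane and ~2 GB/s per PCIe 4.0 lane, ignoring protocol overhead):

```python
# Back-of-the-envelope PCIe link bandwidth, one direction, ignoring protocol overhead
PER_LANE_GBPS = {"3.0": 0.985, "4.0": 1.969}  # GB/s per lane after 128b/130b encoding

def link_bandwidth(gen: str, lanes: int) -> float:
    """Approximate one-direction link bandwidth in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

print(f"PCIe 3.0 x16: {link_bandwidth('3.0', 16):.1f} GB/s")  # ~15.8 GB/s
print(f"PCIe 4.0 x8 : {link_bandwidth('4.0', 8):.1f} GB/s")   # ~15.8 GB/s
# Same number either way, which is why a 4.0 x8 slot behaves like a 3.0 x16 slot for a GPU.
```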

I am currently on X570 with a 5800X and a 3080; I have two 2TB NVMe drives in RAID 0, and my 3080 runs at PCIe 4.0 x16 just because I don't have any other add-on cards. But theoretically those spare lanes are there. (I say theoretically only because neither I nor any of the major tech tubers have seemingly tested this yet, but on paper that's the case.)

My whole point being that, so long as you're utilizing PCIe 4.0, it's a bit of a step up from previous mainstream systems in terms of lanes. Though I admit I wouldn't mind them adding a few more at this point if they want to have high-core-count CPUs on mainstream.
 
Seeing as how you didn't understand what I was trying to say, I will spell it out very plainly for you. I shouldn't have to drop $2000+ on a Threadripper system if I want to run an extra NVMe SSD. This is even more ridiculous when you consider the 5950X puts you in Threadripper system territory price-wise. Yes, the lane count might be set by the IO die, but AMD just seemingly chose a number; it's not as if the number of cores or pins has anything to do with the number of PCIe lanes.

It was one thing when AMD was the "budget underdog," so allowances could be made. However, they're now competing at the high end with high-end prices. There really isn't any room for them to recycle IO dies at this price point, considering everything basically runs on the PCIe bus now.

Plug a dual or quad M.2 card into that x4 slot on the X570 board. Boom, 4 more M.2 slots (all sharing x4 worth of PCIe 4.0 bandwidth). You're not really complaining about not being able to give all your M.2 drives full bandwidth simultaneously, are you? What are you trying to do on a desktop system that would demand pulling x4 lanes of PCIe 4.0 bandwidth to every drive at once? And why would you expect storage-server performance levels from a consumer product?

Also, if you just want more space, look at server drives. U.2 and E1.S PCIe NVMe drives are available in 4TB, 8TB, 16TB, and 30TB capacities. Slap the video card in the x8 slot, play some games, and uninstall the benchmark software.

If you're pegging your 144Hz monitor then a 2% loss is, drumroll, 2.88 fps (which doesn't occur at PCIe 4.0 x8, but for discussion's sake).

What I would like to see is more M.2 slots, even if they were shared (which would require a PLX chip). If I add 3-4 M.2 drives in a consumer setup, it's because I've added more as I need more space. Right now I'm full up with two 2TB M.2 drives, so in the future I'll either need to swap or install an add-in card for more slots. But lack of simultaneous bandwidth won't be a concern for 99.9% of users in that scenario; a simple lack of slots will be the concern.

8 GB/s to your storage subsystem is great, but I don't really need 8 GB/s to *each* drive in my subsystem (at least not on my desktop; if I need that then I'll get a 1U server stuffed with E1.S NVMe sticks and with 40GbE or InfiniBand ports on the back).
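And just to sketch the shared-bandwidth math (again a rough estimate using ~2 GB/s per PCIe 4.0 lane and ignoring overhead):

```python
# Rough sketch: four NVMe drives hanging off a single PCIe 4.0 x4 slot
PER_LANE_GBPS = 1.969               # ~2 GB/s per PCIe 4.0 lane, one direction
slot_total = 4 * PER_LANE_GBPS      # ~7.9 GB/s for the whole x4 slot
drives = 4
per_drive = slot_total / drives     # worst case: all four drives hammered at once
sata3 = 0.6                         # GB/s, for comparison

print(f"Whole x4 slot        : {slot_total:.1f} GB/s")
print(f"Per drive (all busy) : {per_drive:.1f} GB/s")
print(f"SATA III reference   : {sata3:.1f} GB/s")
# Even fully shared, each drive still gets roughly 3x SATA speed in the worst case.
```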
 
If you ran your 3080 at PCIe 4.0 x8 you wouldn't lose any performance, because that is equivalent to PCIe 3.0 x16 in terms of bandwidth, which no single GPU fully saturates yet as far as I know. So technically there are 8 spare lanes if you're on a PCIe 4.0 platform with a new PCIe 4.0 GPU.

I am currently on X570 with a 5800X and a 3080; I have two 2TB NVMe drives in RAID 0, and my 3080 runs at PCIe 4.0 x16 just because I don't have any other add-on cards. But theoretically those spare lanes are there. (I say theoretically only because neither I nor any of the major tech tubers have seemingly tested this yet, but on paper that's the case.)

My whole point being that, so long as you're utilizing PCIe 4.0, it's a bit of a step up from previous mainstream systems in terms of lanes. Though I admit I wouldn't mind them adding a few more at this point if they want to have high-core-count CPUs on mainstream.

Every conversation about PCIe lanes for a graphics card, SLI/Crossfire, and CPU core counts seems to have the same 3 contradictory arguments:

1. Graphics cards don't use enough bandwidth to saturate 8 lanes
2. SLI/Crossfire doesn't scale.
3. Games don't use that many cores

Perhaps, upon investigation, people might find a causal link between the three? Perhaps they would find that, given more lanes, bottlenecks would open up, as graphics cards operate at nearly 100% usage while CPUs don't.
 
The point about saturating the PCIe lanes is highly dependent on the application running. For example, the image below shows the use of my RTX 2080 Super in two 3DMark tests - first, highlighted in yellow, is the PCIe Bandwidth benchmark (I did it twice to check the results) and then the Time Spy Extreme test (blue box):

[Image: pcie_bw_usage.jpg - RTX 2080 Super PCIe bus usage during the 3DMark PCIe Bandwidth test (yellow) and Time Spy Extreme (blue)]

In the first test, the bus usage was 92% and 3DMark confirmed this; in Time Spy, the usage was 10%. The former test streams a pile of vertex and texture data for each frame rendered, whereas the latter one uploads everything into the VRAM once, before the test commences.

Games operate mostly like the Time Spy test, unless it's an open-world game - e.g. Far Cry, Skyrim, Flight Simulator. However, even those don't stream stuff every single frame, as it would grind performance into the dirt. Graphics cards with small amounts of memory can require more frequent updates, but since they tend to lack rendering power, the amount of data sent back and forth is generally quite low.

Only in extreme cases, where especially heavy rendering techniques (such as ray tracing) or very high resolutions (frame and textures) are used, will there be a burst of bus traffic.
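If anyone wants to sanity-check bus usage on their own card rather than trusting 3DMark's counter, here's a minimal sketch using the pynvml package (NVIDIA only, and assuming a reasonably recent driver); run it in one terminal while the game or benchmark is going:

```python
# Minimal PCIe throughput sampler via NVML (pip install pynvml)
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(10):
        # NVML reports these counters in KB/s over a short sampling window
        rx = pynvml.nvmlDeviceGetPcieThroughput(gpu, pynvml.NVML_PCIE_UTIL_RX_BYTES)
        tx = pynvml.nvmlDeviceGetPcieThroughput(gpu, pynvml.NVML_PCIE_UTIL_TX_BYTES)
        print(f"PCIe RX: {rx / 1024:,.0f} MB/s   TX: {tx / 1024:,.0f} MB/s")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```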
 