Weekend Open Forum: To Ryzen or not to Ryzen?

i7-2600K at the moment.

I tend to do more 3D modeling and work-related tasks, some virtualization. Graphics performance was competitive enough, and I have a soft spot for the underdog.

AMD gets my money next year. RYZEN!
 
Of course not. I'll adopt the wait-and-see approach. If they're still on par performance-wise with Intel's latest offerings at a cheaper price when I decide to upgrade my current rig (and it won't be anytime soon), then I'll certainly give them some serious consideration. One swallow maketh not a summer.
 
At one time I used Intel CPUs, but had "issues" with the systems built with them... that, and the overly inflated cost of Intel CPUs, led me to use AMD CPUs. Over time I've come to think of Intel as a somewhat "slimy" company, mainly due to what I view as price gouging.
My builds are NOT for gaming... but rather for productivity and content creation.

All that being said, a part of me wants to "jump" on the new CPUs, but the practical part of me says to let the platform stabilize/mature a bit before making my move. I suspect I will build a new Ryzen system by year's end, but am still undecided just which one of the 8-core parts I will purchase.
 
The only desktop users who benefit from more than four cores are content creators, so gaming won't really help Ryzen or Vega sales. Game developers seem uninterested in taking advantage of highly parallel processing, so it will probably be up to Microsoft and AMD to do it all in software. When that happens we may see things change VERY quickly for AMD. For now, however, the best thing about Ryzen is how it's prompted Intel to slash CPU prices.
 
Current system is sufficient - surprising considering its age.

Will let Ryzen age in the bottle for at least a year... drivers and chipsets are just as important, so I need some history.
 
I actually sold my 6700K (did 4.6 at 1.35V) and ASRock OC Formula Z170 motherboard to get a 1700X (just leaving it at the XFR, but it runs 3.9 on all cores all the time at stock voltage). No regrets at all, well, except for the motherboard situation. I rented a car and drove to Detroit to a Micro Center for a B350 board. Granted, I think it is a fine board, but ugh. Subjectively I don't get as high fps in games, but it does feel much smoother. Also, you can multitask the **** out of your PC even while gaming and it doesn't blink.
 
Last week I sold my Intel Core i7 6700K and MSI Z170A-G45 Gaming motherboard. I'm still waiting to receive my new AMD Ryzen 7 1800X and MSI X370 Gaming Pro Carbon (the X370 is not in stock).
My monitor is 1440p, so gaming performance should only be a couple of frames lower until Microsoft issues a fix. Even without this fix/update I will still be more than happy with 4 more cores. Everything else will run much faster, especially since I'm using Handbrake almost every weekend.
Could try some overclocking as well with my custom loop; I'll see how far it goes.

I haven't owned an AMD CPU since 2008. As soon as the i7 920 came out I bought it, and an i7 has followed me ever since. Not that I'm an AMD fanboy, but Intel stayed on 4-core i7s for too long, and X99 is overpriced.
 
i7-6700K, so not for me. If I were buying or making a recommendation to someone, I'd certainly look seriously into Ryzen.
 
My next build is definitely going to be their next set of chips. I'm still sitting pretty with my 3770K. I don't think I'll go Intel unless they drop those i7 prices, because AMD has got the price-to-performance ratio nailed down.
 
I generally like Ryzen, but I'm not in the market for one. I wanted to upgrade my HTPC to Bristol Ridge, but AMD hasn't seen fit to release it to the public, so I'm planning to upgrade to a Pentium G4560 + low profile Radeon RX 460 (to be underclocked and undervolted).
 
I'm with the i7-6700k crowd right now, so no upgrades for a while. When the time comes (2 years maybe?) I'll definitely give Ryzen due consideration, depending on what Intel has to compete with.
 
Well... if my motherboard would show up, I would probably put my Ryzen 1800X in it and be testing whether it beats my X5675 running at 4GHz. So yes, I am still running my LGA1366 till Asus gets their game in gear.

It will be faster; there is no if.

I'm also on an i7-970 at 4GHz and waiting until Vega is out before I build a Ryzen setup.
 
If I had to upgrade, I would likely go for a Ryzen 6-core. AMD is back.

My i5-4690K at 4.5GHz will do for now.
 
I'm not too disappointed with the performance of my 5 year old desktop but I don't trust it any more. I've had to put in a PCI-E card to give me SATA ports because the ones on the motherboard no longer work and I'm always worrying about what's going to die next.

It might be summer before I have enough money for a new system ... which will give places like TechSpot plenty of time to do AM4 motherboard reviews and for Ryzen owners to give feedback on their processors and motherboards. That's when I'll make a decision about whether to go back to AMD with Ryzen or stay with Intel.
 
Well, what I actually should be doing right now is building an i5-6600K box, for which I have all the parts save for a CPU cooler. I go out of my way to find 4-slot memory boards, and Cooler Master has a habit of foiling my plans with coolers that block the #0 (?) RAM slot. I guess I'll have to go with a Noctua, but they're so damned ugly... :D

In any event, this build is getting my last copy of Windows 7, and I'm hoping it will last me the rest of my life. So no, I won't be building a Ryzen-based system in the foreseeable future. With some of my earlier poor life choices, that might literally mean the rest of my life.
 
First time I'm seeing people go goo-goo-ha-ha over a CPU for gaming since the 386-486-Pentium era.

It's the graphics card that really matters in today's gaming.

Have i7 4790K. No plans for Ryzen.
 
The only desktop users who benefit from more than four cores are content creators, so gaming won't really help Ryzen or Vega sales. Game developers seem uninterested in taking advantage of highly parallel processing, so it will probably be up to Microsoft and AMD to do it all in software.
I've been wondering for a long time why parallel processing is not controlled at the OS level. It has never made sense to me, thinking about the problems applications have with multi-threaded machines. The operating system should be able to spread a single thread's work across multiple cores. An operating system that doesn't allow all applications to take advantage of all threads just seems incomplete to me.
 
An operating system that doesn't allow all applications to take advantage of all threads just seems incomplete to me.
By the same token, it seems like the CPU would waste time needing to "learn" where to split a single-threaded program. So I think (and trust me, I'm speaking here without any knowledge of programming, so take it FWIW) that the duty falls ultimately on the software developer. I mean, you simply can't split a thread in the middle of a string variable, because then one core would have to communicate with another to clean up the mess.
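
To put that in concrete terms, here's a rough Python sketch of my own (not anything from this thread): when the pieces of work are independent, the programmer can explicitly hand them to a pool of cores, but when each step depends on the previous one, neither the OS nor the CPU can blindly split the loop without changing the answer.

# A minimal sketch of why the split is the developer's job, not the OS's.
from concurrent.futures import ProcessPoolExecutor

def square(n):
    return n * n

def parallel_squares(numbers):
    # Independent work: the programmer explicitly partitions it across cores.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(square, numbers))

def running_total(numbers):
    # Dependent work: step i needs the result of step i-1, so no OS or CPU
    # can blindly hand half of this loop to another core and keep the answer right.
    total = 0
    totals = []
    for n in numbers:
        total += n
        totals.append(total)
    return totals

if __name__ == "__main__":
    data = list(range(10))
    print(parallel_squares(data))
    print(running_total(data))

The first function parallelizes cleanly only because the items don't depend on each other; the second is, roughly speaking, the "can't split it in the middle" situation.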

Although, video cards manage quite well doing similar tasks. In fact, Nvidia has a commercial wing, "Tesla", dedicated to "deep learning" and parallel computing. Here: https://www.pny.com/mega-commercial/explore-our-products/nvidia-tesla-graphics-cards - maybe you can make more sense of it than me...

I'm sorry, I wrongly linked PNY's endeavors.

Here are Nvidia's commercial cloud solutions: http://www.nvidia.com/object/gpu-cloud-computing.html
 
Sadly, chicken-n-egg to me... if it sells well, mobo and software will develop to efficiently suck every bit o' processing power from those many cores; if not, there won't be enough R&D interest to create core-monsters, and Intel will yawn and keep its stunning Moore's/4 rate of progress (CPU processing power doubles in our children's lifetime-ish).
I am at a point where I can financially try it and support AMD's valiant effort. Unfortunately, the tweak-and-study bug died for me with the i5-2500K, as, whoa, it just wasn't necessary for that build, and I found that I didn't miss it much. Aka, lazified vs. 3% improvements. I'll shoot from the hip and say that Ryzen is gonna need that kind of care again (getting Win7 drivers alone is a pretty good indication) and I'm no longer up for that task (but it IS always tempting to give the two renowned behemoths - I&M, inc. - a final 'fickle finger', lol).
Sure wish it could pound the productivity world, though; that is the single thing that might wake the neurons of the remaining best-and-brightest at Intel.
 
From my point of view Ryzen is a massive disappointment. I can't upgrade, not until they offer 40 or more PCI-e lanes on the CPU. Simply put, I cannot transfer all the AICs from my main X99 workstation to even an X370 board. Not enough slots, not enough lanes, not enough SATA ports (and that's ignoring the fact that I already have 28 SAS ports on a RAID controller and expander, with all the non-critical tasks running from the 10 Intel ports on the board).

At this moment in time Ryzen is only for gamers or people who will never utilize more than one or two PCI-e slots. 24 lanes, AMD, having a laugh? Intel had that in X58...

The only good thing from this premiere is that Intel will certainly deliver some sledgehammer (just like they did with the Core architecture to counter AMD's K8, or whatever it was then). They can't afford to sit with their fingers in their backsides anymore, not even in the server market, where Naples shapes up to be something quite potent.

Hmm... perhaps some Ryzen SFF build... it's an idea! (after they refine the architecture)
 
Sadly, chicken-n-egg to me... if it sells well, mobo and software will develop to efficiently suck every bit o' processing power from those many cores; if not, there won't be enough R&D interest to create core-monsters, and Intel will yawn and keep its stunning Moore's/4 rate of progress (CPU processing power doubles in our children's lifetime-ish).
Well, first and foremost, "Moore's Law" is anything but a "law". In mathematical terms it should be classed as, at most, "a theorem", and IMHO it's really nothing more than an "idea" or a dashed-out concept. Yet it always seems to make a great sound bite, which requires very little thought, rhyme, or reason to spit out. It's really just a rehash of "I'll work for a penny a day, doubled each day for a month". Do the numbers on that and you'll find you hit a wall where you simply can't afford to pay the man, at around the three-week point or so.
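
For anyone who wants to actually do the numbers on that doubling penny, here's a quick script of my own (just an illustration, not anything from the post); day n pays 2^(n-1) cents:

# Quick check of the "penny a day, doubled each day" figure mentioned above.
for day in (7, 14, 21, 28, 31):
    pay = 2 ** (day - 1) / 100       # that day's pay, in dollars
    total = (2 ** day - 1) / 100     # running total through that day, in dollars
    print(f"day {day:2d}: pay ${pay:,.2f}, total ${total:,.2f}")

Day 21 alone already pays over $10,000, and by day 31 a single day's pay is north of $10 million, so the "wall at around three weeks" checks out.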

So it is with the 10nm process node. I have a feeling that as node size gets smaller, the tooling costs rise exponentially. In short, I honestly don't believe Intel is sitting on its duff to the degree that the "experts" here at Techspot have the impression the real experts at Intel are. Remember, it's far easier for some know-nothing talking head to stand in front of a mic and spout a bunch of happy horse sh!t about "ticks and tocks" than it is to labor at a drawing board or in a lab and actually DO the job.

Considering that Intel is in the business of making money from cooking CPUs for a broad spectrum of uses, not just catering to a bunch of whiny, needy gamers whimpering for 10nm processors, I'd say don't hold your breath waiting for Moore's "law" to kick in; we're at the point in micro-miniaturization where it's more likely to fail than it is to hold true.

We always seem to expect more of others than we are able to accomplish on our own. To which end I have a standing challenge: take any two dozen Techspot "experts", go to the house of the one with the best workshop, and see if you can cook up a lousy 140nm-process (*) Pentium 2, or quit your whining.

This is where Intel is with GPU computing:

And this is where you'll find it:

After Larrabee was cancelled, Intel shifted its design goals for the underlying technology. While Larrabee could have been quite capable for gaming, the company saw a future for it in compute-heavy applications and created the Xeon Phi in 2012. One of the first models, the Xeon Phi 5110P, contained 60 x86 processors with large 512-bit vector units clocked at 1GHz. At that speed, they were capable of more than 1 TFLOPS of compute horsepower, while consuming an average of 225W.

As a result of the high compute performance relative to power consumption, the Xeon Phi 31S1P was used in the construction of the Tianhe-2 supercomputer in 2013, which persists as the world's fastest computer today.
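
For anyone wondering how the ">1 TFLOPS" figure falls out of the specs quoted above, here's a rough back-of-the-envelope check of my own (assuming double precision and counting a fused multiply-add as two FLOPs per lane):

# Rough check of the ">1 TFLOPS" claim for a 60-core, 512-bit-vector chip at 1GHz.
cores = 60
clock_ghz = 1.0
lanes = 512 // 64            # double-precision lanes per 512-bit vector unit
flops_per_cycle = lanes * 2  # fused multiply-add: multiply + add per lane
gflops = cores * clock_ghz * flops_per_cycle
print(f"{gflops:.0f} GFLOPS")  # 960 GFLOPS at exactly 1GHz

That comes out to about 960 GFLOPS at a flat 1GHz, so a slightly higher actual clock is what nudges the part just past the 1 TFLOPS mark.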


(*)
In truth "140nm" for Pentium 2 was just a wild guess. Process widths for Pentium 4 were up as high as 180nm, and the "Prescott" offerings were shrunken all the way down to 90nm.
 