Help me build my PC 2017 Catch All


So I have an NVIDIA GTX 970 and an i5-750 @ 2.675GHz.
Which one should I upgrade first?
I've been stringing this guy along since around 2011, so I wouldn't be surprised if I finally need a full upgrade.

Don't go solely on my view, but if I were in your shoes I'd prioritize the CPU over the GPU, though that's partly because I'm also running a GTX 970. I rebuilt my system last year and had to hold over the R9 270X I had at the time, and going from a Core2Quad Q9450 to a Ryzen 5 1600 was a massive increase in performance. Later I salvaged a GTX 970 from a system of Robear's that crapped out, and that was a noticeable increase too, but not as drastic. Now I'm looking to jump up to a Vega 56 to future-proof this machine for at least a few more years, and then later I'll probably upgrade just the CPU again at some point (just max out the Ryzen CPUs my board supports).

The CPU. The 970 can still hang OK at 1080p@60, but that's a first-gen Core CPU that released in 2009. Way past time for a CPU upgrade.

Thanks guys. I figured that was the case.
Time to shop for CPU, MB, and RAM. At least my PSU should hold out since I upgraded that unnecessarily high last time.

I'll also mention that your GTX1080 is gonna be fine and dandy for VR (higher resolution new headsets notwithstanding) - I'm running a Vive Pro on my GTX1070 and it handles everything I've thrown at it, albeit that my CPU is beefier than yours.

Jonman wrote:

I'll also mention that your GTX1080 is gonna be fine and dandy for VR (higher resolution new headsets notwithstanding) - I'm running a Vive Pro on my GTX1070 and it handles everything I've thrown at it, albeit that my CPU is beefier than yours.

Ah, interesting. I was aware that it should be good enough for the standard Vive and first gen Rift, but I was assuming that a bit more would be helpful with the higher resolutions of newer sets like the Vive Pro, and more to the point, the Valve Index. Good to know that it would hold up at least on the Vive Pro.

Where I've sort of settled for now is that if the rumors/leaks pan out with the Valve Index, I'll probably go ahead and pick one of those up and see where I stand (better FOV is a more desirable feature than self contained tracking for me). Still might get impatient and build a new machine later in the summer, but still, that seems like the right first step.

zeroKFE wrote:
Jonman wrote:

I'll also mention that your GTX1080 is gonna be fine and dandy for VR (higher resolution new headsets notwithstanding) - I'm running a Vive Pro on my GTX1070 and it handles everything I've thrown at it, albeit that my CPU is beefier than yours.

Ah, interesting. I was aware that it should be good enough for the standard Vive and first gen Rift, but I was assuming that a bit more would be helpful with the higher resolutions of newer sets like the Vive Pro, and more to the point, the Valve Index. Good to know that it would hold up at least on the Vive Pro.

Where I've sort of settled for now is that if the rumors/leaks pan out with the Valve Index, I'll probably go ahead and pick one of those up and see where I stand (better FOV is a more desirable feature than self contained tracking for me). Still might get impatient and build a new machine later in the summer, but still, that seems like the right first step.

Honestly, the increase in resolution between Vive/Rift and Vive Pro is modest at best. I doubt that's putting a lot of extra load on your system. And Index is rumoured to be resolution-equivalent to the Vive Pro, so there's that too.

Entirely agree that wider FOV is exciting though - my Vive Pro feels like playing VR through the middle of a toilet roll.

lunchbox12682 wrote:

Thanks guys. I figured that was the case.
Time to shop for CPU, MB, and RAM. At least my PSU should hold out since I upgraded that unnecessarily high last time.

So I've usually stuck with Intel, but should I focus more on AMD now?

AMD is cheaper, and you get a lot more bang for your buck.

Jonman wrote:

Honestly, the increase in resolution between Vive/Rift and Vive Pro is modest at best.

Yup, the Vive Pro isn't really a second-generation VR headset, more like a Gen 1.5.

Same with the upcoming Rift S.

Probably still going to be a little bit before we see headsets that really represent a generation leap.

VR performance also depends on how demanding you are. Some people have no problem with their games running at 45fps and having ASW interpolate the intermediate frames to raise the visual output to 90fps. Others demand native 90fps, or they run without ASW and just try to stay as close to 90fps as they can. Some people are fine with 1.0x sampling or even undersampling; others find it unsatisfactory and use varying levels of supersampling to improve sharpness. There's a lot of technology for scaling VR performance to run on different hardware; it's a question of how many trade-offs you pick up along the way.
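
To put rough numbers on that trade-off, here's a back-of-the-envelope sketch (my own, nothing official) comparing the per-second pixel budget of native 90fps at 1.0x sampling against 45fps-plus-ASW with some supersampling added, assuming a Vive-class 1080x1200-per-eye panel:

```python
# Rough pixel-budget math, assuming a Vive-class 1080x1200-per-eye panel
# and a 90 Hz headset; the supersampling factor is treated as an area
# (pixel count) multiplier. Illustrative only, not a benchmark.
def megapixels_per_second(width, height, ss_factor, render_hz, eyes=2):
    return width * height * ss_factor * eyes * render_hz / 1e6

print(megapixels_per_second(1080, 1200, 1.0, 90))  # native 90 fps, 1.0x SS: ~233
print(megapixels_per_second(1080, 1200, 1.4, 45))  # ASW at 45 fps, 1.4x SS: ~163
```

Dropping to 45fps with reprojection frees up a lot of headroom, which is why people often spend it right back on supersampling.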

lunchbox12682 wrote:
lunchbox12682 wrote:

Thanks guys. I figured that was the case.
Time to shop for CPU, MB, and RAM. At least my PSU should hold out since I upgraded that unnecessarily high last time.

So I've usually stuck with Intel, but should I focus more on AMD now?

In general, I want to scream "yes!" from the rooftops, and rant about how if people actually want the CPU marketplace to have competitive prices, they need to buy AMD when AMD is this good.

But that aside, just purely on a performance-per-buck basis, at most price points, AMD is the correct choice. The primary exception would be if you're buying a top-end gaming CPU, and are willing to pay a premium for the extra X% of top-end gaming performance.

At pretty much every price point below that, AMD is providing much better performance-per-dollar. Roughly speaking, you're either comparing an Intel CPU with a roughly equivalent AMD CPU that costs much less, or you're comparing it with an AMD CPU that costs the same and is one "step" up higher in the product tier.
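
If it helps, the mental math is just score-per-dollar. A throwaway sketch with placeholder numbers (not real prices or benchmark scores, so plug in current ones before deciding anything):

```python
# Purely illustrative: hypothetical scores and street prices, not real parts.
lineup = {
    "Intel chip (hypothetical)": {"score": 13000, "price": 370},
    "AMD chip (hypothetical)":   {"score": 12600, "price": 250},
}
for name, part in lineup.items():
    print(f"{name}: {part['score'] / part['price']:.1f} points per dollar")
```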

VR performance also depends on how demanding you are. Some people have no problem with their games running at 45fps and having ASW interpolate the intermediate frames to raise the visual output to 90fps. Others demand native 90fps, or they run without ASW and just try to stay as close to 90fps as they can. Some people are fine with 1.0x sampling or even undersampling; others find it unsatisfactory and use varying levels of supersampling to improve sharpness. There's a lot of technology for scaling VR performance to run on different hardware; it's a question of how many trade-offs you pick up along the way.

Yeah, I'm mid to high tier demanding. Generally, I'm unhappy if I'm not close to full detail settings with a solid frame rate (i.e., 60 fps on a traditional monitor). I don't need the highest tier of antialiasing, and I'll choose higher detail settings over supersampling every time if there's a choice, but if I'm in a place where I'm undersampling or interpolating to double frames, that's absolutely upgrade territory.

I rarely if ever supersample in traditional games, but in VR it's a different beast, because it helps address VR's biggest visual flaw: blurriness. Supersampling is a core quality setting in VR, and you'll see it much more prominently featured in the quality settings of VR-based games.

SteamVR somewhat recently added behavior that dynamically adjusts supersampling based on runtime performance. I have this turned off in SteamVR because it was sub-optimal for me, but it is definitely the right way forward if they can improve the implementation.
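
For the curious, here's roughly what that kind of dynamic adjustment boils down to. This is not SteamVR's actual implementation or API, just a sketch of the control loop such a feature has to run, scaling the render resolution off recent GPU frame times:

```python
# Conceptual sketch only; assumes a 90 Hz headset and made-up thresholds.
TARGET_FRAME_MS = 1000.0 / 90.0

def adjust_render_scale(current_scale, last_gpu_frame_ms,
                        min_scale=0.6, max_scale=2.0, step=0.05):
    """Nudge the supersampling scale toward what the GPU can sustain."""
    headroom = TARGET_FRAME_MS - last_gpu_frame_ms
    if headroom < 0:                      # blew the frame budget: back off
        return max(min_scale, current_scale - step)
    if headroom > 0.2 * TARGET_FRAME_MS:  # comfortable margin: sharpen up
        return min(max_scale, current_scale + step)
    return current_scale                  # close to target: hold steady
```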

Yeah, I suppose that makes sense. My VR experience is limited to my PSVR and very short play time on a friend's Vive, but even still I can definitely feel that the relative effect of improving image clarity could be much larger than on a traditional monitor. Of course, the best fix is just having an improved native resolution on the display, but I'm very inclined to believe that my personal priorities for settings could be quite different for VR vs a traditional monitor. (That is, I might find myself caring about improving AA or using supersampling over getting detail and effects sliders maxed out.)

Yeah, when text is hard to read, doing anything to improve sharpness and clarity becomes task #1.

You're right, of course, that this is a function of low resolution displays. As far as I'm concerned, VR's true breakthrough into the gaming mainstream will come when there's a generational leap in VR display resolution. There's just not enough dots right now for screens pushed that close to our eyes. Fixing that is when VR goes from a fun novelty to a primary gaming display, because just fixing that alone would do such mind-blowing wonders to the VR experiences we already have.

*Legion* wrote:

As far as I'm concerned, VR's true breakthrough into the gaming mainstream will come when there's a generational leap in VR display resolution.

Honestly, FOV is a bigger thing for me than resolution, though to be fair, I suppose I'd want higher res to go with the wider field of view to maintain the same pixel density across the larger image.

Particularly in games that require 360 degree situational awareness - the limited FOV kills them for me.

FOV is just a specific part of the same problem, in my view. Narrow FOVs are a design compromise to work with the current display technology. When you don't have enough pixels to make the "sweet spot" sharp and clear, you definitely can't spread those pixels out over an even wider viewing angle. As far as I'm aware, there's no additional technological advancement needed to support wider FOVs, hence I see them as the same thing.

This is probably better discussion for a VR thread than here, so I'll shut up about it and return to recommending Ryzen CPUs to everyone who dares post in the thread.

Current PC:

CPU: Intel Core i5 4690k 3.5GHz (OC'd to 4.00GHz)
RAM: 4x 8GB sticks (DDR3 I think?)
GPU: AMD Radeon R9 270
I have a single 1080p monitor.

This PC has been performing reasonably well for me, generally in the 30-45FPS range with most games. I'd love to get that bumped up to a solid 60, and possibly give VR a shot at some point.

I noticed a sale on an RX 580 ($190, marked down from $380), and while this might help with some FPS issues, I think it's not going to help a lot with heavily-modded Minecraft, which is typically more CPU-bound.

I think there's not a whole lot further I can go with this CPU socket (the 4790k is about the limit, and that seems really overpriced for what you get at this point). I've heard through the grapevine that Ryzens are a pretty good value proposition. The problem with going that route is that I'd have to scrap the RAM and start again with DDR4, right?

Am I shooting myself in the foot by buying the RX 580 at this point? Or can I buy another year or two of life out of this rig, and hope RAM prices come back down to where they were a few years ago?

If RAM pricing is what you're worried about, don't be. Prices are down now; I think I read they're the lowest they've been since 2015.

For what it's worth, I have a 6700k rig that's started seeing occasional CPU related slowdowns, but I'm waiting for the Ryzen 3000 series release to decide what to do. I can make it till July with what I have and see how things play out with AMD's new stuff.

The only meaningful upgrade I can get right now is either another Intel chip, which I just don't want to do, or kind of side-grading to an R7 2700X that won't be any faster in gaming but would probably solve about a third or maybe half my current issues.

Ah, indeed you're right. I got 2x8GB back in 2015 for about $95; current prices are about 10% above that, so not too bad.

That said, I think I might also hold out for a bit to see what happens when the new Ryzens release.

Minecraft is way more CPU dependent from what I'm reading (I play a little, but not much). Overall, the CPU upgrade would likely provide a much larger performance boost than the GPU alone. Granted, that 270 is getting old (I'm trying to phase one out of my systems; I have it in one of the kids' PCs), but unless you're changing settings to force Minecraft to utilize the GPU more, it's not going to provide as significant a boost as replacing the CPU.

And there's no reason to upgrade further on that board. After a quick search, the cheapest 4790k I found was just under $300, and for less than $10 more you could move to a Ryzen 5 build that includes the CPU, board, memory, and a good bundled cooler.

Edit: Just saw this on /r/buildapcsales: Ryzen 5 1600 + GA-AX370 MB for $145. All you'd need is the RAM, so that drops the $306 build I linked above down to about $255 after RAM.

I could live with a Haswell CPU. I couldn't live with a 270.

Bah... quote is not edit.

Never mind, nothing to see here.

I'm not at all sure that a CPU bump will make that much difference, since the existing one is OCed to 4GHz. It'll improve things some, for sure, but even with a fairly hot CPU, Minecraft performance will probably improve only a little, maybe 20% or thereabouts. And a Ryzen probably wouldn't even help that much with Minecraft, because its cores aren't speed demons. They're good, and only a little behind Intel, but they are a bit behind. Where they really deliver is in high core counts, which won't do much with the older Minecraft design.

I think you'd get more mileage out of a GPU upgrade for general gaming. That will barely even move the needle for Minecraft, though.

There's one possible point of major improvement: Haswell's I/O speeds were really kicked in the butt by the Spectre and Meltdown mitigations. If your Minecraft is somehow I/O bound (very large world, perhaps), then the difference might be larger than I'm guessing.

I'd suggest checking with folks that have run it on both Haswell (the chip generation you have) and whatever you're considering. Get some hard numbers from people and see if the improvement sounds worthwhile. I'm suspicious it will be quite expensive, and won't help that much.

With games that take advantage of wide multithreading, which are becoming more and more common, the big core counts on the Ryzen chips are quite appealing. In some cases, the difference can be startling. If you're eyeballing games like that, then an upgrade would become much more interesting.

But for Minecraft, I'm not sure that any upgrade would be of interest to you.

*Legion* wrote:

I'd do a Ryzen 2600, a B450 motherboard, 16GB DDR4 3200, a 1660 ti, and an SSD sized based on needs/budget, but all of that should fit within $1K.

I'm thinking about setting aside around this amount for a system.
Would something like this have a decent upgrade path? Ideally, I'd like to be able to invest a little more into the system over time.

Malor wrote:

With games that take advantage of wide multithreading, which are becoming more and more common, the big core counts on the Ryzen chips are quite appealing. In some cases, the difference can be startling. If you're eyeballing games like that, then an upgrade would become much more interesting.

But for Minecraft, I'm not sure that any upgrade would be of interest to you.

Yeah, that's the issue I was having when I was looking it up. Did a bit more reading and Minecraft still runs basically on a single thread, so upgrading the CPU would only make sense if he's going to a CPU with better single thread performance... Even the low end Ryzens would double his multi-thread performance, but they'd be a downgrade in performance on a single core. And the GPU isn't much of a factor.

Realistically even an i7-8700k is only about a 24% increase in single-thread over his current CPU, which wouldn't be worth the price tag for me.
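
For what it's worth, that percentage is just the ratio of single-thread benchmark scores. Placeholder numbers below to show the arithmetic, not actual measurements:

```python
# Hypothetical single-thread scores; substitute results from whatever
# benchmark you trust before drawing conclusions.
i5_4690k = 2200
i7_8700k = 2730
print(f"{(i7_8700k / i5_4690k - 1) * 100:.0f}% faster single-thread")  # ~24%
```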

Thanks everyone for the feedback.

Modded Minecraft is a unique case, but obviously one that affects me pretty regularly. On a whim last night, while poking around through options, I noticed that VSync was enabled. I disabled it, and FPS pretty much immediately went from ~40 to ~110. I've never seen vsync affect FPS by that much.

So anyway, I think I'm pretty good with MC for now. There's still some CPU constraint, but it sounds like that's not necessarily something that can be fixed by throwing a new processor at it, given how MC doesn't use all available cores.

I did end up ordering the 580 yesterday, and I expect it should help a lot with some other games I’ve been playing lately — Monster Hunter World, for example, has been making my 270 scream in pain.

Edit: Also, it’s not just cpu clock speed that drives performance, right? Architectures are fundamentally different, different cache sizes, and even DDR3 vs DDR4.

Speaking of... if I were to change from an Intel architecture to AMD (new mobo, cpu, ram), would I need to reinstall Windows 10? Or would it just boot and work as normal?

Edit: Also, it’s not just cpu clock speed that drives performance, right? Architectures are fundamentally different, different cache sizes, and even DDR3 vs DDR4.

Well, sort of. Memory latency is probably the biggest choke point in modern computing, and it has barely changed since 2003-ish. We've been using RAM that's at the same physical speed for almost 20 years, and just stacking it wider and wider, doubling signaling speed each time. Latencies of 5-5-5-10 on DDR2 became 10-10-10-20 on DDR3, and then became 20-20-20-40 on DDR4. Bandwidth has been improving, but actual wall clock latency has barely budged. DDR4-3200 is almost identical to DDR3-1600 in terms of response time, but can then transfer twice as much data once it (finally) gets around to responding.
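
If you want to sanity-check that yourself, first-word latency in nanoseconds is just the CAS cycle count divided by the memory clock, which is half the DDR data rate. The CL values below mirror the round numbers above rather than any specific DIMM:

```python
# Wall-clock CAS latency: cycles / memory clock. The memory clock in MHz
# is half the DDR data rate (MT/s), since DDR transfers twice per cycle.
def cas_latency_ns(data_rate_mt_s, cas_cycles):
    memory_clock_mhz = data_rate_mt_s / 2
    return cas_cycles / memory_clock_mhz * 1000  # microseconds -> ns

for name, rate, cl in [("DDR2-800  CL5",  800,  5),
                       ("DDR3-1600 CL10", 1600, 10),
                       ("DDR4-3200 CL20", 3200, 20)]:
    print(f"{name}: {cas_latency_ns(rate, cl):.1f} ns")
# All three come out to 12.5 ns: bandwidth doubles each generation,
# wall-clock latency doesn't move.
```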

So the 'fast' RAM on modern systems can drive big bandwidth, which doesn't help that much with individual processors, but it supports higher core counts relatively easily. DDR4-3200 can drive twice as many otherwise-identical cores as DDR3-1600.

That said, Ryzen chips have an unusual dependence on RAM clocking. Thin_J finally clued me in, upthread, as to why that happens: the inter-CPU communication fabric is synced with the RAM clock, so those cores benefit a great deal from improving memory speed. Normally, you gain less than you'd expect from upclocking RAM, because latency stays the same in terms of wall clock time. You might go from, say, DDR4-3200 to DDR4-3600, but your latency would also increase, perhaps from CL 14 to CL 16. Whatever bandwidth figure you choose, that latency barely budges, so the CL numbers just climb and climb as you increase the signal rate. This can still be somewhat beneficial for unusually bandwidth-sensitive algorithms, even in games (The Witcher 3, for instance, shows pretty strong framerate improvements with higher-clocked RAM), but won't make much difference for most software.
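
Running the same arithmetic on that example trade (illustrative timings, not specific kits):

```python
def cas_latency_ns(data_rate_mt_s, cas_cycles):
    # cycles divided by the memory clock (half the data rate), converted to ns
    return cas_cycles / (data_rate_mt_s / 2) * 1000

print(f"DDR4-3200 CL14: {cas_latency_ns(3200, 14):.2f} ns")  # 8.75 ns
print(f"DDR4-3600 CL16: {cas_latency_ns(3600, 16):.2f} ns")  # 8.89 ns
```

Latency is basically a wash between the two; on Ryzen, the interesting part is that the fabric clock comes along for the ride with the memory clock.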

With Ryzen syncing its communication to that clock signal, though, then trades of that type might be much more beneficial than on an Intel chip, even with programs that are not normally very bandwidth-sensitive. But pay close attention to people with more expertise in this area. I know that Ryzen benefits from RAM speed, but I don't know if there's an overall sweet spot, or if it just keeps improving the more you crank up the RAM rate.