Help me build my PC 2020 Catch All

TheGameguru wrote:

Do you use Linux? If not I wouldn't worry about it... seems like it only affects newer Linux builds.

Windows 10 for me. Thanks for the info. I've now read up on it and came to the same conclusion, so it's good to have validation.

Looking at the 3600X. Any recommendations for a motherboard? I'm looking for something mid-range.

*Legion* wrote:

Here's my alternative version of the build that comes out to $578 total.

First up, switched the CPU to a 2600 and picked a relatively cheap but not bottom-of-the-barrel B450 board. For a simple and inexpensive build, a microATX board is just fine.

Cut the RAM to 16GB (2x8). For the price point you're at, and RAM prices being what they are, spending the premium for 32GB of RAM makes no sense. Much better to bump that RAM speed up to 3200 instead. And even this microATX board has 4 RAM slots, so upgrading to 32GB later is an option. Like Middcore said, everything else in this system is going to be obsolete before you get to the point of needing more than 32GB of RAM.

Switched the SSD to a good (albeit not NVMe) M.2 drive because it's 2019 and who wants to fuss with 2.5" drives and SATA cables and crap anymore? The motherboard I picked actually has two M.2 slots. Shaved $5 off in the process.

Also switched out the power supply for the non-modular version to shave about $10 off, even though aside from that, I like the modular version better (and have that exact PSU powering one of my secondary systems). But wanted to keep this as close to $500 as possible. There's probably a few more bucks that could be shaved off here while still staying out of the garbage tier of PSUs, but I only use Seasonic now and I'll be damned if I'm going to talk someone out of a Seasonic that already has one in their build list.

I upgraded with an extremely similar build a month ago. I've had no complaints with the 2600 or the Asrock (slightly different model, but still a B450M) motherboard. Solid choices.

Math wrote:
*Legion* wrote:

Here's my alternative version of the build that comes out to $578 total. [snip]

I upgraded with an extremely similar build a month ago. I've had no complaints with the 2600 or the Asrock (slightly different model, but still a B450M) motherboard. Solid choices.

I'd like to see what you built, but I think your build is set to private?

sithload wrote:

Legion, I live a couple miles from a MicroCenter. I figured I should check their bundles, and they have that MB/CPU combo for $165, $20 less than buying them individually. That makes your list only $58 more than my budget, and given how often I (don't) upgrade, it's probably money well spent.

Score. That's some great computing power per dollar right there.

Given the relatively small difference in price, I think that's the direction I'm heading. One question: the motherboard specs say DDR4 3200 (OC); do I have to overclock in the BIOS to get the full benefit of the RAM you recommended?

There's a bit of semantics there in that DDR4 RAM speeds above 2133 are considered "overclocks".

What you're seeing on the motherboard specs has to do with AMD's "System Memory Specification" for the various CPU lines. That's basically the minimum guaranteed speed that the CPU and its motherboards will support, with support for any faster speed left as an option for the motherboard manufacturers to implement.

For 2nd gen Ryzen, the System Memory Specification is DDR4-2933. That means any motherboard for Ryzen 2000 series chips *must* support that speed, but anything beyond it is optional and not guaranteed to work. In practice, pretty much everyone supports and expects to run at the higher speeds. That's why the specs list 3200 as (OC) but not 2933.

3200 is an extremely common memory speed rating, so even if we just want to run at the 2933 that Ryzen 2000 guarantees support for, we're most likely going to be buying a 3200 kit anyway. There's very little to be saved by dropping to a 2933 or 3000 rated kit, and the headroom is nice to have.
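One practical footnote, since the above is all semantics (and hedging a bit here, because BIOS menu names vary by vendor): you don't tune anything by hand. You boot into the BIOS, enable the RAM kit's rated profile (called XMP on ASRock and Gigabyte boards, A-XMP on MSI, and DOCP on ASUS), save, and reboot. That single setting loads the kit's rated 3200 speed and timings, and that's the entire "overclock" the spec sheet is warning you about; leave it off and the RAM runs at the JEDEC default of 2133 or 2400.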

sithload wrote:

I'd like to see what you built, but I think your build is set to private?

Oops! Should be viewable now. I wasn't building from scratch: reused my case and PSU. I also bought the GPU used on eBay.

TheGameguru wrote:
FridgeGremlin wrote:
Malor wrote:

The RDRAND instruction is borked in AMD 3000 chips right now; it totally doesn't work, while simultaneously *claiming* that it's working with its return value. This can cause absolute havoc in Linux, and could potentially mess up any software that uses random numbers, depending on how those numbers are generated. Any software that trusts the CPU when it says it's generating random numbers will be totally insecure.

Firmware patches may fix the problem, but they're rolling out pretty slowly.

That's a very interesting and weird bug. I read up on it some and a fix seems imminent. Still, should I be considering an i5 of some type as my higher end option and avoiding the 3000 series?

Do you use Linux? If not I wouldn't worry about it... seems like it only affects newer Linux builds.

The problem with this kind of failure is that it's invisible. It completely breaks the security of any program that depends on RDRAND, but it does so silently, producing predictable output, so anything encrypted with it can be easily reversed later. This means that, for instance, any encryption system that uses RDRAND in a significant way will be compromised unless firmware patches are deployed. Encryption keys generated on this hardware may be permanently compromised, requiring re-issuance, but the fact that they are compromised may well not be obvious.
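For the curious, here's roughly what "claiming to work" looks like from the software side. A minimal sketch, assuming x86 and a compiler that provides immintrin.h (build with gcc -mrdrnd); illustrative only, not anyone's production code:

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        unsigned int value;
        /* _rdrand32_step returns 1 when the CPU sets the carry flag,
           i.e. when the hardware asserts the random value is valid.
           The Ryzen 3000 bug: it reports success while handing back
           the same all-ones value every time. */
        if (_rdrand32_step(&value)) {
            printf("RDRAND claims success: 0x%08x\n", value);
            if (value == 0xFFFFFFFFu)
                printf("...but all-ones output is the telltale of the buggy firmware\n");
        } else {
            printf("RDRAND honestly reported failure (carry flag clear)\n");
        }
        return 0;
    }

The carry flag is the only failure signal the instruction gives you, and the buggy firmware sets it anyway.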

The fact that RDRAND wasn't working might not even have been discovered yet, if not for the accidental failure that Linux experienced. Systemd was asking the system for a random process ID, and the kernel hackers decided to just hand it an RDRAND result, since RDRAND claimed to be working. Systemd's logic was, basically, "A) get a random process id; check if the id is already in use; if in use, goto A; otherwise continue". So, normally, it would get an ID and go. Occasionally, there might be a repeated call or two because of collisions, and then it would have a good PID and could continue.

But on AMD hardware, it was getting back the *exact same number every time*, I think 0xFFFFFFFF. So that meant that systemd spun forever, asking for a new process ID, which crashed the system, revealing the severe hardware bug.
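To make the failure mode concrete, here's a self-contained toy simulation of that loop (hypothetical helper names; this is not systemd's actual code). hw_random() stands in for the broken RDRAND, and id_in_use() supplies the collision that makes it spin:

    #include <stdint.h>
    #include <stdio.h>

    /* Stands in for broken RDRAND: "succeeds" with all-ones on every call. */
    static uint32_t hw_random(void) { return 0xFFFFFFFFu; }

    /* Pretend that one ID is already taken, as it would be after the first pick. */
    static int id_in_use(uint32_t id) { return id == 0xFFFFFFFFu; }

    int main(void) {
        uint32_t id;
        unsigned tries = 0;
        do {                      /* "A) get a random id; if in use, goto A" */
            id = hw_random();
            if (++tries > 10) {   /* capped here so the demo terminates */
                puts("still colliding after 10 tries -- the real loop spins forever");
                return 1;
            }
        } while (id_in_use(id));
        printf("got id 0x%08x\n", id);
        return 0;
    }

With a working RDRAND, the loop almost always exits on the first pass; with the broken one, it never does.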

We got lucky. Normally, this failure doesn't cause symptoms that are so easy to detect.

This bug may not crash Windows, but that doesn't make it any less severe. It just means the failures are more subtle and more dangerous. Make sure you get a firmware update that fixes RDRAND if you have an AMD 3000-series chip.

Malor wrote:

The problem with this kind of failure is that it's invisible. [snip]

Here we go again... this is FUD. AFAIK, outside of some use in OpenSSL, the crypto and encryption community has largely ignored RDRAND for encryption systems, because they would never trust Intel. In fact, its sheer existence, and the feared compromise of Intel by the NSA, has meant that where RDRAND is used at all, it's as just one source feeding a series of other random generators.

For most general gaming and Windows 10 office productivity use, you aren't putting yourself at any greater risk using AMD vs Intel in like 99.9999999999999999999999% of scenarios. And if you are truly putting super sensitive information on an internet-connected PC, then you probably need a better approach regardless of computing platform.

Which is all just noise, because you don't know who's using RDRAND and for what purposes under Windows. Open source uses it, but the only reason we know that is because it's open source.

The only reasonable conclusion to draw, given that it's definitely in use in some of the projects we CAN inspect, is that it will also be in use in some of the projects we can't.

I stand behind this claim: don't use these chips unless you're sure you have a firmware update. You have no way to know what's going to silently stop working correctly. If it weren't for the systemd crash, the failure might have gone unnoticed for substantially longer, because it is subtle and hard to spot unless you're actively looking.

If you have been using these chips, and if you depend on local encryption in any way (e.g., BitLocker), I strongly suggest regenerating keys once you have new firmware. With closed source you just can't tell what's been affected, and all you'll lose is a little time rekeying. You'll be in no worse shape than you are now, and you might potentially be in much better shape. If the encryption actually matters to you, that's not a huge burden.

Which is even more noise, because if all you think you need to do is rely on encryption to stop “the bad guys”, and you are that worried about an RDRAND bug, then you’ve already failed.

So go ahead and use these chips with full confidence that you are smart enough to take reasonable precautions with your sensitive data.

At least the Cooler Wars were relevant to the thread topic.

To steer this in a different direction, thoughts on the 2070 Super vs. the 5700xt?

TheGameguru wrote:

Which is even more noise, because if all you think you need to do is rely on encryption to stop “the bad guys”, and you are that worried about an RDRAND bug, then you’ve already failed.

Encryption is an important tool. If that tool has invisibly failed underneath you, it can give you very serious problems. It's not enough alone, but if your encryption fails, the rest of your security efforts are probably not going to do you any good.

This bug matters, because of the nature of the failure; it says it's working, but it isn't. The CPU explicitly tells the software that the RNG is working when it is doing nothing of the sort. And you can't tell, on Windows, what will invisibly fail as a result.

Despite the fact that Linux/systemd machines are the ones that crash because of this bug, you're safer over there, because you can tell exactly what's using RDRAND and how you might be impacted (grep the kernel and systemd sources and you have the complete list of call sites). You're safer running unpatched firmware on an open-source stack than you would be on a closed one.

Unless the prices have come down, last time I checked the AMD cards were definitely better bargains.
Like $100+ less across the board for marginally worse benchmarks.
The 5700 (not the XT) is $350, in case you hadn't noticed...

Although, I just saw an ad from Newegg for Black Friday where a 1660 was $210. Most are $230-$280, and now I just saw that a Radeon RX 590 is going for $200, so lots of good options.

Yeah. The 5700 XT sits just above the 2070 Super and is $50 less. It is also closer to the 2080 in performance, for $200 less.

Malor wrote:
This bug matters, because of the nature of the failure; it says it's working, but it isn't. [snip]

No, it matters to you and a handful of Linux people. The rest of the world couldn’t give a crap, because nobody relies on RDRAND for encryption.

https://www.phoronix.com/scan.php?pa...

Some testing on the new sTRX4 Threadrippers... AMD really upped the price here, and combined with Intel's price drop, these new Threadrippers are not a complete slam dunk, but overall they stomp the Intel competition. The real price/performance king will likely be the 3950X.

Anand's take

https://www.anandtech.com/show/15044...

Yah, I mentioned the 3950X before we got 15 posts deep into Linux and random number generators.

Analysts concerned as Intel falls further behind in flawed random number generation performance.

I would have given that 1000 likes, Legion.

TheGameguru wrote:

Anand's take

https://www.anandtech.com/show/15044...

AnandTech wrote:

I have never used the word ‘bloodbath’ in a review before. It seems messy, violent, and a little bit gruesome. But when we look at the results from the new AMD Threadripper processors, it seems more than appropriate.

Intel won the AVX512 test and that's it.

peanuts3141 wrote:

Intel won the AVX512 test and that's it.

And that tells you something about the markets for which they are designing their cores.

Robear wrote:
peanuts3141 wrote:

Intel won the AVX512 test and that's it.

And that tells you something about the markets for which they are designing their cores. :-)

I guess. But in deep learning type applications Nvidia is eating them for breakfast.

Robear wrote:
peanuts3141 wrote:

Intel won the AVX512 test and that's it.

And that tells you something about the markets for which they are designing their cores. :-)

Not sure designing your cores for 3D particle movement when the largest AWS users are Netflix/Twitch/LinkedIn/Facebook is a wise decision. AMD's current perf/$ and perf/W advantage in those applications is pretty dramatic.

Ars says that Intel wins the AI benchmarks pretty handily, but is pretty much crushed everywhere else.

Intel's using new x86 instruction extensions collectively called Deep Learning Boost, and they've tied them into the OpenVINO and AIXPRT codebases. In those applications, Intel did extremely well, but that's what dedicated silicon does.

It's also worth pointing out that you probably wouldn't want a TR chip at home, because these things suck up power like crazy. The Kill-A-Watt meter Jim Salter was using showed a total system draw of 403 watts at default settings, and 472 watts with Precision Boost Overdrive enabled. That extra ~70 watts seems to net you almost nothing, so it looks like a good setting to leave off.

Keep those power numbers in mind when you're thinking about benchmarks, because that's enough heat to actually matter. You've got the noise problem of exhausting all that heat into the room, and then the climate control problem of moving it outside, at least in the summertime. In the winter, that would partially offset your heating bills, but resistive heat is the most expensive kind, so it would only make the bite somewhat less painful. It would still bite.
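For a back-of-the-envelope sense of what that draw costs (made-up but plausible numbers: four hours of full load a day, $0.13/kWh): 0.472 kW × 4 h × 30 days ≈ 57 kWh, or about $7.40 a month. Every one of those kilowatt-hours also ends up as heat in the room, and in the summer your air conditioner pays again to pump it back outside.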

The regular desktop AMD CPUs are apparently fine on TDP, and you can also apparently trust the figures they give you. If AMD says a chip will consume X watts at max load, that's what it will consume. Intel is not to be trusted on TDP.

New hardware incoming:
Intel i7-9700KF
MSI MAG Z390 TOMAHAWK Motherboard
MSI VENTUS GeForce RTX 2070 SUPER 8GB Video Card
16GB DDR4-3200 ADATA RGB Memory
1TB Intel M.2 NVMe SSD

Old hardware outgoing:
i5-6400
GeForce GTX 950
8GB DDR4 RAM
1TB Hard Drive

I think I might notice a bit of a change in speed.

Graphics card and SSD are going to be the biggest differences...
You probably could have just put those into your old system and gotten 90% of the boost of moving to the completely new system.

1TB Intel M.2 NVMe SSD

I'm quite pleased with the 660p Intel drive I just put in this thing. When I saw it was a hundred bucks, I couldn't resist. The speed's not that much different, but man, the space is sure nice. The 240-gigger was getting annoying, even with lots of regular hard drive space available.

I bet you'll like it a lot, and you should be able to actually get the 1700-1800 megs/second it's capable of.

TheGameguru wrote:
[snip]

Do you use Linux? If not I wouldn't worry about it... seems like it only affects newer Linux builds.

Well, systemd is at least patched to check for it and deal with it. I would have thought the microcode update they released last month (https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/commit/?h=20191022&id=2b016afc348ba4b5fb2016ffcb2822f4a293da0c) would fix it, but people are saying that's not the case.

GameGuru wrote:

I guess. But in deep learning type applications Nvidia is eating them for breakfast.

But then, Intel made their design decisions years ago. One thing they really can't do well is change their architectures. They are remarkably slow at that. Look at how long they held onto single-thread speed as their principal performance metric.

fangblackbone wrote:

Graphics card and SSD are going to be the biggest differences...
You probably could have just put those into your old system and gotten 90% of the boost of moving to the completely new system.

No doubt that will be the bulk of the improvement, but the significantly faster chip will come into play at times, and I am looking forward to the 8GB increase in RAM so I can load mods like RogueTech for BATTLETECH, which is too much for my existing computer.

Robear wrote:
[snip]

But then, Intel made their design decisions years ago. One thing they really can't do well is change their architectures. They are remarkably slow at that. Look at how long they held onto single-thread speed as their principal performance metric.

Even now, for gamers, that's often what we want. When chasing total throughput, per-thread performance is always useful, under every circumstance, with every algorithm. Adding more cores works with only a subset of algorithms, and the more cores you add, the fewer algorithms work well.
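To put rough numbers on that (textbook Amdahl's law, not a measurement): if a fraction p of a workload parallelizes, then n cores give a speedup of 1 / ((1 - p) + p/n). Even a generously 90%-parallel game engine tops out around 1 / (0.1 + 0.9/8) ≈ 4.7× on eight cores, while doubling per-thread speed doubles everything, serial parts included.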

They're only switching to multiple cores because they have to, because they can't make transistors go faster anymore, not because that's what we actually need. We would be much happier with a single core at 32GHz, as long as the RAM could keep up, than with eight at 4GHz.

That isn't, as it turns out, the world we live in, but Intel probably thought they could figure out at least some of the megahertz problem through their process expertise. They've been having a surprising amount of trouble with their new fabs, which is not typical for them.