Help me build my PC 2020 Catch All

Malor wrote:

I'd probably do PCIe 4 if you can; the new consoles are doing some crazy sh*t with incredibly high I/O SSDs, and you'll want all the bandwidth you can get if you want a similar experience on the PC. Even if you don't take advantage right away, with the glacial pace of advance in CPU tech, that board will probably have a long service life. It may not matter now, but you'll probably still be using it 5 to 7 years from now, and it's probably going to matter quite a lot by then.

There was some talk of the 3000 series cards supporting RTX IO (Nvidia's layer on top of Microsoft's DirectStorage), so the GPU will be able to access assets stored on the drives directly instead of having to request them from the CPU. Once that becomes more widely supported, that will be one major bottleneck removed, and PCs should be able to do next-gen-console-style asset streaming.

It's funny, with all the bragging Sony and Microsoft are doing about their SSDs, that Microsoft's is really only on par with a high-end PCIe Gen 3 device and Sony's with a mid-tier PCIe Gen 4 device. There are already drives on the market that can outperform the PS5's SSD (though they cost more than a PS5 will).
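
For a rough sense of scale, here are the commonly cited sequential-read figures (raw, before decompression) next to fast PCIe 3.0 x4 and 4.0 x4 retail drives; treat the numbers as approximate and double-check them against current spec sheets:

```python
# Approximate published sequential-read figures (GB/s, raw/uncompressed).
drives = {
    "Xbox Series X SSD":     2.4,  # roughly what a high-end PCIe 3.0 drive does
    "PS5 SSD":               5.5,  # solid PCIe 4.0 territory, but not the top
    "Fast PCIe 3.0 x4 NVMe": 3.5,  # e.g. Samsung 970-class drives
    "Fast PCIe 4.0 x4 NVMe": 7.0,  # e.g. Samsung 980 Pro-class drives
}

for name, gbps in sorted(drives.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:24s} ~{gbps:.1f} GB/s")
```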

Rykin wrote:

It's funny, with all the bragging Sony and Microsoft are doing about their SSDs, that Microsoft's is really only on par with a high-end PCIe Gen 3 device and Sony's with a mid-tier PCIe Gen 4 device. There are already drives on the market that can outperform the PS5's SSD (though they cost more than a PS5 will).

Console tech isn’t exciting because it pushes the envelope. It’s exciting because it establishes a baseline. Can I get that storage tech in my PC today? Yes. Can I get games that really take advantage of that tech? Having it in consoles makes it *way* more likely.

Vargen wrote:

Console tech isn’t exciting because it pushes the envelope. It’s exciting because it establishes a baseline. Can I get that storage tech in my PC today? Yes. Can I get games that really take advantage of that tech? Having it in consoles makes it *way* more likely.

In the days before consoles were pre-packaged, walled-garden, mid-range PCs, they definitely pushed the envelope. Go find Ars Technica's 2000 article on the PS2 vs. a PC for gaming: they give the nod to the PS2, calling it a preview of where PCs would be two years later. There really was a sense of consoles showing what could be done and throwing down the gauntlet for PC hardware and software to figure out how to do it in a general-purpose machine.

The PS3 was pushing seven usable cores back in 2006, when most PCs had two. It could run games much better than typical PC hardware of the day. I think it's really only the current gen (soon to be last gen) where your statement became true. I'd guess the enormous hardware development costs stopped being worth the minimal quality gains at that point, and Sony/Microsoft/Nintendo started concentrating on community development instead.

Well, the PS3 launched at $599 in 2006; in today's dollars that would be roughly $780. If they were willing to go that high, I'm sure we'd get more advanced specs.
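
That's roughly the CPI-adjusted number. A quick sanity check, using approximate annual-average CPI-U values (my figures, so treat them as ballpark):

```python
# Rough CPI adjustment: price_then * (CPI_now / CPI_then).
# Annual-average CPI-U values, approximate.
cpi_2006 = 201.6
cpi_2020 = 258.8

ps3_launch_price = 599
in_2020_dollars = ps3_launch_price * cpi_2020 / cpi_2006
print(f"${ps3_launch_price} in 2006 is about ${in_2020_dollars:.0f} in 2020 dollars")
# prints ~$770, the same ballpark as the ~$780 figure above
```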

peanut3141 wrote:
Vargen wrote:

Console tech isn’t exciting because it pushes the envelope. It’s exciting because it establishes a baseline. Can I get that storage tech in my PC today? Yes. Can I get games that really take advantage of that tech? Having it in consoles makes it *way* more likely.

In the days before consoles were pre-packaged, walled-garden, mid-range PCs, they definitely pushed the envelope. Go find Ars Technica's 2000 article on the PS2 vs. a PC for gaming: they give the nod to the PS2, calling it a preview of where PCs would be two years later. There really was a sense of consoles showing what could be done and throwing down the gauntlet for PC hardware and software to figure out how to do it in a general-purpose machine.

The PS3 was pushing seven usable cores back in 2006, when most PCs had two. It could run games much better than typical PC hardware of the day. I think it's really only the current gen (soon to be last gen) where your statement became true. I'd guess the enormous hardware development costs stopped being worth the minimal quality gains at that point, and Sony/Microsoft/Nintendo started concentrating on community development instead.

Not even that. Consoles still have to constrain their size and form factor to a certain extent. Nobody is cramming a 3090 inside a console anytime soon; have you seen the size of your average 850W PSU?

On top of that, consoles typically cost at most $500, which on the PC side gets you a really nice GPU and nothing else. The fact that the Xbox Series X is pushing 12 TFLOPS at $500 is impressive; you'd be hard-pressed to build a better-performing PC at that price.

In 2000, when the PS2 was released, PC gaming was just starting to explode with the GeForce 256, essentially the first consumer GPU with hardware T&L and four pixel pipelines. It was also very expensive, and on its own it easily cost as much as a PS2. So yeah, compared to your run-of-the-mill PC the PS2 was impressive, but it really was exactly as said before: it opened up more and more opportunity for cross-platform development (something that still took a few more years to become reality).

TFLOPS are apparently becoming a less and less accurate measure of real-world performance or computing power.
One of the pro-AMD arguments in light of the 3080's release is that Ampere gets less gaming performance per TFLOP than Turing did, and much less than AMD's last-gen Navi. Big Navi is supposed to continue that trend of high per-TFLOP efficiency.

fangblackbone wrote:

TFLOPS are apparently becoming a less and less accurate measure of real-world performance or computing power.
One of the pro-AMD arguments in light of the 3080's release is that Ampere gets less gaming performance per TFLOP than Turing did, and much less than AMD's last-gen Navi. Big Navi is supposed to continue that trend of high per-TFLOP efficiency.

It's not an accurate way of measuring GPU strength, but without benchmarks it's all we have. It does work as a rough yardstick, though: the Xbox Series X at 12 TFLOPS probably isn't three times as powerful as the Xbox Series S at 4 TFLOPS, but it does tell us the GPU in the Series X is significantly stronger than the one in the S.
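
For reference, the headline FP32 TFLOPS number is just shader count x clock x 2 (one fused multiply-add per shader per clock), which is exactly why it says nothing about how efficiently an architecture turns that math into frames. A quick back-of-the-envelope check using published shader counts and boost/game clocks (worth verifying against the spec sheets):

```python
# FP32 TFLOPS = shaders * clock_GHz * 2 (an FMA counts as 2 ops) / 1000
def tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000

gpus = {
    "Xbox Series S": (1280, 1.565),  # 20 CUs
    "Xbox Series X": (3328, 1.825),  # 52 CUs
    "RX 5700 XT":    (2560, 1.905),  # Navi 10, game clock
    "RTX 2080 Ti":   (4352, 1.545),  # Turing, boost clock
    "RTX 3080":      (8704, 1.710),  # Ampere, boost clock
}

for name, (shaders, clock) in gpus.items():
    print(f"{name:14s} {tflops(shaders, clock):5.1f} TFLOPS")

# The 3080's ~30 TFLOPS vs. the 2080 Ti's ~13 doesn't show up as a
# ~2.2x frame-rate gap, which is the per-TFLOP efficiency point above.
```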

LeapingGnome wrote:

Well, the PS3 launched at $599 in 2006; in today's dollars that would be roughly $780. If they were willing to go that high, I'm sure we'd get more advanced specs.

The prices were that high last gen because Sony was trying to recoup a ~$400M hardware investment in Cell. They didn't just pay AMD for a custom APU reusing previously developed CPU cores and GPU execution units, they funded their own architecture development involving custom acceleration hardware. That's why there was nothing like it available on the PC market.

Consoles used to deliver new microarchitectures of their own and push the envelope on system design. Now that they're essentially APUs in a sexy case, they can't do that. They can be much cheaper and expand the customer base. With the big money from recurring subscriptions, that's probably the right solution. I'll assume Sony/Microsoft's finance department has run the numbers.

Edit:
I was attempting to point out that consoles have only been PC-style hardware for 1 (soon 2) generations now. They used to be unique designs that did push the envelope in various facets. I wouldn't rule out market forces taking us back to that state of affairs at some point.

Notably because it's hella expensive to design competitive APUs and SoCs at this point in the game. 7nm and 10nm are an order of magnitude more difficult to develop than the 65nm and 80nm processes that existed when the PS3 was introduced.

While the Cell processor was unique for its time, it was essentially a single PowerPC core with a second "virtual" hardware thread, plus SPEs that could only run fairly simple programs, since the SPEs weren't really like modern CPU cores (no branch prediction, etc.). PCs by then were dominated by x86 and Wintel to the point that even Apple gave in; it's no surprise that PowerPC essentially died. When your costs are rising, you look to the common denominator, and that's x86 and Wintel for everything.

Just like the embedded and mobile/tablet market is dominated by ARM, the APU and CPU market is dominated by AMD and Intel. Why? Because trying to bring something new to the market would take billions in R&D, which is money better spent elsewhere; that's why even Apple's SoCs are ARM-based.

It's far more likely that we see consoles jump to ARM, if ARM (with Nvidia's help) can keep developing 4K-capable SoCs, than that Sony, Nintendo, or Microsoft develops its own APU/SoC again.

The PS3 was a monster at matrix math. It was so powerful, in fact, that the PS4 was a substantial downgrade in terms of raw throughput. That's part of why PS3-to-PS4 emulation was so limited: it mostly took rewrites to make the games work. To emulate another system, you need a CPU that's faster than the target, and for the first time we had a new console that couldn't match the old one.

In exchange, it had a lot more RAM and better general-purpose compute capacity; those SPEs were very limited in what they could do, but were very quick at doing those things. The single PowerPC core was the only core that could run the normal, branchy code that you'd expect in any kind of complex game. It was a transition from one 3GHz general-purpose processor and seven tightly focused 3GHz cores with massive throughput, to eight x86 processors that were clocked quite slowly. They were slower than the PowerPC, faster than the SPEs at general-purpose code, but much slower than the SPEs at matrix math. Trying to emulate old games on a machine with such weak cores would have been exceptionally difficult; ports were probably required in most cases.

The 360 was a more normal design, as I understand it: just a three-core 3GHz PPC machine, also optimized for matrix math and not as good at branchy code (no out-of-order execution). That made developing for it pretty easy, and it made ports relatively straightforward. The new cores still weren't as fast as the old ones, but porting and semi-emulation wasn't quite the nightmare that trying to run PS3 code was.

So, at this point, the XB1 and PS4 are weak PCs with lots of slow cores. The next gen will be very solid PCs with nice fast cores, excellent I/O, fairly strong graphics muscle, and lots of RAM. And with the way progress has slowed down, it may take quite a while for PCs to be decisively better, and quite a long time before they're better at the same price.

Malor wrote:

The PS3 was a monster at matrix math. It was so powerful ...

that the Air Force used thousands of them to make a supercomputer. I recall other countries taking a similar approach because that hardware could do some things FAR better than PCs of the day for an equivalent price. CFD and other simulations come to mind.

Nvidia seems to be caught between designing hardware for ML/datacenter use and gaming use. If they stick with one design and keep tacking ever more toward the datacenter side, there'll be increasingly more room for someone else to go all in on optimizing for gaming graphics.

https://wccftech.com/amd-ryzen-conti...

Ryzen dominates the CPU market in 2020.

Holy sh*t. I'll finally be able to peg my 144Hz 1440p monitor. $700 still feels ridiculous, but more charts like this will help to convince me.

What's MaxQ in this context? Some sort of maximum load i.e. complicated scene to render?

Also, I wish these kinds of comparisons stretched further back so those of us on a >1 generation upgrade cycle can see what ridiculous improvements to expect.

Jonman wrote:

What's MaxQ in this context? Some sort of maximum load i.e. complicated scene to render?

Also, I wish these kinds of comparisons stretched further back so those of us on a >1 generation upgrade cycle can see what ridiculous improvements to expect.

Maximum aerodynamic pressure! j/k, I believe it's max quality settings.

I'm on a 1070, and a 2060S is roughly 50% better than that, so my mind is going to be blown if I get a 3080. That'd be roughly three times the performance.

Max quality I would assume...

Jonman wrote:

Also, I wish these kinds of comparisons stretched further back so those of us on a >1 generation upgrade cycle can see what ridiculous improvements to expect.

Find your card in the TechPowerUp GPU database, and look at the Relative Performance area to find how your card compares to a 20-series card, then use that as a reference point.
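
If the database only gives you ratios against different reference cards, you can chain them. For example (illustrative figures that roughly match TechPowerUp's relative-performance numbers at the time, so treat them as approximations):

```python
# Chain relative-performance ratios through a shared reference card.
# The example figures are approximate, not gospel.
r_2060s_vs_1070 = 1.45   # RTX 2060 Super ~ 145% of a GTX 1070
r_3080_vs_2060s = 2.00   # RTX 3080 ~ 200% of an RTX 2060 Super

r_3080_vs_1070 = r_2060s_vs_1070 * r_3080_vs_2060s
print(f"An RTX 3080 is roughly {r_3080_vs_1070:.1f}x a GTX 1070")  # ~2.9x
```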

I just upgraded my monitor from a 1440p 60Hz one to a 1440p 144Hz one.

My 1070 has been doing a good job of hitting at or close to that 60Hz limit on the old monitor. Looks like the 3080 is the perfect companion upgrade.

GTX 960 -> RTX 3080 = ~555% performance increase, which I guess I can live with.

Gremlin wrote:

GTX 960 -> RTX 3080 = ~555% performance increase, which I guess I can live with. :P

IMAGE(https://i.gifer.com/1SB.gif)

peanut3141 wrote:

Nvidia seems to be caught between designing hardware for ML/datacenter use and gaming use. If they stick with one design and keep tacking ever more toward the datacenter side, there'll be increasingly more room for someone else to go all in on optimizing for gaming graphics.

NVIDIA has already kind of split their offerings in two, given that the licensing only lets data centers use their professional cards (like the Quadro and the V100), not the consumer cards. And the V100 doesn't even have a way to output video, so it's not much use for games.

I don't see them dropping either half right now, though.

*Legion* wrote:
Gremlin wrote:

GTX 960 -> RTX 3080 = ~555% performance increase, which I guess I can live with. :P

IMAGE(https://i.gifer.com/1SB.gif)

Maybe I can play Skyrim at a resolution higher than 1280x720!

peanut3141 wrote:
Jonman wrote:

What's MaxQ in this context?

Maximum aerodynamic pressure!

/Insprucker

Yeah, that was my association too.

Gremlin wrote:
peanut3141 wrote:

Nvidia seems to be caught between designing hardware for ML/datacenter use and gaming use. If they stick with one design and keep tacking ever more toward the datacenter side, there'll be increasingly more room for someone else to go all in on optimizing for gaming graphics.

NVIDIA has already kind of split their offerings in two, given that the licensing only lets data centers use their professional cards (like the Quadro and the V100), not the consumer cards. And the V100 doesn't even have a way to output video, so it's not much use for games.

I don't see them dropping either half right now, though.

Yeah, if I were going to say either player was getting hamstrung by trying to satisfy both the datacenter and gaming markets, it would be AMD. The Vega cards were basically datacenter cards first and foremost, repurposed for gaming. The Radeon VII is literally just a Radeon Instinct server accelerator that didn't make the cut, pushed out the door as a gaming GPU. Those cards outdid Nvidia's GPUs in raw compute power (going back to the earlier discussion about the usefulness of TFLOPS as a measure of performance), but it didn't translate into higher frame rates in games.

Jonman wrote:

I just upgraded my monitor from a 1440p 60Hz one to a 1440p 144Hz one.

How much time did you spend looking at your mouse pointer?

*Legion* wrote:
Jonman wrote:

I just upgraded my monitor from a 1440p 60Hz one to a 1440p 144Hz one.

How much time did you spend looking at your mouse pointer?

None until I read that.

Am I supposed to get high first? Is this the computer user's version of looking at their own hands and quietly saying "whoah"?

If you haven't spent at least a couple minutes just dragging windows around on the desktop and going "so smooooooth" you haven't really done the whole high refresh rate experience.

Thin_J wrote:

If you haven't spent at least a couple minutes just dragging windows around on the desktop and going "so smooooooth" you haven't really done the whole high refresh rate experience.

Yeah, OK, that's pretty sweet.

*Legion* wrote:
Jonman wrote:

I just upgraded my monitor from a 1440p 60Hz one to a 1440p 144Hz one.

How much time did you spend looking at your mouse pointer?

by "did" do you mean "still even a year later"?