Comparing 6 and 16 Cores: How Well Have Ryzen 5 CPUs Aged?

AMD's Ryzen 5 desktop CPUs have been among the most popular of the past five years, with models such as the Ryzen 5 3600 selling in remarkable quantities. The 5600X later gained plenty of traction too, and although the non-X version arrived later than expected, it also became very popular.

Although the $300 MSRP of the Ryzen 5 5600X didn't exactly thrill us, it was still a good deal compared to competing parts, and as is customary, AMD was quick to cut prices. Roughly a year and a half later, the company released the standard non-X 5600 for $200, and discounts soon brought it down to $120-130.

The Ryzen 5 5600 and 5600X were, of course, always reasonably priced and offered great value for gamers on a budget. But how have they held up over time, and would you have been better off investing in an 8-core or even 16-core model?

AMD Zen 3 in Context

Even though the 5600 series offered gamers great value, not everyone was convinced. Some recommended the Ryzen 7 5800X, or the 5700X that launched alongside the 5600, claiming the 6-core parts were a bad buy. Others went so far as to insist on the Ryzen 9 5950X or bust, which, borderline absurd as it sounds, may have had some merit. With Zen 3 now about four years old, we want to see how these CPUs compare in today's games.

It goes without saying that the Ryzen 5 5600X will be slower than the R7 5800X and, by extension, the 5950X. The question is, by how much, and will it still be usable? It’s also important to review past prices, as the 5950X was never reasonably priced. It is possible to make a compelling case for the 5700X as a gaming CPU, and we did just that back in the day.

The Ryzen 5 5600 and Ryzen 7 5700X, which both offered outstanding value at their release prices before dropping even further as discussed above, seemed like the smart choices for gamers. By early 2023, both sold for roughly 24% less than their X-branded counterparts while delivering nearly the same performance, making the 5600 a considerably better value than the 5600X and the 5700X a much better value than the 5800X.

In contrast, the Ryzen 9 5950X was significantly more expensive at $500, costing about 160% more than the 5700X and a little over 280% more than the 5600.
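As a quick sanity check on those premiums, here's a minimal sketch in Python. The $130 and $190 street prices are assumptions for illustration, since only the 5950X's $500 figure is quoted directly above:

```python
# Back-of-the-envelope price premiums. The $130 (5600) and $190 (5700X)
# street prices are assumed for illustration, not quoted from the review.
prices = {"5600": 130, "5700X": 190, "5950X": 500}

def premium(expensive: float, cheap: float) -> float:
    """Percentage by which one price exceeds another."""
    return (expensive / cheap - 1) * 100

print(f"5950X vs 5700X: +{premium(prices['5950X'], prices['5700X']):.0f}%")  # ~163%
print(f"5950X vs 5600:  +{premium(prices['5950X'], prices['5600']):.0f}%")   # ~285%
```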

Because the 5600 and 5600X deliver nearly identical performance and both are unlocked (the same goes for the 5700X and 5800X), we won't test every CPU and clutter the data. Instead, we'll compare the 5600X, 5800X, 5950X and, for fans of 3D V-Cache, the 5800X3D. Every CPU was tested with 32GB of DDR4-3600 CL14 memory and an RTX 4090. Let's get into it.

Benchmarks

First up is Assassin's Creed Mirage. At 1080p the 5600X and 5800X are fairly similar, which is what we typically found three years ago: the 5800X was just 2.5% faster on average and 7% faster at the 1% lows. The 5950X did a bit better, beating the 5600X by 10% on average with a 16% jump in the 1% lows; a decent improvement, but not one worth paying more than three times as much for. The real hero here is the 5800X3D, which was 40% faster than the 5600X.

Since 4K data is frequently requested, we've included it as well. Keep in mind that at 4K with the 'Ultra High' preset we're mostly measuring the RTX 4090; enabling upscaling or lowering the quality settings would push the margins back toward what we see at 1080p.

Helldivers 2 shows even less separation between the 5600X, 5800X and 5950X: at 1080p all three delivered equivalent performance. The 5800X3D, however, was significantly quicker, boosting the average frame rate by at least 27%, a reminder that L3 cache matters more for gaming performance than core count.

Ratchet & Clank: Rift Apart shows a notable gap between the 5600X and the 5800X. Here the 8-core processor was a staggering 27% faster at 1080p, with 13% better 1% lows. The 5600X still delivered a good experience overall, but this is a clear win for the 5800X.

The 5600X was the only CPU unable to fully utilize the RTX 4090 at 4K. Even so, an average of 110 fps is good for a CPU that costs a little more than $200, and frame time performance was outstanding during a 20-minute play session on the 5600X.

Spider-Man Remastered also scales with core count, although L3 cache is far more important for lifting frame rates. Here the 5800X outperformed the 5600X by 11% and the 5950X beat it by 25%, while the 5800X3D was 15% quicker than the 5950X and 44% faster than the 5600X.

Even though the 5950X was faster, the 6-core Zen 3 CPU was hardly slow, averaging 140 fps with 1% lows of 117 fps.

Cyberpunk 2077: Phantom Liberty is very CPU-demanding, yet unlike Ratchet & Clank, the 5600X, 5800X and 5950X all performed similarly here. Once again AMD's 3D V-Cache made a big difference, allowing the 5800X3D to outperform the 5950X by 32%. At 4K we were firmly GPU-limited at around 80 fps, so you'll probably need to enable upscaling to get up to about 100.

Hogwarts Legacy is another game that strains the CPU, yet the 8 and 16-core parts aren't much faster. The 5800X beat the 5600X by just 9%, while the 5950X delivered inconsistent performance, perhaps due to a scheduling issue. The 8-core parts are quicker, but it's hard to argue they justify the extra cost over the 6-core models.

Horizon Forbidden West plays about as well on the 5600X as it does on the 5800X and 5950X, with just a 3% gain going from the 5600X to the 5800X. Meanwhile, the 5800X3D outperformed the 5950X by up to 25%.

Dragon's Dogma II leans heavily on the CPU, yet the 5800X outperforms the 5600X by only 4%, while the 5950X beats the 5800X by 9%. The 5800X3D, however, was 36% faster, again underlining the importance of L3 cache. More cores made a small difference, but the 6-core 5600X remained perfectly serviceable.

Baldur's Gate 3 is another game where the 5600X performs exceptionally well. Here the 5800X offered only a 6% boost over the 5600X, with the 5950X 13% quicker. More cores improve performance, but not by much. The 5800X3D's 3D V-Cache, on the other hand, delivers a dramatic uplift, making it 51% faster than the 5600X and 42% faster than the 5800X.

The Last of Us Part I is a CPU-intensive game that works the 5600X hard, and here the 5800X was 14% faster. Meanwhile, the 5800X and 5950X performed similarly, with the 5800X3D again delivering the best results.

Lastly, Starfield is another title where the 6-core processor can't match the 5800X: the 5800X outperformed the 5600X by 15%, and the 5950X beat the 5800X by a further 23%. Performance was still quite good, though, with 1% lows above 60 fps.

Calculating the Mean

Here's a look at average performance across the 11 games tested, calculated using the geometric mean. On average, the 5800X outperformed the 5600X by 9% and the 5950X was 14% faster. The 5800X3D delivered a significant uplift, beating the 5800X by 27% and the 5600X by 39%.
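For anyone curious how that average is derived, here's a minimal sketch of the geometric mean calculation in Python; the per-game numbers are placeholders, not the review's actual data:

```python
from statistics import geometric_mean

# Hypothetical average-fps results for two CPUs across a few games
# (placeholder numbers, not the actual benchmark data).
fps_5600x = {"AC Mirage": 128, "Helldivers 2": 115, "Starfield": 77}
fps_5800x = {"AC Mirage": 131, "Helldivers 2": 116, "Starfield": 89}

# The geometric mean of the per-game ratios gives the average relative
# performance, so no single high-fps game dominates the result.
ratios = [fps_5800x[g] / fps_5600x[g] for g in fps_5600x]
print(f"5800X vs 5600X: +{(geometric_mean(ratios) - 1) * 100:.1f}% on average")
```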

For comparison, here's the 11-game average from our November 2020 day-one review of the 5600X. Back then, the average difference between the 5950X and the 5600X was just 4%, with only two instances where the 5950X was as much as 12% faster. Games have clearly become more CPU-intensive over the past 3.5 years, which was to be expected.

What We Learned

So there you have it. Across many of the most demanding AAA games available today, the Ryzen 5 5600X is on average 8% slower than the 5800X and 12% slower than the 5950X. Not bad for a part that costs so much less. Better still, there wasn't a single instance where CPU-bound frame rates fell below a comfortable 60 fps, so overall performance was excellent.
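As an aside, the "faster" figures from the benchmark section and the "slower" figures here are the same gaps expressed from opposite ends: a 9% lead for the 5800X works out to roughly an 8% deficit for the 5600X. A quick conversion sketch:

```python
# Convert "A is x% faster than B" into "B is y% slower than A".
def pct_slower(pct_faster: float) -> float:
    return (1 - 1 / (1 + pct_faster / 100)) * 100

print(f"{pct_slower(9):.1f}% slower")   # ~8.3%: the 5600X vs the 5800X
print(f"{pct_slower(14):.1f}% slower")  # ~12.3%: the 5600X vs the 5950X
```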

Going all the way back to the original release of the 5600X, 5800X and 5950X, it's hard to argue that gamers on a tight budget should have chosen the higher core count parts. "Spend more because more cores are better and more future-proof" simply wasn't a strong enough argument.

At launch, the 5800X cost $150 more than the 5600X, a 50% premium. Three and a half years later, we still feel that the extra cores were nice to have if you could afford them, but hardly necessary for gaming. And while we can now point to several examples where you do get more for your money, the added cost still may not be fully recouped, even if the genuine performance uplift makes the upgrade easier to justify today.

Despite packing 167% more cores than the 5600X, the Ryzen 9 5950X was and remains a ridiculous choice for gaming, as the 5800X3D utterly destroys it with an average performance advantage of 22%, a far larger margin than the one separating the 5600X and the 5950X.

Of course, the 5950X makes perfect sense if you're also running core-heavy productivity workloads, but purely for gaming it was never a wise buy. It's fair to say that anyone who upsold gamers on the 5950X for gaming alone cost them a substantial amount of money.

To be clear, that's not to say every gamer should buy a Ryzen 5 or Core i5 today, just as not every gamer should have bought the Ryzen 5 5600 series back then. Three and a half years on, our original take stands: these parts are good choices at sensible prices, and they work well for gamers on tighter budgets.

Unfortunately for AMD, Intel's entry-level parts now often offer better value, and that's something AMD needs to address with AM5 if it wants to win over the entry-level retail market.

All things considered, it was interesting to find a few examples where the 5600X is starting to trail the higher core count models by a meaningful margin, including Ratchet & Clank, Spider-Man, Starfield and The Last of Us Part I.

Had we included fifty or more games, mixing recent releases with popular titles from the past few years, the overall margins would have been smaller, as most games simply aren't that taxing on the 5600X.

Coming back to the newer releases, it's worth stressing that core count shouldn't be the primary factor when buying a CPU, for gaming or anything else; what matters is overall performance. In our testing, the 5800X3D comfortably beat the 5950X despite having half as many cores, and the newer Ryzen 5 7600 matches the 5800X3D despite having two fewer cores and far less L3 cache.

Core count carries some weight when comparing CPUs of the same architecture, but across different architectures, such as Zen 3 versus Zen 4 parts, it's nearly meaningless, and Intel's hybrid P-core/E-core designs muddy the waters even further.

Then there's the resolution angle. While some may find 4K CPU benchmarking interesting, keep in mind that what matters is your target frame rate, not the resolution. For instance, learning that the RTX 4090 manages only about 75 fps at 4K using the ultra preset isn't very helpful if you want 144 fps in Cyberpunk 2077. Since every CPU in that scenario could drive 70-80 fps, all we were really testing was the RTX 4090.

What you really want to know is whether the CPU can deliver the frame rate you're targeting, and which GPU settings you'd need to get there. In short, GPU-limited 4K testing with an RTX 4090 at ultra or high quality settings reveals very little about CPU performance, and certainly nothing that can't be learned from the 1080p data.
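A simple way to model this: the frame rate you actually see is roughly capped by whichever of the CPU or GPU is slower for a given game and settings. Here's a toy illustration with entirely made-up numbers:

```python
# Toy model: delivered fps ~= min(CPU-limited fps, GPU-limited fps).
# All figures are illustrative, not benchmark results.
cpu_fps = {"5600X": 120, "5800X3D": 170}        # roughly resolution-independent
gpu_fps = {"1080p Ultra": 200, "4K Ultra": 78}  # for a hypothetical game

for cpu, c in cpu_fps.items():
    for setting, g in gpu_fps.items():
        bound = "CPU" if c < g else "GPU"
        print(f"{cpu} @ {setting}: ~{min(c, g)} fps ({bound}-bound)")

# At 4K Ultra both CPUs land at ~78 fps: the GPU masks the CPU gap,
# which is why the 1080p numbers are what reveal CPU performance.
```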
