
Comparing Performance: NVIDIA GeForce RTX 4080 vs RTX 4090

The NVIDIA GeForce RTX 4080 and RTX 4090 are two of the most powerful GPUs on the market, competing alongside the AMD Radeon RX 7900 XTX. The RTX 4090 packs over 16 thousand shaders across 128 SMs and 24GB of GDDR6X memory, delivering strong 4K performance within a 450W power envelope. The RTX 4080, by contrast, uses the smaller AD103 die with 9,728 shaders across 76 SMs, paired with 16GB of GDDR6X memory and a 320W power rating. The gap on paper is significant, but how do the two cards fare in real-world scenarios?

In terms of specifications, the RTX 4090 offers roughly 68% more shaders and 50% more memory than the RTX 4080, while their clock speeds are nearly identical. However, it's important to note that actual performance can vary depending on the game and resolution.

Testing was conducted on a system consisting of an ASUS ROG Maximus Z790 Hero motherboard, a Lian Li Galahad 360 cooler, an Intel Core i9-13900K CPU, and 2x16GB of DDR5-6000 CL38 memory. The benchmarks focused on 1440p and 4K resolutions.

At 1440p, the RTX 4090 consistently maintains a lead of 15% to 20% over the RTX 4080 in games like “A Plague Tale: Requiem”, “Assassin's Creed Valhalla”, and “Cyberpunk 2077”. In “Dying Light 2” and “F1”, however, the difference between the two GPUs is less pronounced.

Moving to 4K resolution, both GPUs perform well with average frame rates exceeding 200 FPS in games like “Hitman 3” and “Hogwarts Legacy”. The RTX 4090 generally holds an advantage, with a lead of around 9% to 25% over the RTX 4080, depending on the game.
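For readers who want to reproduce these comparisons from their own benchmark runs, the percentage leads quoted above are simply the faster card's average FPS relative to the slower card's. A minimal sketch (the FPS figures below are hypothetical placeholders, not numbers from this review):

```python
def percent_lead(fast_fps: float, slow_fps: float) -> float:
    """Relative advantage of the faster card over the slower one, in percent."""
    return (fast_fps / slow_fps - 1.0) * 100.0

# Hypothetical 4K averages, for illustration only.
results_4k = {
    "Game A": {"rtx_4090": 125.0, "rtx_4080": 100.0},
    "Game B": {"rtx_4090": 109.0, "rtx_4080": 100.0},
}

for game, fps in results_4k.items():
    lead = percent_lead(fps["rtx_4090"], fps["rtx_4080"])
    print(f"{game}: RTX 4090 leads by {lead:.1f}%")
```

With the placeholder numbers above, Game A shows a 25% lead and Game B a 9% lead, bracketing the range reported in the 4K results.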

It's worth noting that the Radeon RX 7900 XTX from AMD also outperforms the RTX 4080 in some games, such as “Cyberpunk 2077”. However, the RTX 4090 still maintains its lead over the competition.

Overall, the RTX 4090 delivers stronger performance than the RTX 4080 across the tested games and resolutions, with its larger shader count and memory capacity driving the advantage. Still, individual preferences, budget, and per-game optimization all matter when choosing between the two.
