I've been wondering this for a while now... and forgive me if this is a noob question, but why couldn't GPU makers increase the GPU die size to deliver massive compute-power gains and forgo some efficiency? I don't really understand the "we have to do everything in our power to shrink the lithography so we can increase compute power without decreasing efficiency" mindset when it comes to elite-performance products.
To be honest, I'd actually consider a $200 price increase over the top-tier GPU if there were a new class of video card available: the "more-power-consuming-and-requires-water-cooling-but-yields-40%-more-compute-power GPU." The "V12" engine, if you will, of the GPU world.
Consumers can buy a 1200W PSU that could power my neighbor's Tesla, but Nvidia gave us a 1080 Ti @ 250W. That's great, but where's the "GTX Nitro" @ 400W with 40%+ more performance than the Ti model?
I presume my layman's understanding of the hardware prevents me from drawing deeper conclusions, so I figured I'd ask the much smarter PCPP community what y'all think/know! :)