Will AMD disrupt the graphics market with RDNA 3 and RX 7900 XTX?


Last week, AMD finally revealed its hand in terms of its next generation graphics line-up. Based on the new RDNA 3 architecture and featuring highly innovative designs, two products were unveiled: a flagship Radeon RX 7900 XTX priced at $999 and a cut-down RX 7900 XT - yours on December 13th for $899 (UK prices for both products remain unknown at this time). The RDNA 3 reveal was AMD's most promising opportunity in years to disrupt a discrete graphics market that sees Nvidia command an 80 percent share - and hopes were high that AMD could reshape the competitive landscape. So how did it fare?

What's clear is that the reality of AMD's products did not live up to the pre-launch hype delivered by leakers who evidently had few actual facts to work with. Talk of 2x performance boosts and 'almost 4GHz GPUs' let down some fans and, rather unfairly, took the sheen away from AMD's actual achievements, which are highly impressive in many ways. For example, an additional 50 to 70 percent of performance is broadly in line with what Nvidia achieved with RTX 4090. And we're seeing the first realisation of AMD's chiplet design in the graphics space, where a 5nm compute die sits alongside six 6nm memory cache dies on a single package - saving money on manufacturing. This does seem to have come at the expense of clock speeds and thus raw performance - a 2.3GHz core clock is only a small bump over RDNA 2, when other 5nm products have proven prodigiously faster. But the point is, AMD is breaking new ground here, with rewards that can only scale positively in future products.

Beyond the highly innovative chiplet design, the RDNA 3 architecture itself looks more like a refinement of RDNA 2 - there is a lot more compute power, but the compute units themselves look similar in design ethos to their predecessors. There's still no sign of the kind of dedicated RT hardware acceleration seen in Intel Arc and Nvidia products, and there are seemingly no bespoke machine learning blocks either - everything is built into what AMD describes as its 'unified compute unit'. This has huge advantages in terms of die area (and thus cost), but it also means that RT and ML features will continue to lag behind the competition. Based on AMD's own numbers, RT performance vs last-gen rises almost entirely in line with non-RT performance.


Author: Richard Leadbetter
