AMD Radeon HD 7900 Possible Specifications (XDR2 Memory, AMD Graphics Core Next)

AMD Radeon PCB

Specifications for AMD’s next generation of graphics cards appear to have surfaced. The HD 7900 series (HD 7950, HD 7970 and HD 7990) should launch in Q1 2012 on a 28nm process. The GPU, codenamed Tahiti (PRO and XT), will use a new architecture called GCN, for Graphics Core Next (the current HD 6900 is based on a VLIW4 architecture). The other important change in the HD 7900 series is the switch from GDDR5 to XDR2 memory, which is said to offer twice the bandwidth of GDDR5. The Radeon HD 7970 would have 2048 shader cores (vs 1536 for the HD 6970), 128 texture units and 64 ROPs, with a power consumption of 190W, 60W less than the HD 6970.

But before releasing the HD 7900 series, AMD will produce the GPUs of the HD 7800 series (HD 7870, HD 7850 and HD 7650), planned for the end of 2011. The Radeon HD 7870 will be powered by the Thames XT GPU (28nm, VLIW4 architecture, 1536 shader cores) and will be available with 2048MB of GDDR5 memory.
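The "twice the bandwidth" claim comes down to per-pin data rate: theoretical memory bandwidth is just the per-pin rate times the bus width. A minimal sketch, where the XDR2 per-pin rate is purely an assumption derived from the rumored 2x claim, not a confirmed spec:

```python
def bandwidth_gb_s(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Theoretical memory bandwidth in GB/s: per-pin data rate times bus width."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

# GDDR5 on the HD 6970: 1375 MHz command clock, quad data rate -> 5.5 Gbps per pin,
# on a 256-bit bus. This reproduces the card's official 176 GB/s figure.
print(bandwidth_gb_s(5.5, 256))   # 176.0

# Hypothetical XDR2 at twice the per-pin rate on the same bus width (assumption):
print(bandwidth_gb_s(11.0, 256))  # 352.0
```

At a fixed bus width, doubling the per-pin rate doubles the bandwidth; alternatively, the same bandwidth could be reached on a narrower, cheaper bus.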

AMD Radeon HD 7900 series specifications

AMD Radeon HD 7800 series specifications


24 thoughts on “AMD Radeon HD 7900 Possible Specifications (XDR2 Memory, AMD Graphics Core Next)”

  1. DrBalthar

    Interesting move. Considering that nVidia can barely get GDDR5 to run at a decent speed with their shitty memory controller, this will put them even further ahead in the game.

  2. ATifanclub

    Impressive numbers: XDR2, 28nm, lower power, DX11.1, PCI-E 3.0… well, that’s the hammer of Thor, not just a GPU.

  3. Squall Leonhart

    DrBalthar talking shit again, I see; nvidia’s controller can hit 5.2 GHz quite easily.

  4. Leith

    Gotta love AMD recycling old architectures with a new model number… 68xx, now 78xx…

  5. Anon

    Nice paper launch.

    HD6970 vs GTX580

    Transistors: 2.64 billion vs 3 billion
    Engine Clock: 880 MHz vs 772 MHz
    Stream Processors / CUDA Cores: 1536 vs 512
    Compute Performance: 2.7 TFLOPS vs 1.58 TFLOPS
    Texture Units: 96 vs 64
    Texture Fillrate: 84.5 Gtex/s vs 49.4 Gtex/s
    ROPs: 32 vs 48
    Pixel Fillrate: 28.2 Gpix/s vs 37.1 Gpix/s
    Frame Buffer: 2 GB GDDR5 vs 1.5 GB GDDR5
    Memory Clock: 1375 MHz vs 1002 MHz
    Memory Bandwidth: 176 GB/s (256-bit) vs 192 GB/s (384-bit)

    GTX580 wins. 😛

    AMD trying to fool customers with high numbers.
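For what it’s worth, the computed figures in that comparison do follow from the clocks and unit counts via the usual rules of thumb (FLOPS = shader clock × ALUs × 2, fillrate = clock × units); note that the GTX 580’s CUDA cores run at the hot clock, twice the 772 MHz core clock. A quick sanity check:

```python
def tflops(shader_clock_mhz: float, alus: int) -> float:
    """Peak single-precision TFLOPS: 2 FLOPs (multiply-add) per ALU per cycle."""
    return shader_clock_mhz * alus * 2 / 1e6

def fillrate(clock_mhz: float, units: int) -> float:
    """Fillrate in Gtex/s (texture units) or Gpix/s (ROPs)."""
    return clock_mhz * units / 1e3

print(round(tflops(880, 1536), 2))   # 2.7  (HD 6970 compute performance)
print(round(tflops(1544, 512), 2))   # 1.58 (GTX 580: 2 x 772 MHz hot clock)
print(round(fillrate(880, 96), 1))   # 84.5 (HD 6970 texture fillrate)
print(round(fillrate(772, 48), 1))   # 37.1 (GTX 580 pixel fillrate)
```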

  6. Promilus

    @Anon – I already have one 😉 And the purpose of money is not to spend it… it’s rather to spend it wisely 😛 Of course the GTX580 > HD6970 in pure performance at stock clocks, but not everywhere and not always 😉

  7. John

    These numbers look too good to be true for the low-end cards. The Radeon 7570 with 16 ROPs and a power consumption of 50W?!?! Similarly specced cards from past generations (4600, 5500 series) have used 8 ROPs, with NVIDIA going down to 4 ROPs on their 430, 440 and 520 series. Sure, it is possible that this improvement can be attributed to the 28nm process, but I’d be surprised if these cards were priced in the $80-100 range.

  8. pr0or1337

    But when we talk about driver update reliability and in-game crashes, everybody avoids ATI.

    I expect this will change in the near future; doing it for AMD chipsets at least could be a good starting point.

  9. cystacae

    @pr0or1337: I never had a crash issue due to bad drivers with ATI, but was riddled with them via nVidia. Perhaps it is because I run beta drivers for both unless no recent beta is available, in which case I run the WHQL ones.

  10. LogiGPU

    Why avoid them? If a game is open, drivers can’t be updated. Simple, isn’t it: update them yourself or disable auto-update.

  11. John

    According to documents circulating around the Internet, only the high-end GPUs will be manufactured using the 28nm process. The lower-end GPUs will be manufactured using the 40nm process. Thus, expect cards with specs similar to AMD’s past two generations in the low-end range (definitely not 16 ROPs).

  12. Nuk3d

    I hate it when people try to sell advice based only on their own experience.

    @pr0or1337: other people have the opposite experience, and it’s probably something on your end or a defect in that particular card, not the whole line from AMD (or even nvidia, for that matter). Also, game devs don’t always really optimize for AMD…

  13. Corvin

    Thanks for posting; I had looked for an AMD roadmap and didn’t find anything. I’m going to upgrade my Radeon 4850, but the Radeon 6950 or 560 Ti, and even the 6970 and GTX 580 which I looked at, are weak in today’s games like Metro 2033, so I want something more powerful. I’m going to wait for the 7900s.

  14. Promilus

    @Nuk3d – most “optimizations” in games for a specific vendor aren’t done through a tweaked game engine but through tweaked drivers. Thanks to its partnership program, nv has early access to game builds and can optimize their drivers to run them smoothly even BEFORE release. Then, a few months after release, AMD ships tweaked drivers with a performance boost in that title, while nv can’t boost it up any more. Of course you can throw in lots of obstacles… AMD is no good at tessellating a whole scene with high factors, so kill performance on Radeons by using it. NV is slightly worse at pixel shader post-processing, so apply enough effects to get the 6970 above the GTX580… That’s how it works nowadays 🙁

  15. Squall Leonhart

    2000 cores in a SIMD design is not going to happen at 28nm

  16. Me

    It looks like a nice GPU. Too bad that ATI had so many bugs and incompatibilities in games due to nvidia owning the whole market. Anyway, I think it is a nice step forward in performance, and this will push nvidia to make even nicer GPUs. And finally they are taking power consumption into account!!

    And for all the fanboys: stop fighting. Everyone can buy the GPU they like most. I use nvidia because most games are optimized and made for nvidia, besides all the features I get in 3D design using nvidia with CUDA, PhysX, etc. But if ATI offered something better, for sure I would choose ATI. ATI is nice and cheap, good for gamers without too much money. nVidia is more for gaming enthusiasts and people who want to use the GPU for more than just playing games.

    And remember that with more GPU-vs-GPU competition, we get cheaper and better products. So let them fight to give us some nice GPUs next year 😀

  17. Grey

    Wow, the kind of stuff people spew about cards is ridiculous. I have used both companies’ products and they’re both great. That being said, I am currently using AMD because after 6 Nvidia cards dying and 0 ATI/AMD cards dying, I figured I was wasting my money. Not saying Nvidia is to blame, because none of them were reference cards, but still. Also, I have had issues with both companies’ drivers and lost several cards to those infamous 196.75 ForceWare drivers that were known to kill cards. Never had that issue with ATI/AMD drivers. Both companies have failed their customers in some way or another, but sadly it’s a duopoly, so pick your side and move on. They all perform similarly at the same price points and give and take on features. Several things stand out from reading everyone’s comments:
    1) Both companies have been known to reuse previous-gen architecture, so don’t even try to use that as an excuse against the other.
    2) Most cards can’t handle DX11 worth a damn. Unless you’re at the high end or running multiple cards, you’re not getting ideal performance.
    3) Games like Metro 2033 shouldn’t be used as a baseline for evaluating your next purchase unless you plan on playing it all the time. It’s a great benchmark and looks amazing, but it’s not realistic to use it to judge the future of performance in DX11 titles.
    4) PhysX is a zombie; Nvidia is moving away from it, at least in the way we know it now.
    5) The argument about Nvidia being better for things other than gaming makes me assume people are talking about OpenCL or general computing. Both companies have strong showings in different applications; for example, Bitcoin favors AMD very strongly, while Folding@home favors Nvidia very strongly.
    6) Be happy there is more than one player in the graphics card market. If there wasn’t competition, consumers would suffer. You like Nvidia? Great! You love AMD? Fantastic. Just realize that both companies have their strengths and weaknesses.

  18. Alex

    Yay! Now if only game developers could release some games that could actually utilize the speed of these GPUs. Since every AAA game is a console port these days, I feel I will be safe with my 5870 for quite some time to come. Even my old 9800GTX can max out most new games, even with antialiasing. The only reason you would want one of these is for multi-monitor setups with really huge resolutions like 5760×1080.

  19. maichi

    And nvidia is trying to fool customers with a high transistor count, but it performs like sh*t.

  20. Leggy

    lol, and the nvidia 500 series is the only proper series in years from that company. Now, to correct some fanboyism:
    Gaming optimization has nothing to do with drivers; it has to do with your game. That’s why so many multiplatform games work like crap on PS3, and then people say “lulz PS3 sucks” because the PS3 has less processing power. I’m not even bothering to compare the specs; they’ve been around for years, and it’s more than clear from both the FLOPS and CPU specs which actually does better. Also, if you stop for a minute and think about how dirty nvidia plays, even disabling features such as AA in nvidia games like Batman as soon as an ATI card is detected, I think this statement is clearer than ever: Nvidia is a lot of hype and a lot of crap talk. Same drill on Android.
    Nvidia does indeed have higher performance in the 580 vs the HD6970, but nvidia’s AA and AF are still some sort of cheap blurring filters compared to ATI’s AA & AF.
    The only card I actually disliked from ATI was the very first version of the AMD 4870 available on the market; it died on me so quickly because the VRM couldn’t handle the card’s power demand, reaching 93ºC on the VRM while the GPU was at 57ºC. It actually died during a Windows 7 installation.

  21. sfsf

    The use of the old architecture is kind of a bummer.

    For the rest, it looks great.

Comments are closed.