HD 7970 pushed to 1.26GHz

By dividebyzero
Jan 1, 2012
  1. The guys over at VR-Zone have been experimenting with volt modding the HD 7970...and this is what they came up with...
    [IMG]

    36.97% core OC, 14.5% mem OC @ 1.25v

    24.7% increase in 3DMark 11 Performance
    26.3% increase in 3DMark 11 Extreme
    19.5% increase in Heaven 2.5 bench
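For anyone wanting to sanity-check those percentages: a quick sketch, assuming the HD 7970's reference clocks of 925MHz core / 1375MHz memory, with the modded clocks (~1267MHz / ~1575MHz) inferred back from VR-Zone's quoted figures rather than taken from the article:

```python
# Back-of-envelope overclock percentage check.
# Reference HD 7970 clocks: 925 MHz core, 1375 MHz memory (5.5 Gbps effective).
# The modded clocks (1267/1575 MHz) are inferred from the quoted percentages,
# not independently confirmed.

def pct_gain(stock_mhz, oc_mhz):
    """Overclock expressed as a percentage over stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

print(round(pct_gain(925, 1267), 2))   # core OC  -> 36.97
print(round(pct_gain(1375, 1575), 1))  # memory OC -> 14.5
```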

    Meanwhile, Chiphell are reporting that VR-Zone's overclocks might be bettered by Sapphire's factory solution: 1335MHz core and a rather modest 5735MHz effective memory for a pair of cards branded "Atomic" (the name last used on Sapphire's HD 4890), a 1125MHz core/5600MHz mem for the "Toxic" edition, and a "Flex" (Eyefinity 6) edition with 6GB of vRAM.
    [IMG]
  2. red1776

    red1776 Omnipotent Ruler of the Universe Posts: 5,846   +61

    see....I told ya!:p:haha:
    Preview of "Sea Islands" capabilities perhaps?

    Good god! ...look at the bandwidth!
  3. dividebyzero

    dividebyzero trainee n00b Topic Starter Posts: 4,699   +586

    No such thing as a free lunch red....

    Higher clockspeed or higher memory bandwidth - or a juggled combination to keep within the board's specification. Pretty much SOP, I would think, for any AIB's custom boards. Not sure how that affects those cards that have proven to be poor/average overclockers. Is pouring more voltage into the GPU the answer? And if it is, then surely you're either raising board wattage, or throttling kicks in earlier to cap the power use? Again, I'd ask if there is a third option.
    Seems somewhat mutually exclusive to this thread from my PoV, if throttling limits are the defining statement. AFAIA, less aggressive throttling should allow better performance in games/apps where the GPU/VRM is being throttled yet is still within TDP specification (i.e. when either the GPU or vRAM is being under-utilised while the other is being constrained by throttling). I'm not entirely sure how the throttling limit is supposed to affect yield variances, unless the throttling can be completely relaxed in the face of increased core voltage*.
    You'll still need to compare apples-to-apples. Stock clocks-for-stock clocks at paper launch and product launch. As for...
    ...as I said before, I would agree that it is very possible...but it must come at the expense of either lower yield (to eliminate those poor samples which accounted for ~50% of review cards), or increasing voltage -assuming this would gain the required speed, and that is not the case in all instances from what I'm seeing.

    What I'm seeing here is soft modded voltage, and obviously, cherry picked GPU's (Sapphire) as was the case with previous Atomic/Toxic HD 38xx and 48xx and OC'ed cards in general. Good to see, but the parameters that guide other OC'ed components still apply here.
    1. Fan speed at 100% on a reference blower fan. Not a 24/7 solution IMO.
    2. Heavily OC'ed cards are still likely going to require a waterblock or an (expensive) third-party cooler such as Arctic's Xtreme Plus - you could conceivably get HD 6990 performance at HD 6990 pricing - there is no way on earth that a 1335MHz core uses any reference component aside from a top-binned GPU*
    3. Performance never scales linearly to clock rate. The 32 ROP count is likely to cap performance gains, even if you profile the OC for both high core/low mem and average core/high mem. If this wasn't the case you would have the GTX 560Ti leaving the 580 in the dust....which doesn't seem to be the case

    *Since the boards in their present guise are pulling ~277W (average) @ ~1.15V (VR-Zone's example), that gives you 229 Amps through 5+1 phases of Coiltronics 1007R3-R15 @ 61A per phase (366A max).
    Assuming that the board's TDP isn't to rise (i.e. the throttling profile makes efficiency gains without blowing the power budget), then VR-Zone's example would be 277 x 1.25V = 346.25A (lumping GPU and vRAM together at the same numbers - so not an exact science)...which is 5% off the board's design threshold. That kind of leaves raising the power limit or increasing the voltage regulation, which would require a complete deviation from the reference model's power delivery/trace layout/PCB layers....and pricing. In effect, what happens with every limited edition (MSI Lightning, Gigabyte SOC, Asus DCII, PowerColor PCS+ et al).
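Taking those figures at face value (the ~346A estimate at 1.25V and the 5+1 phase/61A-per-phase rating quoted above - these are the thread's numbers, not independent measurements), the VRM headroom works out like this:

```python
# VRM headroom sketch using the figures quoted above (estimates, not measurements).
phases = 6             # 5+1 phase design counted as six inductors
amps_per_phase = 61    # per-phase rating of the Coiltronics inductors
design_max = phases * amps_per_phase    # 366 A total
estimated_draw = 346.25                 # A at 1.25 V, per the estimate above

headroom_pct = (design_max - estimated_draw) / design_max * 100
print(design_max, round(headroom_pct, 1))  # ~5% margin left before the rated limit
```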

    Thoughts? Anything I've missed?
    EDIT:
    That would be "Canary Islands" I think, and yes, probably. AMD are locked into GCN for the foreseeable future, so it isn't unreasonable that the next iteration of cards will be a more refined design on a more mature process node.
    Cool....the GK110 with 320 GB/sec should kick a$s then (512-bit x ~1250MHz memory clock). Haven't really ever put much stock in the bandwidth number generally:
    GTX 285.........512 bit....2484 memory..........158.98 GB/sec
    HD 5870.........256 bit....4800 memory..........153.6 GB/sec
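Those bandwidth figures fall straight out of bus width × effective memory clock, which is easy to verify (the GK110 line is pure speculation from the rumoured 512-bit/5000MT/s configuration mentioned above):

```python
def bandwidth_gbps(bus_bits, effective_mhz):
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (transfers per second)."""
    return bus_bits / 8 * effective_mhz / 1000

print(round(bandwidth_gbps(512, 2484), 2))  # GTX 285 -> 158.98 GB/s
print(round(bandwidth_gbps(256, 4800), 1))  # HD 5870 -> 153.6 GB/s
print(bandwidth_gbps(512, 5000))            # rumoured GK110 -> 320.0 GB/s
```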
  4. EXCellR8

    EXCellR8 The Conservative Posts: 2,278

    can't wait to get my hands on one... my 1GB 5870 isn't liking the eyefinity thing much lol
  5. dividebyzero

    dividebyzero trainee n00b Topic Starter Posts: 4,699   +586

    Yields seem to be very constrained if this is any indication....unless TSMC is pushing out a large volume of perfect GPU's.

    @EXCellR8
    If you're desperate to get your mitts on a card, I'd suggest you browse some Greek etail sites...they don't seem to be too worried about official launch dates
  6. EXCellR8

    EXCellR8 The Conservative Posts: 2,278

    yea i'm planning to swap out some components around the time ivy bridge makes its debut this spring. not sure if i will wait on a new GPU but i sure do like what i see in the 7970. of course there is always the option of sitting out and seeing what nvidia brings to the table. i like amd cards but tbch i am just not impressed with driver support and having to take the backseat with optimizations (though that seems to be getting better). i'd like to stay with radeons but i am interested to see if nvidia has a comparable card in the same price range.
  7. dividebyzero

    dividebyzero trainee n00b Topic Starter Posts: 4,699   +586

    Personally, I'd jump on the 7970 if it were more accessible. I'm picking that the card lands at closer to $600 than the MSRP, at least while the availability is an issue - which means it hits $US800+ in New Zealand. I don't think Nvidia are launching a comparable card in the near future - the closest you are likely to get is 7950/GTX 580 performance out of the GTX 660.

    The 7970 looks to have covered most of the bases pretty well, although I'll wait for some user reviews and some personal use before I make a definitive decision about upgrading my second rig's 5850's, since there seems to be a wide disparity in conclusions so far - sound levels (acceptable to some, still too loud for others) and a very wide variance in overclocking ability. I need to maintain one Nvidia and one AMD system in any case, so it will probably come down to economics. Crossfired 7890's or 1.5GB 7950's are likely to be cost-comparable to a 7970, so that becomes another reason to hold off until the model lineup is complete.
  8. red1776

    red1776 Omnipotent Ruler of the Universe Posts: 5,846   +61

    Nope, I think you covered it. i was kidding though :haha:

    I honestly don't think AMD has a full grip on what they have in those stacks of 28nm wafers yet. I'm picking that it's better than they thought and there will be more in-between cards in this line like the 7890 - only they won't be salvage cards, but price-point cards, because they can. The NV GTX 770 too close to the 7950??? Here's a 7960 for ya.
    The other point I was making is that during the limited selection for reviews the first time around, it may not have been noticed that the power management systems/throttling may not have been working up to par. That's why I will be looking keenly at the wider round of reviews on the 9th when they hand them out to everyone.
    just a theory. :)
  9. EXCellR8

    EXCellR8 The Conservative Posts: 2,278

    I'm starting to wonder if there will be a 7990 with 4-6GB GDDR5... perhaps not when the series is launched but possibly in the months that follow. There's definitely no reason that two 28nm chips on one PCB couldn't happen, but then again that would probably be the biggest flaunt card ever lol
  10. dividebyzero

    dividebyzero trainee n00b Topic Starter Posts: 4,699   +586

    My thoughts exactly. I've never seen a new process that didn't have variability at its inception. Which was my reasoning against an across the board core speed hike.
    You're going to be limited by granularity stemming from the design, so your salvage parts (or otherwise) are limited to shader cluster disablement and vRAM size/speed. AFAIK:
    HD 7970...6Gbps GDDR5....32 ROP, 128 TMU, 2048 shaders
    HD 7950...5 or 6Gbps.........32 ROP, 112 TMU, 1792 shaders
    HD 7890...5Gbps................32 ROP, 96 TMU, 1536 shaders

    Doesn't leave too many gaps for further SKUs. You could, I suppose, crank core speed, but that merely encroaches on the next card's performance level - as well as power circuitry cost.
    Possible, although I would have thought that premier review sites such as Hardware.fr, PCGH and ComputerBase probably nailed it - you could have a hundred card reviews out and they would still be some of the most diligent and professional around. Of the reviews yet to come, I would sincerely doubt that many - if any - actually tested tessellation factors in MS's DX11 SDK the way those sites did, and their power usage figures are usually spot on. AMD's own PowerTune I think is set at 210 watts +/-20%, which tallies pretty closely with published power consumption.
    I'm pretty sure there will be a 7990 (with 6GB GDDR5). Technically it wouldn't be any harder an achievement than producing a 6990 - same thermals to a point, same power consumption. Scuttlebutt has the 7990 clocked at 850 core/ 5000 memory (as opposed to 925/5500 for the 7970) which would ease the power budget/heat output, and allow AMD to maybe sneak into the PCI-E 300 watt specification, which was the 6990's main failing point -both from a PR perspective and OEM's not using the model in top-tier builds (whereas the GTX 590 featured reasonably prominently)
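If PowerTune really is set at 210W with a +/-20% swing (the figure quoted above - my reading of it as a symmetric band is an assumption), the allowed envelope would be:

```python
# PowerTune envelope sketch: 210 W base +/- 20%, per the figure quoted above.
base_tdp = 210     # watts (quoted, not measured)
tolerance = 20     # percent swing either way

low = base_tdp * (100 - tolerance) // 100
high = base_tdp * (100 + tolerance) // 100
print(low, high)   # 168 W floor, 252 W ceiling
```

A 252W ceiling would tally with the ~277W draw only if some of that board power sits outside PowerTune's GPU budget, so treat this as a rough bracket rather than a measured limit.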
  11. red1776

    red1776 Omnipotent Ruler of the Universe Posts: 5,846   +61

  12. dividebyzero

    dividebyzero trainee n00b Topic Starter Posts: 4,699   +586

    I noted that earlier (see pic #2 in the first post). Looked like either a screw-up (unlikely) or deliberately bogus info to flush out any loose lips/give Nvidia something to think about (more likely), since no SKU/detail (the "N/A" notation) was attached to the info...and it would imply that AMD were having problems with the arch and had to go to market with a salvage (2048 shader) part...basically telling Nvidia that they were suffering the same fate as Fermi (GF100).

    Would be interesting if Tahiti was designed for 2304 shaders...both from a performance perspective and a prospective yield analysis, which would imply:
    HD 7980(?) : zero production
    HD 7970 (1st salvage part) : 11.1% shader loss
    HD 7950 (2nd salvage part): 22.2% shader loss
    HD 7890 (3rd salvage part): 33.3% shader loss
    Wouldn't be something AMD or TSMC would want to publicize, given that a "broken" Fermi yielded a 6.25% loss (GTX 480), 12.5% loss (GTX 470) and 31.25% loss (GTX 465) from the original 512 shader part.
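Expressed as disabled shaders over the full die, the comparison is straightforward (the 2304-shader Tahiti is, as above, purely hypothetical; the Fermi numbers use GF100's 512-shader full die):

```python
def shader_loss_pct(full, enabled):
    """Disabled shaders as a percentage of the full die's shader count."""
    return (full - enabled) / full * 100

# Hypothetical 2304-shader Tahiti (speculation from the thread, not confirmed)
for enabled in (2048, 1792, 1536):
    print(round(shader_loss_pct(2304, enabled), 1))  # 11.1, 22.2, 33.3

# GF100 Fermi (512 shaders on the full die)
for enabled in (480, 448, 352):
    print(round(shader_loss_pct(512, enabled), 2))   # 6.25, 12.5, 31.25
```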
  13. red1776

    red1776 Omnipotent Ruler of the Universe Posts: 5,846   +61

    ooorrrr, they are all 2304 SPU, and just releasing the 2048's right now...LOL
     
  14. dividebyzero

    dividebyzero trainee n00b Topic Starter Posts: 4,699   +586

    Riiiiiiiiiiight :rolleyes:

    Kind of makes sense if AMD have sacked their engineers and are just going to unlock the extra shaders and call it HD 8000.

    Although if they had that much power on tap, seems rather strange that they wouldn't halve or quarter the GPU and use it instead of rebranding lower tier 6000 series....maybe some kind of grand strategy in play? Has anyone actually checked to see if Bulldozer isn't using the same plan? All that extra space might be hiding an extra 4 cores !!!
  15. red1776

    red1776 Omnipotent Ruler of the Universe Posts: 5,846   +61

    :haha:, although it would account for the "killing" of the 600m transistors!:haha:
  16. dividebyzero

    dividebyzero trainee n00b Topic Starter Posts: 4,699   +586

    They weren't killed, John Fruehe and that one other guy at AMD HQ lost some beads off their abacus.

    2304 or 2048 or 1792, I just want it available to buy- no back orders, no price gouging, no "Sorry the Sapphire, Asus and Gigabyte cards are out of stock...how about this nice Diamond brand".
    Got a Z68 to put together (FOR MYSELF!) - need ALL new toys to go with it - been testing 2600K's (4) and 2700K's (2) to get a decent chip. BTW, the NZXT Havik 140 is a helluva cooler (borrowed from a customer's build).
  17. red1776

    red1776 Omnipotent Ruler of the Universe Posts: 5,846   +61


    I saw Logans review, It certainly has the nastiest looking fan ever on it!
  18. dividebyzero

    dividebyzero trainee n00b Topic Starter Posts: 4,699   +586

    Yeah, the fan blades look like one of those ceremonial daggers in a B-grade horror movie...or possibly a regular fan viewed while on acid. Very nice cooler though. Doesn't take up too much real estate, and it's cheap, efficient and easy to fit - a bonus when testing half a dozen CPUs. I'll put the thing under water once it goes together, but the Havik isn't much worse than the Silver Arrow/NH-D14/Phanteks - maybe 1-2 degrees @ 1.4V - and the dual-pass rad with push-pull should be ~10C cooler again.

    EDIT: Tech Report's review in
    (2560x1600)
    Batman:AC +9.38% over GTX 580 (average of previous 9 reviews 19.79%)
    BF3 +19.35% over GTX 580 (average of previous 14 reviews 17.38%)
    Civ 5 +14.29% over GTX 580 (average of previous 3 reviews 14.56%)
    Crysis 2 +24.14% over GTX 580 (average of previous 13 reviews 23.49%)
    TESV +9.43% over GTX 580 (average of previous 5 reviews 19.89%)

    Bench results seem in line with previous reviews - allowing for the reviews that used some very odd methodology or custom IQ.
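Folding TR's numbers into those running averages is just an incremental mean; using Batman:AC as the worked example (nine prior reviews averaging 19.79%, plus TR's 9.38%):

```python
def updated_average(prev_avg, prev_count, new_value):
    """Running mean after folding in one more review's result."""
    return (prev_avg * prev_count + new_value) / (prev_count + 1)

# Batman:AC - nine prior reviews averaging +19.79%, TR's result +9.38%
print(round(updated_average(19.79, 9, 9.38), 2))  # new 10-review average: 18.75
```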
  19. polyzp

    polyzp Newcomer, in training

    Hah, 2 years from now 2GHz won't be too high for a GPU
  20. dividebyzero

    dividebyzero trainee n00b Topic Starter Posts: 4,699   +586


