Nvidia promises to ship next-gen Kepler GPU this year (not!)

By Matthew
Aug 3, 2011
  1. Update (8/5): Nvidia has clarified a previous statement regarding availability of its next-gen GPU codenamed Kepler. In a nutshell, Nvidia won't be shipping any final products using the new silicon…

    Read the whole story
  2. ghasmanjr TechSpot Booster Posts: 363   +86

    Just wow. In a few years, we're going to look at a GTX590 like it is a lowly GTX260.
  3. mosu TechSpot Enthusiast Posts: 296

    No chance. Nope, just hype. Nvidia should focus on mobile combined with ARM processors, because they lost the battle on the x86 front. Just my opinion, for now...
  4. New motherboards to support the new graphics card? How many more hundreds of pounds will this cost, just for a basic motherboard and graphics card upgrade that we're told we need by the media and the hype the companies producing the goods shove at us? Why didn't they wait to release Sandy Bridge until the new PCI-E slot was in place, instead of the first release, the subsequent release of the fixed Sandy Bridge boards, and then the Z68 series?

    Well, I don't suppose we should really complain. Look at the x86/64-bit processors we're still using; the tech is getting a little old now. Why no 128/256-bit? And then of course there's good old DDR3 RAM. What happened to cube RAM, or DDR4 at least?
  5. dividebyzero trainee n00b Posts: 4,783   +639

    According to TSMC, the slow ramping of the 28nm process is/was due to low (at least initial) demand for wafers. Of course this could be either true, or a spur for companies to place larger firm orders for wafer starts.

    @mosu
    Ah, where would we be without you. Are you and your industrial-sized jar of K-Y hoping for a call from Abu Dhabi or Tom Seifert?
    Nvidia presently hold ~90% of the pro graphics and HPC GPGPU market...and they've "lost the battle" pfffft.

    AMD are still starting from a long way back. Aside from the architectural issues (integrated cache, ECC memory controllers, anything larger than a 256-bit bus width), their biggest hurdle is going to be adoption of OpenCL, since they seem content to do little more than release an APP SDK every so often and hope people might decide to do something with it.
  6. red1776 Omnipotent Ruler of the Universe Posts: 5,867   +74

    Technology moves fast...get used to it, and you are not having motherboards shoved in your face....unless you work for Foxconn.

    And exactly what advantage would a 128 or 256 bit processor have???

    It's coming within 12 months.

    ...anything else?
  7. ---agissi--- TechSpot Paladin Posts: 2,382   +15

    If you understand computing and counting systems other than our base-10 system, you'd see that 64 bits gives us ungodly big numbers, numbers so big we don't even use them for the most part. That's why you don't see performance increases from 64-bit CPUs in most cases: the numbers being thrown down the (CPU) pipeline usually aren't any bigger than a 32-bit number. It's not like a 16-bit number, where the CPU has to take two passes over a calculation that a 32-bit CPU can knock out in one round.
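    To put rough numbers on that, here is a minimal sketch (the values are just the standard unsigned integer limits; most everyday quantities never get near the 32-bit ceiling):

    ```python
    # Unsigned integer ranges at common word sizes. Most real-world values
    # (loop counters, indices, colour channels) fit easily in 32 bits, which
    # is why 64-bit CPUs rarely speed up individual calculations.
    for bits in (16, 32, 64):
        print(f"{bits}-bit unsigned max: {2**bits - 1:,}")

    # 16-bit unsigned max: 65,535
    # 32-bit unsigned max: 4,294,967,295
    # 64-bit unsigned max: 18,446,744,073,709,551,615
    ```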
  8. red1776 Omnipotent Ruler of the Universe Posts: 5,867   +74

    agissi,
    I wanted him to answer...but yeah :)
  9. dividebyzero trainee n00b Posts: 4,783   +639

    Not gonna happen.

    Guests are here for a good time, not a long time.
  10. MCJeeba Newcomer, in training Posts: 26

    Surprisingly enough, the GTX260 is still a good standard. Considering all Nvidia specializes in is antiquated versions of DirectX (think buying a console for your PC), it isn't surprising.
  11. MrAnderson TechSpot Maniac Posts: 488   +10

    The GPU side of Nvidia has been quiet for a while. It seems like GPUs on the PC are not the rock stars they used to be... you can see why... the games coming out are not really making us go "oh, I cannot wait for the next GPU"... at least consumer-wise... the attention is on the mobile market, where we might see ARM versions of Windows 8, MeeGo and HP WebOS running on Nvidia dual and quad cores, and maybe even ARM-based PCs (dual booting or not) toward the end of 2012 and early to mid 2013 with the newer discrete GPUs??? New download marketplaces and app stores making things exciting again? No... maybe Id Software's new Doom and Quake, just in time for the new GPU powerhouses...
     
  12. It's good to see that the next big bump in GPU tech is coming.
    We saw an almost 2x performance boost going from the GTX 200 series to the GTX 400 series, and from the HD 4000 to the HD 5000 series, and (Nvidia's case aside) that boost also came with better power efficiency.
    So we can expect the same kind of boost from this new GPU tech:
    GTX 580-like performance from a GTX 660 (if the naming scheme continues),
    and enthusiast-series graphics cards performing like today's dual-GPU cards.
    I will grab a card from the sub-high-end series.
    But every good story has a bad part, and in this case we don't have a game we're still waiting to see on max settings, the way Crysis was when it launched.
    Still, it's good for consumers.
  13. I'm pumped for new GPUs like the next geek, but what is really going to take advantage of them? Surely not all the dumbed-down console ports as of late. Oh well, I'll still buy one anyway.
  14. venomblade Newcomer, in training Posts: 69

    If that's the case, I'm eager to see what kind of games would be so demanding that a 590 would become lowly.
  15. Burty117 TechSpot Chancellor Posts: 2,470   +299

    Just in time for Battlefield 3? What were the odds ;)

    Although I've still got a GTX260, I have been struggling to find many games that properly max it out. I can run Crysis and Crysis 2, Company of Heroes, Battlefield: Bad Company 2, all Source Engine games and, well, pretty much everything at full. Sure, I might have to turn the shader quality down a notch on the original Crysis, and the same needs to be done on Battlefield: Bad Company 2, but apart from that it doesn't really struggle too much, usually staying above 30fps, which isn't ideal but still smooth enough to play happily.

    I started out with a 7600GT and then moved to the GTX260. Hopefully a GTX660 will be my next big jump :)
    And after the way the 400 series turned out, I pray the 600 series is a bit more power friendly.
  16. Okay -- so when do we get to play games that look so real that we can practically TASTE THE FOOD on the plate that the protagonist is eating in GTA 12 (or whatever...)

    ?!

    :D
  17. lol..^^

    Nvidia doesn't have an APU... so it will further slip into insignificance on the PC and any other device that is trying to be efficient with energy. Laws of physics, bro...
  18. Project Denver will reinvigorate Nvidia's product lines.
     
  19. dividebyzero trainee n00b Posts: 4,783   +639

    Unlikely IMO.
    A process shrink usually means that you can either:
    1. Keep roughly the same level of performance, but use less power, or
    2. Pack more transistors into the same die package, and possibly increase clockspeed.

    One of these options generates techgasms across the net, the other gives HTPC builders more range to choose from.

    Bear in mind that Kepler is another compute card, so it will be packing a lot of transistors that have little relevance to gaming, as will AMD's new GCN architecture. I think you can expect both camps to push as hard (clocks/wattage) as yields and commercially viable stock cooling allow. Some die-size history for context (with a quick density calculation after the table):

    GPU (cards)                  Transistors     Die size       Process
    GF110/GF100 (GTX 5xx/4xx)    3000 million    520-529mm²     40nm
    GT200 (GTX 2xx)              1400 million    576mm²         55nm
    G80 (8800GTX/Ultra)           681 million    484mm²         90nm
    G71 (78xx/79xx)               278 million    196mm²         90nm

    And AMD for comparison:

    Cayman (HD 69xx)             2640 million    389mm²         40nm
    Cypress (HD 58xx/5970)       2150 million    334mm²         40nm
    RV770/790 (HD 48xx)           956 million    260/282mm²     55nm
    R680 (HD 38xx)                666 million    192mm²         55nm
    and the exception that proves the rule...
    R600 (HD 2900)                700 million    420mm²         65nm
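    For a rough sense of what the node shrinks alone buy, here is a quick transistor-density calculation from a few of the figures above (a sketch only; the GF110 area uses the midpoint of the 520-529mm² range):

    ```python
    # (transistors in millions, die area in mm^2) taken from the table above
    gpus = {
        "GF110/GF100 (40nm)": (3000, 525),   # midpoint of 520-529mm^2
        "GT200 (55nm)":       (1400, 576),
        "G80 (90nm)":         (681, 484),
        "Cayman (40nm)":      (2640, 389),
        "RV770 (55nm)":       (956, 260),
    }

    # Density in millions of transistors per mm^2 -- the 40nm parts pack
    # roughly 2-4x what the 55nm/90nm parts managed.
    for name, (mtrans, area) in gpus.items():
        print(f"{name:20s} {mtrans / area:5.2f} M/mm^2")
    ```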
  20. ET3D TechSpot Paladin Posts: 948   +29

    dividebyzero, your two options don't conflict at all. There are always cards in the new generation which provide the same performance as the previous one for a lower price and less power. The top end has a lot more performance and uses a little more power than the previous gen.

    Since the slide in this news item is specifically about performance per watt, I think NVIDIA is taking this matter seriously. So there's a chance the 600 series will not raise power requirements any further, just provide more performance.
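    A back-of-the-envelope way to look at that, with illustrative numbers only (the 2x gain and 250W figure are assumptions, not anything from the slide):

    ```python
    # If perf-per-watt really improves by ~2x, the same board power budget
    # buys roughly twice the throughput -- or the same throughput at half
    # the power. Which way each SKU leans is a product decision.
    fermi_tdp_w = 250            # roughly GTX 570/580 class (assumed)
    perf_per_watt_gain = 2.0     # assumed, purely for illustration

    print(f"Same {fermi_tdp_w}W budget -> ~{perf_per_watt_gain:.1f}x performance")
    print(f"Same performance -> ~{fermi_tdp_w / perf_per_watt_gain:.0f}W board power")
    ```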
  21. fpsgamerJR62 Newcomer, in training Posts: 489

    My wishes for Kepler: bigger memory buffers for multi-monitor gaming and games with hi-res textures, plus, with the 28nm die shrink, hopefully more performance but less heat and noise.
  22. dividebyzero trainee n00b Posts: 4,783   +639

    The "top-end" was/is the primary thrust of my comments. Note that I used the top GPU for each arch/process node in my comparisons.
    I think it would be a given that cards that could be named GT 630, GTS 650, HD 7650 et al. would be more power efficient; a quick look at the number of (relatively) competent cards that utilise <25W should be proof of that.


    Neither AMD nor Nvidia would be looking at raising the power requirement, I would think (not that I stated anything to the contrary in any case), since they are already uncomfortably close to the 300W PCI-SIG spec for enthusiast-class graphics cards (dual-GPU cards excepted).
    No PCI-SIG validation = no insurance for OEMs.
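    For anyone wondering where the 300W figure comes from, it is the sum of what each power source is allowed to supply. A quick sketch, using the commonly cited connector limits (75W from the x16 slot, 75W per 6-pin plug, 150W per 8-pin plug):

    ```python
    # PCI-SIG board power limits by connector loadout.
    SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

    def board_power_limit(six_pins: int, eight_pins: int) -> int:
        """Maximum sanctioned board power for a given set of PCIe power plugs."""
        return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

    print(board_power_limit(1, 1))   # 6-pin + 8-pin -> 300W (GTX 580 territory)
    print(board_power_limit(2, 0))   # dual 6-pin    -> 225W
    print(board_power_limit(0, 2))   # dual 8-pin    -> 375W, past the 300W spec
    ```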

    If you re-read my post, you'll note that I think TDP will not decrease for upper-tier (above mainstream) cards. The HD 79xx will almost certainly be a larger and more complex die integrating compute functions, and will, by extrapolation, require a hefty power input... I most certainly wouldn't see Nvidia throwing away all their Fermi R&D and drastically altering their compute strategy.

    What will probably eventuate is an evolution of hardware- and software-based power limiters, further muddying any attempts to pin down accurate independent power-usage figures.
  23. I agree with you to a point, especially given that console ports are inherently hampering the code paths of their PC counterparts, but my situation is one which begs for a better card...

    I have a Dell Studio XPS 8100 and the spec was great until I was given (yes, given!) a brand new Dell 3007 monitor capable of running at 2560x1600. This meant that my hitherto powerful GTX570 was now being dragged into sub-40 framerates, even with an i7-870, SSD and 8GB RAM in support.
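    For scale, 2560x1600 is roughly twice the pixels of a common 1920x1080 panel, which accounts for most of that framerate drop (the comparison resolution here is an assumption, picked just for illustration):

    ```python
    # Pixels per frame at the old vs. new resolution.
    old_w, old_h = 1920, 1080     # assumed previous panel, for comparison
    new_w, new_h = 2560, 1600     # Dell 3007

    ratio = (new_w * new_h) / (old_w * old_h)
    print(f"{new_w * new_h:,} vs {old_w * old_h:,} pixels -> {ratio:.2f}x the shading work")
    # 4,096,000 vs 2,073,600 pixels -> 1.98x
    ```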

    The motherboard does not have a separate PCIe x16 slot and so I am left with the dilemma of changing an entire, highly specc'd machine just for the sake of additional GPU horsepower. The very latest CPU is faster than mine, granted, but not by a margin sufficient to warrant the jump.

    So, I will wait, and grab a next gen card so that I can continue to enjoy the rest of the machine for a little while longer. I will probably ensure that the next PC has two PCIe x16 slots, and use the next card along with another one, for SLI, meaning I get double the GPU power for the price of one (now much cheaper) next gen card at the time of upgrading.


    Best regards all


    Neil
  23. dividebyzero trainee n00b Posts: 4,783   +639

    Just thought I'd post this from VR-Zone

    Although the chart and the specs don't necessarily tally if the vertical axis is measuring performance: if the GK104 (GTX 560 Ti successor) is truly 250W/384-bit, then I'd be sceptical that it only offers the same performance as an OC'ed 560 Ti. Likewise for the dual version of the same GPU. If the vertical axis is just showing the top-to-bottom nature of the product stack, then I'd assume there will be salvage parts filling in the gaps.

    A couple of other observations:
    If the GK104 is 250 watts, then I doubt that a dual GK104 is going to tip the scales at much less than 400W... and these are the "small" GPUs. It's also probably a given that the GK112, with a 512-bit memory bus, is going to be about as close to the PCI-E 300W limit as is technically possible.
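    If those bus widths are right, the bandwidth maths is straightforward. A sketch, assuming a 5 Gbps effective GDDR5 data rate (my assumption, not something from the VR-Zone chart):

    ```python
    # Peak memory bandwidth = (bus width in bytes) x effective data rate.
    def bandwidth_gb_s(bus_width_bits: int, effective_gbps: float = 5.0) -> float:
        return bus_width_bits / 8 * effective_gbps

    for bits in (256, 384, 512):
        print(f"{bits}-bit bus: ~{bandwidth_gb_s(bits):.0f} GB/s")
    # 256-bit: ~160 GB/s, 384-bit: ~240 GB/s, 512-bit: ~320 GB/s
    ```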

