TechSpot

Crysis and Metro 2033

By 12johnni
Dec 31, 2010
  1. Well, you people know how Metro 2033 has been called the Crysis of the modern day. Both can easily eat a whole gigabyte of video RAM, and both bring the average system to its knees. However, for its time, Metro 2033 isn't as bad. A CrossFire pair of Radeon HD 5970s (four GPUs in total) or a tri-SLI GTX 580 setup can eat these games alive at any resolution you care to mention. Hope you have 4GB Radeons, though. And I wish there were 2GB GeForce varieties — the one let-down compared to AMD. Anyway, not something that a sane person would buy, but nonetheless possible.

    Even I, crazy as I am about graphics and GPUs, would be stretched to buy even two
    GPUs because of the money, the power consumption, and the realisation of how much I would actually be spending just to game instead of saving up for a car. I'm settling on a GTX 580 from the American EVGA site - not the Australian one - they add $220 onto the price even with equal exchange rates. And I don't know where that X58 SLI Micro motherboard has gone; I was really thinking of building a system around it.

    But anyway, back to the point. What was the most powerful GPU in 2007 when Crysis was released? I can't find out from Google. My point is that I'm pretty sure 2- or 3-way multi-GPU setups of that era wouldn't get close, or might even die from lack of memory, especially with 8xAA (I like multisampling). How or why did they make this game? How would they even test this game on the highest settings before 2007? Did they have some sort of supercomputer that emulated powerful GPUs? I am still baffled. Does anyone remember back to 2007, before I started researching this stuff? What would have come closest to handling this game with multisampling? What was the biggest memory size available?

    Any help would be appreciated. I'm dying to figure this out.

    Thanks

    12johnni
     
  2. EXCellR8

    EXCellR8 The Conservative Posts: 1,835

    Unless you're playing across multiple 1080p screens you won't need gobs of power to run either game. A single GTX 460 or 5850/5870 will run the games just fine on reasonable DX10/11 settings coupled with a decently powerful CPU.

    As a side note I would like to point out that 2033 was one of the worst games I've ever played...
     
  3. klepto12

    klepto12 TechSpot Paladin Posts: 1,115   +9

    I think what you are looking for is the 8800 Ultra, which was the fastest card three years ago.
     
  4. red1776

    red1776 Omnipotent Ruler of the Universe Posts: 5,219   +157

    I think the 3870 and the 3870 X2 were its counterparts from ATI.
     
  5. dividebyzero

    dividebyzero trainee n00b Posts: 4,891   +1,258

    Crysis' highest in-game setting is 4xAA. Gunning for 8x was an Nvidia driver option - a rather pointless one in the case of this game.
    From memory, I originally got around mid-30s fps from 8800 Ultras in SLI in DX10 @ very high settings (max i.q.) and 2xFSAA. Adding a third card (which I had briefly) added about 50% to the average fps. On paper those are quite playable numbers - in reality there was too much stuttering to make the experience worthwhile - and this was on a Q6700/4GB DDR2/680i SLI system, which was no slouch by any means.

    Crysis, like any game or benchmark, doesn't have to be run at max image quality during development - at least not at playable framerates. The game engine just needs to allow for the textures and triangles. These can be tested at one frame per second (for example) if need be; in general, antialiasing, filtering, motion blur, ambient occlusion etc. are post-processing techniques applied during the rendering of the frames.
    Details of CryEngine 2's (Crysis) development would have been presented at SIGGRAPH 2007, if you can find any viable links.
    As an example of game engine rendering evolution (the adding of more complex graphical details etc.), Unigine's dev log is probably an ideal starting point (since it is ongoing) - you'll note that the process is concerned with programming for better graphical detail and paring away redundant processes within the framework of the chosen APIs (Direct3D, DirectCompute, OpenGL etc.). The business of achieving frames per second is left to graphics processor architects and driver teams.
     
