1900XT vs. 7900GT

Status
Not open for further replies.

LipsOfVenom

Is the Radeon better than the 7900GT? I can see it has 512 MB of RAM vs. NVIDIA's 256 MB, but the pixel shader processors throw me off. The X1900XT has 16 pipes, but they list it as 48? I know all the statistics, but does anyone have first-hand experience with these two cards?
 
Feature-wise, the X1900 is a no-brainer. It can do SM3.0 and FP-blend HDR, and it can run HDR and AA at the same time; the NVIDIA card cannot combine the two.

ATI took a forward-looking approach with the X1K series, betting that game performance would be limited more by shader performance than by texturing. Modern titles (Oblivion, Half-Life 2, FEAR, etc.) bear this out.

The thing that's confusing you is the distinction between pixel pipelines (for texturing) and pixel shader units. The X1900XT has 16 pixel pipelines for texturing but 48 pixel shaders, which dedicates more of the GPU's transistors to shader muscle aimed at modern DirectX games.

The 7900 series, on the other hand, has 24 pixel pipelines and 24 pixel shaders. That gives it a higher texture fillrate, but it comes up short in shader muscle compared to the X1900XT.
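To put rough numbers on that trade-off, here's a back-of-the-envelope sketch. The core clocks are approximate stock values (about 625 MHz for the X1900XT, about 450 MHz for the 7900GT), and "one texel / one shader op per unit per clock" is a deliberate simplification, not how the real pipelines work:

```python
# Back-of-the-envelope comparison of texture fillrate vs. shader throughput.
# Core clocks are approximate stock values; one-op-per-clock is a simplification.

def theoretical_rates(name, core_mhz, texture_pipes, shader_units):
    """Return (name, texel fillrate in Mtexels/s, shader throughput in Mops/s),
    assuming each unit retires one texel / one shader op per clock."""
    fillrate = core_mhz * texture_pipes
    shader_ops = core_mhz * shader_units
    return name, fillrate, shader_ops

cards = [
    theoretical_rates("X1900XT", 625, 16, 48),
    theoretical_rates("7900GT", 450, 24, 24),
]

for name, fill, ops in cards:
    print(f"{name}: {fill} Mtexels/s fillrate, {ops} M shader ops/s")
```

Even with the crude assumptions, the shape of it holds: the two cards land in the same ballpark on raw texture fillrate, while the X1900XT has roughly three times the theoretical shader throughput.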

So think of the pixels on screen in a game: how many are raw texturing, and how many are the result of shader code? Modern games use lots of shaders, so a bottleneck in shader performance (which most developers agree is where the balance is shifting, and in many titles already has) can prevent a video card from reaching its fillrate potential: stalls in shader code hold up the framerate.
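One way to picture that stall: the texture and shader stages work on a frame together, and whichever side finishes last sets the frame time. A toy model (all workloads and throughput numbers below are invented purely for illustration):

```python
# Toy bottleneck model: the frame is done only when the slower stage is done.
# Workloads and per-millisecond throughputs are invented for illustration.

def frame_time_ms(texels, shader_ops, texel_rate, shader_rate):
    """Rates are in units of work per millisecond; the slower stage
    sets the overall frame time."""
    texture_ms = texels / texel_rate
    shader_ms = shader_ops / shader_rate
    return max(texture_ms, shader_ms)

# Same hypothetical card (big fillrate, modest shader throughput),
# texture-heavy older frame vs. shader-heavy newer frame:
old_game = frame_time_ms(texels=4_000_000, shader_ops=1_000_000,
                         texel_rate=1_000_000, shader_rate=500_000)
new_game = frame_time_ms(texels=4_000_000, shader_ops=8_000_000,
                         texel_rate=1_000_000, shader_rate=500_000)

print(old_game)  # 4.0 ms: texture-bound, fillrate fully used
print(new_game)  # 16.0 ms: shader-bound, fillrate sits idle
```

In the second case the card's fillrate never gets exercised; the shader units are the wall, which is exactly the shift described above.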

In all honesty though, regardless of raw hardware muscle, it still comes down to developer loyalty and bias, and this gets even more pronounced going forward with SM 3.0. You can easily write shader code that favors NVIDIA hardware and performs less than stellar on ATI hardware, or vice versa: code that takes advantage of ATI hardware and performs less than stellar on NVIDIA hardware. One example is how a particular piece of shader code references texture data.

3DMark06 is a shining example of how a 2-4x advantage in shader horsepower can be negated by IHV-specific optimized code: code that plays to the weaker hardware's particular tricks and strengths while penalizing the stronger hardware for its own. Most developers don't have time to write shaders tailored to the strengths of both platforms, so they usually pick what they know, or whoever is funding them. So 48 pixel shaders vs. 24 can end up "breaking even" if a game is coded to fully exploit one IHV's shader strengths and ignore the other's.
 
I agree completely with Sharkfood.

The X1900XT is the best choice! Just get a good, silent cooler for it, and you can overclock it without any problems...
 
Thanks for the post, LipsOfVenom. I was wondering the same thing. And very informative, Sharkfood, thanks for the post.


sellmesanity said:
I simply agree with the guys above just because I have experienced the horror of the 7900gt.

What horror? Please explain...
 