Nvidia 9800GX2, waste of money

 
MetalX said:
Of course it does. It's a dual GPU card, and should be classified as a dual card solution.
You're right and wrong. A dual card solution would be two processors on two separate PCBs. The X2 is a dual processor solution on a single PCB. That ultimately makes it a single card solution, so it should be classified as a single card, not a dual card. It just doesn't make sense to call it a dual card. You might as well call dual core CPUs two separate CPUs.

All the reviews I've seen have also called this card a single card solution, which is why it's now the world's fastest single card. Also, this card gives no indication that it uses Crossfire to link the two processors, and it acts just like a single card.

Anandtech said:
Even more appealing is the fact that the 3870 X2 will work in all motherboards: CrossFire support is not required. In fact, during our testing it was very easy to forget that we were dealing with a multi-GPU board since we didn't run into any CrossFire scaling or driver issues.

Also, you have to remember that this card runs much cooler than a single 8800GT or HD3870. It consumes a fair bit less power than a GTX or Ultra at idle and only a tad more than the Ultra at load (mainly due to the inferior architecture), and it has a lower price tag than the GTX and Ultra while still providing anywhere from a 5-15% performance increase.

MetalX said:
Thats why it beats every Nvidia card, because none of them are dual-GPU.
7900GX2, 7950GX2. nVidia has had dual processor solutions for a long time. These are not classified as single card solutions, though, because they really do use two processors on two separate PCBs linked together via an SLI bridge. So when the 9800GX2 comes out it won't be compared to single card solutions like the X2, but to dual card solutions.

Also, the cost to make the 9800GX2 will be a fair bit more than that of the X2, so nVidia will have quite a bit of trouble pricing it without losing money on their side. Idle consumption will also be just like dual cards, whereas the X2's idle is far better than two HD3870s in Crossfire.

The fact is that the X2 is simply superior. AMD has done a very good job with this and has taken the performance crown back after over a year. nVidia won't be able to beat it until their next gen cards come out, which I hear have been delayed. Not to mention AMD's next gen cards are also being prepared for release fairly soon.
 
Actually....

sledgus said:
uhatemedoncha, that was a nothing post. We have already established that everything sucks at Crysis, and that is not the issue of debate here at all.

On another note though, I must say that the 3870X2 is DAMN IMPRESSIVE! It wipes the floor with every single nVidia card, and smashes games at 2560 resolution! DAMN AWESOME....

Well, you had stated that it was going to be better at Crysis or something, so I wouldn't go as far as calling it a "nothing post." Also, in real gaming tests the 3870X2 doesn't beat the GTX single or the GTs in SLI. It seems that everyone was basing their tests on game clips and canned benches as opposed to real in-game performance. I'm assuming that's because ATI wanted it that way, so they could try to squeeze out a win for at least a few days before anyone noticed. Unfortunately the guys at [H]ardOCP caught them... and for god's sake READ THE WHOLE ARTICLE BEFORE YOU REPLY TO THIS POST, ALL THE WAY DOWN TO THE EDITOR'S NOTE ON THE LAST PAGE. Thanks...

http://enthusiast.hardocp.com/article.html?art=MTQ1NCwxLCxoZW50aHVzaWFzdA

So yeah, it looks like ATI wipes the floor in benchmark tests, but when ATI is not in control of the testing, things come out a little different. It was a valiant effort by the red team, but they should know it's only going to make it hurt more when the GX2 comes out...
 
Just wanted to add....

Here is the editor's note before you miss it:

(Editor's Note: You will see here today that our evaluation of the gaming performance produced by this video card does not track with some other big sites on the Web, and the simple fact is that those sites did not measure "gaming performance." Those sites measured frames per second performance in canned benchmarks and even some of them went as far as to use cut scenes from games to pull their data from. I have been part of this industry for years now and we are seeing now more than ever where real world gaming performance and gaming "benchmarks" are not coming to the same conclusions. Remember that when we evaluate video cards, we use them exactly the way you would use them. We play games on them and collect our data.

Another thing to think about is this. Do game developers want to provide built in benchmarks that show their games running slow? Or would the game developers rather put a game "benchmark" in that shows their game hauling ***? Do you think that slow benchmarks equal more sales?

The "3dfx way" of evaluating video cards is DEAD. It did have its time and place, but we are beyond that now. Any person using those methods to influence your video card purchase is likely irresponsible in doing so. You might even consider them liable. And I think that is going to come bubbling to the surface more and more as the industry matures. )

<end editor's note>
I just want to say this before people jump down my throat: I am not saying that anyone took bribes or anything like that. What sometimes happens is that when a reviewer is handed an engineering sample, they are locked into an agreement to only run certain tests, so only the strengths show up in the results. ATI/AMD did it with the Phenoms on their ghost release because they knew they didn't stack up to the C2D quads. Basically you get handed pre-fabricated clips from a game with the player movement already programmed in, or sometimes it's just a section of the game. These clips keep the review out of the sections of the game that would be difficult for the sample to handle... at least that's one way.
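To make that concrete, here's a rough sketch (in Python, with made-up frame times, not [H]ardOCP's actual tooling) of why an average-FPS number from a canned clip can look fine while the actual gameplay stutters:

```python
# Rough sketch: why an average-FPS number from a canned benchmark can hide
# what real gameplay feels like. The frame times below are hypothetical
# milliseconds from an imaginary capture, not anyone's real data.

frame_times_ms = [16, 17, 16, 18, 45, 16, 17, 60, 16, 18, 17, 55, 16, 17, 16, 19]

def avg_fps(times_ms):
    # Average FPS = total frames / total seconds
    return len(times_ms) / (sum(times_ms) / 1000.0)

def worst_fps(times_ms, fraction=0.05):
    # Look only at the slowest few frames (the worst 5% here) -- the stutters
    # you actually notice while playing, which an average smooths over.
    count = max(1, int(len(times_ms) * fraction))
    worst = sorted(times_ms, reverse=True)[:count]
    return 1000.0 / (sum(worst) / len(worst))

print("Average FPS: %.1f" % avg_fps(frame_times_ms))      # looks healthy (~42)
print("Worst-case FPS: %.1f" % worst_fps(frame_times_ms)) # tells the real story (~17)
```

Same data, two very different impressions, which is basically the editor's point about playing the game instead of running a scripted clip.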
 
Is this a ranting thread? I'm currently running an 8800 Ultra from XFX, which was the highest rated card you could get when I bought it. It was actually downclocked at stock because of heat issues, so now I'll probably never find another one.

Anyway... Crysis runs fine on my PC. The performance-to-heat ratio for the high end cards is already at its limit at this point. I doubt we'll see cards that can run Crysis at 80+ frame rates anytime soon. I actually contemplated buying one of those boards that support three PCIe x16 cards and just getting two more cards, but even Crysis isn't a fun enough game to justify that.
 
I never meant to start a debate (or rant), just to share info. I thought that's what we do here. I would never own either of these cards myself, believing that a single GPU is more viable than these. I do not need the biggest E-P (rhymes with venus) that money can buy, just what works for me.
I was also under the impression that there is no card that exists (yet) that can handle Crysis or FSX to its full potential.
 
HardOCP's chief reviewer Kyle is one of the most biased reviewers on the web and is committed to bashing Intel and ATI. Disregard his reviews and look at other tech reviews from Anandtech, X-bit Labs, Guru3D and others.
 
_FAKE_ said:
You might as well call dual core CPU's two separate CPU's.
Not the same thing. This card has two GPUs and two sets of memory for those GPUs. This memory is not directly shared. The card is actually two cards in one, except instead of using two separate PCBs and an SLI bridge, it places all the components of the two cards on one PCB, and the PCB acts in place of the SLI bridge. This card is like those "pseudo" dual core CPUs that are actually two single cores stuffed into one package, with no shared cache or anything.

Therefore I conclude it is a dual card solution, since it runs in SLI mode (EDIT: I guess that would be "Crossfire mode" ;)).

If it were two GPUs with shared memory, and it didn't run in SLI mode (EDIT: Again, Crossfire Mode, don't know why I put SLI :)), I could acknowledge that it was a "dual-core GPU." But it doesn't, as quoted here, on Page 15 of Guru3D's review, right in the first paragraph. So it's not. ;)
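For what it's worth, here's a toy sketch of what that means in practice, assuming the X2 load-balances with alternate frame rendering the way ordinary Crossfire does (a simplification, not AMD's actual driver): each GPU keeps its own copy of everything in its own memory, and frames just get handed out in turn.

```python
# Toy model: two GPUs on one PCB, each with its OWN 512 MB of memory.
# Assets are duplicated rather than pooled, and frames alternate between
# the GPUs -- exactly like two cards in Crossfire, just on one board.

class GPU:
    def __init__(self, name, memory_mb=512):
        self.name = name
        self.memory_mb = memory_mb     # private to this GPU, not shared
        self.resident_assets = set()   # each GPU needs its own copy

    def upload(self, asset):
        self.resident_assets.add(asset)

    def render(self, frame_number):
        return "%s rendered frame %d" % (self.name, frame_number)

gpus = [GPU("GPU0"), GPU("GPU1")]

# Every texture/model gets uploaded to BOTH GPUs: 2 x 512 MB on the box,
# but still only ~512 MB of unique data.
for asset in ["level_textures", "character_models"]:
    for gpu in gpus:
        gpu.upload(asset)

# Alternate frame rendering: frames round-robin between the two GPUs.
for frame in range(4):
    print(gpus[frame % len(gpus)].render(frame))
```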
 
One thing I was wondering: if they had shared memory plus the two GPUs on one PCB, would that make the card about as powerful as the hardware in the PS3? Or would the multiple-SPE-based architecture of the Cell CPU make a difference in overall performance?
 
sledgus said:
Could you really say that the PS3's hardware is more powerful than PC hardware?????



The graphics processor in the PS3 is truly remarkable. If I remember right, it has 3 CPUs running at 3.2GHz and some insane graphics hardware, but I'm too lazy to look it up, so... yeah, it's good stuff.
 
I like the reviews here at TS: totally unbiased, not saying "to buy" or "not to buy" certain products, just stating hardware specs and showing some comparable results for similar products. It basically lets you make the best decision for yourself.

I'm sure you guys remember the 7950GX2; this looks like the same thing, with higher clock rates, the same amount of memory, and DX10 support.
 
supersmashbrada said:
The graphics processor in the PS3 is truly remarkable. If I remember right, it has 3 CPUs running at 3.2GHz and some insane graphics hardware, but I'm too lazy to look it up, so... yeah, it's good stuff.


The brain of the thing is pretty impressive, but graphics-power-wise? It couldn't possibly be better than an 8800 Ultra.
 
The RSX 'Reality Synthesizer' graphics processing unit is a graphics chip design co-developed by NVIDIA and Sony for the PlayStation 3 computer console.


Specifications
550 MHz G71 based GPU on 90 nm process [1][2]
300+ million transistors (600 million with Cell CPU) [3]
Multi-way programmable parallel floating-point shader pipelines[4]
Independent pixel/vertex shader architecture
24 parallel pixel pipelines
5 ALU operations per pipeline, per cycle (2 vector4 or 2 scalar/dual/co-issue and fog ALU)
27 FLOPS per pipeline, per cycle
8 parallel vertex pipelines
2 ALU operations per pipeline, per cycle (1 vector4 and 1 scalar, dual issue)
10 FLOPS per pipeline, per cycle
Maximum vertex count: 1.2 billion vertices per second
Maximum polygon count: 450 million polygons per second
Maximum shader operations: 100 billion shader operations per second
Announced: 1.8 TFLOPS (trillion floating point operations per second) (2 TFLOPS overall performance)[5] [6]
24 texture filtering units (TF) and 8 vertex texture addressing units (TA)
24 filtered samples per clock
Maximum texel fillrate: 13.2 GigaTexels per second (24 textures * 550 MHz)
32 unfiltered texture samples per clock (8 TA x 4 texture samples)
8 Render Output units
Peak pixel fillrate (theoretical): 4.4 Gigapixel per second
Maximum Z sample rate: 8.8 GigaSamples per second (2 Z-samples * 8 ROPs * 550 MHz)
Maximum anti-aliasing sample rate: 8.8 GigaSamples per second (2 AA samples * 8 ROPs * 550 MHz)
Maximum Dot product operations: 51 billion per second [7]
128-bit pixel precision offers rendering of scenes with high dynamic range (HDR)
256 MiB GDDR3 RAM at 700 MHz[8] [9]
128-bit memory bus width
22.4 GB/s read and write bandwidth
Cell FlexIO bus interface
20 GB/s read to the Cell and XDR memory
15 GB/s write to the Cell and XDR memory
Support for OpenGL ES 2.0
Support for S3TC texture compression [1]

Press Releases
Staff at Sony were quoted in PlayStation Magazine saying that the "RSX shares a lot of inner workings with NVIDIA 7800 which is based on G70 architecture." Since the G70 is capable of carrying out 136 shader operations per clock cycle, the RSX was expected to feature the same number of parallel pixel and vertex shader pipelines as the G70, which contains 24 pixel and 8 vertex pipelines. [2]

NVIDIA CEO Jen-Hsun Huang stated during Sony's pre-show press conference at E3 2005 that the RSX would be more powerful than two GeForce 6800 Ultra video cards combined. [2]
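A few of those headline numbers fall straight out of the listed unit counts and clocks. Here's a quick sanity check (just arithmetic on the figures quoted above, not measurements; the per-pipeline ALU counts are NVIDIA's own way of counting):

```python
# Plugging the listed RSX unit counts and clocks into simple peak-rate math.

core_clock_hz  = 550e6   # RSX core clock
mem_clock_hz   = 700e6   # GDDR3 clock (data rate is double, i.e. DDR)
bus_width_bits = 128

texel_fillrate = 24 * core_clock_hz        # 24 texture filtering units
pixel_fillrate = 8 * core_clock_hz         # 8 ROPs
z_sample_rate  = 2 * 8 * core_clock_hz     # 2 Z-samples per ROP
bandwidth_gbs  = mem_clock_hz * 2 * bus_width_bits / 8 / 1e9

# G70-style shader throughput:
# 24 pixel pipes x 5 ALU ops + 8 vertex pipes x 2 ALU ops
shader_ops_per_clock = 24 * 5 + 8 * 2

print("Texel fillrate: %.1f GTexels/s" % (texel_fillrate / 1e9))    # 13.2
print("Pixel fillrate: %.1f GPixels/s" % (pixel_fillrate / 1e9))    # 4.4
print("Z sample rate:  %.1f GSamples/s" % (z_sample_rate / 1e9))    # 8.8
print("Memory bandwidth: %.1f GB/s" % bandwidth_gbs)                # 22.4
print("Shader ops per clock: %d" % shader_ops_per_clock)            # 136
```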
 
supersmashbrada said:
The graphics processor in the PS3 is truly remarkable. If I remember right, it has 3 CPUs running at 3.2GHz and some insane graphics hardware, but I'm too lazy to look it up, so... yeah, it's good stuff.
I'm pretty sure that's the spec for the X360. The PS3 has 1 CPU divided into eight SPEs.
The reason I asked is that the HD 3870 X2 is able to perform 1+ Teraflops of calculations, almost equal to the PS3's 1.8 Teraflops.
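That 1+ teraflop figure is peak-rate marketing math. A back-of-the-envelope sketch, assuming the commonly quoted 320 stream processors per GPU, the X2's 825 MHz core clock, and a multiply-add counted as two FLOPs:

```python
# Rough peak-FLOPS estimate for the HD 3870 X2 under the stated assumptions.

stream_processors_per_gpu = 320
gpus_on_card = 2
core_clock_ghz = 0.825
flops_per_sp_per_clock = 2    # one MADD = multiply + add

x2_gflops = (stream_processors_per_gpu * gpus_on_card *
             flops_per_sp_per_clock * core_clock_ghz)
print("HD 3870 X2 peak: ~%.0f GFLOPS" % x2_gflops)   # ~1056, i.e. just over 1 TFLOP

# Note: the PS3's 1.8 TFLOPS figure quoted above is a combined RSX + Cell
# marketing number, so the two peaks aren't really counted the same way.
```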
 
sledgus said:
...knees, never rendering more than an average of 50 or so frames a second
See, that's your problem. "One man's trash is another's treasure".

50 FPS, by casual gaming standards, is pretty good. :)

Yeah so what if it dips into 30... and maybe it hits 70 sometimes. It's still very playable and some people just consider that 'pretty good'.
 
Rick said:
See, that's your problem. "One man's trash is another's treasure".

50 FPS, by casual gaming standards, is pretty good. :)

Yeah so what if it dips into 30... and maybe it hits 70 sometimes. It's still very playable and some people just consider that 'pretty good'.

One man's trash is a homeless person's dinner lol

Who cares what people think is 'good'? 30 FPS sucks and is not an enjoyable experience. True, it may be the best experience that is currently possible, but it still sucks balls.
 
supersmashbrada said:
The graphics processor in the PS3 is truly remarkable. If I remember right, it has 3 CPUs running at 3.2GHz and some insane graphics hardware, but I'm too lazy to look it up, so... yeah, it's good stuff.

The PS3's graphics chip was built around the G71 core (GeForce 7800 series), so how the hell does it manage to pump out new-generation graphics?

It's a console. As good as it may be 'for a console', it's no PC performer.
 
I don't know what he was saying, but can you please use proper grammar?

Personally, I think it will be better. That is just me, though.
 
mopar man said:
I don't know what he was saying, but can you please use proper grammar?

Personally, I think it will be better. That is just me, though.


lol hey! don't knock my grandma
 