ASUS Immensity Concept Motherboard Pictures - Lucid and 5770


Archean

Asus Immensity Concept

Now if everything works out fine, this could be the first real IG board you can use not only for general computing but for 3D apps/games alike ... now that would be a fantastic little surprise.
 
I've got my doubts with this one. The price is going to be astronomical - probably a lot more than buying a mainstream/enthusiast board and a separate HD 5770.
The grouping of CPU socket, HD 5770 and discrete graphics card looks like it's going to need some dedicated airflow over that area.
Biggest drawback for me is the Hydra chip - it still hasn't been demonstrated that Lucid's multi-GPU solution scales anywhere close to Crossfire or SLI. Its first outing was a bit of a fizzer, to say the least. If the drivers improve and Lucid can update to account for new games in a timely fashion, then all good. The other problem I can see is that someone paying a hefty chunk of cash for an enthusiast board is likely going to be adding an enthusiast-grade graphics card (or two) - will the HD 5770 add enough performance to justify its inclusion?
Lastly, graphics cards (for enthusiasts) become outdated faster than deli counter food. The HD 5770 is already old news and will likely be supplanted by a Southern Islands card within six months. Kudos to Asus for the concept, but I can't see it being much more than an oddity if it comes to market unless you could make the GPU component socketed like a CPU and thus upgradeable (extremely unlikely).
 
What DBZ said; there are two things here that need a lot more refinement: the cooling of the onboard GPU, and the Hydra chip's performance, which is still alpha-release grade.
 
I have more doubts about the performance/stability of a Hydra-based solution.

I like DBZ's idea, by the way: a socket-based GPU option. Now that could be a really innovative idea, and it should work for at least the generation of graphics processors in use at any given time. To your point about enthusiasts needing newer, faster graphics cards all the time, I think it's more of a software/drivers issue than a hardware issue, because the hardware side can be brought up to a level which makes it possible to utilize onboard and discrete GPUs in combination.

For example, nVidia's SLI performance is superior compared with ATI's Crossfire, whereas in many cases a single ATI graphics card performs at a higher level than nVidia's. They have improved a lot in a year or so, but they still have some way to go to catch up with nVidia in this area. With that in mind, I believe an onboard graphics processor coupled with a Crossfire solution may present one hell of a challenge for ATI's driver development team.
 
I don't know; I was recently reading an article (sorry, I forgot the site, but I will try to find it for you) about GTX 480 SLI vs 5870 CF, and it supported my argument.
 
Probably game dependent. The GTX 4xx series seems to lend itself very well to SLI scaling - much better than most earlier cards:
Metro 2033 for example (this game favours nVidia single card though)
BF:BC2 (pretty even, the game slightly favours AMD cards in single GPU)
AvP (Single card AMD, multi card nVidia)
DIRT 2 (Single card AMD, multi card nVidia)
(all at 1920x1080, GTX470 and HD5850)
And Tom's SLI / CF review (consecutive pages) at 2560x1600 (GTX480 and HD 5870)
Just Cause 2 (SLI scaling 71%, CF scaling 23%)
Stalker:CoP (SLI scaling 89%, CF scaling 56%)
Crysis (SLI scaling 87%, CF scaling: don't worry about it)
CoD:MW2 (SLI scaling 80%, CF scaling 79%)
DIRT 2 (SLI scaling 88%, CF scaling 62%)

And a very comprehensive Hexus comparison ( GTX 470 and HD 5850)
The Cliff's notes version...

Game (resolution)              SLI scaling    CF scaling
BF:BC2 (1920)                      57%            72%
BF:BC2 (2560)                      70%            75%
Crysis Warhead (1920)              87%            42%
Crysis Warhead (2560)             -10%           -51%
Far Cry 2 (1920)                   70%            80%
Far Cry 2 (2560)                   89%            76%
Far Cry 2 (2560, 8xAA)             90%            67%
HAWX (1920)                        82%            51%
HAWX (2560)                        95%            87%
HAWX (2560, 8xAA)                  88%            49%
DIRT 2 (2560, 8xAA)                56%            80%

Pick the bones out of that.
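For anyone wondering how those scaling figures are usually worked out, here is a rough sketch of the arithmetic (the framerates below are made-up placeholders, not numbers from the Hexus or Tom's reviews):

# Rough sketch of how a multi-GPU "scaling %" figure is normally derived:
# scaling = (dual-card fps / single-card fps - 1) * 100
# The fps values are invented placeholders for illustration only.

def scaling_percent(single_fps, dual_fps):
    """Extra performance the second card adds, as a percentage."""
    return (dual_fps / single_fps - 1.0) * 100.0

examples = {
    "Example game A (1920x1080)": (45.0, 77.0),   # (single-card fps, dual-card fps)
    "Example game B (2560x1600)": (30.0, 52.5),
    "Example game C (2560x1600)": (28.0, 25.0),   # negative scaling, like Warhead above
}

for game, (single, dual) in examples.items():
    print(f"{game}: {scaling_percent(single, dual):+.0f}% scaling")

So a figure like "CF scaling 23%" just means the second card added 23% on top of the single-card framerate in that particular test.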
 
That doesn't surprise me at all, D; I'm including the 4800 and the 200 series in there. Still building a lot of those systems, although their pricing tells me that the 4800s won't be around much longer.
 
That is great info put together in one place for future reference, DBZ! Kudos for that.

Best thing is, you saved me from digging around and finding all this info as well :D

And red, depending upon the type of usage etc., I think the 4000 series still holds value at some price points.
 
That doesn't surprise me at all, D; I'm including the 4800 and the 200 series in there. Still building a lot of those systems, although their pricing tells me that the 4800s won't be around much longer.

The HD 4850 and 4890 are disappearing fast here too. I think once the benchmarks for the HD 5750, 5770 and 5830 came out, a lot of people suddenly realized that they could do without DX11 for a while, save a lot of cash and game quite well at DX9 and 10. For a brief time between the launch of the 5750/5770 and the launch of the 5830, the price of a 4890 here was 15-20% less than the reference 5770, but at that price they disappeared rather quickly.
 
It's a side note, but suppose AMD can get some good results from their upcoming APUs and, a few years down the road, Intel also follows suit (as expected, it too will eventually be able to push its IG performance to a much more respectable level). Add to that the fact that, in the emerging HPC market, Intel's derivative(s) of its Larrabee project are being viewed/received in a much more positive light than nVidia's offering.

I think in the longer run, nVidia's very existence as a major player may be at risk?
 
I think in the longer run, nVidia's very existence as a major player may be at risk?

Geez, I hope not, Arch. I think they are in for quite a sobering period of introspection (not resting on their laurels), and it could get worse if 'Southern Islands', or HD 6xxx, or whatever they are calling it (due out this fall) is more than a minor refresh. If it arrives before Nvidia can get a full line of mainstream Fermi cards out there, it may get painful for them, and for consumers for that matter. But in the long run I don't think Nvidia is going to lose its place in the market, fiscal downgrading and all. I profess the preceding to be nothing more than speculation on my part :)
 
Which is why nVidia are concentrating more on system on a chip (SoC), Tegra and ARM. Also, the compute (HPC) market will be there for them until there is a competitor for CUDA, hence Fermi's original purpose as a compute card (ECC memory, caching and DP performance) and the use of 6,500 C2050s (GTX 470-class compute cards) in Nebulae (#2 supercomputer) and Mole 8.5 (#19 supercomputer).
Factor in the S2050 and S2070 (note the pricing) and the fact that Larrabee is no earlier than 2015 and the picture is somewhat less clear.
Discrete graphics cards aren't going away in a hurry. An APU won't provide the horsepower at CPU power levels for the foreseeable future to handle gaming at 2010 levels, and from here on in it's multi-display gaming, multi-monitor 3D gaming, superHD (7680x4320), and a host of other graphics-intensive tech, which possibly includes raytracing as something more than a demo.
I think nVidia will diversify and probably move away from the conventional graphics market - I think I stated as much around the beginning of the year - and they will probably/possibly downsize as a result. A lot will depend on AMD's R&D budget and how much of their efforts are going into the fight against Intel.
 

I was speaking in regards to their place in the gaming market. I hope they do not bow out or make it a low priority or prices will be sky high.....oh wait...ATI is the 'nice' corporation....they would never do that...:rolleyes:
 
It isn't about being nice or .....not, red. In continuation of DBZ's comments, I think by the middle of the next decade IG performance levels will probably reach discrete graphics (at least good enough for the mainstream level), hence it will hurt nVidia more than anyone else in this game. Although, having 3 major players in the market would be awesome for us 'the consumers'.

And DBZ, it's not about the hardware platform alone (i.e. CUDA/Tegra etc.); the reason Intel is getting a somewhat more positive response to their future plans is the portability/strong software support of the x86 architecture, which nVidia lacks or simply can't compete against.
 
It isn't about being nice or .....not, red.

That was a sarcastic remark in reference to an earlier thread in which an unidentified guest asserted that if Nvidia had no competition in the market they would raise their prices to exorbitant levels, whereas if ATI had the market to themselves they would keep prices low because 'they are the company that cares'. This is of course not the case, as you have also pointed out.


Although, having 3 major players in the market would be awesome for us 'the consumers'.
 
I was speaking in regards to their place in the gaming market.
I see.
In which case I think the short-to-medium term is pretty much a continuation of the last 3-4 years (since the G80 and R600 era). Southern Islands is a re-jigging for better tessellation performance (although the red team tell us it's irrelevant) while still using the 256-bit memory bus, and nVidia are probably not far away with a Fermi refresh with less variance in transistor length (less leakage) and a lower TDP - possibly with a better GDDR5 controller, which should translate into a 512-core part closer to the original spec (725-750MHz core / 1450-1500 shader / 4800 effective memory)...after that it's anyone's guess. AMD's Northern Islands timeline and its success or failure are going to depend largely on Global Foundries' (so far untested) 28nm process I think, while nVidia's fortunes rest with TSMC's version - hopefully they learn a bit from the cover-your-eyes-bad 40nm debacle.
Hopefully both companies can execute on their designs.
In the meantime, nVidia aren't going anywhere. They still have a massive share of the OEM market, and judging by the latest Steam survey, nVidia's 9400M-powered Macs seem to be selling well enough.
I hope they do not bow out or make it a low priority or prices will be sky high.....oh wait...ATI is the 'nice' corporation....they would never do that...:rolleyes:
Haha "Nice" and "Corporation" are mutually exclusive concepts.
BTW: How many reviewers and diehard AMDnatics will kick up a fuss when Southern Islands HD 6870?/6850? parts release with a 200+ watt TDP?
And DBZ, it's not about the hardware platform alone (i.e. CUDA/Tegra etc.); the reason Intel is getting a somewhat more positive response to their future plans is the portability/strong software support of the x86 architecture, which nVidia lacks or simply can't compete against.
x86 has some major limitations, as ARM has shown the wider tech community, and as has been apparent for some time. Intel in general get a "positive response" mainly because....they are Intel (i.e. the market's biggest player).
Bear in mind that nVidia are just as much (if not more so) a software-based entity as a hardware one. CUDA is a programming platform (not hardware) that has proved to be very effective and easily utilized. OpenCL/AMD Stream don't, at first glance, seem to be gaining that much traction, but the lead-in time for GPGPU is very long and nVidia have been at this game for quite a while. Also, PhysX is still the most widespread physics engine in commercial gaming use - off the top of my head I can think of only Bullet that has any other market penetration on PC (Havok? :rolleyes:)
 
Okay D, correct assessment or no?
I am a bit confused as to why Nvidia put so much design and engineering into GPGPU for cards that are sold largely for gaming purposes. The gap between a gaming GPU and GPGPU has not been bridged yet in the public's use of them, as sales figures demonstrate. The engineering and sophistication of the Fermi line was obviously problematic to produce and cost-prohibitive to a large section of the population. It also appears that its adoption by the public is still a few generations off, and that the APU, Fusion, Larrabee etc. will be here and viable by then. Or is it merely a matter of economy, in that they use the same silicon for the Tesla/Quadro line? It's more sophisticated, much more powerful, and more versatile than its competition. However, like you, I build these things, and have yet to have anyone approach me with a prerequisite for a new gaming build that included "needs to have superior parallel processing capabilities". It seems to have grossly overshot the public's need, understanding, and/or willingness to pay for the Fermi architecture (speaking in generalities, of course).
 
Looking back at nVidia's recent history, it seems that originally the idea was to die-shrink the GTX 200 series using the 40nm process (the G212), but for a number of reasons - most of which haven't reached the light of day - the G212 was abandoned, leaving nVidia without a high-end offering. What they did have was Fermi, which was designed from the outset as a compute card (hence 72-bit ECC vRAM even on the gaming cards) and was basically pressed into service as a desktop part. In essence, every GTX 480/470 is at heart a Tesla compute card featuring less memory, a different BIOS, and optimization more for speed and single-precision calculation (C2050 cards run slower, use less power and concentrate more on the double-precision aspect). So in short, Fermi was never intended for gaming in the form that is seen now, but as a $US2500 compute card.
It's the compute side of the architecture, the PolyMorph tessellation/shader flexibility, that makes the card able to take the hit of heavy graphics settings; earlier, more conventional cards lose a sizeable chunk of performance under heavy AA settings, whereas these cards don't really suffer at all in some scenarios. Whether that was planned or purely a byproduct of the architecture, I do not know.
I personally don't think the APU is of great benefit except for jack-of-all-trades systems and OEMs. The power required for even a modest GPU that could be classed as "mainstream" would be ballpark 80-100w if it has to have the shader horsepower for tessellation, dynamic lighting, ambient occlusion etc. (at the moment AMD's APU is looking no further than HD 5750 performance through 2012), which means that a multi-core CPU part needs to be around 25w at 100% loading...maybe in 5+ years when we head for sub-16nm processes. Both nVidia's Fermi refresh and AMD's Southern Islands are supposedly in the 200-215w TDP envelope; a die-shrink/arch change at 28nm in 2011 would possibly drop that down if both companies don't find other features to add, and 22nm after that probably will as well....but is either nVidia or AMD going to draw a line at features on the top-end cards just to save some wattage if the other company keeps upping their feature set?
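To put rough numbers on that power budget argument - a back-of-envelope sketch only, where the ~125w total APU envelope is my own assumption rather than a quoted spec:

# Back-of-envelope APU power budget. The 125 W total envelope is an assumption;
# the 80-100 W "mainstream GPU" range is the ballpark mentioned above.

APU_TDP_W = 125.0               # assumed total socket power envelope
GPU_BLOCK_W = (80.0, 100.0)     # ballpark for a mainstream-class GPU block

for gpu_w in GPU_BLOCK_W:
    cpu_budget_w = APU_TDP_W - gpu_w
    print(f"GPU block at {gpu_w:.0f} W leaves ~{cpu_budget_w:.0f} W "
          f"for the multi-core CPU part at full load")

Which is where the "CPU part needs to be around 25w at 100% loading" figure comes from, if the GPU side really does need the full 100w.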
Larrabee....well, who knows. A lot of mixed messages are emanating from Intel and every other white coat with a pocket protector. I think Intel has found out just how hard designing graphics accelerators actually is, and if Intel have proved two things over the past few years, they would be that they sure can execute on CPU, and they don't have clue one about graphics. I think Larrabee 3 (?) wasn't slated for introduction until 2015 in any case - this was before the project was (sort of) canned.
Anand's take on Larrabee
All this diatribe is definitely only my opinion and speculation regarding the future of the GPU - I don't think there are a lot of hard facts around at present, although there is plenty of supposition and a lot of conflicting voices. Sorry for the long-winded reply!
 
Some of the arguments you have raised are excellent. I may be wrong, but I have a hunch that nVidia probably knew that its relationship with Intel would run into trouble at some point; hence they tried to focus on the HPC market, which in their view had greater potential and less competition. Now yes, Fermi was aimed at that market; but there is another dimension to this issue, i.e. cost. I think if (as you also said) Intel is able to produce something tangible which can compete with Tesla (which they probably may have), Intel's production costs are much more competitive than nVidia's, so the mix of these two is a huge plus for them. Anyway, as you rightly pointed out, we'll know when we get to cross that bridge; until then it is just an educated guess from the bits we can gather here and there.
 
I noticed that Intel had cranked the PR machine into life concerning the MIC (Knights Corner). Lovely shot of Mr. Skaugen with the wafer.
One thing that struck me when I saw the wafer is the size of the dies. There's a better picture of the wafer here, sitting to the left of the decidedly nVidia-esque looking accelerator/processor (?). Counting the dies on the wafer (it's 12 inches) shows 13 x 10 (the die being rectangular according to the Intel die shot on their page), so that would imply each die is approximately 23mm x 30mm (23.4 x 30.5 being pedantic) for an overall die area of 690mm² (or 714mm², again being pedantic), which seems very, very big.....for comparison, Fermi is 529mm² and the 65nm GTX 2xx die is 576mm²........I can see why the blue accelerator has PCIe 6-pin and 8-pin connections.
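For anyone wanting to check that arithmetic, here is a quick sketch of the die-size estimate (the 13 x 10 count is just what was eyeballed from the wafer shot, so these are approximations rather than Intel figures):

# Rough die-size estimate from the wafer photo: 13 x 10 dies eyeballed across
# a 300 mm (12 inch) wafer. Approximations only -- not official Intel numbers.

WAFER_DIAMETER_MM = 300.0   # 12 inch wafer
DIES_ACROSS = 13            # dies counted along the wider axis
DIES_DOWN = 10              # dies counted along the narrower axis

die_width_mm = WAFER_DIAMETER_MM / DIES_ACROSS    # ~23.1 mm
die_height_mm = WAFER_DIAMETER_MM / DIES_DOWN     # 30.0 mm
die_area_mm2 = die_width_mm * die_height_mm       # ~692 mm^2

print(f"Estimated die: {die_width_mm:.1f} mm x {die_height_mm:.1f} mm "
      f"= {die_area_mm2:.0f} mm^2")
print("For comparison: Fermi ~529 mm^2, 65nm GTX 2xx ~576 mm^2")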
 
And it is a 32nm part... I guess things will become clearer over the 2011/12 period; also in that period Intel will probably be shifting to its 'tick' phase (after getting the 'tock' out, i.e. Sandy Bridge)... until then we can't be sure what the 'real world' specs & performance of these parts will be.
 