Previewing DirectX 12 Mixed GPU Gaming with Ashes of the Singularity

Thanks for this Steve.

Now we wait to see if nVidia can do better with a future driver than they did here.
As for mixing GPUs, my OCD won't allow it. That, and if one card does better than the other, why would you mix them?
The logic for mixing them would be that if you play two different games, one might have an affinity for Nvidia while the other has an affinity for AMD. By simply swapping the primary card you get the best of both worlds, and still get a boost in both games.
 
Nvidia is the Microsoft of graphics cards. So greedy. It is a constant obstacle.
 
Techspot on damage control mode... "Only one DX12 game", "not representative". Come on... I thought you guys were knowledgeable about hardware. At least try to hide your nVidia bias better. Jesus.

AMD will dominate in DX12. They have the faster hardware. Under DX11 the hardware is limited (which, btw, will be fixed with Polaris), and under DX12 their full potential comes to light. Even without concurrent async, AMD cards are pulling ahead. With async the gap will only widen. And no, nVidia cards cannot do it, and neither will Pascal. AMD will be on top, at least until Volta comes out. You don't have to believe me. Just wait and see. And FYI, cards like the GTX 970 are already outdated. Anyone who went for one instead of the R9 390 is going to have regrets.

On another note, it's interesting that mixing cards from two different vendors (for example, Fury X + 980 Ti) gives the best scaling of all the dual-card configurations. Oxide itself does not know why.
 
Techspot on damage control mode... "Only one DX12 game", "not representative". Come on... I thought you guys were knowledgeable about hardware. At least try to hide your nVidia bias better. Jesus.
That's usually a prelude to some pseudo-marketing spiel by someone with way too much personal investment in supporting another multi-billion dollar company. Let's see if I'm right...
AMD will dominate in DX12. They have the faster hardware. Under DX11 the hardware is limited (which, btw, will be fixed with Polaris),
Please provide proof. Mantle was pushed by AMD in part because they could not control the DX11 driver overhead issue. I've heard nothing to suggest that the overhead issue is being overhauled for DX11. Polaris is hardware; DX11 overhead is software. If you are expecting a brute-force approach through sheer GPU horsepower, you will very likely be disappointed. Both IHVs have had very similar performance down their product stacks for the last twenty-odd years, and there is no reason to believe that the paradigm is about to come to an end. Even when the vendors used different manufacturing partners in the past (e.g. ATI's use of TSMC's 130nm Lo-K and Nvidia's use of IBM's 130nm FSG), the performance was fairly equal.
and under DX12 their full potential comes to light. Even without concurrent async, AMD cards are pulling ahead.
Yeah?
[attached benchmark chart: 2015-09-25-image.gif]

You'll have to do better with the guerrilla marketing than that. Anyone with rudimentary web search skills would know that the best-case scenario for AMD in Fable Legends is parity.
With async the gap will only widen. And no, nVidia cards cannot do it, and neither will Pascal.
You can no doubt provide proof of this. Thought not. By all means voice an opinion, but don't proclaim it as fact if you can't substantiate it.
The other thing to consider is that, even for current architectures, async compute is not the be-all and end-all of DX12. AMD has its own issues with conservative rasterization and ROVs, which could lead to performance problems or feature unavailability if a developer decides to implement DX12's new transparency or custom blending functionality.
You don't have to believe me. Just wait and see.
I don't really have to do either. The Fable Legends benchmark results are readily available, and the announced list of DX12 games is Unreal Engine heavy - which doesn't tend to favour AMD - so I certainly wouldn't buy into your blinkered Sunnyvale Nostradamus schtick.
 
Techspot on damage control mode... "Only one DX12 game", "not representative". Come on... I thought you guys were knowledgeable about hardware. At least try to hide your nVidia bias better. Jesus.

AMD will dominate in DX12. They have the faster hardware. Under DX11 the hardware is limited (which, btw, will be fixed with Polaris), and under DX12 their full potential comes to light. Even without concurrent async, AMD cards are pulling ahead. With async the gap will only widen. And no, nVidia cards cannot do it, and neither will Pascal. AMD will be on top, at least until Volta comes out. You don't have to believe me. Just wait and see. And FYI, cards like the GTX 970 are already outdated. Anyone who went for one instead of the R9 390 is going to have regrets.

On another note, it's interesting that mixing cards from two different vendors (for example, Fury X + 980 Ti) gives the best scaling of all the dual-card configurations. Oxide itself does not know why.

Nice troll post.

The only person on damage control here is you. And quite frankly AMD doesn't need you to fight their battles for them.

What you fail to see is that by the time there are a handful of DX12 games out, both NV and AMD will be on a new generation of cards, so at the end of the day none of this will really matter.
 
Would anyone run dual 390Xs? It may be cold outside at the moment, but a dual 390X setup would solve all your heating problems ;) You might actually need to turn on the air-con.
It would draw a lot more power than a central air heating system, though.
 
@dividebyzero Even though I buy AMD graphics, I must say how much I appreciate your well thought out and intelligent posts. Thanks, that's 'my two cents' as the expression goes.
 
Techspot on damage control mode... "Only one DX12 game", "not representative". Come on... I thought you guys were knowledgeable about hardware. At least try to hide your nVidia bias better. Jesus.

AMD will dominate in DX12. They have the faster hardware. Under DX11 the hardware is limited (which, btw, will be fixed with Polaris), and under DX12 their full potential comes to light. Even without concurrent async, AMD cards are pulling ahead. With async the gap will only widen. And no, nVidia cards cannot do it, and neither will Pascal. AMD will be on top, at least until Volta comes out. You don't have to believe me. Just wait and see. And FYI, cards like the GTX 970 are already outdated. Anyone who went for one instead of the R9 390 is going to have regrets.

On another note, it's interesting that mixing cards from two different vendors (for example, Fury X + 980 Ti) gives the best scaling of all the dual-card configurations. Oxide itself does not know why.
Talking about hiding bias...
 
On paper, yes, this looks great. However, I have to question running both nvidia and AMD drivers on the same system. I expect there to be issues! I 100% guarantee we'll see various strange problems arise.
 
This looks good, to be able to mix and match if most games support DX12.
Looks like my R9 290X is a keeper, if AMD cards do better than Nvidia on DX12.
 
This looks good, to be able to mix and match if most games support DX12.
Looks like my R9 290X is a keeper, if AMD cards do better than Nvidia on DX12.
This specific game is optimized for AMD. Overall, games are usually optimized for nVidia, have been for years, and most likely always will be.
 
Will I be able to run my GTX 480 alongside my 970? What gen does a GPU have to be to run in this mode? Now that I ask, I'll venture a guess and say it has to be DX12 compatible?

You could, but I doubt it would be worth it. I've tried using my MSI 660 Ti PE/OC (faster than a 680 in most cases) with a Zotac 630 SYNERGY 4GB (terrible card, never buy one), running the 630 as a dedicated PhysX card, and in some benchmarks (Metro 2033) performance went down. Linus tried it with a 580 and a 9800, and the same thing happened.
I have more faith in Vulkan and DX12, as MS tends to break things more than fix them, but it all comes down to the game devs, who in recent years have been getting worse and worse.
I'm guessing, at least for the time being, you will need cards of similar performance, within 1-2 generations, e.g. 680/770/960. Later on they might add automatic load scaling/balancing (really not sure what to call it).
All I can say is consoles can suck it! :D
 
You could, but I doubt it would be worth it. I've tried using my MSI 660 Ti PE/OC (faster than a 680 in most cases) with a Zotac 630 SYNERGY 4GB (terrible card, never buy one), running the 630 as a dedicated PhysX card, and in some benchmarks (Metro 2033) performance went down. Linus tried it with a 580 and a 9800, and the same thing happened.
I have more faith in Vulkan and DX12, as MS tends to break things more than fix them, but it all comes down to the game devs, who in recent years have been getting worse and worse.
I'm guessing, at least for the time being, you will need cards of similar performance, within 1-2 generations, e.g. 680/770/960. Later on they might add automatic load scaling/balancing (really not sure what to call it).
All I can say is consoles can suck it! :D

I believe the reason for this is that the main GPU cannot be too many generations ahead of the card running PhysX; if it is, you get a performance regression.

A 9800 paired with a 580 is a good example - a better dedicated card for that setup would be a 650 or a 750 Ti.
 
What truth? Unless you've built a time machine, no one posting here knows how the DX12 landscape will fill out.
Don't question him - we are biased if we do, he is not.

Let me explain some things regarding async, because people here seem to be completely lost as to why I'm saying these things... even to the point of claiming I'm biased. Maybe, after this, you will understand.

As soon as async is used properly, AMD will have the advantage. There is no question about this. It is basically a free performance increase, so for it not to be used would be quite the waste, and that's an understatement. I don't think developers' willingness to use async will be a problem. They've been talking about it for a long time, and it's regularly being implemented on consoles using GCN. Porting it to AMD's GCN on PC would not be a huge task; the porting itself is harder than getting the already-implemented async to work. There are also multiple games to be released this year that will be using it.
It wouldn't be the first time, though, that an ATi/AMD technology got ignored because of nVidia's power in the gaming industry, and that will be the only factor limiting this performance advantage for AMD. When DX10.1 gave almost free anti-aliasing, it was ignored because nVidia hardware couldn't handle it. Even worse, it was removed from a game:
http://www.anandtech.com/show/2549/7

Back to async...
To do async like it's supposed to be done, nVidia requires a preemption context switch. To explain what that means, I have to highlight some other things first so that you can follow what's going on.

When people talk about AMD's async compute, what they actually mean is that graphics/shader tasks and compute tasks are processed in parallel AND, at the same time, in a 'random' order. The latter part is not exactly accurate, but it makes this easy to understand. Some tasks (whether of a compute or graphical nature) are long and some are short, and what I mean by processing in this random order is that you can insert the short tasks in between the long tasks, preventing the GPU from idling. The GCN compute units can handle the long and short graphics/shader tasks AND the long and short compute tasks mixed together like a soup: all of them are interchangeable with each other for processing within AMD's compute units. This blending pushes the efficiency of the GPU very high.

nVidia's hardware cannot do this in the same way. It can handle either the mixing of long and short graphics/shader tasks, OR the mixing of long and short compute tasks. This is what we mean when we say it requires a context switch: you have to keep switching between graphics/shader and compute tasks, which is obviously less efficient than AMD's hardware solution. And yes, being able to blend the short and long graphics/shader tasks is more efficient than doing them in order; same for compute. But a context switch is costly. If you're doing async graphics now, you basically have to throw out your whole bowl of graphics soup to create a new compute soup, and that causes delay. What you gain by running the graphics/shader soup and the compute soup separately in an asynchronous manner is lost by having to switch between them.
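The interleaving argument above can be sketched with a toy scheduler. This is purely illustrative: the lane count, task durations, and switch cost are made-up numbers, not real GPU behavior, and the two functions are only rough analogues of the mixed-queue and preemption approaches being discussed.

```python
import heapq

def interleaved_time(tasks, lanes=4):
    """Place every task on the first free lane regardless of type --
    a rough analogue of mixing graphics and compute in one 'soup'."""
    free = [0.0] * lanes            # time at which each lane becomes free
    heapq.heapify(free)
    for _, duration in tasks:
        start = heapq.heappop(free)
        heapq.heappush(free, start + duration)
    return max(free)                # finish time of the last lane

def batched_time(tasks, lanes=4, switch_cost=2.0):
    """Run all graphics tasks, pay one context switch, then all compute --
    a rough analogue of a preemption-based approach."""
    gfx = [t for t in tasks if t[0] == "gfx"]
    comp = [t for t in tasks if t[0] == "compute"]
    return (interleaved_time(gfx, lanes) + switch_cost
            + interleaved_time(comp, lanes))

# A mix of long and short graphics and compute tasks (invented values).
tasks = [("gfx", 8), ("compute", 1), ("gfx", 2), ("compute", 7),
         ("gfx", 1), ("compute", 2), ("gfx", 7), ("compute", 8)]

print(interleaved_time(tasks))  # 12.0 -- short tasks fill the gaps
print(batched_time(tasks))      # 18.0 -- batching plus the switch costs more
```

With these invented numbers the mixed schedule finishes earlier because short tasks slot into lanes that would otherwise sit idle, while the batched schedule cannot overlap the two batches at all and pays the switch on top.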

Obviously, nVidia is claiming it can do async, and they would not necessarily be lying, but it's completely different from AMD's, and, well, it's borderline useless for performance gains. And they will not be admitting this, because they advertised their graphics cards as superior for DX12 due to being capable of "DX12.1", even though no such thing exists. And they would get some backlash if it turned out that some graphics cards from 2011 (GCN 1.0) do some things (like async) better under DX12 than their "DX12.1" 2015 cards.

I hope this clarifies some things... Ashes of the Singularity is representative of DX12 performance + async. And I hope you understand now that it is futile to hope for nVidia's performance to be anywhere near AMD's when async is used. Don't wait for it, because you will be disappointed. The elimination of the CPU overhead issue in the jump from DX11 to DX12 already gave AMD's cards a huge boost. Add in async, and the only card that can maybe compete with the Fury X is a factory-overclocked 980 Ti. At all other price points, AMD's cards will smoke nVidia's under DX12 + async.

nVidia has admitted that their preemption problem is still a long way from being solved; it's stated on page 23 of their 2015 GDC presentation:
http://www.reedbeta.com/talks/VR_Direct_GDC_2015.pdf

That's why I'm saying this is representative of DX12 performance. Everything I've stated here is verified information that is available for everyone to find and understand. Pascal likely won't fare any better, and Polaris will be the cards to go for, especially since Polaris will fix the front end to eliminate the DX11 issues that current GCN cards face. Combine that with the async benefits, and it's a no-brainer. That is, of course, if Polaris indeed delivers what was promised.

Again, nothing I stated here is secret. It's all over the place if you actually make the effort to understand what has officially been published and investigated, which is exactly why I find it atrocious that a tech website, which is supposed to know better than a layman (which I am), is making unfounded claims based on nothing more than empty opinion without any data to back them up. And it gets annoying when these comments are always to the benefit of the same one and detrimental to the other. It even makes me wonder who sponsors this place.
If it were the other way around - nVidia having async and not AMD - no one would be saying it's not representative; it would be seen as expected. Objectivity is nowhere to be seen, and if someone points this out, others have the audacity to call that someone biased.

Going back to what I just explained regarding async... if I, a layman who simply keeps up with PC hardware news as a hobby (with a full-time job in something completely different, while also building a house, btw), can figure these things out, why can't the staff of multiple websites, who have superior access to information in every way? What is the media doing these days...?
 

I think what you are missing is that no one is disputing AMD's current advantage, since they have it in hardware.

What I've been saying is that it isn't a huge deal, because by the time we finally get to the point where you need more than ten fingers to count all the DX12 games out, NV will finally have it in hardware. It's great bragging rights now, but in the grand scheme of gaming we will already be a generation or two of cards ahead - you are aware of how many years it takes to develop games. Both sides will be on completely different hardware in that time frame. So having a lead now does not equal a lead in the future.

The reason people are calling bias is that you are ignoring this fact.

And for the record, my GPU is a Radeon 7970 GHz!!!

I prefer AMD GPUs to NV's, but I also look at things objectively, with a dab of logic thrown in.
 
I think you're underestimating the DX12 adoption rate, considering this year alone there will be at least five, but I might be wrong about that. However, another point is that right now it's much better to invest in a GCN card in terms of longevity, since async will give a performance boost. I'm quite sure you're still enjoying your HD 7970, since it also benefits a bit from async. In a way, AMD's cards being future-proof hurts their sales, because the need to upgrade comes less frequently than with nVidia's...
 
Does anyone know if we can carry on using GeForce features like 3D Vision and Nvidia Shield in an Nvidia/AMD configuration?
 