Benchmark DirectX 12 with 3DMark's freshly released Time Spy test

Scorpus


It's been a long time coming, but now it's finally here: 3DMark's 'Time Spy' benchmark is now available to test DirectX 12 graphics performance in a synthetic setting.

Time Spy is a very demanding benchmark that uses an array of DirectX 12 features, including asynchronous compute, explicit multi-adapter, and improved multi-threading. The test runs at 1440p by default, which is enough to crush modern graphics cards despite being a lower resolution than the 4K used by 3DMark's previous flagship test, Fire Strike Ultra.

For those who are deeply familiar with Futuremark's graphics benchmarks over the years, Time Spy will be of particular interest. The test is set in a museum with a number of exhibits, each featuring an older 3DMark benchmark, including those found in the modern suite such as Fire Strike and Ice Storm. Plus, it looks visually incredible.

3DMark Time Spy is available in all versions of 3DMark, including the free version. Those who fork out for the Advanced or Professional versions do get a collection of extra tests, including the ability to disable asynchronous compute, and a stress test that could be particularly useful for overclockers.

You can download the Basic (free) version of 3DMark right now from our download section here, which includes the Time Spy benchmark. If you want to purchase the Advanced version, it's currently on sale for just $10 through Steam.


 
So, I think I'm beginning to understand:
- DX12 improves AMD GPUs, making up for fairly poor DX11 drivers and, on performance/price, finally pushing them ahead of nVidia (DX12, Mantle and Vulkan can all pretty much do this)
- nVidia has some things to improve to match, but still outperforms on the large body of DX9-DX11 games
- TDP will moderate, allowing many to upgrade without doubling the size of their PSU

Time Spy provides a DX12 benchmark (tight focus) and some interesting results. However, it does not provide for a broader assessment. It is an interesting tool.

A new performance value proposition (performance @ resolution / price aka PRP) needs to be better defined, understood and accepted. Elegant performance (like seeing the glint in the horsefly's eye as you mount your trusty steed) may be important for the elite screen folks (4k and higher), but it is the PRP that will sell most of the new mid-range cards to us blokes with resolutions of 1k or even less.

Now I will bet there are folks who will agree and disagree. Looking forward to hearing from all. Would there be value in a ratio like ( performance @ resolution ) / price?
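
For what it's worth, here is a minimal sketch of how such a PRP ratio could be computed; the card names, frame rates and prices are made-up placeholders, not real benchmark data:

```python
# Rough sketch of a "PRP" (performance @ resolution / price) ratio.
# All card names, frame rates and prices are hypothetical placeholders.
cards = [
    # name, average fps at the resolution you actually play at, street price (USD)
    ("Card A", 62, 240),
    ("Card B", 78, 380),
    ("Card C", 95, 600),
]

for name, fps, price in cards:
    prp = fps / price  # frames per second per dollar at that resolution
    print(f"{name}: {prp:.3f} fps/$")
```

The only difference from a plain performance/dollar figure is that the frame rate is measured at the resolution (and settings) you actually intend to play at, rather than at a single reviewer-chosen resolution.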
 
Now I will bet there are folks who will agree and disagree. Looking forward to hearing from all. Would there be value in a ratio like ( performance @ resolution ) / price?

It will be the same as performance/dollar unless you plan on reducing graphics settings for lower resolutions. Then again, there would need to be a consensus on which settings can be reduced at lower resolutions without degrading visual quality.
Also, I think async compute improves performance on AMD cards on both high and low graphics settings.
 
It will be the same as performance/dollar unless you plan on reducing graphics settings for lower resolutions.
I wonder... I think there might be rapid changes in perceived performance as resolutions drop, especially for cards with smaller amounts of memory. It would be interesting to learn which configurations show a big jump in performance from a small reduction in resolution.
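
One factor here is video memory: render targets and buffers scale with pixel count, so a card with limited VRAM can fall off a cliff once it starts spilling into system memory. A back-of-the-envelope sketch of that scaling (the bytes-per-pixel figure is an assumed, illustrative number; real engines vary widely):

```python
# Back-of-the-envelope estimate of how render-target memory scales with
# resolution. The 16 bytes/pixel figure is an assumption (colour + depth +
# a couple of G-buffer targets); real engines vary widely.
BYTES_PER_PIXEL = 16

resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

for name, (w, h) in resolutions.items():
    mib = w * h * BYTES_PER_PIXEL / (1024 ** 2)
    print(f"{name}: ~{mib:.0f} MiB of render targets")
```

On this rough math 4K needs about four times the render-target memory of 1080p, which is one reason cards with less VRAM can see disproportionate gains from a drop in resolution.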
 
Like every new DirectX, it will take years for developers to really start utilizing it anywhere near its potential.
By that time today's GPUs, even the latest releases, will be obsolete.

While AMD did themselves a favor by preparing beforehand with their feature support to help them right out of the gate, I feel that by Christmas both red and green will have a better handle on what they each do well and what they don't.
 