Assassin's Creed Valhalla can't hit 60fps@4K even with the RTX 3090

midian182

A hot potato: Tomorrow sees the launch of another Ubisoft title that promises to make the most of next-gen hardware: Assassin’s Creed Valhalla. It’s a gorgeous-looking open-world action-adventure, meaning it’s bound to be taxing on PC components. But anyone lucky enough to own an RTX 3090 can expect to hit well above 60 fps in 4K at Ultra settings, right? No, apparently not.

The worrying news comes from the GameGPU YouTube channel (via DSOGaming). It appears to show Valhalla running on Nvidia’s Ampere flagship paired with a Core i9-10900K. The in-game benchmark has it averaging around 56 fps, though some sections fall as low as 40 fps.

In our review of the RTX 3090, the $1,200 card averages 120 fps across 14 games at 4K/high quality. Even the fairly demanding Horizon Zero Dawn managed 79 fps at 4K/Ultra settings. But it seems that, not for the first time, Ubisoft is releasing a title that will struggle on some PCs. In its PC requirements for Valhalla, the company lists an RTX 2080 Super and an Intel Core i7-8700K for 4K@30fps.

Anyone who bought the original Watch Dogs might remember it being almost unplayable on even the beefiest rigs until Ubisoft released several patches, and the latest entry in the franchise—Watch Dogs Legion—is proving to be a nightmare for those looking to hit 60 fps at Ultra settings and high resolutions, especially with ray tracing enabled. And it’s even worse for anyone with an aging processor.

It’s possible, of course, that there will be a day-one patch from Ubisoft that improves performance, and we can expect a new driver update from Nvidia to do the same, but the threat of another Ubisoft title that runs like a pig won’t help the company’s less-than-stellar reputation in this area.


 
It’s absolutely fine for the 3090 to struggle at 4K ultra on a brand new game as long as the visuals justify the performance.
 
It’s absolutely fine for the 3090 to struggle at 4K ultra on a brand new game as long as the visuals justify the performance.
Is it though? This just sounds to me like they didn't have time to optimise the game and are focusing on the content and actually "finishing" the game. Optimisations are always in the "nice to have" column and are worked on last.
 
From what I have seen, it's the same very poorly optimized engine from AC Origins and Odyssey; it even uses the exact same (garbage) horse models and many other assets. They do not deserve more than 15-20 euros for this, so I will buy it after a year.
 
The title should say it can't hit 4K 60 fps at Ultra. If the Series X is running this at 4K 60 fps, a 3090 can too. Maybe volumetric clouds on Ultra are like in the previous games, where turning it down one notch gets you something like a 20% performance improvement.

Still, the AC team is horrible at optimizing for PC.
 
The title should say it can't hit 4K 60 fps at Ultra. If the Series X is running this at 4K 60 fps, a 3090 can too. Maybe volumetric clouds on Ultra are like in the previous games, where turning it down one notch gets you something like a 20% performance improvement.

Still, the AC team is horrible at optimizing for PC.
Xbox runs AC with settings below "low" plus checkerboard trickery and high textures, so it's not really a good comparison. The problem here is the poor optimization for PC: they squeeze everything possible out of the consoles and then do lazy PC ports afterwards, and that has to stop.
 
Quote: "Tomorrow sees the launch of another Ubisoft title that promises to make the most of next-gen hardware: Assassin’s Creed Valhalla."
Correction: It's a cross-gen game, so no, it will not make the most of the next-gen hardware. There was a statement from Ubisoft recently where they said they need to redo their whole engine for next gen to fully utilize (make the most of) next-gen hardware, both consoles and PC.

Quote: "In our review of the RTX 3090, the $1,200 card..."
Correction: The MSRP of the cheapest RTX 3090 is $1,499. The real market price is even more than that.

On topic: ACV runs 4K 60 on the XSX, so it is only Ubisoft's fault that it does not run 4K 60 on such a powerful GPU. It does need optimization, but no matter how much better Ultra looks or how hard it hits performance on PC, it still should...

Also, I would not be surprised if a Ryzen 5000 + RX 6800 XT + SAM setup actually manages to run 4K 60 Ultra without any patch. That would be funny.
 
These games are optimised for RDNA 2, which I think is pushing a lot of INT32 calculations and saving them in the Infinity Cache. As far as I can tell, most of AMD's new tech uses INT32 instructions, something Nvidia just nuked on their cards.
 
I was actually surprised I could run Odyssey as well as I did on my hardware. A 6700K + GTX 1080 got me 40-50 fps on the Very High preset at 2K with almost no stutters. Great-looking game that, to me, doesn't look like much of a downgrade compared to Valhalla.
 
It’s absolutely fine for the 3090 to struggle at 4K ultra on a brand new game as long as the visuals justify the performance.
It is not the game; the graphics quality is nothing out of the ordinary. The graphics engine is poor, and Ubisoft is known for bad 3D engine performance.
 
As far as I can tell, most of AMD's new tech uses INT32 instructions, something Nvidia just nuked on their cards.
The INT32 capabilities of the RTX 3000 series are no worse than those of the 2000 series - if anything they will be marginally better due to the increase in L1 cache bandwidth. INT instructions are used for flow control, branching, etc. In the case of RDNA, two dedicated scalar units per CU handle basic INT32 instructions and the more complex ones are handled by the SIMD32 units. In the case of Turing and Ampere, there are separate INT32 units for all such instructions.
 
Same with WDL: these Ubi open-world engines are often CPU-bound. I don't know if that is the case here or not. The other thing this points out, though, is just how demanding 4K continues to be. Even though the next-gen consoles tout 4K 60 fps performance, we're seeing dynamic resolutions, 30 fps, or both in many games.

The RTX 3080/90 are definitely powerful new GPUs. If the XSX is just behind the 2080 Super, then the 3080 should be at least 50% more powerful than the XSX, and the 3090 perhaps 70% more powerful. However, when the PS4 came out there were GPUs that were easily twice as powerful; the 780 Ti, for example, had about 250% of the PS4's performance, which gives you a clue to the excellent value of the XSX, and even the PS5, in the current marketplace. The 780 Ti is still a decent 1080p card today; I don't think we'll be saying the RTX 3080 is still a decent 4K card seven years from now. Honestly, I don't think we will be in four years.

The 3080 is not as powerful as the 30 TFLOPS number suggests, and not by a long shot. The thing is, to achieve that figure a program has to be using exclusively FP32 math. Nvidia estimated that modern games average about 25% INT32 math. I don't know exactly how this works, but if we assume that means 25% of the cores will normally be running INT32, then the real performance that can normally be expected of the RTX 3080 should be around 22.3 TFLOPS, about 150% of the performance of the 2080 Ti. But the average real advantage being seen in game benchmarks is somewhere between 25-35%, depending on which games were reviewed. If we give it the higher figure, that means the comparative TFLOPS would actually be right around 19.2, less than what is claimed for the 3070. It also suggests that about a third of the cores are typically tied up by INT32 math.
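
For what it's worth, here is that arithmetic as a quick Python sketch; the per-card TFLOPS figures are assumptions pulled from (or chosen to reproduce) the numbers in this post, not benchmark results:

# Back-of-the-envelope version of the estimate above.
# All inputs are assumptions from the post, not measurements.
PEAK_TFLOPS_3080 = 29.8   # Nvidia's headline FP32 figure for the RTX 3080 (the "30" above)
INT_FRACTION = 0.25       # Nvidia's rough estimate of INT32 work in modern games

# If ~25% of the cores are busy with INT32, only the rest contributes FP32 throughput.
effective = PEAK_TFLOPS_3080 * (1 - INT_FRACTION)
print(f"Effective FP32 estimate: {effective:.1f} TFLOPS")             # ~22.3

# Working backwards from observed benchmark gains over the 2080 Ti instead:
TFLOPS_2080TI = 14.2      # assumed 2080 Ti figure, picked to reproduce the 19.2 above
OBSERVED_GAIN = 0.35      # upper end of the 25-35% real-world advantage
implied = TFLOPS_2080TI * (1 + OBSERVED_GAIN)
print(f"Implied comparative TFLOPS: {implied:.1f}")                   # ~19.2

# Share of the peak that this would leave tied up by INT32:
print(f"Implied INT32 share: {1 - implied / PEAK_TFLOPS_3080:.0%}")   # ~36%, about a third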

I would wager that the RX 6800 XT will have a significant advantage in these Ubi open-world games due to its boost clock and core count; 4608 is not the 8704 that Nvidia claims, but at 2.25 GHz and 4608 cores the XT should be capable of 20.7 TFLOPS, better than the 19.2 (adjusted) TFLOPS of the RTX 3080. That being said, the 3080's RT capabilities will likely give it better frame rates with ray tracing in a game like WDL (even with DLSS off) than the 6800 XT, assuming the 3080 really does have better RT capabilities.
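
As a side note, the 20.7 TFLOPS figure is just the usual peak-throughput formula (shader cores x 2 ops per clock for FMA x clock speed); a minimal sketch using the numbers quoted above:

# Peak FP32 throughput: shader cores * 2 ops/clock (FMA) * clock in GHz -> TFLOPS
def peak_fp32_tflops(shader_cores: int, clock_ghz: float) -> float:
    return shader_cores * 2 * clock_ghz / 1000.0

print(peak_fp32_tflops(4608, 2.25))   # RX 6800 XT figures from the post -> ~20.7
print(peak_fp32_tflops(8704, 1.71))   # RTX 3080 at Nvidia's quoted boost clock -> ~29.8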
 
The INT32 capabilities of the RTX 3000 series are no worse than those of the 2000 series - if anything they will be marginally better due to the increase in L1 cache bandwidth. INT instructions are used for flow control, branching, etc. In the case of RDNA, two dedicated scalar units per CU handle basic INT32 instructions and the more complex ones are handled by the SIMD32 units. In the case of Turing and Ampere, there are separate INT32 units for all such instructions.

Yup. Nvidia’s marketing has been terrible with respect to the separate INT pipeline on Turing. It’s only served to confuse people.

All of Nvidia’s unified shader architectures run INT instructions on the SIMDs dating back to G80. AMD introduced scalar ALUs with GCN that are helpful where an entire wavefront of threads share the result of a calculation. For normal INT instructions AMD runs those on the SIMDs just like Nvidia.
 
Is it though? This just sounds to me like they didn't have time to optimise the game and are focusing on the content and actually "finishing" the game. Optimisations are always in the "nice to have" column and are worked on last.

I don’t know how optimized this particular game is. Just saying that a 3090 struggling at 4K ultra is not necessarily a bad thing.
 
Is it though? This just sounds to me like they didn't have time to optimise the game and are focusing on the content and actually "finishing" the game. Optimisations are always in the "nice to have" column and are worked on last.

Go back and look at the performance of the 2080 Ti when a new game came out; the same thing occurred. These things take time.
 