Hogwarts Legacy GPU Benchmark: 53 GPUs Tested

Can we just be honest here? Nvidia will fix these issues with a driver update, and the people praising AMD won't be able to anymore. Things like this happen occasionally, and Nvidia always fixes the issue.
 
The RTX 3070 beats the 6750 XT 12GB at 1080p and 1440p ultra settings (no RT).

Any other setting (like RT or 4K) runs at poor fps on the 6750 XT. In other words, the 12GB on the 6750 XT has no advantage over the 3070 in any realistic setting that most people will use.

What is the point of "large memory" when it is only useful at settings that most people will probably not use? People won't be using RT on a 6750 XT; even the 6800 XT drops below 60fps at 1080p with RT. Most people these days play at 60+fps, and Nvidia's 8GB and 10GB cards are NOT an issue at any setting/resolution that runs at 60+fps.
 
Let's face it... outside of the usual binary arguments, the game is perfectly playable on either AMD or Nvidia, except at 4K ultra using ray tracing. It is only debatable whether it's worth paying two grand for a graphics card that can't do it.
 
I'm playing the game at 3440x1600 with a 12900K / 4090. The game runs terribly. I have it maxed out (RT maxed) with DLSS Quality and it dips into the 40s all the time, especially in that early courtyard area. Sure, it runs in the 120s at times but drops down a LOT! This is just down to poor code; I can run RDR2 totally maxed, smoothly, in the 130s. Talking benchmarks for this game in its current condition is pointless. It's going to run poorly on everything.
 
No regrets buying that RX 6750 XT for $100 less than a 3060 now. Suck on that, 4080/4090, LOL! Never had a use for ray tracing before with my 2000-series card. Still don't, plus I don't want to use a crutch like DLSS anyway. Fuzzy graphics are such a dope feature, yo. *fart noise*
 
Let's face it... outside of the usual binary arguments, the game is perfectly playable on either AMD or Nvidia, except at 4K ultra using ray tracing. It is only debatable whether it's worth paying two grand for a graphics card that can't do it.

I'm not a PC gaming enthusiast just to achieve "playable". The reality is this is such a bad port you can't even max it out on a 4090. It's a great game with utter garbage code that's an embarrassment for their attempt at PC gaming. I haven't seen a game this bad since Far Cry 6. If a game plays better on console than on a 4090, that's just a joke.
 
I'm glad that you realise this truth that too many people can't seem to understand: games don't need RT to look amazing. I've played games like Gotham Knights and Cyberpunk 2077 with RT on and RT off. I really can't tell the difference either way, so I just leave it off; if I can't tell, it's just a waste of electricity.
The only games I've played where RT seems to make a difference are Control and Guardians of the Galaxy; other than those two, the performance cost is either too high, or there's so much other stuff happening on screen (because it's a game you're playing) that the effects of RT are missed.

The reflections and lighting that game engines already have are so darn good (Hogwarts Legacy, AC Origins/Odyssey/Valhalla, Callisto Protocol, Days Gone, RDR2, GTA5) that RT, IMHO, just isn't worth it yet, and that comes from someone who bought their card because they wanted a decent RT option.
 
Nice review! Thanks. From what I understood of a PCWorld article, the 7900 series suxx in this game; fortunately there is TechSpot. 👍
That said, there is something wrong with the 7900 series memory.
 
https://www.techspot.com/review/2099-geforce-rtx-3080/
"Doesn't necessarily future proof the GPU however, as Nvidia paired it with 10GB of VRAM which might prove insufficient in a year or two..."
I didn't mean you specifically, Steve. I know that you're a big fan of the RX 6800 cards. I meant "tech press" as a more general term. I don't want to name names, but I think you know which members of the tech press I was referring to. HINT: It wasn't you specifically.

Nevertheless, I think you were too soft on nVidia for the 10GB VRAM buffer. I realise that you're a polite gentleman and I do like that about you but people were spending thousands on these cards because they didn't know any better. RT or not, DLSS or not, those cards will be relegated to 1080p gaming sooner rather than later. For that reason alone, I would've advised people to avoid them.

Now, I realise that the performance is good and the quality is definitely good, and I've always said that nVidia doesn't make bad products; the company itself is what I can't stand. This is a perfect example of why I won't use them and rarely advise others to use nVidia. These cards have a built-in Achilles' heel that was glaringly obvious to people who know PC tech. Jensen called this card "The Flagship", but since when does a flagship card have less VRAM than cards of previous generations?

If a card has a built-in Achilles' heel like this, it shouldn't be one that costs that much money. If they had done this to, say, the RTX 3050 or RTX 3060 it wouldn't have been so bad because people aren't cashing in their life's savings for those cards. I may snicker at people who spent all that money and are now encountering this problem but it's not because I bear them any malice, quite the contrary. It's because I've become somewhat hopeful that nVidia will suffer the wrath of all the people they screwed over.

I realise that it's probably not going to happen (nVidia has always gotten away with it before) but they've never done something this egregious before. Sure, there was the GTX 970 VRAM debacle, but that was just 512MB partitioned onto a much slower path, and it didn't have a huge impact on the card's longevity. In this case, people are going to be stuck at 1080p and they're going to see the people with Radeons not having to break the bank a second time because, let's be honest here, the difference between 10GB and 16GB on a video card is MASSIVE.

When buying a video card, you're buying hardware. Sure, there are software packages offered by both companies that can enhance your experience but software behaviour can always be replicated, improved and updated. On the other hand, hardware like video cards can't be replicated, improved and updated. The amount of VRAM that a card is born with is the same amount that it dies with. This is why I believe that the hardware features of a video card far outweigh any software features.

Now, I will concede that not all software features are created equally. I wouldn't be so foolish as to say that ATi's implementation of OpenCL is as good as CUDA but it did still work and for most people (who don't use it professionally), it was fine for the odd times that they would ever use it. OTOH, PhysX was ultimately defeated by Havok which proves that Havok was the better solution. I realise that neither OpenCL nor Havok were AMD creations but their creators are irrelevant as long as they can be used.

DLSS and FSR comparisons have become comedic because, as Tim so rightly pointed out, you have to slow the game down and literally search for differences to make these comparisons. At this point, any quality difference that exists (if any) is so slight as to be irrelevant. Thus, the DLSS selling point has been successfully nullified by FSR, but the extra 6GB of VRAM can never be nullified.

Did it begin that way? No, initially DLSS was clearly superior but it is that truth that proves my point most effectively. DLSS was nullified as a feature by FSR because FSR improved and now DLSS doesn't offer any real advantage over FSR. In fact, I would say that FSR is superior overall because it doesn't block anyone from using it regardless of what GPU they have, something that I consider to be significant and worth supporting.

If I'm buying a high-end video card, I don't even need upscaling tech (at least, not at first) and if I ever do, I can be sure that the maker of my card will have a polished solution by the time I need to use it. OTOH, if I'm buying a low-end card, upscaling tech will be more relevant but my priority on a low-end card would be the best native performance I can get for my money.

Sure, for a low-end card, DLSS was an advantage but only in the short-term. Where nVidia shot itself in the foot was in the belief that they were somehow justified in trying to charge more for an RTX 3050 than AMD did for an RX 6600 XT. I'm not talking about MSRP here because those became irrelevant in that generation very quickly. The prices on Radeon cards did fall but they fell because AMD allowed them to while nVidia didn't.

I honestly don't understand how these two are sometimes considered equivalents when it's so patently obvious that GeForce cards are a terrible buy. Right now the RTX 3050 costs $280 at Newegg, while the competing Radeon, the RX 6600 XT, is $5 less at $275 despite being a staggering 54% faster. The hardware is what creates that performance delta and no amount of software can mitigate it. Note that these are valid "Sold and Shipped by Newegg" prices, not prices that have been manipulated by scalpers. DLSS cannot even hope to offset a performance difference that large, and that's not even taking into account that FSR completely mitigates any advantage that DLSS offers. Thus, the hardware matters.
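To put rough numbers on the value argument above, here's a quick cost-per-frame sketch. Only the two Newegg prices and the 54% figure come from the post; the baseline frame rate is a made-up placeholder just to show the arithmetic.

```python
# Cost-per-frame sketch. The prices and the +54% figure come from the post
# above; the baseline fps is a hypothetical placeholder to illustrate the math.
cards = {
    "RTX 3050":   {"price_usd": 280, "relative_perf": 1.00},
    "RX 6600 XT": {"price_usd": 275, "relative_perf": 1.54},  # ~54% faster
}

BASELINE_FPS = 60  # assumed 1080p average for the RTX 3050 (placeholder)

for name, card in cards.items():
    fps = BASELINE_FPS * card["relative_perf"]
    print(f"{name}: ${card['price_usd']} for ~{fps:.0f} fps "
          f"-> ${card['price_usd'] / fps:.2f} per frame")
```

Under those placeholder numbers the Radeon works out to roughly $3.00 per frame versus about $4.67 for the GeForce, and the ratio stays the same whatever baseline you pick, since the prices are nearly equal and the performance differs by 54%.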

I'm not an AMD fanboy, I just buy Ryzen and Radeon because in both the CPU and GPU space, AMD is definitely the lesser of two (or now three) evils. For gamers, every software-based advantage a card has never lasts longer than a few months. First nVidia had G-Sync, so AMD released FreeSync. Some said that G-Sync was superior, but FreeSync didn't cost anything and was good enough to eliminate screen tearing; now they're indistinguishable. AMD released Smart Access Memory, so nVidia countered by enabling Resizable BAR. Some would say that SAM is superior to Resizable BAR but, again, the difference isn't enough to matter, so who cares? We had DLSS, which was countered with FSR; again, some say DLSS is better, but the difference is too slight to matter. Now we have this "frame generation" technology (which I find dubious at best) that only works on RTX 40-series cards (how nice of them), and AMD will have a response for it probably in April or May.

As we see, software advantages are short-term only. Hardware advantages are both short and long-term. More attention should therefore be paid strictly to the cards themselves, the hardware. Sure, for professionals, the software support matters more but how many video cards are purchased with professional applications in mind compared to the number that are purchased for gaming? I'd be surprised if it was more than 0.1% of total sales.

This is where nVidia shifted the narrative to their own advantage. With the exception of the halo level, they compete very badly against AMD, so they come up with these gimmicks to try to justify the extra cost as if it's some kind of magic that only they can deliver. I gotta hand it to them though, the tactic works on the masses, and the tech press should be calling out shenanigans from any company that tries to pull them. Your coverage of the RTX 3060 8GB was nothing short of phenomenal. More articles like that are what this industry needs, not more articles trying to compare DLSS with FSR. 👍
 
No regrets buying that RX 6750 XT for $100 less than a 3060 now. Suck on that, 4080/4090, LOL! Never had a use for ray tracing before with my 2000-series card. Still don't, plus I don't want to use a crutch like DLSS anyway. Fuzzy graphics are such a dope feature, yo. *fart noise*
Sure, no RT doesn't mean no reflections in the game at all. Nvidia wants people to believe that and makes more nv*****s.
 
The only games I've played where RT seems to make a difference are Control and Guardians of the Galaxy; other than those two, the performance cost is either too high, or there's so much other stuff happening on screen (because it's a game you're playing) that the effects of RT are missed.
Yes, I've seen the difference in Control and it does look markedly different but Control is basically a game that nVidia paid someone to make in order to show off RT. I've never played Guardians but I have played CP2077 and Gotham Knights. Any difference that it made in those games was nothing that I could notice except that some surfaces in Gotham Knights were a bit shinier with RT turned on. I actually started laughing when I realised that was what people were trading high fps for.
The reflections and lighting that game engines already have are so darn good (Hogwarts Legacy, AC Origins/Odyssey/Valhalla, Callisto Protocol, Days Gone, RDR2, GTA5) that RT, IMHO, just isn't worth it yet, and that comes from someone who bought their card because they wanted a decent RT option.
Sure, and there's nothing wrong with that. You wanted to see what all the fuss was about and I don't blame you for that because I was actually curious about it myself. When I saw the implementation of it on the Turing cards, I remember thinking to myself "Why would anyone care about that in a first-person shooter or RPG game?" because I never notice what shadows or ponds look like when I'm playing a game.

I went and played some games at my friend's place on his RTX 3080 (he's an FS2020 nut so that's why he has this card) to see what RT was all about. He wasn't the least bit impressed with RT because of the performance hit and told me that it was essentially BS. I wanted to try it anyway so he brought up Metro: Exodus. I won't lie, it looked very nice but when actually gaming, it just vanished into the background and I didn't care. What I did care about though was the performance hit. Turning it off and upping the resolution to 2160p was a far better experience than 1440p with RT on. This is why he was unimpressed with it. He only chose the RTX 3080 because GeForce cards give better performance on average in FS2020 than Radeon cards.

I knew then that it could be an amazing thing but I also knew that we were nowhere close to the level that it would be worth paying more for (and we still aren't).
 
Yes, I've seen the difference in Control and it does look markedly different but Control is basically a game that nVidia paid someone to make in order to show off RT. I've never played Guardians but I have played CP2077 and Gotham Knights. Any difference that it made in those games was nothing that I could notice except that some surfaces in Gotham Knights were a bit shinier with RT turned on. I actually started laughing when I realised that was what people were trading high fps for.

Sure, and there's nothing wrong with that. You wanted to see what all the fuss was about and I don't blame you for that because I was actually curious about it myself. When I saw the implementation of it on the Turing cards, I remember thinking to myself "Why would anyone care about that in a first-person shooter or RPG game?" because I never notice what shadows or ponds look like when I'm playing a game.

I went and played some games at my friend's place on his RTX 3080 (he's an FS2020 nut so that's why he has this card) to see what RT was all about. He wasn't the least bit impressed with RT because of the performance hit and told me that it was essentially BS. I wanted to try it anyway so he brought up Metro: Exodus. I won't lie, it looked very nice but when actually gaming, it just vanished into the background and I didn't care. What I did care about though was the performance hit. Turning it off and upping the resolution to 2160p was a far better experience than 1440p with RT on. This is why he was unimpressed with it. He only chose the RTX 3080 because GeForce cards give better performance on average in FS2020 than Radeon cards.

I knew then that it could be an amazing thing but I also knew that we were nowhere close to the level that it would be worth paying more for (and we still aren't).

You guys are all missing the point here. You should expect it to run well on top hardware, even if you don't have that great a card yourself. And this is not a situation like "Crysis", where a very advanced game was delivered before there was hardware on the market to run it.

This is a sloppy, rushed mess of a game for PC that was delivered to grab money with a hollow promise to fix it later with patches. This is unacceptable and shameful for the PC gaming community! The reason it runs like crap is not because RT is so demanding, it's because PC comes dead last on their priority list!
 
You guys are all missing the point here. You should expect it to run well on top hardware, even if you don't have that great a card yourself. And this is not a situation like "Crysis", where a very advanced game was delivered before there was hardware on the market to run it.

This is a sloppy, rushed mess of a game for PC that was delivered to grab money with a hollow promise to fix it later with patches. This is unacceptable and shameful for the PC gaming community! The reason it runs like crap is not because RT is so demanding, it's because PC comes dead last on their priority list!
You could be right, and from what I understand there is a patch that fixes this problem (although I don't know what it is because I don't own the game yet), but there's no question that there is a definite line drawn in the performance charts that basically says "only cards with 12GB or more can do this", regardless of whether the card is a Radeon or a GeForce. Hell, the RTX 3060 outperformed the RTX 3080 because of this. That's just crazy!

Even if this is a problem with the game itself, it's still a perfect demonstration of what is to come. This will be normal very soon and there won't be any debate about implementation. Remember that Far Cry 6's HD texture pack requires 11GB of VRAM to implement and that game came out in 2021. It's only a matter of time (and it won't be long) before 10GB is no longer sufficient at 2160p with or without RT. Hell, I remember when 8GB of RAM was considered overkill for a PC and now, for some things, even 16GB isn't enough.
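For anyone wondering where a 10GB budget actually goes at 2160p, here's a back-of-envelope sketch. Every number in it is an assumption I made up for illustration; nothing is measured from Hogwarts Legacy, Far Cry 6 or any other game.

```python
# Illustrative 4K VRAM budget. All target counts and pool sizes below are
# assumptions for the sake of the arithmetic, not measurements from any game.
WIDTH, HEIGHT = 3840, 2160
BYTES_PER_PIXEL = 4  # e.g. an RGBA8 or packed HDR render target

def targets_mib(count: int) -> float:
    """VRAM used by `count` full-resolution render targets, in MiB."""
    return WIDTH * HEIGHT * BYTES_PER_PIXEL * count / 2**20

budget_mib = {
    "G-buffer + depth (assume 6 targets)":      targets_mib(6),
    "post-processing chain (assume 4 targets)": targets_mib(4),
    "streamed texture pool (assumed)":          6 * 1024,  # 6 GiB guess
    "geometry, RT BVH, misc (assumed)":         2 * 1024,  # 2 GiB guess
}

for item, mib in budget_mib.items():
    print(f"{item:45s} {mib:8,.0f} MiB")
total_gib = sum(budget_mib.values()) / 1024
print(f"{'total':45s} {total_gib:8.1f} GiB of a 10 GiB card")
```

The takeaway from the sketch is that the 4K render targets themselves are tiny; it's the streamed texture pool that eats the budget, which is why an HD texture pack is exactly the kind of thing that pushes a 10GB card over the edge.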

I'm still at a loss as to why the RTX 3080 has 10GB, the RTX 3070/Ti has 8GB and the RTX 3060 has 12GB. That's pretty bass-ackwards if you ask me. 😆
 
You could be right, and from what I understand there is a patch that fixes this problem (although I don't know what it is because I don't own the game yet), but there's no question that there is a definite line drawn in the performance charts that basically says "only cards with 12GB or more can do this", regardless of whether the card is a Radeon or a GeForce. Hell, the RTX 3060 outperformed the RTX 3080 because of this. That's just crazy!

Even if this is a problem with the game itself, it's still a perfect demonstration of what is to come. This will be normal very soon and there won't be any debate about implementation. Remember that Far Cry 6's HD texture pack requires 11GB of VRAM to implement and that game came out in 2021. It's only a matter of time (and it won't be long) before 10GB is no longer sufficient at 2160p with or without RT. Hell, I remember when 8GB of RAM was considered overkill for a PC and now, for some things, even 16GB isn't enough.

I'm still at a loss as to why the RTX 3080 has 10GB, the RTX 3070/Ti has 8GB and the RTX 3060 has 12GB. That's pretty bass-ackwards if you ask me. 😆

Because the xx80 cards have always been the last card you want to buy. The 980, 1080, 2080, 3080, and now 4080 have all sucked. Either get the xx80 Ti (or the xx90) or wait for the xx70 Ti.

(I have owned the 1080 Ti, 2080 Ti, 3090, and now the 4090.)
 
Yeah, that's true. The Radeons were clearly the better cards overall in this test, and they were the only cards that weren't using drivers optimised for this game. Ironic, eh? 😆

Exactly! And it gets even better: RDNA3 won the first round here with, effectively, beta-quality drivers. The real match only starts with 23.2.1; every GPU/gaming benchmark for the next-gen cards (and even for RDNA2, at least) is now obsolete, since that driver delivers an average +5% across the board and tons of games are faster. Time to flip places in most charts where a Radeon card was just narrowly behind something else... ;)
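The "flip places" arithmetic is worth spelling out: any chart where a Radeon trailed by less than roughly 5% reverses once a uniform +5% uplift is applied. A toy example with made-up fps values (nothing below is taken from this review):

```python
# Toy example: a uniform +5% driver uplift flips any result where the Radeon
# trailed by less than ~5%. Both fps values are made-up placeholders.
DRIVER_UPLIFT = 1.05  # average uplift attributed to the 23.2.1 driver

radeon_before, geforce = 97.0, 100.0   # hypothetical "before" chart values
radeon_after = radeon_before * DRIVER_UPLIFT

print(f"before: Radeon {radeon_before:.1f} fps vs GeForce {geforce:.1f} fps")
print(f"after:  Radeon {radeon_after:.1f} fps vs GeForce {geforce:.1f} fps")
print("positions flip:", radeon_after > geforce > radeon_before)
```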
 
Admittedly I have only watched videos online, but graphically it certainly doesn't look like it should be so demanding. Am I wrong here? I could be, but I just don't see anything special.
 
I recently upgraded my 2080 Super to a 6800 XT, not so much for gaming as for photo processing. 8GB was a serious problem when doing 2x AI upscaling with Topaz Gigapixel on 45MP files. I knew the 3080 was better at RT, but I didn't care; 10GB was not enough of an upgrade. My 6800 XT renders upscaled images much faster than the 2080 Super. It's also on average 50%+ faster in games at 1440p. The fact that, second-hand, it was a hell of a lot cheaper than the 3080 or 3080 Ti was the final nail in the coffin. I had high hopes for the 7900 XT, but the value is not there at around $1,600 AUD and the performance is nowhere near what the hardware seems capable of.
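To give a rough idea of why 8GB was tight for 2x upscaling of a 45MP photo: assume the upscaler holds uncompressed float32 RGB tensors for the input and output plus some working space for the network. These are my own illustrative assumptions; real tools like Gigapixel tile the image and manage memory differently, and Windows plus other apps hold some VRAM as well.

```python
# Rough VRAM estimate for 2x AI upscaling of a 45 MP photo. The tensor layout
# and the activation figure are illustrative assumptions, not Gigapixel specifics.
INPUT_MEGAPIXELS = 45
SCALE = 2                          # 2x per axis -> 4x the pixel count
CHANNELS, BYTES_PER_VALUE = 3, 4   # RGB, float32

def tensor_gb(megapixels: float) -> float:
    """Size of an uncompressed RGB float32 tensor, in GB."""
    return megapixels * 1e6 * CHANNELS * BYTES_PER_VALUE / 1e9

input_gb = tensor_gb(INPUT_MEGAPIXELS)
output_gb = tensor_gb(INPUT_MEGAPIXELS * SCALE**2)
activations_gb = 3.0               # assumed working space for the model

print(f"input tensor:        {input_gb:.2f} GB")
print(f"output tensor:       {output_gb:.2f} GB")
print(f"assumed activations: {activations_gb:.1f} GB")
print(f"rough peak:          {input_gb + output_gb + activations_gb:.1f} GB, "
      f"before the OS and other apps take their share of 8 GB")
```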

The 4070 Ti is a joke for the price. If it were 256-bit with 16GB and just had fewer CUDA/Tensor cores than the 4080 and was, say, 20% slower, it would be OK at $799.
 
That's just horseshit. First of all, the 3080 plays fine even at 1440p with RT. I'm playing at 3440x1440 with a 3060 Ti; you just need to, you know, lower the textures. Saying the 3080 is obsolete because you can't play the game maxed out on a 2.5-year-old card is laughable at best. How many fps did you get in Cyberpunk, ultra maxed out at 1440p, a two-year-old game, on your 6800 XT? Using your logic, your 6800 XT was obsolete the moment it hit the market, lol.

You say DLSS is useless, but your 6800 XT actually gets 39 fps at 1440p RT ultra with no FSR. So... are you enjoying the game at 39 fps or what?
QFT, every last bit. AMD's bois working overtime to try to shoot down the 3080, lol. HUB's testing seems like the loner here, TPU posted the expected results, and IIRC the patch pretty much addresses it for Nvidia GPUs if you did see the issue. Obsolete, lol, what a joke; the obsolete thing here is HUB's numbers.
 
QFT, every last bit. AMD's bois working overtime to try to shoot down the 3080, lol. HUB's testing seems like the loner here, TPU posted the expected results, and IIRC the patch pretty much addresses it for Nvidia GPUs if you did see the issue. Obsolete, lol, what a joke; the obsolete thing here is HUB's numbers.


Strawman is presenting a strawman...

The facts are that the 6800 XT is a better all-around performer than the 3080. It's just lemmings and kids whose heads are stuck on marketing that don't know it. So these same lemmings feign superiority... hiding from the facts:

yet...
[Attached chart: Cyberpunk 2077 1440p benchmark results]
 
Typical brute-force approach sprinkled with upscaling techniques to make up for the coding shortfalls. I'm at 35% progress and the game's performance is all over the place; utilization, not so much. IMO the best balance on my 4090 Suprim Liquid with a 7700X at 5.65GHz and 32GB of CL30 RAM at 6GHz is 4K ultra settings, RT on, upscaling off, but frame generation on, for the best visual quality with playable latency. Getting around 80 to 90fps with Nvidia DLAA anti-aliasing.
Update: Reflex set to boost mode.
 