Cyberpunk 2077 DLSS + Ray Tracing Benchmark

If what you claim is true, then why doesn't Microsoft simply release a new version of Minecraft where the world is made of 1 mm particles?
Don't move the goalposts. Your original claim was that virtual reality was forever impossible, despite any future advances in computing power. Now your complaint is that Minecraft doesn't currently offer a fine-grained enough simulation? I'm sure future versions of Minecraft, circa 2030 or so, will offer a much smaller particle size.
 

My claim is that simulation is forever impossible. Minecraft was just an example. Once they manage to switch to 1mm resolution (in the far future) people will be happy for a few months, and then they'll start complaining that they can see the little cubes (or spheres).

So, in the next iteration they'll switch to even smaller particles that cannot be seen by the naked eye. And guess what? Someone will say: "I took a look through the magnifying glass in the game, and I could see the building blocks. That's crap. I want a better simulation, where I can look at things through the magnifying lens without seeing the quantum particles of the game."

In the next iteration they'll make even smaller particles. That can take a long time, because a 10x reduction in particle size means a 1000x increase in data size (the particle count scales with the cube of the resolution) and vastly higher compute requirements. Still, eventually, they'll manage it. But then someone will say: "I took a look through the microscope, and all I could see were little spheres. I want to be a forensic expert in the game and analyze blood samples, but your crappy game has a minimum object size of 0.05 mm, which is just not enough."
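Just to put numbers on that scaling claim, here's a rough sketch (my own back-of-the-envelope figures: a cubic world filled with cubic particles, counting particles only and ignoring simulation timesteps):

```python
# Rough voxel-count scaling: shrinking the particle size by a factor
# of k multiplies the particle count by k cubed.

def particle_count(world_side_m: float, particle_size_m: float) -> float:
    """Number of cubic particles needed to fill a cubic world."""
    per_axis = world_side_m / particle_size_m
    return per_axis ** 3

# A 1 km^3 world at 1 mm vs 0.1 mm particles:
coarse = particle_count(1000.0, 1e-3)  # ~1e18 particles
fine = particle_count(1000.0, 1e-4)    # ~1e21 particles

print(f"{fine / coarse:.0f}x more particles")  # 1000x
```

So each 10x step in fidelity costs 1000x in data before you even touch the physics update rate.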

And so on. Users are never satisfied. Some of them will ask for more detail, others will ask for a bigger world. Compute requirements explode with both: with the cube of the resolution and with the volume of the world.

If you think users will be satisfied one day, then why aren't they already? Games nowadays look and feel orders of magnitude better than games from the 1990s, an extraordinary improvement in quality, so you'd think gamers would be happy by now. But they aren't. Whatever exists today is old tomorrow. And that's why simulating the world is a never-ending story.

Well, that's one part of why it's impossible. Customers would actually be happy once the sim is indistinguishable from reality, but that can never happen, because reality is too complex. Even if we built a computer the size of Cyprus, it still wouldn't be able to simulate our planet with a satisfying amount of detail.
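To put a rough number on "reality is too complex": even storing a single byte per cubic millimetre of Earth dwarfs all the storage humanity has built. A sketch using textbook figures (the one-byte-per-voxel storage cost and the ~100 ZB global-storage estimate are my assumptions):

```python
import math

# Earth as a sphere of radius ~6371 km, voxelized at 1 mm resolution.
earth_radius_mm = 6371e6  # 6371 km expressed in millimetres
earth_volume_mm3 = (4 / 3) * math.pi * earth_radius_mm ** 3

bytes_per_voxel = 1  # wildly optimistic: one byte of state per voxel
total_bytes = earth_volume_mm3 * bytes_per_voxel

# Estimates of all data storage on Earth are on the order of 1e23 bytes.
global_storage_bytes = 1e23

print(f"voxels needed: {earth_volume_mm3:.2e}")  # ~1.08e+30
print(f"shortfall vs. all storage on Earth: {total_bytes / global_storage_bytes:.0e}x")
```

Even at one byte per voxel, you're millions of times short, and that's before simulating any physics on those voxels.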
 
Worth noting: now that I run another PC with a 3080, there isn't a whole lot of difference from an RTX 2080 Ti once its BIOS is updated to give it more juice and you have a proper fan curve (or water cooling). I have a $120 AIO on my 2080 Ti and a 380 W max power limit now. It certainly beats any 3070 and isn't far off my 3080.

If you have an RTX 2080 Ti, it's worth checking GPU-Z -> Advanced -> NVIDIA BIOS to see what your power limit is set at. Most dual 8-pin cards share the same PCB but are held back by a BIOS power limit and/or a crappy fan curve. Just keep the card under 60 degrees max with a custom fan curve and boost it up. If the limit is under 312 W or so, update your BIOS to a 338 W, 360 W or the 380 W generic KFA2/Galax one: https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910. If you have a 3-fan card or one with 3x 8-pin connectors, you can use a 450 W+ power limit.

It's an easy process to raise the max power limit above 360 W, and you can expect the 1440p results to be closer to what can be achieved at 4K. Quality cards that keep getting quicker thanks to NVIDIA's love for improving DLSS & RT drivers.

After putting the 300-380 W BIOS on my elderly Asus dual-fan card (Sept 2018) with a custom fan curve in MSI Afterburner, it generally stayed under 60 degrees in all weather. Later I added a cheapo H55 AIO (radiator behind a case front intake fan), an NZXT Kraken mounting bracket and a Noctua 92 mm fan for VRAM & VRM cooling. It now lives at 2100-2145 MHz with +1000 on the VRAM, idling at ambient temperature and maxing out in the mid 50s.

Generally, comparison benchmarks use 220-250 W 2080 Tis, which leaves plenty of headroom, and with quality 50c capacitors rather than the 20c ones used on the newer series (jk, but they are still much better than jacket man attempted to show). Especially if you have a 2.7+ slot card or water-cool it :p Either way, the extra OC headroom, 352-bit bus & 11 GB VRAM allow them to beat RTX 3070s in anything, unless the card is literally underclocked.

It beats RTX 3070s in most tests by at least 20%, including in 3DMark against every RTX 3070 that has run it.

A couple of examples from my 2080 Ti in 3DMark.

Port Royal (my 2080 Ti = 10506) vs all RTX 3070s in the world (8207): https://www.3dmark.com/search#advanced?test=pr P&cpuId=&gpuId=1342&gpuCount=0&deviceType=ALL&memoryChannels=0&country=&scoreType=overallScore&hofMode=false&showInvalidResults=false&freeParams=&minGpuCoreClock=&maxGpuCoreClock=&minGpuMemClock=&maxGpuMemClock=&minCpuClock=&maxCpuClock=

Time Spy (2080 Ti = 16111) vs the fastest 3070 in the world (15726; average 12523):
https://www.3dmark.com/search#advanced?test=spy P&cpuId=&gpuId=1342&gpuCount=0&deviceType=ALL&memoryChannels=0&country=&scoreType=overallScore&hofMode=false&showInvalidResults=false&freeParams=&minGpuCoreClock=&maxGpuCoreClock=&minGpuMemClock=&maxGpuMemClock=&minCpuClock=&maxCpuClock=

Fire Strike Extreme (2080 Ti = 18447) vs ~15.5k for an RTX 3070 or 19.2k for an RTX 3080: https://www.3dmark.com/fs/24479322
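For what it's worth, the margins work out as follows. A quick sanity check using only the scores quoted above (the 15.5k Fire Strike Extreme figure for the 3070 is the rounded number from this post):

```python
# Percent lead of the quoted 2080 Ti scores over the quoted 3070 figures.
scores = {
    "Port Royal": (10506, 8207),            # (my 2080 Ti, all-3070 result)
    "Time Spy": (16111, 12523),             # (my 2080 Ti, 3070 average)
    "Fire Strike Extreme": (18447, 15500),  # (my 2080 Ti, ~15.5k 3070)
}

for test, (ti_score, s3070_score) in scores.items():
    lead = (ti_score / s3070_score - 1) * 100
    print(f"{test}: +{lead:.0f}%")
```

Roughly a 19-29% lead across the three tests, in line with the "at least 20% in most tests" claim.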
 
The 20xx series is still good enough, but it runs too hot; the 30xx series uses less voltage and wattage.
If you can run a game on low settings with a 10xx Ti GPU and want better non-RTX performance, you can stay on low, even for 1920x1080 gameplay. Why go 4K when you can go 1080p?
Later, when the next 4th-gen GPUs arrive, they'll beat the 3xxx generation by about 70% or better. So why not wait on 4K and go 2K for a while?
AMD, Intel and NVIDIA want us to buy too-expensive GPUs and CPUs just to get somewhat better in-game FPS half a year later. PCIe 5.0 and so on. Give us a break: let us first get PCIe 3.0 gaming, then PCIe 4.0, with stable drivers. And what comes after DX12.2, DX13?
So your GPU can run 4K or 8K now, but it will be a lame GPU/CPU combo later, when DDR5 arrives at 8400 MHz with stable drivers. DDR3/DDR4 gaming will soon look like ages ago.
PCIe 1.0 was the era of Far Cry 1 and many other games, at ~1 GB/s speeds. Now you'll need 24-48 GB just to run games?
If I find a 48 GB GPU, congratulate me. Could this one run better than an RTX 3090 Ti? https://www.nextron.no/home/product/VCQRTX8000 Still, I can't just buy a PCIe 3.0 card; I must wait for PCIe 4.0 from AMD, Intel and NVIDIA. It costs too much money; there's no way I could ever buy at that price. EVER.
 