I would avoid using that example, as you don't typically see Nvidia improve that drastically in a generation. It's an outlier generation, and it's definitely what gave Nvidia the room to price gouge Turing customers.
It's not really an outlier, it's just a full node jump.
Maxwell (GTX 970) was TSMC 28nm; Pascal (GTX 10 series) went to TSMC 16nm. Big density gain, in addition to the architecture gains.
Turing (RTX 20 series) was only a half node, arguably not even that. It was called '12nm' but it was really just a slightly improved 16nm, with modest transistor density gains. Basically 16nm+.
So there was never going to be a big jump from Pascal to Turing, especially since a huge amount of die area was handed over to the ray tracing and Tensor cores.
As far as we know, Nvidia have gone to 7nm EUV. So that's second-generation 7nm, or 7nm+. Whether it's Samsung's or TSMC's, that is a full node jump over TSMC 12nm. Another big density gain.
If the clocks scale well too, and Radeon VII suggests they will, you're looking at another potentially large step, on the scale of Maxwell to Pascal in 2016.
We've seen the Minecraft ray-traced demo on Xbox Series X, right? Since the console is supposed to run every game at 4K@60fps, and the RTX 2080 Ti runs the same title at 40fps without DLSS, I think RDNA 2 will have an improved solution compared to selected Nvidia DLSS games.
This is the worst time ever to buy a new graphics card.
Let's wait for the RDNA2/Ampere GPUs and then buy a 5700 XT for US$200 or a 2070 Super for US$230.
Nobody official has said every Series X game will run at 4K 60FPS. Most developers may target it, but it's not a guarantee.
That Minecraft ray-traced demo ran at 1080p and averaged over 30FPS, but not much more.
The RTX 2080 Ti averaged 70FPS at 1080p native in Tom's Hardware's tests. The Series X demo didn't show any more performance than, say, an RTX 2060 Super or RTX 2070, which averaged ~45FPS.
Don't go thinking the Series X is somehow much faster than existing RTX cards.