I believe this is a cost-saving measure even though it doesn't seem like one: Nvidia basically works on a single core architecture. After they're done with it, they develop both their enterprise/compute solutions in software to match and also their gaming features. I believe this is why Nvidia launched ray tracing and especially DLSS when they did: it was a result of including tensor cores on the shared architecture, since those were in demand for certain workstation/enterprise workloads, and then finding a way to take advantage of them in the gaming software suite. Basically "Hey, let's just use this ML compute power to implement stuff useful to gamers, like machine-learning-based upscaling tech."
Obviously I'm greatly over-simplifying things here, but instead of having two separate lines of chips, they develop one and then adapt the software on either side of the consumer/enterprise equation to utilize the hardware features regardless of the situation.
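For a sense of what that repurposing looks like in practice, here's a minimal CUDA sketch using the WMMA (tensor core) API to do one half-precision matrix multiply-accumulate, the basic building block of ML inference like an upscaling network. To be clear, this is just an illustration of what tensor cores accelerate, not anything from Nvidia's actual DLSS implementation:

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One 16x16x16 matrix-multiply-accumulate (D = A*B + C) on tensor cores.
// Half-precision GEMM tiles like this are what ML inference workloads
// (e.g. an upscaling network) spend most of their time doing.
// Launch with at least one full warp of 32 threads.
__global__ void wmma_tile(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);      // zero the accumulator
    wmma::load_matrix_sync(a_frag, a, 16);  // load 16x16 half tiles
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // runs on tensor cores
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}
```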
And in that regard, miners really are just compute customers with huge demand and such minimal actual compute requirements that they can mine just as efficiently on consumer products. In fact, even more so: expensive features like ECC VRAM don't impact hash rate at all, so the cheaper consumer product ends up better optimized for them, by accident.
Mining (SHA hashing) only needs strong integer performance, while real compute use cases need strong FP64.
Gaming needs FP32 at most.
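To make the integer-only point concrete, here's what a single SHA-256 round looks like as CUDA device code. This is just a sketch of the round function itself, not an optimized miner kernel, but it shows that the whole workload is 32-bit rotates, XORs and adds:

```cuda
#include <stdint.h>

// 32-bit right rotate: the core primitive of SHA-256.
__device__ uint32_t rotr(uint32_t x, int n) {
    return (x >> n) | (x << (32 - n));
}

// One SHA-256 compression round over the working state s[0..7]
// (a..h), round constant k, and message schedule word w.
// Note there is not a single floating-point operation here:
// mining hash rate is pure integer ALU throughput.
__device__ void sha256_round(uint32_t s[8], uint32_t k, uint32_t w) {
    uint32_t S1  = rotr(s[4], 6) ^ rotr(s[4], 11) ^ rotr(s[4], 25);
    uint32_t ch  = (s[4] & s[5]) ^ (~s[4] & s[6]);
    uint32_t t1  = s[7] + S1 + ch + k + w;
    uint32_t S0  = rotr(s[0], 2) ^ rotr(s[0], 13) ^ rotr(s[0], 22);
    uint32_t maj = (s[0] & s[1]) ^ (s[0] & s[2]) ^ (s[1] & s[2]);
    uint32_t t2  = S0 + maj;

    // Shift the working state: still nothing but integer moves and adds.
    s[7] = s[6]; s[6] = s[5]; s[5] = s[4]; s[4] = s[3] + t1;
    s[3] = s[2]; s[2] = s[1]; s[1] = s[0]; s[0] = t1 + t2;
}
```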
As AMD has already done, differentiating compute and gaming GPUs would allow Nvidia to use expensive cutting-edge EUV fabrication for high-margin compute GPUs while low-margin gaming GPUs stay on cheaper DUV nodes.
It's also beneficial for availability, as DUV lines have higher production throughput than EUV.
And while RDNA 2's availability on DUV is still bad, that's just AMD maximizing its capital usage on higher-margin Zen 3.