I believe this is a cost-saving measure even though it doesn't seem like one: Nvidia basically works on a single core architecture. Once it's done, they develop both their enterprise/compute solutions in software to match and also their gaming features. I believe this is why Nvidia launched ray tracing, and especially DLSS, when they did: tensor cores ended up on the shared architecture because they were in demand for certain workstation/enterprise workloads, and then Nvidia found a way to take advantage of them in the gaming software suite, as if saying "Hey, let's just use this ML compute power to implement stuff useful to gamers, like machine-learning-based upscaling tech."
Obviously I am greatly over-simplifying things here, but instead of maintaining two separate lines of chips, they develop one and then adapt the needs on either side of the consumer/enterprise equation to utilize the hardware features, whatever the situation.
And in that regard, miners really are just compute customers with so much demand and so few actual compute requirements that they can mine just as efficiently with consumer products. In fact, even more so: expensive features like ECC memory for the VRAM don't impact hash rate, so the consumer product is cheaper while being, by accident, better optimized for them.