Since the last few (read: most) comments aren't particularly on topic, let's try to bring things back into focus.
It's also possible that the mystery GPU is a
GV100 (Volta) replacement, in which case it would be a
Tesla model, rather than a GeForce or Quadro. The V100 add-in card has a base clock of 1.23 GHz and the HBM2 is clocked to 0.876 GHz, so the indicated clocks in the unknown GPU aren't wildly different from these.
Just for reference, this is the same test run on a Titan X (Pascal):
Benchmark results for a System manufacturer System Product Name with an Intel Core i7-9700K processor.
browser.geekbench.com
Picking a random Tesla V100 result, one run with a 6-core/12-thread CPU, gives:
Benchmark results for a Google Google Compute Engine with an Intel Xeon processor.
browser.geekbench.com
The results are a little down on the unnamed device in some areas, ahead in others:
Benchmark results for a System manufacturer System Product Name with an Intel Core i7-8700K processor.
browser.geekbench.com
Sobel 46 vs 69.3 Gpixels/sec
Canny 7.39 vs 12.0 Gpixels/sec
Stereo Matching 890.4 vs 873.2 Gpixels/sec
Histogram Eq 26.0 vs 30.5 Gpixels/sec
Gaussian 10.6 vs 16.2 Gpixels/sec
DoF 5.85 vs 7.38 Gpixels/sec
Face Detection 302.9 vs 307.0 images/sec
Horizon Detection 3.68 vs 5.44 Gpixels/sec
Feature Matching 0.899 vs 5.44 Gpixels/sec
Particle Physics 23082 vs 19714 fps
SFFT 1.86 vs 2.60 Tflops
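To get a quicker sense of the size of those gaps, here's a small sketch (my own, not anything from Geekbench) that turns the figures above into percentage differences. The first number in each pair is treated as the baseline; the units differ per test, but the ratio is unit-free:

```python
# Percentage difference between the two sets of Geekbench CUDA
# sub-scores quoted above (first figure = baseline, second = comparison).
scores = {
    "Sobel":             (46.0,  69.3),
    "Canny":             (7.39,  12.0),
    "Stereo Matching":   (890.4, 873.2),
    "Histogram Eq":      (26.0,  30.5),
    "Gaussian":          (10.6,  16.2),
    "DoF":               (5.85,  7.38),
    "Face Detection":    (302.9, 307.0),
    "Horizon Detection": (3.68,  5.44),
    "Feature Matching":  (0.899, 5.44),
    "Particle Physics":  (23082, 19714),
    "SFFT":              (1.86,  2.60),
}

for name, (base, other) in scores.items():
    delta = 100 * (other - base) / base
    print(f"{name:18s} {delta:+7.1f}%")
```

Most tests land within roughly +/-50% of each other, which makes the Feature Matching outlier (several hundred percent apart) stand out all the more.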
The Feature Matching results are vastly different, though. Here's another V100 result, but this time with a far larger CPU (48 cores):
Benchmark results for a Dell Inc. PowerEdge R740 with an Intel Xeon Platinum 8268 processor.
browser.geekbench.com
It would seem that the CPU, unfortunately, has a significant impact on the CUDA Compute results, making it difficult to properly compare these new findings.