MojoKid writes: NVIDIA's Maxwell GPU architecture has been well-received in the gaming world, thanks to cards like the GeForce GTX Titan X and the GeForce GTX 980. NVIDIA recently took the time to bring that same Maxwell goodness over to the workstation market as well, and the result is the new Quadro M6000, NVIDIA's new highest-end workstation platform. Like the Titan X, the M6000 is based on the full-fat version of the Maxwell GPU, the GM200. Also like the GeForce GTX Titan X, the Quadro M6000 has 12GB of GDDR5, 3072 GPU cores, 192 texture units (TMUs), and 96 render outputs (ROPs). NVIDIA has said that the M6000 will beat out the previous-gen Quadro K6000 in a significant way, in pro workstation applications as well as in GPGPU, rendering, and encoding applications that can be GPU-accelerated. One thing that's changed with the launch of the M6000 is that AMD no longer trades shots with NVIDIA for the top pro graphics performance spot. Last time around, there were some benchmarks that still favored team red; this time, the Quadro M6000 puts up pretty much a clean sweep.

From the comments:

If your algorithm is unstable at single-precision floating point, it's going to be unstable at double precision as well.

Do you even know what you're talking about? Error propagation, does that ring a bell? I'll give you a hint: if your computation requires a large number of operations, then the absolute magnitude of machine rounding errors is critical. Chain a few of those (say, in a Markov chain type of random walk) and the error growth is exponential. And, lest you think this is a 'very small niche', any matrix multiplication has O(n) operations per element. Using double instead of single is like being able to do periodic (expensive) full recomputations to control the stability of a fast-updating chain, instead of barely having enough precision even when doing the full recomputations.
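The error-propagation point from the comment thread (each step of a long chain amplifies whatever rounding error is already present, so the precision of one operation decides how many steps the result survives) can be sketched numerically. This is an illustrative example, not from the original discussion: the logistic map stands in for the "fast-updating chain", and single-precision arithmetic is approximated by rounding each step's result to float32 via `struct`.

```python
import struct

def round_f32(x: float) -> float:
    """Round a Python double to the nearest IEEE-754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def run_chain(steps: int) -> list[float]:
    """Iterate x -> 4x(1-x), a chaotic map where each step roughly
    doubles any existing error, in double vs. (approximate) single
    precision, and record the gap between the two trajectories."""
    x64 = 0.2               # double-precision trajectory
    x32 = round_f32(0.2)    # "single-precision" trajectory
    gap = []
    for _ in range(steps):
        x64 = 4.0 * x64 * (1.0 - x64)
        # Approximation of float32 arithmetic: compute the step in double,
        # then round the result back to single precision. (Real float32
        # hardware rounds every intermediate op, so it drifts at least
        # this fast.)
        x32 = round_f32(4.0 * x32 * (1.0 - x32))
        gap.append(abs(x64 - x32))
    return gap

gap = run_chain(60)
# The initial rounding error is tiny (one float32 ulp, ~1e-8 here), but
# after a few dozen steps the two trajectories have nothing in common.
```

The divergence is exactly the commenter's point: neither trajectory is "correct" forever, but the double-precision one tracks the true orbit for roughly twice as many steps before rounding error takes over.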