Comments on: Nvidia Previews Ampere Kicker To Turing GPU Accelerator
https://www.nextplatform.com/2020/10/12/nvidia-previews-ampere-kicker-to-turing-gpu-accelerator/
In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.
Fri, 16 Oct 2020 22:42:58 +0000

By: Vincent Poncet
https://www.nextplatform.com/2020/10/12/nvidia-previews-ampere-kicker-to-turing-gpu-accelerator/#comment-155817
Tue, 13 Oct 2020 16:42:41 +0000
In reply to Timothy Prickett Morgan.

Ah yes!

By: Matt Hillebrand
https://www.nextplatform.com/2020/10/12/nvidia-previews-ampere-kicker-to-turing-gpu-accelerator/#comment-155782
Tue, 13 Oct 2020 03:35:40 +0000

One advantage of the A6000 over the 3090 would be that you can stack four of them together on one XL-ATX motherboard, regardless of their clock speeds and prices. However, I'm just going to stick with two 3090s until the A6000 is revamped next year with GDDR6X memory and whatnot.

By: Timothy Prickett Morgan
https://www.nextplatform.com/2020/10/12/nvidia-previews-ampere-kicker-to-turing-gpu-accelerator/#comment-155775
Tue, 13 Oct 2020 00:51:58 +0000
In reply to Vincent Poncet.

The architecture whitepaper says it can do both; see page 38 of this document:

https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf
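The distinction the comments are chasing matters for numerics, not just throughput. A minimal Python sketch (plain NumPy, not Tensor Core code; the vector length and values are illustrative assumptions) shows why accumulating FP16 products in FP32 loses far less precision than accumulating them in FP16:

```python
import numpy as np

# Illustrative sketch only -- plain NumPy, not Tensor Core code. Both paths
# multiply in FP16; they differ only in the precision of the running sum.
a = np.full(10000, 0.01, dtype=np.float16)
b = np.full(10000, 0.01, dtype=np.float16)
products = a * b                    # FP16 x FP16 products, each ~1e-4

acc16 = np.float16(0.0)             # FP16 accumulate: rounds every step
acc32 = np.float32(0.0)             # FP32 accumulate: wider running sum
for p in products:
    acc16 = np.float16(acc16 + p)
    acc32 += np.float32(p)

# The true dot product is roughly 10000 * 1e-4 = 1.0. The FP16 running sum
# stalls once each addend falls below half the accumulator's ULP spacing,
# while the FP32 running sum stays close to the true value.
print("FP16 accumulate:", float(acc16))
print("FP32 accumulate:", float(acc32))
```

With a half-precision running sum, every addition is rounded back to FP16, so the sum stops growing long before it reaches the true value; this is why the FP16-multiply/FP32-accumulate Tensor Core mode is the one typically used for training.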

By: Vincent Poncet
https://www.nextplatform.com/2020/10/12/nvidia-previews-ampere-kicker-to-turing-gpu-accelerator/#comment-155768
Tue, 13 Oct 2020 00:06:17 +0000

"On the Tensor Cores with FP16 multiplication and FP16 accumulate, the GA102 has 142 teraops and hits 282 teraops when sparse matrix support is turned on and appropriate."

Shouldn't it be "On the Tensor Cores with FP16 multiplication and FP32 accumulate…"?

By: Vincent Poncet
https://www.nextplatform.com/2020/10/12/nvidia-previews-ampere-kicker-to-turing-gpu-accelerator/#comment-155767
Tue, 13 Oct 2020 00:05:28 +0000

"On the Tensor Cores with FP16 multiplication and FP16 accumulate, the GA102 has 142 teraops and hits 282 teraops when sparse matrix support is turned on and appropriate."

Shouldn't it be "On the Tensor Cores with FP32 multiplication and FP16 accumulate…"?
