Comments on: With Blackwell GPUs, AI Gets Cheaper And Easier, Competing With Nvidia Gets Harder https://www.nextplatform.com/2024/03/18/with-blackwell-gpus-ai-gets-cheaper-and-easier-competing-with-nvidia-gets-harder/ In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds. Thu, 18 Apr 2024 20:36:54 +0000 hourly 1 https://wordpress.org/?v=6.7.1 By: HuMo https://www.nextplatform.com/2024/03/18/with-blackwell-gpus-ai-gets-cheaper-and-easier-competing-with-nvidia-gets-harder/#comment-222032 Wed, 20 Mar 2024 10:48:59 +0000 https://www.nextplatform.com/?p=143832#comment-222032 In reply to EC.

The revenge of the AI frog in a low-precision glass jar? 8^b ( https://www.agecon.com.au/fun-weather-facts/frog-in-a-jar )

]]>
By: UK https://www.nextplatform.com/2024/03/18/with-blackwell-gpus-ai-gets-cheaper-and-easier-competing-with-nvidia-gets-harder/#comment-222029 Wed, 20 Mar 2024 10:15:14 +0000 https://www.nextplatform.com/?p=143832#comment-222029 In reply to John.

…I think you are just wise enough not to hop on every train passing by…and to stay a bit away from any Babel towers currently going up, as you don’t want to get hurt when they break…

]]>
By: SC https://www.nextplatform.com/2024/03/18/with-blackwell-gpus-ai-gets-cheaper-and-easier-competing-with-nvidia-gets-harder/#comment-222005 Tue, 19 Mar 2024 21:40:18 +0000 https://www.nextplatform.com/?p=143832#comment-222005 TPM, long-time reader here. Love this analysis, and it reminds me of the Nvidia K80, which I recall was essentially a packaging innovation putting two K40 GPUs on one card. Nvidia has clearly upped its game almost 10 years later.

]]>
By: Samuel A https://www.nextplatform.com/2024/03/18/with-blackwell-gpus-ai-gets-cheaper-and-easier-competing-with-nvidia-gets-harder/#comment-221994 Tue, 19 Mar 2024 15:58:13 +0000 https://www.nextplatform.com/?p=143832#comment-221994 @John. What drives the market is big bets, not what’s needed now! Traditional HPC like CFD belongs to the latter, and incremental gains in CPUs are sufficient, unless the community comes up with new AI algorithms for CFD that are more accurate than the current mod/sim (not happening in weather).

]]>
By: EC https://www.nextplatform.com/2024/03/18/with-blackwell-gpus-ai-gets-cheaper-and-easier-competing-with-nvidia-gets-harder/#comment-221992 Tue, 19 Mar 2024 15:17:08 +0000 https://www.nextplatform.com/?p=143832#comment-221992 In reply to Slim Albert.

Nvidia de-prioritized FP64 after V100, as it needs twice the die area of FP32. While FP64 will remain important, if one looks at some traditional FP64 workloads, say weather modeling and forecasting, ML running at lower precision will very likely replace them in the near future. https://www.prnewswire.com/news-releases/the-weather-company-expands-collaboration-with-nvidia-to-advance-ai-based-weather-forecasting-and-visualization-capabilities-302091980.html

]]>
By: John https://www.nextplatform.com/2024/03/18/with-blackwell-gpus-ai-gets-cheaper-and-easier-competing-with-nvidia-gets-harder/#comment-221990 Tue, 19 Mar 2024 13:54:29 +0000 https://www.nextplatform.com/?p=143832#comment-221990 So I wonder how useful these Blackwell devices are going to be for the large HPC shops, or even the smaller HPC shops. I worked many moons ago at a CFD software vendor, and it was amazing how the need for more memory and CPU drove them. Giving the simulations more precision (going from 8 to 16 to 32 bits and even higher) was one way to get better results. Now I’m sure some really smart people have made some big changes, but I don’t see how FP4 or FP8 really helps detailed simulations like weather or CFD or other math-heavy analysis.

So does Blackwell really work for those loads, or has AI taken over Nvidia (and the industry) to such an extent that HPC will be left behind? I’m still skeptical that AI will be the end-all that people fear/love. But maybe I’m too old for the excitement and too conservative to see the possibilities.
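For a rough sense of why FP4/FP8 worry the simulation crowd, here is a back-of-the-envelope sketch of per-operation rounding error by format. The bit layouts assumed are the common ones (FP4 as E2M1 and FP8 as E4M3 per the OCP microscaling formats, FP16/FP32/FP64 per IEEE 754); the formula is the standard unit-roundoff estimate, not anything specific to Nvidia’s implementation.

```python
def unit_roundoff(mantissa_bits: int) -> float:
    """Relative rounding error of a format with an implicit
    leading-1 bit: roughly 2^-(mantissa_bits + 1)."""
    return 2.0 ** -(mantissa_bits + 1)

# Mantissa (fraction) bit counts for the assumed formats:
# FP4 E2M1 has 1, FP8 E4M3 has 3, FP16 has 10, FP32 has 23, FP64 has 52.
for name, m in [("FP4 (E2M1)", 1), ("FP8 (E4M3)", 3),
                ("FP16", 10), ("FP32", 23), ("FP64", 52)]:
    print(f"{name}: ~{unit_roundoff(m):.1e} relative error per operation")
```

The gap is about 15 orders of magnitude between FP4 and FP64, which is why low precision only works where the algorithm (like neural-net training/inference) tolerates or averages out the noise.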

]]>
By: Xpea https://www.nextplatform.com/2024/03/18/with-blackwell-gpus-ai-gets-cheaper-and-easier-competing-with-nvidia-gets-harder/#comment-221987 Tue, 19 Mar 2024 13:00:20 +0000 https://www.nextplatform.com/?p=143832#comment-221987 I think you made a mistake on the memory. 8 * 8Hi HBM stacks = 8 * 24GB = 192GB. That’s with all 8 stacks populated. When 12Hi HBM3e becomes available, the total memory will go to 288GB (8 * 36GB).
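The capacity arithmetic above sketches out like this (a minimal check, assuming 3GB per DRAM layer, i.e. 24Gb HBM3e dies, and 8 stacks per package):

```python
GB_PER_LAYER = 3  # assumed 24Gb (3GB) HBM3e DRAM dies

def package_capacity_gb(stacks: int, layers_per_stack: int) -> int:
    """Total HBM capacity in GB for a given stack count and stack height."""
    return stacks * layers_per_stack * GB_PER_LAYER

print(package_capacity_gb(8, 8))   # 8-Hi stacks: 192 GB
print(package_capacity_gb(8, 12))  # 12-Hi stacks: 288 GB
```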

]]>
By: Slim Albert https://www.nextplatform.com/2024/03/18/with-blackwell-gpus-ai-gets-cheaper-and-easier-competing-with-nvidia-gets-harder/#comment-221966 Tue, 19 Mar 2024 02:52:07 +0000 https://www.nextplatform.com/?p=143832#comment-221966 Impressive upgrade by NVIDIA … essentially making Blackwells double-Hoppers, with an adaptive-precision tensor engine dynamically updated to include the FP4 micro scale. My bet on the 192GB of HBM is that it is 8 stacks of 24GB as described here: https://www.nextplatform.com/2023/07/26/micron-revs-up-bandwidth-and-capacity-on-hbm3-stacks/ (32GB stacks might be the nexter gen?).

FP64 could be a weak point (HPC), but all the rest, and the GB200, look great (esp. for AI) IMHO!

]]>
By: Timothy Prickett Morgan https://www.nextplatform.com/2024/03/18/with-blackwell-gpus-ai-gets-cheaper-and-easier-competing-with-nvidia-gets-harder/#comment-221962 Tue, 19 Mar 2024 00:47:17 +0000 https://www.nextplatform.com/?p=143832#comment-221962 In reply to EC.

Yes. Thanks.

]]>
By: Timothy Prickett Morgan https://www.nextplatform.com/2024/03/18/with-blackwell-gpus-ai-gets-cheaper-and-easier-competing-with-nvidia-gets-harder/#comment-221961 Tue, 19 Mar 2024 00:46:56 +0000 https://www.nextplatform.com/?p=143832#comment-221961 In reply to Chris Sommers.

Yup. A moving-too-fast typo.

]]>