Comments on: Nvidia To Build DGX Complexes In Clouds To Better Capitalize On Generative AI
https://www.nextplatform.com/2023/02/22/nvidia-to-build-dgx-complexes-in-clouds-to-better-capitalize-on-generative-ai/
In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.

By: Hubert
Sun, 26 Feb 2023 20:26:41 +0000
https://www.nextplatform.com/2023/02/22/nvidia-to-build-dgx-complexes-in-clouds-to-better-capitalize-on-generative-ai/#comment-205315
In reply to Hubert.

[In reply to TPM]: Quite so (especially w.r.t. CUDA)! On the CPU front, in relation to enthusiasm for an architecture, I’ll quote Robert Smith, who develops quantum computing simulation software in Common Lisp (SBCL) at Rigetti, eventually targeting Summit: “I also like ppc64el because I have two POWER8 servers racked up in my spare bedroom”. Many young minds with Neoverse laptops and workstations (affordable yet performant ones) would likely do wonders for that architecture’s software stack, in both the short and long terms (I think). There’s a bit of a gaping hole between smartphones/tablets/hobby boards on the low end and HPC/AI-ML/Enterprise/Cloud (A64FX, Grace, Altra Max, Graviton3, …) on the high end, except for Apple’s very nice kit (which doesn’t quite run Linux or Windows natively, I think).

By: Timothy Prickett Morgan
Fri, 24 Feb 2023 12:58:26 +0000
https://www.nextplatform.com/2023/02/22/nvidia-to-build-dgx-complexes-in-clouds-to-better-capitalize-on-generative-ai/#comment-205226
In reply to Hubert.

And hence, the IBM mainframe and the AS/400 (Power Systems running IBM i) are still around. Fully supported stacks with complex and hard-to-move applications make even proprietary hardware palatable. Ya know, like a TPU. Or an Nvidia GPU. HA!

By: Hubert
Fri, 24 Feb 2023 07:41:49 +0000
https://www.nextplatform.com/2023/02/22/nvidia-to-build-dgx-complexes-in-clouds-to-better-capitalize-on-generative-ai/#comment-205217
In reply to EC.

I think that “fully supported and portable software stack[s]” are key to the long-term success of this (and other new) architecture[s]. Especially for ARM CPUs, which have been the “future” bee’s knees for (at least) a decade (and are now challenged by RISC-V/VI). For these, as in prior gravy periods, it should be strategically most beneficial to seed university computer labs with workstations (and, possibly, small supers) of the target architecture, so as to develop a generation of young minds trained in the programming and use of (and enthusiasm for) such highly performant (yet slightly non-mainstream) computational systems. It worked nicely for SGI and Sun back in the day (dot-com boom) … but x86 fought back with outstanding affordability. If “emergent” system architectures can remain relatively “affordable” (yet performant and energy efficient), they should get themselves a rather sweet ride for a couple of decades, provided that the enthusiasm of young minds can be switched on for them.

By: EC
Thu, 23 Feb 2023 19:10:47 +0000
https://www.nextplatform.com/2023/02/22/nvidia-to-build-dgx-complexes-in-clouds-to-better-capitalize-on-generative-ai/#comment-205189

DGX Cloud. AIaaS. The CSPs have obviously evaluated the Grace+Hopper solution and the proposed business model. The idea that every one of the CSPs is bought in is a big deal. There is obviously a lot more to be disclosed about the solution and the partnerships in March. Would these CSPs just wheel in prepackaged Nvidia servers and plug them into their highly curated data centers if they were wanting in some way? I think not, so it’s pretty safe to assume the partners are enthusiastic. Either that, or the DGX form factor is the only way they could acquire next-level Grace+Hopper performance. And a fully supported and portable software stack just creates migration opportunity, or so believes Jensen “On-Prem” Huang.
