Comments on: In G42, Cerebras Finds The Deep Pockets And Partnership It Needs To Grow https://www.nextplatform.com/2023/07/25/in-g42-cerebras-finds-the-deep-pockets-and-partnership-it-needs-to-grow/ In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds. Fri, 18 Aug 2023 19:43:00 +0000 hourly 1 https://wordpress.org/?v=6.7.1 By: Nicolas Dujarrier https://www.nextplatform.com/2023/07/25/in-g42-cerebras-finds-the-deep-pockets-and-partnership-it-needs-to-grow/#comment-212480 Fri, 18 Aug 2023 19:43:00 +0000 https://www.nextplatform.com/?p=142693#comment-212480 I am wondering if adding emerging Non-Volatile Memory (NVM) MRAM (like VG-SOT-MRAM from the European research center IMEC, to replace the SRAM cache) would dramatically improve the power efficiency of the Cerebras CS-2 Wafer-Scale Engine?

I am a firm believer that emerging Non-Volatile Memory (NVM), especially MRAM (spintronics), perhaps used in new and innovative ways (exploiting stochastic effects), could dramatically enhance the power efficiency of AI systems.

]]>
By: Timothy Prickett Morgan https://www.nextplatform.com/2023/07/25/in-g42-cerebras-finds-the-deep-pockets-and-partnership-it-needs-to-grow/#comment-211687 Wed, 26 Jul 2023 13:54:10 +0000 https://www.nextplatform.com/?p=142693#comment-211687 In reply to Art Scott.

Noted. I almost added it to the spreadsheet, but ran out of time. Suffice it to say, at around 60 MW, Aurora won’t look so good if that is indeed where it ends up. At 30 MW or 35 MW, El Capitan will look pretty good, and better than Frontier at 21 MW. No idea what the Microsoft/OpenAI and Inflection AI systems will burn, but I will take a stab and update the table.

]]>
By: Art Scott https://www.nextplatform.com/2023/07/25/in-g42-cerebras-finds-the-deep-pockets-and-partnership-it-needs-to-grow/#comment-211686 Wed, 26 Jul 2023 11:38:21 +0000 https://www.nextplatform.com/?p=142693#comment-211686 Please, a few words about the power envelope and thermal envelope.

]]>
By: Slim Albert https://www.nextplatform.com/2023/07/25/in-g42-cerebras-finds-the-deep-pockets-and-partnership-it-needs-to-grow/#comment-211671 Wed, 26 Jul 2023 04:27:05 +0000 https://www.nextplatform.com/?p=142693#comment-211671 A superb and well-deserved win for Cerebras! Their innovative wafer-scale systolic dataflow architecture is quite daring, and it is great to see its adoption spread. If I understand correctly, it helps to ease memory access bottlenecks by bringing more of the computation to the data, while not being exactly an in-memory processing approach. This seems to be a (or the) major challenge in applying AI at scale, so wow!

As Eric mentioned in the “El Capitan” comments, HPCG is hit by memory access issues much more than HPL (and is thus more relevant to AI performance). The best machines in the Top500, in terms of the ratio of HPCG to HPL performance, are the NEC vector motors (e.g. the Earth Simulator SX-Aurora, at #13 in HPCG and #63 in HPL, whose HPCG perf of 0.75 PF is 7.5% of its HPL perf of 10.0 PF), followed by Fugaku (whose HPCG perf is 3.6% of its HPL perf); all other machines have HPCG/HPL ratios below 3.5%. The French company Vsora claims that its upcoming Jotunn4 chip will push that ratio above 50% (in the AI space, not HPC); it should be interesting to watch, and to see whether it pans out as a non-wafer-scale competitor to Cerebras.
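The ratio arithmetic above can be sketched in a few lines of Python. The SX-Aurora figures are the ones quoted in the comment; Fugaku's absolute HPCG and HPL numbers are an assumption (roughly 16 PF and 442 PF from published Top500/HPCG lists) chosen to reproduce the quoted 3.6% ratio:

```python
# HPCG-to-HPL efficiency ratios for the systems discussed above.
# SX-Aurora values are as quoted; Fugaku's absolute figures are assumed
# (~16 PF HPCG vs ~442 PF HPL) and only illustrate the quoted 3.6% ratio.
systems = {
    "NEC SX-Aurora (Earth Simulator)": (0.75, 10.0),  # (HPCG PF, HPL PF)
    "Fugaku": (16.0, 442.0),
}

for name, (hpcg, hpl) in systems.items():
    ratio = hpcg / hpl
    print(f"{name}: HPCG/HPL = {ratio:.1%}")
```

A higher ratio means the machine keeps more of its peak dense-linear-algebra throughput on the sparse, memory-bound access patterns that HPCG (and much of AI inference) stresses.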

For now though, it is surely time for Cerebras to celebrate! As Kool and the Gang nearly said: “Cerebration time, come on”!

]]>