Comments on: Debunking Datacenter Compute Myths, Part One
https://www.nextplatform.com/2023/10/16/debunking-datacenter-compute-myths-part-one/
In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds. Sun, 29 Oct 2023 12:41:05 +0000

By: Timothy Prickett Morgan https://www.nextplatform.com/2023/10/16/debunking-datacenter-compute-myths-part-one/#comment-215687 Sun, 29 Oct 2023 12:41:05 +0000 https://www.nextplatform.com/?p=143125#comment-215687 In reply to JayN.

Not sure if they can be converted to a less intensive set of vector operations and run through AMD’s AVX-512 implementation. Eventually they will have Xilinx DSP engines doing the math.

]]>
By: JayN https://www.nextplatform.com/2023/10/16/debunking-datacenter-compute-myths-part-one/#comment-215663 Sat, 28 Oct 2023 19:11:38 +0000 https://www.nextplatform.com/?p=143125#comment-215663 Intel has AMX tiled-matrix processing in each core, which has become an important discriminating feature for AI inference. What do AMD’s x86 cores do when they hit those instructions?
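As a sketch of how this plays out in software: since AMX is an Intel-only extension for now, libraries dispatch at run time on the CPU’s feature flags and fall back to a lesser kernel where AMX is absent. The flag names below are the ones the Linux kernel reports in /proc/cpuinfo (amx_int8, avx512_vnni); the kernel labels are illustrative, not any real library’s API:

```python
def cpu_flags():
    """Return the CPU feature flags reported by the kernel (Linux only)."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass  # not Linux, or /proc unavailable
    return set()

def pick_matmul_kernel(flags):
    """Choose the most capable available int8-matmul path."""
    if "amx_int8" in flags:
        return "amx"          # Intel AMX tile kernel
    if "avx512_vnni" in flags:
        return "avx512-vnni"  # AVX-512 VNNI dot-product kernel (e.g. Zen 4)
    return "scalar"           # portable fallback

print("dispatching to:", pick_matmul_kernel(cpu_flags()))
```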

]]>
By: Slim Jim https://www.nextplatform.com/2023/10/16/debunking-datacenter-compute-myths-part-one/#comment-215310 Fri, 20 Oct 2023 17:55:01 +0000 https://www.nextplatform.com/?p=143125#comment-215310 In reply to François Lechat.

… or might they bake themselves some EPYC CPO NoC and do away with external switches?

]]>
By: Slim Jim https://www.nextplatform.com/2023/10/16/debunking-datacenter-compute-myths-part-one/#comment-215163 Wed, 18 Oct 2023 17:41:45 +0000 https://www.nextplatform.com/?p=143125#comment-215163 In reply to Timothy Prickett Morgan.

A hardware system (pool of heterogeneous units) that reconfigures itself (connection-wise) dynamically to match an executing program’s computational graph (code) and data access patterns?

]]>
By: Timothy Prickett Morgan https://www.nextplatform.com/2023/10/16/debunking-datacenter-compute-myths-part-one/#comment-215159 Wed, 18 Oct 2023 14:07:01 +0000 https://www.nextplatform.com/?p=143125#comment-215159 In reply to Slim Jim.

Maybe we literally need Lego blocks of compute, all linked by fast interconnects, and we build a system for a particular workflow by connecting stuff. No more static configurations at all. I mean, imagine how much fun it would be to breadboard such a machine? Literally snapping things together, pouring data in one end, and getting answers out the other. When the task is done, reconfigure and run a different workflow.

Maybe making FPGAs better, cheaper, and faster, and making electricity way, way cheaper, is the real answer. I keep coming back to this conclusion: general purpose compute is actually an illusion unless it is completely reconfigurable.
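A toy sketch of the idea, with the block names purely illustrative (not any real AMD or FPGA API): a “machine” is just blocks snapped together at run time, and reconfiguring means snapping together a different set:

```python
# Three interchangeable compute "blocks" -- each takes data, returns data.
def ingest(data):      return [float(x) for x in data]
def scale(data):       return [x * 2.0 for x in data]
def accumulate(data):  return [sum(data)]

def build_machine(*blocks):
    """Snap blocks into a pipeline: data in one end, answers out the other."""
    def run(data):
        for block in blocks:
            data = block(data)
        return data
    return run

workflow_a = build_machine(ingest, scale, accumulate)
print(workflow_a([1, 2, 3]))   # [12.0]

# Task done; "reconfigure" for a different workflow.
workflow_b = build_machine(ingest, accumulate)
print(workflow_b([1, 2, 3]))   # [6.0]
```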

]]>
By: Slim Jim https://www.nextplatform.com/2023/10/16/debunking-datacenter-compute-myths-part-one/#comment-215158 Wed, 18 Oct 2023 13:53:54 +0000 https://www.nextplatform.com/?p=143125#comment-215158 In reply to Timothy Prickett Morgan.

I see your points (you could both be right), but I remember a time when the FPU was a separate chip (e.g. 8087, 80287), the MMU was also a separate IC (AMD fixed that before Intel), and, these days, separate northbridges are also disappearing in favor of in-package I/O dies. Contemporary FPUs and MMUs in particular are programmable through the CPU’s ISA, as are current vector units, and one may hope (maybe) that future GPUs get similarly tightly integrated with CPUs, into a unified instruction space (no more need to separately feed the external data-hungry beast).

Then again, if the data-processing pipeline through the GPU is too distinct from that of the CPU (e.g. uncommonly systolic), these offspring may need to be kept separate (each calling the other to “come out and play” on an as-needed basis).

]]>
By: Slim Jim https://www.nextplatform.com/2023/10/16/debunking-datacenter-compute-myths-part-one/#comment-215140 Wed, 18 Oct 2023 02:11:03 +0000 https://www.nextplatform.com/?p=143125#comment-215140 I wonder what the AMD CPU-accelerator roadmap is at present (aside from GPUs) for datacenter-type chips (EPYC/Zen). Intel is reportedly letting go of Altera and etching purpose-specific accelerators directly into its Xeon SR Max devices (e.g. Intel DSA and IAA). An interesting accelerator (IMHO) developed at Lawrence Livermore for addressing the memory wall is ZFP compression (an R&D 100 winner in 2023; implemented as ZHW in FPGA; presented at SC22; https://computing.llnl.gov/projects/zfp ).
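A toy illustration of the fixed-rate, block-based idea behind ZFP, not the real codec (actual ZFP works on 4^d-value blocks with a decorrelating transform and embedded bit-plane coding; the zfpy Python bindings expose the real implementation). Here each block of four doubles is quantized to 8-bit integers against a shared per-block scale, roughly an 8x size reduction with a bounded error:

```python
import numpy as np

BITS = 8  # bits per value after quantization (real ZFP lets you pick a rate or tolerance)

def compress_block(block):
    """Quantize one block against a shared scale, akin to ZFP's shared block exponent."""
    scale = float(np.max(np.abs(block))) or 1.0   # avoid divide-by-zero on all-zero blocks
    q = np.round(block / scale * (2**(BITS - 1) - 1)).astype(np.int8)
    return scale, q

def decompress_block(scale, q):
    return q.astype(np.float64) * scale / (2**(BITS - 1) - 1)

# Round-trip a small "field" in blocks of 4 values.
field = np.linspace(-1.0, 1.0, 16)
out = np.concatenate([decompress_block(*compress_block(b))
                      for b in field.reshape(-1, 4)])
print("max abs error:", np.max(np.abs(out - field)))  # bounded by scale / 254 per block
```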

Are there interesting plans for acceleration subsystems of this type at AMD R&D, maybe involving FPGAs, that VP Lynn Comp could relate to the TNP audience (if they are not overly secret)? I would imagine that HPC-specific subsystems (for example) might be restricted to related SKUs, and possibly optionally activated? Is this an important area of activity targeted by the AMD server group?

Maybe these (above) could be questions for a follow-up interview …

]]>
By: François Lechat https://www.nextplatform.com/2023/10/16/debunking-datacenter-compute-myths-part-one/#comment-215128 Tue, 17 Oct 2023 18:45:48 +0000 https://www.nextplatform.com/?p=143125#comment-215128 I want to ask a question too: would it be good if AMD completed the datacenter set with the switching discussed here: https://www.nextplatform.com/2022/06/24/amd-needs-to-complete-the-datacenter-set-with-switching/ ?

]]>
By: Hubert https://www.nextplatform.com/2023/10/16/debunking-datacenter-compute-myths-part-one/#comment-215117 Tue, 17 Oct 2023 14:41:40 +0000 https://www.nextplatform.com/?p=143125#comment-215117 Interesting interview! I’d like to bring up an additional myth, and a question, that have been discussed in recent TNP pieces and comments, and that cloud CVP Comp may find interesting too (and perhaps offer perspectives on):

Myth 6: Datacenters and factories are best located in Austin, TX, as compared to Asheville, NC.

Question: How do you view (at present and in the future) cloud-based vs. on-premises HPC (e.g. in light of DOE and SC23)?

]]>
By: Slim Albert https://www.nextplatform.com/2023/10/16/debunking-datacenter-compute-myths-part-one/#comment-215095 Tue, 17 Oct 2023 02:05:48 +0000 https://www.nextplatform.com/?p=143125#comment-215095 A most serene interview, a pastel of meditative and jovial undertones! I’m happy to hear from AMD, since the competition (Nvidia and Intel) has put out many announcements of late, without much riposte, making me worry that the swash-buckling, old-spice, stronger-swagger El Capitan might have run out of rhumba, or worse! I guess that this mightiest of HPC pugilists is just quietly prepping for the biggest smackdown in Exaflopping ballroom-showdown history, with quiet discipline, incense, herb tea, a bit of yoga, and physics!

Mythwise (to this TNP reader) there isn’t much controversy in the five that were discussed. For example, we’ve seen how Frontier moved the Pareto curve for both performance and efficiency (simultaneously), and expect no less from MI300A and its siblings. SMT has its advantages in the datacenter and could be pushed to 4 or 8 threads on some SKUs, as suggested by the interviewer. The competition from ARM is interesting, and it seems to me that AMD could easily CSS-copy-paste itself some Neoverses onto MI300N and MI300V APUs for those customers that want this option, or even just for fun!

]]>