Comments on: Intel Gets Its Chiplets In Order With 6th Gen Xeon SPs https://www.nextplatform.com/2023/09/22/intel-gets-its-chiplets-in-order-with-5th-gen-xeon-sps/ In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds. Mon, 01 Jul 2024 19:44:12 +0000 hourly 1 https://wordpress.org/?v=6.7.1 By: Ken Chao https://www.nextplatform.com/2023/09/22/intel-gets-its-chiplets-in-order-with-5th-gen-xeon-sps/#comment-226225 Mon, 01 Jul 2024 19:44:12 +0000 https://www.nextplatform.com/?p=143005#comment-226225 In reply to Timothy Prickett Morgan.

How is such “multiplexing the transport for the channels” going to boost the gain?

I also thought it might be PAM4, which would double DDR to QDR on the host interface.
But apparently I may be wrong about that too.
I haven’t been able to grasp how multiplexing can boost performance without changing the clock frequency, the DQS speed, or the 64-bit data bus width.
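For reference, the PAM4 idea is purely a signaling trick: four amplitude levels per symbol carry two bits, so the same symbol rate moves twice the data of NRZ. A toy sketch of the encoding (illustrative only, not how DDR actually signals):

```python
# PAM4 intuition: four voltage levels per symbol encode two bits,
# so the same symbol rate carries twice the data of NRZ/PAM2.
# Conventional Gray-coded level mapping:
PAM4_LEVELS = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}

def pam4_encode(bits):
    """Pack a bit list (even length) into PAM4 symbols, 2 bits each."""
    return [PAM4_LEVELS[(bits[i] << 1) | bits[i + 1]]
            for i in range(0, len(bits), 2)]

symbols = pam4_encode([0, 0, 0, 1, 1, 1, 1, 0])  # 8 bits -> 4 symbols
```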

]]>
By: Slim Jim https://www.nextplatform.com/2023/09/22/intel-gets-its-chiplets-in-order-with-5th-gen-xeon-sps/#comment-214375 Sat, 30 Sep 2023 14:30:20 +0000 https://www.nextplatform.com/?p=143005#comment-214375 In reply to Slim Albert.

Yep, and depending on which way your sweat mostly leans, you might need extra antiperspirant (or not) when considering the IO500 storage subsystem performance results (https://io500.org/), where Pengcheng (Tsinghua) and JNIST/HUST (Huawei) blow the competition straight out of the drip-pan!

]]>
By: emerth https://www.nextplatform.com/2023/09/22/intel-gets-its-chiplets-in-order-with-5th-gen-xeon-sps/#comment-214212 Wed, 27 Sep 2023 21:20:11 +0000 https://www.nextplatform.com/?p=143005#comment-214212 That the E cores run at lower clocks due to thermal limits, yet the entire industry calls them “Efficient” is a testament to Intel’s greatest strength: marketing.

]]>
By: HuMo https://www.nextplatform.com/2023/09/22/intel-gets-its-chiplets-in-order-with-5th-gen-xeon-sps/#comment-214108 Mon, 25 Sep 2023 18:00:58 +0000 https://www.nextplatform.com/?p=143005#comment-214108 In reply to Slim Albert.

My-oh-my … MEEP meep (The Road Runner?) and ACME (Wile E. Coyote?) … what could possibly go wrong??? Major cliffhanging, delayed-gravity, anvil dropping, painted tunnel-entrance, EuroHPC entertainment … straight ahead!!!

]]>
By: Timothy Prickett Morgan https://www.nextplatform.com/2023/09/22/intel-gets-its-chiplets-in-order-with-5th-gen-xeon-sps/#comment-214075 Mon, 25 Sep 2023 02:56:39 +0000 https://www.nextplatform.com/?p=143005#comment-214075 In reply to Hubert.

I think of MCR as like PAM4 for memory. Not literally, but effectively. It’s not really doubling the channels but multiplexing the transport for the channels and boosting the gain.

I’m going to do a separate piece on MCR.
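Pending that piece, a rough back-of-the-envelope sketch of the multiplexing idea (assuming, as public MCR material suggests, that the buffer fetches from two ranks at the base DRAM rate simultaneously and interleaves both streams onto a host bus running twice as fast; the specific numbers are illustrative):

```python
# Illustrative arithmetic for MCR-style rank multiplexing.
# Assumed (hypothetical) numbers: each DRAM rank runs at 4400 MT/s,
# and the MCR buffer interleaves two ranks onto a host interface
# clocked at 8800 MT/s. The bus width and DRAM cells are unchanged.

BUS_WIDTH_BYTES = 8      # 64-bit data bus, same as a plain DIMM
RANK_RATE_MT_S = 4400    # per-rank transfer rate, unchanged
RANKS_MUXED = 2          # two ranks fetched simultaneously

# Plain DIMM: one rank at a time drives the channel.
plain_gbps = RANK_RATE_MT_S * BUS_WIDTH_BYTES / 1000

# MCR: the buffer drives the host bus at RANKS_MUXED x the rank rate,
# so the channel delivers twice the data without faster DRAM cells.
mcr_gbps = RANK_RATE_MT_S * RANKS_MUXED * BUS_WIDTH_BYTES / 1000

print(plain_gbps, mcr_gbps)
```

The "gain" is thus on the host interface, not in the DRAM: the cells keep their clock, and only the buffered bus runs faster.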

]]>
By: Hubert https://www.nextplatform.com/2023/09/22/intel-gets-its-chiplets-in-order-with-5th-gen-xeon-sps/#comment-214063 Sun, 24 Sep 2023 20:28:19 +0000 https://www.nextplatform.com/?p=143005#comment-214063 I like those 17 execution ports of Sierra Glen, numbered 00 to 31, leaving room for 15 more (or different mixes) in future updates! Also, 64 KB of L1 I-cache is right on the money, but 32 KB for D-cache looks a bit low to me (64 KB could be more competitive I think, if it fits). Support for channel-doubling MCR DIMMs is definitely great (my interpretation of MCR). And there seems to be CXL 2.0 support in the I/O die hardware (last slide), possibly some protocol IP aimed at simplifying channel setup and management (the slide mentions hot-plug)? Not to mention the Intel In-Memory Analytics Accelerator (IAA) for database workloads (but what does it do, really?).

All in all, a quite interesting DC chip — not sure what I’d want to add to it (except more L1 D-cache)!
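For what it’s worth, IAA’s advertised role is offloading deflate (de)compression and columnar scan/filter primitives for in-memory databases. Real code would go through Intel’s QPL library; this plain-Python sketch only illustrates the semantics of the scan/filter part:

```python
# Plain-Python sketch of the columnar scan/filter primitive that
# Intel IAA offloads in hardware (alongside deflate (de)compression).
# Purely illustrative; real use goes through Intel's QPL library.

def iaa_style_scan(column, lo, hi):
    """Return a bitmask marking elements with lo <= value <= hi."""
    mask = 0
    for i, value in enumerate(column):
        if lo <= value <= hi:
            mask |= 1 << i
    return mask

prices = [12, 47, 31, 8, 99, 55]
mask = iaa_style_scan(prices, 10, 50)  # selects 12, 47, 31
```

The point of hardware offload is that the CPU cores never touch the (possibly compressed) column data; they just consume the resulting bitmask.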

]]>
By: Slim Albert https://www.nextplatform.com/2023/09/22/intel-gets-its-chiplets-in-order-with-5th-gen-xeon-sps/#comment-214034 Sun, 24 Sep 2023 06:36:43 +0000 https://www.nextplatform.com/?p=143005#comment-214034 In reply to HuMo.

It’s (mildly?) interesting that there aren’t many international competitors for this rumble, maybe just A64FX.

It makes one wonder what the status of the EU’s SiPearl Rhea (72x Neoverse V1, 2x256b vectors) might be, whether it might be paired with Barcelona’s MEEP ACME self-hosted RISC-V-based accelerators (MareNostrum Experimental Exascale Platform’s Accelerated Compute and Memory Engine), and/or if their derivatives might find datacenter applications.

Also, although the Phytium/Matrix-2000+ (FTP/MTP) 64x armv8 cores paired with 128x matrix units is presumably seeing field use in Tianhe-3, and the SW26010Pro that pairs a single “proper” CPU (MPE) with 64x 256-bit “vector” cores (CPEs) (in groups of 6) is running in OceanLight, their 14nm process (?) seems to set their usable performance behind that of competitors. SMIC’s DUV N+1 7nm would be okay for efficiency (e.g., Huawei smartphones) but not so much for performance (where EUV would be needed). It could be a good thing or a bad thing, depending on one’s perspiration …

]]>
By: HuMo https://www.nextplatform.com/2023/09/22/intel-gets-its-chiplets-in-order-with-5th-gen-xeon-sps/#comment-213975 Fri, 22 Sep 2023 21:33:22 +0000 https://www.nextplatform.com/?p=143005#comment-213975 Great to see Intel stepping back into the ring of the competitive no-holds-barred datacenter rumble with AMD, Ampere, and NVIDIA, who seem to have had just too easy a time of it lately! 144 P-cores of AP Granite sounds like a Dwayne-Johnson rock-solid ticket for this show!

Speaking of which, with chiplets ahoy, and copy-paste gods-of-the-Neoverse Cascading-Style-Sheets (CSS), it should be but a walk in the Zen monastery’s park for AMD to shimmy itself together (duct-tape? nope) an enlightened MI300N AI/ML Buddha, with 96 ARMs and 4 basic Instincts … less Seattle, more Shaolin!

A 288-ARM Buddha (6 x 48-core-dies) might even have the computational guts to meet the E-cores, behind the announced Sierra-Forest, for an explanation on perf/watt (or not?)!

Will IBM’s heroic POWER10 ever stop sulking and rise back to the challenge? Who will be the suplexest of them all? “Inquisition minds” … (as Lechat once said!).

]]>