Comments on: Intel “Emerald Rapids” Xeon SPs: A Little More Bang, A Little Less Bucks
https://www.nextplatform.com/2023/12/14/intel-emerald-rapids-xeon-sps-a-little-more-bang-a-little-less-bucks/

By: Kevin G https://www.nextplatform.com/2023/12/14/intel-emerald-rapids-xeon-sps-a-little-more-bang-a-little-less-bucks/#comment-217732 Mon, 18 Dec 2023 20:38:48 +0000
The core clusters in Sapphire Rapids XCC were arranged as 4×4 with one position removed for the memory controller. Thus the 60-core parts are fully enabled, not binned-down parts. Intel originally targeted 56 cores for the top SKUs, but due to delays and ever-increasing yields it was able to produce 60-core parts in volume. Emerald Rapids XCC is indeed a 5×7 arrangement, but this time with two positions removed and replaced with memory controllers, so each chiplet has a maximum of 33 cores. Thus the 64-core parts are binned down a bit from the theoretical maximum of 66.
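
[Editor’s note: to make the arithmetic in the comment above concrete, here is a minimal Python sketch that tallies the core ceilings Kevin G describes. The chiplet counts, grid dimensions, and removed positions are taken from the comment itself, not from Intel floorplan documentation, so treat the helper and its outputs as illustrative.]

# Back-of-the-envelope tally of the core budgets described above.
# Grid sizes and removed positions come from the comment, not from
# Intel documentation, so the numbers are illustrative only.

def max_cores(chiplets: int, rows: int, cols: int, removed_per_chiplet: int) -> int:
    """Cores available when every grid position not given to a memory controller is enabled."""
    return chiplets * (rows * cols - removed_per_chiplet)

# Sapphire Rapids XCC: four chiplets, 4x4 clusters, one position per chiplet
# handed to a memory controller -> 60 cores, so the 60-core SKU is fully enabled.
spr = max_cores(chiplets=4, rows=4, cols=4, removed_per_chiplet=1)

# Emerald Rapids XCC: two chiplets, 5x7 arrangement, two positions per chiplet
# replaced by memory controllers -> 66 cores, so the 64-core SKU is binned down a bit.
emr = max_cores(chiplets=2, rows=5, cols=7, removed_per_chiplet=2)

print(f"Sapphire Rapids XCC ceiling: {spr} cores")  # 60
print(f"Emerald Rapids XCC ceiling:  {emr} cores")  # 66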

By: Timothy Prickett Morgan https://www.nextplatform.com/2023/12/14/intel-emerald-rapids-xeon-sps-a-little-more-bang-a-little-less-bucks/#comment-217610 Fri, 15 Dec 2023 18:49:41 +0000
In reply to Francis King.

But of course… funny how the AI-infused spell checker didn’t note that.

By: Francis King https://www.nextplatform.com/2023/12/14/intel-emerald-rapids-xeon-sps-a-little-more-bang-a-little-less-bucks/#comment-217606 Fri, 15 Dec 2023 18:18:57 +0000
“as Mr Spock says in The Rath Of Khan” – I’ve never seen this film, but it’s probably similar to the film “The Wrath of Khan”.

By: Slim Jim https://www.nextplatform.com/2023/12/14/intel-emerald-rapids-xeon-sps-a-little-more-bang-a-little-less-bucks/#comment-217596 Fri, 15 Dec 2023 14:18:07 +0000
Nice introduction and analysis! I like these emeralds as they are gemologically less expensive than sapphire (and ruby), yet very resilient, and perfect for everyday wear (good naming on Intel’s side). I imagine that Type 3 CXL memory might find uses for very large databases or AI model weights (maybe that’s how 3D XPoint is advantageously replaced?). Also, if the Mixture of Chefs approach to AI pans out (salad chef, entrée chef, pastry chef, fromager, sommelier, …) then the AMX and large caches on these chips could prove quite useful. The goose could be sauced while the soufflé bakes and the salad is tossed (for example), each on their own node, socket, or subset of cores. I think it sounds delicious (to paraphrase other computational gastronomers)!
