Comments on: The Future Of System Memory Is Mostly CXL
https://www.nextplatform.com/2022/07/05/the-future-of-system-memory-is-mostly-cxl/

By: BaronMatrix (Sun, 13 Aug 2023 09:28:32 +0000)
https://www.nextplatform.com/2022/07/05/the-future-of-system-memory-is-mostly-cxl/#comment-212316

CXL is a very exciting tech… I also just read an article involving “chiplet clouds” and that seems to be what AMD is working towards with 3D V-Cache… I expect that by Zen6 AMD will be using multiple stacks to keep even more data closer to the chiplet… From what I’ve seen these do sit between SRAM and HBM for complexity and bandwidth per module… Micron is working on 256 GB DDR5 DIMMs, which will allow for up to 2 TB of RAM… It will be interesting to see how fast mobo makers add more CXL modules… Most servers need RAM more than PCIe devices, so that means they can replace the PCIe slots with CXL connectors…

AMD is already using them in EPYC servers and may be the first to make a chiplet-based DPU or FPGA…
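For anyone who wants to poke at this on a current machine: Linux typically presents a CXL Type 3 memory expander as a CPU-less NUMA node once its capacity is onlined, so ordinary NUMA tooling can place data on it. The sketch below is a minimal illustration of that, assuming the expander showed up as node 2 (the node id and buffer size are placeholders to adjust after checking "numactl --hardware"); it is not tied to any particular vendor's device.

```c
/*
 * Hedged sketch: on recent Linux kernels a CXL Type 3 memory expander is
 * typically exposed as a CPU-less NUMA node. Assuming the expander showed
 * up as node 2, this places one buffer on ordinary local DRAM and another
 * on the CXL-attached node.
 *
 * Build: gcc cxl_alloc.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 2                 /* assumption: the expander's node id on this box */
#define BUF_SIZE (1UL << 30)       /* 1 GiB per buffer */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not available on this system\n");
        return 1;
    }

    /* Local DRAM: allocate on the node the calling thread runs on. */
    void *local = numa_alloc_local(BUF_SIZE);

    /* "Far" memory: allocate on the (assumed) CXL-attached node. */
    void *cxl = numa_alloc_onnode(BUF_SIZE, CXL_NODE);

    if (!local || !cxl) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    /* Touch the pages so they are actually faulted in on those nodes. */
    memset(local, 0, BUF_SIZE);
    memset(cxl, 0, BUF_SIZE);

    printf("local buffer at %p, CXL-node buffer at %p\n", local, cxl);

    numa_free(local, BUF_SIZE);
    numa_free(cxl, BUF_SIZE);
    return 0;
}
```

The same placement can be done without recompiling anything by wrapping a process with numactl, for example "numactl --membind=2 ./app", if that fits the workflow better.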

By: Timothy Prickett Morgan (Tue, 26 Jul 2022 01:01:21 +0000)
https://www.nextplatform.com/2022/07/05/the-future-of-system-memory-is-mostly-cxl/#comment-194834

In reply to Ian Sagan.

Yes, Network Processing Unit. Apologies. Something for software-defined networking that predates the DPU, kinda.

By: Ian Sagan (Tue, 19 Jul 2022 12:59:18 +0000)
https://www.nextplatform.com/2022/07/05/the-future-of-system-memory-is-mostly-cxl/#comment-194466

Great article. You mention Marvell Octeon NPUs? Network Processor Units, or is that a typo for CPU/DPU? It would be good to explain the terminology.

By: Robert (Thu, 14 Jul 2022 10:33:17 +0000)
https://www.nextplatform.com/2022/07/05/the-future-of-system-memory-is-mostly-cxl/#comment-194103

Isn’t the bandwidth-per-core issue taken care of by things like 3D V-Cache on both CPU and GPU? I’ve read that RDNA3 is basically going with more Infinity Cache and narrower memory controllers, and that some models will have 3D-stacked Infinity Cache. At least in the datacenter they’re also supposed to have coherent memory between CPU and GPU. IIRC AMD is also going to support CXL at some point, but isn’t it kind of a niche thing really, considering those problems seem to already be kind of solved? Or am I completely missing something?
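A quick back-of-envelope helps frame why big caches do not fully close this gap: V-Cache and Infinity Cache raise effective bandwidth for working sets that fit in them, but the DRAM bandwidth behind them, split across ever more cores, keeps shrinking, and that is the slice CXL-attached memory is meant to grow. The figures below are illustrative assumptions (a 12-channel DDR5-4800 socket with 96 cores and a handful of hypothetical CXL x8 expanders), not any specific product's numbers.

```c
/*
 * Hedged back-of-envelope: bandwidth per core with and without extra
 * CXL-attached memory. All figures are illustrative assumptions.
 */
#include <stdio.h>

int main(void)
{
    const double channel_gbs = 4800e6 * 8 / 1e9;  /* DDR5-4800: ~38.4 GB/s per channel */
    const int    channels    = 12;                /* assumed channels per socket */
    const int    cores       = 96;                /* assumed core count */

    double local_bw = channels * channel_gbs;
    printf("local DRAM: %.0f GB/s total, %.1f GB/s per core\n",
           local_bw, local_bw / cores);

    /* Hypothetical: 4 CXL memory expanders on PCIe 5.0 x8, ~32 GB/s each. */
    const int    cxl_links    = 4;
    const double cxl_link_gbs = 32.0;

    double total_bw = local_bw + cxl_links * cxl_link_gbs;
    printf("with CXL:   %.0f GB/s total, %.1f GB/s per core\n",
           total_bw, total_bw / cores);

    return 0;
}
```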

By: William Shaddix (Thu, 07 Jul 2022 15:58:04 +0000)
https://www.nextplatform.com/2022/07/05/the-future-of-system-memory-is-mostly-cxl/#comment-193801

In reply to Timothy Prickett Morgan.

While they’re not hard numbers, there’s also some comparison to RDMA here showing the level of improvement with CXL: https://camel.kaist.ac.kr/public/camel-cxl-memory-pooling.pdf

By: Timothy Prickett Morgan (Thu, 07 Jul 2022 11:49:09 +0000)
https://www.nextplatform.com/2022/07/05/the-future-of-system-memory-is-mostly-cxl/#comment-193784

In reply to Mark Hahn.

It’s a little bit more than a NUMA hop with PCI-Express 5.0, and the hope is that it will get better as PCI-Express gets faster. Some data from Facebook:

https://www.nextplatform.com/2022/06/16/meta-platforms-hacks-cxl-memory-tier-into-linux/

IBM can do its OMI memory over Bluelink with an under-10-nanosecond latency add over its former memory controller. IBM’s implementation is quite good and shows the way.
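One way to see the "bit more than a NUMA hop" effect without vendor data is to look at the distances the kernel itself reports: local memory is distance 10 by convention, a peer socket usually lands somewhere around 20, and a CXL expander typically appears as a CPU-less node with a somewhat larger value. A minimal sketch using libnuma follows; the interpretation in the comments is an assumption about a typical two-socket box, not a measurement.

```c
/*
 * Hedged sketch: print how far away each memory node looks to the kernel.
 * Local DRAM is distance 10 by convention, a peer socket is typically in
 * the 20s, and a CXL expander usually shows up as a CPU-less node with a
 * somewhat larger distance. This just reads the SLIT values Linux exposes.
 *
 * Build: gcc numa_distance.c -lnuma
 */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not available\n");
        return 1;
    }

    int max_node = numa_max_node();
    struct bitmask *cpus = numa_allocate_cpumask();

    for (int node = 0; node <= max_node; node++) {
        if (numa_node_to_cpus(node, cpus) < 0)
            continue;                          /* node not present */
        int has_cpus = numa_bitmask_weight(cpus) > 0;
        printf("node %d: distance from node 0 = %d, %s\n",
               node, numa_distance(0, node),
               has_cpus ? "has CPUs (a socket)"
                        : "CPU-less (likely CXL or other far memory)");
    }

    numa_free_cpumask(cpus);
    return 0;
}
```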

By: Mark Hahn (Thu, 07 Jul 2022 05:27:08 +0000)
https://www.nextplatform.com/2022/07/05/the-future-of-system-memory-is-mostly-cxl/#comment-193764

Latency
Latency
Latency

How close can CXL come to 50ns latency? If it’s not close (say, within a factor of two compared to local RAM), then it’s just a new form of swap.

Can you get hard numbers from any vendors?
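In lieu of vendor figures, a pointer-chasing probe is one way to get numbers of one's own: allocate a buffer on the node of interest, turn it into one big random cycle, and time dependent loads so each access has to wait for the previous one. The sketch below does that with libnuma; the node id, buffer size, and step count are assumptions to adjust for the machine being tested, and the result approximates load-to-use latency rather than being any vendor's specification.

```c
/*
 * Hedged sketch of a pointer-chasing latency probe. Run once with NODE set
 * to a local DRAM node and once with it set to the CXL-attached node, and
 * compare the ns-per-load figures.
 *
 * Build: gcc -O2 chase.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NODE     0              /* assumption: 0 = local DRAM; set to the CXL node id to compare */
#define ENTRIES  (64UL << 20)   /* 64M pointers (~512 MB), far bigger than any cache */
#define STEPS    (16UL << 20)   /* number of dependent loads to time */

int main(void)
{
    if (numa_available() < 0) return 1;

    size_t bytes = ENTRIES * sizeof(size_t);
    size_t *buf = numa_alloc_onnode(bytes, NODE);
    if (!buf) return 1;

    /* Build a single random cycle (Sattolo's algorithm) so the chase visits
     * every slot in an unpredictable order and defeats the prefetchers. */
    for (size_t i = 0; i < ENTRIES; i++) buf[i] = i;
    srand(42);
    for (size_t i = ENTRIES - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;             /* j in [0, i) */
        size_t tmp = buf[i]; buf[i] = buf[j]; buf[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    size_t idx = 0;
    for (size_t s = 0; s < STEPS; s++)
        idx = buf[idx];                            /* each load depends on the last */

    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("node %d: ~%.1f ns per dependent load (idx=%zu)\n",
           NODE, ns / STEPS, idx);                 /* printing idx keeps the loop live */

    numa_free(buf, bytes);
    return 0;
}
```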

By: Timothy Prickett Morgan (Thu, 07 Jul 2022 00:43:40 +0000)
https://www.nextplatform.com/2022/07/05/the-future-of-system-memory-is-mostly-cxl/#comment-193748

In reply to PAUL.

I think that making infrastructure stretch further and driving up utilization is the same thing as lowering its real costs.

As for making memory itself cheaper, I suppose there is a chance that we might be able to break out of the DIMM form factor at some point. But the memory cost is really down to the DRAM cost and the fact that memory makers get far more profit selling tiny pieces of memory for smartphones than they get from selling big hunks of it for servers, which is why the price went up by 2X a few years ago. They like memory prices being high. Prices have come down some, but until we have a bust cycle where the memory makers are overprovisioning in their factories, a big drop seems unlikely. And the memory DIMM makers might try to keep the server price high anyway.

By: PAUL (Wed, 06 Jul 2022 21:19:26 +0000)
https://www.nextplatform.com/2022/07/05/the-future-of-system-memory-is-mostly-cxl/#comment-193741

In reply to Timothy Prickett Morgan.

So this works for big databases on big servers (IBM, Superdome), but that is probably not a large enough market to support the technology on its own. What is the use for 1-2 socket servers running cloud/edge/HPC workloads? I guess it’s the memory area network (MAN), and provisioning memory to the node as needed, possibly integrated with k8s or a VM allocator. As I suspected, it doesn’t really fix the cost of the memory itself; it just fixes the cost of having to provision every node with memory that only some nodes need. I just wonder if the cost goes down enough to make it worthwhile.
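A made-up example of that provisioning argument, just to put shapes on it: if every node in a group has to carry enough DRAM for its own worst case, the total bought is the sum of the worst cases; if nodes carry a modest local allotment and borrow from a shared CXL pool sized for how many nodes actually peak at once, the total can be much smaller even though the per-gigabyte price has not moved. Every number below is invented for illustration.

```c
/*
 * Hedged back-of-envelope: CXL pooling does not make a gigabyte of DRAM
 * cheaper, it cuts how many gigabytes sit stranded. All figures are made up.
 */
#include <stdio.h>

int main(void)
{
    const int    nodes          = 32;    /* nodes sharing one memory pool */
    const double peak_gb        = 512.0; /* worst-case demand any single node might hit */
    const double local_gb       = 256.0; /* DRAM left on each node in the pooled design */
    const int    bursting_nodes = 4;     /* assumed max nodes at peak simultaneously */

    /* Today: every node is provisioned for its own worst case. */
    double per_node_total = nodes * peak_gb;

    /* Pooled: modest local DRAM everywhere, plus one shared CXL pool sized
     * for the few nodes that actually burst to peak at the same time. */
    double pool_gb      = bursting_nodes * (peak_gb - local_gb);
    double pooled_total = nodes * local_gb + pool_gb;

    printf("provision every node for peak: %5.0f GB\n", per_node_total);
    printf("local DRAM + shared CXL pool:  %5.0f GB (%.0f%% of the former)\n",
           pooled_total, 100.0 * pooled_total / per_node_total);
    return 0;
}
```

Whether a reduction like that outweighs the cost of the CXL switches, expanders, and software plumbing is exactly the open question raised above.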

By: Timothy Prickett Morgan (Wed, 06 Jul 2022 18:36:20 +0000)
https://www.nextplatform.com/2022/07/05/the-future-of-system-memory-is-mostly-cxl/#comment-193734

In reply to HuMo.

Why not? You just need PCI-Express and a decent implementation of CXL.
