The Next Platform

Intel Pushes Out “Clearwater Forest” Xeon 7, Sidelines “Falcon Shores” Accelerator

If Intel hopes to survive the next few years as a freestanding company and return to its role as innovator, it cannot afford to waste time, and it cannot afford to make any more mistakes.

Which is why the new top brass at Intel – chief executive officer of Intel Products Michelle Johnston Holthaus and chief financial officer Dave Zinsner, who are co-CEOs of the company – are pushing out the next generation of its E-core Xeon CPUs and turning a future AI accelerator – one that represents the convergence of its datacenter GPU and AI accelerator lines – into a research platform as Intel goes back to the drawing board to create a rackscale AI design that presumably will do a better job competing against Nvidia.

The roadmap changes, which we backcast into our recent CPU roadmap story, were made during the call with Wall Street analysts yesterday going over Intel’s financial results for Q4 2024. We will get to the numbers in a second, but first let’s update the Intel datacenter compute engine roadmap, starting with the “Clearwater Forest” Xeon 7 E-core chip, which was expected later this year and which was to be the first datacenter chip to use Intel’s 18A RibbonFET manufacturing process.

“So I really look at the datacenter market in kind of two buckets,” Johnston Holthaus said on the call when asked about the timing change for Clearwater Forest. “We have our P-core products, which you know is Granite Rapids and then we have our E-core products which equates to Clearwater Forest. And what we’ve seen is that’s more of a niche market, and we haven’t seen volume materialize there, as fast as we expected. But as we look at Clearwater Forest, we expect that to come to market in the first half of 2026. 18A is doing just fine on its performance and yield for [Diamond] Rapids, but it does have some complicated packaging expectations that move it to 2026. But we expect that to be a good product and continue to close the gap as well. But this is going to be a journey.”

Johnston Holthaus said “Granite Rapids” in the paragraph above, but meant to say “Diamond Rapids.” The Granite Rapids Xeon 6 processor has already been launched and is etched with the Intel 3 process, which, if you want to be generous, is akin to 3 nanometer processes from Taiwan Semiconductor Manufacturing Co.

It sounds to us like some of the packaging kinks need to be worked out for Xeon-class chips using 18A. We also think, given the niche nature of the E-core variants, that the Diamond Rapids P-core variants could come to market ahead of the E-core Clearwater Forest variants. And if it gets crazy enough, the E-core chip could be moved to custom products and not rolled out as part of the official Xeon roadmap at some point.

The reason that might happen is simple: The E-core chips do not have full-on AVX vector math units, and they do not have AMX matrix units or HyperThreading simultaneous multithreading. The first two are increasingly important for AI inference and light AI training. AMD does not have matrix math units on any Epyc CPUs as yet, but it does have full-on vectors on both the plain vanilla and skinny core “C” variants of the “Genoa” Epyc 9004 and “Turin” Epyc 9005 CPUs. The AMD approach to skinny cores is to cut the L3 cache in half for the C variants and rejigger the core such that you can fit more cores in a socket. With the Genoa chips, that was 33 percent more cores, and with Turin, it is 50 percent more.
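For those keeping score, the skinny-core gains above fall straight out of the top-bin core counts. A quick sketch, assuming the top-bin parts we have covered before – 96 cores for standard Genoa versus 128 for its Zen 4c sibling, and 128 for standard Turin versus 192 for its dense variant:

```python
# Illustrative arithmetic for AMD's skinny-core scaling described above.
# The core counts are assumptions based on top-bin parts: standard "Genoa"
# at 96 cores vs its dense Zen 4c sibling at 128, and standard "Turin"
# at 128 cores vs its dense Zen 5c variant at 192.

def extra_cores_pct(standard: int, dense: int) -> float:
    """Percentage more cores the dense 'C' variant packs into a socket."""
    return (dense / standard - 1) * 100

genoa_gain = extra_cores_pct(96, 128)   # Genoa generation
turin_gain = extra_cores_pct(128, 192)  # Turin generation

print(f"Genoa-era C variant: {genoa_gain:.0f} percent more cores")  # 33
print(f"Turin-era C variant: {turin_gain:.0f} percent more cores")  # 50
```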

As we put it at the Granite Rapids launch: Intel Brings A Big Fork To A Server CPU Knife Fight. It is amazing to us that Intel didn’t do the same trick that AMD did to make a skinny core and preserve compatibility, and it is also amazing to us that hyperscalers and cloud builders indicated that they were fine with this forking of the architecture. Then again, a hyperscaler/cloud builder talking up FPGA-based DPUs is how Intel got scared into spending $16.7 billion on Altera. Sometimes, Intel gets a bad steer, and it doesn’t compensate by stiffening its arms. (And to be fair, we caught a little FPGA religion, too, at the time, but still think Intel’s exuberance was much larger than ours.)

Because of the intense competition that AMD is bringing with the Genoa and Turin datacenter CPUs, Intel has had to circle back and actually chop prices on the Granite Rapids chips, which were announced in September 2024 with the highest prices we have ever seen for Intel CPUs. These price cuts better align Granite Rapids chips to Genoa Epyc 9004 pricing, which was probably happening informally in every deal anyway. All five of the Granite Rapids chips that were announced last fall had a price cut, with three getting a 30 percent cut, one getting a 20 percent cut, and another getting a 13.4 percent cut. The top bin parts with 128 and 120 cores and the lower bin part with 72 cores had the deepest cuts.

That brings us to the ever-changing “Falcon Shores” accelerator.

Three years ago, Intel was working on a GPU product line anchored by its “Ponte Vecchio” Max GPU and followed by its “Rialto Bridge” kicker and at the same time was creating a hybrid CPU-GPU device called Falcon Shores that, like AMD’s MI300A, would mix Xeon CPU cores and matrix math units that were derived from these GPUs as well as the Gaudi line of XPUs that it got through its acquisition of Habana Labs. In February 2022, we contemplated what Falcon Shores might look like, and the expectation was that these accelerators would plug into the same sockets as the Granite Rapids CPUs.

By March 2023, Intel’s GPU efforts were in a shambles, with Ponte Vecchio very late and Rialto Bridge canceled. At the time, there were rumors that a Falcon Shores using only GPU engines would be the future discrete accelerator from Intel, and that project was pushed out “beyond 2025.” And in June 2023, Intel downplayed the whole idea of a hybrid CPU-GPU device and merged the Gaudi matrix math engines with the all-GPU Falcon Shores machine to create a new and improved plan for Falcon Shores. The idea was to take the wide Ethernet pipelines used in the Gaudi architecture and combine the Falcon Shores GPU and Gaudi matrix math to create a unified accelerator that could run workloads designed for both Gaudi and Ponte Vecchio.

Well, forget all of that. Now everything is shifting out further to a still-future accelerator called “Jaguar Shores.”

“We are not yet participating in the cloud-based AI data center market in a meaningful way,” Johnston Holthaus said, stating what is obvious to us all given the explosive fortunes of Nvidia. “We have learned a lot as we have ramped Gaudi, and we are applying those learnings going forward. One of the immediate actions I have taken is to simplify our roadmap and concentrate our resources. Many of you heard me temper expectations on Falcon Shores last month. Based on industry feedback, we plan to leverage Falcon Shores as an internal test chip only, without bringing it to market. This will support our efforts to develop a system-level solution at rack scale with Jaguar Shores to address the AI datacenter. More broadly, as I think about our AI opportunity, my focus is on the problems our customers are trying to solve, most notably, to lower the cost and increase the efficiency of compute.”

So that is the end of Falcon Shores and the beginning of Jaguar Shores, of which we know nothing other than the hinted-at rackscale approach mentioned above.

With that, let’s talk about Intel’s datacenter business as it was in the fourth quarter.

In the December quarter, Intel’s overall revenues were $14.26 billion, down 7.4 percent year on year, but up 7.3 percent sequentially. The company posted a $152 million net loss, compared to a $2.67 billion net gain in the year ago period, but a whole lot better than the nearly $17 billion loss it booked in Q3 2024 due mostly to restructuring charges.
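The year-on-year and sequential swings imply what the comparison quarters looked like, and the arithmetic is easy to check. A quick sketch – the $14.26 billion figure and the percentage changes are from the report above, while the backed-out prior-period figures are implied values, not Intel’s reported numbers:

```python
# Sanity-checking the stated revenue swings: given Q4 2024 revenue of
# $14.26 billion, down 7.4 percent year on year and up 7.3 percent
# sequentially, back out the implied prior-period revenues.

q4_2024 = 14.26  # $ billions, from Intel's Q4 2024 report

implied_q4_2023 = q4_2024 / (1 - 0.074)  # year-ago quarter
implied_q3_2024 = q4_2024 / (1 + 0.073)  # prior quarter

print(f"Implied Q4 2023 revenue: ${implied_q4_2023:.2f} billion")  # ~15.40
print(f"Implied Q3 2024 revenue: ${implied_q3_2024:.2f} billion")  # ~13.29
```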

Intel ended the quarter with $22.06 billion in cash and investments in the bank, money it is correctly parceling out very carefully as it invests in products and foundries.

Here is the breakdown of Intel’s revenues and operating income by group for the past two years:

And here is the same data shown graphically for revenues:

Intel was able to hold revenues in its Datacenter and AI (DCAI) group relatively steady in the fourth quarter, with sales down 3.3 percent year on year to $3.39 billion but up 1.1 percent sequentially. Operating profits were $233 million, a pretty anemic 6.9 percent of revenues and down 68.4 percent from the year ago quarter. Intel’s profit levels for datacenter products have been declining over the past year, which is no doubt concerning and which is also compelling the company to break its Network and Edge (NEX) group into pieces, with Xeon processor and networking product sales in NEX eventually being merged back into DCAI. (Other parts of it will go into the Client Computing Group at some point in the future.)

The NEX group had a revenue bump of 10.3 percent to $1.62 billion, and operating income was $340 million (20.9 percent of sales), which is the healthiest profit level Intel has had in this line of business in years. The Altera FPGA business, which we also think of as part of the core datacenter business, had sales of $429 million, down 10.6 percent, and operating income of $90 million, a factor of 21.5X higher than a year ago.

We have been tracking Intel’s Data Center Group, the core business that was created for server products way back in the day, for a long time, and have used the combination of DCAI and a portion of NEX and Altera as a proxy for the historical Data Center Group that was first headed up by Pat Gelsinger so many years ago.

Here is the trend:

Intel’s rise in the datacenter and its hegemony over server compute is evidenced in the run from 2009 through 2019, and you can see where AMD started taking share in 2020 and how this has caused Intel much grief since then.

The server CPU business has become intensely competitive, and it is safe to say that a lot of Intel’s revenues are supply wins, not design wins, considering the much higher core counts and performance that AMD delivers for most workloads at this point.

Intel used to sell flash and 3D XPoint memory storage, and it also used to sell network switches, all of which made the “real” systems business at Intel much larger. (The company still sells network adapters and DPUs.) Since 2015, we have been modeling this “real” datacenter business, which ascended to around $9 billion a quarter in 2019 and kissed that level once again in 2021 during a hyperscaler and cloud buildout before deflating down to around $5 billion a quarter now. Profits have collapsed much faster, as you can see, but are better than the minor loss Intel posted in the third quarter of 2022.

The question we have – that everyone has – is: Can Intel ever get above $5 billion a quarter again in the datacenter, and can it do so profitably? Because we like competition, we certainly hope so.
