The artificial intelligence market is dominated by accelerator makers, with NVIDIA and its GPUs at the front. With its next Xeons, Intel wants the AI server market to stop depending on an external accelerator, even if that means adding HBM memory to achieve it.
Specifications of the next Intel Xeon
Intel has published a table showing the specifications of the Xeon that will follow the recently launched Ice Lake-SP. The chips are codenamed Sapphire Rapids, while Intel itself refers to the platform as Eagle Stream.
As can be seen in the table, its specifications are as follows:
- The number of cores per CPU goes from 40 in the current Xeon to 56 Golden Cove cores, the same cores that will be used in the Alder Lake CPUs for notebooks and desktop PCs later this year.
- Unlike current Xeons, Intel will adopt a chiplet-based design for the first time, composed of 4 symmetric tiles of 14 cores each.
- Configurations of 2, 4 and 8 processor sockets; the TDP, meanwhile, increases from 270 W to 350 W.
- The RAM will be DDR5 rather than DDR4, with 8 memory channels and a maximum supported speed of DDR5-4800.
- 80 PCI Express 5.0 lanes, up from the 64 PCI Express 4.0 lanes of the current Xeon, now with CXL support.
- UPI 2.0: the Ultra Path Interconnect now uses 4 links and its speed increases to 16 GT/s.
- Improvements for Deep Learning, with the addition of AMX units.
But what really stands out is the ability to use HBM memory: each CPU has a 4096-bit HBM2e interface, through which it can connect to 4 HBM2e stacks with a total capacity of 64 GB and a bandwidth of 1 TB/s, a monstrous figure for a CPU.
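That 1 TB/s figure follows directly from the interface width. As a rough sanity check, assuming 4 stacks of 1024 bits each and a pin speed of about 2 GT/s (a typical HBM2e rate, not stated in the article):

```python
# Rough check of the quoted HBM bandwidth figure.
# Assumed values (not from the article): 1024-bit interface per
# stack and ~2 GT/s per pin, typical for HBM2e.
stacks = 4
bits_per_stack = 1024           # interface width of one HBM stack
transfers_per_sec = 2.0e9       # ~2 GT/s per pin (assumption)

total_bits = stacks * bits_per_stack        # the 4096-bit interface
bytes_per_transfer = total_bits / 8         # 512 bytes per transfer
bandwidth = bytes_per_transfer * transfers_per_sec
print(f"{bandwidth / 1e12:.3f} TB/s")       # ≈ 1.024 TB/s
```

So roughly 1 TB/s falls out of the arithmetic as soon as the full 4096-bit interface is populated.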
What does an Intel Xeon need HBM memory for?
A few months ago, when we told you about HBM memory support (specifically HBM2E) in the next-generation Xeon, we mentioned that an HBM stack exposes up to 8 memory channels, so just as a CPU can connect to 8 channels of DDR4 or DDR5, it can connect to 8 HBM memory channels.
What we didn't expect was a configuration using a full 4096-bit HBM interface, which translates into an impressive 1 TB/s of bandwidth, overkill for a conventional CPU. The explanation? The addition of the Intel AMX units, which are systolic arrays similar to those used in the Tensor Cores of NVIDIA GPUs and in Google's TPUs. These units require enormous amounts of bandwidth to stay fed.
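The core operation of all these matrix engines, whether AMX tiles, Tensor Cores or TPU arrays, is a tiled multiply-accumulate, C += A × B over small matrices. A pure-Python toy sketch of that operation (illustrative only, not Intel's actual AMX instruction behavior):

```python
# Toy sketch of the multiply-accumulate a matrix engine performs:
# C += A @ B over a small tile. Real hardware does every
# multiply-accumulate in parallel across the systolic array.
def tile_matmul_accumulate(A, B, C):
    """Accumulate the product of tiles A and B into tile C."""
    rows, inner, cols = len(A), len(B), len(B[0])
    for i in range(rows):
        for j in range(cols):
            acc = C[i][j]
            for k in range(inner):
                acc += A[i][k] * B[k][j]
            C[i][j] = acc
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[0, 0], [0, 0]]
tile_matmul_accumulate(A, B, C)
print(C)  # [[19, 22], [43, 50]]
```

Since the hardware performs thousands of these multiply-accumulates per cycle, the memory system has to stream tiles of A and B in continuously, which is exactly the pressure the HBM interface is there to relieve.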
This means that in its next-generation Xeon, Intel will move away from the AVX-512 units, inefficient in both power consumption and architecture for this workload, and bet on a type of hardware already proven across everything surrounding artificial intelligence. We do not know whether it will also appear in Alder Lake, given the bandwidth that AI algorithms require.