Intel Follows AMD on Chiplet Journey
Intel’s next-generation Xeon Scalable processor family, which it’s calling Sapphire Rapids, is shaping up to be a turning point for the embattled chipmaker despite delays pushing its release back to early next year. Late last month, the company shed some light on the upcoming data center chip during the annual Hot Chips 2021 conference.
Sapphire Rapids is a mile marker of sorts for Intel. It will be the company’s first Xeon to fully embrace a chiplet architecture — Intel calls these tiles — and the first mainstream data center processor to support DDR5, high-bandwidth memory (HBM), PCIe Gen 5, and Compute Express Link (CXL).
“Sapphire Rapids delivers a step function in performance across a broad set of scalar and parallel workloads,” Arijit Biswas, principal engineer at Intel, said during a presentation at Hot Chips 2021.

Sapphire Rapids will see Intel abandon monolithic dies, like the one used in this year’s Ice Lake Xeon Scalable, in favor of multiple compute tiles packaged together under an integrated heat spreader.
“At the heart of Sapphire Rapids is a modular, tiled architecture that allows us to scale the Xeon architecture beyond physical reticle limitations,” Biswas said.
These tiles are interconnected using Intel’s embedded multi-die interconnect bridge (EMIB) technology, which allows them to communicate with each other and share resources. Using the technology, “we are now able to increase core counts, caches, memory, and I/O,” Biswas said.
If this sounds familiar, that’s because AMD did the same thing four years ago with its EPYC and Threadripper, and later Ryzen, processor families. AMD’s latest EPYC processors feature up to eight chiplets, each with up to eight cores and 32 megabytes of level 3 (L3) cache, for a total of 64 cores, 128 threads, and 256 megabytes of L3 cache.
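The arithmetic behind those EPYC totals is simple multiplication; a quick sketch, using only the figures cited above:

```python
# Illustrative arithmetic for AMD's chiplet-based EPYC layout as described
# in the article; figures come from the text, not a spec sheet.
chiplets = 8
cores_per_chiplet = 8
l3_mb_per_chiplet = 32
threads_per_core = 2  # simultaneous multithreading (SMT)

total_cores = chiplets * cores_per_chiplet       # 8 x 8  = 64 cores
total_threads = total_cores * threads_per_core   # 64 x 2 = 128 threads
total_l3_mb = chiplets * l3_mb_per_chiplet       # 8 x 32 = 256 MB L3

print(total_cores, total_threads, total_l3_mb)   # 64 128 256
```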
Chiplet architectures have a number of advantages over monolithic designs, which will likely help Intel compete with AMD. Smaller dies are cheaper to manufacture and yield better, and the modularity lets chipmakers increase core counts dramatically, since adding cores is simply a matter of adding more tiles to the processor package.
And thanks to improving interconnect technologies like AMD’s Infinity Fabric and Intel’s EMIB, chipmakers have been able to minimize latency challenges associated with die-to-die communications.
According to Biswas, Sapphire Rapids’ compute tiles will have full access to all resources — including cache, memory, and input/output (I/O) functionality — on every tile. This means any one core will have access to all of the resources on the chip and won’t be limited to what’s built into its own tile.
So while Intel is taking a cue from AMD, it appears Intel’s chips won’t face the same kind of cache limitation seen with EPYC, where, despite a total of up to 256 megabytes of L3 cache, any single core can directly use only the 32 megabytes on its own chiplet.
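The difference between the two cache topologies can be shown with a toy model. This is a simplified illustration, not a hardware simulation, and the Sapphire Rapids tile count and cache size used below are placeholders, not figures Intel has disclosed:

```python
# Toy model contrasting chiplet-local L3 (as described for EPYC) with
# cross-tile shared access (as described for Sapphire Rapids).
def visible_l3_mb(total_l3_mb: int, tiles: int, shared: bool) -> int:
    """L3 capacity, in megabytes, that a single core can directly use."""
    # Shared topology: one core can reach cache on every tile.
    # Local topology: one core sees only its own tile's slice.
    return total_l3_mb if shared else total_l3_mb // tiles

# EPYC as described above: 256 MB total across 8 chiplets, local-only L3.
print(visible_l3_mb(256, 8, shared=False))  # 32

# Hypothetical Sapphire Rapids-style package: 4 tiles, shared access.
# (Tile count and capacity here are illustrative placeholders.)
print(visible_l3_mb(120, 4, shared=True))   # 120
```

The point of the model: with shared access, the per-core figure equals the package total, whereas with tile-local cache it is the total divided by the number of tiles.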