Chip roadmap for the next decade
Imec, the world-renowned semiconductor research firm, recently shared its sub-1nm silicon and transistor roadmap at the ITF World event in Antwerp, Belgium. The roadmap gives us an idea of the timeline through 2036 for the next major process nodes and transistor architectures the company will develop in its labs alongside industry giants such as TSMC, Intel, Nvidia, AMD, Samsung, and ASML, among many others. The company also outlined a shift to what it calls CMOS 2.0, which involves breaking a chip's functional units, such as the L1 and L2 caches, into a more advanced 3D design than today's chiplet-based approaches.
As a reminder, 10 angstroms equals 1 nanometer, so imec's roadmap extends into sub-"1nm" process nodes. The roadmap outlines that standard FinFET transistors will last until 3nm before transitioning to the new gate-all-around (GAA) nanosheet design, which will enter high-volume manufacturing in 2024. Imec charts a course through 2nm to forksheet designs at A10 (1nm) and A7 (0.7nm), followed by breakthrough designs such as CFETs and atomic channels for A5 and A2, respectively.
Moving to these smaller nodes has become more expensive over time, so the standard approach of building a processor as a single large monolithic die has given way to chiplets. Chiplet-based designs split a chip's various functions across multiple smaller dies that are wired together, allowing them to work as one cohesive unit, albeit with trade-offs.
Imec's vision for the CMOS 2.0 paradigm involves breaking chips down into smaller parts, separating cache and memory into their own units with different transistors, which are then stacked in a 3D arrangement on top of other chip functions. This approach also relies heavily on the backside power distribution network (BPDN), which routes all power through the backside of the transistors.
Let's take a closer look at the imec roadmap and the new CMOS 2.0 approach.
As you can see in the image above, the industry faces seemingly insurmountable challenges as nodes progress, but demand for more computing power, particularly for machine learning and artificial intelligence, is growing exponentially, and that demand is not easy to meet. Costs are skyrocketing while power consumption in high-end chips steadily climbs. Power scaling remains a challenge because CMOS operating voltages stubbornly refuse to drop below roughly 0.7 volts, and the continued push to larger chips creates power delivery and cooling challenges that will require entirely new workarounds.
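To illustrate why that voltage floor matters, here is a minimal sketch using the standard first-order dynamic power relation P ≈ α·C·V²·f; the activity factor, capacitance, and frequency values are illustrative assumptions of mine, not figures from imec's presentation.

```python
# First-order CMOS dynamic (switching) power: P = alpha * C * V^2 * f
# All values below are illustrative assumptions, not figures from imec.

def dynamic_power(alpha: float, c_eff: float, v_dd: float, freq: float) -> float:
    """Switching power in watts for activity factor alpha, effective
    capacitance c_eff (farads), supply voltage v_dd (volts), and clock freq (Hz)."""
    return alpha * c_eff * v_dd ** 2 * freq

# A hypothetical logic block: 2 nF of switched capacitance at 3 GHz.
at_0v9 = dynamic_power(alpha=0.2, c_eff=2e-9, v_dd=0.9, freq=3e9)  # ~0.97 W
at_0v7 = dynamic_power(alpha=0.2, c_eff=2e-9, v_dd=0.7, freq=3e9)  # ~0.59 W

# Dropping from 0.9 V to 0.7 V cuts switching power by ~40% thanks to the V^2 term,
# but with the supply voltage stuck near 0.7 V, that quadratic lever is gone;
# further savings must come from lower capacitance or architectural changes.
print(f"0.9 V: {at_0v9:.2f} W, 0.7 V: {at_0v7:.2f} W, ratio: {at_0v7 / at_0v9:.2f}")
```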
While transistor counts continue to double along the predictable path of Moore's Law, other fundamental problems grow worse with each new generation of chips, such as interconnect bandwidth that lags severely behind the computing power of modern CPUs and GPUs, hindering performance and limiting the effectiveness of those extra transistors.
imec transistor and process node roadmap
Faster, denser transistors are still the priority, though, and the first wave arrives with the gate-all-around (GAA)/nanosheet devices debuting at the 2nm node in 2024, displacing the tri-gate FinFETs that power today's most advanced chips. GAA transistors improve density and performance, such as faster transistor switching at the same drive current as multiple fins. Leakage is also significantly reduced because the channel is completely surrounded by the gate, and adjusting the thickness of the channel lets designers optimize for power or performance.
We've seen different variations of this transistor technology adopted by several chipmakers. TSMC plans to mass-produce its GAA-equipped N2 node in 2025, making it the last of the big players to field the new transistors. Intel's RibbonFET, used on the "Intel 20A" process node, features four stacked nanosheets, each completely surrounded by a gate, and will debut in 2024. Samsung was the first to ship GAA in products, but its low-volume SF3E pipe-cleaner node will not see mass production; instead, the company will bring its follow-on node to high-volume manufacturing in 2024.
As a reminder, 10 angstroms (A) equals 1 nanometer. That means A14 is 1.4nm, A10 is 1nm, and we'll enter the sub-1nm era with A7 in the 2030 time frame. But keep in mind that these node names often don't correspond to actual physical feature sizes on the chip.
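As a quick sanity check on that naming convention, here is a minimal sketch; the helper function and node list are my own for illustration, not part of imec's material.

```python
# Angstrom-style node names: "A14" means 14 angstroms, and 10 angstroms = 1 nm.
# The node list below simply mirrors the names used in this article.

def node_to_nm(name: str) -> float:
    """Convert an 'A<number>' node label to nanometers (10 A = 1 nm)."""
    return int(name.lstrip("A")) / 10

for node in ["A14", "A10", "A7", "A5", "A2"]:
    print(f"{node} -> {node_to_nm(node)} nm")

# Prints: A14 -> 1.4 nm, A10 -> 1.0 nm, A7 -> 0.7 nm, A5 -> 0.5 nm, A2 -> 0.2 nm
# Remember these are marketing labels, not physical gate lengths.
```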
Imec expects forksheet transistors to span from the 1nm (A10) node all the way to the A7 node (0.7nm). As you can see on the second slide, the design places the NMOS and PMOS devices closer together and uses a dielectric barrier to separate them, allowing for higher performance and/or better density.
Complementary FET (CFET) transistors will shrink the footprint even further when they arrive at the A5 node, allowing for denser standard cell libraries. Eventually, we will see versions of CFETs with atomic channels, further improving performance and scalability. CFET transistors (you can read more about them here) stack NMOS and PMOS devices on top of one another to achieve higher densities. CFETs should mark the end of nanosheet device scaling, and the end of the visible roadmap.
However, other important technologies will be required to break through the performance, power, and density scaling barriers, which imec envisions will require the new CMOS 2.0 paradigm and system technology co-optimization (STCO).
STCO and Backside Powering
System technology co-optimization (STCO) requires rethinking the design process by modeling the requirements of the system and the target application, and then using this knowledge to inform the design decisions that go into the chip. This approach often results in "disaggregating" the functional units that are normally part of a monolithic processor, such as power delivery, I/O, and cache, splitting them into separate units whose transistors are optimized for each unit's desired performance characteristics without also driving up cost.
One of the goals of fully disaggregating a standard chip design is to split cache/memory into its own distinct layers in a 3D-stacked design (more on that below), but this requires reducing the complexity at the top of the chip stack. Reworking the back end of line (BEOL), the upper wiring layers that connect transistors together and handle communication (signaling) and power delivery, is key to this effort.
Unlike today's designs, which deliver power from the top of the chip down to the transistors, backside power distribution networks (BPDN) use TSVs to route all power directly to the backside of the transistors, separating power delivery from the data-carrying interconnects, which remain in their normal location on the other side of the chip. Separating the power and data interconnects improves voltage droop characteristics, enabling faster transistor switching, while also allowing denser signal routing on top of the chip. There are signal integrity benefits as well, since the simplified routing reduces resistance and capacitance.
Moving the power supply network to the bottom of the die allows for easier wafer-to-wafer bonding on top of the die, unlocking the potential to stack logic on memory. Imec even envisions possibly moving other functions to the backside of the wafer, such as global interconnects or clock signals.
Intel has announced its own version of BPDN technology, called PowerVia, which will debut in 2024 with the Intel 20A node; the company will reveal more details about the technology at an upcoming VLSI symposium. Meanwhile, TSMC has announced that it will bring BPDN to its mass-produced N2P node in 2026, so it will trail Intel by quite a while with this technology. There are also rumors that Samsung will adopt the technique at its 2nm node.
CMOS 2.0: The road to true 3D chips
CMOS 2.0 is imec's latest vision for future chip design, encompassing fully 3D chip designs. We've seen memory stacking with AMD's second-generation 3D V-Cache, which stacks L3 cache on top of the processor for increased capacity, but imec envisions the entire cache hierarchy contained within its own layers, with the L1, L2, and L3 caches each stacked vertically on their own dies above the transistors that make up the processing cores. Each level of cache would be built with the transistors best suited to the task, which could mean older nodes for SRAM.
That matters more and more because SRAM has begun to scale significantly more slowly than logic: with each shrink, cache takes up a larger percentage of the die, which raises the cost per MB and discourages chipmakers from using larger caches. The cost reduction that comes from moving cache into a 3D stack built on a lower-density node could therefore also enable larger caches than we've seen in the past. When implemented properly, 3D stacking can also help alleviate the latency issues associated with larger caches.
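As a back-of-the-envelope illustration of that cost pressure, here is a minimal sketch; the bit cell areas and silicon costs are purely hypothetical assumptions for demonstration, not figures from imec or any foundry.

```python
# Rough, illustrative estimate of SRAM cost per MB of on-die cache.
# All numbers below are hypothetical assumptions for demonstration only.

MB_BITS = 8 * 1024 * 1024  # bits per megabyte

def sram_cost_per_mb(bitcell_um2: float, cost_per_mm2: float) -> float:
    """Cost in dollars of one MB of SRAM, given a bit cell area in um^2
    and a silicon cost in dollars per mm^2 (1 mm^2 = 1e6 um^2)."""
    area_mm2 = MB_BITS * bitcell_um2 / 1e6
    return area_mm2 * cost_per_mm2

# If the SRAM bit cell barely shrinks on the newer node while cost per mm^2 rises,
# the cost per MB of cache goes up even as logic keeps getting cheaper per transistor.
older_node = sram_cost_per_mb(bitcell_um2=0.027, cost_per_mm2=0.10)  # hypothetical
newer_node = sram_cost_per_mb(bitcell_um2=0.021, cost_per_mm2=0.17)  # hypothetical

print(f"older node: ${older_node:.3f}/MB, newer node: ${newer_node:.3f}/MB")
```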
These CMOS 2.0 technologies will utilize 3D stacking techniques such as wafer-to-wafer hybrid bonding to form direct die-to-die 3D interconnects.
As you can see in the image above, imec also has a 3D-SOC roadmap outlining the continued shrinking of the interconnect pitches that bond 3D designs together, enabling faster, denser connections in the future. These advances will arrive in the coming years through newer types of interconnects and processing methods.