
STMicroelectronics Showcases AI Data Center Power Architecture

At APEC, STMicroelectronics highlighted how power architecture is evolving to meet the demands of next-generation artificial intelligence data centers. In a discussion at the ST booth, Paolo Sandry of STMicroelectronics walked through the company’s approach to supporting new infrastructure built around higher power levels, tighter space constraints, and the need for better efficiency.

One of the big themes was the shift toward an 800V DC bus architecture for AI data centers. As compute demands continue to rise, traditional approaches to power delivery are being pushed harder, especially in server environments packed with GPUs and other high-performance processors. ST’s focus is on helping designers address those challenges with reference designs and power conversion solutions that improve density and efficiency while fitting into the physical limits of modern server systems.

Why 800V DC Is Getting Attention

According to Sandry, the new AI infrastructure model begins with a megawatt-class sidecar that converts three-phase AC power into 800V DC. That 800V DC bus is then distributed across individual server racks. From there, infrastructure designers can choose between different power conversion strategies depending on how they want to build out the system.

One option is a centralized architecture, where the 800V-to-54V conversion happens in a centralized power supply unit. After that, local conversion at the server level steps the voltage down from 54V to 12V, and then from 12V to the core voltages required by GPUs and other processors.

The second option is a distributed architecture. In this setup, each server board has its own power distribution board, allowing the 800V-to-54V conversion to happen closer to the load. The system then continues with additional conversion stages down to the levels required by CPUs, GPUs, and other xPU devices.
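Either way, the conversion chain is multiplicative: the end-to-end efficiency is the product of the individual stage efficiencies, which is why every stage matters at these power levels. The short sketch below illustrates the 800V-to-54V-to-12V-to-core chain described above; all of the stage efficiency figures are hypothetical placeholders chosen for illustration, not numbers published by ST.

```python
# Minimal sketch of the conversion chain described in the article:
# end-to-end efficiency is the product of the stage efficiencies.
# All efficiency values are hypothetical placeholders, not ST figures.

def chain_efficiency(stage_efficiencies):
    """Multiply the per-stage efficiencies to get the overall figure."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# 800V bus -> 54V -> 12V intermediate bus -> processor core voltage
stages = {
    "800V -> 54V": 0.975,
    "54V -> 12V (intermediate bus)": 0.97,
    "12V -> core (multiphase)": 0.92,
}

overall = chain_efficiency(stages.values())
load_kw = 1000.0  # a 1 MW rack, the upper end cited in the article
loss_kw = load_kw / overall - load_kw  # power lost as heat in conversion

print(f"End-to-end efficiency: {overall:.1%}")
print(f"Conversion loss at a {load_kw:.0f} kW load: {loss_kw:.0f} kW")
```

Even with these placeholder numbers, the chain drops well below 90% end-to-end, and the lost power reappears as heat the cooling system must remove, which is why each stage of the chain is a target for improvement.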

Centralized vs. Distributed Power Delivery

Sandry explained that the choice between these approaches is not always made by the component engineer alone. Instead, it is often an infrastructure-level decision based on how conservative or aggressive a company wants to be when adopting 800V bus bar distribution.

For some organizations, a more traditional implementation path may make sense, using the benefits of 800V where practical while delaying a full transition. For others, moving to a more distributed design can make better use of available space and power density, especially when working with very thin server trays or blade-style hardware where every millimeter counts.

That makes form factor a major part of the conversation. In AI servers, space is limited, and designers are often trying to fit high-performance power conversion hardware into trays already crowded with multiple GPUs. ST’s reference designs are aimed at helping solve that challenge by balancing performance, size, and efficiency.

Power Density and Efficiency Are Critical

A major takeaway from the discussion was just how important efficiency becomes at these power levels. When systems are operating at the scale of a 500kW to 1MW server rack, even a small improvement in efficiency can translate into meaningful savings in total cost of ownership.
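A quick back-of-envelope calculation shows why. The load, efficiency figures, and electricity price below are all illustrative assumptions, not figures from ST, but they give a sense of the scale involved.

```python
# Back-of-envelope: what a one-point efficiency gain is worth at rack scale.
# Load, efficiencies, and electricity price are illustrative assumptions.

HOURS_PER_YEAR = 8760

def annual_energy_cost(load_kw, efficiency, price_per_kwh):
    """Cost of the grid energy drawn to deliver load_kw to the processors."""
    input_kw = load_kw / efficiency
    return input_kw * HOURS_PER_YEAR * price_per_kwh

load_kw = 1000.0   # a 1 MW rack, the upper end cited in the article
price = 0.10       # USD per kWh, hypothetical

baseline = annual_energy_cost(load_kw, 0.90, price)
improved = annual_energy_cost(load_kw, 0.91, price)

print(f"Annual savings from 90% -> 91% efficiency: ${baseline - improved:,.0f}")
```

Under these assumptions, a single percentage point of efficiency is worth on the order of ten thousand dollars per rack per year in energy alone, before counting the reduced cooling load, and a large AI facility runs many such racks.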

That is why ST is placing a strong emphasis on both power density and efficiency performance. Designers need solutions that not only fit into physically constrained server hardware, but also reduce wasted energy and help manage the thermal and operational costs that come with large-scale AI computing.

In the reference design shown at the booth, ST demonstrated conversion stages that included gallium nitride (GaN) devices on both the primary and secondary sides, along with integrated magnetics using flux cancellation techniques. The company also showed an intermediate bus conversion stage and traditional multiphase power conversion for processors such as GPUs and CPUs.

Reference Designs for a Fast-Moving Market

ST emphasized that these are reference designs, not one-size-fits-all final products. The exact form factor may vary depending on the server architecture, but the goal is to provide a design foundation that delivers the power density needed for modern AI servers.

That approach gives infrastructure designers and system builders a starting point as they evaluate how to support the growing demands of AI workloads. Whether the path forward is a centralized or distributed power delivery model, the pressure is on to create solutions that are compact, efficient, and scalable.

ST’s Message at APEC

ST’s presentation at APEC made it clear that AI data centers are driving a major rethink in power distribution. Moving to 800V DC architectures, increasing the use of GaN-based conversion stages, and focusing on power density are all part of the industry’s response to ever-higher compute requirements.

For engineers and infrastructure designers, the challenge is no longer just delivering power. It is delivering massive amounts of power efficiently, within tight physical constraints, and in a way that supports the next wave of AI hardware.

To learn more, visit ST.com.
