Edge Infrastructure Review

Lambda doubles down on NVIDIA stack with 10,000+ Blackwell GPUs and CPO networking push

Lambda recently announced it’s becoming a launch partner for the NVIDIA Vera CPU platform and NVIDIA STX.

The GPU-native AI infrastructure provider will deploy NVIDIA Quantum-X800 InfiniBand photonics co-packaged optics in an AI factory with 10,000+ NVIDIA Blackwell Ultra GPUs.

Lambda’s bare metal instances have made their way out of the lab and into the core cloud offering, giving users direct access to hardware while avoiding virtualization overhead for distributed AI training workloads.

Designed for launching thousands of parallel AI environments, the NVIDIA Vera CPU platform delivers high memory bandwidth that optimizes reinforcement learning and agentic AI workloads.

The NVIDIA STX is a modular architecture for AI storage that augments inference, analytics, and training with next-gen hardware-optimized KV-cache management.

Co-Packaged Optics (CPO) networking enables faster, more cost-efficient AI infrastructure suitable for large-scale AI factories, alleviating major efficiency bottlenecks found in current approaches.

“The race to build AI factories isn’t won on GPU counts alone,” says Dave Salvator, director of accelerated computing at NVIDIA. “Network architecture is what determines whether those systems can perform at scale. Getting this right is what allows AI infrastructure to power services used by hundreds of millions of people around the world.”

Lambda oversees one of the largest deployments of NVIDIA Quantum-X800 CPO switches, highlighting how critical network architecture is when scaling AI systems.

These announcements further bolster Lambda’s AI infrastructure platform, which empowers frontier labs, enterprises, and hyperscalers with proven and energy-efficient workhorses built for reliability at scale.

Lambda continues its mission to make AI compute ubiquitous, leveraging a decade-long collaboration with NVIDIA to advance its Superintelligence Cloud platform.
