CoreWeave sets AI infrastructure benchmark with NVIDIA GB300 NVL72 rollout

CoreWeave became the first AI GPU cloud provider to deploy NVIDIA GB300 NVL72 systems, offering significant performance improvements for AI workloads.
The NVIDIA GB300 NVL72 delivers up to 10x better user responsiveness, 5x higher energy efficiency, and a 50x increase in output for reasoning model inference compared with previous-generation architectures.
CoreWeave collaborated with Dell, Switch, and Vertiv on the deployment, integrating the systems with its cloud-native software stack, including its Kubernetes-based services. The company recently integrated hardware-level data and cluster health events into the Weights & Biases platform, which it acquired earlier in 2025.
“CoreWeave is constantly working to push the boundaries of AI development further, deploying the bleeding-edge cloud capabilities required to train the next generation of AI models,” says Peter Salanki, co-founder and chief technology officer at CoreWeave. “We’re proud to be the first to stand up this transformative platform and help innovators prepare for the next exciting wave of AI.”
CoreWeave has a history of first-to-market AI infrastructure offerings, including NVIDIA H200 GPUs and GB200 NVL72 systems.
In June 2025, CoreWeave, NVIDIA, and IBM set a record MLPerf Training benchmark result using NVIDIA GB200 Grace Blackwell Superchips. CoreWeave is the only hyperscaler to receive the highest Platinum rating from SemiAnalysis’s GPU Cloud ClusterMAX Rating System.
Since 2017, CoreWeave has expanded its data center footprint across the US and Europe and recently announced its intention to acquire Core Scientific in an all-stock $9B transaction. The company also announced it has become the first cloud platform to make NVIDIA RTX PRO 6000 Blackwell Server Edition instances generally available.