
NVIDIA Blackwell B200 Tensor Core GPU

Next-generation generative AI performance on the Blackwell platform, engineered for multi-trillion-parameter model training and inference.

Architecture: Blackwell
Process: TSMC 4NP
Transistors: 208 billion
Memory: 192 GB HBM3e


Technical Infrastructure

Comprehensive performance metrics and architectural specifications.

Parameter            Value
Brand                NVIDIA
Product Status       New
Application          Server
Tensor Performance   20 PFLOPS (FP4)
TDP                  1,000 W
Memory               192 GB HBM3e
Memory Bandwidth     8 TB/s
Interconnect         NVLink 5.0
Series               NVIDIA Blackwell
Model                NVIDIA B200
Form Factor          SXM
Deployment           Enterprise Cluster

*Configuration and allocation depend on the latest supply schedule.
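The figures in the specification table support a quick back-of-envelope sizing calculation. A minimal Python sketch, illustrative only: the constants come from the table above, and real deployments reserve capacity for activations, KV cache, and runtime overhead.

```python
# Back-of-envelope sizing from the B200 spec table (illustrative only).

FP4_BYTES = 0.5            # FP4 packs two parameters per byte
memory_bytes = 192e9       # 192 GB HBM3e
tensor_flops = 20e15       # 20 PFLOPS (FP4)
bandwidth_bytes = 8e12     # 8 TB/s memory bandwidth

# Upper bound on FP4 model parameters that fit in on-package memory.
max_params = memory_bytes / FP4_BYTES  # 384 billion

# Compute-to-bandwidth ratio: FLOPs available per byte read from HBM.
flops_per_byte = tensor_flops / bandwidth_bytes  # 2,500 FLOPs/byte

print(f"Max FP4 parameters in memory: {max_params / 1e9:.0f}B")
print(f"Compute-to-bandwidth ratio:   {flops_per_byte:,.0f} FLOPs/byte")
```

The ratio indicates how much arithmetic a kernel must perform per byte of memory traffic before it becomes compute-bound rather than bandwidth-bound on this part.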

View the Detailed Architecture Whitepaper

Precision Engineering

Explore every detail of the hardware that powers the modern AI era.

Blackwell Package

Multi-die compute package built for next-generation transformer models.

Cluster Integration

Purpose-built for high-bandwidth cluster fabrics and dense AI pods.

Architecture Detail

Optimized signal routing for extreme throughput and scaling.

Ready to Transform Your Infrastructure?

Our engineering team can help you configure the ideal deployment for your specific workload.

Explore Infrastructure Family

NVIDIA H100 SXM
AI Accelerators · In Stock

View System Specs
NVIDIA H200 SXM
AI Accelerators · In Stock

View System Specs