Available In Stock

NVIDIA H200 SXM
Tensor Core GPU

Built for memory-intensive AI workloads with expanded HBM3e capacity and improved throughput for trillion-token pipelines.

Architecture

Hopper

Process

4nm Custom

Transistors

80 Billion

Memory

141GB HBM3e

Technical Infrastructure

Comprehensive performance metrics and architectural specifications.

Parameter Value
Brand NVIDIA
Product Status Available
Application Server
Architecture Hopper
Memory 141GB HBM3e
Memory Bandwidth 4.8 TB/s
Tensor Performance 1.9 PFLOPS FP8
Interconnect NVLink 4.0
TDP 700W
Form Factor SXM
Use Case Memory-intensive AI workloads
Deployment Enterprise cluster

*Lead time depends on regional stock allocation.
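As a rough illustration of what the 141GB HBM3e capacity in the table above accommodates, the sketch below estimates the weight footprint of a model at different precisions. The parameter counts are hypothetical examples; real deployments also need headroom for KV cache, activations, and framework overhead.

```python
# Back-of-envelope check: do a model's weights fit in the H200's
# 141 GB of HBM3e? (Weights only -- actual usage also includes
# KV cache, activations, and framework overhead.)

H200_HBM3E_GB = 141  # per-GPU memory from the spec table

def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB for a given parameter count."""
    return params_billions * 1e9 * bytes_per_param / 1e9

def fits_on_one_gpu(params_billions: float, bytes_per_param: float) -> bool:
    return weights_gb(params_billions, bytes_per_param) <= H200_HBM3E_GB

# A hypothetical 70B-parameter model:
print(weights_gb(70, 2))        # FP16 (2 bytes/param): 140.0 GB -- a tight fit
print(fits_on_one_gpu(70, 2))   # True (weights alone)
print(fits_on_one_gpu(70, 1))   # FP8 (1 byte/param): 70 GB, comfortable
print(fits_on_one_gpu(175, 2))  # False: 175B at FP16 needs multiple GPUs
```

This is one reason FP8 matters on this part: halving bytes per parameter roughly doubles the model size that fits on a single device.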

View Detailed Architecture Whitepaper

Precision Engineering

Explore every detail of the hardware that powers the modern AI era.

HBM3e Density

Expanded memory footprint for large context and retrieval workloads.

Cluster Memory Scaling

Designed for memory-heavy training clusters and analytics nodes.
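To put cluster memory scaling in concrete terms, the sketch below aggregates per-GPU HBM3e across a node and a cluster. The 8-GPU node size is an assumption (a common SXM baseboard configuration), not something stated on this page.

```python
# Aggregate HBM3e across a multi-GPU deployment (illustrative;
# 8-GPU SXM nodes are assumed here, but node sizes vary).

PER_GPU_HBM3E_GB = 141  # per-GPU memory from the spec table

def node_memory_gb(gpus_per_node: int = 8) -> int:
    """Total HBM3e on one node."""
    return gpus_per_node * PER_GPU_HBM3E_GB

def cluster_memory_tb(nodes: int, gpus_per_node: int = 8) -> float:
    """Total HBM3e across the cluster, in TB (decimal)."""
    return nodes * node_memory_gb(gpus_per_node) / 1000

print(node_memory_gb())       # 1128 GB per assumed 8-GPU node
print(cluster_memory_tb(16))  # 18.048 TB across a 16-node cluster
```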

Hopper Platform

Continuity for enterprise deployments requiring mature software stacks.

Ready to Transform Your Infrastructure?

Our engineering team is ready to assist you in configuring the ideal deployment for your specific workload.

Explore Infrastructure Family

NVIDIA H100 SXM
AI Accelerators In Stock

View System Specs
NVIDIA Blackwell B200
AI Accelerators New Arrival

View System Specs