Dedicated Reserve Compute

Dedicated AI Compute for Large-Scale AI Projects

High-performance NVIDIA infrastructure for sustained workloads.
For teams requiring 128+ GPUs for training, tuning, or high-throughput compute.

Trusted by leading AI labs and fast-scaling startups.

Offerings

We own the right GPUs for your workload.

Deploy Today
Contact sales for a long-term contract
01 NVIDIA HGX H100
02 NVIDIA HGX B200 (4× faster performance)
03 Cloud DGX GB200
04 NVIDIA HGX B300
05 NVIDIA HGX GB300

High-performing teams choose dedicated clusters

Dedicated performance, predictable pricing.
Reserve what you need and scale as you grow. Ideal for long-term training, tuning, and large-scale AI workloads.
Full isolation, enterprise-grade reliability.
Your reserved AI compute is physically isolated for better performance and control. Built-in redundancy and world-class uptime let your runs complete without disruption.
Bare metal speed with no virtualization overhead.
Run native workflows directly on hardware we own. With no hypervisor or compatibility issues in the way, parallel jobs get maximum throughput, and you can ingest and process data at lightning-fast speeds.

NVIDIA HGX H100

GPU
HGX H100
GPU Memory
640 GB HBM3 total
VCPUs
2× Intel Xeon 8480C CPUs (PCIe Gen5)
Storage
OS: 2× 1.92 TB NVMe M.2 SSDs (RAID 1); Data: 8× 3.84 TB NVMe U.2 SSDs (RAID 0)
Network Bandwidth
3.2 Tb/s (3,200 Gbps InfiniBand)

NVIDIA HGX B200

GPU
HGX B200
GPU Memory
192 GB HBM3e per GPU
VCPUs
2× Intel Xeon 6 Performance 6767P CPUs
Storage
OS: 2× 960 GB M.2 (RAID 1); Data: 4× 3.84 TB NVMe (15.36 TB total)
Network Bandwidth
0.8 TB/s

NVIDIA HGX GB200*

GPU
GB200 NVL72
GPU Memory
186 GB HBM3e per GPU | 8 TB/s memory bandwidth
VCPUs
72-core Arm-based NVIDIA Grace CPUs serve as the vCPUs
Network Bandwidth
Contact Sales

NVIDIA HGX B300*

GPU
HGX B300
GPU Memory
Up to 2.3 TB total
VCPUs
2× Intel Xeon CPUs (exact model unspecified)
Storage
OS: 2× 1.9 TB NVMe M.2 SSDs; Internal: 8× 3.84 TB NVMe E1.S SSDs
Network Bandwidth
1.6 TB/s

NVIDIA HGX GB300*

GPU
GB300 NVL72
GPU Memory
20.1 TB HBM3e total
VCPUs
36× NVIDIA Grace CPUs (2,592 Arm cores) alongside the 72× Blackwell Ultra GPUs
Storage
Not Available
Network Bandwidth
7.2 TB/s
*Liquid-cooled systems
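As a back-of-the-envelope check on the memory figures above, the sketch below (illustrative only, not Voltage Park tooling) estimates whether a model's weights fit in a single node's aggregate GPU memory. The per-node totals come from the table; the model sizes and the 2-bytes-per-parameter (bf16) assumption are examples, and real jobs also need headroom for optimizer state, activations, and KV cache.

# Illustrative sketch: does a model's bf16 weight footprint fit in one node's
# aggregate HBM? Per-node totals are taken from the table above; model sizes
# and bytes-per-parameter are assumptions for the example.
BYTES_PER_PARAM_BF16 = 2

NODE_HBM_GB = {
    "HGX H100": 640,    # 8 x 80 GB HBM3
    "HGX B300": 2300,   # "up to 2.3 TB total"
}

def weights_fit(params_billions: float, node: str) -> bool:
    """True if the bf16 weights alone fit in the node's total GPU memory."""
    weights_gb = params_billions * BYTES_PER_PARAM_BF16  # 1B params at 2 B/param ~ 2 GB
    return weights_gb <= NODE_HBM_GB[node]

print(weights_fit(70, "HGX H100"))   # 140 GB of weights vs 640 GB  -> True
print(weights_fit(405, "HGX H100"))  # 810 GB of weights vs 640 GB  -> False
print(weights_fit(405, "HGX B300"))  # 810 GB of weights vs 2300 GB -> True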

Need reliable, affordable compute?

Get bare metal access, clear pricing, and exceptional customer service for your most demanding workloads.

Get Started

Save ~30% versus hyperscaler pricing

No surprise ingress, egress, or support charges.
We own the hardware, you control the budget.

Dedicated Reserve

Contact for pricing
✓ 6+ month contracts with friendly payment terms.
✓ 32–8000+ H100 GPUs in a single configuration.
✓ 3200 Gbps InfiniBand.

On-Demand, 3200 Gbps InfiniBand

$2.49/hr
✓ Pay only for what you need.
✓ 8–1016 HGX H100 GPUs for your bursts and experimentation.
✓ Self-serve. Up within 15 mins. No minimum terms.

On-Demand, Ethernet

$1.99/hr
✓ Pay only for what you need.
✓ 1–1016 HGX H100 GPUs for your bursts and experimentation.
✓ Self-serve. Up within 15 mins. No minimum terms.
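For quick budgeting against the on-demand rates above, here is a minimal cost-arithmetic sketch. The hourly rates are the list prices quoted on this page; the 730 hours per month of steady usage and the 64-GPU example are assumptions for illustration.

# Rough monthly-cost arithmetic for the on-demand tiers above. Rates are the
# per-GPU-hour list prices on this page; 730 hrs/month is an assumed average.
INFINIBAND_RATE = 2.49  # $/GPU-hr, On-Demand with 3200 Gbps InfiniBand
ETHERNET_RATE = 1.99    # $/GPU-hr, On-Demand Ethernet

def monthly_cost(gpus: int, rate_per_gpu_hr: float, hours: float = 730) -> float:
    """Approximate monthly spend for a steady allocation of `gpus` GPUs."""
    return gpus * rate_per_gpu_hr * hours

print(f"64 GPUs, InfiniBand: ${monthly_cost(64, INFINIBAND_RATE):,.0f}/month")  # ~$116,333
print(f"64 GPUs, Ethernet:   ${monthly_cost(64, ETHERNET_RATE):,.0f}/month")    # ~$92,973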

Build on dedicated H100s at Voltage Park

Go from 64 up to 4,064 NVIDIA HGX H100 GPUs per cluster. Ideal for massive parallel processing, model training, and high-performance computing tasks.
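For a sense of scale, the node counts behind that 64–4,064 GPU range work out as in the short sketch below, assuming the standard 8-GPU HGX H100 node configuration (the 8-GPU count is an assumption consistent with the 640 GB per-node memory figure above, not a quoted cluster layout).

# Node-count arithmetic behind the 64-4,064 GPU range above, assuming the
# standard 8-GPU HGX H100 node configuration.
GPUS_PER_NODE = 8

def nodes_for(gpu_count: int) -> int:
    """Number of 8-GPU HGX H100 nodes needed for a target GPU count."""
    return -(-gpu_count // GPUS_PER_NODE)  # ceiling division

print(nodes_for(64))    # 8 nodes
print(nodes_for(4064))  # 508 nodes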