LEVIATHAN SYSTEMS
CURRENTLY DEPLOYING

NVIDIA GB300 NVL72 Infrastructure & Deployment_

Leviathan Systems is actively deploying GB300 NVL72 infrastructure for hyperscale AI training facilities in the United States. This is current, hands-on deployment experience with the latest generation of NVIDIA's rack-scale AI systems.

What Is the NVIDIA GB300 NVL72?_

The NVIDIA GB300 NVL72 is the Blackwell Ultra evolution of the GB200 NVL72. It replaces the B200 GPU with the B300 GPU, upgrading memory from 192 GB to 288 GB HBM3e per GPU through 12-high HBM stacks (vs. 8-high on B200). The result is 1.5x more AI performance than the GB200 NVL72 in the same rack footprint.

GB300 NVL72 Infrastructure Specifications_

GPUs per Rack: 72 Blackwell Ultra B300 GPUs
CPUs per Rack: 36 Grace CPUs
Memory per GPU: 288 GB HBM3e (12-high stacks)
Rack Power: ~120 kW (same envelope as GB200)
Interconnect: NVLink 5.0, 1.8 TB/s per GPU, 130 TB/s total
Compute Performance: ~1.1 exaFLOPS FP4 per rack
Weight: ~1,360 kg (~3,000 lbs)
Cooling: 100% direct liquid cooling (mandatory)
Power Architecture: Supports new 800V DC distribution

What Changes from GB200 to GB300?_

The physical rack form factor is identical. The GB300 NVL72 fits in the same space, uses the same cooling infrastructure, and presents the same mechanical interface as the GB200 NVL72. The changes are inside the compute trays: each tray holds four B300 GPUs paired with two Grace CPUs.
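
A quick consistency check of that per-tray layout against the rack totals, as a minimal Python sketch. The 18-tray count is derived from these figures, not quoted from a spec sheet:

    # 72 GPUs at 4 per tray implies 18 compute trays; 18 trays x 2 Grace CPUs = 36 CPUs.
    GPUS_PER_TRAY, CPUS_PER_TRAY = 4, 2
    trays_per_rack = 72 // GPUS_PER_TRAY          # 18 compute trays per rack
    assert trays_per_rack * GPUS_PER_TRAY == 72   # matches 72 B300 GPUs per rack
    assert trays_per_rack * CPUS_PER_TRAY == 36   # matches 36 Grace CPUs per rack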

Memory Density

288 GB HBM3e per GPU means 20.7 TB total GPU memory per rack (vs. 13.8 TB for GB200). Larger models can be trained on fewer racks.
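
The arithmetic behind those totals, as a one-line check using the figures from the specification table above:

    # Per-rack GPU memory: 72 GPUs x HBM3e capacity per GPU.
    gb300_total_tb = 72 * 288 / 1000    # ~20.7 TB per GB300 NVL72 rack
    gb200_total_tb = 72 * 192 / 1000    # ~13.8 TB per GB200 NVL72 rack
    ratio = gb300_total_tb / gb200_total_tb   # 1.5x the GPU memory per rack
    print(f"GB300: {gb300_total_tb:.1f} TB, GB200: {gb200_total_tb:.1f} TB ({ratio:.2f}x)")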

800V DC Power Support

The GB300 introduces support for 800V DC power distribution, which reduces conductor losses and cable mass. This is forward-looking -- most current facilities still run 415V AC -- but new builds should plan for it.
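
The reasoning is basic Ohm's law: for the same delivered power, a higher distribution voltage draws less current, and resistive loss scales with the square of current. A hedged back-of-the-envelope sketch; the 120 kW load comes from the table above, while the conductor resistance is an arbitrary illustrative value and both feeds are treated as simple DC-equivalents (real 415V distribution is three-phase AC):

    # Resistive conductor loss scales with current squared (P_loss = I^2 * R).
    # The resistance value is illustrative, not a measured cable or busbar figure.
    P_RACK_W = 120_000                 # rack load from the spec above
    R_CONDUCTOR_OHM = 0.002            # assumed round-trip conductor resistance

    for label, volts in [("415V (simplified)", 415), ("800V DC", 800)]:
        amps = P_RACK_W / volts                 # current drawn at this voltage
        loss_w = amps ** 2 * R_CONDUCTOR_OHM    # I^2 * R loss in the conductor run
        print(f"{label}: {amps:.0f} A draw, ~{loss_w:.0f} W conductor loss")

Halving the current roughly quarters the conductor loss for the same resistance, which is the core argument for higher-voltage distribution.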

Same Cooling Requirements

The thermal envelope is unchanged. CDU capacity, manifold design, and facility water loop specifications carry over from GB200 deployments.
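
For sizing intuition, the coolant flow required to carry a 120 kW rack load follows from Q = m_dot * cp * delta_T. A minimal sketch, assuming a water-like coolant and an illustrative 10 °C supply-to-return rise; both assumptions are ours, not facility or CDU specifications:

    # Coolant mass flow needed to carry a 120 kW rack load at a given temperature rise.
    # cp, density, and the 10 C delta-T are illustrative assumptions, not facility specs.
    HEAT_LOAD_W = 120_000
    CP_J_PER_KG_K = 4186      # specific heat of water
    DELTA_T_K = 10            # assumed supply-to-return temperature rise
    DENSITY_KG_M3 = 1000      # water density

    mass_flow_kg_s = HEAT_LOAD_W / (CP_J_PER_KG_K * DELTA_T_K)   # ~2.9 kg/s
    flow_l_min = mass_flow_kg_s / DENSITY_KG_M3 * 1000 * 60      # ~172 L/min
    print(f"~{mass_flow_kg_s:.1f} kg/s (~{flow_l_min:.0f} L/min) per rack")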

Active Deployment Experience_

ACTIVE> DEPLOYMENT_IN_PROGRESS

Hyperscale AI Training Facility — Austin, Texas

Leviathan Systems is currently deploying GB300 NVL72 infrastructure for a hyperscale AI training facility in Texas. This is an active, large-scale project involving structured cabling, rack integration, network testing, and commissioning. The GB300 is the latest NVIDIA platform to enter production, and our team is among the first in the US with hands-on deployment experience at scale.

This direct experience means our documentation, processes, and deployment methodology reflect the real-world requirements of GB300 infrastructure -- not theoretical specifications from a product brief.

Platform: GB300 NVL72
Power: 120 kW / rack
Cooling: Direct liquid
Location: Austin, TX

How Leviathan Deploys GB300_

01 Site Assessment

Verify power capacity, cooling infrastructure, and floor loading for 120 kW racks.

02 Rack Integration

Server placement, NVLink 5.0 domain wiring, power cabling, and PDU connections.

03 Liquid Cooling

CDU connection, manifold routing, quick-disconnect installation, leak detection, pressure testing.

04 Test & Commission

OTDR fiber testing, interconnect verification, POST checks, thermal validation, and handoff.
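
For the fiber-testing portion of commissioning, measured OTDR loss is typically compared against a calculated link budget. A hedged sketch of that budget arithmetic; the per-component loss allowances are generic planning values, not figures from this deployment:

    # Simple optical link loss budget: fiber attenuation plus connector and splice losses.
    # All per-component figures are illustrative planning allowances, not measured values.
    def loss_budget_db(length_km, connectors, splices,
                       fiber_db_per_km=3.5,      # assumed multimode attenuation at 850 nm
                       connector_db=0.75,        # allowance per mated connector pair
                       splice_db=0.3):           # allowance per fusion splice
        return (length_km * fiber_db_per_km
                + connectors * connector_db
                + splices * splice_db)

    # Example: a 60 m in-row run with two connector pairs and no splices.
    budget = loss_budget_db(length_km=0.06, connectors=2, splices=0)
    print(f"Link budget: {budget:.2f} dB")   # measured loss should not exceed this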

Ready to Deploy Your GPU Infrastructure?_

Tell us about your project. We’ll respond within 48 hours with a scope assessment and timeline estimate.