Better Together: How Voltage Park Built NVIDIA SuperPODs for Cursor’s RL Training

Scaling AI for the next era of agentic coding demands infrastructure that supports reinforcement-learning pipelines and fast experimentation cycles. When Cursor needed a partner to design, deploy, and operate the compute foundation for its next venture, it trusted Voltage Park to architect a customized NVIDIA HGX B200 SuperPOD free of a provider-defined, one-size-fits-all software environment.
Voltage Park’s partnership with Cursor began more than a year ago with 128 HGX H100 GPUs and has since grown into multiple InfiniBand-interconnected Large SuperPODs spanning the Hopper and Blackwell hardware generations. Our teams, together with NVIDIA and Dell, engineered a fleet specifically for the reinforcement-learning workloads Cursor runs. Voltage Park operates the jointly architected infrastructure and provides observability and 24/7 support for the environment.
“Our collaboration is grounded in an established trust, technical excellence, and a shared vision of a world where people and AI seamlessly work together. When we want to push what is possible, Voltage Park says ‘yes’ where others may hesitate, or say no. This foundation is solid, and we are excited to keep building together.”
- Federico Cassano, Research Lead at Cursor
Why Cursor wanted custom SuperPODs
Composer, Cursor’s agentic model for software coding, depends on reinforcement-learning runs. As the model evolves, those RL workloads grow more compute-intensive, and the infrastructure has to be:
- Built for tight-loop rapid iteration, deployment, and cross-layer troubleshooting
- Bare metal for direct control over runtime behavior, scheduling, and system-level debugging
- Free from provider-imposed software
Most off-the-shelf clusters slow down RL runs with:
- Virtualization layers (e.g., VM-first abstractions)
- Provider-owned, opinionated orchestration that constrains runtime and scheduling
- Cloud-imposed software stacks that limit observability and low-level control
The customized NVIDIA SuperPODs meet all of these requirements without the friction, giving Cursor a stable, scalable platform for end-to-end tuning and debugging.
The benefits of Voltage Park
Voltage Park combines NVIDIA fleet ownership with hands-on cluster operations at SuperPOD scale. We built an engineering and support team that brings deep experience in designing, deploying, and running GPU clusters. Our commitment to security, with certifications such as ISO 27001, SOC 2 Type II, and HIPAA, means our infrastructure is built for responsible innovation, and undergoes rigorous audits to make sure it stays that way.
The Voltage Park and Cursor relationship matters too. Both teams share a high level of trust, forged over thousands of hours of working side by side.
“We co-designed this new cluster together from the ground up. We were able to choose all the pieces, and that’s one thing other neoclouds rarely allow.”
- Federico Cassano
The power of a purpose-built partnership
This collaboration represents a model of AI infrastructure rarely seen today:
- A provider willing to co-design a compute ecosystem around frontier research
- A hardware stack chosen specifically for new RL workloads
- A jointly engineered cluster that supports the next version of a public, fast-evolving agent model
This effort produced a custom B200 training system in under three months that supports Cursor’s next phase of reinforcement-learning-driven development. Through this strategic partnership, Voltage Park has refined a repeatable approach for designing and deploying customer-specific AI infrastructure.
Ready to work with us? Tell us what you're building.


