Fovus Delivers OpenMM Molecular Dynamics Simulations for as Low as $6.59/µs, as Fast as 3,359 ns/day

by Fengbo Ren

OpenMM is a high-performance toolkit for molecular simulations that is widely used in research domains such as computational chemistry, structural biology, and drug discovery. Known for its flexibility, GPU acceleration, and Python interface, OpenMM makes it easy to prototype new simulation methods or run standard molecular dynamics (MD) simulations at scale. Despite these strengths, however, running OpenMM efficiently in the cloud, especially at large scale, remains a challenge.

Running OpenMM on cloud GPUs presents significant scalability and cost challenges. GPUs are among the most in-demand resources in the cloud, making them difficult to procure, expensive to use, and challenging to scale efficiently. Due to high demand from AI, high-performance computing (HPC), and other compute-intensive workloads, cloud providers often face GPU shortages, leading to limited availability and long provisioning times. Scaling GPU resources dynamically can be unpredictable, as instances may not be available when needed in a given cloud region and availability zone.

In addition, OpenMM achieves optimal performance through tight CPU-GPU integration, offloading intensive molecular dynamics calculations to the GPU while relying on the CPU for task orchestration, data management, and I/O. Selecting the right combination of GPU and CPU resources is therefore critical, yet the wide variety of cloud GPU offerings, flexible system configurations, and complex pricing models make it difficult for computational scientists to identify the optimal setup. Without careful hardware selection and a well-planned HPC strategy, OpenMM simulations can be cost-prohibitive at scale due to the high price of cloud GPUs.

Enter Fovus: Intelligent, Serverless HPC for OpenMM

Fovus is an AI-powered, serverless HPC platform that puts intelligent, scalable, and cost-efficient supercomputing power at scientists’ and engineers’ fingertips. It removes infrastructure headaches by delivering OpenMM-optimized HPC as a fully autonomous, serverless service. Whether you are running a single simulation or thousands in parallel, Fovus intelligently manages cloud logistics and optimizes performance and cost so that you can focus entirely on your science.

Free Automated Benchmarking

Fovus automatically benchmarks your OpenMM workloads for free, comparing multiple HPC strategies across GPU types, CPU and memory configurations, and other system parameters. This gives you visibility into how different cloud compute strategies affect your specific OpenMM simulations, so you can make informed decisions before committing to large-scale production runs.

AI-driven Strategy Optimization

Based on the benchmarking data, Fovus uses AI to determine the best cloud strategy to meet your objective, whether that’s minimizing runtime, cost, or both. Fovus does this without human intervention, making large-scale OpenMM simulations smarter and more efficient.

Dynamic Multi-Cloud-Region Auto-Scaling

Fovus dynamically allocates spot GPUs across multiple cloud providers and regions. This ensures that your OpenMM simulations always run on the best available hardware, minimizing queue time and maximizing scalability.

Intelligent Spot Instance Utilization

Spot pricing can reduce costs dramatically, but reliability has always been a concern. Fovus uses predictive spot intelligence and automatic checkpointing to mitigate the risk of interruptions. If a spot instance terminates, your OpenMM job resumes seamlessly on another one, preserving data integrity.

Continuous Improvement

As the cloud evolves, so do your simulations. Fovus continuously re-benchmarks and adapts your strategy in real time to ensure your workloads always run on the most efficient and cost-effective infrastructure available.

Serverless HPC Model

No cluster setup. No cloud configuration. Just submit your OpenMM job via CLI or the Fovus web UI, and everything else — provisioning, optimization, scaling, recovery — is handled automatically. You pay only for runtime, not idle infrastructure.

Benchmarking OpenMM on Fovus

Protein Systems Benchmarked

Test | Molecule System | Descriptive Name | Number of Atoms
System 1 (small) | gbsa | Dihydrofolate Reductase (DHFR) – Implicit | 2,489
System 2 (medium) | pme | Dihydrofolate Reductase (DHFR) – Explicit-PME | 23,558
System 3 (medium) | amoebapme | AMOEBA Dihydrofolate Reductase (DHFR) | 23,558
System 4 (large) | apoa1rf | Apolipoprotein A1 – RF | 92,224
System 5 (large) | apoa1pme | Apolipoprotein A1 – PME | 92,224

To demonstrate the power of Fovus for OpenMM workloads, we benchmarked multiple protein systems of varying complexity, ranging from 2,500 to over 90,000 atoms. These tests reflect real-world molecular dynamics simulations. 

Each system was simulated with OpenMM 8.1.1 using the NVIDIA OpenMM Docker image, with checkpointing enabled to best leverage spot instances and with GPU acceleration via CUDA:

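# TEST_NAME selects the molecule system being benchmarked, matching the
# "Molecule System" column in the table above: gbsa, pme, amoebapme,
# apoa1rf, or apoa1pme.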
docker run \
    --rm \
    --gpus all \
    -v $PWD:/host_pwd \
    -w /usr/local/openmm/examples/ \
    nvcr.io/nvidia/openmm:8.1.1 \
    python benchmark.py --platform=CUDA --test=$TEST_NAME --seconds=60

Each simulation was deployed on Fovus using the optimal spot GPU for the objective — minimizing cost, minimizing time, or balancing both — determined by Fovus’ AI-powered HPC strategy engine. In the event of a spot instance interruption, Fovus seamlessly resumes the simulation from the last checkpoint using spot-to-spot failover. All runs used datacenter-grade GPUs with ECC memory to ensure scientific integrity.
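As context for how checkpoint-based resumption works, the sketch below shows a minimal OpenMM script that writes periodic binary checkpoints and reloads the latest one on restart. It uses OpenMM’s standard CheckpointReporter and loadCheckpoint APIs, but the input file, force field, and step counts are illustrative placeholders rather than the settings used by the NVIDIA benchmark.py script:

import os
import openmm
from openmm import app, unit

# Illustrative inputs; not the files or settings used in the benchmarks above.
pdb = app.PDBFile('input.pdb')
forcefield = app.ForceField('amber14-all.xml', 'amber14/tip3pfb.xml')
system = forcefield.createSystem(pdb.topology, nonbondedMethod=app.PME,
                                 nonbondedCutoff=1.0*unit.nanometer,
                                 constraints=app.HBonds)
integrator = openmm.LangevinMiddleIntegrator(300*unit.kelvin, 1/unit.picosecond,
                                             0.004*unit.picoseconds)
platform = openmm.Platform.getPlatformByName('CUDA')
simulation = app.Simulation(pdb.topology, system, integrator, platform)

if os.path.exists('state.chk'):
    # After a spot interruption, resume from the last binary checkpoint.
    simulation.loadCheckpoint('state.chk')
else:
    simulation.context.setPositions(pdb.positions)
    simulation.minimizeEnergy()

# Write a checkpoint every 10,000 steps so little work is lost on interruption.
simulation.reporters.append(app.CheckpointReporter('state.chk', 10000))
simulation.step(500_000)

With a pattern like this in place, spot-to-spot failover amounts to restarting the same script on the replacement instance and letting it pick up from the most recent checkpoint.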

For each protein system, benchmarking was conducted three times on Fovus, each with a different objective specified for HPC strategy optimization:

  1. Minimizing Cost: Prioritize cost over performance. Get the most cost-efficient strategy.
  2. Minimizing Cost and Time: Prioritize cost and time minimization equally, optimizing for both cost-efficiency and speed.
  3. Minimizing Time: Prioritize performance over cost. Get the fastest strategy.

Three key performance metrics were analyzed:

  • $/µs (dollars per microsecond): Measures the unit cost of computing 1 µs of simulated time.
  • ns/day (nanoseconds per day): Measures simulation speed, indicating how much simulated time can be computed in one real-time day.
  • ns/$ (nanoseconds per dollar): Assesses cost efficiency, representing how much simulated time can be computed for each dollar spent.

Together, these metrics clearly show the performance and cost efficiency of running GPU-accelerated OpenMM simulations on Fovus.
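The three metrics are directly related once an effective hourly instance price is fixed. The short sketch below (with price_per_hour as an assumed, illustrative input, not a Fovus price) shows the conversions and checks them against the headline figures reported in the next section:

NS_PER_US = 1000.0  # nanoseconds per microsecond

def cost_per_us(price_per_hour: float, ns_per_day: float) -> float:
    """Dollars required to compute 1 µs of simulated time."""
    dollars_per_day = price_per_hour * 24.0
    return dollars_per_day * NS_PER_US / ns_per_day

def ns_per_dollar(price_per_hour: float, ns_per_day: float) -> float:
    """Simulated nanoseconds obtained per dollar spent."""
    return ns_per_day / (price_per_hour * 24.0)

# Consistency check: $/µs and ns/$ are reciprocals up to a factor of 1000,
# so 151.6 ns/$ implies 1000 / 151.6 ≈ $6.60/µs, matching the reported
# $6.59/µs up to rounding.
print(NS_PER_US / 151.6)

Because $/µs and ns/$ determine each other, either one combined with ns/day pins down the effective hourly price of the underlying hardware.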

Benchmarking Results

Below are the performance and cost-efficiency results achieved on Fovus under each objective:

Objective 1: Minimizing Costs

System | $/µs | ns/day | ns/$
System 1: gbsa (small) | $6.59 | 1378.4 | 151.6
System 2: pme (medium) | $14.62 | 621.6 | 68.4
System 3: amoebapme (medium) | $875.87 | 10.4 | 1.1
System 4: apoa1rf (large) | $46.19 | 196.8 | 21.6
System 5: apoa1pme (large) | $70.39 | 129.1 | 14.2

Objective 2: Balancing Costs and Time

System | $/µs | ns/day | ns/$
System 1: gbsa (small) | $9.11 | 2108.4 | 109.7
System 2: pme (medium) | $16.57 | 1159.8 | 60.4
System 3: amoebapme (medium) | $949.96 | 20.2 | 1.1
System 4: apoa1rf (large) | $44.19 | 434.8 | 22.6
System 5: apoa1pme (large) | $66.20 | 290.2 | 15.1

Objective 3: Minimizing Time

System | $/µs | ns/day | ns/$
System 1: gbsa (small) | $14.95 | 3358.7 | 66.9
System 2: pme (medium) | $24.50 | 2048.8 | 40.8
System 3: amoebapme (medium) | $1,237.52 | 40.6 | 0.8
System 4: apoa1rf (large) | $49.59 | 1012.3 | 20.2
System 5: apoa1pme (large) | $66.06 | 760.0 | 15.1

Summary of Results

The benchmarking results demonstrate Fovus’s ability to deliver high-performance, cost-effective GPU computing power for OpenMM simulations, while optimizing for the best performance-cost tradeoffs achievable in the cloud, according to user preferences. Key takeaways include:

  • Cost Efficiency: For users focused on cost, Fovus delivers OpenMM simulations for as low as $6.59/µs, with cost efficiency reaching up to 151.6 ns/$.
  • Speed: For users seeking fast turnaround, Fovus enables simulation speeds up to 3358.7 ns/day, ideal for tight deadlines and iterative workflows.
  • Balanced Performance: When optimizing for both cost and time, Fovus achieves the best of both worlds, intelligently allocating resources for maximum return on investment (ROI).
  • Continuous Optimization: As new GPUs and cloud regions become available, Fovus continuously re-optimizes your strategies without manual intervention.

These results highlight Fovus’s ability to deliver scalable, reliable, and high-performance OpenMM simulations, even for large and computationally intensive molecular systems.

Focus on Discovery, Not Infrastructure

OpenMM gives researchers the tools to model the molecular world. Fovus gives them the computational engine to do it at scale, without distraction. No manual tuning. No babysitting clusters. Just results.

Fovus enables scientists and engineers to offload infrastructure concerns and zero in on innovation. With Fovus, you can run more simulations, explore more compounds, and get to discovery faster.

Try OpenMM on Fovus today; your first runs are free.