Marketplace

C3 aggregates GPU capacity from multiple data centers. When you submit a job, we find available compute at competitive rates—no need to manage cloud accounts or hunt for capacity yourself.

Available GPUs

GPU          VRAM    Best for
A100 80GB    80GB    Large models, multi-GPU training
A100 40GB    40GB    Standard deep learning workloads
RTX 4090     24GB    Development, inference, smaller models

More GPU types are added as we onboard new providers.
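
How you pick a GPU type at submission time depends on your CLI version; the sketch below is purely illustrative, and the submit subcommand, the --gpu flag, the type identifiers, and train.py are all hypothetical placeholders rather than documented options.

# Hypothetical sketch: check the CLI help for the real subcommand and GPU identifiers
c3 submit train.py --gpu a100-80gb    # large models, multi-GPU training
c3 submit train.py --gpu rtx-4090     # development and inference runs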

Pricing

You're billed per second of actual compute time—not for time spent queuing or provisioning. Check your balance and rates:

c3 account balance
c3 pricing
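
As a rough illustration of per-second billing, suppose an A100 80GB rate of $2.40/hr (a made-up figure; c3 pricing shows the real rates). A job that queues for 10 minutes and then runs for 45 minutes is billed only for the 45 minutes of compute:

# Illustrative arithmetic only; the $2.40/hr rate is hypothetical
awk 'BEGIN { printf "$%.2f\n", 2700 * 2.40 / 3600 }'    # 2700 s at $2.40/hr prints $1.80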

How jobs run

  1. QUEUED — Job submitted, waiting for a GPU
  2. PROVISIONING — Spinning up a VM (2-5 min cold start, ~30s from warm pool)
  3. PREPARING — Downloading your code and mounting datasets
  4. RUNNING — Your script is executing
  5. UPLOADING — Saving results to cloud storage
  6. COMPLETED — Done. Download results with c3 pull
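
Putting the lifecycle together, a typical session looks roughly like the sketch below. Only c3 pull comes from this page; submit, status, and logs are hypothetical subcommand names, so substitute whatever your CLI actually provides.

# Sketch only: every subcommand except "c3 pull" is a hypothetical placeholder
c3 submit train.py      # job enters QUEUED, then PROVISIONING and PREPARING
c3 status <job-id>      # check progress through the states above
c3 logs <job-id>        # follow output while RUNNING
c3 pull <job-id>        # download results once COMPLETED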

Warm pool

Popular GPU types are kept warm, so repeat jobs start in ~30 seconds instead of minutes. The first job of the day may take longer while a fresh VM provisions.

Providers

Jobs currently run on Hyperstack. We're onboarding additional providers to increase capacity and reduce prices.