Quickstart
C3 brings your Slurm workflow to the cloud. Write the same #SBATCH scripts you already use, and C3 provisions GPUs from multiple data centers, so you get compute when you need it, at competitive prices.
Install the CLI
curl -fsSL https://raw.githubusercontent.com/samleeney/c3/main/install.sh | sh
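If you'd rather inspect the install script before running it, download it first and run it as a separate step:

curl -fsSL https://raw.githubusercontent.com/samleeney/c3/main/install.sh -o install.sh
less install.sh
sh install.sh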
Then authenticate:
c3 login
This opens your browser to sign in. Credentials are stored locally in ~/.c3/.
Run your first job
Clone the examples repo and submit a GPU benchmark:
git clone https://github.com/samleeney/c3-examples
cd c3-examples/jax-matmul
c3 deploy job.sbatch
The job script looks familiar if you've used Slurm:
#!/bin/bash
#SBATCH --job-name=jax-matmul
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00
#C3 OUTPUT ./results
python train.py
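train.py itself is an ordinary Python program; no special SDK calls are needed. As a rough illustration (this is a minimal sketch, not the actual file from the examples repo), a JAX matmul benchmark might look like the following; note it writes into ./results so the #C3 OUTPUT directive above picks up the output:

import os
import time

import jax
import jax.numpy as jnp

# Build two large random matrices on the default device (the provisioned GPU).
n = 4096
key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
a = jax.random.normal(key_a, (n, n), dtype=jnp.float32)
b = jax.random.normal(key_b, (n, n), dtype=jnp.float32)

# Warm-up call so JIT compilation isn't counted in the timing.
jnp.dot(a, b).block_until_ready()

iters = 10
start = time.perf_counter()
for _ in range(iters):
    c = jnp.dot(a, b)
c.block_until_ready()
elapsed = time.perf_counter() - start

# A matmul of two n x n matrices costs roughly 2 * n^3 FLOPs.
tflops = 2 * n**3 * iters / elapsed / 1e12

os.makedirs("results", exist_ok=True)
with open("results/benchmark.txt", "w") as f:
    f.write(f"{n}x{n} matmul: {tflops:.2f} TFLOP/s over {iters} iterations\n")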
C3 reads your #SBATCH directives, provisions a matching GPU from the marketplace, runs your script, and uploads everything under the path named by the #C3 OUTPUT directive (./results here).
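Because the interface is plain Slurm directives, larger jobs use the same standard options. Here is a sketch of a bigger request; every directive below is standard Slurm, but whether C3 honors all of them is an assumption, so check the submission scripts docs for the supported list:

#!/bin/bash
#SBATCH --job-name=bigger-run
#SBATCH --gres=gpu:4
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=02:00:00
#C3 OUTPUT ./checkpoints
python train.py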
Check status and download results
Monitor your job:
c3 squeue
JOB ID        STATUS     SUBMITTED
job_abc123    RUNNING    2024-01-15 10:30:00
Once complete, pull the results:
c3 pull job_abc123
Your output files are downloaded to ./job_abc123/.
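For unattended runs, you can poll from a shell loop and pull as soon as the job leaves the queue. This sketch uses only the commands shown above; the PENDING status string, and the idea that anything other than PENDING or RUNNING is terminal, are assumptions:

JOB=job_abc123
# Poll every 30 s while the job is still queued or running.
while c3 squeue | grep "$JOB" | grep -qE "PENDING|RUNNING"; do
  sleep 30
done
c3 squeue        # check the final status (a failed job also exits the loop)
c3 pull "$JOB"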
Next steps: learn about submission scripts for advanced options, read up on data management for working with datasets, or browse the marketplace to see available GPUs and pricing.