codingbutvibes
15 min · Free Preview

Rent a real H100 and run your first AI workload in 90 seconds

Free tier: $1 credit, zero commitment, instant GPU

What you'll learn

  • Sign up and claim your $1 free GPU credit
  • Deploy an RTX 4090 pod with the PyTorch template in under a minute
  • Run real CUDA code in Jupyter against a real datacenter GPU
  • Stop the pod so you don't accidentally burn $8 overnight

GPU cloud sounds like a devops nightmare. RunPod makes it feel like ordering an Uber.

Go to runpod.io and sign up. New accounts get $1 in free credit automatically — at 4090 prices that's nearly three hours of GPU time, or dozens of runs through this lesson. No credit card required to start.

On the dashboard, click Pods in the left sidebar, then Deploy. You'll see a grid of GPUs: RTX 4090 (~$0.34/hr), A40 (~$0.40/hr), A100 80GB (~$1.19/hr), H100 (~$2.79/hr). Click RTX 4090 — it's the best bang-for-buck for this lesson.
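To get a feel for those rates, here's a quick back-of-the-envelope comparison using the prices listed above. (The VRAM figures for the 4090 and A40 are the usual published specs, assumed here for illustration — they aren't stated on this page.)

```python
# Hourly rates from the lesson; VRAM in GB (4090/A40 figures are the
# standard published specs, assumed for comparison).
gpus = {
    "RTX 4090": (0.34, 24),
    "A40": (0.40, 48),
    "A100 80GB": (1.19, 80),
    "H100": (2.79, 80),
}

for name, (rate, vram_gb) in gpus.items():
    per_min = rate / 60
    per_gb_hr = rate / vram_gb
    print(f"{name:>10}: ${per_min:.4f}/min, ${per_gb_hr:.3f} per GB of VRAM per hour")
```

The 4090 works out to roughly half a cent per minute, which is why it's the pick for a throwaway lesson like this one.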

Under Template pick `RunPod PyTorch 2.1`. This is a prebuilt Docker image with CUDA, PyTorch, and Jupyter already installed. Leave everything else default. Click Deploy On-Demand.

In about 40 seconds your pod is live. Click Connect and then Connect to Jupyter Lab. A fresh Jupyter notebook opens in your browser, running on a real 4090.

In a new cell, paste this:

```python
import torch

print(f"CUDA available: {torch.cuda.is_available()}")
print(f"GPU: {torch.cuda.get_device_name(0)}")
print(f"VRAM: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB")

x = torch.randn(10000, 10000).cuda()
result = (x @ x).sum()
print(f"Result: {result.item():.2f}")
```

Press Shift+Enter. You just ran a 10,000 × 10,000 matrix multiply on an NVIDIA RTX 4090 in a datacenter somewhere — from your browser, in under two minutes, for about three cents.
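If you're curious how long that multiply actually took, here's a follow-up cell to try. It's a rough sketch, not a proper benchmark (the first CUDA call also pays one-time startup cost), and it falls back to CPU so the same code runs on your laptop too:

```python
import time
import torch

# Pick the best available device; falls back to CPU so this cell
# also runs on a machine without a GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(4096, 4096, device=device)

start = time.perf_counter()
result = (x @ x).sum()
if device == "cuda":
    torch.cuda.synchronize()  # GPU kernels are async; wait before reading the clock
elapsed = time.perf_counter() - start

print(f"{device}: 4096x4096 matmul in {elapsed * 1000:.1f} ms, result {result.item():.2f}")
```

Run it twice and take the second number — the first run on the GPU includes one-time library initialization.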

Important: head back to the Pods dashboard and click Stop on your pod. On-demand pods bill by the second whether you're using them or not. Running a 4090 overnight is $8. Always stop.
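Per-second billing is easy to reason about. A tiny helper (the function name is mine, not a RunPod API) makes the overnight math concrete:

```python
def pod_cost(seconds_running: float, hourly_rate: float) -> float:
    """On-demand pods bill by the second, whether or not you're using them."""
    return hourly_rate * seconds_running / 3600

# Forgetting a 4090 overnight (~24 hours at $0.34/hr):
print(f"${pod_cost(24 * 3600, 0.34):.2f}")  # → $8.16
```

Three cents for the lesson, eight dollars for forgetting — stopping the pod is the single habit worth building on day one.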

That's the free-tier taste. You can run one-off experiments in Jupyter all day long. What you can't do yet: package your model as an API, scale to zero when idle, or serve real users. That's what the rest of the course builds.

Next up: Run Stable Diffusion as an API that scales to zero
