Create, start, stop, and SSH into hardware-isolated microVMs with simple shell commands. Persistent disks. DHCP networking. Systemd lifecycle. On your Proxmox host.

$ curl -fsSL https://raw.githubusercontent.com/linuxdevel/firecracker-farm/main/install.sh | sudo bash
[Hero diagram: three KVM-isolated microVMs (fc-web01 · 2 vCPU · 2G, fc-app01 · 4 vCPU · 4G, fc-db01 · 2 vCPU · 8G) bridged to the LAN via vmbr0 with DHCP.]

Features

Everything you need to run a fleet of microVMs on your Proxmox host.

Persistent Disks

Per-instance writable rootfs that survives stop/start cycles. Cloned and resized from an Ubuntu 24.04 cloud template.
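The clone-and-grow step can be sketched like this. Paths and sizes below are stand-ins so the sketch is self-contained; fc-create's actual internals may differ, and on a real ext4 image you would also run e2fsck and resize2fs after growing.

```shell
# Stand-in files; the real tool clones the Ubuntu 24.04 template image instead.
TEMPLATE=$(mktemp)
truncate -s 64M "$TEMPLATE"                 # pretend this is the template rootfs.raw
VM_DISK=$(mktemp)
cp --sparse=always "$TEMPLATE" "$VM_DISK"   # clone, preserving sparseness
truncate -s 128M "$VM_DISK"                 # grow to the requested --disk-size
# On a real image: e2fsck -fp "$VM_DISK" && resize2fs "$VM_DISK"
stat -c %s "$VM_DISK"                       # → 134217728
rm -f "$TEMPLATE" "$VM_DISK"
```

Because both files are sparse, the clone costs almost no real disk space until the guest writes data.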

DHCP Networking

Each VM gets a tap device bridged to your LAN. Automatic IP assignment via DHCP with a deterministic MAC address.
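A common way to get a deterministic MAC is to hash the VM name into the locally administered address range; here is a sketch of the idea (the exact scheme firecracker-farm uses may differ):

```shell
vm=myvm
h=$(printf '%s' "$vm" | md5sum | cut -c1-8)     # 8 hex chars derived from the name
# The 02: prefix sets the locally-administered bit, so the address
# never collides with a real vendor-assigned NIC.
mac="02:fc:${h:0:2}:${h:2:2}:${h:4:2}:${h:6:2}"
echo "$mac"
```

The same name always yields the same MAC, which is what lets the DHCP server hand the VM a stable lease.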

Systemd Lifecycle

Managed via systemd template units. VMs persist across host reboots with automatic enable and start.
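A template unit for this pattern looks roughly like the following. This is illustrative only; the unit name, ExecStart path, and options installed by firecracker-farm may differ.

```ini
# /etc/systemd/system/firecracker-farm@.service (sketch)
[Unit]
Description=Firecracker microVM %i
After=network-online.target
Wants=network-online.target

[Service]
# %i expands to the VM name, e.g. firecracker-farm@myvm
ExecStart=/opt/firecracker-farm/bin/fc-run %i
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling an instance (`systemctl enable --now firecracker-farm@myvm`) is what makes that VM come back automatically after a host reboot.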

SSH Access

fc-ssh auto-resolves the guest IP from the ARP table. No manual IP tracking required.
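The resolution amounts to matching the VM's deterministic MAC against the kernel's neighbor table. A sketch with canned data (in real use you would read `ip neigh show` instead of a literal string):

```shell
mac="02:fc:aa:bb:cc:dd"       # the VM's deterministic MAC (example value)
# Two sample neighbor-table lines, shaped like `ip neigh show` output:
neigh='192.168.1.53 dev vmbr0 lladdr 02:fc:aa:bb:cc:dd REACHABLE
192.168.1.10 dev vmbr0 lladdr aa:bb:cc:00:11:22 STALE'
guest_ip=$(printf '%s\n' "$neigh" | awk -v m="$mac" '$0 ~ m {print $1}')
echo "$guest_ip"              # → 192.168.1.53
```

This only works once the guest has spoken on the LAN, which is why the first SSH can take a few seconds after boot.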

Configurable Resources

Set disk size, vCPUs, and memory per VM at create time. Sensible defaults for everything.

One-Command Install

curl | sudo bash installs everything: farm tools, Firecracker binaries, guest kernel, and host config.

Unprivileged Jailer

Firecracker runs as a dedicated system user (UID 999), not root. The jailer drops privileges before exec, limiting blast radius.
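The invocation looks roughly like the following. The flag names are upstream jailer's; the chroot base dir and binary path here are assumptions, and in practice the systemd unit runs this for you. The command is built as a string so the sketch has no side effects; run it directly (as root) to actually launch the sandboxed VMM.

```shell
VM=myvm
cmd="jailer --id $VM --exec-file /usr/local/bin/firecracker --uid 999 --gid 999 --chroot-base-dir /srv/jailer"
echo "$cmd"
```

The jailer sets up a chroot under the base dir, drops to the given uid/gid, and only then execs firecracker, so even a VMM compromise lands in an unprivileged, chrooted process.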

Why Firecracker?

Firecracker is an open-source virtual machine monitor (VMM) built by AWS for running multi-tenant container and serverless workloads. It uses Linux KVM to create lightweight microVMs that provide the security and isolation of traditional VMs with the speed and resource efficiency of containers. Each microVM runs its own kernel with a minimal device model, giving you a smaller attack surface than Docker and faster boot times than traditional VMs.
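Under the hood, each firecracker process is driven over a REST API on a unix socket; a supervisor (here, the farm's tooling) configures and boots the microVM with a few PUT requests. An illustrative sequence against upstream Firecracker's API, with the socket and file paths being assumptions:

```shell
SOCK=/run/firecracker/myvm.socket    # assumed socket path
api() {  # PUT a JSON body to a Firecracker API endpoint
  curl --unix-socket "$SOCK" -sS -X PUT "http://localhost$1" \
       -H 'Content-Type: application/json' -d "$2"
}
api /boot-source '{"kernel_image_path": "/var/lib/firecracker/vmlinux",
                   "boot_args": "console=ttyS0 reboot=k panic=1"}'
api /drives/rootfs '{"drive_id": "rootfs", "path_on_host": "rootfs.raw",
                     "is_root_device": true, "is_read_only": false}'
api /actions '{"action_type": "InstanceStart"}'   # boot the microVM
```

The tiny API surface is part of the point: there is no BIOS, no PCI emulation, and only a handful of virtio devices to attack.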

| Aspect | Docker container | Firecracker microVM | Traditional VM |
|---|---|---|---|
| Isolation | Shared kernel (namespaces + cgroups) | Dedicated kernel + KVM hardware boundary | Dedicated kernel + full hypervisor |
| Boot time | ~500 ms | ~125 ms | 10–45 s |
| Memory overhead | ~10 MB | ~5 MB | 128–512 MB |
| Attack surface | Large (shared kernel syscalls) | Minimal (reduced device model) | Large (full QEMU device emulation) |
| Escape risk | Container escapes are common CVEs | KVM boundary + minimal VMM | Low, but heavy footprint |
| Root filesystem | Layered, ephemeral by default | Persistent raw ext4 | Persistent disk image |
| Networking | Bridge / NAT / overlay | Tap + bridge (LAN-native) | Bridge / NAT |
Production-proven: Firecracker was built by AWS to power Lambda and Fargate, where it runs millions of production workloads. It is developed and maintained under the Apache 2.0 license.

Architecture

Each VM runs in its own jailer sandbox as an unprivileged user with KVM isolation, connected to your LAN via tap devices.

[Architecture diagram: on the Proxmox host, three VMs (myvm · 20G, web01 · 30G, db01 · 50G), each with an Ubuntu 24.04 rootfs.raw and a cloud-init seed, run under jailer + KVM as firecracker processes; their tap devices (fc-myvm0, fc-web010, fc-db010) attach to the vmbr0 Linux bridge, systemd manages the lifecycle, and the LAN provides DHCP on 192.168.1.0/24.]
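Per VM, the network plumbing boils down to a tap device enslaved to the Proxmox bridge. Illustrative commands (fc-start performs the equivalent automatically, and they require root on the host):

```shell
TAP=fc-myvm0     # one tap per VM, named after it
BRIDGE=vmbr0     # Proxmox's default LAN bridge
ip tuntap add dev "$TAP" mode tap      # create the tap device
ip link set "$TAP" master "$BRIDGE" up # attach it to the bridge and bring it up
```

Because the tap sits on the same bridge as the host's LAN interface, the guest is a first-class LAN citizen and gets its lease from your normal DHCP server.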

Example: OpenClaw AI Sandbox

OpenClaw is an open-source personal AI assistant that runs on your own devices and connects to the channels you already use (WhatsApp, Telegram, Slack, Discord, and more). It includes an agent runtime that can execute code on your behalf — by default inside a Docker container. With firecracker-farm, you can replace that Docker sandbox with a hardware-isolated Firecracker microVM, giving the AI's code-execution environment full KVM-level isolation from your host.

KVM-isolated code execution — AI-generated code runs inside a dedicated VM, not a shared-kernel container.

No container escapes — Even malicious code cannot cross the KVM hardware boundary.

Separate kernel — Guest kernel exploits don't affect the host. The attack surface is minimal.

Persistent environment — The sandbox disk survives restarts, enabling stateful dev environments.

# 1. Create a sandbox VM with 2 vCPUs and 2GB RAM
fc-create openclaw-sandbox --guest-user ops --ssh-key-file ~/.ssh/id_ed25519.pub --disk-size 30G --vcpus 2 --memory 2g

# 2. Start the sandbox
fc-start openclaw-sandbox

# 3. SSH in and install Node.js (OpenClaw requires Node 22.16+ or 24)
fc-ssh openclaw-sandbox -- sudo apt-get update
fc-ssh openclaw-sandbox -- sudo apt-get install -y ca-certificates curl gnupg
fc-ssh openclaw-sandbox -- 'curl -fsSL https://deb.nodesource.com/setup_24.x | sudo bash -'
fc-ssh openclaw-sandbox -- sudo apt-get install -y nodejs

# 4. Install OpenClaw inside the VM
fc-ssh openclaw-sandbox -- sudo npm install -g openclaw@latest

# 5. Run the OpenClaw onboarding wizard (sets up gateway, channels, skills)
fc-ssh openclaw-sandbox -- openclaw onboard --install-daemon

# 6. Start the OpenClaw gateway
fc-ssh openclaw-sandbox -- openclaw gateway --port 18789 --verbose

# 7. Check the sandbox IP (use this in your OpenClaw client config)
fc-status openclaw-sandbox

Docker Sandbox

  • Shared host kernel
  • Namespace-only isolation
  • Container escape CVEs affect host
  • Kernel exploits reach the host

Firecracker Sandbox

  • Dedicated guest kernel
  • KVM hardware isolation
  • Minimal VMM attack surface
  • ~125ms boot, ~5MB overhead

Quick Start

From zero to a running microVM in five steps.

1. Install

curl -fsSL https://raw.githubusercontent.com/linuxdevel/firecracker-farm/main/install.sh | sudo bash
2. Build Template

sudo bash -c 'source /opt/firecracker-farm/lib/image.sh && fc_image_build_template'
3. Create & Start a VM

# Interactive — prompts for username, SSH key, and disk size:
fc-create myvm

# Or provide everything on the command line:
fc-create myvm --guest-user ops --ssh-key-file ~/.ssh/id_ed25519.pub --disk-size 20G

fc-start myvm
4. SSH In

# Wait ~30s for cloud-init + DHCP, then:
fc-ssh myvm
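If the fixed ~30s wait feels arbitrary, you can poll instead. A small helper, assuming fc-ssh exits nonzero until the guest's sshd is reachable:

```shell
# Retry for up to ~60s, then give up.
wait_for_vm() {
  for _ in $(seq 1 30); do
    fc-ssh "$1" -- true 2>/dev/null && return 0
    sleep 2
  done
  return 1
}
# Usage: wait_for_vm myvm && fc-ssh myvm
```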
5. Manage

fc-list                            # List all VMs with status, IP, uptime
fc-status myvm                     # Detailed VM info
fc-stop myvm                       # Stop the VM
fc-destroy myvm --yes             # Permanently remove VM, disk, and logs

Roadmap

Planned features — not yet implemented.

Network Isolation

Isolated fc-br0 bridge with no direct internet access. Default-deny outbound firewall via nftables on a dedicated LXC gateway.

Transparent TLS Proxy

mitmproxy intercepts all HTTP/HTTPS traffic transparently. Automatic CA certificate injection into every VM via cloud-init.

Credential Rewriting

Inject real API keys at the proxy layer so secrets never touch the VM. Per-VM, per-domain credential mapping.

Domain Allowlists

Per-VM or group-based domain allowlists with glob support. Blocked requests get a clear 403 response.

Web Management GUI

Dark-themed dashboard for managing allowlists, credentials, firewall rules, and live traffic logs. Runs inside the LXC on port 8443.

Live Traffic Log

Real-time WebSocket-powered request log with color-coded status codes. Filter by VM or domain for instant visibility.

[Planned architecture: VMs fc-web01 (10.99.0.51), fc-sandbox (10.99.0.52), and fc-db01 (10.99.0.53) sit on an isolated fc-br0 bridge (10.99.0.0/24); an LXC gateway running mitmproxy + nftables + dnsmasq (Web GUI on :8443) is their only path to vmbr0 (LAN) and the internet, with no direct route.]

FAQ

Common questions about running Firecracker on Proxmox.

Why don't Firecracker VMs show up in the Proxmox web GUI?

Proxmox’s web interface (PVE) is hardcoded to manage two types of guests: QEMU/KVM virtual machines (via qm and configs in /etc/pve/qemu-server/) and LXC containers (via pct and configs in /etc/pve/lxc/).

Firecracker is a completely separate Virtual Machine Monitor (VMM). Even though it uses the same underlying /dev/kvm hardware virtualization as QEMU, Proxmox has no awareness of Firecracker processes. To Proxmox, a running Firecracker microVM looks like an ordinary background Linux process — the firecracker binary managed by a systemd service — consuming CPU and RAM on the host.
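You can see this for yourself on the host. The unit glob in the second command is an assumption; match it to whatever your install created:

```shell
ps -eo pid,user,comm | awk '/firecracker/'   # each microVM is just one process
systemctl list-units 'firecracker-farm@*'    # systemd's view of the fleet
```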

Proxmox does not have a plugin architecture for alternative hypervisors, so there is no supported way to inject Firecracker VMs into the PVE interface without modifying Proxmox source code (which would break on every update).

Planned workaround: A future version of firecracker-farm will include its own web management GUI (running inside a proxy LXC gateway on port 8443) to provide a dedicated dashboard for monitoring and managing your microVMs. In the meantime, use fc-list and fc-status <name> from the command line to view your running VMs.