Create, start, stop, and SSH into hardware-isolated microVMs with simple shell commands. Persistent disks. DHCP networking. Systemd lifecycle. On your Proxmox host.
Everything you need to run a fleet of microVMs on your Proxmox host.
Per-instance writable rootfs that survives stop/start cycles. Cloned and resized from an Ubuntu 24.04 cloud template.
Each VM gets a tap device bridged to your LAN. Automatic IP assignment via DHCP with a deterministic MAC address.
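A deterministic MAC can be derived by hashing the VM name, so the same VM always receives the same DHCP lease. This is an illustrative sketch (the helper name and hashing scheme are assumptions, not fc-create's actual implementation):

```shell
# Hypothetical sketch: derive a stable, locally-administered MAC from the
# VM name. fc-create's real derivation scheme may differ.
fc_mac_for() {
  # Hash the VM name; keep the first three bytes for the device-specific
  # part. 52:54:00 is a conventional locally-administered VM prefix.
  printf '%s' "$1" | sha256sum | sed -E 's/^(..)(..)(..).*/52:54:00:\1:\2:\3/'
}
```

Because the MAC never changes, the DHCP server can hand back the same address across stop/start cycles.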
Managed via systemd template units. VMs persist across host reboots with automatic enable and start.
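A systemd template unit for this kind of lifecycle could look like the following (an illustrative sketch; the unit name, paths, and options shipped by firecracker-farm may differ):

```ini
# Hypothetical /etc/systemd/system/fc-vm@.service — one instance per VM.
# "systemctl enable --now fc-vm@myvm" makes the VM survive host reboots.
[Unit]
Description=Firecracker microVM %i
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=firecracker
ExecStart=/usr/local/bin/fc-run %i
Restart=on-failure

[Install]
WantedBy=multi-user.target
```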
fc-ssh auto-resolves the guest IP from the ARP table. No manual IP tracking required.
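The mechanism behind this is a lookup in the kernel neighbour (ARP) table, matching on the VM's known MAC. A minimal sketch of the idea (the function name is hypothetical; fc-ssh's actual implementation may differ):

```shell
# Illustrative: resolve a guest IP from its MAC via the neighbour table.
# An optional second argument lets you supply the table text directly.
fc_ip_for_mac() {
  mac="$1"
  neigh="${2:-$(ip neigh show)}"
  # Each "ip neigh" line looks like:
  #   192.168.1.42 dev br0 lladdr 52:54:00:aa:bb:cc REACHABLE
  printf '%s\n' "$neigh" | awk -v m="$mac" 'tolower($0) ~ tolower(m) {print $1; exit}'
}
```

Note that an entry only appears after the guest has sent traffic (e.g. its DHCP exchange), which is why resolution works shortly after boot.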
Set disk size, vCPUs, and memory per VM at create time. Sensible defaults for everything.
curl | sudo bash installs everything: farm tools, Firecracker binaries, guest kernel, and host config.
Firecracker runs as a dedicated system user (UID 999), not root. The jailer drops privileges before exec, limiting blast radius.
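The jailer invocation behind this looks roughly like the following. The flags (`--id`, `--uid`, `--gid`, `--exec-file`, `--chroot-base-dir`) are real jailer options, but the paths and the helper wrapping them here are illustrative:

```shell
# Illustrative: build the jailer command line for one VM. The jailer
# chroots into a per-VM directory, drops to the unprivileged UID/GID,
# then execs the Firecracker binary.
build_jailer_cmd() {
  printf 'jailer --id %s --uid 999 --gid 999 --exec-file /usr/local/bin/firecracker --chroot-base-dir /srv/jailer\n' "$1"
}
```

Because privileges are dropped before exec, a compromised VMM process runs with no more authority than the dedicated system user.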
Firecracker is an open-source virtual machine monitor (VMM) built by AWS for running multi-tenant container and serverless workloads. It uses Linux KVM to create lightweight microVMs that provide the security and isolation of traditional VMs with the speed and resource efficiency of containers. Each microVM runs its own kernel with a minimal device model, giving you a smaller attack surface than Docker and faster boot times than traditional VMs.
| Aspect | Docker Container | Firecracker microVM | Traditional VM |
|---|---|---|---|
| Isolation | Shared kernel (namespaces + cgroups) | Dedicated kernel + KVM hardware boundary | Dedicated kernel + full hypervisor |
| Boot time | ~500ms | ~125ms | 10 – 45s |
| Memory overhead | ~10 MB | ~5 MB | 128 – 512 MB |
| Attack surface | Large (shared kernel syscalls) | Minimal (reduced device model) | Large (full QEMU device emulation) |
| Escape risk | Container escapes are common CVEs | KVM boundary + minimal VMM | Low but heavy footprint |
| Root filesystem | Layered, ephemeral by default | Persistent raw ext4 | Persistent disk image |
| Networking | Bridge / NAT / overlay | Tap + bridge (LAN native) | Bridge / NAT |
Each VM runs in its own jailer sandbox as an unprivileged user with KVM isolation, connected to your LAN via tap devices.
OpenClaw is an open-source personal AI assistant that runs on your own devices and connects to the channels you already use (WhatsApp, Telegram, Slack, Discord, and more). It includes an agent runtime that can execute code on your behalf — by default inside a Docker container. With firecracker-farm, you can replace that Docker sandbox with a hardware-isolated Firecracker microVM, giving the AI code execution environment full KVM-level isolation from your host.
KVM-isolated code execution — AI-generated code runs inside a dedicated VM, not a shared-kernel container.
No container escapes — Even malicious code cannot cross the KVM hardware boundary.
Separate kernel — Guest kernel exploits don't affect the host. The attack surface is minimal.
Persistent environment — The sandbox disk survives restarts, enabling stateful dev environments.
```shell
# 1. Create a sandbox VM with 2 vCPUs and 2GB RAM
fc-create openclaw-sandbox --guest-user ops --ssh-key-file ~/.ssh/id_ed25519.pub --disk-size 30G --vcpus 2 --memory 2g

# 2. Start the sandbox
fc-start openclaw-sandbox

# 3. SSH in and install Node.js (OpenClaw requires Node 22.16+ or 24)
fc-ssh openclaw-sandbox -- sudo apt-get update
fc-ssh openclaw-sandbox -- sudo apt-get install -y ca-certificates curl gnupg
fc-ssh openclaw-sandbox -- 'curl -fsSL https://deb.nodesource.com/setup_24.x | sudo bash -'
fc-ssh openclaw-sandbox -- sudo apt-get install -y nodejs

# 4. Install OpenClaw inside the VM
fc-ssh openclaw-sandbox -- sudo npm install -g openclaw@latest

# 5. Run the OpenClaw onboarding wizard (sets up gateway, channels, skills)
fc-ssh openclaw-sandbox -- openclaw onboard --install-daemon

# 6. Start the OpenClaw gateway
fc-ssh openclaw-sandbox -- openclaw gateway --port 18789 --verbose

# 7. Check the sandbox IP (use this in your OpenClaw client config)
fc-status openclaw-sandbox
```
From zero to a running microVM in four steps.
```shell
curl -fsSL https://raw.githubusercontent.com/linuxdevel/firecracker-farm/main/install.sh | sudo bash
```
```shell
sudo bash -c 'source /opt/firecracker-farm/lib/image.sh && fc_image_build_template'
```
```shell
# Interactive — prompts for username, SSH key, and disk size:
fc-create myvm

# Or provide everything on the command line:
fc-create myvm --guest-user ops --ssh-key-file ~/.ssh/id_ed25519.pub --disk-size 20G

fc-start myvm
```
```shell
# Wait ~30s for cloud-init + DHCP, then:
fc-ssh myvm
```
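Rather than sleeping a fixed ~30 seconds, you can poll until the guest actually answers. This generic helper is not part of firecracker-farm, just a sketch of the pattern:

```shell
# Retry a command until it succeeds or the timeout (in seconds) expires.
wait_for() {
  timeout="$1"; shift
  deadline=$(( $(date +%s) + timeout ))
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}

# Usage: poll SSH until cloud-init and DHCP have finished, e.g.
#   wait_for 60 fc-ssh myvm -- true
```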
```shell
fc-list                 # List all VMs with status, IP, uptime
fc-status myvm          # Detailed VM info
fc-stop myvm            # Stop the VM
fc-destroy myvm --yes   # Permanently remove VM, disk, and logs
```
Planned features — not yet implemented.
Isolated fc-br0 bridge with no direct internet access. Default-deny outbound firewall via nftables on a dedicated LXC gateway.
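Since this is a planned feature, no shipped ruleset exists yet; a default-deny outbound policy of this shape could look like the following illustrative nftables sketch (table name, proxy address, and port are assumptions):

```nft
# Hypothetical ruleset on the LXC gateway: drop all forwarded traffic from
# fc-br0 except replies and connections to the interception proxy.
table inet fc_gateway {
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept
    iifname "fc-br0" ip daddr 10.99.0.1 tcp dport 3128 accept
  }
}
```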
mitmproxy intercepts all HTTP/HTTPS traffic transparently. Automatic CA certificate injection into every VM via cloud-init.
Inject real API keys at the proxy layer so secrets never touch the VM. Per-VM, per-domain credential mapping.
Per-VM or group-based domain allowlists with glob support. Blocked requests get a clear 403 response.
Dark-themed dashboard for managing allowlists, credentials, firewall rules, and live traffic logs. Runs inside the LXC on port 8443.
Real-time WebSocket-powered request log with color-coded status codes. Filter by VM or domain for instant visibility.
Common questions about running Firecracker on Proxmox.
Proxmox’s web interface (PVE) is hardcoded to manage two types of guests: QEMU/KVM virtual machines (via `qm` and configs in `/etc/pve/qemu-server/`) and LXC containers (via `pct` and configs in `/etc/pve/lxc/`).
Firecracker is a completely separate virtual machine monitor (VMM). Even though it uses the same underlying `/dev/kvm` hardware virtualization as QEMU, Proxmox has no awareness of Firecracker processes. To Proxmox, a running Firecracker microVM looks like an ordinary background Linux process (the `firecracker` binary managed by a systemd service) consuming CPU and RAM on the host.
Proxmox does not have a plugin architecture for alternative hypervisors, so there is no supported way to inject Firecracker VMs into the PVE interface without modifying Proxmox source code (which would break on every update).
Use `fc-list` and `fc-status <name>` from the command line to view your running VMs.