Public beta · No credit card required

Faster, safer, cheaper sandboxes for AI agents.

Platform designed for autonomous agent workloads.

Starts in 20 ms.

Billed per second from $0.041 per vCPU-hour.

Read the whitepaper
COLD BOOT
20 ms
Time from create() to first executable instruction inside the guest. Measured on representative hardware.
ISOLATION
4 layers
KVM hardware boundary, credential proxy, network policy, HMAC audit chain.
PRICE
$0.041 / hr
1 vCPU + 1 GiB. Billed per second. No minimum.
from isorun import Sandbox

with Sandbox("python") as sb:
    r = sb.exec("python3 -c 'print(sum(range(10**7)))'")
    print(r.stdout) # 49999995000000
Primitives

Six primitives. None optional.

Each primitive contributes to one or more of the three constraints (cost, latency, isolation) that bind agent workloads simultaneously. The properties are mutually reinforcing: shipping six of seven is not 86% of the value. Read the full derivation →

Hardware-enforced isolation

Each sandbox executes inside its own guest kernel under KVM. The boundary is enforced by the CPU's virtualization extensions, not by namespaces and cgroups inside a shared host kernel.

Any OCI image

Pull any image from Docker Hub. First boot is approximately 30 seconds while the image is optimized. Every subsequent boot is under 10 ms at p50.

Streaming exec

stdout and stderr arrive incrementally via callback as the command runs, not after it exits. Required for agent loops that observe partial output before deciding next steps.
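The shape of the pattern, independent of any SDK, can be sketched with the standard library: output is delivered line by line through a callback while the process is still running, rather than buffered until exit. This is a conceptual illustration, not the isorun API; the function name and callback signature are illustrative.

```python
import subprocess

def exec_streaming(cmd, on_stdout):
    """Run a shell command, invoking on_stdout for each line as it arrives."""
    proc = subprocess.Popen(
        cmd, shell=True,
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
    )
    for line in proc.stdout:          # yields lines incrementally, not after exit
        on_stdout(line.rstrip("\n"))
    return proc.wait()                # exit code

# An agent loop can observe partial output and react before the command exits.
lines = []
code = exec_streaming("printf 'step 1\\nstep 2\\n'", lines.append)
```

The agent's decision logic lives in the callback, so a long-running command can be cancelled or redirected as soon as the relevant output appears.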

Snapshot and restore

Checkpoint a running VM in 23 ms. Restore the same state as a new sandbox in time comparable to a cold boot. Speculative agent retries become a routine primitive operation.

Detachable sessions

Long-running sessions survive client disconnect. The VM continues to execute, state persists, the agent reattaches when ready. Session timeout is configurable.

Per-second billing

destroy() returns measured cpu_ms, mem_peak_bytes, and uptime_ms. The advertised price is the price; there are no separate metering line items for memory, network, or platform fee.

Security layers

Four mechanisms. Mutually reinforcing.

Hardware isolation. Credential injection at the host edge. Default-deny network egress. A tamper-evident audit chain. All four on by default; all four on the API surface.

Layer 1

KVM hardware boundary

Each sandbox executes inside its own guest kernel under hardware-assisted virtualization. The trust base shrinks from a multi-million-line kernel to a small VMM.

# every sandbox = real Linux VM
sb = Sandbox("python")
sb.exec("uname -a")
# Linux 6.x · own kernel · KVM
Layer 2

Out-of-sandbox credential injection

API keys never enter the guest's address space. The real credential is injected at the network layer when a matching outbound request is observed; the env var inside the VM is a placeholder.

# real key never enters the VM
sb = Sandbox("python",
  credentials={"openai": key}
)
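The mechanism can be sketched in a few lines: the guest only ever sees a placeholder token, and a host-side proxy swaps in the real key when a matching outbound request passes through the network edge. All names here (the placeholder string, the key table, the function) are illustrative, not the isorun implementation.

```python
# Real keys live on the host, never in the guest's address space.
REAL_KEYS = {"api.openai.com": "sk-real-abc123"}
PLACEHOLDER = "ISORUN_PLACEHOLDER"

def inject_credentials(host, headers):
    """Rewrite the Authorization header at the network edge, host-side."""
    auth = headers.get("Authorization", "")
    if host in REAL_KEYS and PLACEHOLDER in auth:
        return {**headers, "Authorization": auth.replace(PLACEHOLDER, REAL_KEYS[host])}
    return headers

# Inside the VM, the request carries only the placeholder:
guest_headers = {"Authorization": f"Bearer {PLACEHOLDER}"}
edge_headers = inject_credentials("api.openai.com", guest_headers)
```

Because the substitution happens outside the VM, code running inside the sandbox cannot read, log, or exfiltrate the real credential even with full guest-root access.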
Layer 3

Default-deny network egress

All outbound traffic is denied by default. Permitted traffic is added by an explicit allow-list — named profile or structured policy of domains, CIDRs, and per-endpoint method/path rules.

# air-gapped — zero egress
sb = Sandbox("python",
  network_profile="locked-down"
)
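A default-deny policy of this shape can be sketched with the standard library: every request is denied unless it matches an explicit domain, CIDR, or per-endpoint method/path rule. The policy structure below is illustrative, not the isorun wire format.

```python
import ipaddress

# Illustrative allow-list: anything not matched here is denied.
POLICY = {
    "domains": {"api.openai.com"},
    "cidrs": [ipaddress.ip_network("10.0.0.0/8")],
    "endpoints": [("api.openai.com", "POST", "/v1/chat/completions")],
}

def egress_allowed(host, method=None, path=None):
    """Default-deny: return True only on an explicit allow-list match."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        addr = None                   # not an IP literal; use domain rules
    if addr is not None:
        return any(addr in net for net in POLICY["cidrs"])
    if method is not None:            # per-endpoint rules are the narrowest match
        return (host, method, path) in POLICY["endpoints"]
    return host in POLICY["domains"]
```

Note the per-endpoint check: even an allow-listed domain can be restricted to specific methods and paths, so a sandbox permitted to call one API route cannot reach the provider's other endpoints.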
Layer 4

Tamper-evident audit log

Every command execution, file operation, network request, and credential-proxied call is signed with an HMAC chained to the previous entry. Modification of any entry breaks the chain at every subsequent entry.

# tamper-evident HMAC chain
assert sb.audit.verify_all()
ev = sb.audit.last("exec")
# {cmd, exit, sig, prev_hash}
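The chaining construction itself is standard and can be reproduced with the standard library: each entry's HMAC covers the entry plus the previous signature, so editing any entry invalidates every later link. This is a minimal sketch of the technique; the key handling and field names are illustrative, not isorun's internal format.

```python
import hashlib
import hmac
import json

KEY = b"audit-signing-key"   # illustrative; a real deployment keeps this host-side

def append(log, event):
    """Sign the event together with the previous signature and append it."""
    prev = log[-1]["sig"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True) + prev
    sig = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"event": event, "prev": prev, "sig": sig})

def verify_all(log):
    """Walk the chain; any edited entry breaks every subsequent link."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True) + prev
        expect = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
        if entry["prev"] != prev or not hmac.compare_digest(entry["sig"], expect):
            return False
        prev = entry["sig"]
    return True

log = []
append(log, {"cmd": "uname -a", "exit": 0})
append(log, {"cmd": "pip install requests", "exit": 0})
assert verify_all(log)
log[0]["event"]["exit"] = 1   # tamper with the first entry...
assert not verify_all(log)    # ...and verification fails from that point on
```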
Why all four

Each mechanism depends on the others to function as advertised. Without Layer 3 (network egress), Layers 1 and 2 do not prevent data exfiltration: hostile code can still call out from inside an otherwise-isolated VM. Without Layer 1 (KVM), Layer 4 is weakened: hostile code in the guest can rewrite the audit log before it flushes. Shipping six of seven properties is not 86% of the value; the system has a structural failure mode in the dimension it omitted, and that omitted dimension determines the failure that will eventually be discovered. Whitepaper §4 →

Five client libraries. Four framework integrations.

The SDK surface is small enough to learn in 5 minutes and complete enough that you do not need to drop into the HTTP API for any production use case.

Native SDKs
  • Python · pip install isorun · from isorun import Sandbox
  • TypeScript · npm install isorun · import { Sandbox } from 'isorun'
  • Go · go get github.com/isorun-ai/isorun-go · import "github.com/isorun-ai/isorun-go"
  • Ruby · gem install isorun · require 'isorun'
  • Java · ai.isorun:isorun:1.0.0 · import ai.isorun.Sandbox;
Framework integrations

LangChain

Implements the LangChain Tool interface. Calls to .invoke() route to Sandbox.exec inside an isorun VM and return the structured result.

pip install isorun-langchain

CrewAI

Implements the CrewAI Tool interface. Add to any crew's tool list; tasks that need code execution run inside an isorun sandbox transparently.

pip install isorun-crewai

OpenAI Agents

Function tool registered with the OpenAI Agents SDK runtime. The model calls the tool directly; each call provisions or reuses a sandbox by session ID.

pip install isorun-openai

MCP server

Speaks the Model Context Protocol. Connect Claude Desktop, Cursor, Zed, or any MCP-compatible client; the server exposes sandbox primitives as MCP tools.

npx isorun-mcp
Integration patterns

Two patterns. Six representative workloads.

The whitepaper distinguishes Pattern A (embedded in a customer-facing product, high volume) from Pattern B (driven from an internal agent loop, lower volume). Both use the same primitives.

Code interpreter

User-supplied Python on user-supplied data. The chatbot calls Sandbox.exec with the snippet; results stream back over the same client. Pandas, matplotlib, and scipy are pre-warmed in the standard image.

sb.exec("python3 analyze.py data.csv")

Coding agents

The agent operates inside a full Linux environment with bash, package manager, and git. Installation, build, test, and commit run as a developer would run them — without leaving the sandbox.

sb.exec("pip install -r req.txt && pytest")

Browser and computer use

A real desktop session reachable over VNC. The agent drives Playwright, pyautogui, or any X11 client to interact with web pages and applications. Display dimensions and DPI are configurable per session.

sb.exec("playwright open https://...")

Eval pipelines

Checkpoint once after warm-up. Restore N parallel branches from the same snapshot. Each branch is independent; the snapshot is read-only and reusable across runs without reboot.

Sandbox.restore(snap) # × N branches

Long-running sessions

Sessions can run for hours without an active client. The session timeout is set per sandbox at creation; the SDK reattaches to existing sessions by ID. Memory and filesystem state persist across reattachment.

Sandbox("python", sandbox_timeout=86400)

Untrusted code execution

User-submitted scripts run inside the same KVM boundary as agent code. The locked-down network profile and credential proxy apply by default; add an explicit allow-list to permit specific egress.

Sandbox("python", network_profile="locked-down")
Benchmarks

Cold boot, snapshot, isolation, price.

Every public competitor compared on the same axes. We publish p99.9 alongside p50 because tail latency is the real cost. Methodology and reproduction script in the whitepaper.

| Metric | Isorun | E2B | Daytona | Modal | InstaVM | Freestyle |
| Cold boot | 20 ms | ~150 ms | ~90 ms | ~300 ms | 185 ms (P95) | ~700 ms |
| Snapshot & restore | 23 ms | <500 ms | Live forking | — | — | — |
| Isolation | KVM microVM | Firecracker | Sysbox container | gVisor | KVM microVM | Nested virt |
| Credential proxy | Built-in | Built-in | — | — | — | — |
| HMAC audit trail | Built-in | — | — | — | — | — |
| Network filtering | Per-endpoint rules | Basic | $500/mo tier | Domains + CIDR | — | — |
| SDK languages | 5 (Py, TS, Go, Ruby, Java) | 2 | 3 | 1 | 1 | 1 |
| Base / minimum | $0 | $150/mo Pro | $50–$500/mo | Tier-gated | $100/mo Pro | $50/mo Pro |
| Price (1 vCPU + 1 GiB) | $0.041/hr | $0.067 (+63%) | $0.067 (+63%) | $0.095 (+132%) | $0.067 (+63%) | $0.053 (+29%) |

Cold boot is measured from create() request returning to the first executable instruction inside the guest, on representative hardware. Competitor numbers are from public landing pages and pricing pages, verified 2026-04-07. Methodology and reproduction script in the whitepaper.

Pricing

$0.041 per vCPU-hour. Billed per second. No minimum.

The advertised price is the price. No separate metering for memory, network egress, or platform fees. No monthly tier required to access individual features.

Sandbox · 1 vCPU · 1 GiB
$0.041/ hour

For comparison: $0.053/hr at Freestyle (the next-cheapest), $0.067/hr at E2B, Daytona, and InstaVM. Scales linearly with CPU count — 2 vCPU is $0.082, 4 vCPU is $0.164, 16 vCPU is $0.656.

  • Per-second billing
  • All features included
  • Credential proxy
  • Network policy + endpoint rules
  • HMAC audit trail
  • Checkpoint & fork
  • Any OCI image
  • EU + US regions
Real billing data — returned by destroy()
stats = sb.destroy()
# {cpu_ms: 1247, mem_peak_bytes: 184320000, uptime_ms: 3416}
Why $0.041

The execution environment runs on dedicated bare-metal hardware leased monthly from specialist providers, not on VMs from hyperscale cloud providers. A sandbox running on a hyperscaler's VM is being virtualized twice — once by the cloud provider, once by the sandbox runtime — which both costs CPU and constrains the available isolation primitives. Whitepaper §6 →

Dedicated capacity, custom images, or compliance review? hello@isorun.ai

Five lines of Python. 20 ms cold boot.

Public beta. No credit card required. The first sandbox runs in under a minute from sign-up to executable code.