Server Runtimes: Wasmtime, Wasmer, WasmEdge
This chapter surveys the server-side Wasm runtimes: how to pick one, embed one, and run untrusted code safely.
The Players
Four runtimes cover almost all server-side Wasm.
Wasmtime
- From the Bytecode Alliance (Mozilla, Fastly, Intel, Microsoft, and others).
- Reference implementation of WASI.
- Rust-based, with embedding APIs for Rust, C, Python, Go, .NET.
- Fast, well-documented, active.
- First to ship Preview 2 and Component Model support.
Wasmer
- Commercial backing (Wasmer Inc.).
- Rust-based, with embedding APIs for many languages.
- Unique: WAPM (a Wasm package manager) and Wasmer Edge (a hosted deployment).
- Supports non-WASI Wasm with custom imports.
WasmEdge
- Hosted by the CNCF (Cloud Native Computing Foundation) as a sandbox project.
- Focus on cloud and edge workloads.
- Extensions for networking (TLS, HTTP), TensorFlow, key-value storage.
- Very fast for specific workloads thanks to AOT compilation.
wazero
- Pure Go. No CGO. Runs anywhere Go runs.
- Ideal for Go applications that want to embed Wasm without a C dependency.
- Narrower feature set than the Rust-based runtimes, but enough for most use cases.
Picking One
Short version:
- Default to Wasmtime. It's the reference, moves fast, great docs, widely used.
- Pick WasmEdge if you need TLS or bundled TensorFlow-style features out of the box.
- Pick wazero if you're embedding in Go and want zero CGO.
- Pick Wasmer if the package manager or the hosted platform fit your distribution story.
Performance differences are usually small for typical workloads. Fit with your stack and operational preferences matters more.
Running a Module from the CLI
All four runtimes have a similar CLI.
wasmtime run my_module.wasm
wasmer run my_module.wasm
wasmedge my_module.wasm
wazero run my_module.wasm
Add capabilities:
wasmtime --dir=. --env API_KEY=secret my_module.wasm
wasmer run --dir=. --env API_KEY=secret my_module.wasm
Useful for quick tests and CLI tools. For real applications, you embed the runtime in your server.
Embedding Wasmtime in Rust
The common pattern: a Rust server that runs untrusted Wasm on demand.
Cargo.toml:
[dependencies]
wasmtime = "20"
wasmtime-wasi = "20"
anyhow = "1"
Minimal embedding:
use wasmtime::*;
use wasmtime_wasi::preview1::WasiP1Ctx;
use wasmtime_wasi::WasiCtxBuilder;

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    let mut linker: Linker<WasiP1Ctx> = Linker::new(&engine);
    wasmtime_wasi::preview1::add_to_linker_sync(&mut linker, |t| t)?;

    let wasi = WasiCtxBuilder::new()
        .inherit_stdio()
        .build_p1();
    let mut store = Store::new(&engine, wasi);

    let module = Module::from_file(&engine, "my_module.wasm")?;
    let instance = linker.instantiate(&mut store, &module)?;

    let start = instance.get_typed_func::<(), ()>(&mut store, "_start")?;
    start.call(&mut store, ())?;
    Ok(())
}
What's happening:
- Create an Engine (compiles and optimizes Wasm).
- Create a Linker (resolves imports).
- Add WASI imports to the linker.
- Build a WasiCtx with the capabilities you allow.
- Create a Store (holds module instance state).
- Load and instantiate the module.
- Call the module's _start function (the WASI entry point).
WasiCtxBuilder is where you grant capabilities. inherit_stdio() means the module's stdout/stderr go to the host's. You can also grant filesystem access, env vars, and so on.
Scoped Capabilities for Multi-Tenant Workloads
For running untrusted code per request:
let wasi = WasiCtxBuilder::new()
    // Only this directory, mapped to the guest's root.
    // DirPerms and FilePerms come from wasmtime_wasi.
    .preopened_dir("/tmp/tenant-42/", "/", DirPerms::all(), FilePerms::all())?
    .env("TENANT_ID", "42")
    .build_p1();
The module sees / as the tenant's sandbox. It can't reach outside. Different tenants get different WasiCtx instances, each scoped to their data.
This is the sandboxing guarantee that makes Wasm attractive for multi-tenant platforms: the OS isn't involved in isolation; the Wasm runtime is. No container overhead.
Memory Limits
By default, a module can grow memory up to 4 GB. That's a lot for untrusted code.
struct TenantLimits;

impl wasmtime::ResourceLimiter for TenantLimits {
    fn memory_growing(
        &mut self,
        _current: usize,
        desired: usize,
        _maximum: Option<usize>,
    ) -> anyhow::Result<bool> {
        Ok(desired < 100 * 1024 * 1024) // 100 MB cap
    }

    fn table_growing(
        &mut self,
        _current: u32,
        desired: u32,
        _maximum: Option<u32>,
    ) -> anyhow::Result<bool> {
        Ok(desired < 10_000)
    }
}
Cap memory per tenant. Otherwise one bad module eats your host's RAM.
CPU Time Limits
Runaway modules can loop forever. Wasmtime supports epoch-based interruption:
let mut config = Config::default();
config.epoch_interruption(true);
let engine = Engine::new(&config)?;
let mut store = Store::new(&engine, wasi);
store.set_epoch_deadline(1); // interrupt after 1 epoch tick
// Separate thread bumps the epoch every N ms
let engine_clone = engine.clone();
std::thread::spawn(move || loop {
    std::thread::sleep(std::time::Duration::from_millis(100));
    engine_clone.increment_epoch();
});
The module gets a trap (runtime error) when its deadline expires. Good for killing runaways without forcibly tearing down your host.
Wasmer has metering for similar purposes, using a per-instruction cost model.
Ahead-of-Time Compilation
Compiling Wasm is fast but not free. For cold-start-sensitive workloads (edge functions, serverless), pre-compile:
wasmtime compile my_module.wasm -o my_module.cwasm
.cwasm is native code for a specific CPU and wasmtime version. Loading it is near-instant, but it can't be shipped across CPU architectures without recompiling.
Most production deployments of serverless Wasm pre-compile during deploy.
Use Cases
Real server-side Wasm deployments:
- Shopify Functions: merchants write Rust, compile to Wasm, Shopify runs their code for each checkout with scoped capabilities.
- Fastly Compute@Edge: JS/Rust/Go compiled to Wasm runs at Fastly's edge nodes.
- Cloudflare Workers: Wasm alongside V8 JavaScript isolates.
- Envoy proxy: Wasm extension points for request processing.
- Istio: same, on the service-mesh layer.
- Postgres extensions: some teams run Wasm for custom SQL functions instead of native .so files.
The common thread: untrusted code that needs low start-up and low overhead.
Common Pitfalls
Not limiting memory. Untrusted module eats all RAM. Always cap.
Not limiting CPU. Untrusted module loops forever. Always have a deadline or metering.
Sharing Engine across threads. Safe. Share it; it's cheap.
Sharing Store across threads. Not safe. One Store per instance, one instance per thread (or use async with fuel-based yielding).
Ignoring AOT. Cold-start benchmarks look bad; pre-compile for a big speedup.
Hard-coding paths for tenant isolation. Use preopened_dir per tenant; don't give the module the real filesystem.
Next Steps
Continue to 10-component-model.md for the interoperability story across languages.