Best Practices: Performance, Size, and Anti-Patterns
This chapter collects the habits that keep Wasm modules small, fast, and debuggable, with the anti-patterns that cause pain.
Size
Wasm size matters because:
- Download time is end-user-visible latency.
- Parse and compile time scales with size.
- Caching fidelity suffers when every change invalidates a big file.
Release Profile
Always:
[profile.release]
opt-level = "z" # size
lto = true # link-time optimization
codegen-units = 1 # better inlining
panic = "abort" # no unwind
strip = true # remove debug symbols
The above cuts a typical Rust Wasm from ~1.5 MB to ~200 KB.
wasm-opt
Run after every release build:
wasm-opt -Oz -o out.wasm in.wasm
Another 10 to 30% off. If size is critical, try -O4 and compare.
Trim Allocations
std::format!, println!, and anything that pulls in string formatting adds tens of KB. For tight modules:
- Avoid format! and to_string on hot paths; use write! into a pre-allocated buffer.
- Use #[no_std] for pure computation modules (Chapter 4).
- Replace String with &str where possible.
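A minimal sketch of the pre-allocated-buffer approach: a fixed-capacity buffer implementing fmt::Write so write! can format without touching the heap. The StackBuf type and its methods are our illustration, not something from the toolchain.

```rust
use core::fmt::Write;

/// Fixed-capacity buffer implementing fmt::Write, so `write!` can format
/// without heap allocation. (Sketch; the name and size are ours.)
pub struct StackBuf {
    buf: [u8; 64],
    len: usize,
}

impl StackBuf {
    pub fn new() -> Self {
        StackBuf { buf: [0; 64], len: 0 }
    }
    pub fn as_str(&self) -> &str {
        // Safe: we only ever append valid UTF-8 via write_str.
        core::str::from_utf8(&self.buf[..self.len]).unwrap()
    }
}

impl Write for StackBuf {
    fn write_str(&mut self, s: &str) -> core::fmt::Result {
        let bytes = s.as_bytes();
        let end = self.len + bytes.len();
        if end > self.buf.len() {
            return Err(core::fmt::Error); // buffer full
        }
        self.buf[self.len..end].copy_from_slice(bytes);
        self.len = end;
        Ok(())
    }
}

fn main() {
    let mut out = StackBuf::new();
    // Formats into the stack buffer: no String, no allocator, no format! bloat.
    write!(out, "result = {}", 42).unwrap();
    assert_eq!(out.as_str(), "result = 42");
}
```

The same pattern scales to #[no_std] modules, since only core::fmt is used.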
Trim Dependencies
Each dependency adds its own code. Audit with cargo-tree and drop what you don't need. Feature flags: enable only what you use.
[dependencies]
web-sys = { version = "0.3", features = ["console"] } # just console, not all of web-sys
Measure with twiggy
twiggy top out.wasm -n 20
Shows the biggest functions. Target the top few; the long tail rarely matters.
Performance
Minimize Boundary Crossings
Every call from JS into Wasm (or vice versa) has overhead: argument marshaling, type conversion, stack setup. A million small calls are slower than one call with a million-item batch.
Bad:
for (let i = 0; i < items.length; i++) {
results.push(wasm.processOne(items[i]));
}
Better:
const results = wasm.processAll(items);
One call, one marshal, one unmarshal. 10 to 100x faster for tight loops.
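On the Rust side, the batched export is just a function over a slice. A sketch of the shape (the function names mirror the JS above and are our assumption; in a wasm-bindgen project you would add #[wasm_bindgen] so JS can call process_all once with a whole typed array):

```rust
// Per-item work; placeholder logic for illustration.
pub fn process_one(x: f32) -> f32 {
    x * 2.0 + 1.0
}

// One boundary crossing amortized over the whole batch.
// In a wasm-bindgen build, annotate with #[wasm_bindgen] and JS passes
// a Float32Array, which arrives here as a slice.
pub fn process_all(items: &[f32]) -> Vec<f32> {
    items.iter().copied().map(process_one).collect()
}

fn main() {
    let out = process_all(&[1.0, 2.0, 3.0]);
    assert_eq!(out, vec![3.0, 5.0, 7.0]);
}
```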
Use Typed Arrays for Bulk Data
Float32Array, Int32Array, and friends are passed by reference and shared with Wasm memory (for some runtimes) or copied once. Plain JS arrays are iterated element by element.
const data = new Float32Array(1000000);
// ... fill ...
wasm.processSignal(data); // fast: typed array
SIMD Where It Fits
Wasm SIMD (128-bit vectors) speeds up image processing, audio, ML inference. Compile with:
RUSTFLAGS="-C target-feature=+simd128" cargo build --target wasm32-unknown-unknown --release
In code, use the std::simd crate (nightly) or the core::arch::wasm32 intrinsics (stable).
SIMD is not magic: small arrays won't see benefit, poorly-aligned data suffers. Benchmark before and after.
Avoid Panics
In Rust, a panic unwinds the stack (or aborts if panic = "abort"). Either way, panics are slow and add binary size. Use Result and match errors explicitly. Enable panic = "abort" in release profile.
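A quick sketch of the Result-over-panic style: the error path is an ordinary value, so no unwinding machinery is pulled into the module. (checked_div is our example name.)

```rust
// Returning Result keeps the failure explicit and panic-free,
// so the Wasm module carries no unwinding code for this path.
fn checked_div(a: i32, b: i32) -> Result<i32, &'static str> {
    if b == 0 {
        Err("division by zero")
    } else {
        Ok(a / b)
    }
}

fn main() {
    // The caller matches on the error instead of catching a panic.
    assert_eq!(checked_div(10, 2), Ok(5));
    assert_eq!(checked_div(1, 0), Err("division by zero"));
}
```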
Pre-allocate Memory
If you know you'll process up to N MB of data, declare initial memory close to that. Avoids growth calls and buffer re-views on the host side.
# Cargo.toml (wasm-bindgen projects)
[package.metadata.wasm-pack.profile.release]
wasm-opt = ["-Oz", "--initial-memory=16777216"] # 16 MB initial
Debugging
Wasm debugging is improving but still not great. Techniques:
console.log from Rust
use wasm_bindgen::prelude::*;

#[wasm_bindgen(inline_js = "export function log_str(s) { console.log(s); }")]
extern "C" {
    fn log_str(s: &str);
}

fn debug(msg: &str) {
    log_str(msg);
}
Or the simpler version via web-sys:
web_sys::console::log_1(&JsValue::from_str(msg));
Panic Hook for Nicer Errors
Default panic output in Wasm is useless. Install a hook:
use wasm_bindgen::prelude::*;

#[wasm_bindgen(start)]
pub fn main() {
    console_error_panic_hook::set_once();
    // ...
}
This requires the console_error_panic_hook crate as a dependency. Now panics print to the JS console with a stack trace.
Source Maps
Chromium and Firefox support DWARF debug info in Wasm:
# Build with debug info kept (or use a debug build)
cargo build --target wasm32-unknown-unknown
Open DevTools, set breakpoints in Rust source, step through. The experience is decent in Chrome 2023+ and Firefox 2024+.
Unit Test Outside Wasm
Most bugs are in your Rust logic, not the Wasm boundary. Test as a regular Rust crate:
cargo test
Only reach for wasm-bindgen-test when you need browser APIs in the test.
Inspect the WAT
When something surprising happens, wasm2wat is often faster than a debugger. Grep for your function name, read the instructions, understand what got generated.
Security
Validate Untrusted Input
Wasm modules you don't control can have bugs. Validate all data coming out of Wasm before you trust it.
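A typical host-side check is bounds-validating any pointer/length pair the module returns before reading from its memory. A minimal sketch (the function and its call site are our illustration; the actual module call is assumed):

```rust
/// Validate an (offset, len) range a module returned before touching
/// its exported memory. (Sketch; names are ours.)
fn validate_range(offset: usize, len: usize, memory_size: usize) -> Result<(), String> {
    // checked_add guards against integer overflow in offset + len.
    let end: usize = offset.checked_add(len).ok_or("offset + len overflows")?;
    if end > memory_size {
        return Err(format!(
            "range {}..{} exceeds memory of {} bytes",
            offset, end, memory_size
        ));
    }
    Ok(())
}

fn main() {
    let memory_size = 65_536; // one 64 KB Wasm page
    assert!(validate_range(0, 16, memory_size).is_ok());
    assert!(validate_range(65_530, 16, memory_size).is_err()); // out of bounds
    assert!(validate_range(usize::MAX, 1, memory_size).is_err()); // overflow
}
```

The overflow check matters: a malicious module can pick offset and len so their sum wraps around and passes a naive `offset + len <= size` test.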
Capability Scoping
For server-side Wasm, grant the minimum capabilities. Never call preopened_dir("/") in production; scope preopens to the specific tenant directory.
Resource Limits
Memory cap, CPU deadline (epoch or fuel), table size cap. Chapter 9 covers this in Wasmtime.
Supply Chain
Treat Wasm dependencies like any other dependency. A malicious crate in your build pipeline can ship malicious Wasm. cargo-audit, version pinning, reproducible builds.
Common Anti-Patterns
Treating Wasm as a JS Replacement
Wasm excels at CPU-bound work. For event handling, DOM manipulation, or string-heavy UI logic, JS is usually faster to write and run. Don't port your whole UI to Wasm.
Overusing wasm-bindgen
For modules with two integer functions, the 100 KB of wasm-bindgen glue is silly. Strip down to bare Rust.
Ignoring Startup Cost
A 5 MB Wasm file is 5 MB every cold page load. Lazy-load, cache aggressively (long Cache-Control on versioned URLs), and measure actual user-perceived latency.
Copying Huge Buffers Needlessly
If you have a 50 MB image in JS and pass it to Wasm, copying is slow. For truly large buffers, design around the memory the module exports: write directly into Wasm memory from JS.
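One common shape for this is the "alloc + pointer" pattern: the module exports an allocator, the host writes bytes straight into exported memory at the returned offset, then calls the processing function. A sketch under that assumption (function names are ours):

```rust
use std::mem;

/// Allocate a buffer inside linear memory and hand its address to the host,
/// which writes into it directly through the exported memory. (Sketch.)
#[no_mangle]
pub extern "C" fn alloc_buffer(len: usize) -> *mut u8 {
    let mut buf: Vec<u8> = Vec::with_capacity(len);
    let ptr = buf.as_mut_ptr();
    mem::forget(buf); // host owns it until dealloc_buffer is called
    ptr
}

/// Reclaim a buffer previously handed out by alloc_buffer.
#[no_mangle]
pub extern "C" fn dealloc_buffer(ptr: *mut u8, len: usize) {
    // Safe only for pointers that came from alloc_buffer with this len.
    unsafe { drop(Vec::from_raw_parts(ptr, 0, len)) }
}

fn main() {
    // On the host, this pointer is an offset into exported memory;
    // here we just exercise the round trip.
    let p = alloc_buffer(16);
    assert!(!p.is_null());
    dealloc_buffer(p, 16);
}
```

From JS, the host would view the region as `new Uint8Array(wasm.memory.buffer, ptr, len)` and fill it in place: zero copies of the 50 MB image beyond the one write.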
Forgetting to Export Memory
Without export "memory", the host can't read or write the module's linear memory. Always export it.
Forever-Looping Modules in Production
Untrusted code without a CPU deadline can hang your host. Always set limits on the Wasm side if you're running code you don't control.
Shipping Debug Builds
Debug Wasm is huge and slow. Always --release.
Not Versioning Wasm Files
Shipping greet.wasm without a content hash in the URL means clients cache the old version forever. Use greet.<hash>.wasm and long Cache-Control headers.
The One-Page Checklist
Before shipping a Wasm module:
- Built with --release.
- Size-optimized (opt-level = "z", lto, strip, panic = "abort").
- Ran through wasm-opt -Oz.
- Checked with twiggy for obvious bloat.
- Served with a content-addressed URL and long cache headers.
- Lazy-loaded, not blocking first paint.
- Exports memory (if host needs to read/write).
- Panics produce useful errors (panic hook installed).
- Capabilities scoped minimally (for server-side).
- Resource limits set (for server-side, multi-tenant).
Where to Go From Here
You have the fundamentals, the toolchain, the browser and server patterns, and the habits. The next level is depth:
- Rust and WebAssembly book: the canonical next step for Rust-to-Wasm depth.
- Wasmtime docs: server-side runtime internals and embedding.
- Bytecode Alliance blog: where the next-gen Wasm features are designed in public.
- WebAssembly spec: the ground truth when something surprises you.
- Read real components. Open-source Wasm plugins (Envoy filters, Spin components, Fastly Compute@Edge samples) show patterns at production scale.
- Build a small tool. A CLI that processes files via WASI. A browser filter that runs faster than JS could. A plugin system for your own server. Shipping something forces the lessons to stick.
Wasm has been "the next big thing" for years. It's also been quietly growing into critical infrastructure the whole time. The sweet spot is there: fast, portable, sandboxed compute. The rest is detail.