Linear Memory: Moving Data Across the Boundary
This chapter covers Wasm's memory model and the patterns for passing strings, arrays, and structs between the host and the module.
What Linear Memory Is
A Wasm module has one linear memory: a contiguous array of bytes, addressable from 0 to size - 1. That's it. No malloc, no garbage collector, no structured objects. A flat block of bytes.
The module reads and writes it with load/store instructions. The host (JS, Rust, whoever's running the module) sees it as a plain byte buffer.
// From JS
const memory = instance.exports.memory;
const view = new Uint8Array(memory.buffer);
view[0] = 42;
Both sides see the same memory. Writes from one side are visible to the other.
Pages
Memory grows in 64 KB pages. A module declares an initial page count and, optionally, a maximum:
(memory (export "memory") 1 10)
1 page initial, up to 10 pages max. The host can read memory.buffer.byteLength to see the current size (64 KB for 1 page, 128 KB for 2, etc.).
The module can grow memory with memory.grow:
(func $grow_one_page
  i32.const 1  ;; pages to add
  memory.grow
  drop)        ;; memory.grow returns the previous size; we don't care
Growth is relatively cheap but not free. Plan initial size for the common case.
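The page arithmetic is easy to observe from the host alone. A minimal sketch using a standalone WebAssembly.Memory, no module required:

```javascript
// Sketch: page-based growth observed from the host, using a standalone
// WebAssembly.Memory (no module involved).
const memory = new WebAssembly.Memory({ initial: 1, maximum: 10 });
console.log(memory.buffer.byteLength); // 65536 (1 page = 64 KB)

const prevPages = memory.grow(1);      // returns the previous size in pages
console.log(prevPages);                // 1
console.log(memory.buffer.byteLength); // 131072 (2 pages)
```

Note that memory.grow here is the JS API mirror of the memory.grow instruction above; both return the previous page count.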
The Boundary Problem
Wasm functions take and return primitive values: i32, i64, f32, f64. That's it. No strings, no arrays, no structs.
So how do you pass a string from JS to Wasm?
Answer: you don't "pass" it. You write the bytes into Wasm's memory and pass a pointer (i32 offset) and length.
Same direction back: Wasm writes bytes into its memory, returns the pointer and length, the host reads from that region.
The whole boundary dance is: encode your data as bytes, put it in memory, pass ptr/len, decode on the other side.
Passing a String from JS to Wasm
Module side:
(module
  (memory (export "memory") 1)
  ;; Count the bytes of a string passed in
  (func (export "count_bytes") (param $ptr i32) (param $len i32) (result i32)
    local.get $len))
Host side (JS):
const memory = instance.exports.memory;
const view = new Uint8Array(memory.buffer);
const text = "hello";
const bytes = new TextEncoder().encode(text);
// Place bytes at some offset
const ptr = 64;
view.set(bytes, ptr);
// Call with ptr, len
const count = instance.exports.count_bytes(ptr, bytes.length);
console.log(count); // 5
The module got (ptr=64, len=5). It doesn't "have" the string; it has coordinates into memory.
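The dance generalizes into a pair of small helpers. These names (`writeString`, `readString`) are illustrative, not a standard API; a plain ArrayBuffer stands in for instance.exports.memory.buffer:

```javascript
// Sketch: reusable ptr/len helpers. `buffer` stands in for
// instance.exports.memory.buffer; the names are illustrative.
function writeString(buffer, ptr, text) {
  const bytes = new TextEncoder().encode(text);
  new Uint8Array(buffer).set(bytes, ptr);
  return bytes.length; // pass this alongside ptr
}

function readString(buffer, ptr, len) {
  return new TextDecoder().decode(new Uint8Array(buffer, ptr, len));
}

const buffer = new ArrayBuffer(1024);
const len = writeString(buffer, 64, "hello");
console.log(readString(buffer, 64, len)); // "hello"
```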
Returning a String from Wasm to JS
The reverse. Wasm writes bytes into memory, returns a pointer and length (often as two return values or via a shared scratch area).
// In Rust
#[no_mangle]
pub extern "C" fn get_greeting(out_ptr: *mut u8, out_cap: usize) -> usize {
    let text = b"hello from wasm";
    let copy_len = text.len().min(out_cap);
    unsafe {
        std::ptr::copy_nonoverlapping(text.as_ptr(), out_ptr, copy_len);
    }
    text.len()
}
Host side:
const view = new Uint8Array(instance.exports.memory.buffer);
const outPtr = 256;
const outCap = 128;
const writtenLen = instance.exports.get_greeting(outPtr, outCap);
const text = new TextDecoder().decode(view.slice(outPtr, outPtr + writtenLen));
console.log(text); // "hello from wasm"
Notice the pattern: the host provides a buffer; the module writes into it. This avoids the module allocating memory the host then has to free.
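One caveat: if the result is longer than the host's buffer, the module truncates, and the return value (the full length) is the only signal. A sketch of the host-side check (`readResult` is a hypothetical helper, not part of any API):

```javascript
// Sketch: detecting truncation. The module returns the full length even
// when it could only write `outCap` bytes. `readResult` is hypothetical.
function readResult(view, outPtr, outCap, writtenLen) {
  const usable = Math.min(writtenLen, outCap); // bytes actually in memory
  const text = new TextDecoder().decode(view.slice(outPtr, outPtr + usable));
  return { text, truncated: writtenLen > outCap };
}

// Simulate a module that needed 20 bytes but only had 8 bytes of capacity.
const view = new Uint8Array(64);
view.set(new TextEncoder().encode("hello fr"), 16); // the 8 bytes written
const result = readResult(view, 16, 8, 20);
console.log(result.truncated); // true
```

When truncation matters, the host can allocate a bigger buffer and call again.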
Who Allocates?
Three options for ownership, each with trade-offs.
Host Allocates, Host Frees
Host picks a region of memory, passes ptr+cap to the module, module writes into it. Host reads. Simple for the host, but the module can't return variable-length results easily.
Module Allocates (malloc-style), Host Frees
The module exports alloc(len) -> ptr and dealloc(ptr, len) functions. To pass a string in, the host calls alloc, writes into memory at the returned offset, calls the real function, then calls dealloc.
#[no_mangle]
pub extern "C" fn alloc(size: usize) -> *mut u8 {
    let mut buf = Vec::with_capacity(size);
    let ptr = buf.as_mut_ptr();
    std::mem::forget(buf); // don't drop; host owns it now
    ptr
}

#[no_mangle]
pub unsafe extern "C" fn dealloc(ptr: *mut u8, size: usize) {
    let _ = Vec::from_raw_parts(ptr, 0, size); // let Rust drop it
}
Host:
const ptr = instance.exports.alloc(bytes.length);
view.set(bytes, ptr);
const result = instance.exports.do_thing(ptr, bytes.length);
instance.exports.dealloc(ptr, bytes.length);
This is what wasm-bindgen does under the hood. Powerful, but you need to remember to free.
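To make the free reliable, wrap the alloc/call/dealloc sequence so dealloc runs even if the call throws. A sketch assuming the alloc/dealloc exports above (`withBuffer` is an illustrative name):

```javascript
// Sketch: alloc → use → dealloc, with dealloc guaranteed by finally.
// Assumes the module exports alloc/dealloc as shown above.
function withBuffer(exports, bytes, fn) {
  const ptr = exports.alloc(bytes.length);
  try {
    new Uint8Array(exports.memory.buffer).set(bytes, ptr);
    return fn(ptr, bytes.length);
  } finally {
    exports.dealloc(ptr, bytes.length); // always runs, even on throw
  }
}
```

Usage: `withBuffer(instance.exports, bytes, (ptr, len) => instance.exports.do_thing(ptr, len))`.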
Arena / Bump Allocator
Module maintains an arena. Host writes into it. Module processes. Host reads results. Arena gets reset at the end of a logical "request".
Best for per-request processing where the module has no long-lived state.
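The arena itself is just an offset that only moves forward. In practice this logic lives inside the module; it's sketched here in JS for illustration, and all names are illustrative:

```javascript
// Sketch: a bump allocator. alloc advances an offset; reset reclaims
// everything at once. In a real module this lives on the Wasm side.
function makeArena(size) {
  let offset = 0;
  return {
    alloc(n) {
      if (offset + n > size) throw new Error("arena exhausted");
      const ptr = offset;
      offset += n;
      return ptr;
    },
    reset() { offset = 0; }, // end of request: free everything at once
  };
}

const arena = makeArena(1024);
console.log(arena.alloc(16)); // 0
console.log(arena.alloc(32)); // 16
arena.reset();
console.log(arena.alloc(8));  // 0 again
```

No per-allocation free, no fragmentation; the trade-off is that nothing outlives the reset.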
Layout Matters
When you pass structured data, the host and module need to agree on byte layout: field order, padding, endianness.
Wasm is always little-endian. Rust's #[repr(C)] guarantees field order matches C conventions. Use it.
#[repr(C)]
struct Point {
    x: i32,
    y: i32,
}
Host side, reading a Point:
const view = new DataView(instance.exports.memory.buffer);
const x = view.getInt32(ptr, true); // little-endian
const y = view.getInt32(ptr + 4, true);
The 4-byte offset matches #[repr(C)] field order and i32 alignment.
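Writing follows the same layout. A sketch with a plain ArrayBuffer standing in for the module's memory:

```javascript
// Sketch: writing and reading a #[repr(C)] Point { x: i32, y: i32 }.
// A plain ArrayBuffer stands in for instance.exports.memory.buffer.
const buffer = new ArrayBuffer(64);
const view = new DataView(buffer);
const ptr = 8;

view.setInt32(ptr, -3, true);    // x at byte offset 0, little-endian
view.setInt32(ptr + 4, 7, true); // y at byte offset 4

console.log(view.getInt32(ptr, true), view.getInt32(ptr + 4, true)); // -3 7
```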
Multi-Memory (Preview)
The multi-memory proposal allows multiple memories per module. Useful for:
- Separating code and data.
- Having a "scratch" memory you grow and discard freely.
- Running multiple modules that each need their own heap.
Most runtimes now support it, but wasm-bindgen and most tooling still assume a single memory. Treat multi-memory as future-facing for now.
The 4 GB Cap
A single linear memory is capped at 4 GB (memory64 proposal extends to 64-bit, but it's not universally supported). For most use cases this is plenty; for large-data workflows, plan your data layout carefully.
Common Pitfalls
Holding JS Uint8Array views across a memory.grow. After growth, the old ArrayBuffer is detached. Views become unusable. Always re-create views after any call that might grow memory.
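This pitfall is easy to demonstrate with a standalone memory:

```javascript
// Sketch: a view created before memory.grow ends up over a detached buffer.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 2 });
const stale = new Uint8Array(memory.buffer);

memory.grow(1);

console.log(stale.length); // 0 — the old ArrayBuffer was detached
const fresh = new Uint8Array(memory.buffer); // re-create after growth
console.log(fresh.length); // 131072
```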
Passing JS objects. You can't. Objects aren't bytes. Serialize (JSON, custom binary format) first.
Forgetting endianness. Wasm memory is little-endian. DataView methods default to big-endian, so always pass true for the littleEndian argument when reading or writing multi-byte values.
Allocating in the module and never freeing. Memory leak. Leaks won't crash the module (Wasm just grows), but pages pile up and you'll hit the cap.
Assuming zero-init. Freshly grown pages are zero-filled; memory that has been allocated, freed, and reused is not. Don't assume.
Copying when you could map. If the host and module share memory, you don't need to copy. Just agree on the offset and read/write in place.
Next Steps
Continue to 04-rust-to-wasm.md to produce Wasm from real Rust code.