Threading the needle with concurrency and parallelism in the Component Model
Luke Wagner
Fastly
WHY?
[Slide diagram: reasons to use wasm — near-native code speed 🏎️ and cost savings 💰; portability across x86, ARM, RISC-V; safe plugins; fast cold start ⏱️ (just an engine plus guest code, no microVM ❄️); small guest code size; filtering private APIs behind public APIs; …]
COMPONENT MODEL
WHY?
A component is a new binary format being standardized at the W3C
…that contains and links together a set of wasm modules
…and defines how they interact with the outside world.
6 reasons…
COMPONENT MODEL
WHY?
Reason 1: Glue Code for Free
JS dev wanting to use wasm:
[Diagram: JS makes a typed call into a component; the glue code 🧙 binding the component to JS and the Web APIs is generated automatically by tooling ⚙️]
COMPONENT MODEL
WHY?
Reason 2: SDKs for Free
Platform dev embedding wasm plugins:
[Diagram: without an IDL, every API (API 1 … API 4) needs a hand-built ⚒️ SDK per language; with the APIs described once in an IDL (WIT), the bindings are generated by tooling ⚙️ and the hand work shrinks to polish]
COMPONENT MODEL
Reason 3: Virtual Platform Layering
Platform dev embedding wasm plugins (#2):
[Diagram: the platform impl sits behind a platform interface described in WIT; the wasm engine enforces that boundary, so plugins can’t reach the private APIs 💥 and the platform can evolve underneath them 🚀]
COMPONENT MODEL
WHY?
Reason 4: Modularity without Microservices
Software architect:
“The modular monolith”: modules A, B, C in one process 🏎️ — official interfaces, plus unofficial coupling through global state 🤔
“The microservice architecture”: microservices A, B, C talking over HTTP — strong boundaries 💪, but the in-process speed is gone
“The strongly modular monolith”: components A, B, C linked by WIT interfaces — strong boundaries 💪 at in-process speed 🏎️ 😎
Choose your own adventure: mix and match, e.g. one microservice containing components A and B, talking HTTP to another microservice containing component C
COMPONENT MODEL
WHY?
Devs producing wasm:
Reason 5: Browser Agnostic Binaries — the same component ⚙️ runs on any wasm engine, inside or outside the browser
Reason 6: Secure Polyglot Packages — components written in different languages, packaged and distributed via registries (OCI)
COMPONENT MODEL
WHY?
1. Glue Code for Free
2. SDKs for Free
3. Virtual Platform Layering
4. Modularity without Microservices
5. Browser Agnostic Binaries
6. Secure Polyglot Packages
A bit of WIT
value types
  numbers, lists, records, variants, etc.
  passed by value (copy/immutable)
resource types
  abstract types with explicit lifetimes
  passed by handle (owned/borrowed)
🔜 concurrency types
  futures and streams
  async passing of values+handles

interface http {
  resource request {
    headers: func() -> list<tuple<string,string>>;
    body: func() -> tuple<stream<u8>, future<option<trailers>>>;
    …
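
To make by-value vs. by-handle concrete, here is a rough Rust analogy (illustrative only — hypothetical types, not what a bindings generator emits): value types are copied across the call, while a resource is an opaque handle whose methods borrow it.

    // Rough Rust analogy of the WIT categories above
    // (hypothetical types, not generated bindings).
    struct Request(); // resource: opaque to the caller, used via a handle

    impl Request {
        // `&self` plays the role of a borrowed resource handle;
        // the returned list is a value type: a fresh, immutable copy.
        fn headers(&self) -> Vec<(String, String)> {
            Vec::new() // stand-in body
        }
    }

    fn main() {
        let req = Request();
        let hdrs = req.headers(); // caller owns its own copy of the value
        assert!(hdrs.is_empty());
    }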
🔜 concurrent execution
step 1: async
step 2: threads
Step 1: async
High-level concurrency properties:
cooperative
  only switch execution at explicit yield points
  don’t force multi-threading if you don’t need it
colorless
  sync functions can call async functions and vice versa
  avoids problems described in “What Color Is Your Function?”
structured
  there’s a well-defined (cross-component) call stack
  useful for debugging/profiling/tracing purposes
Coming soon (H1 2025)
  as part of the broader WASI 0.3.0 release
  backwards-compatible with WASI 0.2

One WIT signature, bindings generated ⚙️ in each language’s natural style:

interface handler {
  handle: async func(in: request) -> response;
}
(WIT)

async fn handle(in: Request) -> Response         (Rust)
async function handle(in: Request) -> Response   (JS)
func handle(in Request) Response                 (Go)
…
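
As a sketch of what the Rust side could look like (plain Rust plus the futures crate; Request/Response are hypothetical stand-ins, not the generated WASI 0.3 bindings), the export is just an ordinary async fn driven to completion by the host:

    // Minimal sketch, assuming hypothetical Request/Response types;
    // in a real component these come from generated WIT bindings.
    struct Request;
    struct Response(String);

    async fn handle(_req: Request) -> Response {
        // awaiting async imports here yields to the host instead of blocking
        Response("hello".to_string())
    }

    fn main() {
        // Stand-in for the host event loop that drives the export call.
        let resp = futures::executor::block_on(handle(Request));
        println!("{}", resp.0);
    }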
Step 2: threads
Why?
Parallelism* (*assuming you implement it to avoid contention and bottlenecks)
Concurrency that Just Works: let each language runtime keep its own primitives — workers, goroutines, threads, … — instead of compiling them through Binaryen’s Asyncify 🏎️
Step 2: threads
Disclaimer: the following plans are still in flux and may change or be fatally flawed
Step 2: threads
Standard proposals:
Core WebAssembly (threads):
  atomic instructions (load*, store*, rmw*, wait, notify, fence)
  (memory … shared) + memory model
Core WebAssembly (shared-everything-threads):
  allow shared on everything, incl. func and table definitions
    app: (memory … shared) (func (shared …) …) (func (shared …) …)
    (table (ref (shared func)) shared)
  (without shared functions: dlopen 😡, O(M×N); with them: dlopen 🙂, O(M))
Component Model:
  a thread.spawn_indirect built-in that core modules import from the wasm runtime:
    (import "..." "thread.spawn_indirect"
      (shared (func (param $funcptr i32) (param $v i32)
               (result i32))))
  $funcptr indexes into a (table funcref), like call_indirect
  usable via built-ins natively, or via a JS polyfill spawning workers
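
For a feel of what these primitives support, here is a plain-Rust sketch (ordinary native Rust; the assumption is that a threads-enabled wasm toolchain lowers std::thread and std::sync::atomic onto the spawn built-in and atomic instructions above):

    use std::sync::atomic::{AtomicU64, Ordering};
    use std::thread;

    // Shared state: on a shared-memory wasm target this would live in a
    // (memory … shared), and fetch_add lowers to an atomic rmw instruction.
    static SUM: AtomicU64 = AtomicU64::new(0);

    fn main() {
        let handles: Vec<_> = (1..=4u64)
            .map(|i| thread::spawn(move || {
                SUM.fetch_add(i, Ordering::Relaxed);
            }))
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(SUM.load(Ordering::Relaxed), 1 + 2 + 3 + 4);
    }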
Step 2: threads
Is that it? What happens to the concurrency we got from async when we add threads?
Structured concurrency (with async)
[Diagram — components: f calls into g1, which calls into g2 (each component contains several core modules); runtime: one Task per export call]
Per-export-call runtime-managed state:
  each export call gets a Task
  each Task records its supertask — the calling task (a snapshot taken when, e.g., g2’s task is created)
When a bug 🪲 strikes deep inside g2 and you ask “callstack?”, the runtime walks the supertask links
✅ cross-language async callstacks
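
A toy model of that bookkeeping (hypothetical names; real runtimes keep far richer per-task state): each export call records its supertask, and walking those links reconstructs the cross-component callstack.

    // Toy model of the per-export-call state described above
    // (hypothetical; not a real runtime's data structures).
    #[derive(Clone, Copy, Debug)]
    struct TaskId(u32);

    struct Task {
        supertask: Option<TaskId>, // the export call that caused this one
    }

    struct Runtime {
        tasks: Vec<Task>,
    }

    impl Runtime {
        // Called on every cross-component export call.
        fn start_call(&mut self, supertask: Option<TaskId>) -> TaskId {
            let id = TaskId(self.tasks.len() as u32);
            self.tasks.push(Task { supertask });
            id
        }

        // Walk supertask links to recover a cross-component callstack.
        fn callstack(&self, mut id: TaskId) -> Vec<TaskId> {
            let mut stack = vec![id];
            while let Some(parent) = self.tasks[id.0 as usize].supertask {
                stack.push(parent);
                id = parent;
            }
            stack
        }
    }

    fn main() {
        let mut rt = Runtime { tasks: Vec::new() };
        let f = rt.start_call(None);        // export call into f
        let g1 = rt.start_call(Some(f));    // f calls g1
        let g2 = rt.start_call(Some(g1));   // g1 calls g2
        println!("{:?}", rt.callstack(g2)); // [g2, g1, f]
    }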
Structured concurrency (with async + threads)
[Diagram — components: f calls thread.spawn; the spawned thread calls into g1, which calls into g2]
Now when the bug 🪲 strikes, what should “callstack?” show past the thread.spawn — which task is the supertask?
💡 Define threads to be contained by the async task that created them: the spawning task is the thread’s supertask
✅ cross-language async callstacks
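
Under that definition, the toy model above barely changes (again, a hypothetical sketch): thread.spawn just creates a task whose supertask is the spawning async task, so the same walk recovers the callstack across the spawn.

    // Toy continuation: a spawned thread is modeled as a task whose
    // supertask is the async task that spawned it (hypothetical model).
    #[derive(Clone, Copy, Debug)]
    enum Origin { ExportCall, ThreadSpawn }

    #[derive(Clone, Copy, Debug)]
    struct Task {
        id: u32,
        origin: Origin,
        supertask: Option<u32>,
    }

    fn main() {
        let f = Task { id: 0, origin: Origin::ExportCall, supertask: None };
        // f calls thread.spawn: the thread's task is contained by f's task
        let t = Task { id: 1, origin: Origin::ThreadSpawn, supertask: Some(f.id) };
        // g1, called from the thread, still chains back to f via t
        let g1 = Task { id: 2, origin: Origin::ExportCall, supertask: Some(t.id) };
        println!("{:?} contained in {:?} contained in {:?}", g1, t, f);
    }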
Cooperative concurrency (with async)
Advantages:
  only switch at explicit yield points — below, execution can only switch 🔀 at the await, never inside bar():

  async function foo() {
    …
    bar();
    …
    await …   🔀
    …
  }

Disadvantages:
Cooperative concurrency (with async + threads)
But what if shared functions are non-cooperative?
💡 Allow threads to be either non-cooperative or cooperative!
  make shared optional in the spawn built-in:
    thread.spawn_indirect <shared>? ⇒ (<shared>? (func (param i32 i32) (result i32)))
  cooperative (non-shared) threads still switch only at explicit yield points
Colorless concurrency (with async)
🤔 waaaaait, what about the declared signature?

In WIT, async is just a hint:
  foo: func() -> string;
  foo: async func() -> string;   (hint!)

Either way, component B may implement foo with the sync or the async export ABI:
  export function …
  export async function …
…and component A may call foo through the sync or the async import ABI:
  let result = foo();
  let result = await foo();
ok!
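
In plain-Rust terms (a sketch of the idea only, not the component ABI machinery): the same operation can be produced and consumed in either color, with the bridging done at the boundary — here by a trivial executor from the futures crate, in the Component Model by the runtime.

    // Sketch of "colorless" calls; the Component Model runtime plays
    // the executor's bridging role at the component boundary.
    fn foo_sync() -> String {
        "hello".to_string()
    }

    async fn foo_async() -> String {
        "hello".to_string()
    }

    fn main() {
        // a sync caller consuming an async implementation
        let a = futures::executor::block_on(foo_async());
        // an async caller consuming a sync implementation
        let b = futures::executor::block_on(async { foo_sync() });
        assert_eq!(a, b);
    }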
Colorless concurrency (with async + threads)
In shared-everything-threads, shared is a “color” for core wasm (for good reason!):

(module
  (memory $m1 1)
  (memory $m2 1 shared)
  (func $f1
    i32.load $m1 ✅
    i32.load $m2 ✅
  )
  (func $f2 (shared …)
    i32.load $m1 ✘
    i32.load $m2 ✅
    call $f1 ✘
  )
)

So is shared a “color” in WIT, too — foo: shared func() -> string; ?
No (not even a hint). 💡 Extend what we do for async — not a new idea!
↳ 4 ABI options per function: shared async · shared sync · nonshared async · nonshared sync
Big picture: spectrum of concurrency
  fully synchronous (e.g. C making blocking calls to read()/write())
  async (e.g. async JS, Python, C#, Rust)
  threaded (e.g. C using pthreads)
  async and threaded (e.g. async Rust using Tokio, async JS running in Workers)
Degree of concurrency kept an implementation detail of each component
Pay as you go (trading off simplicity ⇒ performance)
Conclusion