1 of 23

Threading the needle with concurrency and parallelism in the Component Model

Luke Wagner

Fastly

2 of 23

WHY wasm?

[diagram] Reasons to run wasm instead of native code:

  • 💰 cost savings
  • portability (x86, ARM, RISC-V)
  • plugins
  • ⏱️ fast cold start: engine + guest code, vs. a ❄️ microVM
  • small guest code size
  • 🏎️ filtering: expose public APIs, keep private APIs private

3 of 23

COMPONENT MODEL: WHY?

A component is a new binary format being standardized at the W3C

…that contains and links together a set of wasm modules

…and defines how they interact with the outside world.

6 reasons…

4 of 23

COMPONENT MODEL: WHY?

Glue Code for Free

JS dev wanting to use wasm: 🧙 a ⚙️ component gets its JS glue code generated automatically, so typed calls flow between the component and Web APIs.

5 of 23

COMPONENT MODEL: WHY?

SDKs for Free

Platform dev embedding wasm plugins: today each API (API 1, API 2, API 3, API 4) needs a hand-built SDK (⚒️) per guest language. With an IDL (WIT), the APIs are described once, the ⚙️ per-language SDKs are generated for free, and the remaining ⚒️ effort goes into polish.

6 of 23

COMPONENT MODEL: WHY?

Virtual Platform Layering

Platform dev embedding wasm plugins (#2): the platform impl sits behind a platform interface declared in WIT. The wasm engine enforces that interface, so plugins can never reach the platform's private APIs (💥 !? blocked; 🚀 allowed).

7 of 23

COMPONENT MODEL: WHY?

Modularity without Microservices

Software architect: choose your own adventure:

  • “The modular monolith”: module A, module B, module C call each other directly (🏎️), but the official interfaces are easily bypassed via unofficial global state.
  • “The microservice architecture”: microservice A, B, C are strongly isolated (💪), but every call goes over HTTP.
  • “The strongly modular monolith”: component A, B, C keep strong boundaries (💪) declared in WIT with fast in-process calls (🏎️), and any component can still be split out as a microservice over HTTP when needed.

8 of 23

COMPONENT MODEL: WHY?

Browser Agnostic Binaries + Secure Polyglot Packages

Devs producing wasm: 🤔 a bare ⚙️ module needs a specific wasm engine and custom bindings; 😎 a component runs on any engine and can be published as a secure polyglot package (e.g., to an OCI registry).

9 of 23

COMPONENT MODEL: WHY?

  • Glue Code for Free
  • SDKs for Free
  • Virtual Platform Layering
  • Modularity without Microservices
  • Browser Agnostic Binaries
  • Secure Polyglot Packages

10 of 23

A bit of WIT

value types

  • numbers, lists, records, variants, etc.
  • passed by value (copy/immutable)

resource types

  • abstract types with explicit lifetimes
  • passed by handle (owned/borrowed)

🔜 concurrency types

  • futures and streams
  • async passing of values+handles

interface http {
  resource request {
    headers: func() -> list<tuple<string,string>>;
    body: func() -> tuple<stream<u8>, future<option<trailers>>>;
  }
}

🔜 concurrent execution

  • step 1: async
  • step 2: threads

11 of 23

Step 1: async

High-level concurrency properties

  • cooperative
    • only switch execution at explicit yield points
    • don’t force multi-threading if you don’t need it
  • colorless
    • sync functions can call async functions and vice versa
    • avoids problems described in “What Color Is Your Function?”
  • structured
    • there’s a well-defined (cross-component) call stack
    • useful for debugging/profiling/tracing purposes

Coming soon (H1 2025)

  • as part of broader WASI 0.3.0 release
  • backwards-compatible with WASI 0.2

async fn handle(in: Request) -> Response

async function handle(in: Request) -> Response

func handle(in Request) Response

⚙️

interface handler {
  handle: async func(in: request) -> response;
}

WIT
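The “colorless” property above (sync functions can call async functions and vice versa) can be sketched in Go, which is naturally colorless: the same function serves both a blocking caller and a concurrent caller without being rewritten. All names here are illustrative, not part of WASI.

```go
package main

import (
	"fmt"
	"time"
)

// handle is written once, with no sync/async "color".
func handle(req string) string {
	time.Sleep(10 * time.Millisecond) // stand-in for I/O
	return "response to " + req
}

// A synchronous caller just calls it and blocks.
func syncCaller() string {
	return handle("sync")
}

// A concurrent caller runs the same function without rewriting
// it, receiving the result on a channel instead of blocking.
func asyncCaller() <-chan string {
	out := make(chan string, 1)
	go func() { out <- handle("async") }()
	return out
}

func main() {
	fmt.Println(syncCaller())     // response to sync
	fmt.Println(<-asyncCaller()) // response to async
}
```

The Component Model's canonical ABI achieves the same effect by letting caller and callee each pick a sync or async lowering of the one WIT signature.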

12 of 23

Step 2: threads

Why?

Parallelism(*)

  • pipelines
  • map-reduce
  • query languages

(*) 🏎️ gains can be eaten by contention and bottlenecks

Concurrency

  • let runtimes implement workers and goroutines with real threads, so they Just Work
  • without Binaryen Asyncify(1)
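The parallelism bullets above (pipelines, map-reduce, query languages) are exactly the workloads threads unlock. As a minimal sketch (Go stands in for any guest language), a map-reduce over a bounded pool of worker threads:

```go
package main

import (
	"fmt"
	"sync"
)

// mapReduce squares each input in parallel (map step), then
// sums the results (reduce step), using a fixed worker pool.
func mapReduce(inputs []int, workers int) int {
	jobs := make(chan int)
	partial := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				partial <- n * n // map step
			}
		}()
	}
	go func() {
		for _, n := range inputs {
			jobs <- n
		}
		close(jobs)
		wg.Wait()
		close(partial)
	}()

	sum := 0
	for p := range partial { // reduce step
		sum += p
	}
	return sum
}

func main() {
	fmt.Println(mapReduce([]int{1, 2, 3, 4}, 2)) // 1+4+9+16 = 30
}
```

Note the asterisked caveat applies here too: with too many workers contending on the channels, coordination overhead can eat the parallel speedup.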

13 of 23

Step 2: threads

Disclaimer: the following plans are still in flux and may change or be fatally flawed

14 of 23

Step 2: threads

Standard proposals:

Core WebAssembly

  • “threads” (shipping)
    • (memory … shared) + memory model
    • atomic instructions (load*, store*, rmw*, wait, notify, fence)
  • shared-everything-threads
    • allow shared on everything, incl. func and table definitions
    • thread creation instructions

Component Model:

  • thread creation built-ins

app with “threads” only: (memory … shared), plain (func …), (table funcref) ⇒ dlopen 😡, O(M×N)

app with shared-everything-threads: (memory … shared), (func (shared …) …), (table (ref (shared func)) shared) ⇒ dlopen 🙂, O(M)

A core module using the built-ins imports thread creation from the wasm runtime (polyfillable in JS with workers) and spawns through a funcref table via call_indirect:

(import “...” “thread.spawn_indirect”
  (shared (func (param $funcptr i32) (param $v i32) (result i32))))
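`thread.spawn_indirect` starts a thread from a function-table index plus a single argument, rather than from a function value. A loose Go analogue, with the funcref table modeled as a slice (illustrative only; the real built-in operates on core wasm tables and its result signaling differs):

```go
package main

import "fmt"

// table plays the role of a wasm funcref table: work is
// identified by index ("funcptr"), not by a function value.
var table = []func(arg int32) int32{
	func(arg int32) int32 { return arg + 1 },
	func(arg int32) int32 { return arg * 2 },
}

// spawnIndirect mimics thread.spawn_indirect(funcptr, arg):
// look up the entry by table index and run it on a new thread.
func spawnIndirect(funcptr, arg int32) <-chan int32 {
	result := make(chan int32, 1)
	go func() { result <- table[funcptr](arg) }()
	return result
}

func main() {
	fmt.Println(<-spawnIndirect(0, 41)) // table[0]: 41+1 = 42
	fmt.Println(<-spawnIndirect(1, 21)) // table[1]: 21*2 = 42
}
```

Passing an index instead of a funcref is what lets the same import work before shared function references exist everywhere.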

15 of 23

Step 2: threads

Standard proposals:

Core WebAssembly

  • “threads” (shipping)
  • shared-everything-threads
  • thread creation instructions

Component Model:

  • thread creation built-ins

Is that it? What happens to the:

  • structured
  • cooperative
  • colorless

concurrency we got from async when we add threads?

16 of 23

Structured concurrency (with async)

components: component f (a set of linked core modules: (module (func …) …)) calls into component g, whose exports run as tasks g1 and g2.

runtime: per-export-call runtime-managed state, a Task:

  • borrows?
  • returned?
  • supertask?

Because every Task records its supertask, when a 🪲 hits inside g2 the runtime can answer “callstack?” by walking supertask links: a snapshot of the calls that were live when g2 was created.

✅ cross-language async callstacks
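The per-export-call bookkeeping above can be sketched as a runtime-side record whose supertask link lets the host reconstruct a cross-component async call stack even when no single native stack spans the components. Field names mirror the slide's state (borrows elided); this is a sketch, not the actual runtime representation.

```go
package main

import (
	"fmt"
	"strings"
)

// Task is the runtime-managed state created per export call.
type Task struct {
	name      string
	supertask *Task // nil for the root task
	returned  bool
}

// call models one component calling an export of another:
// the new task records its caller as its supertask.
func call(caller *Task, export string) *Task {
	return &Task{name: export, supertask: caller}
}

// stack walks supertask links to render an async call stack,
// snapshotting the chain live when the innermost task started.
func stack(t *Task) string {
	var frames []string
	for ; t != nil; t = t.supertask {
		frames = append(frames, t.name)
	}
	return strings.Join(frames, " <- ")
}

func main() {
	f := call(nil, "f")
	g1 := call(f, "g1")
	g2 := call(g1, "g2")
	fmt.Println(stack(g2)) // g2 <- g1 <- f
}
```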

17 of 23

Structured concurrency (with async + threads)

components: as before, f calls into g (tasks g1, g2), but now a task also does thread.spawn.

runtime: a spawned thread has no supertask of its own, so when a 🪲 hits on that thread, “callstack?” dead-ends.

💡 Define threads to be contained by the async task that created them.

With that rule, the spawning task is the thread’s supertask, and we keep:

✅ cross-language async callstacks

18 of 23

Cooperative concurrency (with async)

Only switch at explicit yield points

async function foo() {
  await bar();  🔀
}

Advantages:

  • Easier to understand the possible interleavings
  • Cooperative tasks can be small and switch quickly
  • Avoid overhead of locks/atomics

Disadvantages:

  • Poorly-behaved code can block other tasks
  • Tasks can’t execute in parallel

19 of 23

Cooperative concurrency (with async + threads)

shared functions are non-cooperative:

  • Interleave arbitrarily ⇒ Parallelism!
  • Depend on shared-everything-threads
    • Likely 1+ years out

But what if:

  • I want to run threads w/o Asyncify ASAP
    • without depending on shared-everything-threads
  • …or I don’t care about the parallelism
    • and I’d rather get the benefits of cooperativity
  • I might want cooperative threads

💡 Allow threads to be non-cooperative or cooperative!

Cooperative (non-shared) threads switch only at explicit yield points:

  • when waiting on I/O or an explicit ‘yield’ call
  • worst case: inject ‘yield’ at compile time

thread.spawn_indirect <shared>?
  ⇒ (<shared>? (func (param i32 i32) (result i32)))

20 of 23

Colorless concurrency (with async)

🤔 waaaaait, what about…

foo: func() -> string;          WIT

foo: async func() -> string;    WIT (async is just a hint!)

component B calls foo; component A implements it. Each side independently picks its ABI:

  • caller: sync import ABI (let result = foo();) or async import ABI (let result = await foo();)
  • implementer: sync export ABI (export function) or async export ABI (export async function)

Any combination is ok! Even:

  1. async → sync → block!
    • suspend sync, resume async!
  2. async → suspended sync!
    • async sees backpressure!
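The backpressure case (2 above) can be sketched as a per-instance lock around a sync export: one call may be inside at a time, and an async caller that finds the instance busy observes backpressure immediately instead of piling up a second blocked call. Names and the lock-as-channel mechanism are illustrative, not the canonical ABI.

```go
package main

import "fmt"

// busy plays the role of the lock a sync export holds on its
// component instance: at most one call may be inside at a time.
var busy = make(chan struct{}, 1)

// callSyncExport models an async caller invoking a sync export.
// If the instance is free the call enters; if another call is
// already inside, the caller sees backpressure right away.
func callSyncExport() (result string, ok bool) {
	select {
	case busy <- struct{}{}:
		return "result", true
	default:
		return "", false // async caller sees backpressure
	}
}

// finishSyncExport models the in-flight sync call returning,
// which releases the instance for the next caller.
func finishSyncExport() { <-busy }

func main() {
	r, ok := callSyncExport()
	fmt.Println(r, ok) // result true: first call enters
	_, ok2 := callSyncExport()
	fmt.Println(ok2) // false: instance busy ⇒ backpressure
	finishSyncExport()
	_, ok3 := callSyncExport()
	fmt.Println(ok3) // true: instance free again
}
```

Surfacing the busy state to the caller, rather than silently queueing, is what lets an async caller throttle itself.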

21 of 23

Colorless concurrency (with async + threads)

Is shared a “color” in WIT? No (not even a hint):

foo: shared func() -> string;

…even though shared is a “color” for core wasm (for good reason!). In shared-everything-threads:

(module
  (memory $m1 1)
  (memory $m2 1 shared)
  (func $f1
    i32.load $m1 ✅
    i32.load $m2 ✅
  )
  (func $f2 (shared …)
    i32.load $m1 ✘
    i32.load $m2 ✅
    call $f1 ✘
  )
)

💡 Extend what we do for async!

↳ 4 ABI options: shared/nonshared × async/sync

  • shared → unshared
    • lock unshared’s component instance
  • shared → locked unshared
    • shared sees backpressure!

Not a new idea:

  • MS COM Multithreaded → Single-Threaded Apartment calls

22 of 23

Big picture: spectrum of concurrency

  • fully synchronous (e.g., C making blocking calls to read()/write())
  • async (e.g., async JS, Python, C#, Rust)
  • threaded (e.g., C using pthreads)
  • async and threaded (e.g., async Rust using Tokio, async JS running in Workers)

Degree of concurrency kept an implementation detail of each component

Pay as you go (trading off simplicity ⇒ performance)

23 of 23

Conclusion

  • The addition of threads seeks to preserve:
    • the structured async call stack;
    • the option of cooperative concurrency;
    • the colorlessness of concurrency.
  • Concurrency is coming to the Component Model
    • Soon (in 0.3.0): async, futures and streams
    • Soon after (in 0.3.x): cooperative threads ⇒ concurrency (without Asyncify)
    • Once browsers ship shared functions: non-cooperative threads ⇒ parallelism
  • Get involved in the Component Model:
    • Intro: component-model.bytecodealliance.org
    • Spec: github.com/webassembly/component-model
    • Impl: github.com/orgs/bytecodealliance/projects/16