1 of 88

High Performance Browser Networking

what every web developer should know about networking and browser performance

Ilya Grigorik - @igrigorik

igrigorik@google.com

2 of 88

Our agenda for the day...

  • Connection speed
    • Latency and bandwidth
    • Bandwidth doesn’t matter (much)
  • Transports: TCP, TLS
    • Connection setup + congestion control
    • Protocol optimizations
  • Wireless networking
    • Implications of shared channel
    • Impact of battery life
    • Performance best practices
  • Optimizing application delivery
    • The good and the bad of HTTP 1.x
    • Brief introduction to HTTP 2.0

3 of 88

Web performance this, …, that ...

Networks will just get faster and all of our problems will go away. Right?

… sadly, nope.

4 of 88

  • ISPs and carriers love to advertise bandwidth as “speed”.
    • The ISP sells the “last mile” connection; it can’t guarantee end-to-end throughput.
  • When was the last time you saw an ISP advertise latency? Right, never...

Speed is a feature (Chapter 1)

5 of 88

Speed of light is not fast enough!

  • 42 ms RTT for a single packet between NYC and San Francisco
  • 56 ms RTT for a single packet between NYC and London

Speed is a feature (Chapter 1)

Route                      Distance     Time, light    Time, light    Round-trip time
                                        in vacuum      in fiber       (RTT) in fiber

New York to San Francisco  4,148 km     14 ms          21 ms          42 ms
New York to London         5,585 km     19 ms          28 ms          56 ms
New York to Sydney         15,993 km    53 ms          80 ms          160 ms
Equatorial circumference   40,075 km    133.7 ms       200 ms         200 ms

  • Oh, and these are great-circle distances. The actual RTTs are even higher!
  • And then there is the “last-mile latency” problem...

6 of 88

Last mile is slow!

Fiber-to-the-home services provided 18 ms round-trip latency on average, while cable-based services averaged 26 ms, and DSL-based services averaged 43 ms. This compares to 2011 figures of 17 ms for fiber, 28 ms for cable and 44 ms for DSL.

Measuring Broadband America - July 2012 - FCC

7 of 88

Last mile == Transatlantic hop? Wat!? Yeah...

Now you know why carriers don’t advertise these numbers…

  • If you look hard enough, you can find them in their FAQs.
  • Tip… $> traceroute destination.com

Speed is a feature (Chapter 1)

(26 + 28) * 2 = 108 ms RTT!

8 of 88

Worldwide: ~100 ms

US: ~50-60 ms

Average RTT to Google in 2012 was…

and sadly it’s not improving much

9 of 88

4G will save us all! Says marketing guy...

"Users of the Sprint 4G network can expect to experience average speeds of 3 Mbps to 6 Mbps download and up to 1.5 Mbps upload with an average latency of 150 ms. On the Sprint 3G network, users can expect to experience average speeds of 600 Kbps - 1.4 Mbps download and 350 Kbps - 500 Kbps upload with an average latency of 400 ms."

Mobile Networks (Chapter 7)

10 of 88

(everytime I see a “4G speed” advertisement)

11 of 88

The “wireless last mile” is painful...

AT&T core network latency

LTE     40-50 ms
HSPA+   50-200 ms
HSPA    150-400 ms
EDGE    600-750 ms
GPRS    600-750 ms

Mobile Networks (Chapter 7)

12 of 88

OK. Latency is a problem.

but bandwidth is important too. How are we doing there?

13 of 88

Brazil: ~2.5 Mbps, Australia: ~5 Mbps, US, UK: ~8 Mbps

State of the Internet - Akamai - 2007-2013

14 of 88

Good news, mobile BW is improving rapidly!

That’s what the ads are all about. Yes, last-mile latency is also much better (~50 ms), but that’s still worse than most wired connections!

Generation   Data rate

2G           100 – 400 Kbit/s
3G           0.5 – 5 Mbit/s
4G           1 – 50 Mbit/s

Mobile Networks (Chapter 7)

15 of 88

Bandwidth + Latency ≈ Performance

but what’s the actual relationship between these factors?

16 of 88

Components of an HTTP request

  • DNS lookup to resolve the hostname to IP address
  • New TCP connection requires a handshake roundtrip to the server
  • TLS handshake (not shown) with up to two extra server roundtrips
  • HTTP request requires a minimum of one roundtrip to the server
    • Plus server processing time

17 of 88

Our pages consist of dozens of assets

  • 52 requests
  • 4+ seconds

Huh?

… (snip 30 requests) ...

18 of 88

Latency vs. Bandwidth impact on Page Load Time

“To speed up the Internet at large, we should look for more ways to bring down RTT. What if we could reduce cross-atlantic RTTs from 150 ms to 100 ms? This would have a larger effect on the speed of the internet than increasing a user’s bandwidth from 3.9 Mbps to 10 Mbps or even 1 Gbps.” - Mike Belshe

Single digit % perf improvement after 5 Mbps

Linear improvement in page load time!

19 of 88

  • Video streaming is bandwidth limited.
  • Web browsing is latency limited.

Why? To answer that, let’s dig a bit deeper...

20 of 88

Optimizing transport performance

aka, optimizing TCP and TLS...

21 of 88

SYN, SYN-ACK, ACK

  • Every TCP connection begins with a handshake (SYN > SYN-ACK > ACK)
  • No data can be sent until the handshake is complete (one RTT)
    • e.g. 56 ms in example above

22 of 88

TCP Slow-Start (congestion control)

  • Server is only allowed to send 4 segments (cwnd = 4)
  • Once segments are acked, we double the window to 8 packets. Rinse, repeat...

23 of 88

Let’s do science to it…

  • RTT: roundtrip time between client and server
  • cwnd: initial congestion window size
  • N: number of segments we want to transfer
    • segment: ~1460 bytes.

How long would it take to transfer 64 KB?

(on a new TCP connection)

Answer: 224 milliseconds + handshake RTT = 280 ms

Ouch.
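The arithmetic above can be sketched in a few lines. This is a toy model of the slide's math only, assuming one RTT per slow-start round, window doubling once each round's segments are acked, ~1460-byte segments, and one RTT for the TCP handshake; real TCP stacks are considerably more nuanced.

```javascript
// Toy model: each slow-start round sends one congestion window and costs
// one RTT; the window doubles each round. Assumes ~1460-byte segments.
function slowStartRounds(segments, cwnd) {
  let rounds = 0, sent = 0, win = cwnd;
  while (sent < segments) {
    sent += win;
    win *= 2;
    rounds += 1;
  }
  return rounds;
}

function transferTimeMs({ bytes, rtt, cwnd, segmentSize = 1460 }) {
  const segments = Math.ceil(bytes / segmentSize);
  return rtt + slowStartRounds(segments, cwnd) * rtt; // handshake + data rounds
}

// 64 KB, RTT = 56 ms, cwnd = 4: windows of 4, 8, 16, 32 segments
// -> 4 rounds = 224 ms, plus the 56 ms handshake:
console.log(transferTimeMs({ bytes: 64 * 1024, rtt: 56, cwnd: 4 })); // 280
```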

24 of 88

Let’s fetch a 20 KB HTML file...

  • RTT: 56 ms
  • Bandwidth: 5 Mbps
  • cwnd: 4
  • 56 ms for TCP handshake
  • 40 ms for request processing
  • 4x segments
  • 8x segments
  • 3x segments

264 milliseconds!

25 of 88

Let’s fetch the same file over an existing TCP connection.

  • RTT: 56 ms
  • Bandwidth: 5 Mbps
  • cwnd: 11
  • no TCP handshake...
  • 40 ms for request processing
  • 11 segments

96 milliseconds - 275% improvement!

Re-use existing TCP connections!

Also, notice that we’re (still) nowhere close to our 5 Mbps “bandwidth ceiling”.

26 of 88

Most of the HTTP data flows consist of small, bursty data transfers, whereas TCP is optimized for long-lived connections and bulk data transfers. Network roundtrip time is the limiting factor in TCP throughput and performance in most cases. Consequently, latency is the performance bottleneck for HTTP and most web applications delivered over it.

27 of 88

“Connection view” tells the story...

  • 30 connections
    • DNS lookups
    • TCP handshakes
    • …
  • Blue: download time

We’re not BW limited, we’re literally idling, waiting on the network to deliver resources.

Mystery solved...

28 of 88

  • Upgrade to Linux 3.2+
    • Increases cwnd to 10!
    • Proportional rate reduction
  • Disable “slow-start after idle”
  • Ensure TCP window scaling is enabled

  • Re-use established TCP connections
    • HTTP keepalive (default on HTTP 1.1)
  • Position servers closer to the user
    • Lower RTTs! Use a CDN.
  • Compress transferred data

Optimizing for TCP (Chapter 2)

29 of 88

Now let’s make it secure...

TLS is an unoptimized frontier for most sites!

30 of 88

SYN, SYN-ACK, ACK, … ClientHello, ServerHello, ...

  • First, TCP handshake RTT!
  • Then…
    • ClientHello
    • ServerHello
    • ChangeCipher (client)
    • ChangeCipher (server)

  • 3 RTTs to establish tunnel!

You might want to consider re-using TLS connections! Just saying...

31 of 88

Session resumption removes one RTT!

  • Session Identifier (old)
    • Client sends session ID.
  • Session Ticket (new)
    • “Client cookie”

Verify that your server (at least) supports session ticket resumption.

32 of 88

“Early termination” is a big win for TLS

  • Nearby edge server terminates TCP
    • TLS negotiation is with the edge
    • Applicable for static and dynamic content

  • Edge then routes to origin...
    • Via a “warm” connection

Early termination helps non-encrypted traffic also, but it is especially useful for accelerating TLS connections!

* Talk to your CDN provider, or roll your own... Spin up some “cloud servers” and you’re in business.

33 of 88

  • Start by optimizing TCP!
    • IW10, proportional rate reduction, ...
  • Upgrade to latest OpenSSL libraries
    • Big performance difference (try, $> openssl speed rsa)
  • Enable TLS resumption
    • Session Tickets, Session Identifiers
  • Optimize TLS record size
    • Reduce to single network segment
  • And a lot more...
    • Disable TLS compression (attack vector)
    • Configure SNI support (please!)
    • Configure OCSP stapling
    • Use HSTS to avoid costly HTTPS redirects

34 of 88

Measuring network performance

Gathering data from real users, on real networks, with real devices...

35 of 88

Navigation Timing (W3C)

36 of 88

Navigation Timing (W3C)

37 of 88

Available in...

  • IE 9+
  • Firefox 7+
  • Chrome 6+
  • Android 4.0+
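As a minimal sketch, the network phases can be derived from the (legacy) `performance.timing` attributes; the helper below takes a timing-like object (in a browser you would pass `window.performance.timing`), and the timestamps in the example are hypothetical.

```javascript
// Sketch: derive connection phases from Navigation Timing attributes.
function connectionPhases(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart,
    tcp: t.connectEnd - t.connectStart,        // includes TLS, if any
    request: t.responseStart - t.requestStart, // includes server think time
    response: t.responseEnd - t.responseStart,
    total: t.loadEventEnd - t.navigationStart
  };
}

// Hypothetical timestamps (ms since navigationStart) for illustration:
const phases = connectionPhases({
  navigationStart: 0,
  domainLookupStart: 5, domainLookupEnd: 25,
  connectStart: 25, connectEnd: 81,
  requestStart: 81, responseStart: 177, responseEnd: 210,
  loadEventEnd: 480
});
console.log(phases); // { dns: 20, tcp: 56, request: 96, response: 33, total: 480 }
```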

38 of 88

Real User Measurement (RUM) with Google Analytics

<script>
  _gaq.push(['_setAccount', 'UA-XXXX-X']);
  _gaq.push(['_setSiteSpeedSampleRate', 100]); // #protip
  _gaq.push(['_trackPageview']);
</script>

Google Analytics > Content > Site Speed

  • Automagically collects this data for you - defaults to 1% sampling rate
  • Maximum sample is 10k visits/day
  • You can set custom sampling rate

You have all the power of Google Analytics! Segments, conversion metrics, ...

39 of 88

Full power of Google Analytics to segment, filter, compare, ...

40 of 88

Head into the Technical reports to see the histograms and distributions!

Averages are misleading...

41 of 88

Case study: igvita.com server response times

Content > Site Speed > Page Timings > Performance

Bimodal response time distribution?

Theory: user cache vs. database cache vs. full recompute

42 of 88

2G, 3G, 4G...

have fundamentally different architecture at the radio layer

43 of 88

Design criteria #1: "Stable" performance + scalability

  • Control over network performance and resource allocation
  • Ability to manage 10s to 100s of active devices within a single cell
  • Coverage of much larger area

44 of 88

Design criteria #2: Battery life optimization

  • Radio is the second most expensive component (after screen)
  • Limited amount of available energy (as you are well aware)

45 of 88

Design criteria #1: "Stable" performance + scalability

Design criteria #2: Maximize battery life

Radio Resource Controller (RRC)

46 of 88

Radio Resource Controller

  • Phone: Hi, I want to transmit data, please?
  • RRC: OK.
      • Transmit in [x-y] timeslots
      • Transmit with Z power
      • Transmit with Q modulation

... (some time later) ...

  • RRC: Go into low power state.

RRC

All communication and power management is centralized and managed by the RRC.

Mobile Networks (Chapter 7)

47 of 88

Control and User-plane latencies

[Figure: (1) device asks the RRC to send data, (2) 1-X RTTs of control-plane negotiation, (3) application data flows on the user-plane]

                            LTE        HSPA+      3-3.5G

Idle to connected latency   < 100 ms   < 100 ms   < 2500 ms
User-plane latency          < 5 ms     < 10 ms    < 50 ms

  • There is a one-time cost for control-plane negotiation
  • User-plane latency is the one-way latency between packet availability in the device and packet at the base station

Same process happens for incoming data, just reverse steps 1 and 2

48 of 88

Anticipate RRC latency...

Plan for "first packet" delay

  • 100's to 1000's of milliseconds of delay for first packet
  • Adjust UX accordingly!

Much of the "variability" is explained by RRC delays

  • you can't predict it, but assume you will incur it

                 HSPA (200 ms RTT)   HSPA+ / LTE (80 ms RTT)

Control plane    200-2500 ms         50-100 ms
DNS lookup       200 ms              80 ms
...              ...                 ...

49 of 88

Watch those energy tails!

Wasted energy

3G state machine

    • DCH = Active
    • FACH = Low power
    • IDLE = ...

Every data transfer, both big and small, will:

  • cycle radio into high power
  • timer-driven transition back to idle state

Mobile Networks (Chapter 7)

50 of 88

Minimize periodic data transfers

  • avoid polling, use adaptive strategy
  • coalesce requests, defer requests
  • leverage (smart) server push

Prefetch data

  • turn off the radio, keep it idle

It's all about the battery...

Pandora beacons: 0.2% total bytes == 46% battery
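One way to “coalesce requests” is to buffer beacons and flush them in a single batch, so the radio powers up once per flush instead of once per event. A sketch, where `createBeaconQueue` and `sendBatch` are hypothetical names (the transport callback might be one XHR carrying many events):

```javascript
// Sketch: coalesce analytics beacons so the radio wakes once per batch,
// not once per event.
function createBeaconQueue(sendBatch, flushIntervalMs) {
  const queue = [];
  let timer = null;
  return {
    push(event) {
      queue.push(event);
      if (timer === null) {
        // First event of the batch arms a single flush timer.
        timer = setTimeout(() => {
          timer = null;
          sendBatch(queue.splice(0)); // drain and send everything at once
        }, flushIntervalMs);
      }
    },
    pending() { return queue.length; }
  };
}
```

Deferring by a few seconds trades a little freshness for one radio power cycle per batch instead of one per event.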

51 of 88

Hands-on example...

  • 5 Wh battery capacity
  • 5 Wh * 3600 J/Wh = 18000 J battery capacity
  • 10 Joules of consumed energy per "cycle"
    • idle → high-power → low-power → idle
  • 60 minutes * 10 Joules = 600 Joules of consumed energy per hour
  • ~3.3% of battery capacity per hour! (600 J / 18000 J)
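The same arithmetic, spelled out (the 10 J per radio power cycle is the slide's illustrative figure, not a measured constant):

```javascript
// The battery math from the slide, step by step.
const batteryJoules = 5 * 3600;       // 5 Wh * 3600 J/Wh = 18000 J
const joulesPerCycle = 10;            // idle -> high power -> tail -> idle
const cyclesPerHour = 60;             // e.g. a beacon every minute
const joulesPerHour = cyclesPerHour * joulesPerCycle;
const fractionPerHour = joulesPerHour / batteryJoules;
console.log(joulesPerHour, (fractionPerHour * 100).toFixed(1) + '%'); // 600 3.3%
```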

52 of 88

Measuring energy & radio...

Ping, ping, ping, ... where'd my battery go?

Watch out for...

  • "real-time" analytics
  • "real-time" comments
  • "real-time" <widget>...
  • Record live trace on the phone, or import a pcap!
  • Battery + radio models for 3G and 4G
  • Performance linter... caching, compression, etc.
  • ...

Mobile Networks (Chapter 7)

53 of 88

* The radio is never fully "off"... It wakes up periodically to listen to broadcasts and notifications!

Physical connectivity (Radio)

Transport connectivity (TCP / UDP)

The radio can be off* while the transport is idle.

54 of 88

window.setInterval(keepAlive, 1000); // wakes the radio every second!

Carrier timeouts: 5-30 minutes.

  • eliminate unnecessary keep-alive beacons...
  • are you sure it's not your own servers forcing timeouts? :-)

55 of 88

Not so good news, everybody...

HSPA+ will be the dominant network type of the next decade!

  • Latest HSPA+ releases are comparable to LTE in performance

  • 3G networks will be with us for at least another decade

  • LTE adoption in US and Canada is way ahead of the world-wide trends

Mobile Networks (Chapter 7)

56 of 88

  • It's a multi-generation future: 2G, 3G, 4G
    • users migrate between Gs all the time... plan for it!
  • Bandwidth and latency is highly variable
    • burst your data, and return to idle...

  • Connectivity is intermittent, errors will happen!
    • have an offline mode - i.e. use a local cache
    • use a smart backoff algorithm, please!

Design for variable network performance & availability

var backoff = backoff.strategy({
  randomisationFactor: 0,
  initialDelay: 60,
  maxDelay: 600
});

backoff.failAfter(10);
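The snippet above assumes a Node backoff-style library. The core idea, a capped exponential delay, fits in a few lines; this sketch mirrors the slide's configuration values (initialDelay 60, maxDelay 600), with `backoffDelay` as a hypothetical helper name:

```javascript
// Minimal capped exponential backoff -- a sketch of what such a library does.
function backoffDelay(attempt, { initialDelay = 60, maxDelay = 600 } = {}) {
  return Math.min(initialDelay * 2 ** attempt, maxDelay);
}

// attempt: 0    1    2    3    4+
// delay:   60   120  240  480  600 (capped)
```

In production you would also add randomization (jitter) so a fleet of clients does not retry in lockstep.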

57 of 88

Hypertext Transfer Protocol

The good, the bad, … and the bright future ahead (aka, 2.0)

58 of 88

$> telnet igvita.com 80

Connected to 173.230.151.99

GET /archive

Hypertext delivery with HTTP 0.9! - eom.

(connection closed)

HTTP 0.9 is the ultimate MVP - one line, plain-text “protocol” to test drive the “www idea”.

59 of 88

$> telnet ietf.org 80

Connected to 74.125.xxx.xxx

GET /rfc/rfc1945.txt HTTP/1.0

User-Agent: CERN-LineMode/2.15 libwww/2.17b3

Accept: */*

HTTP/1.0 200 OK

Content-Type: text/plain

Content-Length: 137582

Last-Modified: Wed, 1 May 1996 12:45:26 GMT

Server: Apache 0.84

4 years of rapid iteration later… eom.

(connection closed)

HTTP 1.0 is an informational RFC - documents “common usage” of HTTP found in the wild.

60 of 88

$> telnet google.com 80

Connected to 74.125.xxx.xxx

GET /index.html HTTP/1.1

Host: website.org

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4)... (snip)

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Accept-Encoding: gzip,deflate,sdch

Accept-Language: en-US,en;q=0.8

Cookie: __qca=P0-800083390... (snip)

HTTP/1.1 200 OK

Connection: keep-alive

Transfer-Encoding: chunked

Server: nginx/1.0.11

Content-Type: text/html; charset=utf-8

Date: Wed, 25 Jul 2012 20:23:35 GMT

Expires: Wed, 25 Jul 2012 20:23:35 GMT

Cache-Control: max-age=0, no-cache

100

<!doctype html>

(snip)

HTTP 1.1 ships as RFC standard in 1999 - hyper{t̶e̶x̶t̶}media all the things!

61 of 88

15 years later!

HTTPbis is getting dangerously close to closing all the outstanding 1.1 bugs. :)

In the meantime a few things have happened since...

Geocities ftw! (circa 1997-2000)

Then AJAX happened.

(circa 2002-2005)

WebGL

WebRTC

Web Audio

WebSockets

Mobile web

(today)

62 of 88

For all its awesome, HTTP 1.X has a lot of perf problems...

  • Limited parallelism (~6)
  • Client-side request queuing
  • Browser queueing heuristics
  • High protocol overhead
    • ~800 bytes + cookies

  • Competing TCP flows
  • Spurious retransmissions
  • Extra memory / FD resources
  • Handshake overhead
  • Slow-start overhead

63 of 88

Dozens of competing TCP flows...

  • Most stuck in Slow Start
  • Poor congestion control
  • Poor link utilization
  • Setup latency + overhead
  • Extra sockets
  • Extra memory

In short, there is no upside to this. But that’s what we have to work with...

64 of 88

E.g. Etsy over-sharding...

  • img[0-3].etsystatic.com
  • ~30 connections

Too much of a “good thing” can cause harm. In this case, parallel flows are competing and cause a significant amount of spurious retransmissions (duplicate bytes on the client).

65 of 88

Where there’s a will, there’s a way...

we’re an inventive bunch, so we came up with some “optimizations” (read, “hacks”)

66 of 88

  • Concatenating files (JavaScript, CSS)
    • Reduces number of downloads and latency overhead
    • Less modular code and expensive cache invalidations (e.g. app.js)
    • Slower execution (must wait for entire file to arrive)
  • Spriting images
    • Reduces number of downloads and latency overhead
    • Painful and annoying preprocessing and expensive cache invalidations
    • Have to decode entire sprite bitmap - CPU time and memory
  • Resource inlining
    • Eliminates the request for small resources
    • Resource can’t be cached, inflates parent document
    • ~33% size overhead from base64 encoding
  • Domain sharding
    • TCP Slow Start? Browser limits, Nah... 15+ parallel requests -- Yeehaw!!!
    • Causes congestion and unnecessary latency and retransmissions

67 of 88

… why not fix HTTP instead?

68 of 88

  • Improve end-user perceived latency
  • Address the "head of line blocking"
  • Not require multiple connections
  • Retain the semantics of HTTP/1.1

HTTP 2.0 work kicked off in Jan 2012.

"HTTP 2.0 is a protocol designed for low-latency transport of content over the World Wide Web"

69 of 88

HTTP 2.0 in one slide…

  • One TCP connection

  • Request = Stream
    • Streams are multiplexed
    • Streams are prioritized

  • Binary framing layer
    • Prioritization
    • Flow control
    • Server push

  • Header compression

70 of 88

“... we’re not replacing all of HTTP — the methods, status codes, and most of the headers you use today will be the same. Instead, we’re redefining how it gets used “on the wire” so it’s more efficient, and so that it is more gentle to the Internet itself ....”

- Mark Nottingham (HTTPbis chair)

71 of 88

Binary framing crash course in one slide...

  • Length-prefixed frames
  • All frames have same 8-byte header

  • Type indicates … type of frame:
    • DATA, HEADERS, PRIORITY, PUSH_PROMISE, …
  • Each frame may have custom flags
    • e.g. END_STREAM
  • Each frame carries a 31-bit stream identifier
    • After that, it’s frame specific payload

frame = buffer.read(8)

if frame_i_care_about
  do_something_smart
else
  buffer.skip(frame.length)
end
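The pseudocode above can be made concrete. A hedged sketch of reading a draft-era 8-byte frame header (16-bit length, 8-bit type, 8-bit flags, one reserved bit plus the 31-bit stream identifier); field layouts changed between drafts, so verify against the revision you target:

```javascript
// Sketch: parse a draft-era 8-byte HTTP 2.0 frame header.
function parseFrameHeader(buf) {
  const view = new DataView(buf.buffer, buf.byteOffset, 8);
  return {
    length: view.getUint16(0),             // payload length (big-endian)
    type: view.getUint8(2),                // DATA, HEADERS, PRIORITY, ...
    flags: view.getUint8(3),               // e.g. END_STREAM
    stream: view.getUint32(4) & 0x7fffffff // mask the reserved high bit
  };
}

// Header with length=8, type=0, flags=1, stream=5:
const header = parseFrameHeader(
  Buffer.from([0x00, 0x08, 0x00, 0x01, 0x00, 0x00, 0x00, 0x05]));
console.log(header); // { length: 8, type: 0, flags: 1, stream: 5 }
```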

72 of 88

Basic data flow in HTTP 2.0...

  • Streams are multiplexed by splitting communication into frames
    • e.g. HEADERS, DATA, etc.
  • Frames are interleaved
    • Frames can be prioritized
    • Frames can be flow controlled

E.g. send critical.css with priority 1, please! And kittens.jpg with priority 10.

73 of 88

Connection + stream flow-control!

Stream flow-control enables fine-grained resource control between streams. E.g…

  • T(0): I am willing to receive 64 KB of kittens.jpg.
  • T(0): I am willing to receive 500 KB of critical.js.
  • T(n): Ok, now send the remainder of kittens.jpg.

Client controls how and when the stream and connection window is incremented!

74 of 88

Server push… is replacing inlining.

Inlining is server push. Except, HTTP 2.0 server push is cacheable!

  • One request, multiple responses.
  • Push redirects, cache invalidations, … lots of new and exciting possibilities!

75 of 88

HTTP 2.0 provides header compression!

  • Both sides maintain “header tables”
  • New requests “toggle” or “insert” new values into the table

  • New header set is a “diff” of the previous set of headers
  • Repeat request (polling) with exact same headers incurs no overhead

Byte cost of new stream: 8 bytes! *

* as low as 8 bytes for an identical request.
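The “diff” idea can be illustrated with a toy model. This is not the actual wire encoding, just the intuition that only changed headers need to cross the wire; `headerDiff` is a hypothetical helper:

```javascript
// Toy model of the header-table "diff": only headers that changed since the
// previous request are transmitted.
function headerDiff(prev, next) {
  const diff = {};
  for (const key of Object.keys(next)) {
    if (prev[key] !== next[key]) diff[key] = next[key]; // inserted or changed
  }
  for (const key of Object.keys(prev)) {
    if (!(key in next)) diff[key] = null; // "toggled" off
  }
  return diff;
}

// A repeated polling request with identical headers yields an empty diff:
console.log(headerDiff(
  { ':path': '/poll', cookie: 'id=42' },
  { ':path': '/poll', cookie: 'id=42' })); // {}
```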

76 of 88

tl;dr: framing, mux, prioritization, flow control

same HTTP protocol semantics, complete performance reboot!

77 of 88

Benefits

Opportunities

  • Unshard, un-concat, un-sprite, …
    • Modular code, modular assets
    • Improved cache performance
    • Improved page load times

  • Layer other browser transports
    • XHR / SSE: work out of the gate
    • (TODO) WebSockets over HTTP 2.0: free mux, flow control, prioritization

  • We need smarter servers!
    • Support for prioritization, flow control, and resource push!

  • One RPC stack to rule them all
    • Standardized, high-performance RPC protocol stack (internal, external)

  • min(request overhead) = 8 bytes
  • max(parallelism) = 100~1000 streams
  • max(client queueing latency) = 0 ms

  • Stream multiplexing
  • Stream prioritization
  • Stream flow control
  • Server push

  • Plus all the usual HTTP goodies!
    • Browser caching, authentication, content negotiation, content-encoding, transfer encoding, encryption, …

Available in your browser circa ~2014! *

* or today, via SPDY.

78 of 88

HTTP 2.0 draft 6 implementations are underway in Firefox, Chrome, and IE11, all of which already have SPDY support (the stepping stone). Plus client/server implementations in C, JavaScript (node.js), Ruby, Perl, etc.

HTTP 2.0 is coming to a client / server near you in 2014.

github.com/igrigorik/http-2

79 of 88

  • Start by optimizing TCP!
    • IW10, proportional rate reduction, …
  • Ensure you have keep-alive enabled!
  • Use HSTS to avoid costly HTTPS redirects
  • Optimize your TLS deployments
    • Record and cert sizes, session resumption, ...
  • Compress and eliminate
    • Enable gzip (please!) and fetch what you need
  • Cache
    • Too many sites are missing cache headers to this day
    • Expiry / max-age and revalidation (Etag, Last-Modified)
  • Investigate SPDY / HTTP 2.0
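For the caching bullet, an illustrative response (hypothetical values) that combines compression, a long expiry, and validators for revalidation:

```http
HTTP/1.1 200 OK
Content-Encoding: gzip
Cache-Control: public, max-age=31536000
ETag: "5f3a2b"
Last-Modified: Wed, 25 Jul 2012 20:23:35 GMT
```

Long max-age for fingerprinted static assets; Etag / Last-Modified let the client revalidate with a cheap 304 instead of a full re-download.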

80 of 88

Are we fast yet?

Slides @ bit.ly/hpbn-talk

Twitter igrigorik

Email igrigorik@google.com

P.S. Check out the book for (much) more!

81 of 88

Not fast enough? Ok, let’s talk

82 of 88

WiFi performance

requires its own (unique) set of optimizations...

83 of 88

Wireless LAN, aka Wi-Fi...

  • Wireless extension to existing LAN infrastructure
  • Same protocol stack, new physical medium
  • First commercial success ~1999 with 802.11b (11 Mbps)
  • Target: desktop, laptop

WiFi (Chapter 6)

84 of 88

  • Limited amount of shared spectrum
  • Multiple parties who want to transmit and receive

How do we schedule communication?

WiFi (Chapter 6)

85 of 88

Carrier Sense Multiple Access with Collision Avoidance

  • Is anyone talking?
    • No: begin transmission
    • Yes: wait until they finish
      • Collision: stop, sleep for rand() with backoff, retry

There is not much more to it...

WiFi (Chapter 6)

86 of 88

Wi-Fi channels 101

Non-overlapping channels: 1, 6, 11

  • Wi-Fi is a victim of its own success
  • Any user, on any network (in the same channel) can and will affect your latency and throughput!
  • 10-20+ overlapping networks in same channel
    • shared access medium

WiFi (Chapter 6)

87 of 88

Real-world Wi-Fi performance: 2.4 GHz vs 5 GHz (YMMV)

Real-world 1st hop latency   Median (ms)   95% (ms)   99% (ms)

2.4 GHz                      6.22          34.87      58.91
5 GHz                        0.90          1.58       7.89

Sample data from my own home Wi-Fi network... Upgraded router, removed ~50 ms of latency on the first hop!

88 of 88

Adapt to variable bandwidth

  • Adaptive bitrate streaming
    • 5-10 second chunks of video content
    • Adapt, do not predict!

Adapt to variable latency and jitter

  • High variability for first hop for every packet
    • Jitter in packet arrival times
    • Variability != packet loss
      • Link layer retransmissions and error correction...