
Sri Raghavendra Educational Institutions Society (R)

(Approved by AICTE, Accredited by NAAC, Affiliated to VTU, Karnataka)

Sri Krishna Institute of Technology

www.skit.org.in

Title: Transport Layer

CO addressed: CO4

Course: Computer Networks

Presented by: Nagaraja M

Department: Electronics & Communication Engg


Goals:

  • understand principles behind transport layer services:
    • multiplexing, demultiplexing
    • reliable data transfer
    • flow control
    • congestion control

  • learn about Internet transport layer protocols:
    • UDP: connectionless transport
    • TCP: connection-oriented reliable transport
    • TCP congestion control


3.1 Introduction and Transport-Layer Services

A transport-layer protocol provides for logical communication between application processes running on different hosts.

As shown in Figure 3.1, transport-layer protocols are implemented in the end systems but not in network routers.

On the sending side, the transport layer converts the application-layer messages it receives from a sending application process into transport-layer packets, known as transport-layer segments.

Note that network routers act only on the network-layer fields of the datagram; they do not examine the fields of the transport-layer segment encapsulated within the datagram.


A transport-layer protocol provides logical communication between processes running on different hosts, whereas a network-layer protocol provides logical communication between hosts.

Household analogy: 12 kids in Ann's house sending letters to 12 kids in Bill's house.

  • Hosts = houses
  • Processes = kids/cousins
  • Application messages = letters in envelopes
  • Transport-layer protocol = Ann and Bill, who multiplex/demultiplex to the in-house siblings
  • Network-layer protocol = the postal service


Internet transport-layer protocols

  • reliable, in-order delivery (TCP)
    • congestion control
    • flow control
    • connection setup

  • unreliable, unordered delivery: UDP

[Figure: the five-layer protocol stack (application, transport, network, data link, physical)]


3.2 Multiplexing and Demultiplexing

At the destination host, the transport layer receives segments from the network layer just below. The transport layer has the responsibility of delivering the data in these segments to the appropriate application process running in the host.

The job of gathering data chunks at the source host from different sockets, encapsulating each data chunk with header information to create segments, and passing the segments to the network layer is called multiplexing.

At the receiving end, the transport layer examines these header fields (the port numbers) to identify the receiving socket and then delivers the segment to the correct socket. This process is called demultiplexing.
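A minimal sketch of demultiplexing in Python (the port numbers 10001 and 10002 are arbitrary examples): two UDP sockets on the same host are bound to different ports, and each arriving segment is delivered to the socket whose port matches the segment's destination port field.

# Demultiplexing sketch: two UDP sockets on one host, distinguished by port.
from socket import socket, AF_INET, SOCK_DGRAM

sock_a = socket(AF_INET, SOCK_DGRAM)
sock_a.bind(('', 10001))                   # socket A listens on port 10001
sock_b = socket(AF_INET, SOCK_DGRAM)
sock_b.bind(('', 10002))                   # socket B listens on port 10002

sender = socket(AF_INET, SOCK_DGRAM)       # the OS picks an ephemeral source port
sender.sendto(b'to A', ('127.0.0.1', 10001))
sender.sendto(b'to B', ('127.0.0.1', 10002))

# The transport layer uses the destination port number to pick the socket.
print(sock_a.recvfrom(2048))               # (b'to A', ('127.0.0.1', <sender port>))
print(sock_b.recvfrom(2048))               # (b'to B', ('127.0.0.1', <sender port>))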


Multiplexing/demultiplexing

Multiplexing at the sender: handle data from multiple sockets, add a transport header (later used for demultiplexing).

Demultiplexing at the receiver: use header info to deliver received segments to the correct socket.

[Figure: hosts running application processes P1 to P4, each host with application, transport, network, link, and physical layers]


  • A process (as part of a network application) can have one or more sockets, doors through which data passes from the network to the process and through which data passes from the process to the network.

  • As shown in the figure above, the transport layer in the receiving host does not actually deliver data directly to a process, but instead to an intermediary socket.


How demultiplexing works

Figure 3.3 Source and destination port-number fields in a transport-layer segment


  • Each port number is a 16-bit number, ranging from 0 to 65535.
  • The port numbers ranging from 0 to 1023 are called well-known port numbers and are restricted, meaning they are reserved for use by well-known application protocols such as HTTP (port 80) and FTP (port 21).
  • The list of well-known port numbers is given in RFC 1700 and is updated at http://www.iana.org [RFC 3232].


  • Connectionless Multiplexing and Demultiplexing

  • A Java program running in a host can create a UDP socket with the line
    • DatagramSocket mySocket = new DatagramSocket();
  • A Python program running in a host can create a UDP socket with the line
    • clientSocket = socket(AF_INET, SOCK_DGRAM)
  • When a UDP socket is created in this manner, the transport layer automatically assigns a port number to the socket.
  • In particular, the transport layer assigns a port number in the range 1024 to 65535 that is not currently being used by any other UDP port in the host.
  • Alternatively, the application can bind a specific port number to the socket:
    • DatagramSocket mySocket = new DatagramSocket(19157); // Java
    • clientSocket.bind(('', 19157))  # Python

In this case, the application assigns a specific port number, namely 19157, to the UDP socket.


What is the purpose of the source port number?

  • As shown in Figure 3.4, in the A-to-B segment the source port number serves as part of a “return address”—when B wants to send a segment back to A, the destination port in the B-to-A segment will take its value from the source port value of the A-to-B segment.
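A small Python sketch of this return-address idea (port 12000 is an example value): a UDP server reads the source address and port returned by recvfrom() and uses that pair as the destination of its reply.

# The (IP, source port) pair returned by recvfrom() becomes the reply's destination.
from socket import socket, AF_INET, SOCK_DGRAM

serverSocket = socket(AF_INET, SOCK_DGRAM)
serverSocket.bind(('', 12000))                      # example server port

data, clientAddress = serverSocket.recvfrom(2048)   # clientAddress = (client IP, source port)
serverSocket.sendto(data.upper(), clientAddress)    # destination port taken from the source port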


Connection-Oriented Multiplexing and Demultiplexing

  • One difference between a TCP socket and a UDP socket is that a TCP socket is identified by a four-tuple:
  • (source IP address, source port number, destination IP address, destination port number).
  • Thus, when a TCP segment arrives from the network to a host, the host uses all four values to direct (demultiplex) the segment to the appropriate socket.


  • The TCP server application has a “welcoming socket” that waits for connection-establishment requests from TCP clients (see Figure 2.29) on port number 12000.
  • The TCP client creates a socket and sends a connection establishment request segment with the lines:

clientSocket = socket(AF_INET, SOCK_STREAM)

clientSocket.connect((serverName,12000))


  • The transport layer at the server notes the following four values in the connection-request segment:

(1) the source port number in the segment,

(2) the IP address of the source host,

(3) the destination port number in the segment, and

(4) its own IP address.

These four values are illustrated in the accompanying diagram.


  • Server B will still be able to correctly demultiplex the two connections having the same source port number, since the two connections have different source IP addresses.
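A Python sketch (using the example port 12000) of this behavior: each accepted connection gets its own socket, and getsockname()/getpeername() expose the local and remote (IP, port) pairs that make up the identifying four-tuple.

# Each accept() returns a new connection socket; two clients that happen to use
# the same source port still map to different sockets because their IPs differ.
from socket import socket, AF_INET, SOCK_STREAM

welcomeSocket = socket(AF_INET, SOCK_STREAM)
welcomeSocket.bind(('', 12000))
welcomeSocket.listen(5)

while True:
    connSocket, addr = welcomeSocket.accept()       # one socket per connection
    print('local  (IP, port):', connSocket.getsockname())
    print('remote (IP, port):', connSocket.getpeername())
    connSocket.close()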


3.3 Connectionless Transport: UDP

  • UDP takes messages from the application process, attaches source and destination port number fields for the multiplexing/demultiplexing service, adds two other small fields, and passes the resulting segment to the network layer.
  • The network layer encapsulates the transport-layer segment into an IP datagram and then makes a best-effort attempt to deliver the segment to the receiving host.
  • If the segment arrives at the receiving host, UDP uses the destination port number to deliver the segment’s data to the correct application process.

  • Note that with UDP there is no handshaking between sending and receiving transport-layer entities before sending a segment. For this reason, UDP is said to be connectionless.


  • Why would an application developer ever choose to build an application over UDP rather than over TCP? Isn’t TCP always preferable, since TCP provides a reliable data transfer service, while UDP does not?
  • Many applications are better suited for UDP for the following reasons:
    • Finer application-level control over what data is sent, and when: UDP passes the segment to the network layer immediately, whereas TCP’s congestion control may delay it
    • No connection establishment
    • No connection state (such as buffers, congestion-control parameters, and sequence and acknowledgment number parameters)
    • Small packet header overhead: the TCP segment has 20 bytes of header overhead in every segment, whereas UDP has only 8 bytes of overhead


3.3.1 UDP Segment Structure


  • The UDP header has only four fields, each consisting of two bytes

1) Source port number field

2) Destination port number field

3) Checksum is used by the receiving host to check whether errors have been introduced into the segment.

4) The length field specifies the length of the UDP segment, including the header, in bytes
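A sketch of this 8-byte header using Python's struct module: four 16-bit fields packed in network byte order (the port numbers and payload below are made-up example values).

# UDP header sketch: source port, destination port, length, checksum (16 bits each).
import struct

src_port, dst_port = 19157, 53           # example values only
payload = b'example payload'
length = 8 + len(payload)                # header (8 bytes) plus data
checksum = 0                             # 0 means "no checksum computed" in IPv4 UDP

header = struct.pack('!HHHH', src_port, dst_port, length, checksum)
print(len(header))                       # -> 8
print(struct.unpack('!HHHH', header))    # -> (19157, 53, 23, 0)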


3.3.2 UDP Checksum

  • The UDP checksum provides for error detection. That is, the checksum is used to determine whether bits within the UDP segment have been altered (for example, by noise in the links or while stored in a router) as it moved from source to destination.

  • UDP at the sender side performs the 1s complement of the sum of all the 16-bit words in the segment, with any overflow encountered during the sum being wrapped around.

As an example, suppose that we have the following three 16-bit words:

    • 0 1 1 0 0 1 1 0 0 1 1 0 0 0 0 0
    • 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
    • 1 0 0 0 1 1 1 1 0 0 0 0 1 1 0 0


  • The sum of the first two of these 16-bit words is

0 1 1 0 0 1 1 0 0 1 1 0 0 0 0 0
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
-----------------------------------
1 0 1 1 1 0 1 1 1 0 1 1 0 1 0 1

  • Adding the third word to the above sum gives (note that this addition overflowed, and the overflow was wrapped around)

1 0 1 1 1 0 1 1 1 0 1 1 0 1 0 1
1 0 0 0 1 1 1 1 0 0 0 0 1 1 0 0
-----------------------------------
0 1 0 0 1 0 1 0 1 1 0 0 0 0 1 0


  • The 1s complement of the sum 0100101011000010 is 1011010100111101, which becomes the checksum.

  • At the receiver, all four 16-bit words are added, including the checksum.
  • If no errors are introduced into the packet, then clearly the sum at the receiver will be 1111111111111111.
  • If one of the bits is a 0, then we know that errors have been introduced into the packet.
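A short Python sketch that reproduces this worked example: a 16-bit one's-complement sum with carry wraparound, its complement as the checksum, and the receiver-side check that everything sums to all 1s.

# 1s-complement checksum over 16-bit words, wrapping any carry back in.
def ones_complement_sum(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap overflow around
    return total

words = [0b0110011001100000,
         0b0101010101010101,
         0b1000111100001100]

checksum = ~ones_complement_sum(words) & 0xFFFF
print(format(checksum, '016b'))                    # -> 1011010100111101

# Receiver: adding all words plus the checksum should give all 1s.
print(format(ones_complement_sum(words + [checksum]), '016b'))   # -> 1111111111111111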


3.4 Principles of Reliable Data Transfer

  • Figure 3.8 illustrates the framework for our study of reliable data transfer.
  • The service abstraction provided to the upper-layer entities is that of a reliable channel through which data can be transferred.
  • With a reliable channel, no transferred data bits are corrupted (flipped from 0 to 1, or vice versa) or lost, and all are delivered in the order in which they were sent.
  • This is precisely the service model offered by TCP to the Internet applications that invoke it.


Reliable data transfer: getting started

  • rdt_send(): called from above (e.g., by the application); passes data to be delivered to the receiver's upper layer
  • udt_send(): called by rdt to transfer a packet over the unreliable channel to the receiver
  • rdt_rcv(): called when a packet arrives on the receive side of the channel
  • deliver_data(): called by rdt to deliver data to the upper layer


  • Figure 3.8(b) illustrates the interfaces for our data transfer protocol.

  • The sending side of the data transfer protocol will be invoked from above by a call to rdt_send().
  • It will pass the data to be delivered to the upper layer at the receiving side.

  • On the receiving side, rdt_rcv() will be called when a packet arrives from the receiving side of the channel.
  • When the rdt protocol wants to deliver data to the upper layer, it will do so by calling deliver_data().


3.4.1 Building a Reliable Data Transfer Protocol

  • Reliable Data Transfer on a perfectly reliable channel: rdt1.0
  • The finite-state machine (FSM) definitions for the rdt1.0 sender and receiver are shown in Figure 3.9.
  • The FSM in Figure 3.9(a) defines the operation of the sender, while the FSM in Figure 3.9(b) defines the operation of the receiver.


Figure 3.9 rdt1.0 – A protocol for a completely reliable channel


  • The event causing the transition is shown above the horizontal line labeling the transition, and the actions taken when the event occurs are shown below the horizontal line.
  • When no action is taken on an event, or when no event occurs and an action is taken, the symbol Λ is used below or above the horizontal line, respectively.

  • The sending side of rdt simply accepts data from the upper layer via the rdt_send(data) event, creates a packet containing the data (via the action make_pkt(data)), and sends the packet into the channel.
  • The receiving side of rdt receives a packet from the underlying channel via the rdt_rcv(packet) event, removes the data from the packet (via the action extract(packet, data)), and passes the data up to the upper layer (via the action deliver_data(data)).
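A minimal Python sketch of rdt1.0, using the same helper names as the FSM (make_pkt, extract, deliver_data, udt_send); the channel is modeled as a perfect in-memory queue, so no feedback or retransmission is needed.

# rdt1.0 sketch: the underlying channel is completely reliable.
from collections import deque

channel = deque()                        # stands in for the reliable channel

def udt_send(packet):                    # here the channel never loses or corrupts
    channel.append(packet)

def make_pkt(data):
    return {'data': data}

def extract(packet):
    return packet['data']

def deliver_data(data):
    print('delivered to upper layer:', data)

def rdt_send(data):                      # sender FSM: one state, one transition
    udt_send(make_pkt(data))

def rdt_rcv(packet):                     # receiver FSM: one state, one transition
    deliver_data(extract(packet))

rdt_send('hello')
rdt_rcv(channel.popleft())               # -> delivered to upper layer: hello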


  • Reliable Data Transfer over a Channel with Bit Errors: rdt2.0


In a computer network setting, reliable data transfer protocols based on retransmission are known as ARQ (Automatic Repeat reQuest) protocols.

Three additional protocol capabilities are required in ARQ protocols to handle the presence of bit errors:

  • Error detection
  • Receiver feedback
  • Retransmission
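A Python sketch of these three capabilities in a stop-and-wait style: a toy checksum for error detection, ACK/NAK receiver feedback, and retransmission whenever a NAK comes back (the channel model and packet format are invented for illustration).

# ARQ sketch: error detection + receiver feedback (ACK/NAK) + retransmission.
import random

def checksum(data):
    return sum(data) & 0xFFFF                      # toy checksum, for illustration only

def unreliable_channel(packet):
    if random.random() < 0.3:                      # occasionally flip a bit
        corrupted = bytearray(packet['data'])
        corrupted[0] ^= 0x01
        return {'data': bytes(corrupted), 'checksum': packet['checksum']}
    return packet

def receiver(packet):
    if checksum(packet['data']) == packet['checksum']:
        print('receiver: delivering', packet['data'])
        return 'ACK'
    return 'NAK'                                   # ask the sender to retransmit

def rdt_send(data):
    packet = {'data': data, 'checksum': checksum(data)}
    while True:                                    # stop and wait for positive feedback
        if receiver(unreliable_channel(packet)) == 'ACK':
            return
        print('sender: got NAK, retransmitting')

rdt_send(b'hello')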


TCP Overview (RFCs 793, 1122, 1323, 2018, 2581)


  • full duplex data:
    • bi-directional data flow in same connection
    • MSS: maximum segment size
  • connection-oriented:
    • handshaking (exchange of control msgs) inits sender, receiver state before data exchange
  • flow controlled:
    • sender will not overwhelm receiver
  • point-to-point:
    • one sender, one receiver
  • reliable, in-order byte stream:
    • no “message boundaries”
  • pipelined:
    • TCP congestion and flow control set window size


3.5.2 TCP segment structure


[Figure: TCP segment structure, 32 bits wide]

  • source port # and dest port # (16 bits each)
  • sequence number (32 bits; counting is by bytes of data, not segments)
  • acknowledgement number (32 bits)
  • header length, unused bits, and flag bits: URG (urgent data, generally not used), ACK (acknowledgement number valid), PSH (push data now, generally not used), RST, SYN, FIN (connection setup and teardown commands)
  • receive window (number of bytes the receiver is willing to accept)
  • checksum (Internet checksum, as in UDP)
  • urgent data pointer
  • options (variable length)
  • application data (variable length)


  • Sequence Numbers and Acknowledgment Numbers


The acknowledgment number that Host A puts in its segment is the sequence number of the next byte Host A is expecting from Host B.


  • Host A has received all bytes 0 through 535 from B
  • A is waiting for byte 536 to arrive
  • So Host A puts 536 in the acknowledgment number field of the segment it sends to B


3.5.3 Round-Trip Time Estimation and Timeout


What is the length of the timeout interval?

Soln:

The timeout should be larger than the connection’s round-trip time (RTT), that is, the time from when a segment is sent until it is acknowledged. Otherwise, unnecessary retransmissions would be sent.

  • Estimating the Round-Trip Time

EstimatedRTT = (1 - α) · EstimatedRTT + α · SampleRTT, with the recommended value α = 0.125
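A small Python sketch of this estimator, together with the DevRTT and TimeoutInterval formulas recommended alongside it in RFC 6298 (the sample RTT values below are made up).

# EstimatedRTT = (1 - alpha) * EstimatedRTT + alpha * SampleRTT, alpha = 0.125
# DevRTT = (1 - beta) * DevRTT + beta * |SampleRTT - EstimatedRTT|, beta = 0.25
# TimeoutInterval = EstimatedRTT + 4 * DevRTT
alpha, beta = 0.125, 0.25
estimated_rtt = 0.100        # seconds; initial values chosen for illustration
dev_rtt = 0.025

for sample_rtt in [0.106, 0.120, 0.095, 0.140]:    # hypothetical measurements
    estimated_rtt = (1 - alpha) * estimated_rtt + alpha * sample_rtt
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - estimated_rtt)
    timeout_interval = estimated_rtt + 4 * dev_rtt
    print(f'EstimatedRTT={estimated_rtt:.4f}s  TimeoutInterval={timeout_interval:.4f}s')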


Fig: RTT samples and RTT estimates


3.5.4 Reliable Data Transfer

  • TCP creates rdt service on top of IP’s unreliable service
    • pipelined segments
    • cumulative acks
    • single retransmission timer
  • retransmissions triggered by:
    • timeout events
    • duplicate acks


TCP sender events:


data rcvd from app:

  • create segment with seq #
  • seq # is byte-stream number of first data byte in segment
  • start timer if not already running
    • think of timer as for oldest unacked segment
    • expiration interval: TimeOutInterval

timeout:

  • retransmit segment that caused timeout
  • restart timer

ack rcvd:

  • if ack acknowledges previously unacked segments
    • update what is known to be ACKed
    • start timer if there are still unacked segments
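A simplified Python sketch of the three events above, tracking SendBase and a single (stand-in) retransmission timer; segment transmission is represented by print statements.

# Simplified TCP sender: one timer for the oldest unacknowledged segment,
# cumulative ACKs advance send_base.
next_seq_num = 0
send_base = 0                     # oldest unacknowledged byte
unacked = {}                      # first byte of segment -> segment data
timer_running = False

def start_timer():
    global timer_running
    timer_running = True          # stand-in for arming a real timer

def stop_timer():
    global timer_running
    timer_running = False

def data_from_app(data):                        # event 1: data received from application
    global next_seq_num
    unacked[next_seq_num] = data                # create segment with seq #
    print('send segment, seq =', next_seq_num)
    if not timer_running:
        start_timer()
    next_seq_num += len(data)

def timeout():                                  # event 2: timer expires
    print('retransmit segment, seq =', send_base)
    start_timer()

def ack_received(ack_num):                      # event 3: ACK arrives
    global send_base
    if ack_num > send_base:                     # acknowledges previously unacked data
        for seq in [s for s in unacked if s < ack_num]:
            del unacked[seq]
        send_base = ack_num
        stop_timer()
        if unacked:                             # still unacked segments: restart timer
            start_timer()

data_from_app(b'12345678')        # seq 0, 8 bytes
data_from_app(b'x' * 20)          # seq 8, 20 bytes
ack_received(8)                   # first segment acknowledged, send_base -> 8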


TCP: retransmission scenarios


Lost ACK scenario: Host A sends Seq=92 with 8 bytes of data; Host B's ACK=100 is lost; A's timer expires, so A retransmits Seq=92, 8 bytes of data; B acknowledges again with ACK=100.

Premature timeout scenario: Host A sends Seq=92 (8 bytes) and then Seq=100 (20 bytes); the ACKs (ACK=100, ACK=120) are delayed past A's timeout, so A retransmits Seq=92; the ACKs then arrive and SendBase advances from 92 to 100 to 120, and B answers the retransmitted segment with another ACK=120.


Cumulative ACK scenario: Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); the ACK=100 for the first segment is lost, but the cumulative ACK=120 arrives before the timeout, so A retransmits neither segment and continues with Seq=120, 15 bytes of data.


Fig: Retransmission due to a lost acknowledgment


Premature timeout


A cumulative acknowledgment avoids retransmission of the first segment


Doubling the Timeout Interval: each time TCP retransmits, it sets the next timeout interval to twice the previous value, rather than deriving it from the last EstimatedRTT.

Fast Retransmit: retransmitting the missing segment before that segment's timer expires, triggered when the sender receives three duplicate ACKs for the same data (see the sketch below).
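A sketch of the duplicate-ACK bookkeeping behind fast retransmit: after the third duplicate ACK for the same byte, the presumably lost segment is resent without waiting for its timer to expire.

# Fast retransmit sketch: 3 duplicate ACKs trigger retransmission of the
# segment starting at the duplicated acknowledgment number.
dup_ack_count = 0
last_ack = None

def retransmit(seq):
    print('fast retransmit: resend segment starting at byte', seq)

def on_ack(ack_num):
    global dup_ack_count, last_ack
    if ack_num == last_ack:
        dup_ack_count += 1
        if dup_ack_count == 3:
            retransmit(ack_num)
    else:                                     # new ACK: remember it, reset the counter
        last_ack = ack_num
        dup_ack_count = 0

for ack in [100, 120, 120, 120, 120]:         # three duplicates of ACK=120
    on_ack(ack)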


3.5.5 Flow Control

  • Flow control is a speed-matching service: it matches the rate at which the sender sends against the rate at which the receiving application reads.

  • TCP provides this flow-control service to its applications to eliminate the possibility of the sender overflowing the receiver's buffer.


  • LastByteRead: the number of the last byte in the data stream read out of the buffer by the application process
  • LastByteRcvd: the number of the last byte that has arrived from the network and been placed in the receive buffer

  • Because TCP is not permitted to overflow the allocated buffer, we must have

LastByteRcvd – LastByteRead <= RcvBuffer

  • The receive window, denoted rwnd is set to the amount of spare room in the buffer:

rwnd = RcvBuffer – [LastByteRcvd – LastByteRead]
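A small Python sketch of this bookkeeping, with made-up byte counts, showing the rwnd value the receiver would advertise back to the sender.

# Receive window = spare room in the receive buffer.
rcv_buffer = 65536            # allocated receive buffer, bytes (example value)
last_byte_rcvd = 40000        # last byte placed into the buffer by TCP
last_byte_read = 12000        # last byte read out of the buffer by the application

assert last_byte_rcvd - last_byte_read <= rcv_buffer      # buffer must not overflow
rwnd = rcv_buffer - (last_byte_rcvd - last_byte_read)
print('advertised receive window:', rwnd, 'bytes')        # -> 37536 bytes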


3.5.6 TCP Connection Management

Let’s first take a look at how a TCP connection is established.

  • Suppose a process running in one host (client) wants to initiate a connection with another process in another host (server). The client application process first informs the client TCP that it wants to establish a connection to a process in the server.
  • Step 1: The client-side TCP sends a special segment with the SYN bit set to 1 and an initial sequence number (client_isn) to the server-side TCP.
  • Step 2: The server extracts the SYN segment, allocates TCP buffers and variables for the connection, and replies with a connection-granted SYNACK segment (SYN = 1, acknowledgment number = client_isn + 1, its own initial sequence number server_isn).
  • Step 3: The client allocates its own buffers and variables and sends a segment acknowledging the server (acknowledgment number = server_isn + 1, SYN = 0); this third segment may already carry application data.
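The client-side connect() shown earlier has a server-side counterpart; a sketch of it (reusing the example port 12000): accept() returns only after the three-way handshake with a client has completed.

# Server side: listen() prepares the welcoming socket; accept() returns a new
# connection socket once the SYN / SYNACK / ACK exchange is done.
from socket import socket, AF_INET, SOCK_STREAM

serverSocket = socket(AF_INET, SOCK_STREAM)
serverSocket.bind(('', 12000))
serverSocket.listen(1)                            # welcoming socket on port 12000

connectionSocket, addr = serverSocket.accept()    # handshake completed
print('connection established with', addr)
connectionSocket.close()                          # begins connection teardown (FIN)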


Life cycle of a TCP connection

  • During the life of a TCP connection, the TCP protocol running in each host makes transitions through various TCP states.
  • Figure 3.41 illustrates a typical sequence of TCP states that are visited by the client TCP.
  • The client TCP begins in the CLOSED state.


Principles of TCP Congestion Control

The Causes and the Costs of Congestion

Scenario 1: Two Senders, a Router with Infinite Buffers


Scenario 2: Two Senders and a Router with Finite Buffers


Network-Assisted Congestion-Control Example:

ATM ABR Congestion Control

ATM ABR (asynchronous transfer mode, available bit rate) is a protocol that takes a network-assisted approach toward congestion control. Congestion is signaled with three bits:

  • explicit forward congestion indication (EFCI) bit: set to 1 in data cells by a congested switch
  • congestion indication (CI) bit: set in resource-management (RM) cells to signal congestion back to the sender
  • no increase (NI) bit: set in RM cells under mild congestion to tell the sender not to increase its rate


Slow Start
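Slow start is the first phase of TCP congestion control: the congestion window cwnd starts at 1 MSS and grows by one MSS per ACK, roughly doubling every RTT, until a loss event occurs or the slow-start threshold is reached. A small Python sketch of that growth (the MSS and ssthresh values are made up):

# Slow-start sketch: cwnd doubles each RTT until it reaches ssthresh.
MSS = 1460                    # bytes, example maximum segment size
ssthresh = 16 * MSS           # example slow-start threshold

cwnd = 1 * MSS
rtt = 0
while cwnd < ssthresh:
    print(f'RTT {rtt}: cwnd = {cwnd // MSS} MSS')
    cwnd *= 2                 # one MSS added per ACK amounts to doubling per RTT
    rtt += 1
print(f'RTT {rtt}: cwnd = {cwnd // MSS} MSS, reached ssthresh; switch to congestion avoidance')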