1 of 129

UNIT-4

  1. Transport Layer: Process-to-Process Communication,
  2. User Datagram Protocol (UDP),
  3. Transmission Control Protocol (TCP),
  4. SCTP; Congestion Control;
  5. Quality of Service,
  6. QoS improving techniques: Leaky Bucket and Token Bucket algorithms.


1. Transport Layer: Process-to-Process Communication

  • The transport layer is responsible for process-to-process delivery: the delivery of a packet, part of a message, from one process to another.
  • Two processes communicate in a client/server relationship.

  1. The data link layer is responsible for delivery of frames between two neighboring nodes over a link. This is called node-to-node delivery.
  2. The network layer is responsible for delivery of datagrams between two hosts. This is called host-to-host delivery.
  3. Communication on the Internet is not defined as the exchange of data between two nodes or between two hosts.
  4. Real communication takes place between two processes (application programs).


1.1 Port Numbers:

  • A port number is a transport-layer identifier for the sending and receiving processes on a host.
  • At the transport layer, we need a transport layer address, called a port number, to choose among multiple processes running on the destination host. The destination port number is needed for delivery; the source port number is needed for the reply.
  • Well-known ports (0-1023): assigned by IANA (Internet Assigned Numbers Authority).
  • Registered ports (1024-49151): registered with IANA.
  • Dynamic (ephemeral) ports (49152-65535): used for custom applications.
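The three ranges above can be expressed as a small helper function; this is a sketch (the name `classify_port` is my own, not from any standard API):

```python
def classify_port(port: int) -> str:
    """Classify a port number into its IANA range."""
    if not 0 <= port <= 65535:
        raise ValueError("port must fit in 16 bits")
    if port <= 1023:
        return "well-known"   # assigned by IANA
    if port <= 49151:
        return "registered"   # registered with IANA
    return "dynamic"          # ephemeral / custom applications

print(classify_port(53))      # well-known (DNS)
print(classify_port(8080))    # registered
print(classify_port(50000))   # dynamic
```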


1.2 Socket Addresses:

  • Process-to-process delivery needs two identifiers, IP address and the port number, at each end to make a connection. The combination of an IP address and a port number is called a socket address. The client socket address defines the client process uniquely just as the server socket address defines the server process uniquely.
  • A transport layer protocol needs a pair of socket addresses: the client socket address and the server socket address. These four pieces of information are part of the IP header and the transport layer protocol header. The IP header contains the IP addresses; the UDP or TCP header contains the port numbers.
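Python's standard `socket` module uses exactly this (IP address, port) pair representation for a socket address. A minimal sketch of a UDP exchange on the loopback interface (the port is chosen by the OS; the message is illustrative):

```python
import socket

# A socket address is the pair (IP address, port number).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
server_addr = server.getsockname()    # the server socket address (IP, port)

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", server_addr)
data, client_addr = server.recvfrom(1024)  # client_addr = client socket address

# The four pieces of information: two IP addresses (IP header)
# plus two port numbers (UDP header).
print("client socket address:", client_addr)
print("server socket address:", server_addr)
client.close()
server.close()
```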


1.3 Multiplexing and Demultiplexing

Multiplexing:

  • At the sender site, there may be several processes that need to send packets. However, there is only one transport layer protocol at any time. This is a many-to-one relationship and requires multiplexing. The protocol accepts messages from different processes, differentiated by their assigned port numbers. After adding the header, the transport layer passes the packet to the network layer.

Demultiplexing:

  • At the receiver site, the relationship is one-to-many and requires demultiplexing. The transport layer receives datagrams from the network layer. After error checking and dropping of the header, the transport layer delivers each message to the appropriate process based on the port number.
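Demultiplexing by destination port number can be pictured as a dispatch table; a toy sketch (the registered handlers are hypothetical, not real servers):

```python
# Hypothetical processes registered on ports, as a dispatch table.
processes = {
    53: lambda msg: f"DNS got {msg!r}",
    80: lambda msg: f"HTTP got {msg!r}",
}

def demultiplex(dst_port: int, message: bytes) -> str:
    """Deliver a message to the process bound to dst_port."""
    handler = processes.get(dst_port)
    if handler is None:
        # With UDP, an unreachable port would trigger an ICMP message.
        return "no process: discard"
    return handler(message)

print(demultiplex(53, b"query"))
print(demultiplex(9, b"lost"))
```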


1.4 Connectionless versus Connection-oriented Service

A transport layer protocol can either be connectionless or connection-oriented.

Connectionless Service

  • In a connectionless service, the packets are sent from one party to another with no need for connection establishment or connection release.
  • The packets are not numbered; they may be delayed or lost or may arrive out of sequence.
  • There is no acknowledgment either. We will see shortly that one of the transport layer protocols in the Internet model, UDP, is connectionless.

Connection Oriented Service

  • In a connection-oriented service, a connection is first established between the sender and the receiver.
  • Data are transferred.
  • At the end, the connection is released. We will see shortly that TCP and SCTP are connection-oriented protocols.


1.5 Reliable versus Unreliable:

Reliable (Connection-Oriented): A communication method that ensures data is delivered accurately, in the correct order, and without duplication or loss.

Characteristics:

  • 1. Guaranteed delivery
  • 2. Error-free transmission
  • 3. Sequential delivery
  • 4. Acknowledgment mechanisms
  • 5. Retransmission on failure

Examples:

  • 1. TCP (Transmission Control Protocol)
  • 2. HTTP/HTTPS (Hypertext Transfer Protocol)
  • 3. FTP (File Transfer Protocol)
  • 4. SSH (Secure Shell)
  • 5. SMTP (Simple Mail Transfer Protocol)


Unreliable (Connectionless): A communication method that does not guarantee data delivery, order, or accuracy.

Characteristics:

1. Best-effort delivery

2. No guarantee of delivery

3. No error correction

4. No sequential delivery

5. No acknowledgment mechanisms

Examples:

1. UDP (User Datagram Protocol)

2. DNS (Domain Name System)

3. DHCP (Dynamic Host Configuration Protocol)

4. SNMP (Simple Network Management Protocol)

5. Online gaming protocols (e.g., UDP-based)


2. User Datagram Protocol (UDP)

  • The User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol.
  • It has very limited error-checking capability.
  • It is a very simple protocol that can be used with minimum overhead.
  • UDP takes less time than TCP or SCTP (Stream Control Transmission Protocol).
  • It is a good protocol for data flowing in one direction.
  • It is simple and suitable for query-based communication.


2.1 Well-Known Ports for UDP

  • Port numbers are assigned by IANA (Internet Assigned Numbers Authority).
  • The table below shows some well-known port numbers used by UDP. Some port numbers can be used by both UDP and TCP.


2.2 User Datagram or UDP Header format

  • UDP packets, called user datagrams, have a fixed-size header of 8 bytes.
  • Below Figure shows the format of a user datagram.


  • Source port number: a 16-bit field that identifies the process sending the packet.
  • Destination port number: a 16-bit field that identifies the application-level service on the destination machine.
  • Length: a 16-bit field that specifies the total length of the UDP packet, including the header. The minimum value is 8 bytes, the size of the header alone.
  • Checksum: this field is used to detect errors over the entire user datagram (header plus data).
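The four 16-bit fields above pack into the fixed 8-byte header. A sketch using Python's `struct` module (the checksum is left as 0, which in UDP over IPv4 means "checksum not computed"; `build_udp_header` is an illustrative name):

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Build the 8-byte UDP header: source port, destination port,
    length (header + data), checksum (0 = not computed here)."""
    length = 8 + len(payload)          # minimum value is 8: the header alone
    return struct.pack("!HHHH", src_port, dst_port, length, 0)

header = build_udp_header(49500, 53, b"dns-query")
src, dst, length, csum = struct.unpack("!HHHH", header)
print(src, dst, length, csum)   # 49500 53 17 0
```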


2.3 UDP Operation (Services)

1. Connectionless Service

  • UDP is a connectionless protocol: it does not create a virtual path to transfer data. Packets may travel along different paths between the sender and the receiver, so they can be lost or arrive out of order.

2. Flow and Error Control

  • There is no flow control and hence no window mechanism. The receiver may overflow with incoming messages. There is no error control mechanism in UDP except for the checksum. This means that the sender does not know if a message has been lost or duplicated. When the receiver detects an error through the checksum, the user datagram is silently discarded. The lack of flow control and error control means that the process using UDP should provide these mechanisms.


3. Encapsulation and Decapsulation:

  • To send a message from one process to another, the UDP protocol encapsulates and decapsulates messages in an IP datagram.

4. Queuing


  • At the client site, when a process starts, it requests a port number from the operating system.
  • Some implementations create both an incoming and an outgoing queue associated with each process.
  • Other implementations create only an incoming queue associated with each process.
  • Note that even if a process wants to communicate with multiple processes, it obtains only one port number and eventually one outgoing and one incoming queue.
  • The queues opened by the client are, in most cases, identified by ephemeral port numbers.
  • The queues function as long as the process is running. When the process terminates, the queues are destroyed.
  • The client process can send messages to the outgoing queue by using the source port number specified in the request.
  • UDP removes the messages one by one and, after adding the UDP header, delivers them to IP. An outgoing queue can overflow.


2.4 UDP applications

The following lists some uses of the UDP protocol:

  • UDP is suitable for a process that requires simple request-response communication with little concern for flow and error control. It is not usually used for a process such as FTP that needs to send bulk data.
  • UDP is suitable for a process with internal flow and error control mechanisms. For example, the Trivial File Transfer Protocol (TFTP) process includes flow and error control. It can easily use UDP.
  • UDP is a suitable transport protocol for multicasting. Multicasting capability is embedded in the UDP software but not in the TCP software.
  • UDP is used for management processes such as SNMP.
  • UDP is used for some route updating protocols such as Routing Information Protocol (RIP).


Advantages & Disadvantages

Advantages:

1. Fast transmission

2. Low overhead

3. Simple implementation

4. Suitable for real-time applications

5. Connectionless

Disadvantages:

1. No guarantee of delivery

2. No error correction

3. No retransmission

4. Packet loss or duplication possible

5. Limited reliability


3. Transmission Control Protocol (TCP)

  • TCP is called a connection-oriented, reliable transport protocol.
  • It adds connection-oriented and reliability features to the services of IP.

Topics discussed in this section:

TCP Features

TCP Services

Segment

A TCP Connection

Flow Control

Error Control


3.1 Features of TCP

  • TCP, like UDP, is a process-to-process (program-to-program) protocol.
  • TCP, therefore, like UDP, uses port numbers. Unlike UDP, TCP is a connection-oriented protocol; it creates a virtual connection between two TCPs to send data.
  • In addition, TCP uses flow and error control mechanisms at the transport level.
  • In brief, TCP is called a connection-oriented, reliable transport protocol.
  • It adds connection-oriented and reliability features to the services of IP.


3.2 TCP Services

  • Process-to-Process Communication
  • Stream Delivery Service
  • Full-Duplex Communication
  • Connection-Oriented Service
  • Reliable Service


  • 1. Process-to-Process Communication: Like UDP, TCP provides process-to-process communication using port numbers. The table below lists some well-known port numbers used by TCP.


2. Stream Delivery Service:

  • TCP, unlike UDP, is a stream-oriented protocol. In UDP, a process (an application program) sends messages, with predefined boundaries, to UDP for delivery. UDP adds its own header to each of these messages and delivers them to IP for transmission.
  • Each message from the process is called a user datagram and becomes, eventually, one IP datagram. Neither IP nor UDP recognizes any relationship between the datagrams.
  • TCP, on the other hand, allows the sending process to deliver data as a stream of bytes and allows the receiving process to obtain data as a stream of bytes.
  • TCP creates an environment in which the two processes seem to be connected by an imaginary "tube" that carries their data across the Internet.
  • The sending process produces (writes to) the stream of bytes, and the receiving process consumes (reads from) them.


Stream Delivery Services


3. Sending and Receiving Buffers:


TCP segments


4. Full-Duplex Communication: TCP offers full-duplex service, in which data can flow in both directions at the same time. Each TCP endpoint then has a sending and a receiving buffer, and segments move in both directions.

5. Connection-Oriented Service:

TCP, unlike UDP, is a connection-oriented protocol. When a process at site A wants to send and receive data from another process at site B, the following occurs:

  • The two TCPs establish a connection between them.
  • Data are exchanged in both directions.
  • The connection is terminated.
  • Note that this is a virtual connection, not a physical connection.
  • The TCP segment is encapsulated in an IP datagram and can be sent out of order, or lost, or corrupted, and then resent.
  • Each may use a different path to reach the destination. There is no physical connection.


  • TCP creates a stream-oriented environment in which it accepts the responsibility of delivering the bytes in order to the other site.
  • The situation is similar to creating a bridge that spans multiple islands and passing all the bytes from one island to another in one single connection.

6. Reliable Service: TCP is a reliable transport protocol. It uses an acknowledgment mechanism to check the safe and sound arrival of data. We will discuss this feature further in the section on error control.


3.3 TCP Features

  • Numbering System
  • Flow Control
  • Error Control
  • Congestion Control


TCP Features: Numbering System

  • There are two fields called the sequence number and the acknowledgment number.
  • These two fields refer to the byte number and not the segment number.
  • Byte Number: TCP numbers all data bytes that are transmitted in a connection. Numbering is independent in each direction.
  • When TCP receives bytes of data from a process, it stores them in the sending buffer and numbers them.
  • The numbering does not necessarily start from 0.
  • Instead, TCP generates a random number between 0 and 2^32 - 1 for the number of the first byte.
  • For example, if the random number happens to be 1057 and the total data to be sent are 6000 bytes, the bytes are numbered from 1057 to 7056 (1057 + 6000 - 1).


  • Sequence Number: After the bytes have been numbered, TCP assigns a sequence number to each segment that is being sent.
  • The sequence number for each segment is the number of the first byte carried in that segment.

Example 23.3: Suppose a TCP connection is transferring a file of 5000 bytes. The first byte is numbered 10,001. What are the sequence numbers for each segment if the data are sent in five segments, each carrying 1000 bytes?
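Example 23.3 can be worked out directly: each segment's sequence number is the number of the first byte it carries. A short sketch (the helper name is my own):

```python
def segment_sequence_numbers(first_byte: int, total: int, seg_size: int):
    """Sequence number of each segment = number of its first byte."""
    return [first_byte + offset for offset in range(0, total, seg_size)]

# Example 23.3: 5000 bytes, first byte numbered 10,001, five 1000-byte segments
seqs = segment_sequence_numbers(10001, 5000, 1000)
print(seqs)   # [10001, 11001, 12001, 13001, 14001]
```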


Note: The value in the sequence number field of a segment defines the number of the first data byte contained in that segment.


Note: The value of the acknowledgment field in a segment defines the number of the next byte a party expects to receive. The acknowledgment number is cumulative.


Figure 23.16 TCP segment format


Figure 23.17 Control field


Table 23.3 Description of flags in the control field


Other fields in the TCP header

  • Window size: This field defines the size of the window, in bytes, that the other party must maintain. Note that the length of this field is 16 bits, which means that the maximum size of the window is 65,535 bytes.
  • This value is normally referred to as the receiving window (rwnd) and is determined by the receiver.
  • The sender must obey the dictation of the receiver in this case.
  • Checksum: This 16-bit field contains the checksum. The calculation of the checksum for TCP follows the same procedure as the one described for UDP.
  • However, the inclusion of the checksum in the UDP datagram is optional, whereas the inclusion of the checksum for TCP is mandatory.
  • The same pseudo header, serving the same purpose, is added to the segment. For the TCP pseudo header, the value for the protocol field is 6.


Other fields in the TCP header

  • Urgent pointer: This 16-bit field, which is valid only if the urgent flag is set, is used when the segment contains urgent data.
  • It defines the number that must be added to the sequence number to obtain the number of the last urgent byte in the data section of the segment. This will be discussed later in this chapter.
  • Options: There can be up to 40 bytes of optional information in the TCP header.
    • The options field contains extra information or features that can be included when needed. Some examples of options in the TCP header include:
    • Maximum Segment Size (MSS): tells the other end the maximum size of a segment's data payload.
    • Window scale: increases the maximum window size from 65,535 bytes to about 1 gigabyte.
    • NOP: means "No Operation" and is used to pad or separate the different options in the options field.
    • The options field is not used in every packet, but is instead used selectively in specific scenarios, for example in SYN segments.


Connection establishment using three-way handshaking


Note: A SYN segment cannot carry data, but it consumes one sequence number.


Note: A SYN + ACK segment cannot carry data, but does consume one sequence number.


Note: An ACK segment, if carrying no data, consumes no sequence number.


Data transfer


Connection termination using three-way handshaking


Note: The FIN segment consumes one sequence number if it does not carry data.


Note: The FIN + ACK segment consumes one sequence number if it does not carry data.


Half-close


Sliding window


Note: A sliding window is used to make transmission more efficient as well as to control the flow of data so that the destination does not become overwhelmed with data. TCP sliding windows are byte-oriented.


Example 23.4: What is the value of the receiver window (rwnd) for host A if the receiver, host B, has a buffer size of 5000 bytes and 1000 bytes of received and unprocessed data?

Solution: The value of rwnd = 5000 - 1000 = 4000. Host B can receive only 4000 bytes of data before overflowing its buffer. Host B advertises this value in its next segment to A.


Example 23.5: What is the size of the window for host A if the value of rwnd is 3000 bytes and the value of cwnd is 3500 bytes?

Solution: The size of the window is the smaller of rwnd and cwnd, which is 3000 bytes.


Example 23.6: Figure 23.23 shows an unrealistic example of a sliding window. The sender has sent bytes up to 202. We assume that cwnd is 20 (in reality this value is thousands of bytes). The receiver has sent an acknowledgment number of 200 with an rwnd of 9 bytes (in reality this value is thousands of bytes). The size of the sender window is the minimum of rwnd and cwnd, or 9 bytes. Bytes 200 to 202 are sent, but not acknowledged. Bytes 203 to 208 can be sent without worrying about acknowledgment. Bytes 209 and above cannot be sent.
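The arithmetic of Example 23.6 can be checked with a short sketch (the helper name `sender_window` is my own): the send window is the minimum of rwnd and cwnd, anchored at the last acknowledged byte.

```python
def sender_window(last_ack: int, rwnd: int, cwnd: int, next_to_send: int):
    """Return (window size, in-flight range, sendable range, first blocked byte)."""
    window = min(rwnd, cwnd)                          # size of the sender window
    in_flight = (last_ack, next_to_send - 1)          # sent, not yet acknowledged
    sendable = (next_to_send, last_ack + window - 1)  # can be sent without a new ACK
    blocked_from = last_ack + window                  # bytes that cannot be sent yet
    return window, in_flight, sendable, blocked_from

# Example 23.6: ACK number 200, rwnd = 9, cwnd = 20, bytes up to 202 already sent
w, in_flight, sendable, blocked = sender_window(200, 9, 20, 203)
print(w)         # 9
print(in_flight) # (200, 202): sent but not acknowledged
print(sendable)  # (203, 208): may be sent immediately
print(blocked)   # 209: this byte and above cannot be sent
```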


Note: Some points about TCP sliding windows:

  • The size of the window is the lesser of rwnd and cwnd.
  • The source does not have to send a full window's worth of data.
  • The window can be opened or closed by the receiver, but should not be shrunk.
  • The destination can send an acknowledgment at any time as long as it does not result in a shrinking window.
  • The receiver can temporarily shut down the window; the sender, however, can always send a segment of 1 byte after the window is shut down.


Note: ACK segments do not consume sequence numbers and are not acknowledged.


Note: In modern implementations, a retransmission occurs if the retransmission timer expires or three duplicate ACK segments have arrived.
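The three-duplicate-ACK half of this rule can be modeled as a counter on the sender; a toy sketch, not a full TCP implementation:

```python
def should_fast_retransmit(acks) -> bool:
    """Return True once three duplicate ACKs (four ACKs carrying the same
    acknowledgment number) have arrived, per the fast-retransmit rule."""
    dup_count = 0
    last_ack = None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:
                return True      # retransmit the segment starting at `ack`
        else:
            last_ack, dup_count = ack, 0
    return False

print(should_fast_retransmit([4001, 4001, 4001, 4001]))  # True: 3 duplicates
print(should_fast_retransmit([4001, 5001, 6001]))        # False: all new ACKs
```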


Note: No retransmission timer is set for an ACK segment.


Note: Data may arrive out of order and be temporarily stored by the receiving TCP, but TCP guarantees that no out-of-order segment is delivered to the process.


Figure 23.24 Normal operation


Lost segment


Note: The receiver TCP delivers only ordered data to the process.


Fast retransmission


4. Stream Control Transmission Protocol (SCTP)

  • Stream Control Transmission Protocol (SCTP) is a new reliable, message-oriented transport layer protocol.
  • SCTP is designed mostly for Internet applications that have recently been introduced.
  • These new applications, such as IUA (ISDN over IP), M2UA and M3UA (telephony signaling), H.248 (media gateway control), H.323 (IP telephony), and SIP (Session Initiation Protocol, also IP telephony), need a more sophisticated service than TCP can provide.
  • SCTP provides this enhanced performance and reliability.


Comparison of TCP, UDP, and SCTP:

  • Reliability: TCP provides reliable data delivery with error detection, retransmission, and acknowledgement mechanisms; UDP is unreliable, with no error recovery or acknowledgement; SCTP provides reliable delivery with error detection, retransmission, and acknowledgement mechanisms.
  • Connection type: TCP is connection-oriented; UDP is connectionless; SCTP is connection-oriented.
  • Ordering: TCP guarantees ordered delivery of data packets; UDP does not guarantee ordered delivery; SCTP guarantees ordered delivery (within each stream).
  • Speed: TCP is slower due to its reliability mechanisms; UDP is faster due to minimal overhead; SCTP is comparable to TCP and slower than UDP due to its additional functionality.
  • Overhead: TCP has higher overhead due to additional headers and control mechanisms; UDP has lower overhead due to minimal headers; SCTP has moderate overhead.
  • Applications: TCP is used for web browsing, email transfer, and file transfer (FTP); UDP for real-time communication, video streaming, online gaming, and DNS; SCTP for telecommunications, voice and video over IP, and signalling transport.
  • Congestion control: TCP implements congestion control mechanisms to optimize network performance; UDP has none; SCTP implements congestion control mechanisms.
  • Error recovery: TCP detects and retransmits lost or corrupted packets; UDP has no error recovery; SCTP detects and retransmits lost or corrupted packets.
  • Message-oriented delivery: TCP, no; SCTP, yes.
  • Multi-streaming: TCP, no; SCTP supports the simultaneous transmission of multiple streams.
  • Multi-homing: TCP, no; SCTP supports multiple IP addresses for fault tolerance and resilience.

4.1 SCTP: Services

  • Process-to-Process Communication
  • Multiple Streams
  • Multihoming
  • Full-Duplex Communication
  • Connection-Oriented Service
  • Reliable Service


4.2 Stream Control Transmission Protocol (SCTP): Features

  • Transmission Sequence Number
  • Stream Identifier
  • Stream Sequence Number
  • Packets
  • Acknowledgment Number
  • Flow Control
  • Error Control
  • Congestion Control


4.3 Stream Control Transmission Protocol (SCTP):Header


  • Source port address: This is a 16-bit field that defines the port number of the process sending the packet.
  • Destination port address: This is a 16-bit field that defines the port number of the process receiving the packet.
  • Verification tag: This is a number that matches a packet to an association. It prevents a packet from a previous association from being mistaken as a packet in this association. It serves as an identifier for the association and is repeated in every packet during the association. A separate verification tag is used for each direction in the association.
  • Checksum: This 32-bit field contains a CRC-32 checksum. Note that the size of the checksum is increased from 16 (in UDP, TCP, and IP) to 32 bits to allow the use of the CRC-32 checksum.


4.4 SCTP: 4-Way Handshake (or) SCTP Association


4.5 SCTP: Association Establishment Packets

  • The client sends the first packet, which contains an INIT chunk.
  • The server sends the second packet, which contains an INIT ACK chunk.
  • The client sends the third packet, which includes a COOKIE ECHO chunk. This is a very simple chunk that echoes, without change, the cookie sent by the server. SCTP allows the inclusion of data chunks in this packet.
  • The server sends the fourth packet, which includes the COOKIE ACK chunk that acknowledges the receipt of the COOKIE ECHO chunk. SCTP allows the inclusion of data chunks with this packet.


4.6 SCTP: Data Transfer


4.7 SCTP: Packet Transfer

1. The client sends the first packet carrying two DATA chunks with TSNs 7105 and 7106.

2. The client sends the second packet carrying two DATA chunks with TSNs 7107 and 7108.

3. The third packet is from the server. It contains the SACK chunk needed to acknowledge the receipt of DATA chunks from the client. Contrary to TCP, SCTP acknowledges the last in-order TSN received, not the next expected. The third packet also includes the first DATA chunk from the server with TSN 121.

4. After a while, the server sends another packet carrying the last DATA chunk with TSN 122, but it does not include a SACK chunk in the packet because the last DATA chunk received from the client was already acknowledged.

5. Finally, the client sends a packet that contains a SACK (Selective Acknowledgment) chunk acknowledging the receipt of the last two DATA chunks from the server.
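The contrast in step 3 (SCTP acknowledges the last in-order TSN received, while TCP acknowledges the next byte expected) can be sketched as a toy cumulative-TSN computation:

```python
def sctp_cum_tsn(received, first_expected: int) -> int:
    """Return the cumulative TSN ack: the last TSN received in order."""
    have = set(received)
    cum = first_expected - 1
    while cum + 1 in have:
        cum += 1
    return cum

# Client sent TSNs 7105-7108 and the server received all of them in order.
print(sctp_cum_tsn([7105, 7106, 7107, 7108], 7105))  # 7108

# With 7107 missing, the SACK still acknowledges only up to 7106.
print(sctp_cum_tsn([7105, 7106, 7108], 7105))        # 7106
```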


Categories:

  • Multihoming Data Transfer: Multihoming allows both ends to define multiple IP addresses for communication. However, only one of these addresses can be defined as the primary address; the rest are alternative addresses. The primary address is defined during association establishment. The primary address of an end is determined by the other end. In other words, a source defines the primary address for a destination.

Multistream Delivery: The delivery of the data chunks is controlled by stream identifiers (SIs) and stream sequence numbers (SSNs). SCTP can support multiple streams: the sender process can define different streams, and a message can belong to one of these streams. Each stream is assigned a stream identifier (SI) that uniquely defines that stream.


Fragmentation: Although SCTP shares this term with IP, fragmentation in IP and in SCTP belongs to different levels: the former at the network layer, the latter at the transport layer.

  • SCTP preserves the boundaries of the message from process to process when creating a DATA chunk from a message if the size of the message (when encapsulated in an IP datagram) does not exceed the MTU (Maximum Transfer Unit) of the path. The size of an IP datagram carrying a message can be determined by adding the size of the message, in bytes, to the four overheads: data chunk header, necessary SACK chunks, SCTP general header, and IP header.
  • If the total size exceeds the MTU, the message needs to be fragmented.


4.8 SCTP: Association Termination

  • In SCTP, like TCP, either of the two parties involved in exchanging data (client or server) can close the connection.
  • However, unlike TCP, SCTP does not allow a half-close situation.
  • If one end closes the association, the other end must stop sending new data. If any data are left over in the queue of the recipient of the termination request, they are sent and the association is closed.
  • Association termination uses three packets (SHUTDOWN from one end, SHUTDOWN ACK from the other, and SHUTDOWN COMPLETE), and several termination scenarios are possible.


4.9 Flow Control: Receiver Site

  • The receiver has one buffer (queue) and three variables.
  • The queue holds the received data chunks that have not yet been read by the process.
  • The first variable holds the last TSN received, cumTSN.
  • The second variable holds the available buffer size, winsize.
  • The third variable holds the last accumulative acknowledgment, lastACK.


4.10 Flow Control: Sender Site

1. A chunk pointed to by curTSN can be sent if the size of the data is less than or equal to the quantity rwnd (receiver window) - inTransit.

  • After sending the chunk, the value of curTSN is incremented by 1 and now points to the next chunk to be sent.
  • The value of inTransit is incremented by the size of the data in the transmitted chunk.

2. When a SACK is received, the chunks with a TSN less than or equal to the cumulative TSN in the SACK are removed from the queue and discarded.

  • The sender does not have to worry about them any more. The value of inTransit is reduced by the total size of the discarded chunks.
  • The value of rwnd is updated with the value of the advertised window in the SACK.
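The two rules above can be combined into a toy sender model (variable names follow the text; the chunk queue and sizes are illustrative, and real SCTP tracks much more state):

```python
class SctpSender:
    """Toy model of SCTP sender-side flow control, following the two rules."""

    def __init__(self, chunks: dict, rwnd: int):
        self.chunks = chunks          # {TSN: chunk size in bytes}
        self.curTSN = min(chunks)     # next chunk to send
        self.rwnd = rwnd              # advertised receiver window
        self.inTransit = 0            # bytes sent but not yet acknowledged

    def try_send(self):
        """Rule 1: send only if chunk size <= rwnd - inTransit."""
        size = self.chunks.get(self.curTSN)
        if size is None or size > self.rwnd - self.inTransit:
            return None
        sent = self.curTSN
        self.curTSN += 1
        self.inTransit += size
        return sent

    def on_sack(self, cum_tsn: int, advertised_rwnd: int):
        """Rule 2: discard chunks up to cum_tsn, update inTransit and rwnd."""
        for tsn in [t for t in self.chunks if t <= cum_tsn]:
            self.inTransit -= self.chunks.pop(tsn)
        self.rwnd = advertised_rwnd

s = SctpSender({7105: 500, 7106: 500, 7107: 500}, rwnd=1000)
print(s.try_send(), s.try_send(), s.try_send())  # 7105 7106 None (window full)
s.on_sack(7106, advertised_rwnd=1000)
print(s.try_send())                              # 7107
```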


5. Congestion Control and Quality of Service

Data Traffic

The main focus of congestion control and quality of service is data traffic. In congestion control we try to avoid traffic congestion. In quality of service, we try to create an appropriate environment for the traffic. So, before talking about congestion control and quality of service, we discuss the data traffic itself.

Topics discussed in this section: Traffic Descriptors, Traffic Profiles


Traffic descriptors


Three traffic profiles


Types of Traffic:

1. Voice Traffic: Voice over Internet Protocol (VoIP), phone calls.

2. Video Traffic: Video streaming, conferencing, online movies.

3. Text Traffic: Email, messaging, chat applications.

4. File Traffic: File transfers, downloads, uploads.

5. Real-time Traffic: Online gaming, financial transactions.

Traffic Characteristics:

1. Volume: Amount of data transmitted.

2. Speed: Data transfer rate (bandwidth).

3. Latency: Packet transmission delay.

4. Jitter: Packet delay variation.

5. Packet Loss: Percentage of lost packets.


5.1 Congestion

  • Congestion occurs when too many data packets are sent at the same time, causing packet loss, delay, and decreased performance.
  • Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle).
  • Congestion control refers to the mechanisms and techniques to control the congestion and keep the load below the capacity.

Topics discussed in this section: Network Performance


Congestion Control Introduction:

  • When too many packets are present in (a part of) the subnet, performance degrades. This situation is called congestion.
  • As traffic increases too far, the routers are no longer able to cope and they begin losing packets.
  • At very high traffic, performance collapses completely and almost no packets are delivered.
  • Reasons for Congestion:
    • Slow processors.
    • A high stream of packets sent from one of the senders.
    • Insufficient memory.
    • Too much router memory can also add to congestion: packets queue so long that they time out and are retransmitted (Nagle, 1987).
    • Low-bandwidth lines.


General Principles of Congestion Control

  • Three Step approach to apply congestion control:
  • Monitor the system .
    • detect when and where congestion occurs.
  • Pass information to where action can be taken.
  • Adjust system operation to correct the problem.
  • How to monitor the subnet for congestion.
    1. percentage of all packets discarded for lack of buffer space,
    2. average queue lengths,
    3. number of packets that time out and are retransmitted,
    4. average packet delay
    5. standard deviation of packet delay (jitter Control).
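The monitoring metrics listed above can be sketched as a small Python routine (the function name and log format are illustrative assumptions, not part of any router software):

```python
# A minimal sketch of computing the congestion-monitoring metrics above
# from a log of per-packet measurements.
from statistics import mean, stdev

def congestion_metrics(delays_ms, queue_samples, discarded, delivered, retransmitted):
    total = discarded + delivered
    return {
        "discard_pct": 100.0 * discarded / total,  # packets dropped for lack of buffers
        "avg_queue_len": mean(queue_samples),      # average queue length
        "retransmissions": retransmitted,          # timed-out and resent packets
        "avg_delay_ms": mean(delays_ms),           # average packet delay
        "jitter_ms": stdev(delays_ms),             # std deviation of delay (jitter)
    }

m = congestion_metrics([10, 12, 30, 11], [3, 5, 9, 4],
                       discarded=2, delivered=98, retransmitted=1)
```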

95 of 129

Congestion Control Techniques:

1. Windowing: Regulates data transmission rate

2. Slow Start: Gradually increases transmission rate

3. Congestion Avoidance: Detects congestion, reduces transmission rate

4. Fast Retransmit: Quickly retransmits lost packets

5. Fast Recovery: Recovers from congestion
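How slow start and congestion avoidance adjust the sender's congestion window can be sketched as follows (this is a simplified model, not real TCP code; the round structure and threshold value are assumptions for illustration):

```python
# A minimal sketch of slow start, congestion avoidance, and the reaction
# to a detected loss, expressed as one congestion-window update per round trip.
def next_cwnd(cwnd, ssthresh, loss):
    """Return the new congestion window (in segments) after one round trip."""
    if loss:                      # congestion detected: multiplicative decrease
        return max(1, cwnd // 2)
    if cwnd < ssthresh:           # slow start: exponential growth
        return cwnd * 2
    return cwnd + 1               # congestion avoidance: additive increase

cwnd = 1
for loss in [False, False, False, False, True, False]:
    cwnd = next_cwnd(cwnd, ssthresh=8, loss=loss)
```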

96 of 129

Queues in a router

97 of 129


Packet delay and throughput as functions of load

98 of 129

5.2 Congestion control

Congestion control refers to techniques and mechanisms that can either prevent congestion, before it happens, or remove congestion, after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).

Open-Loop Congestion Control
Closed-Loop Congestion Control

Topics discussed in this section:

99 of 129


Congestion control categories

100 of 129

Warning Bit or Backpressure:

  • DECNET (Digital Equipment Corporation's network architecture for connecting minicomputers) signaled the warning state by setting a special bit in the packet's header.
  • The source then cut back on traffic.
  • The source monitored the fraction of acknowledgements with the bit set and adjusted its transmission rate accordingly.
  • As long as the warning bits continued to flow in, the source continued to decrease its transmission rate. When they slowed to a trickle, it increased its transmission rate.
  • Note that since every router along the path could set the warning bit, a source increased its rate only when no router on the path was in trouble.

101 of 129


Backpressure method for alleviating congestion

102 of 129

Choke Packets:

  • The router sends a choke packet back to the source host, giving it the destination found in the packet.
  • The original packet is tagged (a header bit is turned on) so that it will not generate any more choke packets farther along the path and is then forwarded in the usual way.
  • When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination by X percent.
  • See the next figure: the flow starts reducing at step 5.
  • The reduction goes from 25% to 50% to 75%, and so on.
  • The router maintains thresholds and, based on them, issues a
    • Mild warning
    • Stern warning
    • Ultimatum.
  • Variation: use queue length or buffer occupancy instead of line utilization as the trigger signal. Note that choke packets themselves also add traffic.
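The router-side escalation can be sketched as a simple threshold check (the utilization cut-offs here are illustrative assumptions; real routers would tune them):

```python
# A minimal sketch of choosing a choke-packet warning level for a line
# based on its measured utilization (0.0 to 1.0).
def choke_action(utilization):
    """Pick the warning level, or None if the line is below the threshold."""
    if utilization < 0.50:
        return None             # below threshold: no choke packet sent
    if utilization < 0.75:
        return "mild warning"
    if utilization < 0.90:
        return "stern warning"
    return "ultimatum"
```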

103 of 129

5.3 Congestion Prevention Policies

Policies that affect congestion.


104 of 129

  • Systems are designed to minimize congestion in the first place, rather than letting it happen and reacting after the fact.

Data Link Layer:

    • The retransmission policy is concerned with how fast a sender times out and retransmits. A jumpy sender that times out quickly and retransmits all outstanding packets using go-back-n puts a heavier load on the system than a leisurely sender using selective repeat.
    • Handling of out-of-order packets depends on the buffering policy. If receivers routinely discard all out-of-order packets, those packets must be transmitted again later, creating extra load; i.e., prefer selective repeat to go-back-n.
    • Acknowledgement packets generate extra traffic. Solution: piggybacking.
    • Flow control: a tight scheme (e.g., a small window) reduces the data rate and helps fight congestion; a loose scheme does the opposite.

105 of 129

Network Layer:

    • The choice between virtual circuits and datagrams affects congestion, since many congestion control algorithms work only with virtual-circuit subnets.
    • Packet queuing and service policy concerns whether routers have one queue per input line, one queue per output line, or both, and whether service is round-robin or priority based.
    • Discard policy is the rule telling which packet is dropped when there is no space.
    • Routing policy has already been discussed at length.
    • Packet lifetime management deals with how long a packet may live before being discarded.

Transport Layer:

    • The same issues occur as in the data link layer.
    • If the timeout interval is too short, extra packets will be sent unnecessarily. If it is too long, congestion will be reduced but the response time will suffer whenever a packet is lost.

106 of 129

6. QUALITY OF SERVICE

  • Quality of service (QoS) is an internetworking issue that has been discussed more than defined. We can informally define quality of service as something a flow seeks to attain.

or

  • Quality of Service (QoS) ensures a network provides guaranteed performance and reliability for critical applications.

107 of 129

Flow characteristics

108 of 129

Reliability

Error detection and correction: The Transport Layer employs mechanisms like checksums to detect errors in transmitted data. If errors are detected, it requests retransmission of the corrupted data.

Sequencing: It ensures that data packets are received in the correct order by numbering them and reordering them if necessary.

Acknowledgment: The receiver acknowledges the receipt of data packets, allowing the sender to know if the data was successfully delivered.

Flow control: It regulates the rate at which data is sent to prevent the receiver from being overwhelmed.

Congestion control: It manages network traffic to avoid congestion and ensure efficient resource utilization.
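The checksum mechanism mentioned above can be illustrated with the 16-bit ones'-complement sum that transport protocols such as TCP and UDP use for error detection (this sketch covers only the data bytes, not the protocol's full pseudo-header):

```python
# A minimal sketch of the 16-bit ones'-complement Internet checksum
# used by transport protocols for error detection.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad to an even byte count
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # sum 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF                         # ones' complement of the sum
```

A receiver sums the data plus the transmitted checksum; an error-free result is zero.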

109 of 129

Reliability

End-to-End Delivery: The Transport Layer establishes a connection between the sender and receiver applications.

It ensures that data is delivered from the source application to the destination application, regardless of the underlying network topology.

Multiple Connections: The Transport Layer allows multiple simultaneous connections between different applications on the same host. Each connection is identified by a unique port number.

Data Integrity: The Transport Layer ensures that data is delivered without modifications or corruption.

It uses checksums and other mechanisms to detect and correct errors.

Session Management: The Transport Layer manages the establishment, maintenance, and termination of connections between applications. It also handles issues like connection timeouts and resets.

110 of 129

Delay

Source-to-destination delay is another flow characteristic. Again, applications can tolerate delay in different degrees.

1.Propagation Delay: Time for signal to travel through medium.

2. Transmission Delay: Time to transmit data packet.

3. Processing Delay: Time to process packet at router/switch.

4. Queuing Delay: Time spent in buffer waiting for transmission.

5. Network Delay: Total delay across entire network.
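As a small worked example of the first two components (the link parameters below are assumed values chosen for easy arithmetic):

```python
# Delay of a 1500-byte packet over a 10 Mb/s link that is 2000 km long.
packet_bits = 1500 * 8      # packet size in bits
bandwidth = 10e6            # link rate: 10 Mb/s
distance = 2_000_000        # link length in metres
speed = 2e8                 # signal speed in the medium (~2/3 the speed of light)

transmission = packet_bits / bandwidth   # time to push all bits onto the link
propagation = distance / speed           # time for a bit to cross the link
total = transmission + propagation       # ignoring processing and queuing delay
```

Here transmission delay is 1.2 ms, propagation delay is 10 ms, so the total is 11.2 ms.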

111 of 129

Jitter

Jitter in the transport layer refers to the variation in the time it takes for data packets to travel from the sender to the receiver. It's essentially the inconsistency or irregularity in the arrival times of packets.
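One simple way to quantify this variation, assuming packets are sent at evenly spaced intervals, is to look at how far the inter-arrival gaps deviate from their mean (the function below is an illustrative sketch, not a standard metric):

```python
# A minimal sketch of measuring jitter as the worst deviation of the
# packet inter-arrival gaps from their average.
def jitter(arrival_times):
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    avg = sum(gaps) / len(gaps)
    return max(abs(g - avg) for g in gaps)

# Packets sent every 20 ms; the fourth one arrives late:
print(jitter([0, 20, 40, 75, 95]))   # 11.25
```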

Causes of Jitter

Network Congestion: When the network becomes overloaded, packets may experience delays or even be dropped, leading to varying arrival times.

Routing Changes: If the optimal path for data to travel changes, packets may take different routes with varying delays.

Queueing Delays: Packets may be queued at routers or switches, causing delays depending on the queue length and the priority of the packets.

Wireless Interference: In wireless networks, interference from other devices or environmental factors can cause packets to be retransmitted or delayed.

112 of 129

Jitter

Impact of Jitter on Applications

Voice and Video Communications: Jitter can lead to audio and video quality issues, such as echo, distortion, or pixelation.

Real-time Games: Jitter can cause lag or delays in gameplay, affecting the user experience.

Streaming Media: Jitter can result in buffering or interruptions during video or audio streaming.

Mitigating Jitter

Quality of Service (QoS): Network administrators can implement QoS mechanisms to prioritize certain types of traffic, such as real-time voice or video, and ensure that they receive lower jitter.

Buffering: Applications can use buffering to store incoming data and smooth out variations in arrival times.

Jitter Compensation: Some applications can employ algorithms to compensate for jitter by adjusting playback rates or introducing delays.
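The buffering idea above can be sketched as a fixed playout delay: the receiver holds the first packet for a while so that later, jitter-delayed packets still arrive before their scheduled play time (times and delays below are illustrative assumptions):

```python
# A minimal sketch of a receiver-side jitter (playout) buffer.
def playout_times(arrivals, period, buffer_delay):
    """Schedule packet i to play at first_arrival + buffer_delay + i * period."""
    base = arrivals[0] + buffer_delay
    return [base + i * period for i in range(len(arrivals))]

# Packets sent every 20 ms arrive with jitter; a 15 ms buffer absorbs it.
arrivals = [0, 22, 38, 71, 80]
plays = playout_times(arrivals, period=20, buffer_delay=15)
late = [a > p for a, p in zip(arrivals, plays)]   # which packets miss their slot?
```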

113 of 129

Bandwidth

  • Bandwidth is a network communications link's capacity to transmit data from one point to another in a specific amount of time.
  • Different applications need different bandwidths.
  • In video conferencing we need to send millions of bits per second to refresh a color screen, while the total number of bits in an e-mail may not even reach a million.

114 of 129

QoS Applications

1. VoIP (Voice over Internet Protocol): Real-time voice communication.

2. Video conferencing: Real-time video communication.

3. Online gaming: Real-time data exchange.

4. Cloud computing: Reliable data transfer.

5. Financial transactions: Secure and reliable data transfer.

115 of 129

7.TECHNIQUES TO IMPROVE QoS

We tried to define QoS in terms of its characteristics. In this section, we discuss some techniques that can be used to improve the quality of service. We briefly discuss four common methods: scheduling, traffic shaping, admission control, and resource reservation.

Scheduling
Traffic Shaping
Resource Reservation

Admission Control

Topics discussed in this section:

116 of 129

Scheduling


117 of 129

FIFO queue

118 of 129

Priority queuing

119 of 129

Weighted fair queuing

120 of 129

Random Early Detection:

The idea is to discard packets before all the buffer space is actually exhausted. A popular algorithm for doing this is RED (Random Early Detection).

  • Sources respond to lost packets by slowing down.
  • Lost packets are mostly due to buffer overruns rather than transmission errors.
  • The idea is that there is time for action to be taken before it is too late.
  • To determine when to start discarding? For this, routers maintain a running average of their queue lengths.
  • When the average queue length on some line exceeds a threshold, the line is said to be congested and action is taken.
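The bullets above can be sketched as follows (the thresholds, drop probability, and averaging weight are illustrative assumptions; real deployments tune them per link):

```python
# A minimal sketch of RED: keep an exponentially weighted running average
# of the queue length and start dropping packets probabilistically once
# it crosses a threshold, before the buffer is actually full.
import random

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.avg = 0.0
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight

    def should_drop(self, current_queue_len):
        # The running average smooths out short bursts.
        self.avg = (1 - self.weight) * self.avg + self.weight * current_queue_len
        if self.avg < self.min_th:
            return False                     # no congestion: accept the packet
        if self.avg >= self.max_th:
            return True                      # severe congestion: always drop
        # In between: drop probability rises linearly toward max_p.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

q = RedQueue()
```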

121 of 129

Traffic Shaping


122 of 129

Leaky bucket

  • Imagine a bucket with a small hole at the bottom. Data packets enter the bucket at varying rates, but they can only exit at a fixed rate, which represents the network's bandwidth.
  • Incoming packets are added to the bucket.
  • If the bucket overflows (i.e., it reaches its capacity), the excess packets are discarded (dropped).
  • Packets exit the bucket at a constant rate, regardless of the incoming rate, smoothing out bursts of traffic
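The behavior above can be sketched in Python (the capacity, output rate, and tick-based timing are illustrative assumptions, not part of any standard API):

```python
# A minimal sketch of the leaky bucket: packets enter at any rate, are
# dropped when the bucket is full, and leave at a fixed rate per tick.
from collections import deque

class LeakyBucket:
    def __init__(self, capacity, out_rate):
        self.bucket = deque()
        self.capacity = capacity      # max packets the bucket can hold
        self.out_rate = out_rate      # packets released per tick

    def arrive(self, packet):
        if len(self.bucket) < self.capacity:
            self.bucket.append(packet)
            return True               # accepted
        return False                  # bucket full: packet is dropped

    def tick(self):
        # Release at most out_rate packets, regardless of arrival bursts.
        n = min(self.out_rate, len(self.bucket))
        return [self.bucket.popleft() for _ in range(n)]

lb = LeakyBucket(capacity=3, out_rate=1)
accepted = [lb.arrive(p) for p in range(5)]   # burst of 5: only 3 fit
```

Note how the burst is smoothed: whatever arrives, at most one packet leaves per tick.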

123 of 129

Leaky bucket

124 of 129

Leaky bucket implementation

125 of 129

Advantages:

1. Simple implementation.

2. Effective traffic shaping.

3. Reduces network congestion.

Disadvantages:

1. Can drop packets.

2. Does not handle bursty traffic well.

126 of 129

Token bucket

  • The token bucket uses tokens to control the amount of data that can be transmitted. Each token allows the transmission of a certain number of bytes or packets.
  • Tokens are generated at a fixed rate and added to the bucket.
  • When a packet is ready to be sent, it can only be sent if there are enough tokens in the bucket. Sending a packet consumes tokens.
  • If the bucket is full, excess tokens can be discarded, allowing for burst traffic up to a certain limit.
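The token bucket can be sketched the same way (rate, capacity, and tick-based timing are again illustrative assumptions):

```python
# A minimal sketch of the token bucket: tokens accumulate at a fixed rate
# up to the bucket size, and a packet may be sent only by spending tokens,
# so saved-up tokens allow a burst up to the bucket capacity.
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per tick
        self.capacity = capacity      # max tokens (limits the burst size)
        self.tokens = capacity        # start with a full bucket

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, cost=1):
        if self.tokens >= cost:
            self.tokens -= cost       # sending consumes tokens
            return True
        return False                  # not enough tokens: wait or drop

tb = TokenBucket(rate=1, capacity=3)
burst = [tb.try_send() for _ in range(5)]   # first 3 succeed, then starved
tb.tick()                                    # one token refills per tick
```

Unlike the leaky bucket, which drops excess packets, the token bucket only withholds permission to send, letting saved tokens pay for a burst later.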

127 of 129

Token bucket

128 of 129

Advantages:

  • 1. More flexible than Leaky Bucket.
  • 2. Handles bursty traffic.
  • 3. Provides better QoS.

Disadvantages:

  • 1. More complex implementation.
  • 2. Requires more resources.

129 of 129

Resource Reservation and Admission Control

  • Buffer, CPU Time, Bandwidth are the resources that can be reserved for particular flows for particular time to maintain the QoS.
  • Mechanism used by routers to accept or reject flows based on flow specifications is what we call Admission Control.
