1 of 10

Computer Networks

LECTURE # 13

CONNECTION ESTABLISHMENT, FLOW AND CONGESTION CONTROL

2 of 10

1. Understanding Connection-Oriented vs. Connectionless Communication

Connection-Oriented Communication: This type of communication establishes a connection before any data is sent. The Transmission Control Protocol (TCP) is a prime example: it provides reliability, in-order delivery, and error checking.

Connectionless Communication: In this method, data packets are sent without establishing a connection first. User Datagram Protocol (UDP) exemplifies this with lower latency but no reliability guarantees.

2. The Role of TCP in Connection Establishment

TCP enables reliable communication through a process known as the three-way handshake during connection establishment. Let’s break this down step by step:

3 of 10

2.1 Three-Way Handshake Process

  1. SYN (Synchronize): The initiating device (client) sends a SYN packet to the receiving device (server) to initiate a connection, including its initial sequence number.
  2. SYN-ACK (Synchronize-Acknowledge): The server acknowledges receipt of the SYN packet by sending back a SYN-ACK packet. This packet contains the server's own initial sequence number and an acknowledgment number, which is the client’s initial sequence number plus one.
  3. ACK (Acknowledge): The client sends an ACK packet back to the server, confirming the receipt of the SYN-ACK packet. Once this packet is received, the connection is established, and data transmission can begin.

This three-way handshake not only establishes the connection but also synchronizes the sequence numbers for both the client and server, ensuring that data packets are tracked accurately during transmission.
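
As a sketch, the exchange above can be modeled in a few lines of Python. The function name and the concrete sequence numbers here are illustrative only; real TCP stacks choose randomized initial sequence numbers (ISNs):

```python
# Minimal sketch of the three-way handshake as a list of message exchanges.
# Sequence numbers are illustrative; real TCP picks random ISNs.

def three_way_handshake(client_isn, server_isn):
    """Return the packets exchanged during connection setup."""
    packets = []
    # 1. Client -> Server: SYN carrying the client's initial sequence number.
    packets.append(("SYN", {"seq": client_isn}))
    # 2. Server -> Client: SYN-ACK with the server's ISN and ack = client ISN + 1.
    packets.append(("SYN-ACK", {"seq": server_isn, "ack": client_isn + 1}))
    # 3. Client -> Server: ACK acknowledging the server's ISN + 1.
    packets.append(("ACK", {"ack": server_isn + 1}))
    return packets

for kind, fields in three_way_handshake(client_isn=100, server_isn=300):
    print(kind, fields)
```

Note how each side acknowledges the other's ISN plus one; that mutual acknowledgment is exactly what synchronizes the two sequence-number spaces.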

4 of 10

3. Timeout and Retransmissions

In addition to the handshake, TCP is equipped with mechanisms to handle lost packets. If an ACK is not received within a specific timeout period, the client or server will retransmit the data or SYN packet. This feature ensures reliability, a key aspect of TCP.
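
The retransmission loop can be sketched as follows. Here `send` and `wait_for_ack` are hypothetical stand-ins for real socket I/O, and real TCP also backs off the timeout exponentially between attempts, which this sketch omits:

```python
# Sketch of TCP-style retransmission: resend until an ACK arrives
# or the retry limit is exhausted.

def send_with_retries(send, wait_for_ack, max_retries=3):
    """Return the number of attempts used, or raise if every attempt times out."""
    for attempt in range(1, max_retries + 1):
        send()                       # (re)transmit the segment
        if wait_for_ack():           # True = ACK received before the timeout
            return attempt
    raise TimeoutError("no ACK after %d attempts" % max_retries)

# Example: the first two ACKs are "lost", the third arrives.
acks = iter([False, False, True])
attempts = send_with_retries(send=lambda: None, wait_for_ack=lambda: next(acks))
print(attempts)  # 3
```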

4. Connection Termination

Connection termination in TCP is as important as establishment. The process typically involves a four-step handshake, which can be summarized as follows:

  1. FIN (Finish): One side (let’s say the client) sends a FIN packet to terminate the connection.
  2. ACK: The other side (the server) responds with an ACK to acknowledge the FIN.
  3. FIN: The server then sends its FIN packet to the client.
  4. ACK: The client responds with an ACK, completing the termination process.

This orderly shutdown helps to ensure that all data is transmitted and acknowledged before the connection is fully closed.
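
For illustration, the four steps can be listed programmatically. The function name and sequence numbers are invented for the example:

```python
# Sketch of the four-step close, mirroring the list above.

def four_way_close(client_seq, server_seq):
    """Return the four segments of an orderly TCP teardown."""
    return [
        ("FIN", "client->server", {"seq": client_seq}),        # 1. client asks to close
        ("ACK", "server->client", {"ack": client_seq + 1}),    # 2. server acknowledges
        ("FIN", "server->client", {"seq": server_seq}),        # 3. server closes its side
        ("ACK", "client->server", {"ack": server_seq + 1}),    # 4. client acknowledges
    ]

for segment in four_way_close(client_seq=10, server_seq=20):
    print(segment)
```

In practice, a stack with no more data to send may combine steps 2 and 3 into a single FIN-ACK segment.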

5 of 10

5. Error Handling and Flow Control

TCP also employs various mechanisms for error handling (using checksums) and flow control (using the sliding window protocol). These ensure smooth and reliable data transmission even over unpredictable networks.
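
The error-detection side is concrete enough to show directly: the TCP checksum is the 16-bit ones'-complement sum defined in RFC 1071. A minimal Python version (the function name is ours):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum used by TCP/IP (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                               # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]         # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)      # fold any carry back in
    return ~total & 0xFFFF                            # ones' complement of the sum

print(hex(internet_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7")))  # 0x220d
```

A receiver verifies by checksumming the data together with the transmitted checksum; the result is zero if no error was detected.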

6 of 10

1. Understanding Flow Control

1.1. Definition

Flow control is a technique used to manage the rate of data transmission between sender and receiver, ensuring that the sender does not overwhelm the receiver with more data than it can handle at any given time.

1.2. Importance

Prevent Data Loss: By regulating the flow, we reduce the chance of data loss due to buffer overflow at the receiver.

Ensure Smooth Communication: Flow control helps maintain a smooth and efficient communication channel between devices.

1.3. Techniques

Here are some common flow control mechanisms:

Stop-and-Wait Protocol: After sending each message, the sender waits for an acknowledgment (ACK) from the receiver before sending the next one.

    • Pros: Simple to implement.
    • Cons: Inefficient as the sender spends a lot of time waiting.

Sliding Window Protocol: This allows the sender to send multiple frames before needing an acknowledgment. The number of unacknowledged frames is controlled by the window size.

    • Pros: More efficient than Stop-and-Wait, utilizes the capacity of the network better.
    • Cons: Requires more complex management of buffers and sequence numbers.
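
A back-of-the-envelope comparison makes the efficiency difference concrete. Ignoring losses and assuming each window of frames costs one round trip (a simplification), Stop-and-Wait is simply a sliding window of size 1:

```python
import math

def round_trips(num_frames, window_size):
    """Round trips needed to deliver all frames, assuming no loss:
    up to `window_size` frames go out per round trip."""
    return math.ceil(num_frames / window_size)

print(round_trips(100, 1))   # Stop-and-Wait: 100 round trips
print(round_trips(100, 8))   # Sliding window of 8: 13 round trips
```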

7 of 10

2. Understanding Congestion Control

2.1. Definition

Congestion control refers to mechanisms that ensure the network can handle the amount of data being transmitted without overwhelming its capacity, which can lead to packet loss, delays, and inefficient communication.

2.2. Importance

Network Efficiency: Prevents bottlenecks and ensures optimal utilization of network resources.

Quality of Service: Helps maintain the performance level of applications relying on the network, especially in real-time communications like VoIP and video streaming.

8 of 10

2.3. Congestion Control Strategies

Congestion control strategies can generally be divided into two categories: preventive and reactive.

Preventive Control:

    • Traffic Shaping: Regulates data flow based on predefined traffic profiles to avoid congestion.
    • Rate Limiting: Controls the amount of data sent over a time period to prevent overwhelming network resources.
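
One classic preventive mechanism covering both bullets is the token bucket, which enforces an average rate while permitting bounded bursts. The sketch below takes an explicit `now` timestamp instead of reading the clock, to keep it deterministic; the class and parameter names are our own:

```python
class TokenBucket:
    """Token-bucket shaper: tokens accrue at `rate` per second, up to
    `capacity`; a packet costing `size` tokens is sent only if enough
    tokens are available, otherwise it is dropped or delayed."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # bucket starts full
        self.last = 0.0             # time of the last check

    def allow(self, size, now):
        # Refill according to elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=100, capacity=200)   # 100 tokens/s, burst of 200
print(tb.allow(150, now=0.0))  # True  (burst allowance)
print(tb.allow(100, now=0.0))  # False (only 50 tokens left)
print(tb.allow(100, now=1.0))  # True  (100 tokens refilled after 1 s)
```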

Reactive Control:

    • TCP Congestion Control Algorithms: TCP employs algorithms that react to perceived network congestion. Key algorithms include:
      • Slow Start: Begins transmission at a low rate and increases exponentially until the congestion threshold is reached.
      • Congestion Avoidance: After reaching the threshold, it increases the transmission rate linearly.
      • Fast Retransmit and Fast Recovery: These mechanisms detect packet loss quickly and adjust the transmission rate accordingly to mitigate congestion.
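
The slow-start and congestion-avoidance phases can be traced with a toy model. Here the window is measured in segments and grows once per round trip, with no losses (a simplification; real TCP reacts to loss by cutting the window):

```python
def cwnd_trace(ssthresh, rounds):
    """Congestion window per round trip: exponential growth during slow
    start, then +1 segment per round trip in congestion avoidance."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2        # slow start: double every round trip
        else:
            cwnd += 1        # congestion avoidance: linear growth
    return trace

print(cwnd_trace(ssthresh=8, rounds=8))  # [1, 2, 4, 8, 9, 10, 11, 12]
```

The printed trace shows the characteristic shape: doubling up to the threshold, then a gentle linear climb.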

9 of 10

3. Flow Control vs. Congestion Control

While flow control focuses on the sender-receiver relationship, keeping a fast sender from overrunning a slow receiver, congestion control addresses network-wide conditions that affect all users. In the OSI model, both are implemented mainly at the transport layer, but they respond to different signals:

  • Flow Control: Implemented at the transport layer (e.g., TCP's receiver-advertised window).
  • Congestion Control: Also operates at the transport layer, but reacts to conditions in the network layer (e.g., loss or delay along IP paths).

4. Challenges in Flow and Congestion Control

  • Dynamic Network Conditions: Networks are often unpredictable, with varying bandwidth, latency, and packet loss, making it challenging to maintain efficient flow and congestion control.
  • Scalability: As the number of users increases, maintaining performance becomes increasingly complex.
  • Different Traffic Types: Different applications have different requirements (e.g., streaming video vs. email), and managing these diverse needs adds complexity.

10 of 10

Thank You