Congestion Control

Problem: When too many packets are transmitted through a network, congestion occurs.

At very high traffic, performance collapses completely, and almost no packets are delivered.


 The bursty nature of traffic is the root cause: when part of the network can no longer cope with a sudden increase in traffic, congestion builds up. Other factors, such as lack of bandwidth, misconfiguration, and slow routers, can also cause congestion.

Open-Loop Congestion Control

Leaky Bucket / Token Bucket

  • Leaky bucket: consists of a finite queue
    – When a packet arrives, if there is room in the queue it joins the queue; otherwise, it is discarded
    – At every (fixed) clock tick, one packet is transmitted unless the queue is empty
  • It eliminates bursts completely: packets are passed to the subnet at a constant rate
  • This may be a bit overdone, and packets can also get lost (when the bucket is full)
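The two rules above (join-or-discard on arrival, drain one packet per tick) can be sketched in Python; the queue size is an illustrative parameter, not a value from the text:

```python
from collections import deque

class LeakyBucket:
    """Leaky bucket: a finite queue drained at one packet per clock tick."""

    def __init__(self, queue_size):
        self.queue = deque()
        self.queue_size = queue_size   # assumed illustrative capacity
        self.sent = 0
        self.discarded = 0

    def arrive(self, packet):
        # If there is room in the queue the packet joins it; otherwise discard.
        if len(self.queue) < self.queue_size:
            self.queue.append(packet)
        else:
            self.discarded += 1

    def tick(self):
        # At every clock tick, transmit one packet unless the queue is empty.
        if self.queue:
            self.sent += 1
            return self.queue.popleft()
        return None
```

A burst of five packets into a bucket of size three loses two packets and drains the rest at one packet per tick, illustrating both the smoothing and the loss noted above.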
  • Token bucket: Tokens are added at a constant rate. For a packet to be transmitted, it must capture and destroy one token
    (a) shows that the bucket holds three tokens with five packets waiting to be transmitted
    (b) shows that three packets have gotten through but the other two are stuck waiting for tokens to be generated
  • Unlike the leaky bucket, the token bucket allows saving tokens, up to the maximum bucket size n. This means that bursts of up to n packets can be sent at once, giving a faster response to sudden bursts of input
  • An important difference between the two algorithms: the token bucket throws away tokens when the bucket is full but never discards packets, while the leaky bucket discards packets when the bucket is full.
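The token-bucket rules can be sketched the same way; the capacity and token rate are illustrative parameters:

```python
class TokenBucket:
    """Token bucket: tokens accrue at a constant rate up to the bucket
    capacity n; each transmitted packet must capture and destroy one token."""

    def __init__(self, capacity, tokens_per_tick=1):
        self.capacity = capacity       # assumed illustrative capacity n
        self.rate = tokens_per_tick
        self.tokens = capacity         # a full bucket lets a burst of n go at once

    def tick(self):
        # Tokens beyond capacity are thrown away; packets are never discarded.
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self):
        # A packet is transmitted only if it can consume a token.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                   # otherwise the packet waits
```

With a capacity of three tokens and five waiting packets, three get through and two are stuck until more tokens are generated, matching situations (a) and (b) described above.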
  • Let token bucket capacity be C (bits), token arrival rate ρ  (bps), maximum output rate M (bps), and burst length S (s)
    – During burst length of S (s), tokens generated are ρ S (bits), and output burst contains a maximum of C + ρ S (bits)
    – Also, the output in a maximum burst of length S (s) is M · S (bits); thus C + ρ S = MS, or S = C / (M − ρ)
  • Token bucket still allows large bursts, even though the maximum burst length S can be regulated by careful selection of ρ  and M
    • One way to reduce the peak rate is to put a leaky bucket of a larger rate (to avoid discarding packets) after the token bucket.
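The burst-length formula can be checked with a small numeric example (the values of C, ρ, and M below are assumed for illustration, not taken from the text):

```python
# Illustrative parameters (assumed values):
C   = 8_000_000      # token bucket capacity, bits
rho = 40_000_000     # token arrival rate, bits/s
M   = 200_000_000    # maximum output rate, bits/s

# From C + rho * S = M * S, the maximum burst length is:
S = C / (M - rho)    # seconds

print(S)  # 0.05 s: the sender can emit at the full rate M for 50 ms
```

Increasing ρ or shrinking the gap between ρ and M lengthens the permitted burst, which is why careful selection of ρ and M regulates S.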


Closed Loop Congestion Control

Congestion Control in Virtual Circuits

  • These are closed-loop techniques designed for virtual-circuit subnets, which are connection oriented: during connection set-up, something can be done to help control congestion.
  • The basic principle is obvious: When setting up a virtual circuit, make sure that congestion can be avoided.
    • Admission control: Once congestion has been signaled, no more new virtual circuits can be set up until the problem has gone away. This is crude but simple and easy to do
    • Select alternative routes to avoid the part of the network that is overloaded, i.e., temporarily rebuild your view of the network



e.g. Normally, when router A sets up a connection to B, it would pass through one of the two congested routers, as this would result in a minimum-hop route (4 and 5 hops, respectively). To avoid the congestion, the subnet is temporarily redrawn with the congested routers omitted. A virtual circuit can then be established that avoids the congestion.

  • Negotiate the quality of the connection in advance, so that the network provider can reserve buffers and other resources that are guaranteed to be there.

Choke Packets

  • This closed-loop congestion control is applicable to both virtual circuits and datagram subnets


Choke packets in WANs: (a) basic, (b) hop-by-hop

  • Basic idea: the router checks the status of each output line; if it is too heavily used, the router sends a choke packet to the source. The host is assumed to be cooperative and will slow down.
    • When the source gets a choke packet, it cuts its rate by half, and ignores further choke packets referring to the same destination for a fixed period.
    • After that period has expired, the host listens for more choke packets. If one arrives, the host cuts its rate by half again. If no choke packet arrives, the host may increase its rate.
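The source's rate-halving rule can be sketched as follows (the time units and the length of the ignore period are assumed for illustration):

```python
def apply_choke_packets(initial_rate, choke_times, ignore_period=2):
    """Sketch of a source reacting to choke packets for one destination.

    choke_times: arrival times of choke packets (assumed time units).
    After honoring a choke packet, the host ignores further choke packets
    for `ignore_period` time units, then listens again.
    """
    rate = initial_rate
    ignore_until = float("-inf")
    for t in sorted(choke_times):
        if t >= ignore_until:
            rate /= 2                       # cut the sending rate by half
            ignore_until = t + ignore_period  # ignore chokes for a fixed period
    return rate
```

For example, with choke packets arriving at times 0, 1, and 3 and an ignore period of 2, the one at time 1 is ignored, so the rate is halved twice rather than three times.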
  • An uncooperative, cheating host may grab all the bandwidth while cooperative, honest hosts get penalized: use weighted fair queueing to enforce cooperation and assign priorities
  • Problem with basic choke packets: for high-speed WANs, the return path for a choke packet may be so long that too many packets have already been sent before the source notices the congestion and takes action. Example: a host in San Francisco (router A) is sending heavy traffic to a host in New York (router D), and D is in trouble. It sends a choke packet to A. Note how long it takes for A to reduce its rate and eventually relieve D.
    • Solution: use “push-back” or hop-by-hop choke packets
      When the choke packet reaches router F, F forwards it to router E and also reduces its own traffic to D. Thus D’s problem is “pushed back” to F, and D gets relief quickly. This process is repeated along the route until the “ball” is back at the “root” source A.
  • Hop-by-Hop Choke Packets
    This technique is a refinement of the basic choke-packet method. At high speeds over long distances, sending a choke packet all the way back to the source does not help much, because by the time the choke packet reaches the source, a lot of packets destined for the same destination will already have left it.
    To remedy this, hop-by-hop choke packets are used. In this approach, the choke packet takes effect at every intermediate router it passes through.
    As soon as the choke packet reaches a router on its path back to the source, that router curtails its traffic toward the congested node. Intermediate nodes must therefore dedicate a few more buffers to incoming traffic: the outflow through a node is curtailed immediately when the choke packet arrives, but the inflow is only curtailed once the choke packet reaches the preceding node on the path.


Functioning of hop-by-hop choke packets:

(a) Heavy traffic between nodes P and Q,
(b) Node Q sends a choke packet to P,
(c) The choke packet reaches R, and the flow between R and Q is curtailed. The choke packet then reaches P, and P reduces its outgoing flow.
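The hop-by-hop propagation in (a)-(c) can be sketched as below; the node names, rate values, and reduction factor are assumed for illustration:

```python
def hop_by_hop_choke(path, rates, factor=0.5):
    """Sketch of hop-by-hop choke packets along a path such as P -> R -> Q,
    where the last node is congested.

    The choke packet travels backwards (Q -> R -> P); each router that
    receives it immediately curtails its own outgoing flow toward the
    congested node, so relief arrives hop by hop instead of only after
    the packet reaches the source.

    rates: maps each upstream node to its current sending rate (assumed units).
    Returns the order in which nodes reacted and the updated rates.
    """
    relief_order = []
    for node in reversed(path[:-1]):   # choke packet moves back toward the source
        rates[node] *= factor          # this node curtails its outflow at once
        relief_order.append(node)
    return relief_order, rates
```

On the path P -> R -> Q, R reacts before P, which is exactly why the congested node Q gets relief quickly in step (c).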





Protocol Structure - IP/IPv4 Header (Internet Protocol version 4)


  • Version - the version of IP currently used.
  • IP Header Length (IHL) - datagram header length. Points to the beginning of the data. The minimum value for a correct header is 5.
  • Type-of-Service- Indicates the quality of service desired by specifying how an upper-layer protocol would like a current datagram to be handled, and assigns datagrams various levels of importance. This field is used for the assignment of Precedence, Delay, Throughput and Reliability.
  • Total Length - Specifies the length, in bytes, of the entire IP packet, including the data and header. The maximum length that can be specified by this field is 65,535 bytes. Typically, hosts are prepared to accept datagrams up to 576 bytes.
  • Identification- Contains an integer that identifies the current datagram. This field is assigned by sender to help receiver to assemble the datagram fragments.
  • Flags - Consists of a 3-bit field of which the two low-order (least-significant) bits control fragmentation. The low-order bit (MF, More Fragments) specifies whether the packet is the last fragment in a series of fragmented packets. The middle bit (DF, Don't Fragment) specifies whether the packet may be fragmented. The high-order bit is reserved and not used.
  • Fragment Offset - This 13-bit field indicates the position of the fragment’s data relative to the beginning of the data in the original datagram, which allows the destination IP process to properly reconstruct the original datagram.
  • Time-to-Live - A counter that is decremented by each router that forwards the datagram; when it reaches zero, the datagram is discarded. This keeps packets from looping endlessly.
  • Protocol- Indicates which upper-layer protocol receives incoming packets after IP processing is complete.
  • Header Checksum- Helps ensure IP header integrity. Since some header fields change, e.g., Time To Live, this is recomputed and verified at each point that the Internet header is processed.
  • Source Address-Specifies the sending node.
  • Destination Address-Specifies the receiving node.
  • Options- Allows IP to support various options, such as security.
  • Data - Contains upper-layer information.
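As a sketch, the fixed 20-byte part of the header described above can be unpacked with Python's struct module (the field layout follows RFC 791; the Options field and any bytes beyond the first 20 are not handled here):

```python
import struct

def parse_ipv4_header(data):
    """Parse the fixed 20-byte portion of an IPv4 header into a dict.

    A minimal sketch; field names follow the list above. Options (present
    when IHL > 5) are not parsed.
    """
    (version_ihl, tos, total_length, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": version_ihl >> 4,
        "ihl": version_ihl & 0x0F,          # header length in 32-bit words (min 5)
        "tos": tos,                          # Type-of-Service
        "total_length": total_length,        # header + data, in bytes
        "identification": ident,
        "flags": flags_frag >> 13,           # 3 bits: reserved, DF, MF
        "fragment_offset": flags_frag & 0x1FFF,  # low 13 bits
        "ttl": ttl,
        "protocol": proto,                   # e.g. 6 = TCP, 17 = UDP
        "header_checksum": checksum,
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }
```

For instance, a header beginning with byte 0x45 decodes to version 4 with an IHL of 5, i.e. the minimum 20-byte header with no options.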


