LECTURE 11
Congestion Control and Quality of Service
Chapter 24 (Forouzan, Data Communications and Networking, 4th Edition)

Contents
• I. Data Traffic
• II. Congestion
• III. Congestion Control
• IV. Two Examples
• V. Quality of Service
• VI. Techniques to Improve QoS
• VII. Integrated Services
• VIII. Differentiated Services
• IX. QoS in Switched Networks
I. Data Traffic
Figure 24.1 Traffic descriptors
The main focus of congestion control and quality of service is data traffic. In congestion control we try to avoid traffic congestion. In quality of service, we try to create an appropriate environment for the traffic. So, before talking about congestion control and quality of service, we discuss the data traffic itself.

Topics discussed in this section: Traffic Descriptor, Traffic Profiles
• Traffic descriptors are quantitative values that represent a data flow
• Average data rate = amount of data / time
• Peak data rate and maximum burst size determine the buffer space needed in network equipment
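A tiny Python sketch of the first descriptor above; the data amount and interval are made-up sample values, not figures from the lecture:

```python
# Average data rate = amount of data / time (illustrative values only).

def average_data_rate(bits_sent, seconds):
    """Average data rate in bits per second."""
    return bits_sent / seconds

# A flow that sends 30,000,000 bits over 10 seconds averages 3 Mbps:
print(average_data_rate(30_000_000, 10))
```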
Traffic Profiles
Figure 24.2 Three traffic profiles
• Constant-bit-rate (CBR): Average data rate is the same as peak data rate —No maximum burst size
• Variable-bit-rate (VBR): Rate of data flow changes gradually —More difficult to handle than CBR
• Bursty: Rate of data flow changes suddenly in a very short time
  — Average bit rate is very different from peak bit rate
  — Maximum burst size is significant
  — Normally needs to be reshaped before entering the network
II. Congestion
Figure 24.3 Queues in a router
Congestion in a network may occur if the load on the network—the number of packets sent to the network— is greater than the capacity of the network—the number of packets a network can handle. Congestion control refers to the mechanisms and techniques to control the congestion and keep the load below the capacity.
Topics discussed in this section: Network Performance

• Congestion occurs in any system that involves waiting or queuing
  — Ex. Too many cars or accidents on the road
• Upon arrival at a router, a packet undergoes three steps:
  1. Placed in an input queue
  2. Processed to identify the route to the destination
  3. Placed in the appropriate output queue
• How can the input and output queues overflow?
Network Performance
Figure 24.4 Packet delay and throughput as functions of load
• Delay and throughput are two factors that measure network performance
• Delay also has a negative effect on network load
  — When a packet is delayed, the acknowledgement may not reach the source in time
  — The source may then retransmit the packet unnecessarily
• Minimum delay = propagation delay + transmission delay
• Queuing time → infinity when load > capacity
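To illustrate why queuing time grows without bound as the load approaches capacity, here is a sketch using the standard M/M/1 queuing formula W = 1 / (mu - lam). The formula and the sample rates are illustrative additions, not material from the slides:

```python
# M/M/1 sketch (not from the lecture): 'mu' is the service rate and
# 'lam' the arrival rate, both in packets per unit time.

def mm1_delay(lam, mu):
    if lam >= mu:                # load at or above capacity
        return float("inf")     # queuing time grows without bound
    return 1.0 / (mu - lam)

for load in (0.5, 0.9, 0.99):   # delay explodes as load -> capacity
    print(load, mm1_delay(load * 100, 100))
```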
• Throughput = packets passing through the network in a unit of time
  — Increases proportionally with load while the load is below capacity
  — Unstable when routers discard packets and sources retransmit them, which also worsens the congestion situation

III. Congestion Control
Congestion control refers to techniques and mechanisms that can either prevent congestion, before it happens, or remove congestion, after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).

Topics discussed in this section: Open-Loop Congestion Control, Closed-Loop Congestion Control

Figure 24.5 Congestion control categories
• Open-loop: prevent congestion from happening
• Closed-loop: alleviate congestion after it happens
Open-Loop Congestion Control
• Retransmission Policy
  — Retransmitting too early may increase congestion
• Window Policy
  — A Selective Repeat window is better for congestion control
• Acknowledgement Policy
  — Slow down the source by not sending acknowledgements
  — Combining multiple acknowledgements in one transmission also reduces network load
• Discarding Policy
  — Ex. Discarding less sensitive packets in audio transmission
• Admission Policy
  — Check resource availability before admitting a flow into the network
  — Reject a virtual-circuit setup if it may cause congestion

Closed-Loop Congestion Control: Backpressure
Figure 24.6 Backpressure method for alleviating congestion
• A congested node stops receiving data from the immediate upstream nodes
• Node-to-node congestion control
  — Starts at the congested node and propagates back to the source
• Can be used only in a virtual-circuit network, e.g., X.25
  — Each node knows the upstream node of a given flow
• Node III drops some packets and tells node II to slow down
  — The pressure on node III is moved backward to the source to remove congestion
Other Closed-Loop Congestion Control

Figure 24.7 Choke packet
• A choke packet warns the source
  — Node III sends a choke packet to the source to inform it of congestion
  — The warning message goes directly to the source, skipping intermediate nodes
  — Ex. ICMP source-quench message
• Implicit Signaling
  — The source guesses from other symptoms that there is congestion somewhere in the network
  — Ex. Lost or delayed acknowledgement
• Explicit Signaling
  — Similar to a choke packet, but the signal is included in a data packet
  — Backward signaling: a bit is set in a packet moving in the direction opposite to the congestion
    • Warns the source
  — Forward signaling: a bit is set in a packet moving in the direction of the congestion
    • Warns the receiver, which can use a policy such as slowing down acknowledgements
IV. Two Examples
TCP Congestion Control (1)
To better understand the concept of congestion control, let us give two examples: one in TCP and the other in Frame Relay.
Topics discussed in this section: Congestion Control in TCP, Congestion Control in Frame Relay

• TCP window size = minimum (rwnd, cwnd)
  — rwnd = available buffer space at the receiver
  — cwnd = congestion window size
• TCP congestion control consists of three phases:
  1. Slow start
     • Initial cwnd = 1 maximum segment size (MSS)
     • MSS is determined during connection establishment
     • cwnd increases by 1 MSS each time an ACK is received
     • Enter the next phase when cwnd >= ssthresh (slow-start threshold)
  2. Congestion avoidance
  3. Congestion detection
TCP Congestion Control (2)
Figure 24.8 Slow start, exponential increase
1. Slow Start
   — In the slow-start algorithm, the size of the congestion window increases exponentially until it reaches a threshold
   — cwnd increases more slowly if the receiver combines ACK messages
2. Congestion Avoidance
   — In the congestion avoidance algorithm, the size of the congestion window increases additively until congestion is detected
   — Also called the additive phase
   — Each time the whole window of segments is acknowledged (one round), cwnd increases by 1 MSS
TCP Congestion Detection
Figure 24.9 Congestion avoidance, additive increase
• cwnd only increases if the complete window of segments is acknowledged
• Congestion detection occurs when a segment must be retransmitted
  — Timer times out: strong possibility of congestion
    • ssthresh = 0.5 * current window size
    • cwnd = 1 MSS
    • Begin slow-start phase
  — Three duplicate ACKs are received: weaker reaction
    • ssthresh = 0.5 * current window size
    • cwnd = ssthresh
    • Begin congestion-avoidance phase
    • Also called Fast Recovery

Figure 24.10 TCP congestion policy summary
TCP reacts to congestion detection in one of the following ways:
- If detection is by time-out, a new slow-start phase starts.
- If detection is by three duplicate ACKs, a new congestion-avoidance phase starts.
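The three phases and the two detection reactions can be sketched as a small state machine. Units are MSS; the event sequence at the bottom is invented for illustration:

```python
# Minimal sketch of the TCP congestion policy (slow start,
# congestion avoidance, time-out, and three duplicate ACKs).

class TcpCwnd:
    def __init__(self, ssthresh=16):
        self.cwnd = 1              # slow start begins at 1 MSS
        self.ssthresh = ssthresh

    def on_round_acked(self):
        if self.cwnd < self.ssthresh:
            self.cwnd *= 2         # slow start: exponential per round
        else:
            self.cwnd += 1         # congestion avoidance: +1 MSS per round

    def on_timeout(self):
        self.ssthresh = max(self.cwnd // 2, 1)
        self.cwnd = 1              # restart slow start

    def on_three_dup_acks(self):
        self.ssthresh = max(self.cwnd // 2, 1)
        self.cwnd = self.ssthresh  # fast recovery: congestion avoidance

tcp = TcpCwnd(ssthresh=16)
for _ in range(5):                 # cwnd: 1 -> 2 -> 4 -> 8 -> 16 -> 17
    tcp.on_round_acked()
tcp.on_timeout()                   # ssthresh = 8, cwnd = 1
print(tcp.cwnd, tcp.ssthresh)
```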
Figure 24.11 Congestion example
• Maximum window size = 32 segments
• After a time-out at cwnd = 20, the new ssthresh = 0.5 * 20 = 10
Frame Relay Congestion Control
• Based on 2 bits in each frame for explicit congestion notification
  — Backward explicit congestion notification (BECN) bit
  — Forward explicit congestion notification (FECN) bit

Figure 24.12 BECN
• Warns the sender of network congestion
• A switch can set the BECN bit in response frames traveling from the receiver
• A switch can also use a predefined connection (DLCI = 1023) to send special frames

Figure 24.13 FECN
• Warns the receiver of network congestion
• Assumes that flow control is done at a higher level

Figure 24.14 Four cases of congestion
V. Quality of Service
Figure 24.15 Flow characteristics
Quality of service (QoS) is an internetworking issue that has been discussed more than defined. We can informally define quality of service as something a flow seeks to attain.
Topics discussed in this section: Flow Characteristics, Flow Classes

• Reliability
  — Some applications cannot tolerate packet loss
• Delay
  — Real-time applications need minimum delay
• Jitter
  — Jitter = variation in delay for packets belonging to the same flow
  — A multimedia streaming application does not care whether the delay is short or long, as long as it is the same for all packets
VI. Techniques to Improve QoS

In Section 24.5 we tried to define QoS in terms of its characteristics. In this section, we discuss some techniques that can be used to improve the quality of service. We briefly discuss four common methods: scheduling, traffic shaping, admission control, and resource reservation.
Topics discussed in this section: Scheduling, Traffic Shaping, Resource Reservation, Admission Control

Scheduling
• A good scheduling technique treats different flows in a fair and appropriate manner
  — FIFO (first-in, first-out) queuing
  — Priority queuing
  — Weighted fair queuing
Figure 24.16 FIFO queue
• Ex. Waiting for a bus at a bus stop
• If the average arrival rate > average processing rate, the queue will fill up and new packets will be discarded

Figure 24.17 Priority queuing
• Packets are first assigned to a priority class
  — Each priority class has its own queue
• Packets in the highest-priority queue are processed first
  — Packets in a lower-priority queue may never be transmitted if there is a continuous flow of high-priority packets
  — Also called starvation

Figure 24.18 Weighted fair queuing
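The priority-queuing behavior above, including the starvation risk, can be sketched as:

```python
# Priority queuing sketch: each class has its own queue, and the
# highest-priority non-empty queue is always served first, which is
# exactly why a continuously busy high class starves the low class.

from collections import deque

queues = {0: deque(), 1: deque()}   # 0 = high priority, 1 = low

def enqueue(packet, priority):
    queues[priority].append(packet)

def dequeue():
    for prio in sorted(queues):      # scan classes from high to low
        if queues[prio]:
            return queues[prio].popleft()
    return None                      # all queues empty

enqueue("low-1", 1)
enqueue("high-1", 0)
print(dequeue())                     # high-priority packet goes first
```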
• Queues are weighted based on priority
• Queues are processed in a round-robin fashion
  — The number of packets selected from each queue is based on the corresponding weight

Traffic Shaping
• Controls the amount and rate of traffic sent to the network
• Leaky bucket
  — Water leaks from a hole in the bucket at a constant rate
  — The input rate can vary, but the output rate remains constant
  — Smooths out bursty traffic by storing it in the bucket first
• Token bucket
  — Allows idle hosts to accumulate credit for the future in the form of tokens
  — A host can send bursty data as long as the bucket is not empty
• Leaky bucket and token bucket can be combined
  — Which one first?
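The weighted-fair-queuing bullets above can be sketched as one round-robin pass; the weights and packet names are illustrative:

```python
# Weighted fair queuing sketch: in each round, up to 'weight'
# packets are taken from each queue in round-robin order.

from collections import deque

def wfq_round(queues, weights):
    """One round-robin pass; returns packets in service order."""
    served = []
    for q, w in zip(queues, weights):
        for _ in range(w):           # take at most 'w' packets
            if q:
                served.append(q.popleft())
    return served

q1 = deque(["a1", "a2", "a3"])
q2 = deque(["b1", "b2"])
print(wfq_round([q1, q2], [3, 1]))   # 3 packets from q1, then 1 from q2
```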
Figure 24.19 Leaky bucket
• Example: a bursty input of 12 Mbps for 2 s followed by 2 Mbps for 3 s
  — 12 Mbps x 2 s = 24 Mbits; 2 Mbps x 3 s = 6 Mbits; total = 30 Mbits
• The leaky bucket smooths this to a fixed 3 Mbps over the whole 10 s
  — 3 Mbps x 10 s = 30 Mbits

Figure 24.20 Leaky bucket implementation

A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may drop packets if the bucket is full.
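The leaky bucket implementation of Figure 24.20 can be sketched as follows: a counter is reset to n each clock tick, and packets are sent only while they fit in the remaining counter. The packet sizes are made-up sample values:

```python
# Leaky bucket tick sketch (illustrative sizes, in bytes).

def leaky_bucket_tick(packets, n):
    """Process one clock tick: send packets while the counter allows."""
    sent, counter = [], n
    for size in packets:
        if size <= counter:
            counter -= size
            sent.append(size)
        else:
            break                # remaining packets wait for the next tick
    return sent

print(leaky_bucket_tick([200, 400, 450], 1000))   # [200, 400]
```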
Figure 24.21 Token bucket
The token bucket allows bursty traffic at a regulated maximum rate.

Resource Reservation & Admission Control
• Resource Reservation
  — Quality of service is improved if resources are reserved beforehand
  — Buffer space, bandwidth, CPU time, etc.
• Admission Control
  — A router can check the flow specification before processing that flow
  — Reject a flow if it needs more resources than are currently available
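The token bucket shown in Figure 24.21 can be sketched as follows; the rate, capacity, and tick counts are illustrative:

```python
# Token bucket sketch: tokens accumulate at 'rate' per tick up to
# 'capacity', so an idle host builds credit and may later burst.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate, self.capacity, self.tokens = rate, capacity, 0

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def send(self, n_cells):
        if n_cells <= self.tokens:   # bursty send allowed while
            self.tokens -= n_cells   # the bucket is not empty
            return True
        return False                 # not enough credit yet

tb = TokenBucket(rate=1, capacity=10)
for _ in range(5):
    tb.tick()                        # idle for 5 ticks -> 5 tokens
print(tb.send(4), tb.tokens)
```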
VII. Integrated Services
Two models have been designed to provide quality of service in the Internet: Integrated Services and Differentiated Services. We discuss the first model here. Integrated Services is a flow-based QoS model designed for IP.

Topics discussed in this section: Signaling, Flow Specification, Admission, Service Classes, RSVP, Problems with Integrated Services

IntServ: Flow Establishment
• Integrated Services (IntServ) is flow-based
  — The user needs to create a flow (like a virtual circuit) from the source to the destination
  — The user also needs to inform all routers of the resource requirements
• Uses the Resource Reservation Protocol (RSVP) to signal reservations for a particular flow
• A flow specification consists of two parts:
  — Rspec (resource specification) defines the resources that the flow needs to reserve: buffer and bandwidth
  — Tspec (traffic specification) defines the traffic characterization of the flow
IntServ: Service Classes
• Guaranteed Service Class
  — For real-time traffic that needs a guaranteed minimum end-to-end delay
  — Guarantees that packets will arrive within a certain delivery time and will not be discarded, if the flow traffic stays within the boundary of Tspec
  — End-to-end delay and data rate need to be specified by the application
• Controlled-Load Service Class
  — For applications that can accept some delay but are sensitive to an overloaded network and to the danger of losing packets
  — Ex. File transfer, email, or Internet access

RSVP
• Creates a flow and makes the resource reservation
• RSVP is designed for multicasting
  — Multimedia traffic (which needs RSVP) often uses multicasting
• The receiver (not the sender) makes the reservation
• Based on two main messages
  — Path messages
  — Resv messages
Figure 24.22 Path messages
• Receivers need to know the path traveled by packets before a reservation is made
• A Path message
  — Travels from the sender and reaches all receivers in the multicast path
  — Stores the necessary information for the receivers along the way
  — Is sent in a multicast environment

Figure 24.23 Resv messages
• A Resv message travels toward the sender (upstream)
  — Makes the resource reservation on the routers that support RSVP

Figure 24.24 Reservation merging
• Resources are not reserved for each receiver in a flow
  — Reservations are merged
  — A reservation is made for the largest value of all requests
  — The largest value should be able to handle all requests of the same multicast flow
Figure 24.25 Reservation styles
• If there is more than one flow, the router needs a reservation style:
• Wild Card Filter: a single reservation for all senders
  — Based on the largest request
  — Used if flows from different senders do not occur at the same time
• Fixed Filter: a distinct reservation for each flow
  — Used if flows from different senders occur at the same time
• Shared Explicit: a single reservation for a set of flows
  — Not all flows
IntServ: Drawbacks
• Reservation state needs to be refreshed periodically, or it will be removed
  — Also called a soft state
• Routers need to maintain information for each flow
  — Not scalable in a large network
• Only provides two types of services
  — Guaranteed and controlled-load

VIII. Differentiated Services

Differentiated Services (DS or DiffServ) was introduced by the IETF (Internet Engineering Task Force) to handle the shortcomings of Integrated Services.

Topics discussed in this section: DS Field
DiffServ vs IntServ
Differentiated Services is a class-based QoS model designed for IP.
• Main processing is done at the network edge
  — Solves the scalability issue
• Uses per-class service instead of per-flow service
  — Routers route packets based on class of service
  — Solves the service-type limitation problem

Figure 24.26 DS field
• Replaces the TOS field in IPv4
• Set at the network boundary (network edge)
  — By the host or the first (boundary) router
• DSCP = DiffServ Code Point
  — 6-bit field that defines the per-hop behavior (PHB)
  — DE PHB = default PHB, or best-effort delivery
  — EF PHB = expedited forwarding PHB: provides low latency and ensured bandwidth, similar to a virtual connection
  — AF PHB = assured forwarding PHB: provides high assurance of delivery, as long as the class traffic does not exceed the node's traffic profile
• CU = currently unused (2 bits)
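A quick sketch of how the DS byte splits into the 6-bit DSCP and the 2 CU bits; the sample byte encodes DSCP 46, the standard EF code point:

```python
# The DS field replaces the IPv4 TOS byte: DSCP in the upper 6 bits,
# CU (currently unused) in the lower 2 bits.

def split_ds_field(ds_byte):
    dscp = ds_byte >> 2        # upper 6 bits: per-hop behavior code
    cu = ds_byte & 0b11        # lower 2 bits: currently unused
    return dscp, cu

print(split_ds_field(0b10111000))   # (46, 0): expedited forwarding
```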
Figure 20.5 IPv4 datagram format
Figure 24.27 Traffic conditioner
• Used by a DiffServ node or host
• The meter checks whether the flow matches the negotiated traffic profile
• The marker can down-mark a packet (lowering the flow class)
  — If the flow does not match the profile
• The shaper reshapes the traffic if it is not compliant with the negotiated profile
• The dropper discards packets if the flow severely violates the negotiated profile
  — Like a shaper with no buffer
IX. QoS in Switched Networks

Let us now discuss QoS as used in two switched networks: Frame Relay and ATM. These two networks are virtual-circuit networks, so QoS attributes can be negotiated during connection setup rather than with a signaling protocol such as RSVP.
Topics discussed in this section: QoS in Frame Relay, QoS in ATM

Figure 24.28 Relationship between traffic control attributes
• QoS attributes are set during negotiation between the user and the network
  — Bc = committed burst size
  — CIR = committed information rate
  — Be = excess burst size
Frame Relay: QoS Attributes
• Access rate
  — Depends on the physical bandwidth of the link
• Committed burst size (Bc)
  — Maximum number of bits in a predefined time period that the network is committed to transfer
    • Without discarding any frame or setting the DE bit
    • Ex. 400 kbits for a period of 4 seconds
• Committed information rate (CIR)
  — Average transmission rate in bits per second
• Excess burst size (Be)
  — Maximum number of bits in excess of Bc that a user can send during a predefined time period
  — The network is committed to transfer these bits only if there is no congestion

Figure 24.29 User rate in relation to Bc and Bc + Be
• The first switch that receives the frames from the user sets the DE bit for frames that exceed Bc
• Other switches will discard frames so marked (by the first switch) upon congestion
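Using the numbers from the Bc example above, the committed information rate follows as CIR = Bc / T:

```python
# CIR = Bc / T: 400 kbits committed over 4 seconds gives 100 kbps.

def cir(bc_bits, period_seconds):
    return bc_bits / period_seconds

print(cir(400_000, 4))   # 100000.0 bits per second
```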
Figure 24.30 Service classes in ATM
Figure 24.31 Relationship of service classes to the total capacity of the network
• Constant bit rate (CBR): for real-time audio or video services
  — Uses AAL1
• Variable bit rate (VBR): for audio or video services with compression
  — Can be either real-time (VBR-RT) or non-real-time (VBR-NRT)
• Available bit rate (ABR): guarantees a minimum rate
  — The rate can be higher if network capacity is available
• Unspecified bit rate (UBR): best-effort service
  — Uses AAL5
ATM: Traffic Characteristics
• Negotiated at the time of the contract between user and network
• Sustained cell rate (SCR)
  — Average cell rate over a long time interval
• Peak cell rate (PCR)
  — Sender's maximum cell rate
• Minimum cell rate (MCR)
  — Minimum rate guaranteed by the network
• Cell variation delay tolerance (CVDT)
  — Variation in cell transmission times
  — Difference between the minimum and maximum delays in delivering the cells

ATM: Network Characteristics
• Cell loss ratio (CLR)
  — Fraction of cells lost during transmission, or delivered too late
• Cell transfer delay (CTD)
  — Average time for a cell to travel from source to destination
• Cell delay variation (CDV)
  — Difference between the maximum and minimum CTD
• Cell error ratio (CER)
  — Fraction of cells delivered in error