Congestion Avoidance:
Tail Drop
Tail drop causes problems in the network because it is not “smart” about how it drops traffic. Once the hardware and software queues become full, it simply starts dropping packets regardless of application, destination, or need (a minimal configuration sketch follows the list below).
Two problems result from tail drop: TCP global synchronization and TCP starvation.
- TCP Global Synchronization — When tail drop occurs, all TCP-based applications go into slow start at the same time, bandwidth use drops, and the queues clear. Once the queues clear, each flow grows its TCP send window until its packets start being dropped again. The result is synchronized waves of peak utilization followed by sudden drops in utilization.
- TCP Starvation — TCP tries to behave well in the network by backing off when its packets are dropped (falling back into slow start), but UDP has no such mechanism. When TCP flows slow down in response to drops, UDP flows do not, so the queues fill with UDP packets and TCP traffic is starved of bandwidth.
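The software output queue whose overflow triggers tail drop is the interface hold queue. As a minimal sketch, its depth can be tuned per interface; the interface name and depth here are assumptions for illustration only. Note that a deeper hold queue merely delays tail drop, it does not prevent it.
interface Serial0/0
hold-queue 40 out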
Statistically, RED drops more packets from aggressive flows than from slower flows, and only the flows whose packets are dropped slow down, which avoids global synchronization.
RED measures the average queue depth, rather than the instantaneous depth, to decide whether or not to drop packets; because the average changes more slowly than the actual depth, RED reacts to sustained congestion instead of momentary bursts.
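IOS computes this average as an exponentially weighted moving average. As a sketch based on Cisco's documentation, where n defaults to 9 and is tuned with the random-detect exponential-weighting-constant interface command:
average = (old_average * (1 - 2^-n)) + (current_depth * 2^-n)
A larger n makes the average respond more slowly to bursts.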
RED has three configuration parameters: minimum threshold, maximum threshold, and mark probability denominator (MPD).
Once the average depth exceeds the minimum threshold, packets begin to be dropped at random; once it exceeds the maximum threshold, every new packet is dropped, which is effectively tail drop. Everything in between is governed by the mark probability denominator.
The mark probability denominator sets the maximum percentage of packets discarded by RED. IOS calculates the maximum percentage using the formula 1/MPD. For instance, an MPD of 10 yields a calculated value of 1/10, meaning the maximum discard rate is 10 percent.
The following table is from page 425 of the QoS Exam Certification Guide and shows how the minimum threshold, maximum threshold and queue depth all interact.
| Average Queue Depth Versus Thresholds | Action | Name |
| --- | --- | --- |
| Average depth < minimum threshold | No packets dropped. | No Drop |
| Minimum threshold < average depth < maximum threshold | A percentage of packets is dropped; the percentage grows linearly as the average depth grows. | Random Drop |
| Average depth > maximum threshold | All new packets are dropped, like tail drop. | Full Drop |
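To make the linear growth concrete, here is a worked example assuming the default precedence 0 profile from the tables below (minimum 20, maximum 40, MPD 10) and the standard linear interpolation between the thresholds:
discard probability at average depth D = (D - min) / (max - min) * (1 / MPD)
at D = 30: (30 - 20) / (40 - 20) * 10% = 5%
At an average depth of 40 the probability reaches the full 10 percent, and above 40 every new packet is dropped.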
WRED behaves the same as RED, except that WRED differentiates traffic by IP precedence or DSCP value. The ONT book and the QoS Exam book both cover the same WRED example, just at different levels of detail. Personally, if you understand the concepts from the chart above with the min and max thresholds, the following charts will explain everything. When an interface starts to become congested, WRED discards lower-priority traffic with a higher probability. By default in IOS, lower-precedence flows have smaller minimum thresholds and therefore begin dropping packets before higher-precedence flows. For example, once the average queue depth passes 22 packets (the default minimum threshold for precedence 1), packets with precedence 0 and precedence 1 are both being dropped.
The tables below are taken from pages 430 and 431 of the QoS Exam Certification Guide.
This table is for IP Precedence based WRED defaults.
| Precedence | Minimum Threshold | Maximum Threshold | Mark Probability Denominator | Calculated Maximum Percent Discarded |
| --- | --- | --- | --- | --- |
| 0 | 20 | 40 | 10 | 10% |
| 1 | 22 | 40 | 10 | 10% |
| 2 | 24 | 40 | 10 | 10% |
| 3 | 26 | 40 | 10 | 10% |
| 4 | 28 | 40 | 10 | 10% |
| 5 | 31 | 40 | 10 | 10% |
| 6 | 33 | 40 | 10 | 10% |
| 7 | 35 | 40 | 10 | 10% |
| RSVP | 37 | 40 | 10 | 10% |
This table is for DSCP-based WRED defaults.
| DSCP | Minimum Threshold | Maximum Threshold | Mark Probability Denominator | Calculated Maximum Percent Discarded |
| --- | --- | --- | --- | --- |
| AF11, AF21, AF31, AF41 | 33 | 40 | 10 | 10% |
| AF12, AF22, AF32, AF42 | 28 | 40 | 10 | 10% |
| AF13, AF23, AF33, AF43 | 24 | 40 | 10 | 10% |
| EF | 37 | 40 | 10 | 10% |
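WRED can also be enabled directly on an interface, outside of CBWFQ. A minimal sketch, with the interface name an assumption for illustration; the bare random-detect command enables precedence-based WRED with the default profiles above, while the dscp-based keyword switches to the DSCP profiles:
interface Serial0/0
! precedence-based WRED using the default profiles
random-detect
! or instead, DSCP-based WRED
random-detect dscp-based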
CBWRED is configured by applying WRED to CBWFQ. Remember, CBWFQ performs tail drop by default. WRED itself defaults to IP precedence mode, with the eight pre-defined profiles seen in the chart above. To me it is a joy to be able to look at the following configuration and understand its meaning. This is how to configure CBWRED, from page 159 in the ONT book. Notice that the Business class simply restates the IOS precedence defaults from the chart above, while the Bulk class lowers the maximum threshold to 36; tying it all together makes the configuration easier to understand.
class-map Business
match ip precedence 3 4
class-map Bulk
match ip precedence 1 2
!
policy-map Enterprise
class Business
bandwidth percent 30
random-detect
random-detect precedence 3 26 40 10
random-detect precedence 4 28 40 10
class Bulk
bandwidth percent 20
random-detect
random-detect precedence 1 22 36 10
random-detect precedence 2 24 36 10
class class-default
fair-queue
random-detect
The same policy can also be built with DSCP-based WRED. The class maps below match on DSCP values instead of precedence, and random-detect dscp-based switches each class to DSCP mode.
class-map Business
match ip dscp af21 af22 af23 cs2
class-map Bulk
match ip dscp af11 af12 af13 cs1
!
policy-map Enterprise
class Business
bandwidth percent 30
random-detect dscp-based
random-detect dscp af21 32 40 10
random-detect dscp af22 28 40 10
random-detect dscp af23 24 40 10
random-detect dscp cs2 22 40 10
class Bulk
bandwidth percent 20
random-detect dscp-based
random-detect dscp af11 32 36 10
random-detect dscp af12 28 36 10
random-detect dscp af13 24 36 10
random-detect dscp cs1 22 36 10
class class-default
fair-queue
random-detect dscp-based
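The policy map takes effect only once it is attached to an interface. As a minimal sketch, with the interface name an assumption for illustration:
interface Serial0/0
service-policy output Enterprise
Once attached, show policy-map interface Serial0/0 displays the per-class statistics, including WRED random drops and tail drops.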
References:
http://www.chainringcircus.org/congestion-link-efficiency-traffic-policing-and-shaping/
http://www.cisco.com/en/US/docs/ios/12_2/qos/command/reference/qrfcmd7.html