Thursday, September 29, 2011

L2TP


Layer 2 Tunnel Protocol



Overview


L2TP is one of the key building blocks for virtual private networks in the dial access space and is endorsed by Cisco and other internetworking industry leaders. It combines the best of Cisco's Layer 2 Forwarding (L2F) protocol and Microsoft's Point-to-Point Tunneling Protocol (PPTP).

Purpose


The purpose of this document is to give an overview of the IOS® configuration commands used in the L2TP tunneling process and of the communication that goes on between network access devices.

Key L2TP Terms


CHAP: Challenge Handshake Authentication Protocol. A PPP authentication protocol.

L2TP Access Concentrator (LAC): An LAC can be a Cisco network access server connected to the public switched telephone network (PSTN). The LAC need only implement media for operation over L2TP. An LAC can connect to the LNS using a local-area network or wide-area network such as public or private Frame Relay. The LAC is the initiator of incoming calls and the receiver of outgoing calls.

L2TP Network Server (LNS): Almost any Cisco router connected to a local-area network or wide-area network, such as public or private Frame Relay, can act as an LNS. It is the server side of the L2TP protocol and can operate on any platform that terminates PPP sessions. The LNS is the initiator of outgoing calls and the receiver of incoming calls. Figure 1 depicts the call routine between the LAC and LNS.

Virtual Private Dial Network (VPDN): A type of access VPN that uses PPP to deliver the service.

VPDN L2TP Model


Many different scenarios apply to the L2TP model. The most basic model is one in which a client uses a PC configured for PPP to dial in to his or her Internet service provider (ISP). With a wholesale dial model, an ISP outsources dial access to a service provider (SP). This paper examines L2TP behavior in the context of the wholesale dial model using VPDN, AAA, RADIUS, and L2TP. Figure 1 depicts a typical wholesale dial model. Dial access using an asynchronous or synchronous connection is assumed from the client to the SP.

Figure 1

L2TP LAC and LNS call routine. The physical call is terminated on the LAC while the PPP session is forwarded to the LNS.

Ref:
http://www.cisco.com/warp/public/cc/pd/iosw/tech/l2pro_tc.htm#wp1002209
http://www.cisco.com/warp/public/cc/pd/iosw/prodlit/l2tun_ds.htm#wp17522
http://www.cisco.com/en/US/tech/tk801/tk70/technologies_tech_note09186a0080094586.shtml
http://www.cisco.com/en/US/docs/ios/12_0t/12_0t1/feature/guide/l2tpT.html#wp19656
http://en.wikipedia.org/wiki/Layer_2_Tunneling_Protocol
http://www.cisco.com/en/US/docs/ios/11_3/security/configuration/guide/secur_c.html
http://www.cisco.com/en/US/docs/ios/11_3/security/configuration/guide/scradius.html
http://www.cisco.com/en/US/docs/ios/12_1/12_1dc/feature/guide/l2switch.html
http://www.cisco.com/en/US/docs/ios/vpdn/command/reference/vpd_m1.html

Congestion Avoidance

Tail Drop
Tail drop causes problems in the network because it is not “smart” about which traffic it drops. Once the hardware and software queues become full, the router simply drops every arriving packet regardless of application, destination or need.
Global synchronization and TCP starvation are the result of tail drop.
  • TCP Global Synchronization — When tail drop occurs, all TCP-based applications go into slow start, bandwidth use drops, and the queues clear. Once the queues clear, flows increase their TCP send windows until packets start being dropped again. The result is synchronized waves of peak utilization followed by sudden drops in utilization, a repeating sawtooth of traffic.
  • TCP Starvation — TCP tries to play well with the network by backing off its sending rate when packets are dropped (falling back to slow start), but UDP does not. As a result, when TCP traffic slows down in response to drops, UDP traffic does not, so the queues fill with UDP packets and TCP is starved of bandwidth.
Random Early Detection (RED)
Statistically, RED drops more packets from aggressive flows than from slower flows, and only flows whose packets are dropped slow down, avoiding global synchronization.
RED measures the average queue depth to decide whether or not to drop packets because the average queue depth changes more slowly than the actual depth.
RED has three configuration parameters: minimum threshold, maximum threshold, and mark probability denominator (MPD).
Once the average depth reaches the minimum threshold, packets begin to be dropped, and once it exceeds the maximum threshold there is effectively tail drop. Everything in between is governed by the mark probability denominator.
The mark probability denominator sets the maximum percentage of packets discarded by RED. IOS calculates the maximum percentage using the formula 1/MPD. For instance, an MPD of 10 yields a calculated value of 1/10, meaning the maximum discard rate is 10 percent.
The following table is from page 425 of the QoS Exam Certification Guide and shows how the minimum threshold, maximum threshold and queue depth all interact.
Average Queue Depth vs. Thresholds | Action | Name
Average depth < minimum threshold | No packets dropped. | No Drop
Minimum threshold < average depth < maximum threshold | A percentage of packets is dropped; the percentage grows linearly as the average depth grows. | Random Drop
Average depth > maximum threshold | All new packets are dropped, just like tail drop. | Full Drop
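The drop decision in the table above can be sketched in Python (a minimal illustration with my own function names, not IOS source logic):

```python
def red_drop_probability(avg_depth, min_th, max_th, mpd):
    """Return the probability that RED drops an arriving packet,
    given the average queue depth and the three RED parameters."""
    if avg_depth < min_th:
        return 0.0                      # No Drop region
    if avg_depth >= max_th:
        return 1.0                      # Full Drop region: behaves like tail drop
    # Random Drop region: probability grows linearly from 0 up to 1/MPD
    max_prob = 1.0 / mpd
    return (avg_depth - min_th) / (max_th - min_th) * max_prob

# With the IOS defaults for precedence 0 (min 20, max 40, MPD 10),
# an average depth of 30 sits halfway through the random-drop region:
print(red_drop_probability(30, 20, 40, 10))   # 0.05, i.e. a 5% drop rate
```

Note that the input is the average queue depth, not the instantaneous one, which is why RED reacts smoothly rather than to every momentary burst.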
Weighted Random Early Detection
WRED behaves the same as RED, except that WRED differentiates traffic by IP precedence or DSCP value. The ONT book and the QoS Exam book both cover the same WRED example, just in different levels of detail. Personally, if you understand the concepts from the chart above with the min and max thresholds, the following charts will explain everything. When an interface starts to become congested, WRED discards lower-priority traffic with a higher probability. By default in IOS, lower-precedence flows have smaller minimum thresholds and therefore begin dropping packets before higher-precedence flows. For example, once the average queue depth passes 22 packets (the default minimum threshold for precedence 1), packets with precedence 0 and 1 are both subject to random drops.
The tables below are taken from pages 430 and 431 of the QoS Exam Certification Guide.
This table is for IP Precedence based WRED defaults.
Precedence Minimum Threshold Maximum Threshold Mark Probability Denominator Calculated Maximum Percent Discarded
0 20 40 10 10%
1 22 40 10 10%
2 24 40 10 10%
3 26 40 10 10%
4 28 40 10 10%
5 31 40 10 10%
6 33 40 10 10%
7 35 40 10 10%
RSVP 37 40 10 10%
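To see why lower-precedence traffic suffers first, compare the default minimum thresholds from the table above at a given average depth (a sketch; the dictionary and function are mine, not IOS structures):

```python
# Default IOS precedence-based WRED minimum thresholds (from the table above)
MIN_THRESHOLD = {0: 20, 1: 22, 2: 24, 3: 26, 4: 28, 5: 31, 6: 33, 7: 35}

def precedences_being_dropped(avg_depth):
    """Precedence values whose random-drop region has been entered
    at the given average queue depth."""
    return sorted(p for p, min_th in MIN_THRESHOLD.items() if avg_depth >= min_th)

print(precedences_being_dropped(23))  # [0, 1] - only precedence 0 and 1 drop
print(precedences_being_dropped(32))  # [0, 1, 2, 3, 4, 5]
```

At depth 23 only the two lowest-precedence classes have crossed their minimum thresholds; by depth 32 everything up to precedence 5 is being randomly dropped while 6 and 7 are still untouched.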
This table is for IOS DSCP-Based WRED defaults.
DSCP Minimum Threshold Maximum Threshold Mark Probability Denominator Calculated Maximum Percent Discarded
AF11, AF21, AF31, AF41 33 40 10 10%
AF12, AF22, AF32, AF42 28 40 10 10%
AF13, AF23, AF33, AF43 24 40 10 10%
EF 37 40 10 10%
Class-Based Weighted Random Early Detection (CBWRED)
CBWRED is configured by applying WRED within CBWFQ. Remember, CBWFQ performs tail drop by default. WRED is based on IP precedence by default, as seen in the chart above, with eight pre-defined profiles. To me it is a joy to be able to look at the following configuration and understand its meaning. This is how to configure CBWRED, from page 159 in the ONT book. Notice that the thresholds closely mirror the IOS defaults shown in the chart above. By tying it all together the configuration makes more sense.
class-map Business
  match ip precedence 3 4
class-map Bulk
  match ip precedence 1 2
!
policy-map Enterprise
  class Business
   bandwidth percent 30
   random-detect
   random-detect precedence 3 26 40 10
   random-detect precedence 4 28 40 10
  class Bulk
   bandwidth percent 20
   random-detect
   random-detect precedence 1 22 36 10
   random-detect precedence 2 24 36 10
  class class-default
   fair-queue
   random-detect
And the same configuration using DSCP.
class-map Business
  match ip dscp af21 af22 af23 cs2
class-map Bulk
  match ip dscp af11 af12 af13 cs1
!
policy-map Enterprise
  class Business
   bandwidth percent 30
   random-detect dscp-based
   random-detect dscp af21 32 40 10
   random-detect dscp af22 28 40 10
   random-detect dscp af23 24 40 10
   random-detect dscp cs2   22 40 10
  class Bulk
   bandwidth percent 20
   random-detect dscp-based
   random-detect dscp af11 32 36 10
   random-detect dscp af12 28 36 10
   random-detect dscp af13 24 36 10
   random-detect dscp cs1   22 36 10
  class class-default
   fair-queue
   random-detect dscp-based
To verify configuration use the show policy-map interface command.

Ref: http://www.chainringcircus.org/congestion-link-efficiency-traffic-policing-and-shaping/

http://www.cisco.com/en/US/docs/ios/12_2/qos/command/reference/qrfcmd7.html

Wednesday, September 28, 2011

Troubleshooting SIP and SPA

Output of the show diag slot command 

Output of the show hw-module subslot all oir command indicates 'out of service' as the Operational state for a SPA-DSP.

Output of the show hw-module subslot fpd command


Router# hw-module subslot 0/0 reload
 
show logging  

Ref: http://www.cisco.com/en/US/docs/interfaces_modules/shared_port_adapters/install_upgrade/ASR1000/ASRtrbl.html#wp1078516

Retrieving Information from the Crashinfo File


The crashinfo file is a collection of useful information related to the current crash stored in boot Flash or Flash memory.
When a router crashes due to data or stack corruption, more reload information is needed to debug this type of crash than just the output from the normal show stacks command. The reload information is written by default to bootflash:crashinfo on the Cisco 12000 Gigabit Route Processor (GRP), the Cisco 7000 and 7500 Route Switch Processor (RSP), and the Cisco 7200 Series Routers. For the Cisco 7500 Versatile Interface Processor 2 (VIP2), this file is stored by default to bootflash:vip2_slot_no_crashinfo, where slot_no is the VIP2 slot number. For the Cisco 7000 Route Processor (RP), the file is stored by default to flash:crashinfo.

When a crashinfo file is available in boot Flash, this appears at the end of the show stacks command output:


Router#more bootflash:crashinfo_20000323-061850
 
Router#show file bootflash:crashinfo
 
Ref: http://www.cisco.com/en/US/products/hw/routers/ps167/products_tech_note09186a00800a6743.shtml 

Tuesday, September 27, 2011

MDI vs. MDIX

A medium dependent interface (MDI) port or an uplink port is an Ethernet port connection typically used on the network interface controller (NIC) or integrated NIC port on a computer. Since inputs on a NIC must go to outputs on the switch or hub, these latter devices have their inputs and outputs (transmit and receive signals) reversed in a configuration known as medium dependent interface crossover (MDIX or MDI-X). Some network hubs or switches have an MDI port (often switchable) in order to connect to other hubs or switches without an Ethernet crossover cable, using a straight-through cable instead.
Auto-MDIX ports on newer network interfaces detect whether the connection would require a crossover, and automatically choose the MDI or MDIX configuration to properly match the other end of the link.

MDI vs. MDIX

The terminology generally refers to variants of the Ethernet over twisted pair technology that use a female 8P8C port connection on a computer, or other network device.
The X refers to the fact that transmit wires on an MDI device must be connected to receive wires on an MDIX device. Straight-through cables connect pins 1 and 2 (transmit) on an MDI device to pins 1 and 2 (receive) on an MDIX device. Similarly, pins 3 and 6 are receive on an MDI device and transmit on an MDIX device. The general convention was for network hubs and switches to use the MDIX configuration, while all other nodes such as personal computers, workstations, servers and routers used an MDI interface. Some routers and other devices had an uplink/normal switch to go back and forth between MDI and MDIX on a specific port.

To connect two ports of the same configuration (MDI to MDI or MDIX to MDIX), an Ethernet crossover cable was needed to cross over the transmit and receive signals in the cable, so that they are matched at the connector level. The confusion of needing two different kinds of cables for anything but hierarchical star network topologies prompted a more automatic solution.
Auto-MDIX automatically detects the required cable connection type and configures the connection appropriately, removing the need for crossover cables to interconnect switches or connecting PCs peer-to-peer. As long as it is enabled on either end of a link, either type of cable can be used. For auto-MDIX to operate correctly, the data rate on the interface and duplex setting must be set to "auto". Auto-MDIX was developed by Hewlett-Packard engineers Daniel Joseph Dove and Bruce W. Melvin.[2] A pseudo-random number generator decides whether or not a network port will attach its transmitter, or its receiver to each of the twisted pairs used to auto-negotiate the link.[3][4]
When two auto-MDIX ports are connected together, which is normal for modern products, the algorithm resolution time is typically under 500 ms. However, a ~1.4 second asynchronous timer is used to resolve the extremely rare case (with a probability of less than 1 in 10^21) of a loop where each end keeps switching.[5]
Subsequently, Dove promoted auto-MDIX within the 1000BASE-T standard[5] and also developed patented algorithms for "forced mode auto-MDIX", which allow a link to be automatically established even if the port does not auto-negotiate.[6] Newer routers, hubs and switches (including some 10/100, and in practice all 1 Gigabit or 10 Gigabit devices) use auto-MDIX to automatically switch to the proper configuration once a cable is connected. The other four wires are used at the higher data rates but need not be crossed in the cable, since auto-MDIX is mandatory there.


Auto-MDI/MDIX Feature


For RJ-45 interfaces on the ASA 5500 series adaptive security appliance, the default auto-negotiation setting also includes the Auto-MDI/MDIX feature. Auto-MDI/MDIX eliminates the need for crossover cabling by performing an internal crossover when a straight cable is detected during the auto-negotiation phase. Either the speed or duplex must be set to auto-negotiate to enable Auto-MDI/MDIX for the interface. If you explicitly set both the speed and duplex to a fixed value, thus disabling auto-negotiation for both settings, then Auto-MDI/MDIX is also disabled. For Gigabit Ethernet, when the speed and duplex are set to 1000 and full, then the interface always auto-negotiates; therefore Auto-MDI/MDIX is always enabled and you cannot disable it. 



Ref: http://en.wikipedia.org/wiki/Medium_dependent_interface

Sunday, September 25, 2011

Class Maps Policy Maps

Class Maps


The class-map command defines each Layer 3 and Layer 4 traffic class and each Layer 7 protocol class. You create class maps to classify the traffic received and transmitted by the ACE.

Layer 3 and Layer 4 traffic classes contain match criteria that identify the IP network traffic that can pass through the ACE or network management traffic that can be received by the ACE.

Layer 7 protocol-specific classes identify server load balancing based on HTTP traffic, deep inspection of HTTP traffic, or the inspection of FTP commands by the ACE.

A traffic class contains the following components:

Class map name

One or more match commands that define the match criteria for the class map

Instructions on how the ACE evaluates match commands when you specify more than one match command in a traffic class (match-any, match-all)

The ACE supports a system-wide maximum of 8192 class maps.

The individual match commands specify the criteria for classifying Layer 3 and Layer 4 network traffic as well as the Layer 7 HTTP server load balancing and application protocol-specific fields. The ACE evaluates the packets to determine whether they match the specified criteria. If a statement matches, the ACE considers that packet to be a member of the class and forwards the packet according to the specifications set in the traffic policy. Packets that fail to meet any of the matching criteria are classified as members of the default traffic class if one is specified.

When multiple match criteria exist in the traffic class, you can identify evaluation instructions using the match-any or match-all keywords. If you specify match-any as the evaluation instruction, the traffic being evaluated must match one of the specified criteria, typically match commands of the same type. If you specify match-all as the evaluation instruction, the traffic being evaluated must match all of the specified criteria, typically match commands of different types.

The specification of complex match criteria using the match-all or match-any keywords for Layer 7 HTTP load-balancing applications is useful as a means to nest one class map within another. For example, the following specifies match criteria for load balancing where the URL is either /foo or /bar and the "host" header equals "thishost":

host1/Admin(config)# class-map type http loadbalance match-any URLCHK_SLB_L7_CLASS

host1/Admin(config-cmap-http-lb)# match http url /foo

host1/Admin(config-cmap-http-lb)# match http url /bar

host1/Admin(config-cmap-http-lb)# exit

host1/Admin(config)# class-map type http loadbalance match-all URLHDR_SLB_L7_CLASS

host1/Admin(config-cmap-http-lb)# match http header host header-value thishost

host1/Admin(config-cmap-http-lb)# match class-map URLCHK_SLB_L7_CLASS

host1/Admin(config-cmap-http-lb)# exit 
 

Policy Maps

The policy-map command creates the traffic policy. The purpose of a traffic policy is to implement specific ACE functions associated with a traffic class. A traffic policy contains the following components:
Policy map name
Previously created traffic class map or, optionally, the class-default class map
One or more of the individual Layer 3 and Layer 4 or Layer 7 policies that specify the actions (functions) to be performed by the ACE
The ACE supports a system-wide maximum of 4096 policy maps.
A Layer 7 policy map is always associated within a Layer 3 and Layer 4 policy map to provide an entry point for traffic classification. Layer 7 policy maps are considered to be child policies and can only be nested under a Layer 3 and Layer 4 policy map. Only a Layer 3 and Layer 4 policy map can be activated on a VLAN interface; a Layer 7 policy map cannot be directly applied on an interface. For example, to associate a Layer 7 load-balancing policy map, you nest the load-balancing policy map using the Layer 3 and Layer 4 loadbalance policy command.
Depending on the policy-map command, the ACE executes the action specified in the policy map on the network traffic as follows:
first-match—For policy-map commands that contain the first-match keyword, the ACE executes the specified action only for traffic that meets the first matching classification within a policy map. No additional actions are executed.
all-match—For policy-map commands that contain the all-match keyword, the ACE attempts to match a packet against all classes in the policy map and executes the actions of all matching classes associated with the policy map.
multi-match—For policy-map commands that contain the multi-match keyword, these commands specify that multiple sets of classes exist in the policy map and allow a multi-feature policy map. The ACE applies a first-match execution process to each class set in which a packet can match multiple classes within the policy map, but the ACE executes the action for only one matching class within each of the class sets. The definition of which classes are in the same class set depends on the actions applied to the classes; the ACE associates each policy map action with a specific set of classes. Some ACE functions may be associated with the same class set as other features (for example, application protocol inspection actions would typically all be associated with the same class set), while the ACE associates other features with a different class set.
When there are multiple instances of actions of the same type configured in a policy map, the ACE performs the first action encountered of the same type that has a match.
If none of the classifications specified in policy maps match, then the ACE executes the default actions specified against the class-default class map (if one is specified). All traffic that fails to meet the other matching criteria in the named class map belongs to the default traffic class. The class-default class map has an implicit match any statement in it and is used to match any traffic classification.
For example, with the following classifications for a specific request, the ACE attempts to match the incoming content request with the classification defined in class maps C1, C2, and C3.
host1/Admin(config)# policy-map type loadbalance first-match 
SLB_L7_POLICY
host1/Admin(config-pmap-lb)# class C1
host1/Admin(config-pmap-lb-c)# serverfarm SF1
host1/Admin(config-pmap-lb-c)# exit
host1/Admin(config-pmap-lb)# class C2
host1/Admin(config-pmap-lb-c)# serverfarm SF2
host1/Admin(config-pmap-lb-c)# exit
host1/Admin(config-pmap-lb)# class C3
host1/Admin(config-pmap-lb-c)# serverfarm SF3
host1/Admin(config-pmap-lb-c)# exit
host1/Admin(config-pmap-lb)# class class-default
host1/Admin(config-pmap-lb-c)# serverfarm SFBACKUP
 



Ref: From Cisco.com

Saturday, September 17, 2011

BGP peer-session template & peer-policy template

A peer template is a pattern, a model that can be used to facilitate the management of BGP peer configuration. Templates are much more flexible than peer groups because of the concept of inheritance, in which a common template can be inherited by more specific templates according to a hierarchical scheme.
Two types of templates are available: peer-session templates for session establishment and peer-policy templates for prefix advertisement policies.

BGP Convergence and Updates

The BGP scanner process monitors the next hop of installed routes to verify next-hop reachability. It is also responsible for selecting, installing, and validating the BGP best path. By default, the BGP scanner polls the RIB for this information every 60 seconds. During the 60-second period between scan cycles, Interior Gateway Protocol (IGP) instability or other network failures can cause black holes and routing loops to form temporarily.
The BGP scanner process also determines whether conditional advertisement should or should not advertise the conditional route, and whether route dampening information needs to be updated.
bgp scan-time
There is also a VPNv4 equivalent that is configured under the VPNv4 address family, and the syntax is slightly different. By default it runs every 15 seconds.
bgp scan-time import

Ref:
http://21500.net/?tag=bgp-scan-interval
http://www.networkers-online.com/blog/2008/12/bgp-performance-tunning-convergence-stability-scalability-and-nsf-part-2/
http://routing-bits.com/2009/08/07/bgp-convergence-and-updates/

Thursday, September 15, 2011

Fast switching on the same interface is disabled

show processes cpu

Directly on the router, the output showed 99% CPU utilization over the last minute and five minutes. After reading a very useful article from the Cisco documentation, we came across the following paragraph:


Fast switching on the same interface is disabled

If an interface has a lot of secondary addresses or subinterfaces and there is a lot of traffic sourced from the interface and destined for an address on that same interface, then all of those packets are process-switched. In this situation, you should enable ip route-cache same-interface on the interface. When Cisco Express Forwarding switching is used, you do not need to enable Cisco Express Forwarding switching on the same interface separately.

Just as described in the article, we had several secondary networks with high traffic running on the same Fast Ethernet interface, 4 secondary networks to be exact. During peak times the processor was hitting 99% utilization and we were seeing horrible packet loss of around 12-21%.

By running the command ip route-cache same-interface, the problem was completely solved: our 2610 router went from a killer 99% usage down to 14% ... needless to say, no more packet loss.


Ref: http://www.felipecruz.com/blog_high-cpu-utilization-ip-input-cisco-router.php

DDOS

Keep the traffic out of your network:

1. Null-route (Null0) the destination IP address at edge devices,
triggered either a) via BGP community or b) manually

http://tools.ietf.org/html/rfc3882

http://www.linux.it/~md/text/blackholing.html

http://wozney.ca/2010/03/11/bgp-blackhole-community/



Allow the attacking traffic in your network:

1. Rate-limit (QoS) the attacking traffic

http://www.cloudcentrics.com/?p=455

http://www.composednetworks.com/qos

2. iACL to block spoofed (fake) source IP addresses

3. Traffic cleaning (scrubbing) based on destination IP address

4. Divert the attacking traffic to a dummy device

Monday, September 12, 2011

DHCP Option 176

DHCP Option 176
Just the basics of DHCP option 176 are covered here. See the "4600 Series IP Telephone LAN Administrator's Guide" for more details.

The DHCP specification has what are called options, numbered from 0 through 255. Each option is associated with a specific piece of information to be sent by the DHCP server to the DHCP client. For example, option 1 is the subnet mask option and is used to send the subnet mask to the client. Option 3 is the router option and is used to send the default gateway address and other gateway addresses to the client. Some options are defined, such as options 1 and 3, and others are not. The defined options are found in RFC 2132.

Options 128 through 254 are site-specific options. They are standard options that are not defined, and vendors may use these options and define them to be whatever is necessary for a specific application. Avaya IP telephones use site-specific option 176 as one of the methods to receive certain parameters from the DHCP server.

For the Avaya application of option 176, it is defined as a string. The string contains parameters and values separated by commas, as illustrated after the following table. The most prevalent parameters and values are as follows.
Parameter | Value
MCIPADD | Address(es) of gatekeeper(s); at least one required
MCPORT | The UDP port used for registration; default 1719
TFTPSRVR | Address(es) of TFTP server(s); at least one required
L2QVLAN | 802.1Q VLAN ID; default 0
L2QAUD | L2 audio priority value
L2QSIG | L2 signaling priority value
VLANTEST | The number of seconds a phone attempts to return to the previously known voice VLAN
Table 8: DHCP option 176 parameters and values
The typical option 176 string for a single-VLAN environment looks like this:

MCIPADD=addr1,addr2,addr3, … ,MCPORT=1719,TFTPSRVR=addr

At least one gatekeeper (C-LAN or S8300) address must be present after MCIPADD to point the phones to a call server. MCPORT specifies which UDP port to use for RAS registration. IP telephone firmware 1.6.1 and later already have 1719 as the default port, but it is prudent to include it. A TFTP server address is necessary so that phones know where to go to download the necessary script files and binary codes (see the "Boot-up Sequence" heading below). L2QVLAN and VLANTEST would be included if 802.1Q tagging were required, such as in a dual-VLAN environment (see section 4.2). Other parameters may be added, such as L2QAUD and L2QSIG, which are used to specify the L2 priority values for audio and signaling. If these values are not specified in option 176, the default values (6/6) are used.

Note: The L3 priority values (DSCP) are received from the call server, as configured on the SAT ip-network-region form. The reason L3 values are received from the call server and L2 values are not is that an IP phone accepts all L2 values from one source. The preferred and recommended method is via DHCP option 176. An alternative method is described in section 3.5, heading "ip-network-map," which utilizes the L2 values administered on the SAT ip-network-region form.

An administrator must create option 176 on the DHCP server and administer a properly formatted string with the appropriate values. Option 176 could be applied globally or on a per-scope basis. The recommendation is to configure option 176 on a per-scope basis, because the values themselves or the order of the values could change per scope. As part of the DHCP process at boot-up, the IP telephone requests option 176 from the DHCP server.
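A quick way to sanity-check an option 176 string is to build it programmatically. The sketch below uses the parameter names from the table above; the helper function itself is hypothetical, not an Avaya or DHCP-server tool:

```python
def build_option_176(gatekeepers, tftp_servers, port=1719, extra=None):
    """Assemble an Avaya option 176 string: comma-separated
    PARAMETER=value pairs, with multiple addresses also comma-separated."""
    parts = ["MCIPADD=" + ",".join(gatekeepers),
             "MCPORT=%d" % port,
             "TFTPSRVR=" + ",".join(tftp_servers)]
    if extra:
        # e.g. L2QVLAN / VLANTEST for a dual-VLAN environment
        parts += ["%s=%s" % (k, v) for k, v in extra.items()]
    return ",".join(parts)

print(build_option_176(["192.0.2.10", "192.0.2.11"], ["192.0.2.20"]))
# MCIPADD=192.0.2.10,192.0.2.11,MCPORT=1719,TFTPSRVR=192.0.2.20
```

The addresses here are documentation examples; substitute your own C-LAN/S8300 and TFTP server addresses.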
DHCP Lease Duration

A DHCP server gives out an IP address with a finite or infinite lease, and the Avaya recommended lease duration for IP phones is 2 to 4 weeks. The DHCP specification calls for the client to renew the lease at determined intervals, typically beginning at the half-life of the lease. If the first renewal attempt fails, there are allowances in the specification for further renewal attempts, dependent on the length of the lease. Too short a lease requires too many renewals, which not only taxes the DHCP server but can also disrupt service to the IP phones if renewals cannot be accomplished for whatever reason. On the other hand, too long a lease can result in IP address exhaustion if hosts are unplugged from the network without being properly shut down to invoke a release of the IP address lease.
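The renewal schedule above can be made concrete with a small sketch, assuming the common RFC 2131 defaults (first renewal attempt T1 at 50% of the lease, rebinding T2 at 87.5%); the function name is mine:

```python
def renewal_times(lease_seconds):
    """Return (T1, T2): the client first tries to renew with its server
    at T1 (half the lease) and falls back to rebinding with any server
    at T2 (87.5% of the lease), per the usual RFC 2131 defaults."""
    t1 = lease_seconds * 0.5
    t2 = lease_seconds * 0.875
    return t1, t2

# A 2-week lease (the low end of the Avaya recommendation):
lease = 14 * 24 * 3600
print(renewal_times(lease))  # (604800.0, 1058400.0) - first renewal after 7 days
```

With a 2-to-4-week lease, phones renew only every week or two, which keeps DHCP server load low without risking rapid address exhaustion.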

Ref: http://www.vocalcom.fr/download/VOIP-AVAYA-IP-GUIDE-3-0.pdf

DHCP for multiple VLANs

1. The normal way is to run a trunk link between the router and the switch and set up the DHCP server on the router, with a pool per subinterface.

2. If no trunk link is available, an IP helper address on the switch can also be considered.

Tuesday, September 6, 2011

BGP process

Ref: http://hackingcisco.blogspot.com/2011/05/lab-142-bgp-timers.html


http://www.cisco.com/en/US/tech/tk365/technologies_tech_note09186a00809d16f0.shtml#understandbgp



  • BGP Open - responsible for BGP session establishment.
  • BGP I/O - handles queuing and processing updates and keepalive packets.
  • BGP Scanner - responsible for conditional route advertisements, route dampening, import and export of routes into VRF (MPLS), and confirms the reachability to the NEXT_HOP (the last one is handled now by BGP next-hop tracking).
  • BGP Router - calculates the best path, establishes peers, sends and receives routes, and interacts with the RIB.

Troubleshooting Input Queue Drops and Output Queue Drops

Processing and Switching

In IP networks, routers make forwarding decisions based on the contents of the routing table. When a router searches the routing table, it looks for the longest match for the destination IP address. The router does this at the process level, so the search is queued among other CPU processes; as a result, the lookup time is unpredictable and can be very long. Therefore, a number of switching methods based on exact-match lookup have been introduced in Cisco IOS® Software.
The main benefit of exact-match lookup is that the lookup time is deterministic and very short, which has significantly shortened the time a router takes to make a forwarding decision. The routines that perform the search can therefore be implemented at the interrupt level: the arrival of a packet triggers an interrupt, which causes the CPU to postpone other tasks and handle the packet. The legacy method of forwarding packets, looking for the best match in the routing table, cannot be implemented at interrupt level and must be performed at process level. For a number of reasons, some of which are mentioned in this document, the longest-match lookup method cannot be completely abandoned, so the two lookup methods exist in parallel on Cisco routers. This strategy has been generalized, and is now also applied to IPX and AppleTalk.
For more information on Cisco IOS Software switching paths, refer to Performance Tuning Basics.
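The contrast between the two lookup styles can be sketched as follows (the routing table, next hops, and helper names are hypothetical, purely illustrative):

```python
import ipaddress

# A small routing table: prefix -> next hop
ROUTES = {
    "10.0.0.0/8":  "192.168.1.1",
    "10.1.0.0/16": "192.168.1.2",
    "10.1.1.0/24": "192.168.1.3",
}

def longest_match_lookup(dst):
    """Process-level style: scan every prefix, keep the most specific match."""
    best = None
    for prefix, nh in ROUTES.items():
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(dst) in net:
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, nh)
    return best[1] if best else None

# Exact-match cache, populated per destination, as fast switching does:
cache = {}

def cached_lookup(dst):
    """Interrupt-level style: one hash lookup once the entry is cached."""
    if dst not in cache:
        cache[dst] = longest_match_lookup(dst)  # first packet is process-switched
    return cache[dst]

print(longest_match_lookup("10.1.1.5"))  # 192.168.1.3 (the /24 wins)
print(cached_lookup("10.1.1.5"))         # same answer, now a single dict hit
```

The first packet to a destination pays the full longest-match cost; subsequent packets hit the deterministic exact-match cache, which is exactly why the two methods must coexist.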


Troubleshoot Input Queue Drops

router#show interfaces ethernet 0/0 
 
Router(config-if)# hold-queue length in
router#show processes cpu | i ^PID|Input
 
show ip traffic
 
Router#show buffers input-interface serial 0/0
 

Output Queue Drops

Ref: http://www.cisco.com/en/US/products/hw/routers/ps133/products_tech_note09186a0080094791.shtml

Sunday, September 4, 2011

Route-map

Ref: http://www.cisco.com/en/US/tech/tk365/technologies_tech_note09186a008047915d.shtml


  • If you use an ACL in a route-map permit clause, routes that are permitted by the ACL are redistributed.
  • If you use an ACL in a route-map deny clause, routes that are permitted by the ACL are not redistributed.
  • If you use an ACL in a route-map permit or deny clause, and the ACL denies a route, then the route-map clause match is not found and the next route-map clause is evaluated.




    A match or set command in each clause can be omitted or repeated several times, if one of these conditions exists:
      • If several match commands are present in a clause, all must succeed for a given route in order for that route to match the clause (in other words, the logical AND algorithm is applied for multiple match commands).
      • If a match command refers to several objects in one command, a route matches if any one of them matches (the logical OR algorithm is applied). For example, in the match ip address 101 121 command, a route is permitted if it is permitted by access list 101 or access list 121.
      • If a match command is not present, all routes match the clause. In the referenced example, all routes that reach clause 30 match; therefore, the end of the route-map is never reached.
      • If a set command is not present in a route-map permit clause then the route is redistributed without modification of its current attributes.
    Do not configure a set command in a deny route-map clause because the deny clause prohibits route redistribution—there is no information to modify.
    A route-map clause without a match or set command still performs an action. An empty permit clause allows redistribution of the remaining routes without modification. An empty deny clause does not allow redistribution of other routes (this is the default action if a route-map is completely scanned but no explicit match is found).
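The match logic described in these bullets can be sketched as follows (ACLs are reduced to simple prefix sets, and all names are illustrative, not IOS internals):

```python
def clause_matches(route, match_commands):
    """A clause matches when ALL of its match commands succeed (logical AND);
    one match command succeeds when ANY of the objects it references
    permits the route (logical OR). No match commands -> everything matches."""
    return all(
        any(route in acl for acl in acls)   # OR across objects in one command
        for acls in match_commands          # AND across match commands
    )

acl_101 = {"10.0.0.0/8"}
acl_121 = {"172.16.0.0/12"}

# 'match ip address 101 121' -> one match command referencing two ACLs
print(clause_matches("10.0.0.0/8", [[acl_101, acl_121]]))      # True (OR)
print(clause_matches("192.168.0.0/16", [[acl_101, acl_121]]))  # False
print(clause_matches("10.0.0.0/8", []))  # True - no match command matches all
```

Note how the empty list of match commands returns True, mirroring the rule that a clause without any match command matches all routes.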