Friday, December 7, 2012

DMVPN DESIGN QUESTIONS


DMVPN DESIGN Questions


  • Should I use Phase-1, -2 or -3 DMVPN?
  • Which routing protocol would work best in my DMVPN network?
  • How can I design networks with redundant connectivity or redundant hub/spoke routers?
  • How can I minimize the amount of routing information exchanged over DMVPN?
  • Can I use default routing over DMVPN?
  • How do I integrate centralized or local exit to the Internet with DMVPN?
  • How can I use DMVPN as a backup path for my MPLS/VPN connectivity?
  • How can I use 3G networks as a backup path for my DMVPN?


Different DMVPN phases
DMVPN and Routing Protocols – OSPF
DMVPN and Routing Protocols – EIGRP
DMVPN and Routing Protocols – BGP


Wednesday, November 28, 2012

Unicast Reverse Path Forwarding


Unicast Reverse Path Forwarding: Background

A number of common types of DoS attacks take advantage of forged or rapidly changing source IP addresses, allowing attackers to thwart efforts by ISPs to locate or filter these attacks. Unicast RPF was originally created to help mitigate such attacks by providing an automated, scalable mechanism to implement the Internet Engineering Task Force (IETF) Best Common Practices 38/Request for Comments 2827 (BCP 38/RFC 2827) anti-spoofing filtering on the customer-to-ISP network edge. By taking advantage of the information stored in the Forwarding Information Base (FIB) that is created by the CEF switching process, Unicast RPF can determine whether IP packets are spoofed or malformed by matching the IP source address and ingress interface against the FIB entry that reaches "back" to this source (a so-called "reverse lookup"). Packets that are received from one of the best reverse path routes back out of the same interface are forwarded as normal. If there is no reverse path route on the same interface from which the packet was received, it might mean that the source address was modified, and the packet is dropped (by default).
This original implementation of Unicast RPF, known as "strict mode," required a match between the ingress interface and the reverse path FIB entry. With Unicast RPF, all equal-cost "best" return paths are considered valid, meaning that it works for cases in which multiple return paths exist, provided that each path is equal in routing cost to the others (number of hops, weights, and so on), and as long as the route is in the FIB. Unicast RPF also functions when Enhanced Interior Gateway Routing Protocol (EIGRP) variants are being used and unequal candidate paths back to the source IP address exist. The strict mode works well for customer-to-ISP network edge configurations that have symmetrical flows (including some multihomed configurations in which symmetrical flows can be enforced).
However, some customer-to-ISP network edges and nearly all ISP-to-ISP network edges use multihomed configurations in which routing asymmetry is typical. When traffic flows are asymmetrical, that is, those in which traffic from Network A to Network B would normally take a different path from traffic flowing from Network B to Network A, the Unicast RPF check will always fail the strict mode test. Because this type of asymmetric routing is common among ISPs and in the Internet core, the original implementation of Unicast RPF was not available for use by ISPs on their core routers and ISP-to-ISP links.
Over time and with an increase in DDoS attacks on the Internet, the functionality of Unicast RPF was reviewed as a tool that ISPs can use on the ISP-to-ISP network edge (an ISP router "peered" with another ISP router) to enable dynamic BGP, triggered black-hole filtering. To provide this functionality, however, the mechanisms used with Unicast RPF had to be modified to permit its deployment on the ISP-to-ISP network edge so that asymmetrical routing is not an issue.
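For reference, a minimal strict-mode configuration sketch (the interface name is an example; older IOS releases use the ip verify unicast reverse-path form instead):

ip cef
!
interface GigabitEthernet0/0
 ip verify unicast source reachable-via rx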

Loose Mode

To provide ISPs with a DDoS resistance tool on the ISP-to-ISP edge of a network, Unicast RPF was modified from its original strict mode implementation to check the source addresses of each ingress packet without regard for the specific interface on which it was received. This modification is known as "loose mode." Loose mode allows Unicast RPF to automatically detect and drop packets such as the following:
  • IETF RFC 1918 source addresses
  • Other Documenting Special Use Addresses (DUSA) that should not appear in the source
  • Addresses that have not yet been allocated by the Regional Internet Registries (RIRs)
  • Source addresses that are routed to a null interface on the router
Loose mode removes the match requirement on the specific ingress interface, allowing Unicast RPF to loose-check packets. This packet checking allows the "peering" router of an ISP having multiple links to multiple ISPs to check the source IP address of ingress packets to determine whether they exist in the FIB. If they exist, the packets are forwarded. If they do not exist in the FIB, the packets fail and are dropped. This checking increases resistance against DoS and DDoS attacks that use spoofed source addresses and unallocated IP addresses.


When administrators use Unicast RPF in strict mode, the packet must be received on the interface that the router would use to forward the return packet. Unicast RPF configured in strict mode may drop legitimate traffic that is received on an interface that was not the router's choice for sending return traffic. Dropping this legitimate traffic could occur when asymmetric routing paths are present in the network.
When administrators use Unicast RPF in loose mode, the source address must appear in the routing table. Administrators can change this behavior using the allow-default option, which allows the use of the default route in the source verification process. Additionally, a packet that contains a source address for which the return route points to the Null 0 interface will be dropped. An access list may also be specified that permits or denies certain source addresses in Unicast RPF loose mode.
Care must be taken to ensure that the appropriate Unicast RPF mode (loose or strict) is configured during the deployment of this feature because it can drop legitimate traffic. Although asymmetric traffic flows may be of concern when deploying this feature, Unicast RPF loose mode is a scalable option for networks that contain asymmetric routing paths.

Unicast RPF in an Enterprise Network

In many enterprise environments, it is necessary to use a combination of strict mode and loose mode Unicast RPF. The choice of Unicast RPF mode depends on the design of the network segment connected to the interface on which Unicast RPF is deployed.
Administrators should use Unicast RPF in strict mode on network interfaces for which all packets received on an interface are guaranteed to originate from the subnet assigned to the interface. A subnet composed of end stations or network resources fulfills this requirement. Such a design would be in place for an access layer network or a branch office where there is only one path into and out of the branch network. No other traffic originating from the subnet is allowed and no other routes are available past the subnet.
Unicast RPF loose mode can be used on an uplink network interface that has a default route associated with it.

Deployment notes:
  • Cisco Express Forwarding (CEF) switching must be enabled for Unicast RPF to function.
  • Unicast RPF is enabled on a per-interface basis.
  • Unicast RPF can also be configured on the PIX Security Appliance.
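A minimal sketch of the combined enterprise deployment described above (interface roles and names are assumptions):

ip cef
!
interface GigabitEthernet0/1
 description Access subnet - only path into and out of the branch LAN
 ip verify unicast source reachable-via rx
!
interface GigabitEthernet0/0
 description Uplink that carries a default route
 ip verify unicast source reachable-via any
 ! append "allow-default" if sources known only via 0.0.0.0/0 must pass the check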

Configuring CPU Threshold Notifications


Setting a Rising CPU Thresholding Notification: Example
The following example shows how to set a rising CPU thresholding notification for total CPU utilization.
When total CPU utilization exceeds 80 percent for a period of 5 seconds or longer, a rising threshold
notification is sent.


Router(config)# process cpu threshold type total rising 80 interval 5

Note When the optional falling arguments (percentage and seconds) are not specified, they take on the same
values as the rising arguments (percentage and seconds).


Setting a Falling CPU Thresholding Notification: Example
The following example shows how to set a falling CPU thresholding notification for total CPU
utilization. When total CPU utilization, which at one point had risen above 80 percent and triggered a
rising threshold notification, falls below 70 percent for a period of 5 seconds or longer, a falling
threshold notification is sent.


Router(config)# process cpu threshold type total rising 80 interval 5 falling 70 interval 5

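For the rising/falling notifications actually to reach a management station, SNMP traps must also be enabled; a hedged sketch (the host address and community string are placeholders):

Router(config)# snmp-server community public RO
Router(config)# snmp-server host 192.0.2.10 version 2c public cpu
Router(config)# snmp-server enable traps cpu threshold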

Configuring CPU Threshold Notifications

Friday, October 5, 2012

VLAN Access Control Lists (VACLs)

Use a VACL to filter traffic within a VLAN.

access-list 100 permit icmp host 10.10.10.1 host 10.10.10.2


vlan access-map VACL 10
 action forward
 match ip address 100
vlan access-map VACL 20
 action drop
vlan filter VACL vlan-list 11



VLAN Access Control Lists (VACLs) Tier 1

Thursday, October 4, 2012

Multi-VRF Selection Using Policy-Based Routing (PBR)


Multi-VRF Selection Using Policy-Based Routing (PBR)

Prerequisites for Multi-VRF Selection Using Policy-Based Routing (PBR)

The router must support policy-based routing (PBR) in order for you to configure this feature. For platforms that do not support PBR, use the Directing MPLS VPN Traffic Using a Source IP Address feature.
A VRF must be defined before you configure this feature. An error message is displayed on the console if no VRF exists.

Restrictions for Multi-VRF Selection Using Policy-Based Routing (PBR)

All commands that aid in routing also support hardware switching, except for the set ip next-hop verify availability command, because Cisco Discovery Protocol information is not available in the line cards.
Protocol Independent Multicast (PIM) and multicast packets do not support PBR and cannot be configured for a source IP address that is a match criterion for this feature.
The set vrf and set ip global next-hop commands can be configured with the set default interface, set interface, set ip default next-hop, and set ip next-hop commands. However, the set vrf and set ip global next-hop commands take precedence over the set default interface, set interface, set ip default next-hop, and set ip next-hop commands. No error message is displayed if you attempt to configure the set vrf command together with any of these set commands.
The Multi-VRF Selection Using Policy-Based Routing (PBR) feature cannot be configured with IP prefix lists.
The set global and set vrf commands cannot be simultaneously applied to a route map.
The Multi-VRF Selection Using Policy-Based Routing (PBR) feature supports VRF-lite; that is, only IP routing protocols run on the router. Multiprotocol Label Switching (MPLS) and VPN cannot be configured.
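A hedged configuration sketch of the feature itself (the VRF name, ACL, addresses, and interface are illustrative only; the VRF must already exist, as noted in the prerequisites):

ip vrf VRF-A
 rd 100:1
!
access-list 40 permit 192.168.1.0 0.0.0.255
!
route-map PBR-VRF-SELECT permit 10
 match ip address 40
 set vrf VRF-A
!
interface GigabitEthernet0/0
 ip policy route-map PBR-VRF-SELECT
 ip vrf receive VRF-A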






Wednesday, October 3, 2012

IP CEF load balancing test


IP CEF load balancing test



 33.33.33.33           -      f0/13    -           55.55.55.55
                     - R1 -                     - R2 -  
 44.44.44.44           -      f0/15    -            56.56.56.56


R1 = IPT-LAB-SWITCH
R2 = C3560-48


R1 is in an OSPF totally stubby area and receives two default routes from R2

IPT-LAB-SWITCH#sh ip route ospf
O*IA 0.0.0.0/0 [110/2] via 10.0.15.5, 00:35:24, FastEthernet0/15
               [110/2] via 10.0.13.5, 00:35:24, FastEthernet0/13





IPT-LAB-SWITCH#sh ip cef exact-route 33.33.33.33 55.55.55.55
33.33.33.33 -> 55.55.55.55 => IP adj out of FastEthernet0/13, addr 10.0.13.5
IPT-LAB-SWITCH#sh ip cef exact-route 44.44.44.44 55.55.55.55
44.44.44.44 -> 55.55.55.55 => IP adj out of FastEthernet0/13, addr 10.0.13.5
IPT-LAB-SWITCH#sh ip cef exact-route 33.33.33.33 56.56.56.56
33.33.33.33 -> 56.56.56.56 => IP adj out of FastEthernet0/15, addr 10.0.15.5
IPT-LAB-SWITCH#sh ip cef exact-route 44.44.44.44 56.56.56.56
44.44.44.44 -> 56.56.56.56 => IP adj out of FastEthernet0/15, addr 10.0.15.5



R2 has specific routes from R1

C3560-48#sh ip route ospf
     33.0.0.0/32 is subnetted, 1 subnets
O       33.33.33.33 [110/2] via 10.0.15.3, 00:33:41, FastEthernet0/15
                    [110/2] via 10.0.13.3, 00:33:41, FastEthernet0/13
     44.0.0.0/32 is subnetted, 1 subnets
O       44.44.44.44 [110/2] via 10.0.15.3, 00:33:41, FastEthernet0/15
                    [110/2] via 10.0.13.3, 00:33:41, FastEthernet0/13

C3560-48#sh ip cef exact-route 56.56.56.56 33.33.33.33
56.56.56.56 -> 33.33.33.33 => IP adj out of FastEthernet0/13, addr 10.0.13.3
C3560-48#sh ip cef exact-route 56.56.56.56 44.44.44.44
56.56.56.56 -> 44.44.44.44 => IP adj out of FastEthernet0/13, addr 10.0.13.3
C3560-48#sh ip cef exact-route 55.55.55.55 33.33.33.33
55.55.55.55 -> 33.33.33.33 => IP adj out of FastEthernet0/13, addr 10.0.13.3
C3560-48#sh ip cef exact-route 55.55.55.55 44.44.44.44
55.55.55.55 -> 44.44.44.44 => IP adj out of FastEthernet0/13, addr 10.0.13.3


R1 seems to load-balance between the links, but R2 does NOT???

After changing the load-sharing algorithm on R2:
C3560-48(config)#ip cef load-sharing algorithm universal FFFFFFFF


C3560-48#sh ip cef exact-route 55.55.55.55 33.33.33.33
55.55.55.55 -> 33.33.33.33 => IP adj out of FastEthernet0/13, addr 10.0.13.3
C3560-48#sh ip cef exact-route 55.55.55.55 44.44.44.44
55.55.55.55 -> 44.44.44.44 => IP adj out of FastEthernet0/15, addr 10.0.15.3
C3560-48#sh ip cef exact-route 56.56.56.56 44.44.44.44
56.56.56.56 -> 44.44.44.44 => IP adj out of FastEthernet0/15, addr 10.0.15.3
C3560-48#sh ip cef exact-route 56.56.56.56 33.33.33.33
56.56.56.56 -> 33.33.33.33 => IP adj out of FastEthernet0/13, addr 10.0.13.3

ip cef load-sharing algorithm

To select a Cisco Express Forwarding (CEF) load balancing algorithm, use the ip cef load-sharing algorithm command in global configuration mode. To return to the default universal load balancing algorithm, use the no form of this command.
ip cef load-sharing algorithm {original | tunnel [id] | universal [id]}
no ip cef load-sharing algorithm {original | tunnel [id] | universal [id]}
original     Sets the load-balancing algorithm to the original algorithm, based on a source and destination hash.
tunnel       Sets the load-balancing algorithm for use in tunnel environments or in environments where there are only a few IP source and destination address pairs.
universal    Sets the load-balancing algorithm to the universal algorithm, which uses a source, destination, and ID hash.
id           (Optional) Fixed identifier.

Monday, October 1, 2012

Policy based routing


Note The set ip next-hop and set ip default next-hop are similar commands but have a different order of operations. Configuring the set ip next-hop command causes the system to use policy routing first and then use the routing table. Configuring the set ip default next-hop command causes the system to use the routing table first and then policy route the specified next hop.
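A small sketch contrasting the two commands (the ACL, next hops, and interface are hypothetical):

access-list 10 permit 10.1.1.0 0.0.0.255
!
route-map PBR-TEST permit 10
 match ip address 10
 ! policy routing is tried first; the routing table is used if this next hop is unusable
 set ip next-hop 172.16.1.1
 ! the routing table is consulted first; traffic is policy-routed to this next hop
 ! only when no explicit route for the destination exists
 set ip default next-hop 172.16.2.1
!
interface FastEthernet0/0
 ip policy route-map PBR-TEST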

http://www.cisco.com/en/US/docs/ios/12_3/iproute/command/reference/ip2_s1g.html#wp1037892
Policy-Based Routing Using the set ip default next-hop and set ip next-hop Commands Configuration Example

P.S. You will NOT be able to disable IP CEF on a Cisco 3560; therefore you can NOT use debug ip policy to verify policy routing.

Saturday, September 22, 2012

ip cef load-sharing algorithm


Usage Guidelines


The original CEF load-sharing algorithm produced distortions in load sharing across multiple routers because the same algorithm was used on every router. When the load-sharing algorithm is set to universal mode, each router on the network can make a different load-sharing decision for each source-destination address pair, which resolves the load-sharing distortions.

The tunnel algorithm is designed to more fairly share load when only a few source-destination pairs are involved.

Examples


The following example shows how to enable the CEF load sharing algorithm for universal environments:
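Router(config)# ip cef load-sharing algorithm universal FFFFFFFF

(FFFFFFFF is simply the fixed id used in the lab test earlier in this blog; any id value, or none at all, may be supplied.)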

Tuesday, September 4, 2012

Inverse Multiplexing Over ATM (IMA) on Cisco


Introduction

Inverse Multiplexing over ATM (IMA) involves inverse multiplexing and de-multiplexing of ATM cells in a cyclical fashion among physical links grouped to form a higher-bandwidth logical link. The rate of the logical link is approximately the sum of the rates of the physical links in the IMA group. Streams of cells are distributed in a round-robin manner across the multiple T1/E1 links and reassembled at the destination to form the original cell stream. Sequencing is provided using IMA Control Protocol (ICP) cells.
In the transmit direction, the ATM cell stream received from the ATM layer is distributed on a cell by cell basis across the multiple links within the IMA group. At the far-end, the receiving IMA unit reassembles the cells from each link on a cell-by-cell basis and recreates the original ATM cell stream. The image below displays how cell streams are transmitted across multiple interfaces and recombined to form the original cell stream. The receiving interface discards the ICP cells, and the aggregate cell stream is then passed to the ATM layer.
Periodically, the transmit IMA sends special cells that permit reconstruction of the ATM cell stream at the receiving IMA. These ICP cells provide the definition of an IMA frame.
Cell streams are transmitted across multiple interfaces and recombined to form the original stream.
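A hedged IOS sketch of an IMA group built from two ATM T1 interfaces (slot/port numbers are examples):

interface ATM1/0
 no ip address
 ima-group 0
!
interface ATM1/1
 no ip address
 ima-group 0
!
interface ATM1/IMA0
 ip address 10.1.1.1 255.255.255.252
 ! PVCs and service categories are configured on this logical IMA interface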

Inverse Multiplexing Over ATM (IMA) on Cisco 2600 and 3600 Routers
Inverse Multiplexing for ATM (IMA) FAQ
Understanding the Variable Bit Rate Real Time (VBR-rt) Service Category for ATM VCs

Friday, August 31, 2012

Perl Extracting matches

So /\d+/ and /(\d+)/ will still match as many digits as possible, but in the latter case the match will be remembered in a special variable so it can be back-referenced later.

Programming Perl


Extracting matches

The grouping metacharacters () also allow the extraction of the parts of a string that matched. For each grouping, the part that matched inside goes into the special variables $1 , $2 , etc. They can be used just as ordinary variables:

    # extract hours, minutes, seconds
    $time =~ /(\d\d):(\d\d):(\d\d)/; # match hh:mm:ss format
    $hours = $1;
    $minutes = $2;
    $seconds = $3;

http://perldoc.perl.org/perlrequick.html

Monday, August 27, 2012

AnyConnect VPN Client on IOS Router with IOS Zone Based Policy Firewall Configuration Example



In Cisco IOS® Software Release 12.4(20)T and later, a virtual interface SSLVPN-VIF0 was introduced for AnyConnect VPN client connections. However, this SSLVPN-VIF0 interface is an internal interface that does not support user configuration. This created a problem with AnyConnect VPN and Zone-Based Policy Firewall, because the firewall allows traffic to flow between two interfaces only when both interfaces belong to security zones. Since the user cannot configure the SSLVPN-VIF0 interface to make it a zone member, VPN client traffic terminated on the Cisco IOS WebVPN gateway after decryption cannot be forwarded to any other interface belonging to a security zone. The symptom of this problem can be seen in this log message reported by the firewall:
*Mar  4 16:43:18.251: %FW-6-DROP_PKT: Dropping icmp session 192.168.1.12:0 192.168.10.1:0 due to One of the interfaces not being cfged for zoning with ip ident 0
This issue was addressed in newer Cisco IOS software releases. With the newer code, the user can assign a security zone to a virtual-template interface and reference that virtual-template under the WebVPN context, which associates a security zone with the WebVPN context.

 AnyConnect VPN Client on IOS Router with IOS Zone Based Policy Firewall Configuration Example

code:

interface Virtual-Template1
 ip unnumbered Loopback0
 zone-member security inside
 !
!
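To complete the association, the virtual template is then referenced under the WebVPN context; a short sketch (the context name is an example):

webvpn context SSLVPN
 virtual-template 1
!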
 
Note: reload the router after the change.  
Cisco SSL-VPN LAN Access with Zone Based Policy Firewall 

Monday, August 20, 2012

Automatically backup your router config

There are many ways to automatically backup a router config

1. SNMP poll from a server
How To Copy Configurations To and From Cisco Devices Using SNMP

2. Use TCL/Expect script from a server
Script to backup Cisco Device Config

3. EEM/Kron policy list on a router
Daily backup

4. Using archive IOS command
How to use archive command to save configuration


I think #4 is the easiest one; all you need is an FTP server.

877wr1(config)#archive
877wr1(config-archive)#time-period 1440
877wr1(config-archive)#write-memory
877wr1(config-archive)#path ftp://192.168.1.110/backup/$h

!! $h is the hostname variable
!! 1440 is just an example time-period value (archive every 1440 minutes, i.e. daily)

877wr1(config-archive)#?
Archive configuration commands:
  default       Set a command to its defaults
  exit          Exit from archive configuration mode
  log           Logging commands
  maximum       maximum number of backup copies !! the maximum is 14
  no            Negate a command or set its defaults
  path          path for backups
  rollback      Rollback parameters
  time-period   Period of time in minutes to automatically archive the running-config
  write-memory  Enable automatic backup generation during write memory

877wr1#sh archive
The maximum archive configurations allowed is 14.
The next archive file will be named ftp://192.168.1.110/backup/877wr1-1
 Archive #  Name
   1        ftp://192.168.1.110/backup/877wr1-0 <- Most Recent

Friday, August 17, 2012

ESMTP The AUTH Command

The AUTH command is an ESMTP command (SMTP service extension) that is used to authenticate the client to the server. The AUTH command sends the client's username and password to the e-mail server. AUTH can be combined with keywords such as PLAIN, LOGIN, CRAM-MD5 and DIGEST-MD5 (e.g. AUTH LOGIN) to choose an authentication mechanism. The authentication mechanism determines how the client logs in and which level of security is used.

Below, the AUTH LOGIN exchange is described.

S: 220 smtp.server.com Simple Mail Transfer Service Ready
C: EHLO client.example.com
S: 250-smtp.server.com Hello client.example.com
S: 250-SIZE 1000000
S: 250 AUTH LOGIN PLAIN CRAM-MD5
C: AUTH LOGIN
S: 334 VXNlcm5hbWU6
C: adlxdkej  <<<<<<<<<<<<<<<base64 converted username
S: 334 UGFzc3dvcmQ6
C: lkujsefxlj  <<<<<<<<<<<<<<<base64 converted password
S: 235 2.7.0 Authentication successful


The AUTH Command 

Difference Between LOOKUP Function and VLOOKUP in Excel

The vector syntax for LOOKUP looks for a matching value in a range of cells (vertical or horizontal) and returns the value in the matching vector position of the second supplied range. It is similar to VLOOKUP and HLOOKUP; however, it is limited to a single row or column to hold results.

VLOOKUP looks for a matching value in the first column of a range of cells and returns the value from the same row in the column of the range you specify. The range can have multiple columns. LOOKUP would have only one column to choose from.

To describe the difference, I would say LOOKUP has a single column or row range to hold the lookup values, and a single column or row range to hold the return values. The return range does not need to be adjacent to the lookup range, but it can be. VLOOKUP can have multiple columns, the first being the lookup column; the other columns hold the result values and are chosen by the column parameter. The VLOOKUP function uses a single multi-cell range.

What's the difference between LOOKUP function and VLOOKUP in Excel?

Thursday, August 16, 2012

Hybrid Access Layer Design

Below is an interesting LAN switching solution, which covers layer two and layer three requirement.

Hybrid Access Layer Design

Encryption & Cryptographic Hash Function

Encryption

Data Encryption Standard (DES) Key Sizes 56 bits

Triple Data Encryption Algorithm (TDEA or Triple DEA) Key Sizes  168, 112 or 56 bits

Advanced Encryption Standard (AES) Key Sizes  128, 192 or 256 bits

RSA Key Sizes 1,024 to 4,096 bits

RC4  Key Sizes 40–2,048 bits



cryptographic hash function

MD5 Message-Digest Algorithm Digest sizes 128 bits

SHA-1 Digest sizes 160 bits

A cryptographic hash function is a hash function, that is, an algorithm that takes an arbitrary block of data and returns a fixed-size bit string, the (cryptographic) hash value, such that an (accidental or intentional) change to the data will (with very high probability) change the hash value. The data to be encoded is often called the "message," and the hash value is sometimes called the message digest or simply digest.
The ideal cryptographic hash function has four main or significant properties:
  • it is easy to compute the hash value for any given message
  • it is infeasible to generate a message that has a given hash
  • it is infeasible to modify a message without changing the hash
  • it is infeasible to find two different messages with the same hash

Internet Key Exchange

Architecture

Most IPsec implementations consist of an IKE daemon that runs in user space and an IPsec stack in the kernel that processes the actual IP packets.
User-space daemons have easy access to mass storage containing configuration information, such as the IPsec endpoint addresses, keys and certificates, as required. Kernel modules, on the other hand, can process packets efficiently and with minimum overhead—which is important for performance reasons.
The IKE protocol uses UDP packets, usually on port 500, and generally requires 4-6 packets with 2-3 turn-around times to create an SA on both sides. The negotiated key material is then given to the IPsec stack. For instance, this could be an AES key, information identifying the IP endpoints and ports that are to be protected, as well as what type of IPsec tunnel has been created. The IPsec stack, in turn, intercepts the relevant IP packets if and where appropriate and performs encryption/decryption as required. Implementations vary on how the interception of the packets is done—for example, some use virtual devices, others take a slice out of the firewall, etc.

IKE Phases

IKE consists of two phases: phase 1 and phase 2.[10]
IKE phase 1's purpose is to establish a secure authenticated communication channel by using the Diffie–Hellman key exchange algorithm to generate a shared secret key to encrypt further IKE communications. This negotiation results in one single bi-directional ISAKMP Security Association (SA).[11] The authentication can be performed using either pre-shared key (shared secret), signatures, or public key encryption.[12] Phase 1 operates in either Main Mode or Aggressive Mode. Main Mode protects the identity of the peers; Aggressive Mode does not.[10]
During IKE phase 2, the IKE peers use the secure channel established in Phase 1 to negotiate Security Associations on behalf of other services like IPsec. The negotiation results in a minimum of two unidirectional security associations (one inbound and one outbound).[13] Phase 2 operates only in Quick Mode.[10]

 The ISAKMP security association negotiated during Phase 1 includes the negotiation of the following attributes used for subsequent negotiations:

    An encryption algorithm to be used, such as the Data Encryption Standard (DES).

    A hash algorithm (MD5 or SHA, as used by AH or ESP).

    An authentication method, such as authentication using previously shared keys.

    A Diffie-Hellman group. Diffie and Hellman were two pioneers in the industry who invented public-key cryptography. In this method, instead of encrypting and decrypting with the same key, data is encrypted using a public key knowable to anyone, and decrypted using a private key that is kept secret. A Diffie-Hellman group defines the attributes of how to perform this type of cryptography. Four predefined groups derived from OAKLEY are specified in IKE and provision is allowed for defining new groups as well.
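As a concrete illustration of those four attributes, a hedged Cisco IOS Phase 1 (ISAKMP) policy sketch (the values and peer address are chosen arbitrarily):

crypto isakmp policy 10
 encryption aes 256        ! encryption algorithm
 hash sha                  ! hash algorithm
 authentication pre-share  ! authentication method
 group 14                  ! Diffie-Hellman group
!
crypto isakmp key MySharedSecret address 203.0.113.2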


Internet Key Exchange
IPSec Key Exchange (IKE)  

Tuesday, August 14, 2012

Spanning Tree Protocol priorities

Spanning Tree Protocol (STP) is vital for detecting and preventing loops within a switched network. Spanning tree works by designating a common reference point (the root bridge) and systematically building a loop-free tree from the root to all other bridges. All redundant paths remain blocked unless a designated link fails.

Spanning Tree Protocol operation

  1. Select a root bridge.
  2. Determine the least-cost paths to the root bridge.
  3. Disable (block) all other paths toward the root.
  4. Apply tie-breaking rules when costs are equal.


In summary, the sequence of events to determine the best received BPDU (which is your best path to the root) is
  1. Lowest root bridge ID - Determines the root bridge
  2. Lowest cost to the root bridge - Favors the upstream switch with the least cost to root
  3. Lowest sender bridge ID - Serves as a tie breaker if multiple upstream switches have equal cost to root
  4. Lowest sender port ID - Serves as a tie breaker if a switch has multiple (non-Etherchannel) links to a single upstream switch

Bridge ID = priority (16 bits) + ID [MAC address] (48 bits)
default bridge priority is 32768

Port ID = priority (4 bits) + ID [interface number] (12 bits)
default port priority is 128
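A hedged IOS example of tuning these values (the VLAN and interface are arbitrary; bridge priority must be a multiple of 4096 and port priority a multiple of 16):

spanning-tree vlan 10 priority 24576
!
interface FastEthernet0/13
 spanning-tree vlan 10 port-priority 112
 spanning-tree vlan 10 cost 19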

Data rate and STP path cost

The table below shows the default cost of an interface for a given data rate.
Data rate     STP cost (802.1D-1998)   RSTP cost (802.1D-2004 / 802.1w)
4 Mbit/s      250                      5,000,000
10 Mbit/s     100                      2,000,000
16 Mbit/s     62                       1,250,000
100 Mbit/s    19                       200,000
1 Gbit/s      4                        20,000
2 Gbit/s      3                        10,000
10 Gbit/s     2                        2,000

http://en.wikipedia.org/wiki/Spanning_Tree_Protocol
http://packetlife.net/blog/2008/may/5/spanning-tree-protocol-priorities/
http://www.cisco.com/warp/public/473/spanning_tree1.swf

Monday, August 13, 2012

TCL + Cisco IOS + Kron

Task: perform a scheduled system health check on a Cisco router and email the result automatically.

In the sendmail.tcl script, use the statements below to save the output of Cisco IOS show commands into the mail body

set show_clock [exec {show clock}]
set show_ip_interfaces [exec {show interface summary}]

append body "\n" "$show_clock"
append body "\n" "$show_ip_interfaces"

Configuring the router with the Tcl ios_config command

And then you can schedule task using kron

kron occurrence sendmail in 1 oneshot
 policy-list sendmail
!
kron policy-list sendmail
 cli tclsh sendmail.tcl

Cisco Kron +TCL

Friday, August 10, 2012

MAC of Switch and Router

A switch has a base MAC address and a different MAC address for every interface.

MAC addresses need to be unique only within a network; they can be the same in different networks, i.e., the MAC address / Layer 2 information changes with each hop.
1]
"Every interface of the router has the same MAC."
This is expected behaviour: each interface of the router is connected to a different network, so it is highly unlikely that a MAC address clash would take place.

2]
In the case of a switch, "it shows a different MAC for every port."
This is also correct: switches can be used in a Layer 3 as well as a Layer 2 environment, where MAC addresses might clash within the same network. Consider cases like SVI/VLAN interfaces, routed interfaces, or plain Layer 2 interfaces; a unique MAC is therefore required on each interface.



A bridge sends a BPDU frame using the unique MAC address of the port itself as a source address, and a destination address of the STP multicast address 01:80:C2:00:00:00.




There are three types of BPDUs:
  • Configuration BPDU (CBPDU), used for Spanning Tree computation
  • Topology Change Notification (TCN) BPDU, used to announce changes in the network topology
  • Topology Change Notification Acknowledgment (TCA) BPDU, used to acknowledge receipt of a TCN
BPDUs are exchanged regularly (every 2 seconds by default) and enable switches to keep track of network changes and to start and stop forwarding at ports as required.



HWIC-2FE and HWIC-4ESW Q&A

Q. What are the 1- and 2-port Fast Ethernet HWICs?
A. The Cisco® 1- and 2-Port Fast Ethernet High-Speed WAN Interface Cards (HWICs) are singlewide interface cards, available as a 1-port HWIC (HWIC-1FE) and as a 2-port HWIC (HWIC-2FE), that provide Cisco modular and integrated services routers with additional Layer 3 routed ports.


Q. Are there features not supported on the Fast Ethernet HWICs?
A. Yes. Features specifically not supported include Cisco Inter-Switch Link (ISL) trunking, Connectivity Fault Management (CFM), flow control, and online insertion and removal (OIR, hot-swappable).


Q. Can these interfaces be used as switch ports?
A. No, these are native Layer 3 interfaces, designed for routing. They can be configured to bridge using the router CPU. There is no switching application-specific integrated circuit (ASIC), nor are switching features supported.
Cisco 1- and 2-port Fast Ethernet High-Speed


Q. What are the 4- and 9-port Cisco® EtherSwitch® high-speed WAN interface cards (HWICs)?
A. The 4- and 9-port Cisco EtherSwitch HWICs are modular HWICs that provide line-rate Layer 2 switching across Ethernet ports using Cisco IOS® Catalyst® Software.

Q. Can I assign each switch port to a unique VLAN? If so, are there any limitations?
A. Each switch port can be assigned to its own VLAN, effectively providing four additional routed ports. However, there are serious performance and feature limitations to doing this. The VLAN interfaces are truly Layer 3 switching interfaces and are treated uniquely among interface types on the router. Many features are NOT supported or tested on these interfaces, including Point-to-Point Protocol over Ethernet (PPPOE) termination, Layer 2 Tunneling Protocol Version 3 (L2TPv3) termination, MAC address assignment, Layer 3 QoS, and others. You should carefully test any desired feature and solution prior to deploying it.

Q. What is the connection speed to the router backplane of the EtherSwitch HWICs?
A. The 4-port HWIC connects to the backplane with a maximum throughput of 100 Mbps, while the 9-port HWIC can support a maximum bandwidth of 200 Mbps. Actual performance will depend on many factors, including performance of the hosting router, other services configured on the hosting router, and the type of traffic stream being generated.

Q. What is intra-chassis stacking?
A. Intra-chassis stacking is defined as the ability to have multiple Cisco EtherSwitch HWICs connected with any two Cisco EtherSwitch ports in the same router. An example of intra-chassis stacking is placing two Cisco EtherSwitch HWICs in the same router connected together through any four ports on the HWICs.
Intra-chassis stacking is limited to two HWICs in any router. The HWICs must be connected externally using the Fast Ethernet interfaces and a crossover cable. Intra-chassis stacking allows all the Fast Ethernet interfaces on the two HWICs to participate in the same Layer 2 domain.

Q. What is the maximum number of VLANs supported for the Cisco EtherSwitch HWICs?
A. Both Cisco EtherSwitch HWICs support up to 15 VLANs on the Cisco Integrated Services Routers

Q. Is online insertion and removal (OIR) supported for the Cisco EtherSwitch HWICs?
A. The HWIC architecture does NOT support the OIR specification. OIR for the 4- and 9-port HWICs is not supported on the Cisco Integrated Services Routers.
Cisco EtherSwitch 4- and 9-Port High-Speed WAN Interface Cards

Q. What is a Cisco® Enhanced High-Speed WAN Interface Card (EHWIC)?
A. The Cisco Enhanced High-Speed WAN Interface Card (EHWIC) is an updated and enhanced version of the current HWIC for the Cisco Integrated Services Router Generation 2 (ISR G2). The EHWIC offers greater speeds (up to 800 Mbps bidirectionally) and higher port density than the current HWIC. It also has a third row of pins for increased power to the cards, as well as support for Enhanced Power over Ethernet (EPoE) with up to 20 watts per port. Furthermore, the EHWICs have a connection to the traditional router CPU and the new Multi-Gigabit Fabric (MGF) backplane. EHWICs are available in single-wide and double-wide form factors.
Cisco Enhanced High-Speed WAN Interface Cards

Overview of Cisco Interface Cards for Cisco Access Routers

Wednesday, August 8, 2012

Cisco Integrated Services Routers Generation 2 (ISR G2)

Cisco Integrated Services Routers Generation 2


Platform list:
Cisco 3900 Series
Cisco 2900 Series
Cisco 1900 Series
Cisco 890, 880, 860 Series
http://www.cisco.com/en/US/prod/collateral/routers/ps10538/aag_c45_556315.pdf
Cisco Integrated Services Routers Generation 2


Software Activation Terminology and Details
Universal Image
Each 1900, 2900 and 3900 system is loaded with a universal Cisco IOS image. The universal IOS image contains all Cisco IOS features; the level of Cisco IOS functionality available is determined by the combination of one or more licenses installed on the device.
There will be two versions of universal images supported on the next generation ISRs.
1. Universal images with the "universalk9" designation in the image name: This universal image offers all the Cisco IOS features including strong crypto features such as VPN payload, Secure UC etc.
2. Universal images with the "universalk9_npe" designation in the image name: The robust licensing encryption solution provided by Cisco Software Activation satisfies requirements for the export of encryption capabilities. However, some countries have import requirements that the device not support any strong crypto functionality, such as VPN payload, in any form. To satisfy the import requirements of those countries, this universal image does not support any strong payload encryption such as VPN payload, secure voice, etc. This image supports threat-defense features through the SECNPE-K9 license.
Unique Device Identifier (UDI)
The Unique Device Identifier is made up of two components: the Product ID (PID) and the Serial Number (SN). The serial number is an 11-digit number that uniquely identifies a device; the product ID identifies the type of device. This information can be found using the "show license udi" command on the router CLI, and it is also present on a pull-out label tray on the device. You may have to remove the "V01" that follows the PID; e.g., use only "CISCO2921/K9" instead of "CISCO2921/K9 V01".

Q. What tunnel count and performance throughput are available on the Cisco ISR G2 routers with the SECK9 license?
A. The SEC-K9 permanent licenses apply to the Cisco 1900, 2900, and 3900 ISR G2 platforms; these licenses limit all encrypted tunnel counts to 225 tunnels maximum for IP Security (IPsec), Secure Sockets Layer VPN (SSLVPN), a secure time-division multiplexing (TDM) gateway, and secure Cisco Unified Border Element (CUBE) and 1000 tunnels for Transport Layer Security (TLS) sessions.
The SEC-K9 license limits encrypted throughput to less than or equal to 85-Mbps unidirectional traffic in or out of the ISR G2 router, with a bidirectional total of 170 Mbps. This requirement applies for the Cisco 1900, 2900, and 3900 ISR G2 platforms.
 








TCP MSS Adjustment


When a host (usually a PC) initiates a TCP session with a server, it negotiates the IP segment size by using the MSS option field in the TCP SYN packet. The value of the MSS field is determined by the maximum transmission unit (MTU) configuration on the host; with the default host MTU of 1500 bytes, the advertised MSS is 1460 bytes.
The PPP over Ethernet (PPPoE) standard supports an MTU of only 1492 bytes. The disparity between the host and PPPoE MTU sizes can cause the router between the host and the server to drop 1500-byte packets and terminate TCP sessions over the PPPoE network. Even if path MTU discovery (which detects the correct MTU across the path) is enabled on the host, sessions may be dropped because system administrators sometimes disable the ICMP error messages that must be relayed back to the host in order for path MTU discovery to work.
The ip tcp adjust-mss command helps prevent TCP sessions from being dropped by adjusting the MSS value of the TCP SYN packets.
The ip tcp adjust-mss command is effective only for TCP connections passing through the router.
In most cases, the optimum value for the max-segment-size argument is 1452 bytes. This value plus the 20-byte IP header, the 20-byte TCP header, and the 8-byte PPPoE header add up to a 1500-byte packet that matches the MTU size for the Ethernet link.
If you are configuring the ip mtu command on the same interface as the ip tcp adjust-mss command, it is recommended that you use the following commands and values:
ip tcp adjust-mss 1452
ip mtu 1492
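Applied in interface configuration, that would look like this hedged example (Dialer0 is an assumption for a PPPoE client):

interface Dialer0
 ip mtu 1492
 ip tcp adjust-mss 1452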

Monday, August 6, 2012

Tunnel Mode SSL VPN

interface Loopback252
 description Cisco SSL VPN Client for WebVPN
 ip address 192.168.4.1 255.255.255.0

interface Virtual-Template2
 ip unnumbered Loopback252
 ip nat inside
 ip virtual-reassembly
!

ip local pool ILP_WVPN_CLIENT 192.168.4.100 192.168.4.105

webvpn gateway ssl-gw1
 hostname webvpn1
 ip interface Dialer0 port 443
 ssl trustpoint SSL
 inservice
 !
webvpn install svc flash:/webvpn/sslclient-win-1.1.4.176.pkg sequence 1
 !

webvpn context vpn1
 title "Welcome"
 secondary-color black
 title-color black
 ssl authenticate verify all
 !

policy group vpn1
   functions svc-enabled
   svc address-pool "ILP_WVPN_CLIENT"
   svc default-domain "cisco.com"
   svc keep-client-installed
   svc split exclude local-lans
   svc split dns "yourLocalDomain.com" ! this domain will be resolved by the tunnel DNS
   svc split exclude 10.0.0.0 255.0.0.0 ! exclude your local network
   svc dns-server primary 192.168.4.1
   svc dns-server secondary 8.8.8.8
 virtual-template 2
 default-group-policy vpn1
 gateway ssl-gw1
 inservice
!
end

P.S. The "svc split exclude" and "svc split include" statements can NOT be used at the same time.

SSL VPN
SSL VPN in IOS 12.4T
Cisco SSL VPN Configuration ( easy / simple example )
Cisco IOS SSL VPN Policy Groups
AnyConnect VPN Client on IOS Router with IOS Zone Based Policy Firewall Configuration Example
Configuring Cisco SSL VPN AnyConnect (WebVPN) on Cisco IOS Routers

Thursday, August 2, 2012

Using FVRF and IVRF in DMVPN

Using FVRF and IVRF in DMVPN


1. OVERVIEW
This document provides configuration guidance for users of Cisco® Dynamic Multipoint VPN (DMVPN) technology on Cisco IOS® IPSec routers. The Cisco 7600 Series platform is an exception because it does not support FVRF; the IVRF configuration described below will work on the Cisco 7600 Series and the Cisco Catalyst® 6500 Series as well. The testing was performed on Cisco 1841 integrated services routers running Cisco IOS Software Release 12.3(11)T3. The objective of the testing was to configure and test the interaction of DMVPN with front VRF (FVRF) as well as internal VRF (IVRF).
Advantage: The advantage of using an FVRF is primarily to carve out a separate routing table from the global routing table (where the tunnel interface exists). The advantage of using an IVRF is to define a private space to hold the DMVPN and private network information. Both of these configurations provide extra security against anyone trying to attack the router from the Internet by separating out routing information. These VRF configurations can be used on both DMVPN hubs and spokes.
What is the configuration difference? In case of FVRF, the tunnel destination lookup needs to be done in FVRF. Secondly, since the Internet-facing interface is in a VRF, the ISAKMP key lookup is also done in the VRF. As for using IVRF, the tunnel, private subnets, and routing protocol need to be defined in the IVRF space. The tunnel destination and ISAKMP key are looked up in global space for this scenario.
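A hedged fragment showing the two FVRF-specific pieces mentioned above, the VRF-aware ISAKMP keyring and the tunnel vrf lookup (names and addresses are examples, and the IPsec profile is assumed to be defined elsewhere):

ip vrf FVRF
 rd 100:1
!
crypto keyring DMVPN-KEYS vrf FVRF
 pre-shared-key address 0.0.0.0 0.0.0.0 key MySecretKey
!
interface GigabitEthernet0/0
 description Internet-facing interface in the front VRF
 ip vrf forwarding FVRF
 ip address 203.0.113.1 255.255.255.0
!
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel vrf FVRF
 tunnel protection ipsec profile DMVPN-PROF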

Wednesday, August 1, 2012

DOS batch ping multiple hosts script

DOS batch ping multiple hosts script

Name: batchping.bat

You also need myhosts.txt, which should contain the host IPs that you want to ping; the results will be logged to batchping.log.

---script begin---
@echo off
del /Q batchping.log
for /f %%i in (myhosts.txt) do call :pingit %%i

:pingit
if "%1"=="" goto END
ping -n 1 %1 >nul
if errorlevel 1 goto FAIL
echo %1 - is good >> batchping.log
goto END

:FAIL
echo %1 - is not pingable >> batchping.log

:END

---script end---

http://forums.hexus.net/networking-broadband/116568-dos-windows-ping-utility-multiple-hosts.html
http://www.krishnababug.com/2009/09/ping-mutiple-ips-bat-file.html

Wednesday, July 25, 2012

Understanding Packet Counters in show policy-map interface Output


Cisco IOS, also referred to as the Layer 3 (L3) processor, and the interface driver use the transmit ring when moving packets to the physical media. The two processors collaborate in this way:
  • The interface transmits packets in accordance with the interface rate or a shaped rate.
  • The interface maintains a hardware queue or transmit ring, where it stores the packets that wait for transmission onto the physical wire.
  • When the hardware queue or transmit ring fills, the interface provides explicit back pressure to the L3 processor system. The interface notifies the L3 processor to stop dequeuing packets to the interface transmit ring because the transmit ring is full. The L3 processor now stores the excess packets in the L3 queues.
  • When the interface sends the packets on the transmit ring and empties the ring, it once again has sufficient buffers available to store the packets. It releases the back pressure, and the L3 processor dequeues new packets to the interface.
The most important aspect of this communication system is that the interface recognizes that its transmit ring is full and throttles the receipt of new packets from the L3 processor system. Thus, when the interface is congested, the drop decision is moved from a random, last-in/first-dropped decision in the transmit ring first in, first out (FIFO) queue to a differentiated decision based on IP-level service policies implemented by the L3 processor.

What Is the Difference Between "Packets" and "Packets Matched"?

Next, you need to understand when your router uses the L3 queues, since service policies apply only to packets stored in the layer-3 queues.
This table illustrates when packets sit in the L3 queue. Locally generated packets are always process-switched and are delivered first to the L3 queue before they are passed on to the interface driver. Fast-switched and Cisco Express Forwarding (CEF)-switched packets are delivered directly to the transmit ring and sit in the L3 queue only when the transmit ring is full.


Packet Type                                                      Congestion   Non-Congestion
Locally generated packets (including Telnet packets and pings)   Yes          Yes
Other packets that are process-switched                          Yes          Yes
Packets that are CEF- or fast-switched                           Yes          No



Without congestion, there is no need to queue any excess packets. With congestion, packets, which includes CEF- and fast-switched packets, may go into the L3 queue. Refer back to how the Cisco IOS configuration guide defines congestion: "If you use congestion management features, packets accumulating at an interface are queued until the interface is free to send them; they are then scheduled according to their assigned priority and the queuing mechanism configured for the interface."
Normally, the "packets" counter is much larger than the "pkts matched" counter. If the values of the two counters are nearly equal, then the interface currently receives a large number of process-switched packets or is heavily congested. Both of these conditions should be investigated to ensure optimal packet forwarding.


Understanding Packet Counters in show policy-map interface Output

Saturday, July 21, 2012

DMVPN, NHRP, RRI

Dynamic Multipoint Virtual Private Network (DMVPN)[1] is a dynamic tunneling form of a virtual private network (VPN) supported on Cisco IOS-based routers and based on the standard protocols GRE, NHRP and IPsec. DMVPN provides the capability to create a dynamic-mesh VPN network without having to statically pre-configure all possible tunnel endpoint peers, including IPsec (Internet Protocol Security) and ISAKMP (Internet Security Association and Key Management Protocol) peers. DMVPN is initially configured to build out a hub-and-spoke network by statically configuring the hubs (VPN headends) on the spokes; no change in the configuration on the hub is required to accept new spokes. Using this initial hub-and-spoke network, tunnels between spokes can be dynamically built on demand (dynamic mesh) without additional configuration on the hubs or spokes. This dynamic-mesh capability alleviates the need for, and the load on, the hub to route data between the spoke networks.
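A minimal spoke-side mGRE/NHRP sketch of the static hub configuration being described (addresses are placeholders and the IPsec profile is assumed to exist):

interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 ip nhrp map 10.0.0.1 203.0.113.1   ! static mapping of the hub's tunnel IP to its public IP
 ip nhrp map multicast 203.0.113.1
 ip nhrp nhs 10.0.0.1               ! register with the hub (next-hop server)
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN-PROF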

Dynamic Multipoint Virtual Private Network
Cisco IOS DMVPN Overview
DMVPN Explained
Dynamic Multipoint VPN (DMVPN)
Configuring Dynamic Multipoint VPN (DMVPN) using GRE over IPSec between Multiple Routers

Next Hop Resolution Protocol (NHRP) is sometimes used to improve the efficiency of routing computer network traffic over Non-Broadcast, Multiple Access (NBMA) Networks. It is defined in IETF RFC 2332, and further described in RFC 2333.
Configuring NHRP


Reverse route injection (RRI) is the ability for static routes to be automatically inserted into the routing process for those networks and hosts protected by a remote tunnel endpoint. These protected hosts and networks are known as remote proxy identities.

Each route is created on the basis of the remote proxy network and mask, with the next hop to this network being the remote tunnel endpoint. By using the remote Virtual Private Network (VPN) router as the next hop, the traffic is forced through the crypto process to be encrypted.

Enhancements to the default behavior of RRI, the addition of a route tag value, and enhancements to how RRI is configured were added to the Reverse Route Injection feature in Cisco IOS Release 12.3(14)T.

An enhancement was added in Cisco IOS Release 12.4(15)T that allows a distance metric to be set for routes that are created by a VPN process so that the dynamically learned route on a router can take precedence over a locally configured static route.
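A hedged fragment showing where RRI is typically enabled on a headend dynamic crypto map (names are placeholders):

crypto dynamic-map DYNMAP 10
 set transform-set TS
 reverse-route
!
crypto map VPNMAP 65535 ipsec-isakmp dynamic DYNMAP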

Reverse Route Injection

Thursday, July 19, 2012

SSL VPN over DDNS on Cisco 877W

conf t
 ip domain name domain.com
 cry key gen rsa general-keys label SSL_VPN mod 1024
 crypto pki trustpoint SSL
 enrollment selfsigned
 fqdn none
 subject-name CN=domain.com
 revocation-check crl
 rsakeypair SSL_VPN
 cry pki enr SSL

webvpn gateway ssl-gw1
ip interface Dialer0 port 443
hostname webvpn1
ssl trustpoint SSL
inservice
!

webvpn context vpn1
ssl authenticate verify all
!
url-list "eng"
   url-text "wwwin-eng" url-value "http://wwwin-eng.cisco.com"
!
policy group vpn1
   url-list "eng"
!
 port-forward "MGMT"
   local-port 3000 remote-server "192.168.1.110" remote-port 8080 description "MGMT"
 !
 port-forward "SHARE"
   local-port 3001 remote-server "192.168.1.110" remote-port 80 description "SHARE"
 !
default-group-policy vpn1
gateway ssl-gw1
inservice
!

ip http server
ip http secure-server
ip http access-class 6

SSL VPN and Dynamic DNS - ddns on IOS
Cisco IOS SSL VPN Gateways and Contexts
Cisco SSL VPN Configuration
Cisco VPN Client and Thin-Client SSL VPN (WebVPN) in the same 877 router
Downloading and Installing Cisco Router and Security Device Manager