Tuesday, 9 September 2014

VPN Options

Last night, I tried running the Voice VLAN over IPSec using CCP, and it was deemed successful. All the other VLAN traffic behaved as expected, going through NAT, while only the Voice VLAN went directly from HQ to the Remote site, with ICMP packets delivered directly to the internal host at each end.

Tonight, I tried implementing IPSec over a GRE tunnel. Everything went along as expected except that it removed the NAT at both CE Router ends and tried re-routing every VLAN's traffic through the tunnel. That may seem right from a strict security standpoint, but it wasn't the behaviour I wanted. The only traffic that needs to go through the tunnel is the voice traffic generated by the Voice VLANs at both ends.

I then tried configuring the GRE tunnel manually, which now works well. IPSec traffic is running through the GRE tunnel with EIGRP 101 as its routing protocol.
ICMP packets are successfully traversing the network end to end.
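
For the record, a minimal sketch of one way to wire this up on the CE-HQ end is shown below. The peer address matches CE-REMOTE's public link address from the earlier connectivity tests; the tunnel subnet, pre-shared key, interface names and Voice VLAN subnet are placeholders for illustration, not the exact lab values.

! CE-HQ - GRE tunnel protected by IPsec, with EIGRP 101 over the tunnel
crypto isakmp policy 10
 encryption aes
 authentication pre-share
 group 2
crypto isakmp key TUNNELKEY address 209.165.200.2
!
crypto ipsec transform-set GRE-SET esp-aes esp-sha-hmac
!
crypto ipsec profile GRE-PROT
 set transform-set GRE-SET
!
interface Tunnel0
 ip address 192.168.100.1 255.255.255.252
 tunnel source Serial0/0/0
 tunnel destination 209.165.200.2
 tunnel protection ipsec profile GRE-PROT
!
! EIGRP 101 advertises only the tunnel and the Voice VLAN subnet,
! so only voice traffic is drawn into the encrypted path
router eigrp 101
 network 192.168.100.0 0.0.0.3
 network 172.16.10.0 0.0.0.255
 no auto-summary

The crypto profile protects the GRE tunnel itself, while EIGRP decides what gets routed into it, which is how the per-VLAN split works without touching the existing NAT configuration.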

Working on the voice configuration now; hopefully the CME feature set on this router allows me to implement more calling features.


Monday, 8 September 2014

VPN Traffic

After deciding to change the VPN host following yesterday's implementation phase, I tried implementing IPSec today so that the tunnel is hosted on the CE Routers at either end.

The core MPLS network is left running in its forwarding state, carrying VLANs 20, 30 and 40 (Data, Executive and Server) from the HQ site to the remote site's VLANs 60, 70 and 80, which serve the same roles as at HQ. This VLAN traffic is put through NAT translation at the CE Router on both ends and transmitted through the MPLS core to each destination site. VLANs 10 and 50, the Voice VLANs at each site, are to be transmitted through the VPN tunnel instead.

The IPSec VPN tunnel was configured today using CCP and tested, and each VLAN was able to send ICMP traffic end to end directly to the PCs in the internal network.
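
Under the hood, CCP generates a policy-based (crypto map) configuration. A hand-written equivalent would look roughly like the sketch below; the Voice VLAN subnets (172.16.10.0/24 for HQ VLAN 10 and 172.16.50.0/24 for remote VLAN 50), the interface name and the NAT pool name are assumptions for illustration.

! CE-HQ - only traffic between the two Voice VLANs matches the crypto ACL
access-list 110 permit ip 172.16.10.0 0.0.0.255 172.16.50.0 0.0.0.255
!
crypto isakmp policy 10
 encryption aes
 authentication pre-share
 group 2
crypto isakmp key VOICEKEY address 209.165.200.2
!
crypto ipsec transform-set VPN-SET esp-aes esp-sha-hmac
!
crypto map VPN-MAP 10 ipsec-isakmp
 set peer 209.165.200.2
 set transform-set VPN-SET
 match address 110
!
interface Serial0/0/0
 crypto map VPN-MAP
!
! The NAT list denies the same voice pair, so voice enters the tunnel
! untranslated while every other VLAN is still translated
access-list 120 deny   ip 172.16.10.0 0.0.0.255 172.16.50.0 0.0.0.255
access-list 120 permit ip any any
ip nat inside source list 120 pool HQ-POOL overload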

Next Phase:
Implement voice, configure and test calls, implement calling features, and measure and report on network performance.

Sunday, 7 September 2014

MPLS VPN Configuration

Today, my task was to implement the MPLS VPN configuration, configure voice, and allow other traffic through a normal VPN. Upon further research on MPLS VPN, it was confirmed that the VPN tunnel for MPLS is established from the ISP's PE router (Provider Edge router), so the majority of your VPN traffic ends up controlled by the ISP. Some typical disadvantages are listed below:
  • Your routing protocol choice might be limited.
  • Your end-to-end convergence is controlled primarily by the service provider.
  • The reliability of your L3 MPLS VPN is influenced by the service provider's competence level.
  • Deciding to use MPLS VPN services from a particular service provider also creates a very significant lock-in. It’s hard to change the provider when it’s operating your network core.
Considering security, I'd prefer having control of my traffic in-house, so all other traffic will now continue to run through the core MPLS switching path, except voice, which will be routed from the CE Router (Customer Edge router) to the end destination.

It took a while to work out the basic principles of MPLS VPN and its configuration and then apply them to the main project implementation frame. After the finding mentioned above, I will now have to change the VPN plan to carry voice traffic from the CE router at each end instead of passing it through the MPLS core.

Next Phase: 
- Configure Voice calls site to site and within sites
- Apply voice traffic through VPN (configure VPN using CCP)
- Test voice-quality scoring and performance

Ref:
MPLS VPN fundamentals, TechTarget SearchEnterpriseWAN. Retrieved from http://searchenterprisewan.techtarget.com/guides/MPLS-VPN-fundamentals at 7:30pm, Sunday 7 September 2014.






Sunday, 31 August 2014

VLAN Configuration

VLAN administration is always a challenging part of my networking career, and today's task involves the following:
- Create four different VLANs, namely Voice, Data, Executive and Server
- Create an access-list allowing all other VLANs except Voice, which is to be transported via VPN (see the sketch after this list)
- Create dynamic and static global pools for NAT translation
- Test connectivity from HQ to the Remote site using the pool of translated addresses.
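
A rough outline of the switch and NAT side of that task is shown below; the VLAN IDs come from the design, while the Voice subnet, ACL number and summary wildcard are placeholders.

! Access switch - create the four HQ VLANs
vlan 10
 name Voice
vlan 20
 name Data
vlan 30
 name Executive
vlan 40
 name Server
!
! CE-HQ - NAT access-list: everything except the Voice VLAN subnet
! is eligible for translation (Voice is reserved for the VPN)
access-list 100 deny   ip 172.16.10.0 0.0.0.255 any
access-list 100 permit ip 172.16.0.0 0.0.255.255 any

The access-list is then referenced by the dynamic NAT statement (ip nat inside source list 100 pool ... overload), so all VLANs except Voice keep being translated on their way across the core.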

All VLANs were successfully created and were able to route ICMP traffic to one another. A challenge arose when HQ was able to ping the Remote site successfully, including its translated address, whereas the Remote site couldn't ping back.

I troubleshot this problem using a comparison method and a shoot-from-the-hip method. It took quite a while to finally figure out the stage that caused the drop in ICMP packets from the Remote site to HQ.

After troubleshooting, the translated address can now be pinged both ways from either site.

With the MPLS core and VLANs all functioning accordingly, the next phase of implementation is to configure voice on the network.


Sunday, 24 August 2014

MPLS Update

Managed to get MPLS running in my core network today. Ran a few show commands for basic analysis and will be implementing the MPLS core switching configuration in the week to come.
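
For reference, getting basic label switching running on a router like PE-HQ is essentially the snippet below. The addresses are taken from the show outputs further down, although whether 10.2.1.1 actually sits on a loopback is an assumption; treat it as a sketch rather than the exact running configuration.

! PE-HQ - CEF must be enabled, LDP wants a stable router-ID,
! then MPLS is switched on per core-facing interface
ip cef
!
interface Loopback0
 ip address 10.2.1.1 255.255.255.0
!
mpls ldp router-id Loopback0 force
!
interface Serial0/0/1
 ip address 10.10.10.1 255.255.255.252
 mpls ip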

Anyway, I was a bit stuck today when I tried running the VLAN configuration for voice. Once that is fixed, I should be able to implement voice, and traffic tests will be up soon.

Find below some show command results and a Tcl script run after implementing MPLS.

PE-HQ#traceroute 209.165.200.225

Type escape sequence to abort.
Tracing the route to 209.165.200.225

  1 10.10.10.2 [MPLS: Label 21 Exp 0] 60 msec 60 msec 60 msec
  2 10.10.20.1 [MPLS: Label 21 Exp 0] 52 msec 48 msec 48 msec
  3 209.165.200.2 24 msec 20 msec 24 msec
  4 209.165.200.225 20 msec 20 msec 24 msec



P-ISPSW#traceroute 209.165.100.225

Type escape sequence to abort.
Tracing the route to 209.165.100.225

  1 10.10.10.1 [MPLS: Label 21 Exp 0] 12 msec 12 msec 16 msec
  2 209.165.100.2 16 msec 12 msec 16 msec
  3 209.165.100.225 16 msec 16 msec 12 msec


P-ISPSW#traceroute 209.165.200.225

Type escape sequence to abort.
Tracing the route to 209.165.200.225

  1 10.10.20.1 [MPLS: Label 21 Exp 0] 36 msec 36 msec 36 msec
  2 209.165.200.2 16 msec 12 msec 16 msec
  3 209.165.200.225 16 msec 12 msec 12 msec


PE-REMOTE#traceroute 209.165.100.225

Type escape sequence to abort.
Tracing the route to 209.165.100.225

  1 10.10.20.2 [MPLS: Label 19 Exp 0] 60 msec 60 msec 60 msec
  2 10.10.10.1 [MPLS: Label 21 Exp 0] 28 msec 28 msec 28 msec
  3 209.165.100.2 24 msec 20 msec 20 msec
  4 209.165.100.225 20 msec 20 msec 24 msec



PE-HQ SHOW COMMANDS

PE-HQ#show mpls int
PE-HQ#show mpls interfaces
Interface              IP            Tunnel   Operational
Serial0/0/1            Yes (ldp)     No       Yes
PE-HQ#
PE-HQ#


PE-HQ#show mpls ldp discovery
 Local LDP Identifier:
    10.2.1.1:0
    Discovery Sources:
    Interfaces:
        Serial0/0/1 (ldp): xmit/recv
            LDP Id: 10.1.1.1:0; no host route


PE-HQ#show mpls ldp nei
PE-HQ#show mpls ldp neighbor
    Peer LDP Ident: 10.1.1.1:0; Local LDP Ident 10.2.1.1:0
        TCP connection: 10.1.1.1.646 - 10.2.1.1.24613
        State: Oper; Msgs sent/rcvd: 30/30; Downstream
        Up time: 00:16:14
        LDP discovery sources:
          Serial0/0/1, Src IP addr: 10.10.10.2
        Addresses bound to peer LDP Ident:
          10.10.10.2      10.10.20.2      10.1.1.1


PE-HQ#show mpls ldp bin
PE-HQ#show mpls ldp bindings
  tib entry: 10.1.1.0/24, rev 12
        local binding:  tag: 19
        remote binding: tsr: 10.1.1.1:0, tag: imp-null
  tib entry: 10.2.1.0/24, rev 8
        local binding:  tag: imp-null
        remote binding: tsr: 10.1.1.1:0, tag: 16
  tib entry: 10.3.1.0/24, rev 6
        local binding:  tag: 18
        remote binding: tsr: 10.1.1.1:0, tag: 17
  tib entry: 10.10.10.0/30, rev 10
        local binding:  tag: imp-null
        remote binding: tsr: 10.1.1.1:0, tag: imp-null
  tib entry: 10.10.20.0/30, rev 14
        local binding:  tag: 20
        remote binding: tsr: 10.1.1.1:0, tag: imp-null
  tib entry: 209.165.100.0/30, rev 16
        local binding:  tag: imp-null
        remote binding: tsr: 10.1.1.1:0, tag: 18
  tib entry: 209.165.100.224/27, rev 18
        local binding:  tag: 21
        remote binding: tsr: 10.1.1.1:0, tag: 19
  tib entry: 209.165.200.0/30, rev 4
        local binding:  tag: 17
        remote binding: tsr: 10.1.1.1:0, tag: 20
  tib entry: 209.165.200.224/27, rev 2
        local binding:  tag: 16
        remote binding: tsr: 10.1.1.1:0, tag: 21



P-ISPSW SHOW COMMANDS

P-ISPSW#show mpls interfaces
Interface              IP            Tunnel   BGP Static Operational
Serial0/0/0            Yes (ldp)     No       No  No     Yes
Serial0/0/1            Yes (ldp)     No       No  No     Yes
P-ISPSW#
P-ISPSW#
P-ISPSW#show mpls ldp di
P-ISPSW#show mpls ldp discovery
 Local LDP Identifier:
    10.1.1.1:0
    Discovery Sources:
    Interfaces:
        Serial0/0/0 (ldp): xmit/recv
            LDP Id: 10.2.1.1:0; no host route
        Serial0/0/1 (ldp): xmit/recv
            LDP Id: 10.3.1.1:0; no host route
P-ISPSW#
P-ISPSW#
P-ISPSW#show mpls ldp nei
P-ISPSW#show mpls ldp neighbor
    Peer LDP Ident: 10.3.1.1:0; Local LDP Ident 10.1.1.1:0
        TCP connection: 10.3.1.1.29031 - 10.1.1.1.646
        State: Oper; Msgs sent/rcvd: 36/36; Downstream
        Up time: 00:21:19
        LDP discovery sources:
          Serial0/0/1, Src IP addr: 10.10.20.1
        Addresses bound to peer LDP Ident:
          10.10.20.1      209.165.200.1   10.3.1.1
    Peer LDP Ident: 10.2.1.1:0; Local LDP Ident 10.1.1.1:0
        TCP connection: 10.2.1.1.24613 - 10.1.1.1.646
        State: Oper; Msgs sent/rcvd: 35/35; Downstream
        Up time: 00:20:44
        LDP discovery sources:
          Serial0/0/0, Src IP addr: 10.10.10.1
        Addresses bound to peer LDP Ident:
          209.165.100.1   10.10.10.1      10.2.1.1
P-ISPSW#
P-ISPSW#
P-ISPSW#show mpls ldp bind
P-ISPSW#show mpls ldp bindings
  lib entry: 10.1.1.0/24, rev 2
        local binding:  label: imp-null
        remote binding: lsr: 10.3.1.1:0, label: 16
        remote binding: lsr: 10.2.1.1:0, label: 19
  lib entry: 10.2.1.0/24, rev 4
        local binding:  label: 16
        remote binding: lsr: 10.3.1.1:0, label: 17
        remote binding: lsr: 10.2.1.1:0, label: imp-null
  lib entry: 10.3.1.0/24, rev 6
        local binding:  label: 17
        remote binding: lsr: 10.3.1.1:0, label: imp-null
        remote binding: lsr: 10.2.1.1:0, label: 18
  lib entry: 10.10.10.0/30, rev 8
        local binding:  label: imp-null
        remote binding: lsr: 10.3.1.1:0, label: 18
        remote binding: lsr: 10.2.1.1:0, label: imp-null
  lib entry: 10.10.20.0/30, rev 10
        local binding:  label: imp-null
        remote binding: lsr: 10.3.1.1:0, label: imp-null
        remote binding: lsr: 10.2.1.1:0, label: 20
  lib entry: 209.165.100.0/30, rev 12
        local binding:  label: 18
        remote binding: lsr: 10.3.1.1:0, label: 19
        remote binding: lsr: 10.2.1.1:0, label: imp-null
  lib entry: 209.165.100.224/27, rev 14
        local binding:  label: 19
        remote binding: lsr: 10.3.1.1:0, label: 20
        remote binding: lsr: 10.2.1.1:0, label: 21
  lib entry: 209.165.200.0/30, rev 16
        local binding:  label: 20
        remote binding: lsr: 10.3.1.1:0, label: imp-null
        remote binding: lsr: 10.2.1.1:0, label: 17
  lib entry: 209.165.200.224/27, rev 18
        local binding:  label: 21
        remote binding: lsr: 10.3.1.1:0, label: 21
        remote binding: lsr: 10.2.1.1:0, label: 16




PE-REMOTE SHOW COMMANDS

PE-REMOTE#show mpls interfaces
Interface              IP            Tunnel   BGP Static Operational
Serial0/0/0            Yes (ldp)     No       No  No     Yes
PE-REMOTE#
PE-REMOTE#
PE-REMOTE#
PE-REMOTE#show mpls ldp dis
PE-REMOTE#show mpls ldp discovery
 Local LDP Identifier:
    10.3.1.1:0
    Discovery Sources:
    Interfaces:
        Serial0/0/0 (ldp): xmit/recv
            LDP Id: 10.1.1.1:0; no host route
PE-REMOTE#
PE-REMOTE#
PE-REMOTE#
PE-REMOTE#show mpls ldp nei
PE-REMOTE#show mpls ldp neighbor
    Peer LDP Ident: 10.1.1.1:0; Local LDP Ident 10.3.1.1:0
        TCP connection: 10.1.1.1.646 - 10.3.1.1.29031
        State: Oper; Msgs sent/rcvd: 40/40; Downstream
        Up time: 00:24:30
        LDP discovery sources:
          Serial0/0/0, Src IP addr: 10.10.20.2
        Addresses bound to peer LDP Ident:
          10.10.10.2      10.10.20.2      10.1.1.1
PE-REMOTE#
PE-REMOTE#
PE-REMOTE#
PE-REMOTE#show mpls ldp bind
PE-REMOTE#show mpls ldp bindings
  lib entry: 10.1.1.0/24, rev 2
        local binding:  label: 16
        remote binding: lsr: 10.1.1.1:0, label: imp-null
  lib entry: 10.2.1.0/24, rev 4
        local binding:  label: 17
        remote binding: lsr: 10.1.1.1:0, label: 16
  lib entry: 10.3.1.0/24, rev 6
        local binding:  label: imp-null
        remote binding: lsr: 10.1.1.1:0, label: 17
  lib entry: 10.10.10.0/30, rev 8
        local binding:  label: 18
        remote binding: lsr: 10.1.1.1:0, label: imp-null
  lib entry: 10.10.20.0/30, rev 10
        local binding:  label: imp-null
        remote binding: lsr: 10.1.1.1:0, label: imp-null
  lib entry: 209.165.100.0/30, rev 12
        local binding:  label: 19
        remote binding: lsr: 10.1.1.1:0, label: 18
  lib entry: 209.165.100.224/27, rev 14
        local binding:  label: 20
        remote binding: lsr: 10.1.1.1:0, label: 19
  lib entry: 209.165.200.0/30, rev 16
        local binding:  label: imp-null
        remote binding: lsr: 10.1.1.1:0, label: 20
  lib entry: 209.165.200.224/27, rev 18
        local binding:  label: 21
        remote binding: lsr: 10.1.1.1:0, label: 21


TCLSH SCRIPT

HQ to REMOTE

foreach address {
209.165.100.2
209.165.100.1
10.10.10.1
10.10.10.2
10.10.20.1
10.10.20.2
209.165.200.1
209.165.200.2
209.165.200.225
} {
ping $address
}





RESULTS

CE-HQ#tclsh
CE-HQ(tcl)#foreach address {
+>172.16.1.1
+>209.165.100.2
+>209.165.100.1
+>10.10.10.1
+>10.10.10.2
+>10.10.20.1
+>10.10.20.2
+>209.165.200.1
+>209.165.200.2
+>209.165.200.225
+>} {
+>ping $address
+>}

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 209.165.100.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 28/30/36 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 209.165.100.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/15/16 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.10.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/14/16 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.10.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 28/28/32 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.20.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/42/44 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.20.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 28/29/32 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 209.165.200.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/44/48 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 209.165.200.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/57/60 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 209.165.200.225, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/56/60 ms
CE-HQ(tcl)#

Friday, 15 August 2014

Current Update - End to End Solution

For a week now, I've been trying to figure out how to get connectivity from HQ's internal network to the Remote site's internal network. With Mark's help, we were able to troubleshoot and find a few gaps that were preventing end-to-end connectivity. Address pools were created for both internal networks (172.16.1.0/24 at HQ and 172.16.2.0/24 at the Remote site) and matched to the public pools assigned by the TELCO provider (209.165.200.224/27 for the Remote site and 209.165.100.224/27 for HQ) respectively. Dynamic NAT translation was configured for the two pools, and for a chosen internal IP address at each end a static NAT was also configured, providing end-to-end translation and entry into each internal network via the ISP/TELCO provider.
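
On CE-HQ this boils down to roughly the following; the interface names, the split of the public /27 between static and dynamic use, and the internal host behind the static mapping are all assumptions for illustration (209.165.100.225 is simply the translated address that answers in the test results below).

! CE-HQ - mark the inside and outside interfaces (names illustrative)
interface GigabitEthernet0/0
 description HQ internal LAN 172.16.1.0/24
 ip nat inside
!
interface Serial0/0/0
 description Link towards PE-HQ / TELCO
 ip nat outside
!
! Dynamic translation of the internal subnet into the public /27
ip nat pool HQ-POOL 209.165.100.226 209.165.100.230 netmask 255.255.255.224
access-list 1 permit 172.16.1.0 0.0.0.255
ip nat inside source list 1 pool HQ-POOL overload
!
! Static one-to-one mapping so the Remote site can reach a chosen HQ host
ip nat inside source static 172.16.1.10 209.165.100.225

The Remote site mirrors this with its own pool drawn from 209.165.200.224/27.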

With a few complications met along the way, a couple of troubleshooting methods were used - shoot from the hip and follow the path - supported by the following commands:
- show ip route, show ip nat translations, show ip nat statistics and traceroute.

The traceroute output pointed to a problem within the internal network that was preventing end-to-end connectivity: disabling the PC firewall allowed the ICMP packets to be received.

Given below are the connectivity test results in Tcl script form.

HQ to REMOTE

foreach address {
209.165.100.2
209.165.100.1
10.10.10.1
10.10.10.2
10.10.20.1
10.10.20.2
209.165.200.1
209.165.200.2
209.165.200.225
} {
ping $address 
}





RESULTS

CE-HQ#tclsh
CE-HQ(tcl)#foreach address {
+>172.16.1.1
+>209.165.100.2
+>209.165.100.1
+>10.10.10.1
+>10.10.10.2
+>10.10.20.1
+>10.10.20.2
+>209.165.200.1
+>209.165.200.2
+>209.165.200.225
+>} {
+>ping $address
+>}

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 209.165.100.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 28/30/36 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 209.165.100.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/15/16 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.10.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/14/16 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.10.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 28/28/32 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.20.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/42/44 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.20.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 28/29/32 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 209.165.200.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/44/48 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 209.165.200.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/57/60 ms
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 209.165.200.225, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/56/60 ms
CE-HQ(tcl)#


The next phase is to implement MPLS in this network and see whether it is capable of handling MPLS traffic.


Saturday, 9 August 2014

Basic Connectivity configurations

Project implementation is underway, and I am currently working on basic network connectivity. The current issue is the NAT translation, which requires public-to-private address translation: the ICMP packets don't seem to pick up the virtual public address at either end of the customer link (HQ - Remote CE).

The protocols currently running within the network are BGP and OSPF, with static routing and default routes applied to interfaces where they suit.
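
As a very rough picture of what that looks like (process and AS numbers are placeholders; only the link addressing follows the topology used elsewhere in these notes):

! CE-HQ - OSPF for the internal network, default route towards the provider
router ospf 1
 network 172.16.1.0 0.0.0.255 area 0
!
ip route 0.0.0.0 0.0.0.0 209.165.100.1
!
! PE-HQ - eBGP session towards the customer edge (AS numbers illustrative)
router bgp 65000
 neighbor 209.165.100.2 remote-as 65100
 network 209.165.100.224 mask 255.255.255.224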

An investigation is underway to troubleshoot the problem at hand, and I'm hoping to discuss it with Mark (Lecturer) sometime this week.

Apart from those complications, DHCP, the running protocols and basic internal connectivity are performing as expected.

The next four phases of this project implementation are:
1. Apply MPLS Core Switching configurations

2. Implement IP-PBX

3. Configure CME Features

4. Network Performance testing

Basic completion is targeted for the end of mid-semester 2.