This article assumes familiarity with basic MPLS concepts and configuration, and focuses on setting up and maintaining MPLS TE tunnels.
One of the inherent benefits of MPLS is the fact that it relies on labels to forward traffic throughout the cloud. The nature of these labels is such that they can be stacked upon one another to define a very specific route a frame should take. Routing protocols with MPLS TE extensions, such as OSPF-TE and IS-IS-TE, also exist. These protocols allow for dynamic "constraint-based routing" based on the current load of links throughout the network. I will not discuss these protocols in today's post, focusing instead on manually defined TE tunnels, but you can look forward to future posts covering them.
Below is a diagram of the lab we’ll be using in this post.
The four core routers do the bulk of the work, while the two outside routers are simply there to provide pingable hosts. In this topology there are two distinct paths through the core: the top path and the bottom path. We will experiment with steering traffic across these paths.
The tunnels in this lab are created to and from the head-ends (R0 and R3), while R1 and R2 simply act as interim hops. It's important to note that tunnels are also unidirectional, so if you want a round-trip tunnel you'll need to create two: one sourced at the head-end and one sourced at the tail-end.
This next part assumes working MPLS and IGP functionality within the core; if you're having trouble with that part, the full configs can be found at the bottom of the post.
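For reference, the TE-specific enablement on each core router boils down to a few commands. A minimal sketch, assuming OSPF as the IGP (the interface name and RSVP bandwidth value here are illustrative, not taken from the lab configs):

mpls traffic-eng tunnels
!
router ospf 1
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0
!
interface GigabitEthernet1/0
 mpls traffic-eng tunnels
 ip rsvp bandwidth 1000

The interface-level commands enable TE signaling and tell RSVP how much bandwidth it may reserve on that link.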
First we’ll establish a tunnel from R0 to R3:
mpls traffic-eng tunnels
!
interface Tunnel2
 ip unnumbered Loopback0
 tunnel destination 172.16.255.13
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng priority 2 2
 tunnel mpls traffic-eng bandwidth 158
 tunnel mpls traffic-eng path-option 1 explicit name BOTTOM
!
ip explicit-path name BOTTOM enable
 next-address 172.16.1.2
 next-address 172.16.3.2
As you can see, the tunnel is created as a standard tunnel interface and passed a few MPLS TE-specific parameters. One such parameter is the explicit path, referenced as BOTTOM. Under the ip explicit-path entry above you can see each next hop specifically defined. These next-hop addresses are the addresses of the closest interface on each interim router (only one interim hop and the tail-end in this case). Assuming the path is valid, we can see how the tunnel looks with the show mpls traffic-eng tunnels command:
Name: R0_t2                               (Tunnel2) Destination: 172.16.255.13
  Status:
    Admin: up         Oper: up     Path: valid       Signalling: connected

    path option 1, type explicit BOTTOM (Basis for Setup, path weight 2)

  Config Parameters:
    Bandwidth: 158      kbps (Global)  Priority: 2  2   Affinity: 0x0/0xFFFF
    Metric Type: TE (default)
    AutoRoute:  enabled   LockDown: disabled  Loadshare: 158      bw-based
    auto-bw: disabled

  InLabel  :  -
  OutLabel : GigabitEthernet2/0, 16
  RSVP Signalling Info:
       Src 172.16.255.10, Dst 172.16.255.13, Tun_Id 2, Tun_Instance 12
    RSVP Path Info:
      My Address: 172.16.1.1
      Explicit Route: 172.16.1.2 172.16.3.1 172.16.3.2 172.16.255.13
      Record Route:  NONE
      Tspec: ave rate=158 kbits, burst=1000 bytes, peak rate=158 kbits
    RSVP Resv Info:
      Record Route:  NONE
      Fspec: ave rate=158 kbits, burst=1000 bytes, peak rate=158 kbits
  History:
    Tunnel:
      Time since created: 52 minutes, 53 seconds
      Time since path change: 52 minutes, 44 seconds
    Current LSP:
      Uptime: 52 minutes, 45 seconds
We can see that it has an explicit route defined, along with each hop along the way. Tunnels are signaled with RSVP one hop at a time. First, an RSVP PATH message is sent from the head-end toward the tail-end and is inspected and modified by each interim router. When an interim router receives the PATH message, it checks the EXPLICIT_ROUTE object to see whether any of its interfaces are part of it. If so, it removes its IP from the EXPLICIT_ROUTE, adds it to the RECORD_ROUTE, adds the TE LSP to its database, and passes the PATH message along to the next hop. The tail-end router then crafts an RSVP RESV packet to acknowledge the creation of the tunnel and allocate labels along the way. It's important to note that label allocation here is not done by LDP, but by the RSVP RESV packet on its way back to the head-end. A packet capture of an RSVP TEAR, PATH, and RESV transaction can be found at the end of the post; save the file and change the extension to .pcap (it is .txt to stop WordPress from complaining).
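If you want to poke at that signaling state directly, IOS keeps the PATH and RESV state visible on every router along the LSP. These are standard commands, though the exact output varies by platform and IOS version:

R1#show ip rsvp sender detail
R1#show ip rsvp reservation detail

At this time we'll create the second tunnel, from R3 back to R0, to create a bidirectional path: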
mpls traffic-eng tunnels
!
interface Tunnel2
 ip unnumbered Loopback0
 tunnel destination 172.16.255.10
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng priority 2 2
 tunnel mpls traffic-eng bandwidth 158
 tunnel mpls traffic-eng path-option 1 explicit name BOTTOM
!
ip explicit-path name BOTTOM enable
 next-address 172.16.3.1
 next-address 172.16.1.1
After the creation of the second tunnel, you will be able to see it in the MPLS TE tunnel database on R0 (and vice versa). You can also see the tunnels from the interim routers; here is how it looks from R2:
R2#show mpls forwarding-table
Local  Outgoing    Prefix               Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id         switched   interface
16     Pop tag     172.16.255.10 2 [12] 0          Gi1/0      172.16.3.2
17     Pop tag     172.16.255.13 2 [8]  0          Gi2/0      172.16.1.1
--output omitted--
22     Pop tag     172.16.255.10/32     0          Gi2/0      172.16.1.1
24     Pop tag     172.16.255.13/32     0          Gi1/0      172.16.3.2
R2#show mpls forwarding-table lsp-tunnel
Local  Outgoing    Prefix               Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id         switched   interface
16     Pop tag     172.16.255.10 2 [12] 0          Gi1/0      172.16.3.2
17     Pop tag     172.16.255.13 2 [8]  0          Gi2/0      172.16.1.1
R2#
As we can see, the tunnel has been properly created and signaled. There are a variety of ways in which traffic can be forwarded into the tunnel; in this example I'm using Autoroute. Autoroute announces downstream subnets as directly reachable through the tunnel. You can also disable Autoroute and forward traffic into tunnels manually, based on QoS values, or from particular MPLS VPNs. It's important to keep in mind that while this example doesn't utilize MPLS VPNs, it's just as easy to augment MPLS VPN functionality with MPLS TE tunnels.
R0#show ip route 10.10.1.0
Routing entry for 10.10.1.0/24
  Known via "ospf 1", distance 110, metric 3, type intra area
  Last update from 172.16.255.13 on Tunnel2, 01:42:12 ago
  Routing Descriptor Blocks:
  * 172.16.255.13, from 172.16.255.13, 01:42:12 ago, via Tunnel2
      Route metric is 3, traffic share count is 1
R0#
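Autoroute is doing the work here, but if you would rather steer traffic by hand, a static route pointing at the tunnel interface accomplishes the same thing for a single prefix. A minimal sketch (the prefix is the one from this lab; adjust to taste):

interface Tunnel2
 no tunnel mpls traffic-eng autoroute announce
!
ip route 10.10.1.0 255.255.255.0 Tunnel2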
With that being said, let's try a ping from R4 to R5; it should go through the tunnel.
R4#ping 10.10.1.2

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/15/32 ms
R4#

--switch to R2--

R2#show mpls forwarding-table lsp-tunnel
Local  Outgoing    Prefix               Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id         switched   interface
16     Pop tag     172.16.255.10 2 [12] 570        Gi1/0      172.16.3.2
17     Pop tag     172.16.255.13 2 [8]  570        Gi2/0      172.16.1.1
R2#
We can see that the ping succeeds and the byte counters on R2 increment for the tunnel. We now have a fully functioning MPLS TE tunnel, so let's see how we can switch the path. First, we'll create a second explicit path on R0 for the top route:
ip explicit-path name TOP enable
 next-address 172.16.0.2
 next-address 172.16.2.2
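Swapping the tunnel over is then just a matter of removing the old path option and adding the new one under the tunnel interface, something like this (a minimal sketch; the path-option number is the one from this lab):

interface Tunnel2
 no tunnel mpls traffic-eng path-option 1
 tunnel mpls traffic-eng path-option 1 explicit name TOP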
When we yank the old path option from the tunnel and add the new one (as shown above), RSVP immediately sends a TEAR from head to tail, brings the tunnel down, and signals a new one via R1. Now R1 carries the tunnel from R0 to R3, while R2 carries the tunnel back from R3 to R0. That means traffic to R5 will take the top path and response traffic will take the bottom. Let's try a ping to see the counters increase on the interim routers:
R4#ping 10.10.1.2 repeat 100

Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to 10.10.1.2, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (100/100), round-trip min/avg/max = 8/14/24 ms
R4#

--switch to R1--

R1#show mpls forwarding-table lsp-tunnel
Local  Outgoing    Prefix               Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id         switched   interface
23     Pop tag     172.16.255.10 2 [13] 51870      Gi2/0      172.16.2.2
R1#

--switch to R2--

R2#show mpls forwarding-table lsp-tunnel
Local  Outgoing    Prefix               Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id         switched   interface
17     Pop tag     172.16.255.13 2 [8]  52440      Gi2/0      172.16.1.1
R2#
We can see that the packets take a circular route, but everything is working as expected!
You can also configure multiple path options on a tunnel with different preference numbers. That way, if the IGP loses connectivity along the current path, the head-end falls back to the next path option in ascending order (path-option 1 being the most preferred).
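A sketch of what that might look like, reusing this lab's two path names (the head-end tries option 1 first and falls back to option 2 if the preferred path can't be signaled):

interface Tunnel2
 tunnel mpls traffic-eng path-option 1 explicit name TOP
 tunnel mpls traffic-eng path-option 2 explicit name BOTTOM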
This barely scratches the surface of what is possible with MPLS TE tunnels. If you're interested in this sort of thing, I encourage you to explore the rest of their functionality. A few other very cool TE tunnel topics are interim-terminated tunnels and Fast Reroute: interim-terminated tunnels allow a tunnel to end on a P router (in MPLS VPN terminology), and Fast Reroute allows the avoidance of a failed device or link with virtually no packet loss. Below are the full configs of each router.