Main DMVPN post

A slight disclaimer before going into how all this works: the protocols behave differently depending on which IGP you are using, and what is described here is the most general behavior. Alternatives will be described when topics like OSPF in DMVPN are explored.

GRE and mGRE

DMVPN tunnels are created with Generic Routing Encapsulation (GRE). GRE tunnels create an overlay network that can carry traffic types, such as multicast, that the underlying network does not support. This means you can run a routing protocol across a network that otherwise wouldn't allow it; in our case that means running an IGP between two remote sites across the SP network.

With GRE you create point-to-point tunnels. This requires statically assigning a tunnel source and destination, which is fine for small deployments, but it doesn't scale well: it quickly becomes too much configuration to manage properly. Multipoint GRE saves us and provides the scalability for DMVPN.

Using mGRE we can create just one interface for all the connections. We do not have to specify a destination address.
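To make the difference concrete, here is a rough sketch. The point-to-point addresses are made up for the example, while the mGRE lines reuse the spoke addressing from the configuration further down. A point-to-point GRE tunnel needs a destination per neighbor, while an mGRE tunnel only needs a source and the multipoint mode:

interface tunnel0 Point-to-point GRE: one interface and one static destination per neighbor.
ip address 10.0.0.2 255.255.255.252
tunnel source ethernet2/0
tunnel destination 20.0.0.2

interface tunnel1 Multipoint GRE: a single interface, no destination. NHRP finds the peers later.
ip address 10.1.0.2 255.255.254.0
tunnel source ethernet2/0
tunnel mode gre multipoint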

NHRP

Next Hop Resolution Protocol is similar to ARP, only instead of binding an IP address to a MAC address it binds an overlay IP address to an underlay IP address. It provides dynamic address mapping between the private tunnel addresses and the physical public interfaces.

NHRP results also show up in the RIB. A network can appear as learned "via NHRP" when you look in the IP routing table, or an existing route can be marked with a "%" for NHO (Next-Hop Override).
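If you want to see those entries for yourself once shortcuts have been built, a couple of show commands are useful (assuming your IOS release supports the next-hop-override keyword):

show ip route next-hop-override
show ip nhrp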

NHS & NHC: Next-Hop Server and Next-Hop Client are the two roles for DMVPN members. The NHS is the hub and is what the NHCs (spokes) query for NHRP mappings. The NHS keeps a database of the mappings. You can configure more than one NHS on the spokes. In phase 3 the spokes can also act as servers, to a lesser extent.

NHRP Registration Request: You don't configure the spoke mappings statically on the hub. Instead you configure each spoke with the hub's addresses, and when the spoke router comes up it sends a registration request to the hub, registering its own VPN to NBMA mapping. This means we can use nearly the same configuration on all our spokes; the only things we have to change are the tunnel IP and maybe the tunnel source interface.
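As a sketch, these are the spoke commands that point it at the hub and make it register (same addresses as in the configuration further down; note that nothing has to be configured for the other spokes):

ip nhrp map 10.1.0.1 20.0.0.2
ip nhrp map multicast 20.0.0.2
ip nhrp nhs 10.1.0.1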

NHRP Resolution Request: Once the spoke-to-hub connection has been established, resolution requests come into play with spoke-to-spoke traffic. The request is a spoke asking for the NHRP mapping of another spoke.

NHRP Redirect: This is the hub telling a spoke that a better, direct path exists. When spoke-to-spoke traffic hairpins in and out of the hub's tunnel interface, the hub sends a redirect back to the source spoke, which then sends a resolution request so the spokes learn how to reach each other directly. It behaves much like an IP redirect, but you do not need IP redirects enabled on the interface to use NHRP redirect.

Phases 1, 2 and 3

The difference between the three phases is how hub and spoke traffic patterns work. Don't think of the phases as steps you have to go through; rather, they are different configurations of DMVPN.

Phase 1: In phase 1 the spokes only communicate through the hub. The spokes can't build tunnels between each other. This phase is inefficient if you have a lot of traffic between spokes, and there are very few use cases for it.

You can use summarization and default routing from the hub to the spokes in this phase. In regards to default routing, it is unlikely you'll want to use it for getting onto the internet: there is little point in sending internet-bound traffic from a spoke to the hub and then out of the hub again.
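As an example of what this allows, the hub could summarize everything toward the spokes, or even send only a default route, from the tunnel interface. A minimal sketch assuming EIGRP; the AS number 100 and the 10.0.0.0/8 summary are made up for the example:

interface tunnel1
ip summary-address eigrp 100 10.0.0.0 255.0.0.0 Summary toward the spokes.
ip summary-address eigrp 100 0.0.0.0 0.0.0.0 Or, more drastically, just a default route.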

Two quick use cases for phase 1: the first is if you want all traffic to pass through a firewall connected to the hub. The second is if you want to run MPLS inside the DMVPN. Cisco has previously documented that you can't use label switching between spokes. Support for MPLS between spokes has since been added, but it seems flaky. It'll be a while before I get into that.

Phase 2: With phase 2, spoke-to-spoke tunneling is introduced. However, we can't summarize and we can't use default routing. In phase 2 we have to preserve the next-hop IP of the routers advertising their networks.
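For example, with EIGRP on the hub (again assuming AS 100 for the sketch), phase 2 typically needs the hub to re-advertise spoke routes out of the same tunnel interface and to leave the originating spoke as the next hop:

interface tunnel1
no ip split-horizon eigrp 100 Let the hub advertise spoke routes back out the tunnel.
no ip next-hop-self eigrp 100 Preserve the originating spoke as the next hop.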

If you are interested in a very in-depth look at P1 and P2, I'd recommend the INE DMVPN explained blog post. It's a long post with a lot of debug and show commands, and it should give you an understanding of why phase 2 works in this limited way.

Phase 3: This is where it's at. We get spoke-to-spoke traffic, summarization and default routing. The way this is done is with two extra configurations on the tunnel interface.
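Those two commands show up in the full configuration later, but in short (one goes on the hub, the other on the spokes):

ip nhrp redirect On the hub tunnel interface.
ip nhrp shortcut On the spoke tunnel interfaces.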


Refer to the topology above: a hub and two spokes, with the 10.10.0.0/24 network sitting behind Spoke2. When NHRP resolves the mapping between spokes, it happens in a different way than in phase 2.

Phase 2 resolution happens by spoke1 asking for the NHRP mapping to reach 10.10.0.0/24. What it gets back is a next-hop mapping for Spoke2: Spoke2's tunnel IP bound to its NBMA address 20.0.0.1.

Phase 3 resolution instead maps the requested network itself to the NBMA address: 10.10.0.0 to 20.0.0.1. That's at least what the documentation states. More about that when the configuration starts.

The way the spoke and hub interact with spoke-to-spoke traffic is a little special, and there will be a difference between how the RIB and how CEF tell you a network can be reached. Pretend you have a routing protocol on spoke2 and it has advertised the network 10.10.0.0/24. For the hub, this network has a next-hop address on spoke2; for spoke1, the next-hop address is the hub. This stays the same in the RIB.

Now spoke1 wants to talk with spoke2 and establish a tunnel directly. If you run a traceroute from spoke1 to spoke2, the first attempt will pass through the hub, because the NHRP shortcut hasn't been resolved yet. The second attempt will route directly to spoke2. From then on the next hop from spoke1 to 10.10.0.0 is spoke2: the RIB will still tell you the route is through the hub, but in the CEF table the next hop will be installed as spoke2.
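To see this difference yourself, compare the same prefix in the RIB and in CEF on spoke1 after the traceroute (output not shown here, since it varies with platform and addressing):

show ip route 10.10.0.0
show ip cef 10.10.0.0 255.255.255.0
show ip nhrp
show dmvpn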

The next post will hopefully make this easier to understand.

A last thing to note about phases 2 and 3 is on-demand tunnel creation. A DMVPN tunnel between spokes is only created after an initial request, and if no traffic is flowing through a tunnel, it will eventually time out and be torn down.

IPsec

IPsec is not required for DMVPN. But if you want your data encrypted and secure when it traverses the internet, then this is what you need to know.

IPsec creates tunnels when there is interesting traffic. Interesting traffic simply means traffic that has to go between two routers and is matched by the crypto configuration for encryption. These tunnels can go down if there is no traffic. To avoid having the tunnels flap, we can do something simple such as using an IP SLA to ping between the two routers; the ping is the interesting traffic required to keep the tunnel alive.
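A minimal sketch of that IP SLA keepalive, assuming a spoke pinging the hub tunnel address 10.1.0.1 from its own tunnel address 10.1.0.2; the operation number and frequency are arbitrary:

ip sla 1
icmp-echo 10.1.0.1 source-ip 10.1.0.2
frequency 60
ip sla schedule 1 life forever start-time now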

Due to GNS3 limitations I can't build a lab with both IPsec and phase 3 DMVPN working correctly.
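For reference, the usual way to bolt IPsec onto the tunnel looks roughly like the sketch below. It's untested in this lab, and the policy values, the key and the names (BigScaleKey, DMVPN-TS, DMVPN-PROFILE) are placeholders:

crypto isakmp policy 10
encryption aes 256
hash sha256
authentication pre-share
group 14
crypto isakmp key BigScaleKey address 0.0.0.0 0.0.0.0
crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha-hmac
mode transport
crypto ipsec profile DMVPN-PROFILE
set transform-set DMVPN-TS
interface tunnel1
tunnel protection ipsec profile DMVPN-PROFILE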

Overlay/Underlay

Two terms to be familiar with are the overlay and the underlay network. They need to be mapped together with NHRP.

Overlay: Our internal network across the DMVPN. The tunnel IP address.

Underlay: The service provider network, or whatever other network you carry the DMVPN across. This is the public IP address, which is also referred to as the NBMA address. Non-Broadcast Multi-Access is a network without broadcast or multicast support, where a device connects through a single interface to potentially many other devices; in our case this is the SP network. What this means is that we would normally have to statically configure our neighbors.

With NBMA we can still do dynamic peering, we just have to help some of the protocols a bit.
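With the addresses used in the configuration below, the bindings NHRP has to keep track of look like this (the hub pair comes from the spoke's static mapping; the spokes register theirs dynamically):

Hub: overlay (tunnel) 10.1.0.1 -> underlay (NBMA) 20.0.0.2
Spoke1: overlay (tunnel) 10.1.0.2 -> underlay (NBMA) whatever public address sits on its tunnel source interface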

Configuration

This is the configuration used for one of the hub routers:

interface tunnel1 Our tunnel interface.
ip address 10.1.0.1 255.255.254.0 Overlay address. All the DMVPN routers in the same NHRP ID have to be on the same subnet.
ip nhrp authentication BigScale Authentication (optional). This is sent in clear text, but it isn't a security issue as long as everything is encrypted inside IPsec.
ip nhrp map multicast dynamic This lets the hub automatically add spokes to its multicast replication list as they register, so routing protocol traffic such as IGP hellos can reach them.
ip nhrp network-id 1 A network ID has to be specified. It is only locally significant, but the convention is to keep it the same on all routers in the same DMVPN and unique per DMVPN in your routing domain.
ip nhrp holdtime 500 How long the NHRP mappings this router advertises stay valid before they have to be refreshed. Cisco recommends using a value between 300 and 600 seconds.
ip nhrp redirect The hub tells a spoke that it can reach the other spoke directly, which triggers the spoke's resolution request.
tunnel source ethernet2/0 Which interface in our public/NBMA network do we send the traffic from.
tunnel key 1 This is optional. It has to be the same on all the DMVPN routers that need to communicate in the same NHRP ID. The reason we have both this and NHRP authentication is that they belong to two different protocols (GRE and NHRP), which both have uses outside of DMVPN.
tunnel mode gre multipoint This is how the router knows it's a multipoint link. The router can use the same tunnel interface for more than one connection.

And a spoke configuration:

interface tunnel1
ip address 10.1.0.2 255.255.254.0
ip nhrp authentication BigScale
ip nhrp map 10.1.0.1 20.0.0.2 On every spoke we map the overlay and underlay address of the hub router.
ip nhrp map multicast 20.0.0.2 Required to use an IGP, so the spoke's multicast traffic (routing protocol hellos) is sent to the hub. Has to be the hub's NBMA address.
ip nhrp nhs 10.1.0.1 This tells the spoke router who the hub is and who to send NHRP requests to.
ip nhrp network-id 1
ip nhrp holdtime 500
ip nhrp shortcut Makes the spoke act on NHRP redirects: it resolves the destination and installs the shortcut (spoke-to-spoke) path, keeping its own cache of the mappings.
tunnel mode gre multipoint We don't strictly have to configure the spoke for mGRE encapsulation, but mGRE on the spoke is what allows spoke-to-spoke communication. The alternative is tunnel destination with the hub NBMA address, which only lets the spoke communicate through the hub.
tunnel source ethernet2/0
tunnel key 1
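Once both sides are configured, a few commands are handy for verifying that registration and the spoke-to-spoke tunnels behave as described (no sample output included here):

show dmvpn
show ip nhrp
show ip nhrp nhs detail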