This is a technical introduction to ACI. It's not meant to be a configuration guide, but an overview of the technical highlights and of the components used to make it work.

Application Centric Infrastructure (ACI) is Cisco's take on SDN in the datacenter. When I first heard of ACI and what it did, I didn't get it, because the descriptions I heard made it sound so special and unique. However, it's actually not that complicated in terms of how the concept works. At its core, it is "just" VXLAN tunnels that are automatically generated in the fabric. It also automatically generates the routing, which is pretty neat.

Looking at various forums and listening to companies (that are not Cisco), it seems that the technology works quite well. It's actually a good platform for any size of datacenter, if you have the need for the features. That's an important "if", because using the Nexus 9K with ACI means giving up on a lot of routing features, such as MPLS L3VPN. There are similar platforms to ACI, such as VMware NSX. The thing that is attractive about the Nexus 9K is the price per port, even if the licensing is annoying.

Tenancy

A term to get familiar with in regard to the datacenter. A tenant is a user, a customer, a company or any other entity that needs separated infrastructure. A hosting company has a lot of customers, so the hosting company's datacenter is "multitenant". For enterprises using the datacenter just for themselves, the multitenancy term is likely not familiar. But it will be now, because a tenant is how every node/host/segment at the end of a VXLAN tunnel is described.

VXLAN

VXLAN is an RFC standard (RFC 7348) and doesn't "belong" to ACI. It can be configured independently, but some vendors have wrapped it in automation, as in ACI or VMware NSX. How VXLAN actually works and optimizes the control plane is an extensive topic in itself and will not be part of this.

VXLAN uses stateless L2 tunnels across IP. Hosts don't know the location of each other by default; they discover each other on the same VXLAN segment the same way they would on a regular VLAN. The stateless tunneling comes from the switches. The switches don't have to preconfigure the tunnels between the VXLAN segments, and they don't have to know the IP address of the tunnel endpoint on the destination switch in advance; that is something they discover when a host wants to send traffic. This concept is important because it is what makes the technology automatable and scalable.
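
To make the discovery part concrete, here is a minimal flood-and-learn sketch in Python. It is purely conceptual (real tunnel endpoints do this in hardware, and ACI adds its own endpoint mapping on top), and all the names are made up for illustration:

    # Conceptual flood-and-learn behaviour of a VTEP (VXLAN tunnel endpoint).
    # Remote tunnel endpoints are discovered from traffic, not preconfigured.

    FLOOD = "flood"  # deliver to all remote VTEPs in the segment

    class Vtep:
        def __init__(self, name):
            self.name = name
            # (vni, mac) -> remote VTEP IP, learned from received frames
            self.mac_table = {}

        def receive_encapsulated(self, vni, src_mac, remote_vtep_ip):
            # Learn: this MAC in this VNI lives behind that remote VTEP.
            self.mac_table[(vni, src_mac)] = remote_vtep_ip

        def forward(self, vni, dst_mac):
            # Known destination -> unicast tunnel to the learned VTEP.
            # Unknown destination -> flood, like unknown unicast in a VLAN.
            return self.mac_table.get((vni, dst_mac), FLOOD)

    leaf1 = Vtep("leaf1")
    print(leaf1.forward(10000, "aa:bb:cc:00:00:02"))  # flood (not learned yet)
    leaf1.receive_encapsulated(10000, "aa:bb:cc:00:00:02", "10.0.0.2")
    print(leaf1.forward(10000, "aa:bb:cc:00:00:02"))  # 10.0.0.2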

What does it accomplish?

  • Extends the VLAN range: the VXLAN network identifier is 24 bits, giving roughly 16.7 million segments instead of 4096 VLANs (see the header sketch after this list).
  • L2 multipath, meaning no STP for loop prevention, so it can use what would otherwise be blocked backup paths.
  • Most importantly, it keeps L2 adjacency for applications that require it, across an IP network.
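
To make the 24-bit point concrete, here is a small Python sketch that packs the 8-byte VXLAN header from RFC 7348; the field layout is from the RFC, the helper name is just for illustration:

    import struct

    def vxlan_header(vni, valid_vni=True):
        """8-byte VXLAN header from RFC 7348: 8 flag bits (the I bit marks a
        valid VNI), 24 reserved bits, the 24-bit VNI, 8 reserved bits."""
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI must fit in 24 bits")
        flags = 0x08 if valid_vni else 0x00
        return struct.pack("!II", flags << 24, vni << 8)

    print(2 ** 24)                    # 16777216 possible segment IDs
    print(vxlan_header(10000).hex())  # 0800000000271000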

NX9K vs NX7700 price

VXLAN is supported on both platforms, but the 9K is cheaper. This is because the 9K platform uses merchant Broadcom chips, as opposed to the ASICs built specifically for the NX7700. This means fewer features, but the 9K is built specifically for ACI, and for VXLAN there isn't a need for a custom ASIC. As mentioned earlier, VXLAN is just an L2 segment across a UDP/IP tunnel, which the Broadcom ASICs are perfectly capable of forwarding. Even though Cisco has been adding more features to the 9K platform, the use of these cheaper, off-the-shelf ASICs means some technologies will never be available.
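
One practical consequence of that encapsulation is the extra header overhead, which is why the underlay is normally run with a larger MTU. A quick back-of-the-envelope in Python (the usual figure of about 50 extra bytes per frame, ignoring an optional outer 802.1Q tag):

    # Extra bytes VXLAN wraps around the original Ethernet frame on the wire.
    overhead = {
        "outer Ethernet": 14,
        "outer IPv4": 20,
        "outer UDP": 8,
        "VXLAN header": 8,
    }
    print(sum(overhead.values()))  # 50 extra bytes per frame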

APIC

The command center of the ACI fabric. This is a 1U appliance server sold by Cisco; I don't believe the software can be acquired to install on your own metal. It is where everything is done and where the configuration is made. With ACI the idea is to use the APIC GUI, which is likely easier than the CLI would be. It supposedly also works quite well, so it's not another ASDM, even though it does still use Java. A few key points of the tech:

  • The switches running ACI don't run NX-OS; they run in ACI mode, which is a different system image on the flash. It also means the 9K can run standalone NX-OS instead, but why you'd want to do that, I don't know.
  • There are leafs and spines (and controllers) in ACI. The spines are the core/backplane and purely forward traffic between leafs. A leaf connects to everything else: spines, controllers, tenants, and it's also what connects to the routers outside the ACI fabric.
  • The APIC/controller connects to the leaf switches.
  • Three controllers are recommended, and Cisco will not provide support on fabrics with fewer.
  • Native Python scripting support.

The APIC autoconfigures many things in the fabric, which is really quite interesting. It uses IS-IS as the IGP and automatically configures all the IP interfaces. On top of that sit BGP and VXLAN, none of which we touch as far as the core configuration goes. There is a CLI, and some configuration will be done through it, but most of the management of ACI is done through the GUI.
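
Tying this to the Python scripting bullet above: the GUI sits on top of a REST API, which you can also call directly. A minimal sketch using the plain requests library; the hostname and credentials are placeholders, and the exact payloads should be checked against the APIC documentation:

    # Querying the APIC REST API directly from Python (illustrative sketch).
    import requests

    apic = "https://apic.example.local"  # placeholder hostname
    session = requests.Session()

    # Log in; the controller returns a session token carried as a cookie.
    # verify=False only because lab APICs tend to have self-signed certs.
    login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}}
    session.post(f"{apic}/api/aaaLogin.json", json=login, verify=False)

    # List all tenants (class fvTenant in the ACI object model).
    resp = session.get(f"{apic}/api/node/class/fvTenant.json", verify=False)
    for obj in resp.json()["imdata"]:
        print(obj["fvTenant"]["attributes"]["name"])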

The access control is what makes ACI. There are policies, or "contracts", between the segments, which determine what is allowed to communicate with what; traffic between groups is dropped unless a contract permits it. It's microsegmentation of the datacenter.
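
A toy sketch of that whitelist model in Python; the group and contract names are made up, and real contracts also carry filters for protocols and ports:

    # Toy model of ACI's whitelist approach: traffic between endpoint groups
    # (EPGs) is dropped unless a contract explicitly permits it.
    contracts = {
        ("web", "app"): "http-8080",  # web EPG may talk to app EPG
        ("app", "db"): "sql-1433",    # app EPG may talk to db EPG
    }

    def allowed(src_epg, dst_epg):
        # Same EPG talks freely; anything else needs a contract.
        if src_epg == dst_epg:
            return True
        return (src_epg, dst_epg) in contracts

    print(allowed("web", "app"))  # True  (contract exists)
    print(allowed("web", "db"))   # False (no contract, dropped)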

Standalone VXLAN

There are automatic configuration systems to go along with standalone VXLAN as well. Cisco Virtual Topology System (VTS) is one of them, and it works on the NX7K, but there is no automatic access control built into it.

Implementing ACI

Integrating ACI into an existing datacenter is, of course(?), something Cisco has thought of. But that doesn't mean it's going to be easy. There are different ways to do things such as peering between the ACI fabric and the "regular" network. There are definitely some considerations around how it would work with the current setup and how long it would take to migrate completely.