
Tutorial: Deploying a Cumulus Linux Demo Environment With Vagrant

Thanks to ActualTech Media’s relationship with the Cumulus Networks team, I had the opportunity to attend an onsite bootcamp at Cumulus HQ in Mountain View, Calif., where a Cumulus professional services consultant (David Marshall) trained me and a few others on the ins and outs of Cumulus Linux, Cumulus NetQ, and the Cumulus Host Pack. It was an enlightening experience, not only with regard to Cumulus products, but also with regard to whitebox networking in general. Coming from the server/storage/virtualization world, I find networking is more of a second language, and I don’t think I fully grasped all the neat implications that open source networking has for the enterprise data center prior to this full-day deep dive. (I’ll share more of those philosophical thoughts another time.)

Bootcamp Lab Exercises

Naturally, this technical bootcamp involved some lab exercises to solidify our understanding of what we were learning. Our bootcamp instructor provisioned labs for us that were already configured with a demo environment consisting of multiple network devices and a simulated physical topology. What’s really cool is that you can access pretty much the same environment for free via the Cumulus website. Cumulus in the Cloud (CitC) lets the curious spin up a virtual lab environment on Cumulus’s dime and try out the technology, with basically no setup required to get started. It’s a great way to get a hands-on first look at Cumulus.
Additionally, Cumulus VX is a virtual machine version of a Cumulus Linux-powered device, provided as a way for customers to create simulations of their environment for testing and validation purposes. It runs the exact same code as a physical device, so you’re testing “apples to apples” in the simulation environment (which isn’t always true of other simulations you may run in something like GNS3). Cumulus VX is freely available, and it’s another good way to create a lab environment for learning more about Cumulus Linux and NetQ.
Both CitC and VX are more than adequate for learning purposes. But I’m a nerd at heart, and during the bootcamp, something in my lab caught my eye that I wanted to explore further. During one lab exercise, I noticed that one of the network devices I was configuring referenced a Vagrant interface. It occurred to me that our lab environments for the bootcamp must be created with some sweet, sweet vagrant up action. I quickly copied and pasted the rest of the lab commands so I could finish the exercise and keep up, but frankly I wasn’t paying that much attention to the lab exercise anymore – I needed to rush over to the Cumulus GitHub page and see if they had kindly provided the boxes and Vagrantfile to build this environment on my own machine.
As I had hoped, the cldemo-vagrant project exists to give Cumulus engineers, Cumulus customers, and curious nerds like me a way to quickly build and destroy a fully configured Cumulus demo environment – locally – anytime we want. The Vagrantfile to build everything is in this repo, and the Cumulus VX box is in Vagrant Cloud and ready for us. The demo environment that this Multi-Machine Vagrantfile specifies has enough different devices running that you can test just about any Cumulus functionality you’d want to; I suppose if you ran two of them at once you could probably even simulate a robust multi-site configuration with only a few commands.
By the end of the afternoon, I had the lab environment you see below running on my laptop so that I could continue learning on the flight home. It’s a two-tier spine-leaf topology with a couple of hosts and a dedicated out-of-band management network. By the end of this article, you can have one of these too if you’d like to follow along!

Deployment

The cldemo-vagrant project has clear deployment instructions for Windows, macOS, and Linux, so you could just as easily follow the tutorial in the repo’s documentation instead of following along here. But for my own enjoyment, and to give a slightly more visual tour of the deployment, I’m going to share the steps I followed, with screenshots.
I’m deploying on macOS. I assume the process is mostly the same for Windows and Linux, but candidly, I didn’t check! Also, I’m going to be using VirtualBox. If you were deploying a simulation environment with Cumulus VX for “production” use by an IT team – to simulate changes, test automation, and even build a CI-style network configuration pipeline – you’d probably want to build this with Libvirt/KVM instead. That’s outside the scope of this article, though.
Before actually pulling down the Vagrant boxes and bringing up machines, there’s a short list of prerequisites to handle.

  • XCode & XCode Tools
  • Homebrew
  • VirtualBox & VirtualBox Extension Pack
  • Git
  • Vagrant
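
If you’re starting from scratch on macOS, the whole list installs in a few commands. Here’s a rough sketch using Homebrew (install Homebrew itself from brew.sh first; the cask names below assume current Homebrew conventions):

  xcode-select --install     # XCode command-line tools
  brew install git
  brew install --cask virtualbox virtualbox-extension-pack vagrant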

Since I actually had all of these installed already, I just went ahead and did a quick check to be sure everything was in order.
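A quick check like this – asking each tool for its version – is enough to confirm everything is installed and on the PATH:

  vagrant --version
  VBoxManage --version     # VirtualBox’s command-line tool
  git --version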

With my prerequisites satisfied, I created a directory called CumulusLab, moved into it, and cloned the cldemo-vagrant GitHub repo into it.
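For reference, those steps are just the following (the repo lives in the CumulusNetworks GitHub organization):

  mkdir CumulusLab && cd CumulusLab
  git clone https://github.com/CumulusNetworks/cldemo-vagrant.git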

With a fresh local copy of the repo, I moved into the folder and ran vagrant status to check that the Vagrantfile was being read correctly and everything was ready for spinning up boxes.
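In other words:

  cd cldemo-vagrant
  vagrant status     # lists every machine the Vagrantfile defines, plus its current state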

Everything looked good, so I went ahead and ran vagrant up oob-mgmt-switch to spin up the first box. Instantiating a Cumulus VX box for the first time causes Vagrant to reach out to Vagrant Cloud and grab the latest copy of the box. Thanks to a good, solid Internet connection, this only took about three minutes.
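If you’re curious about what got downloaded, vagrant box list shows the boxes now cached locally:

  vagrant up oob-mgmt-switch     # first run pulls the Cumulus VX box from Vagrant Cloud
  vagrant box list               # shows the boxes cached on your machine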


With the out-of-band management switch up, I went ahead and started turning up more network devices and a host. The host uses a different box – an Ubuntu image – so Vagrant had to go out and grab a copy of that one, too.

With a subset of the environment online (I started with just a subset because that’s how the GitHub readme’s instructions proceed), I went ahead and instantiated the rest to bring the full environment online.
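The exact machine names come from the Vagrantfile – vagrant status lists them all – but the sequence I followed looked roughly like this: management devices first, then the fabric, then the hosts.

  vagrant up oob-mgmt-server                 # management server, once its switch is up
  vagrant up spine01 spine02 leaf01 leaf02   # the network fabric
  vagrant up server01 server02               # the hosts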

FYI, if you don’t want to pace yourself like I did, you could always try just vagrant up without specifying which boxes. That’ll bring up the entire Multi-Machine Vagrant environment, albeit in a less controlled manner. Since there are some dependencies within the environment, I think you’re probably better off doing this by hand and making sure that the out-of-band management switch is up before the rest.
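A nice side effect of using Vagrant as the provisioning tool is that teardown and rebuild are trivial, so experimenting recklessly carries no real risk:

  vagrant up                # with no names, brings up every machine in the Vagrantfile
  vagrant halt              # powers everything off, but keeps the VMs
  vagrant destroy -f        # deletes the VMs entirely for a fresh rebuild later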

So Now What?

So now that the demo environment is up and running, it’s time to get in there and poke around! I’m going to connect to the out-of-band management server. It’s connected to a switch that is cabled up to the management port on all of the network devices in the demo environment. The management server also has a private key installed for which the corresponding public key exists on every other device; this allows you to SSH to anywhere else in the demo environment without a password.

When you provision machines with Vagrant, the best way to connect to them is through vagrant ssh, because Vagrant handles the authentication, port forwarding, and anything else that connection might require. So to get connected to this demo environment, I connected to the management server via vagrant ssh oob-mgmt-server. Using the management server as a portal to everything else, I jumped over to one of the leaf switches and checked it out.
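The hop looks like this; leaf01 is one of the leaf names in this topology, and net is Cumulus Linux’s NCLU command-line interface:

  vagrant ssh oob-mgmt-server     # Vagrant handles keys and port forwarding
  ssh leaf01                      # passwordless, thanks to the pre-staged key pair
  net show interface              # take a look around on the switch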

To make it super easy to test and experiment with different Cumulus features, the demo set also includes a healthy pile of Ansible playbooks that you can run from the management server to configure the environment to demonstrate a given feature. To show how this works, I’m going to use the routing demo to automatically configure BGP Unnumbered with one of the provided playbooks. BGP Unnumbered is a key feature of the Cumulus platform, so it makes a perfect test.
Below, you see me connect to the management server, clone the repo with the demo configs in it, and list the directory contents.
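In command form, that’s roughly the following (the demo-config repo for the routing demo is cldemo-config-routing, from the list at the end of this article):

  git clone https://github.com/CumulusNetworks/cldemo-config-routing.git
  ls cldemo-config-routing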

Next, I ran the playbook that configures BGP Unnumbered. It takes a minute or two to apply the changes and bounce all the interfaces; once everything is ready, Ansible reports that the configuration completed successfully.
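In rough shape, the run looks like this – note that the playbook filename below is a placeholder, so check the demo repo’s readme for the real one:

  cd cldemo-config-routing
  ansible-playbook deploy.yml     # placeholder filename; see the repo’s readme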


The network is now set up to let me test and play with the configuration of BGP Unnumbered.
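To see the result for yourself, hop onto a leaf and ask NCLU about BGP (the exact output depends on your topology):

  ssh leaf01
  net show bgp summary     # the unnumbered peerings should show as Established
  net show route           # routes learned across the fabric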

Additional Demos

Below is a list of the wide variety of demos that Cumulus has supplied for your learning and experimentation. Playing with some of these is a great way to see whether Cumulus Networks is a good fit for your needs. I suggest looking them over and giving them a try!

  • cldemo-config-routing — This GitHub repository contains the configuration files necessary for setting up Layer 3 routing on a CLOS topology using Cumulus Linux and Quagga.
  • cldemo-config-mlag — This demo shows a topology using MLAG to dual-connect hosts at Layer 2 to two top-of-rack leafs, with BGP unnumbered/L3 for everything above the leaf layer.
  • cldemo-roh-ansible — This demo shows a topology using ‘Routing on the Host’ to add host reachability directly into a BGP routed fabric.
  • cldemo-roh-docker — This demo shows how to redistribute Docker bridges into a Routing on the Host container to advertise host container subnets into a BGP routed fabric.
  • cldemo-roh-dad — This demo shows how to dynamically advertise host routes for container IP addresses into a Routing on the Host container to advertise containers into a BGP routed fabric.
  • cldemo-automation-puppet — This demo demonstrates how to write a manifest using Puppet to configure switches running Cumulus Linux and servers running Ubuntu.
  • cldemo-automation-ansible — This demo demonstrates how to write a playbook using Ansible to configure switches running Cumulus Linux and servers running Ubuntu.
  • cldemo-automation-chef — This demo demonstrates how to write a set of cookbooks using Chef to configure switches running Cumulus Linux and servers running Ubuntu.
  • cldemo-puppet-enterprise — This demo demonstrates how to set up Puppet Enterprise to control Cumulus Linux switches with Puppet manifests.
  • cldemo-ansible-tower — This demo demonstrates how to set up Ansible Tower to control Cumulus Linux switches with Ansible playbooks.
  • cldemo-openstack — Installs OpenStack Mitaka on servers networked via Cumulus Linux.
  • cldemo-onie-ztp-ptm — This demo demonstrates how to configure an out-of-band management network to automatically install and configure Cumulus Linux using Zero Touch Provisioning, and to validate the cabling of the switches using Prescriptive Topology Manager.
  • cldemo-rdnbr-ansible — This demo shows a topology using ‘redistribute neighbor’ to add host reachability directly into a BGP routed fabric.
  • cldemo-pim — This demo implements the Cumulus Linux PIM EA version. It includes simple Python applications to simulate multicast senders and receivers.
  • cldemo-evpn — This demo implements EVPN on Cumulus Linux. It is standalone and does not require cldemo-vagrant.
  • cldemo-dynamic-ansible-inventory — A demonstration of using Ansible with external data sources, specifically Redis or MySQL databases.
  • cldemo-docker-macvlan — A demonstration of advertising Docker containers using the macvlan networking option.
  • NetQDemo-1.0 — Demos using NetQ. Note: the NetQ VM is available to Cumulus customers.
  • cldemo-evpn-symmetric — Provides a setup to show a VXLAN routing with EVPN environment using the symmetric IRB model.