Manage your datacenter infrastructure with Ansible

Ansible is not only a great tool for simple tasks (like updating servers); it is also a great help in deploying and automating the infrastructure underneath them. Ansible supports building your infrastructure from the ground up.

Ansible is compatible with almost anything: if you can use a CLI, you can use Ansible. Out of the box it has lots of modules for vendors like Cisco, HP/Aruba, Arista, Juniper, NetApp and many more. Want to take it to a higher level? There is also support for VMware, XenServer, RHEV and more. There is little Ansible cannot build for you.

Building our network topology with Ansible

I will show you how to build a leaf-spine topology using Ansible. If you want to know more about leaf-spine network topologies, please refer to this article. In short: leafs are access switches, connected to the spines using layer-3 routing (OSPF/BGP).
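To make the wiring concrete, this is the plan I will build. Each leaf-to-spine link gets its own point-to-point /30, and every device gets a /32 loopback (the addresses match the playbook further down):

spine01 eth1 -- leaf01 eth1   192.168.11.0/30
spine01 eth2 -- leaf02 eth1   192.168.12.0/30
spine02 eth1 -- leaf01 eth2   192.168.21.0/30
spine02 eth2 -- leaf02 eth2   192.168.22.0/30

Loopbacks: spine01 10.0.0.1, spine02 10.0.0.2, leaf01 10.0.0.11, leaf02 10.0.0.12.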

Ansible Hosts file

In order to manage our entire infrastructure in one place, we will create a hosts file with groups (spines, leafs, servers) and children objects (the actual devices). For now I use VyOS switches, but this can of course be any Cisco, HP or Juniper switch.

[leafs]
leaf01 ansible_host=192.168.136.71 ansible_network_os=vyos
leaf02 ansible_host=192.168.136.76 ansible_network_os=vyos

[spines]
spine01 ansible_host=192.168.136.70 ansible_network_os=vyos
spine02 ansible_host=192.168.136.77 ansible_network_os=vyos

[vms]
server01 ansible_host=192.168.200.100
server02 ansible_host=192.168.200.101

[infrastructure:children]
leafs
spines

[datacenter:children]
leafs
spines
vms

In the above example you see that I have two leaf switches that I want to connect to my two spine switches. I grouped them under the two host categories and then created a new category "infrastructure" linking them together. With that setup I can run tasks on either a set of leafs or on both spines and leafs together. Don't forget to create a local ansible.cfg pointing to the hosts file:

[defaults]
inventory = ~/ansible-datacenter/hosts
filter_plugins = ~/ansible-datacenter/plugins/
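With the inventory and ansible.cfg in place you can quickly verify the grouping; the --list-hosts option expands a pattern without running anything, so the output should look something like this:

$ ansible leafs --list-hosts
  hosts (2):
    leaf01
    leaf02

$ ansible infrastructure --list-hosts
  hosts (4):
    leaf01
    leaf02
    spine01
    spine02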

Configuring the interfaces of leafs and spines

Let's start with the easy part: configuring all devices to have interfaces in the correct subnet so they can communicate with each other. I am also giving each device a loopback address on interface lo, used for internal purposes and management. Let's create the playbook ipaddr.yml:

---
- hosts: infrastructure
  connection: network_cli
  vars:
    # Per-device interface/IP assignments; the keys match the inventory hostnames
    interface_data:
      leaf01:
        - { name: eth1, ipv4: 192.168.11.2/30 }
        - { name: eth2, ipv4: 192.168.21.2/30 }
        - { name: lo, ipv4: 10.0.0.11/32 }
      leaf02:
        - { name: eth1, ipv4: 192.168.12.2/30 }
        - { name: eth2, ipv4: 192.168.22.2/30 }
        - { name: lo, ipv4: 10.0.0.12/32 }
      spine01:
        - { name: eth1, ipv4: 192.168.11.1/30 }
        - { name: eth2, ipv4: 192.168.12.1/30 }
        - { name: lo, ipv4: 10.0.0.1/32 }
      spine02:
        - { name: eth1, ipv4: 192.168.21.1/30 }
        - { name: eth2, ipv4: 192.168.22.1/30 }
        - { name: lo, ipv4: 10.0.0.2/32 }
  tasks:
    # Each device only configures its own list of interfaces
    - name: VyOS | Configure IPv4
      vyos_l3_interface:
        aggregate: "{{ interface_data[inventory_hostname] }}"

Notice that in this case I am using the host group 'infrastructure', because I want to set these IP addresses on all the switches (leafs and spines). This saves time, as I can now do everything from one playbook.
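Running it is a plain ansible-playbook call. Since network_cli logs in over SSH, pass the credentials of your VyOS user (here interactively with -u and -k; group_vars would be the cleaner place for them):

$ ansible-playbook ipaddr.yml -u vyos -k

After the run, log in and see if the leaf actually has the correct configuration now: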

vyos@leaf01:~$ show interfaces
Codes: S - State, L - Link, u - Up, D - Down, A - Admin Down
Interface        IP Address                        S/L  Description
---------        ----------                        ---  -----------
eth0             192.168.136.71/28                 u/u
eth1             192.168.11.2/30                   u/u
eth2             192.168.21.2/30                   u/u
lo               127.0.0.1/8                       u/u
                 10.0.0.11/32
                 ::1/128

Looks great! So now I have two spines and two leafs that are connected and can ping each other.

vyos@leaf01:~$ ping 192.168.11.1
PING 192.168.11.1 (192.168.11.1) 56(84) bytes of data.
64 bytes from 192.168.11.1: icmp_req=1 ttl=64 time=0.288 ms
64 bytes from 192.168.11.1: icmp_req=2 ttl=64 time=0.305 ms

Automated checking

Ansible is meant to automate things, so after applying the interface configuration it would be best to automatically check whether the devices are reachable. Let's add a task that pings from within the playbook after applying the configuration, and a second task that asserts the results.

    - name: VyOS | Test IPv4 connectivity
      vyos_command:
        commands:
          - "ping 192.168.11.2 count 5"
          - "ping 192.168.12.2 count 5"
      register: spine01_result
      when: 'inventory_hostname == "spine01"'
    - name: VyOS | Assert connectivity results
      assert:
        that:
          - "'192.168.11.2' in spine01_result.stdout[0]"
          - "'0% packet loss' in spine01_result.stdout[0]"
          - "'192.168.12.2' in spine01_result.stdout[1]"
          - "'0% packet loss' in spine01_result.stdout[1]"
      when: 'inventory_hostname == "spine01"'

So now we have spines and leafs and everything is connected, but we still need layer-3 routing. We can use either BGP or OSPF. In this example I will use Ansible to push the OSPF configuration to VyOS and create a new "area 0". There are two ways to accomplish this: push individual set commands over the CLI, or push the config file itself.
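For comparison, the set-command route would look something like this sketch (with the values for leaf01 hard-coded; in practice you would template them like everything else):

    - name: VyOS | OSPF via set commands (alternative)
      vyos_config:
        lines:
          - set protocols ospf parameters router-id 10.0.0.11
          - set protocols ospf area 0 network 192.168.11.0/30
          - set protocols ospf area 0 network 192.168.21.0/30

I'm going the easy way and pushing the file. So I create a template in Ansible and save it as ospf_conf.j2: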

{# Rendered per device: every interface network joins OSPF area 0,
   the loopback becomes the router-id, and the uplinks run point-to-point #}
protocols {
    ospf {
        area 0 {
            {% for iface in interface_data[inventory_hostname] -%}
            {% if iface["name"] != "lo" -%}
            network {{ iface["ipv4"] | ipaddr("network") }}/{{ iface["ipv4"] | ipaddr("prefix") }}
            {% else -%}
            network {{ iface["ipv4"] }}
            {% endif -%}
            {% endfor -%}
        }
        parameters {
            {% for iface in interface_data[inventory_hostname] -%}
            {% if iface["name"] == "lo" -%}
            router-id {{ iface["ipv4"] | ipaddr("address") }}
            {% endif -%}
            {% endfor -%}
        }
    }
}

interfaces {
    {% for iface in interface_data[inventory_hostname] -%}
    {% if iface["name"] != "lo" -%}
    ethernet {{ iface["name"] }} {
        ip {
            ospf {
                network point-to-point
            }
        }
    }
    {% endif -%}
    {% endfor -%}
}

So what this does is add each network from interface_data to the OSPF networks, take the router-id from the loopback address, and enable OSPF point-to-point on each uplink interface (eth1/eth2). Now add this task to the playbook:

    - name: VyOS | Push OSPF configuration
      vyos_config:
        src: ./ospf_conf.j2
        save: yes

So, run the playbook. Afterwards you will see that each device has a working OSPF configuration and that the leafs are now redundantly connected to each spine.

vyos@leaf01# show protocols ospf
 area 0 {
     network 10.0.0.11/32
     network 192.168.11.0/30
     network 192.168.21.0/30
 }
 parameters {
     router-id 10.0.0.11
 }
vyos@leaf01# run show ip ospf neighbor

Neighbor ID     Pri State           Dead Time Address         Interface            RXmtL RqstL DBsmL
10.0.0.1          1 Full/DROther      32.379s 192.168.11.1    eth1:192.168.11.2        0     0     0
10.0.0.2          1 Full/DROther      32.369s 192.168.21.1    eth2:192.168.21.2        0     0     0

vyos@leaf01# traceroute 192.168.12.2
traceroute to 192.168.12.2 (192.168.12.2), 30 hops max, 60 byte packets
 1  192.168.11.1 (192.168.11.1)  0.320 ms  0.318 ms  0.313 ms
 2  192.168.12.2 (192.168.12.2)  0.557 ms  0.553 ms  0.584 ms
[edit]
vyos@leaf01# traceroute 192.168.22.2
traceroute to 192.168.22.2 (192.168.22.2), 30 hops max, 60 byte packets
 1  192.168.21.1 (192.168.21.1)  0.254 ms  0.249 ms  0.245 ms
 2  192.168.22.2 (192.168.22.2)  0.503 ms  0.502 ms  0.499 ms

The routing table shows all four loopback IPs, some directly attached and some learned via OSPF:

vyos@leaf01# run show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

S>* 0.0.0.0/0 [210/0] via 192.168.136.65, eth0, 19:18:47
O>* 10.0.0.1/32 [110/10] via 192.168.11.1, eth1, 18:05:55
O>* 10.0.0.2/32 [110/10] via 192.168.21.1, eth2, 17:20:50
O   10.0.0.11/32 [110/0] is directly connected, lo, 18:06:16
C>* 10.0.0.11/32 is directly connected, lo, 18:06:16
O>* 10.0.0.12/32 [110/20] via 192.168.11.1, eth1, 17:20:50
  *                       via 192.168.21.1, eth2, 17:20:50

We did this in less than 5 minutes. Remember the time when we had to configure this by hand? Now if I need a new interface, I can edit my playbook, run it and be done in 30 seconds.
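For example, giving leaf01 a third uplink is nothing more than one extra line in interface_data (eth3 and its subnet are made-up values here):

      leaf01:
        - { name: eth1, ipv4: 192.168.11.2/30 }
        - { name: eth2, ipv4: 192.168.21.2/30 }
        - { name: eth3, ipv4: 192.168.31.2/30 }
        - { name: lo, ipv4: 10.0.0.11/32 }

The OSPF template loops over the same data, so the new network and its point-to-point setting come along for free on the next run.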

Next: divide this playbook into nice roles.
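As a preview, one possible layout (the names are just a suggestion):

ansible-datacenter/
├── ansible.cfg
├── hosts
├── site.yml
└── roles/
    ├── interfaces/
    │   └── tasks/main.yml
    └── ospf/
        ├── tasks/main.yml
        └── templates/ospf_conf.j2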
