Ansible through Ubuntu (WSL) on Windows 10

Windows Subsystem for Linux (WSL) allows you to run Linux straight from your Windows desktop. I use this on a daily basis for running Ansible scripts without having to install VMs. Make sure you have installed all the latest Windows updates.

Enable WSL feature

Open a PowerShell window as Administrator (search for 'powershell', right-click and choose Run as Administrator).

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

This will initiate the installation and, once finished, ask if you would like to reboot your system. Go ahead and do that. When the reboot is done, search for 'bash' and open it; the first run asks a few questions. Simply answer them and you will have Ubuntu up and running.

Install Ansible

Now you are basically in a Linux environment, so you can install Ansible the typical way. Again in the 'bash' window, of course, use these instructions:

sudo apt-get -y install python-pip python-dev libffi-dev libssl-dev
sudo pip install ansible

Should you get any permission errors (I did not this time, but given the nature of how WSL works that could happen), install with pip's --user flag. This installs Ansible in the user's home directory instead of globally.
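A user-level install would look something like this (the PATH line is only needed if ~/.local/bin is not already on your PATH):

```shell
# Install Ansible for the current user only; no sudo required
pip install --user ansible

# pip --user places entry points in ~/.local/bin, which may not be on PATH yet
export PATH="$HOME/.local/bin:$PATH"
```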

You are done. Use the following command to check which Ansible version is now installed:

ansible --version

If you need the most recent version check out my other post here.


Install latest version ansible on Ubuntu 16.04 / 18.04

Ubuntu doesn't ship with the newest version of Ansible out of the box, sadly. You have to manually configure the PPA on your system in order to get the latest stable version. Run these commands to add the PPA:

$ sudo apt update
$ sudo apt upgrade
$ sudo apt install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible

Hit Enter when asked, and once the process is done update your apt package lists:

$ sudo apt update

Now you can either upgrade or simply install ansible:

$ sudo apt install ansible

That should be all; use the following to verify the Ansible version:

$ ansible --version

Ubuntu 18.04 resize/expand (root) filesystem

Running Ubuntu 18.04, I ran out of disk space on my main partition. I increased the disk size in VMware and needed to expand the partitions from within Ubuntu. Start by scanning for changes on your disk:

echo 1 > /sys/class/block/sda/device/rescan

Verify that you can see the new (correct) disk space using:

fdisk -l

Create a new partition using cfdisk: navigate to the free space and hit "New". After that, hit "Write" to make sure the partition table gets written. Close cfdisk and either reboot or rescan to update the kernel's partition table. Now it's time to add the disk space. First, find the new partition number:
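If you would rather not reboot, partprobe (from the parted package) asks the kernel to re-read the partition table:

```shell
# Re-read the partition table on /dev/sda without rebooting
partprobe /dev/sda
```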

fdisk -l

In my case sda3 was created, so I'm going to create a new physical volume on it:

pvcreate /dev/sda3

Now extend my volume group with the newly added physical volume:

vgextend ubuntu-dev-box /dev/sda3

Extend the volume with all available (new) disk space:

lvextend -l+100%FREE /dev/ubuntu-dev-box/root

Now resize the filesystem:

resize2fs /dev/ubuntu-dev-box/root

SSH Tunnel to watch Netflix

I often use a 'hopping server' (jump host) when connecting to clients, which means I need to log in twice each time. To make my life easier I sometimes use an SSH tunnel, so I can connect to clients directly.

An SSH tunnel can also be useful when your office blocks Netflix 😉

Local Port Forwarding

This allows you to access remote servers directly from your local computer. Let's assume you want to use RDP (port 3389) to reach a client's host (10.0.1.1), and your hopping server is 'hopping.server':

ssh -L 6000:10.0.1.1:3389 wieger@hopping.server

Now you can open Remote Desktop and connect to ‘localhost:6000’, directing you through the tunnel!
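On Windows you can start the Remote Desktop client straight from the command line; mstsc accepts a port after the host:

```shell
# Open the built-in RDP client against the local end of the tunnel
mstsc /v:localhost:6000
```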

Remote Port Forwarding

This makes a local service/port accessible from a remote host. Sometimes I use this to keep a 'backdoor' open so I can log in remotely (from home or wherever).

Let's say you want to make a web application (TCP 443) available at port 6000 on the remote SSH server:

ssh -R 6000:localhost:443 wieger@bontekoe.technology

Now you should be able to connect to port 6000 on the remote host (bontekoe.technology).

Dynamic Forwarding (Proxy)

This is ideal for people who want to use the internet safely/anonymously, or for offices where Netflix is blocked 😉

Use a remote server (e.g. a home server) to tunnel all web traffic; connect to it through SSH using the -D flag:

ssh -D 6000 wieger@bontekoe.technology

Now open your browser settings, navigate to the connection properties and manually configure a SOCKS proxy. Use 127.0.0.1 as host and 6000 as port. The tunnel remains open as long as you are connected through SSH.
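You can verify the tunnel before touching any browser settings; curl speaks SOCKS5, and the hostname variant also resolves DNS through the tunnel (ifconfig.me is just one example of an IP-echo service):

```shell
# Fetch a page through the SOCKS tunnel; the reported IP should be the remote server's
curl --socks5-hostname 127.0.0.1:6000 https://ifconfig.me
```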


mod_pagespeed Module on Ubuntu 18.04

mod_pagespeed is an open-source Apache module created by Google to help make the web faster by rewriting web pages to reduce latency and bandwidth. mod_pagespeed releases are available as precompiled Linux packages or as source. (See the release notes for information about fixed bugs.)

Installation

  1. Update system

    apt update -y
    apt upgrade -y

  2. Install Apache

    apt-get install apache2 -y

  3. Start Apache and enable it at boot

    systemctl start apache2
    systemctl enable apache2

  4. Install mod_pagespeed

    wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb
    dpkg -i mod-pagespeed-stable_current_amd64.deb
    systemctl restart apache2

  5. Verify mod_pagespeed is running

    curl -D- localhost | head | grep pagespeed

Web Interface

mod_pagespeed has a very simple web interface for viewing statistics. If you do not care about this, skip this step.

nano /etc/apache2/mods-available/pagespeed.conf

Add these lines to it:

<Location /ps_admin>
    Order allow,deny
    Allow from localhost
    Allow from 127.0.0.1
    # Allow from all   (uncomment only if the interface may be reachable by everyone)
    SetHandler ps-admin
</Location>

<Location /ps_global_admin>
    Order allow,deny
    Allow from localhost
    Allow from 127.0.0.1
    # Allow from all
    SetHandler ps_global_admin
</Location>

After restarting apache you can go to http://<your url>/ps_admin
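The restart and a quick local check could look like this:

```shell
# Restart Apache so the new handler config is picked up, then peek at the admin page
sudo systemctl restart apache2
curl -s http://localhost/ps_admin | head
```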


Test internet speed using speedtest-cli

speedtest-cli is a great tool to test your internet speed using the Speedtest.net servers. Make sure you have Python installed before installing speedtest-cli.

Installing and Using Speedtest-CLI

  1. Update APT and install packages

    apt-get update; apt-get install python-pip speedtest-cli

  2. Test your speed!

    speedtest-cli
    Testing download speed........................................
    Download: 913.12 Mbit/s
    Testing upload speed..................................................
    Upload: 524.12 Mbit/s

  3. Share your speed 🙂

    speedtest-cli --share

    This will provide you with a link to an image you can share, proving your speed.


Load Balancing Remote Desktop

Using HAProxy to load balance between RDS servers is useful if you have more than one RDS server and want users to connect to a single IP.

  1. Install Haproxy

    sudo apt-get update
    sudo apt-get install haproxy

  2. Add the RDP VIP (virtual IP) and RDP hosts to /etc/haproxy/haproxy.cfg

    defaults
        clitimeout 1h
        srvtimeout 1h

    listen VIP1 193.x.x.x:3389
        mode tcp
        tcp-request inspect-delay 5s
        tcp-request content accept if RDP_COOKIE
        persist rdp-cookie
        balance rdp-cookie
        option tcpka
        option tcplog
        option redispatch
        server win2k19-A 192.168.10.5:3389 weight 10 check inter 2000 rise 2 fall 3
        server win2k19-B 192.168.10.6:3389 weight 10 check inter 2000 rise 2 fall 3
Now we have HAProxy listening on a 193.x.x.x IP address. When you connect to that IP it directs you to one of the Windows 2019 machines; if one dies, HAProxy removes it from the pool and you can reconnect to one that is still online.
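Before restarting, it is worth validating the file; haproxy can check its own configuration:

```shell
# -c only checks the configuration and exits; restart once it reports the config is valid
haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl restart haproxy
```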


Installing phpIPAM

phpIPAM is an open-source web IP address management application (IPAM). Its goal is to provide light, modern and useful IP address management. It has lots of features and can be integrated with PowerDNS!

  1. Install Apache, PHP 7.2, MariaDB and GIT client

    apt-get install apache2 mariadb-server php7.2 libapache2-mod-php7.2 php7.2-curl php7.2-mysql php7.2-gd php7.2-intl php-pear php7.2-imap php-memcache php7.2-pspell php7.2-recode php7.2-tidy php7.2-xmlrpc php7.2-mbstring php-gettext php7.2-gmp php7.2-json php7.2-xml git wget -y

  2. Run the secure installation for MariaDB

    mysql_secure_installation
    Enter current password for root (enter for none):
    Set root password? [Y/n]: N
    Remove anonymous users? [Y/n]: Y
    Disallow root login remotely? [Y/n]: Y
    Remove test database and access to it? [Y/n]: N
    Reload privilege tables now? [Y/n]: Y

  3. Create database and user

    MariaDB [(none)]> create database phpipam;
    MariaDB [(none)]> grant all on phpipam.* to phpipam@localhost identified by 'bontekoe123';

    MariaDB [(none)]> FLUSH PRIVILEGES;
    MariaDB [(none)]> EXIT;

  4. Clone it from Github

    cd /var/www/html
    git clone https://github.com/phpipam/phpipam.git /var/www/html/phpipam/
    cd /var/www/html/phpipam
    git checkout 1.3
    git submodule update --init --recursive

  5. Edit the config file

    cp config.dist.php config.php
    nano config.php

    Add your MariaDB database, username and password there.

  6. Import the phpIPAM database schema

    mysql -u root -p phpipam < db/SCHEMA.sql

  7. Set the correct permissions

    chown -R www-data:www-data /var/www/html/phpipam
    chmod -R 755 /var/www/html/phpipam

  8. Apache configuration

    Either use the default Apache configuration or create a virtual host specifically for this. That I leave up to you.
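If you go the virtual-host route, a minimal sketch could look like this (ipam.example.com is a placeholder; adjust the ServerName and paths to your setup):

```shell
# Hypothetical dedicated vhost for phpIPAM; ipam.example.com is a placeholder
sudo tee /etc/apache2/sites-available/phpipam.conf > /dev/null <<'EOF'
<VirtualHost *:80>
    ServerName ipam.example.com
    DocumentRoot /var/www/html/phpipam

    <Directory /var/www/html/phpipam>
        Options FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
EOF
sudo a2ensite phpipam.conf
sudo systemctl reload apache2
```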

  9. Finish the installer

    Go to http://x.x.x.x/phpipam and you will see the wizard, asking you to finish the installation. Follow the easy steps. After that you can login with admin/admin and change the default password.

  10. Create your first subnet

    Here you can create your first subnet. It can automatically help you with discovery, host status checks and more.


Browser Caching via .htaccess

A browser retrieves many resources from the webserver (CSS/JS, etc.). Caching allows browsers to store these files in temporary storage for faster retrieval the next time a file is needed.

mod_expires headers

Using these statements we can inform the browser that it may cache files for a longer period. Make sure mod_expires is enabled in Apache.
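On Ubuntu the Apache modules used in this post can be enabled with a2enmod (mod_headers is needed for the Cache-Control section further down):

```shell
# Enable mod_expires and mod_headers, then restart Apache
sudo a2enmod expires headers
sudo systemctl restart apache2
```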

<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType text/css "access 1 month"
ExpiresByType text/html "access 1 month"
ExpiresByType image/gif "access 1 year"
ExpiresByType image/png "access 1 year"
ExpiresByType image/jpg "access 1 year"
ExpiresByType image/jpeg "access 1 year"
ExpiresByType image/x-icon "access 1 year"
ExpiresByType application/pdf "access 1 month"
ExpiresByType application/javascript "access 1 month"
ExpiresByType text/x-javascript "access 1 month"
ExpiresByType application/x-shockwave-flash "access 1 month"
ExpiresDefault "access 1 month"
</IfModule>

Cache control in mod_headers

<ifModule mod_headers.c>
<filesMatch "\.(ico|jpe?g|png|gif|swf)$">
Header set Cache-Control "public"
</filesMatch>
<filesMatch "\.(css)$">
Header set Cache-Control "public"
</filesMatch>
<filesMatch "\.(js)$">
Header set Cache-Control "private"
</filesMatch>
<filesMatch "\.(x?html?|php)$">
Header set Cache-Control "private, must-revalidate"
</filesMatch>
</ifModule>

Turn off ETags

By removing the ETag header, you stop caches and browsers from validating files via ETags, forcing them to rely on your Cache-Control and Expires headers.

<IfModule mod_headers.c>
   Header unset Etag
   Header set Connection keep-alive
</IfModule>
FileETag None

DEFLATE compression

Compression is implemented by the DEFLATE filter of mod_deflate. The following directives enable compression for documents in the container where they are placed; again, make sure the module is enabled in Apache.
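Enabling mod_deflate on Ubuntu is a single command:

```shell
# mod_deflate ships with Apache; it just needs to be switched on
sudo a2enmod deflate
sudo systemctl restart apache2
```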

<IfModule mod_deflate.c>
  # Compress HTML, CSS, JavaScript, Text, XML and fonts
  AddOutputFilterByType DEFLATE application/javascript
  AddOutputFilterByType DEFLATE application/rss+xml
  AddOutputFilterByType DEFLATE application/vnd.ms-fontobject
  AddOutputFilterByType DEFLATE application/x-font
  AddOutputFilterByType DEFLATE application/x-font-opentype
  AddOutputFilterByType DEFLATE application/x-font-otf
  AddOutputFilterByType DEFLATE application/x-font-truetype
  AddOutputFilterByType DEFLATE application/x-font-ttf
  AddOutputFilterByType DEFLATE application/x-javascript
  AddOutputFilterByType DEFLATE application/xhtml+xml
  AddOutputFilterByType DEFLATE application/xml
  AddOutputFilterByType DEFLATE font/opentype
  AddOutputFilterByType DEFLATE font/otf
  AddOutputFilterByType DEFLATE font/ttf
  AddOutputFilterByType DEFLATE image/svg+xml
  AddOutputFilterByType DEFLATE image/x-icon
  AddOutputFilterByType DEFLATE text/css
  AddOutputFilterByType DEFLATE text/html
  AddOutputFilterByType DEFLATE text/javascript
  AddOutputFilterByType DEFLATE text/plain
  AddOutputFilterByType DEFLATE text/xml
  # Remove browser bugs (only needed for really old browsers)
  BrowserMatch ^Mozilla/4 gzip-only-text/html
  BrowserMatch ^Mozilla/4\.0[678] no-gzip
  BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
  Header append Vary User-Agent
</IfModule>

Manage your datacenter infrastructure with Ansible

Ansible is not only a great tool for simple tasks (like updating servers) but can also be of much help deploying and automating the infrastructure underneath. Ansible supports building your infrastructure from the ground up.

Ansible is compatible with almost anything: if you can use a CLI, you can use Ansible. Out of the box it has lots of plugins for vendors like Cisco, HP/Aruba, Arista, Juniper, NetApp and many more. Want to take it to a higher level? There is also support for VMware, XenServer, RHEV and more. There is little Ansible cannot build for you.

Build our network topology with Ansible

I will show you how to build a leaf-spine topology using Ansible. If you want to know more about leaf-spine network topologies, please refer to this article. In short: the leaves are access switches, connected to the spines using layer-3 routing (OSPF/BGP).

Ansible Hosts file

In order to manage our entire infrastructure in one place, we will create a hosts file with groups (spines, leafs, vms) and child entries (the actual devices). For now I use VyOS switches, but this can be any Cisco, HP or Juniper switch of course.

[leafs]
leaf01 ansible_host=192.168.136.71 ansible_network_os=vyos
leaf02 ansible_host=192.168.136.76 ansible_network_os=vyos

[spines]
spine01 ansible_host=192.168.136.70 ansible_network_os=vyos
spine02 ansible_host=192.168.136.77 ansible_network_os=vyos

[vms]
server01 ansible_host=192.168.200.100
server02 ansible_host=192.168.200.101

[infrastructure:children]
leafs
spines

[datacenter:children]
leafs
spines
vms

In the above example you see that I have two leaf switches that I want to connect to my two spine switches. I grouped them under the two host categories and then created a new group "infrastructure" linking them together. With that setup I can run tasks on either the set of leaves or on both spines and leaves together. Don't forget to create a local ansible.cfg pointing to the hosts file:

[defaults]
inventory = ~/ansible-datacenter/hosts
filter_plugins = ~/ansible-datacenter/plugins/
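With the inventory and ansible.cfg in place, you can verify the grouping before running anything (ansible-inventory is available from Ansible 2.4 onwards):

```shell
# Print the inventory as a tree of groups, children and hosts
ansible-inventory --graph
```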

Configuring the interfaces of leafs and spines

Let's start with the easy part: configure all devices to have interfaces in the correct subnets so they can communicate with each other. I am also giving each device a loopback address on interface lo, used for internal purposes and management. Let's create the playbook ipaddr.yml:

---
- hosts: infrastructure
  connection: network_cli
  vars:
    interface_data:
      leaf01:
          - { name: eth1, ipv4: 192.168.11.2/30 }
          - { name: eth2, ipv4: 192.168.21.2/30 }
          - { name: lo, ipv4: 10.0.0.11/32 }
      leaf02:
          - { name: eth1, ipv4: 192.168.12.2/30 }
          - { name: eth2, ipv4: 192.168.22.2/30 }
          - { name: lo, ipv4: 10.0.0.12/32 }
      spine01:
          - { name: eth1, ipv4: 192.168.11.1/30 }
          - { name: eth2, ipv4: 192.168.12.1/30 }
          - { name: lo, ipv4: 10.0.0.1/32 }
      spine02:
          - { name: eth1, ipv4: 192.168.21.1/30 }
          - { name: eth2, ipv4: 192.168.22.1/30 }
          - { name: lo, ipv4: 10.0.0.2/32 }
  tasks:
    - name: VyOS | Configure IPv4
      vyos_l3_interface:
        aggregate: "{{interface_data[inventory_hostname]}}"

Notice that in this case I am using the host group 'infrastructure', because I want to set these IP addresses on all the switches (leaves and spines). This saves time, as I can now do it from a single playbook. So, run it and check whether the leaf actually has the correct configuration now:
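Running the playbook is a single command; a syntax check first never hurts:

```shell
# Validate the playbook, then apply it to all hosts in the 'infrastructure' group
ansible-playbook ipaddr.yml --syntax-check
ansible-playbook ipaddr.yml
```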

vyos@leaf01:~$ show interfaces
Codes: S - State, L - Link, u - Up, D - Down, A - Admin Down
Interface        IP Address                        S/L  Description
---------        ----------                        ---  -----------
eth0             192.168.136.71/28                 u/u
eth1             192.168.11.2/30                   u/u
eth2             192.168.21.2/30                   u/u
lo               127.0.0.1/8                       u/u
                 10.0.0.11/32
                 ::1/128

Looks great! So now I have 2 spines and 2 leaves that are connected and can ping each other.

vyos@leaf01:~$ ping 192.168.11.1
PING 192.168.11.1 (192.168.11.1) 56(84) bytes of data.
64 bytes from 192.168.11.1: icmp_req=1 ttl=64 time=0.288 ms
64 bytes from 192.168.11.1: icmp_req=2 ttl=64 time=0.305 ms

Automated checking

Ansible is meant to automate things, so after applying the interface configuration it is best to automatically check whether the devices are reachable. Let's add a task to the playbook that pings after applying the configuration:

    - name: VyOS | Test IPv4 connectivity
      vyos_command:
        commands:
          - "ping 192.168.11.2 count 5"
          - "ping 192.168.12.2 count 5"
      register: spine01_result
      when: 'inventory_hostname == "spine01"'
    - name: VyOS | Testresults Connectivity
      assert:
        that:
          - "'192.168.11.2' in spine01_result.stdout[0]"
          - "'0% packet loss' in spine01_result.stdout[0]"
          - "'192.168.12.2' in spine01_result.stdout[1]"
          - "'0% packet loss' in spine01_result.stdout[1]"
      when: 'inventory_hostname == "spine01"'

So now we have spines and leaves and everything is connected, but we still need layer-3 routing. We can use either BGP or OSPF. In this example I will use Ansible to push the OSPF configuration to VyOS and create a new "area 0". There are two ways to accomplish this: using the CLI, or pushing the config file itself. I'm going the easy way and pushing the file, so I create a template in Ansible and save it as ospf_conf.j2:

protocols {
    ospf {
        area 0 {
            {% for dict in interface_data[inventory_hostname] -%}
            {% if dict["name"] != "lo" -%}
            network {{ dict["ipv4"] | ipaddr("network") }}/{{ dict["ipv4"] | ipaddr("prefix") }}
            {% else -%}
            network {{ dict["ipv4"] }}
            {% endif -%}
            {% endfor -%}
        }
        parameters {
            {% for dict in interface_data[inventory_hostname] -%}
            {% if dict["name"] == "lo" -%}
            router-id {{ dict["ipv4"] | ipaddr("address") }}
            {% endif -%}
            {% endfor -%}
        }
    }
}

interfaces {
    {% for dict in interface_data[inventory_hostname] -%}
    {% if dict["name"] != "lo" -%}
    ethernet {{ dict["name"] }} {
        ip {
            ospf {
                network point-to-point
            }
        }
    }
    {% endif -%}
    {% endfor -%}
}

What this does is add each network range from interface_data to the OSPF networks, and add the OSPF parameter to the interfaces (eth1/eth2). So add this task to the playbook:

  - name: push ospf configuration to vyos
    vyos_config:
      src: ./ospf_conf.j2
      save: yes

So, run the playbook. After this you will see that each device has a working OSPF configuration and the leaves are now redundantly connected to each spine.

vyos@leaf01# show protocols ospf
 area 0 {
     network 10.0.0.11/32
     network 192.168.11.0/30
     network 192.168.21.0/30
 }
 parameters {
     router-id 10.0.0.11
 }
vyos@leaf01# run show ip ospf neighbor

Neighbor ID     Pri State           Dead Time Address         Interface            RXmtL RqstL DBsmL
10.0.0.1          1 Full/DROther      32.379s 192.168.11.1    eth1:192.168.11.2        0     0     0
10.0.0.2          1 Full/DROther      32.369s 192.168.21.1    eth2:192.168.21.2        0     0     0

vyos@leaf01# traceroute 192.168.12.2
traceroute to 192.168.12.2 (192.168.12.2), 30 hops max, 60 byte packets
 1  192.168.11.1 (192.168.11.1)  0.320 ms  0.318 ms  0.313 ms
 2  192.168.12.2 (192.168.12.2)  0.557 ms  0.553 ms  0.584 ms
[edit]
vyos@leaf01# traceroute 192.168.22.2
traceroute to 192.168.22.2 (192.168.22.2), 30 hops max, 60 byte packets
 1  192.168.21.1 (192.168.21.1)  0.254 ms  0.249 ms  0.245 ms
 2  192.168.22.2 (192.168.22.2)  0.503 ms  0.502 ms  0.499 ms

The routing table shows all four loopback IPs, some directly attached and some learned via OSPF:

vyos@leaf01# run show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

S>* 0.0.0.0/0 [210/0] via 192.168.136.65, eth0, 19:18:47
O>* 10.0.0.1/32 [110/10] via 192.168.11.1, eth1, 18:05:55
O>* 10.0.0.2/32 [110/10] via 192.168.21.1, eth2, 17:20:50
O   10.0.0.11/32 [110/0] is directly connected, lo, 18:06:16
C>* 10.0.0.11/32 is directly connected, lo, 18:06:16
O>* 10.0.0.12/32 [110/20] via 192.168.11.1, eth1, 17:20:50
  *                       via 192.168.21.1, eth2, 17:20:50

We did this in less than 5 minutes. Remember the time when we had to configure this by hand? Now, if I need a new interface, I can edit my playbook, run it and be done in 30 seconds.

Next: divide this playbook into nice roles.
