Browser Caching via .htaccess

A browser retrieves many resources from the webserver (CSS, JavaScript, images, and so on). Caching allows the browser to store these files in temporary storage for faster retrieval the next time they are needed.

mod_expires headers

Using these directives we can tell the browser that it may cache files for a longer period. Make sure mod_expires is enabled in Apache.

<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType text/css "access 1 month"
ExpiresByType text/html "access 1 month"
ExpiresByType image/gif "access 1 year"
ExpiresByType image/png "access 1 year"
ExpiresByType image/jpg "access 1 year"
ExpiresByType image/jpeg "access 1 year"
ExpiresByType image/x-icon "access 1 year"
ExpiresByType application/pdf "access 1 month"
ExpiresByType application/javascript "access 1 month"
ExpiresByType text/x-javascript "access 1 month"
ExpiresByType application/x-shockwave-flash "access 1 month"
ExpiresDefault "access 1 month"
</IfModule>
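None of this takes effect unless the relevant modules are loaded. A minimal sketch, assuming a Debian/Ubuntu-style Apache layout (a2enmod, systemctl); other distributions manage modules differently:

```shell
# Enable the modules and reload safely (Debian/Ubuntu a2enmod layout assumed)
sudo a2enmod expires headers
sudo apachectl configtest && sudo systemctl reload apache2

# Confirm both modules are now active
apachectl -M | grep -E 'expires_module|headers_module'
```

Running configtest before the reload catches syntax errors in .htaccess-style directives placed in the main config before they can take the server down.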

Cache control in mod_headers

<IfModule mod_headers.c>
<FilesMatch "\.(ico|jpe?g|png|gif|swf)$">
Header set Cache-Control "public"
</FilesMatch>
<FilesMatch "\.(css)$">
Header set Cache-Control "public"
</FilesMatch>
<FilesMatch "\.(js)$">
Header set Cache-Control "private"
</FilesMatch>
<FilesMatch "\.(x?html?|php)$">
Header set Cache-Control "private, must-revalidate"
</FilesMatch>
</IfModule>

Turn off ETags

By removing the ETag header, you prevent caches and browsers from validating files, forcing them to rely solely on your Cache-Control and Expires headers.

<IfModule mod_headers.c>
   Header unset ETag
   Header set Connection keep-alive
</IfModule>
FileETag None
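After deploying this, a quick header check confirms the ETag is really gone; the URL below is a placeholder for your own site:

```shell
# Should print nothing once ETags are disabled (example.com is a placeholder)
curl -sI https://example.com/logo.png | grep -i '^etag'
```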

Compression with mod_deflate

Compression is implemented by the DEFLATE output filter. The following directives enable compression for documents in the container where they are placed; again, make sure mod_deflate is enabled in Apache.

<IfModule mod_deflate.c>
  # Compress HTML, CSS, JavaScript, Text, XML and fonts
  AddOutputFilterByType DEFLATE application/javascript
  AddOutputFilterByType DEFLATE application/rss+xml
  AddOutputFilterByType DEFLATE application/vnd.ms-fontobject
  AddOutputFilterByType DEFLATE application/x-font
  AddOutputFilterByType DEFLATE application/x-font-opentype
  AddOutputFilterByType DEFLATE application/x-font-otf
  AddOutputFilterByType DEFLATE application/x-font-truetype
  AddOutputFilterByType DEFLATE application/x-font-ttf
  AddOutputFilterByType DEFLATE application/x-javascript
  AddOutputFilterByType DEFLATE application/xhtml+xml
  AddOutputFilterByType DEFLATE application/xml
  AddOutputFilterByType DEFLATE font/opentype
  AddOutputFilterByType DEFLATE font/otf
  AddOutputFilterByType DEFLATE font/ttf
  AddOutputFilterByType DEFLATE image/svg+xml
  AddOutputFilterByType DEFLATE image/x-icon
  AddOutputFilterByType DEFLATE text/css
  AddOutputFilterByType DEFLATE text/html
  AddOutputFilterByType DEFLATE text/javascript
  AddOutputFilterByType DEFLATE text/plain
  AddOutputFilterByType DEFLATE text/xml
  # Remove browser bugs (only needed for really old browsers)
  BrowserMatch ^Mozilla/4 gzip-only-text/html
  BrowserMatch ^Mozilla/4\.0[678] no-gzip
  BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
  Header append Vary User-Agent
</IfModule>
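Whether compression actually kicks in is easy to verify from the command line: request a resource with an Accept-Encoding header and look for Content-Encoding in the reply. The URL is a placeholder:

```shell
# Ask for gzip and check what the server sends back (example.com is a placeholder)
curl -sI -H 'Accept-Encoding: gzip' https://example.com/style.css | grep -i 'content-encoding'
```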

VyOS configuration using Ansible

The goal is to create configurations for VyOS devices and apply them using Ansible. I have used VyOS as my home router, as a VPN endpoint (with 1300+ IPsec tunnels) and as a datacenter router connected to the AMS-IX over 10 Gbps uplinks.

Prerequisites

Make sure you have a VyOS installation (it can be virtual, a physical box, or even a Ubiquiti EdgeRouter) with SSH enabled.

Inventory file

[vyos]
10.0.1.1 ansible_user=ansible-adm ansible_network_os=vyos

Sample Playbook

---

- name: VYOS | Config eth1
  hosts: vyos
  connection: network_cli
  tasks:
    - name: VYOS | BACKUP
      vyos_config:
        backup: yes
    - name: VYOS | Apply eth1 config
      vyos_l3_interface:
        name: eth1
        ipv4: 10.0.2.1/24
        state: present

Run the Ansible playbook:

$ ansible-playbook -i hosts vyos.yml --ask-pass
SSH password:

PLAY [VYOS | Config eth1] *************************************************************************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************************************************************
ok: [10.0.1.1]

TASK [VYOS | BACKUP ] *************************************************************************************************************************************************************
changed: [10.0.1.1]

TASK [VYOS | Apply eth1 config] *************************************************************************************************************************************************************
changed: [10.0.1.1]

PLAY RECAP **************************************************************************************************************************************************************
10.0.1.1               : ok=3    changed=2    unreachable=0    failed=0

Configure DNS server on VyOS

---
- hosts: vyos
  connection: network_cli
  tasks:
  - name: VYOS | DNS servers and hostname
    vyos_system:
      host_name: "{{inventory_hostname}}"
      domain_name: my.vyos.test
      name_server:
        - 1.1.1.1
        - 8.8.4.4
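After a playbook run you can verify the result with an ad-hoc command instead of logging in to each device. A sketch, assuming the inventory file "hosts" from the earlier example:

```shell
# Ad-hoc check from the control node that the hostname/DNS settings landed
ansible vyos -i hosts -c network_cli -m vyos_command -a "commands='show host name'" --ask-pass
```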

Manage your datacenter infrastructure with Ansible

Ansible is not only a great tool for simple tasks (like updating servers); it can also be of great help deploying and automating the infrastructure underneath them. Ansible supports building your infrastructure from the ground up.

Ansible is compatible with almost anything: if you can use a CLI, you can use Ansible. Out of the box it has lots of modules for vendors like Cisco, HP/Aruba, Arista, Juniper, NetApp and many more. Want to take it to a higher level? There is also support for VMware, XenServer, RHEV and more. There is little Ansible cannot build for you.

Build our network topology with Ansible

I will show you how to build a leaf-spine topology using Ansible. If you want to know more about leaf-spine network topologies, please refer to this article. In short: leaf switches are access switches connected to the spines using layer-3 routing (OSPF/BGP).

Ansible Hosts file

In order to manage our entire infrastructure in one place we will create a hosts file with groups (spines, leafs, servers) and children objects (the actual devices). For now I use VyOS as the switches, but this could of course be any Cisco, HP or Juniper switch.

[leafs]
leaf01 ansible_host=192.168.136.71 ansible_network_os=vyos
leaf02 ansible_host=192.168.136.76 ansible_network_os=vyos

[spines]
spine01 ansible_host=192.168.136.70 ansible_network_os=vyos
spine02 ansible_host=192.168.136.77 ansible_network_os=vyos

[vms]
server01 ansible_host=192.168.200.100
server02 ansible_host=192.168.200.101

[infrastructure:children]
leafs
spines

[datacenter:children]
leafs
spines
vms

In the above example you can see I have two leaf switches that I want to connect to my two spine switches. I grouped them under the two host categories and then created a new category, “infrastructure”, linking them together. With that setup I can run tasks on either a set of leafs or on both spines and leafs together. Don’t forget to create a local ansible.cfg pointing to the hosts file:

[defaults]
inventory = ~/ansible-datacenter/hosts
filter_plugins = ~/ansible-datacenter/plugins/
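With the ansible.cfg in place, ansible-inventory can render the group tree, which is a quick way to confirm the children mappings before running any plays:

```shell
# Print the resolved inventory graph; 'datacenter' should contain leafs, spines and vms
ansible-inventory --graph
```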

Configuring the interfaces of leafs and spines

Let’s start with the easy part: configuring all devices to have interfaces in the correct subnets so they can communicate with each other. I am also giving each device a loopback address on interface lo, used for internal purposes and management. Let’s create the playbook ipaddr.yml:

---
- hosts: infrastructure
  connection: network_cli
  vars:
    interface_data:
      leaf01:
          - { name: eth1, ipv4: 192.168.11.2/30 }
          - { name: eth2, ipv4: 192.168.21.2/30 }
          - { name: lo, ipv4: 10.0.0.11/32 }
      leaf02:
          - { name: eth1, ipv4: 192.168.12.2/30 }
          - { name: eth2, ipv4: 192.168.22.2/30 }
          - { name: lo, ipv4: 10.0.0.12/32 }
      spine01:
          - { name: eth1, ipv4: 192.168.11.1/30 }
          - { name: eth2, ipv4: 192.168.12.1/30 }
          - { name: lo, ipv4: 10.0.0.1/32 }
      spine02:
          - { name: eth1, ipv4: 192.168.21.1/30 }
          - { name: eth2, ipv4: 192.168.22.1/30 }
          - { name: lo, ipv4: 10.0.0.2/32 }
  tasks:
    - name: VyOS | Configure IPv4
      vyos_l3_interface:
        aggregate: "{{interface_data[inventory_hostname]}}"

Notice that in this case I am using the host group ‘infrastructure’ because I want to set these IP addresses on all the switches (leafs and spines). This saves time, as I can now do it from a single playbook. So, run it and see whether the leaf actually has the correct configuration now.

vyos@leaf01:~$ show interfaces
Codes: S - State, L - Link, u - Up, D - Down, A - Admin Down
Interface        IP Address                        S/L  Description
---------        ----------                        ---  -----------
eth0             192.168.136.71/28                 u/u
eth1             192.168.11.2/30                   u/u
eth2             192.168.21.2/30                   u/u
lo               127.0.0.1/8                       u/u
                 10.0.0.11/32
                 ::1/128

Looks great! So now I have 2 spines and 2 leaves that are connected and can ping each other.

vyos@leaf01:~$ ping 192.168.11.1
PING 192.168.11.1 (192.168.11.1) 56(84) bytes of data.
64 bytes from 192.168.11.1: icmp_req=1 ttl=64 time=0.288 ms
64 bytes from 192.168.11.1: icmp_req=2 ttl=64 time=0.305 ms

Automated checking

Ansible is meant to automate things, so after applying the interface configuration it would be best to automatically check whether the devices are reachable. Let’s create a task that pings from within the playbook after applying the configuration.

    - name: VyOS | Test IPv4 connectivity
      vyos_command:
        commands:
          - "ping 192.168.11.2 count 5"
          - "ping 192.168.12.2 count 5"
      register: spine01_result
      when: 'inventory_hostname == "spine01"'
    - name: VyOS | Testresults Connectivity
      assert:
        that:
          - "'192.168.11.2' in spine01_result.stdout[0]"
          - "'0% packet loss' in spine01_result.stdout[0]"
          - "'192.168.12.2' in spine01_result.stdout[1]"
          - "'0% packet loss' in spine01_result.stdout[1]"
      when: 'inventory_hostname == "spine01"'

So now we have spines and leaves and everything is connected, but we still need layer-3 routing. We can use either BGP or OSPF. In this example I will use Ansible to push the OSPF configuration to VyOS and create a new “area 0”. There are two ways to accomplish this: using the CLI, or just pushing the config file itself. I’m going the easy way and pushing the file. So I create a template in Ansible and save it as ospf_conf.j2:

protocols {
    ospf {
        area 0 {
            {% for dict in interface_data[inventory_hostname] -%}
            {% if dict["name"] != "lo" -%}
            network {{ dict["ipv4"] | ipaddr("network") }}/{{ dict["ipv4"] | ipaddr("prefix") }}
            {% else -%}
            network {{ dict["ipv4"] }}
            {% endif -%}
            {% endfor -%}
        }
        parameters {
            {% for dict in interface_data[inventory_hostname] -%}
            {% if dict["name"] == "lo" -%}
            router-id {{ dict["ipv4"] | ipaddr("address") }}
            {% endif -%}
            {% endfor -%}
        }
    }
}

interfaces {
    {% for dict in interface_data[inventory_hostname] -%}
    {% if dict["name"] != "lo" -%}
    ethernet {{ dict["name"] }} {
        ip {
            ospf {
                network point-to-point
            }
        }
    }
    {% endif -%}
    {% endfor -%}
}

What this does is add each range from interface_data to the OSPF networks, and add the OSPF parameters to the interfaces (eth1/eth2). So add this task to the playbook:

  - name: push ospf configuration to vyos
    vyos_config:
      src: ./ospf_conf.j2
      save: yes

So, run the playbook. Afterwards you will see that each device has a working OSPF configuration and the leafs are now redundantly connected to each spine.

vyos@leaf01# show protocols ospf
 area 0 {
     network 10.0.0.11/32
     network 192.168.11.0/30
     network 192.168.21.0/30
 }
 parameters {
     router-id 10.0.0.11
 }
vyos@leaf01# run show ip ospf neighbor

Neighbor ID     Pri State           Dead Time Address         Interface            RXmtL RqstL DBsmL
10.0.0.1          1 Full/DROther      32.379s 192.168.11.1    eth1:192.168.11.2        0     0     0
10.0.0.2          1 Full/DROther      32.369s 192.168.21.1    eth2:192.168.21.2        0     0     0

vyos@leaf01# traceroute 192.168.12.2
traceroute to 192.168.12.2 (192.168.12.2), 30 hops max, 60 byte packets
 1  192.168.11.1 (192.168.11.1)  0.320 ms  0.318 ms  0.313 ms
 2  192.168.12.2 (192.168.12.2)  0.557 ms  0.553 ms  0.584 ms
[edit]
vyos@leaf01# traceroute 192.168.22.2
traceroute to 192.168.22.2 (192.168.22.2), 30 hops max, 60 byte packets
 1  192.168.21.1 (192.168.21.1)  0.254 ms  0.249 ms  0.245 ms
 2  192.168.22.2 (192.168.22.2)  0.503 ms  0.502 ms  0.499 ms

The routing table will show you all four loopback IPs, some directly attached and some learned via OSPF:

vyos@leaf01.leaf01# run show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

S>* 0.0.0.0/0 [210/0] via 192.168.136.65, eth0, 19:18:47
O>* 10.0.0.1/32 [110/10] via 192.168.11.1, eth1, 18:05:55
O>* 10.0.0.2/32 [110/10] via 192.168.21.1, eth2, 17:20:50
O   10.0.0.11/32 [110/0] is directly connected, lo, 18:06:16
C>* 10.0.0.11/32 is directly connected, lo, 18:06:16
O>* 10.0.0.12/32 [110/20] via 192.168.11.1, eth1, 17:20:50
  *                       via 192.168.21.1, eth2, 17:20:50

We did this in less than 5 minutes; remember when we had to configure all this by hand? Now if I need a new interface, I can edit my playbook, run it and be done in 30 seconds.

Next: divide this playbook into nice roles.

Encrypt email with PGP

One of the most popular methods to encrypt messages is PGP, a cryptography system quite widespread on the Internet. Using PGP we can encrypt a message end-to-end. There are many tools that can help; I use Gpg4win (a free tool that works with Outlook).

Download Gpg4win here.

Once the download is finished, fire up the installer. It’s pretty much next-next-finish. Optionally you can select “browser integration” during the installation process.

After the installation, open it for the first time and click “New Key Pair”; it will request your name and e-mail address. Hit “Create” to start the generation process. It will also ask for a password to secure the private key. Once done, it will tell you “Key pair successfully created” – you are good to go.

To access your public key, right click anywhere on the bar where it lists your name and email address. Select Export from the drop-down menu. Save the file somewhere; you can share it with the people you want to communicate safely with.

Now it’s time to export your private key. You will need it to decrypt messages that you receive. Right click on the bar where your certificate is displayed, then select Export Secret Keys. Save this file in a safe location!

In order to communicate safely with somebody you will have to import their public key into Kleopatra. To search for someone’s public key, click “Lookup on Server” and simply search for their e-mail address. Found the person you were looking for? Right click and hit “Import”. It will ask for confirmation; if correct, hit Yes.

Here comes the magic. Open up Outlook and create a new email. In the top bar you will find a new tab (“GpgOL”). Add the person you just imported in the “To” field, add some content to the email and hit “Encrypt”. If required, select the certificate that matches the recipient and hit “OK”. Now you will see the message completely encrypted.

Receiving an encrypted email is just as simple: go to the top bar (GpgOL) and hit Decrypt. Remember, you must have this person’s public key imported.

Ubuntu 18.04 – Apache2 – HTTP2

Today, I’m going to install the latest Apache2 and PHP7 on an Ubuntu 18.04 server and enable the HTTP/2 protocol. To upgrade an existing Apache system to use HTTP/2, follow these simple instructions:

$ sudo -i
$ apt-get install software-properties-common
$ add-apt-repository -y ppa:ondrej/apache2
$ apt-key update
$ apt-get update

The above commands add the latest Apache2 repository to your system and update the list of available packages your system is aware of.

$ apt-get upgrade

Now your system is up to date with the latest packages. I am assuming you already have Apache/php etc running. Now we can enable the module in Apache:

a2enmod http2

Now we have to edit the virtual host and add this protocol to it.

<VirtualHost *:443>
 # prefer HTTP/2 over HTTP/1.1
 Protocols h2 http/1.1
 .....
</VirtualHost>

Now restart Apache and you should be good to go!
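curl can confirm that HTTP/2 is actually negotiated; the URL is a placeholder for your own virtual host:

```shell
# Negotiate HTTP/2 over TLS and print the status line (example.com is a placeholder)
curl -sI --http2 https://example.com/ | head -1
# On success the status line begins with "HTTP/2"
```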

Ansible – One role to rule them all

An Ansible role is basically another level of abstraction used to organize playbooks. Roles provide a skeleton for an independent and reusable collection of variables, tasks, templates, files, and modules which can be automatically loaded into a playbook. Playbooks then become a collection of roles, with every role providing a specific piece of functionality.

For example, to install Nginx, we need to add a package repository, install the package and set up configuration. Roles allow us to create very minimal playbooks that then look to a directory structure to determine the configuration steps they need to perform.

Role directory structure

In order for Ansible to correctly handle roles, we should build a directory structure that Ansible can find and understand. We can do this by creating a roles directory in our working directory.

The directory structure for Roles looks like this:

rolename
 - files
 - handlers
 - meta
 - templates
 - tasks
 - vars

A role’s directory structure consists of files, handlers, meta, templates, tasks, and vars. These directories will contain all of the code that implements our configuration. We may not use all of them, so in practice we only need to create the directories we actually use.
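Rather than creating the skeleton by hand, ansible-galaxy can scaffold it (it also adds a few extra directories such as defaults and tests, which are harmless):

```shell
# Scaffold a standard role skeleton under roles/
mkdir -p roles && cd roles
ansible-galaxy init nginx
ls nginx
```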

Ansible will automatically search for and read a YAML file called roles/nginx/tasks/main.yml. Here is that main.yml file:

---
- name: Installs Nginx
  apt: pkg=nginx state=installed update_cache=true
  notify:
    - Start Nginx

- name: Upload default index.php for host
  copy: src=index.php dest=/usr/share/nginx/html/ mode=0644
  register: php
  ignore_errors: True

- name: Remove index.html for host
  command: rm /usr/share/nginx/html/index.html
  when: php|success

- name: Upload default index.html for host
  copy: src=index.html dest=/usr/share/nginx/html/ mode=0644
  when: php|failed

As we can see, the file just lists the steps to be performed, which makes it read well.

We also changed how we reference external files in our configuration. The src lines no longer need to point to a static_files directory: if we place all of our static files in the role’s files subdirectory, Ansible will find them automatically.

Now that we have the task portion of the playbook in the tasks/main.yml file, we need to move the handlers section into a file located at handlers/main.yml.

- name: Start Nginx
  service: name=nginx state=started

Move index.html and index.php pages out of the static_files directory and put them into the roles/nginx/files directory.

So now we can create a very very simple playbook with the following content:

---
- hosts: test_group
  roles:
    - role: nginx

Run it!

$ ansible-playbook -s test.yml

PLAY [test_group] ******************************************************************** 

GATHERING FACTS *************************************************************** 
ok: [127.0.0.1]

TASK: [nginx | Installs Nginx] ************************************************ 
ok: [127.0.0.1]

TASK: [nginx | Upload default index.php for host] ***************************** 
ok: [127.0.0.1]

TASK: [nginx | Remove index.html for host] ************************************ 
changed: [127.0.0.1]

TASK: [nginx | Upload default index.html for host] **************************** 
skipping: [127.0.0.1]

PLAY RECAP ******************************************************************** 
127.0.0.1              : ok=4    changed=1    unreachable=0    failed=0  

Shopware + NGINX

Shopware is widely used professional open source e-commerce software. Based on bleeding-edge technologies like Symfony 3, Doctrine 2 and Zend Framework, Shopware comes as the perfect platform for your next e-commerce project.

Set up the timezone and make sure all updates are done and required packages are installed:

sudo dpkg-reconfigure tzdata
sudo apt update && sudo apt upgrade -y
sudo apt install -y curl wget vim git unzip socat apt-transport-https

Install PHP and required packages

sudo apt install -y php7.0 php7.0-cli php7.0-fpm php7.0-common php7.0-mysql php7.0-curl php7.0-json php7.0-zip php7.0-gd php7.0-xml php7.0-mbstring php7.0-opcache

Install a database server (MySQL or MariaDB)

sudo apt install -y mariadb-server
sudo mysql_secure_installation
Would you like to setup VALIDATE PASSWORD plugin? N
New password: your_secure_password
Re-enter new password: your_secure_password
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y

Connect and create a user and database:

sudo mysql -u root -p
# Enter password
mysql> CREATE DATABASE dbname;
mysql> GRANT ALL ON dbname.* TO 'username' IDENTIFIED BY 'password';
mysql> FLUSH PRIVILEGES;
exit;

Install and configure NGINX

sudo apt install -y nginx
sudo nano /etc/nginx/sites-available/shopware.conf
server {
    listen 80;
    listen 443 ssl;

    ssl_certificate /etc/letsencrypt/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/example.com/private.key;
    ssl_certificate /etc/letsencrypt/example.com_ecc/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/example.com_ecc/private.key;
    
    server_name example.com;
    root /var/www/shopware;

    index shopware.php index.php;

    location / {
        try_files $uri $uri/ /shopware.php$is_args$args;
    }

    location /recovery/install {
      index index.php;
      try_files $uri /recovery/install/index.php$is_args$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }
}
sudo ln -s /etc/nginx/sites-available/shopware.conf /etc/nginx/sites-enabled
sudo nginx -t && sudo systemctl reload nginx.service

Now it’s time to install Shopware;

sudo mkdir -p /var/www/shopware
sudo chown -R {your_user}:{your_user} /var/www/shopware
cd /var/www/shopware
wget https://releases.shopware.com/install_5.5.8_d5bf50630eeaacc6679683e0ab0dcba89498be6d.zip?_ga=2.141661361.269357371.1556739808-1418008019.1556603459 -O shopware.zip
unzip shopware.zip
rm shopware.zip
sudo chown -R www-data:www-data /var/www/shopware

You should raise the default PHP values to memory_limit = 256M and upload_max_filesize = 6M.

Now point a browser at your server and you will see the Shopware setup wizard, ready to complete.

Configuration of HP IRF (Intelligent Resilient Framework)

HP’s Intelligent Resilient Framework (IRF) is an advanced technology that allows you to virtualize two or more switches into a single switching and routing system, also known as a “virtual switch”. IRF is available on the HP A-series switches such as the A5120 model and the A5500-5800 models.

From Wikipedia:
Intelligent Resilient Framework (IRF) is a software virtualization technology developed by H3C (3Com). Its core idea is to connect multiple network devices through physical IRF ports and perform the necessary configuration, after which these devices are virtualized into a single distributed device. This virtualization technology realizes the cooperation, unified management, and non-stop maintenance of multiple devices. This technology follows some of the same general concepts as Cisco’s VSS and vPC technologies.

Basic Configuration;

Step 1:
Login to the switch through the console port

Step 2:
Ensure that both switches are running the same software version

<H3C>system-view
[H3C]display version

Step 3:
Reset the configuration of the switches.

reset saved-configuration
reboot

Step 4:
Assign a unit number (1 or 2) to each S5800. (Later you will see the unit number on the front-panel LED on the right side of the switch.)

On unit 1:

[H3C]irf member 1 renumber 1
Warning: Renumbering the switch number may result in configuration change or loss. Continue?[Y/N]:y

On unit 2:

[H3C]irf member 1 renumber 2
Warning: Renumbering the switch number may result in configuration change or loss. Continue?[Y/N]:y

Step 5:
Save the configuration and reboot the switches

[H3C]quit
save irf.cfg
startup saved-configuration irf.cfg
reboot

Step 6:
Setting priority on Master S5800.
On unit 1:

[H3C]irf member 1 priority 32

Step 7:
Shut down the 10 Gbps ports that will form the IRF group (on both switches)
On Unit 1:

[H3C]int TenGigabitEthernet 1/0/25
[H3C-Ten-GigabitEthernet1/0/25]shutdown
[H3C]int TenGigabitEthernet 1/0/26
[H3C-Ten-GigabitEthernet1/0/26]shutdown

On Unit 2:

[H3C]int TenGigabitEthernet 2/0/27
[H3C-Ten-GigabitEthernet2/0/27]shutdown
[H3C]int TenGigabitEthernet 2/0/28
[H3C-Ten-GigabitEthernet2/0/28]shutdown

Step 8:
Assign the 10 Gbps ports to an IRF port group
On Unit 1:

[H3C]irf-port 1/1
[H3C-irf-port]port group interface TenGigabitEthernet 1/0/25
[H3C-irf-port]port group interface TenGigabitEthernet 1/0/26
[H3C-irf-port]quit

On Unit 2:

[H3C]irf-port 2/2
[H3C-irf-port]port group interface TenGigabitEthernet 2/0/27
[H3C-irf-port]port group interface TenGigabitEthernet 2/0/28
[H3C-irf-port]quit

Step 9:
Enable the 10 Gbps ports that will form the IRF (on both switches)
On unit 1:

[H3C]int TenGigabitEthernet 1/0/25
[H3C-Ten-GigabitEthernet1/0/25]undo shutdown
[H3C]int TenGigabitEthernet 1/0/26
[H3C-Ten-GigabitEthernet1/0/26]undo shutdown

On unit 2:

[H3C]int TenGigabitEthernet 2/0/27
[H3C-Ten-GigabitEthernet2/0/27]undo shutdown
[H3C]int TenGigabitEthernet 2/0/28
[H3C-Ten-GigabitEthernet2/0/28]undo shutdown

Step 10:
Activate the IRF port configuration (on both switches)

[H3C]irf-port-configuration active

Step 11:
Save the configuration

[H3C]quit
save

Step 12:
Connect the two 10 GbE Direct Attach Cables (DACs) as shown in the IRF diagram.
NOTE: The secondary switch (unit 2) will now reboot automatically.

Step 13:
The IRF stack should now be formed. Verify IRF operation:

[H3C]display irf
[H3C]display irf configuration
[H3C]display irf topology
[H3C]display device

Done!

Ubuntu Bonding (trunk) with LACP

Linux allows us to bond multiple network interfaces into a single interface using a special kernel module named bonding. The Linux bonding driver provides a method for combining multiple network interfaces into a single logical “bonded” interface.

sudo apt-get install ifenslave-2.6

Now, we have to make sure that the correct kernel module bonding is present, and loaded at boot time.
Edit /etc/modules file:

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.
bonding

As you can see we added “bonding”.
Now stop the network service:

service networking stop

Load the module (or reboot server):

sudo modprobe bonding
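lsmod will show whether the driver really loaded before you go on to configure the interfaces:

```shell
# The bonding driver should now appear in the loaded-module list
lsmod | grep -w '^bonding'
```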

Now edit the interfaces configuration to support bonding and LACP.

auto eth1
iface eth1 inet manual
    bond-master bond0
 
auto eth2
iface eth2 inet manual
    bond-master bond0
 
auto bond0
iface bond0 inet static
    # For jumbo frames, change mtu to 9000
    mtu 1500
    address 192.31.1.2
    netmask 255.255.255.0
    network 192.31.1.0
    broadcast 192.31.1.255
    gateway 192.31.1.1
    bond-miimon 100
    bond-downdelay 200 
    bond-updelay 200 
    # mode 4 = IEEE 802.3ad dynamic link aggregation (LACP)
    bond-mode 4
    bond-slaves none

Now start the network service again

service networking start

Verify the bond is up:

cat /proc/net/bonding/bond0

The output should look something like this:

~$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
 
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
 
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 2
    Actor Key: 33
    Partner Key: 2
    Partner Mac Address: cc:e1:7f:2b:82:80
 
Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:4f:26:c5
Aggregator ID: 1
Slave queue ID: 0
 
Slave Interface: eth2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:4f:26:cf
Aggregator ID: 1
Slave queue ID: 0

CGN: Carrier Grade NAT

Every network engineer with some experience knows the RFC 1918 address space off the top of their head, so there is no need to explain that almost every office, home user and some datacenter networks use IPs from this RFC. So far, so good. But what if you have a large network with more than 10 physical locations and need to hook things together? This is where CGN comes in handy.

If you have multiple offices or locations and one of the NAT-performing routers has the same subnet on the inside as on the outside (the outside being the main office network here), no routing will be possible for this network. Especially when dealing with a lot of branch offices (and more IT personnel) it becomes difficult to know exactly which RFC 1918 ranges are in use, and where. For example, I have worked for a large enterprise where somebody in Spain wanted to maintain control over the local network. He figured it would be handy to configure 10.0.0.0/8 as the local network, and everything worked until he had to open a VPN tunnel to the main office in Amsterdam. As the main office network equipment was using 10.0.10.0/24, things started to fall apart.

This is where RFC 6598 comes in handy. It reserves an IPv4 prefix that can be used for internal addressing, separate from the RFC 1918 addresses. The result: no overlap, yet no use of publicly routable addresses. The chosen prefix is 100.64.0.0/10.
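The size of that block is easy to sanity-check: a /10 prefix leaves 32 - 10 = 22 host bits, covering 100.64.0.0 through 100.127.255.255.

```shell
# 22 host bits -> number of addresses in 100.64.0.0/10
echo $(( 1 << (32 - 10) ))
# prints 4194304
```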

It’s good to know that, for networking purposes, there is a complete /10 range that can be used (obviously isolated from anything else). CGN has drawbacks such as complexity and administration, but in a large enterprise CGN would definitely be the way to go.
