Ubuntu Bonding (trunk) with LACP

Linux allows us to bond multiple network interfaces into a single logical “bonded” interface, using a special kernel module named bonding. To configure such a bond from /etc/network/interfaces we first need the ifenslave package (on newer Ubuntu releases the package is called simply ifenslave):

sudo apt-get install ifenslave-2.6

Now we have to make sure that the bonding kernel module is present and loaded at boot time. Edit the /etc/modules file:

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.
bonding

As you can see, we added “bonding” at the end of the file.
Now stop the network service:

sudo service networking stop

Load the module (or reboot server):

sudo modprobe bonding
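
You can verify that the module is actually loaded:

lsmod | grep bonding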

Now edit /etc/network/interfaces to configure the slave interfaces and the bond itself for LACP:

auto eth1
iface eth1 inet manual
    bond-master bond0
 
auto eth2
iface eth2 inet manual
    bond-master bond0
 
auto bond0
iface bond0 inet static
    # For jumbo frames, change mtu to 9000
    mtu 1500
    address 192.31.1.2
    netmask 255.255.255.0
    network 192.31.1.0
    broadcast 192.31.1.255
    gateway 192.31.1.1
    # Check link state every 100 ms
    bond-miimon 100
    # Wait 200 ms before deactivating/reactivating a slave after a link change
    bond-downdelay 200
    bond-updelay 200
    # Mode 4 = IEEE 802.3ad dynamic link aggregation (LACP)
    bond-mode 4
    # The slaves declare themselves via bond-master above
    bond-slaves none
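
Note that mode 4 only works when the switch ports are configured for LACP as well. As a rough sketch, on a Cisco IOS switch this would look something like the following (assuming the server is cabled to ports Gi0/1 and Gi0/2; the port names and channel-group number are examples):

interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active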

Now start the network service again:

sudo service networking start
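
If the bond does not come up automatically, bringing the interfaces up by hand may help (using the interface names from above):

sudo ifup eth1 eth2 bond0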

Verify the bond is up:

cat /proc/net/bonding/bond0

Output should be something like:

~$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
 
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
 
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 2
    Actor Key: 33
    Partner Key: 2
    Partner Mac Address: cc:e1:7f:2b:82:80
 
Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:4f:26:c5
Aggregator ID: 1
Slave queue ID: 0
 
Slave Interface: eth2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:4f:26:cf
Aggregator ID: 1
Slave queue ID: 0
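
The same information is available through sysfs, which is handy for scripting:

cat /sys/class/net/bond0/bonding/mode
cat /sys/class/net/bond0/bonding/slaves

The first command should report 802.3ad 4, the second should list eth1 and eth2.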

CGN: Carrier Grade NAT

Every network engineer with some experience knows the RFC1918 address space off the top of their head. So there is no need to explain that almost every office, home user and some datacenter networks use IPs from this RFC. So far, so good. But what if you have a large network with more than 10 physical locations and need to hook things together? This is where CGN comes in handy.

If you have multiple offices or locations and one of the NAT-performing routers has the same subnet on the inside as on the outside (the outside being the main office network here), no routing will be possible for this network. Especially when dealing with a lot of branch offices (and more IT personnel) it becomes difficult to know exactly which RFC1918 ranges are in use, and where. For example, I have worked for a large enterprise where somebody in Spain wanted to maintain control over the local network (idiot). He just figured it would be handy to configure 10.0.0.0/8 as the local network, and everything worked until he had to open a VPN tunnel to the main office in Amsterdam. As the main office network equipment was using 10.0.10.0/24, things started to fall apart.

This is where RFC 6598 comes in handy. This RFC reserves an IPv4 prefix that can be used for internal addressing, separately from the RFC1918 addresses. Result: no overlap, yet no use of publicly routable addresses. The chosen prefix is 100.64.0.0/10.
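
To see what that /10 actually gives you, here is a quick check using Python’s standard ipaddress module (any python3 will do):

python3 -c "import ipaddress; n = ipaddress.ip_network('100.64.0.0/10'); print(n[0], '-', n[-1], n.num_addresses)"

This prints 100.64.0.0 - 100.127.255.255 4194304: over four million addresses, guaranteed not to overlap with either RFC1918 space or public routes.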

It’s good to know that, for networking purposes, there is a complete /10 range that can be used (obviously isolated from anything else). CGN has drawbacks, such as added complexity and administration. But in a large enterprise, CGN would definitely be the way to go.

Here you can find some great test results!


The internet is broken?

Yesterday, the 12th of August 2014, the internet grew past 512,000 BGP routes. The milestone itself was no surprise; Cisco warned about it in May 2014:

It wasn’t that long ago (2008) that the table reached 256k routes, triggering action by network administrators to ensure the continued growth of the Internet. Now that the table has passed 500,000 routes, it’s time to start preparing for another significant milestone – the 512k mark.

A nice graph can be found on he.net, which also shows that the number of ASNs grew past 48K.

If you accept full internet routes on your network, this might be the time to verify the maximum routing table size on those devices. Some equipment might need to be rebooted in order for this change to become active.
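
On Cisco IOS, show ip bgp summary shows how many prefixes a router currently holds. On the Catalyst 6500/Sup720 platform that was hit hardest by this event, the IPv4 FIB TCAM allocation can be checked and raised roughly as follows (a sketch; verify the exact value for your supervisor against Cisco’s documentation):

show mls cef maximum-routes
! in configuration mode, raise the IPv4 allocation (value in thousands of routes);
! a reload is required before the new allocation becomes active
mls cef maximum-routes ip 1000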

More information can be found here (read it!)
