Load Balancing Remote Desktop

Using HAProxy to load-balance between RDS servers is useful if you have more than one RDS server and want users to connect through a single IP.

  1. Install HAProxy

    sudo apt-get update
    sudo apt-get install haproxy

  2. Add the RDP VIP (virtual IP) and the RDP hosts to /etc/haproxy/haproxy.cfg

    defaults
        clitimeout 1h
        srvtimeout 1h

    listen VIP1 193.x.x.x:3389
        mode tcp
        tcp-request inspect-delay 5s
        tcp-request content accept if RDP_COOKIE
        persist rdp-cookie
        balance rdp-cookie
        option tcpka
        option tcplog
        option redispatch
        server win2k19-A 192.168.10.5:3389 weight 10 check inter 2000 rise 2 fall 3
        server win2k19-B 192.168.10.6:3389 weight 10 check inter 2000 rise 2 fall 3
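
    After saving the configuration, restart HAProxy so the changes take effect (assuming the systemd-based Ubuntu install from step 1):

    sudo systemctl restart haproxy
    sudo systemctl enable haproxy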

HAProxy is now listening on the 193.x.x.x address; when you connect to that IP, it will direct you to one of the Windows 2019 machines. If one of them dies, HAProxy will remove it from the pool and you can reconnect to one that is still online.

Installing phpIPAM

phpIPAM is an open-source web IP address management (IPAM) application. Its goal is to provide light, modern and useful IP address management. It has lots of features and can be integrated with PowerDNS!

  1. Install Apache, PHP 7.2, MariaDB and GIT client

    apt-get install apache2 mariadb-server php7.2 libapache2-mod-php7.2 php7.2-curl php7.2-mysql php7.2-gd php7.2-intl php-pear php7.2-imap php-memcache php7.2-pspell php7.2-recode php7.2-tidy php7.2-xmlrpc php7.2-mbstring php-gettext php7.2-gmp php7.2-json php7.2-xml git wget -y

  2. Run the secure installation for MariaDB

    mysql_secure_installation
    Enter current password for root (enter for none):
    Set root password? [Y/n]: N
    Remove anonymous users? [Y/n]: Y
    Disallow root login remotely? [Y/n]: Y
    Remove test database and access to it? [Y/n]: N
    Reload privilege tables now? [Y/n]: Y

  3. Create database and user

    MariaDB [(none)]> create database phpipam;
    MariaDB [(none)]> grant all on phpipam.* to phpipam@localhost identified by 'bontekoe123';

    MariaDB [(none)]> FLUSH PRIVILEGES;
    MariaDB [(none)]> EXIT;

  4. Clone it from GitHub

    cd /var/www/html
    git clone https://github.com/phpipam/phpipam.git /var/www/html/phpipam/
    cd /var/www/html/phpipam
    git checkout 1.3
    git submodule update --init --recursive

  5. Edit the config file

    cp config.dist.php config.php
    nano config.php

    Add your MariaDB database, username and password there.

  6. Import the phpIPAM database

    mysql -u root -p phpipam < db/SCHEMA.sql

  7. Set the correct permissions

    chown -R www-data:www-data /var/www/html/phpipam
    chmod -R 755 /var/www/html/phpipam

  8. Apache configuration

    Either use the default Apache configuration or create a virtual host specifically for this; that I leave up to you.
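
    A minimal sketch of such a virtual host (ipam.example.com is a placeholder, adjust to your setup):

    <VirtualHost *:80>
        ServerName ipam.example.com
        DocumentRoot /var/www/html/phpipam
        <Directory /var/www/html/phpipam>
            Options FollowSymLinks
            AllowOverride All
            Require all granted
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/phpipam_error.log
        CustomLog ${APACHE_LOG_DIR}/phpipam_access.log combined
    </VirtualHost>

    Enable it with a2ensite and reload Apache afterwards.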

  9. Finish the installer

    Go to http://x.x.x.x/phpipam and you will see the wizard asking you to finish the installation. Follow the easy steps. After that you can log in with admin/admin and change the default password.

  10. Create your first subnet

    Here you can create your first subnet. It can automatically help you with discovery, host status checks and more.

Ceph Infrastructure Planning

Ceph is open-source storage meant to facilitate highly scalable object, block and file-based storage. Because of its open-source nature it can run on any type of hardware. The CRUSH algorithm will balance and replicate all data throughout the cluster.

At first I was running Ceph on regular servers (HP ProLiant, Dell), but about two years ago I decided to play around with custom configurations, hoping to squeeze more IOPS out of it. As I found out, a big part of getting the most out of Ceph is the network. My last Ceph configuration was so fast it took down all our servers 🙁

What was I doing?

In order to get the most out of Ceph I created a setup using only NVMe drives. Here is the configuration for my Ceph nodes:

Supermicro SYS-1028U-TN10RT
Intel E5-2640v4 (10 cores, 2.4GHz)
32GB DDR4-SDRAM
1x Intel SSD DC P4600
1x ASUS Hyper M.2 x16 Card
4x Samsung 960 Pro NVMe
1x Intel X520-DA2 dual 10GbE

I am using the Intel SSD DC P4600 as the main (fast) storage and the Samsung 960 Pro NVMe drives for slightly slower storage. Everything was looking dandy.

So what happened?

One of our clients woke up and thought, “hey, let’s migrate some stuff today”. Generating only 1Gbps of traffic on the uplinks to the internet, everything was looking fine. Until 15 minutes later, when everything went offline. All VMs stopped and storage was down. It turned out the client was migrating a set of databases to our network; after the scp copy finished they extracted the files and imported them into the database server, generating a lot of IO on that virtual server. That virtual server was, however, part of a database cluster, so it replicated all that data to three other VMs.

As Ceph wrote the data to disk, it also replicated it across the Ceph pool and through the network, generating way too much traffic (20 Gbps per Ceph host) on the access layer. Ceph was so fast, it took down the entire network.

How to resolve

As storage (especially Ceph 😉 ) is getting faster, network topology needs to change in order to handle the traffic. A design that is becoming very popular is the leaf-spine design.

This topology replaces the well-known access/aggregation/core design; by using more bandwidth on the connections to the spines we can handle much more traffic.

A 10 Gbit network is the recommended minimum for Ceph. 1 Gbit will work, but the latency and bandwidth limitations will be almost unacceptable for a production network; 10 Gbit is a significant improvement in latency. It is also recommended to connect each Ceph node to two different leafs, separating external traffic (clients/VMs) from internal traffic (replication).
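
In ceph.conf this separation is expressed with two networks. A minimal sketch, where the two subnets are placeholders for your client and replication VLANs:

[global]
# external traffic: clients and VMs
public_network = 192.168.10.0/24
# internal traffic: OSD replication and recovery
cluster_network = 192.168.20.0/24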

Browser Caching via .htaccess

A browser retrieves many resources from the webserver (CSS/JS, etc.). Caching allows the browser to store these files in temporary storage for faster retrieval the next time a file is needed.

mod_expires headers

Using these statements we can inform the browser that it can cache files for a longer period. Make sure mod_expires is enabled in Apache.
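
On Debian/Ubuntu, the modules used in this post can be enabled with the a2enmod helper (assuming a stock Apache 2.4 install):

sudo a2enmod expires headers deflate
sudo systemctl restart apache2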

<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType text/css "access 1 month"
ExpiresByType text/html "access 1 month"
ExpiresByType image/gif "access 1 year"
ExpiresByType image/png "access 1 year"
ExpiresByType image/jpg "access 1 year"
ExpiresByType image/jpeg "access 1 year"
ExpiresByType image/x-icon "access 1 year"
ExpiresByType application/pdf "access 1 month"
ExpiresByType application/javascript "access 1 month"
ExpiresByType text/x-javascript "access 1 month"
ExpiresByType application/x-shockwave-flash "access 1 month"
ExpiresDefault "access 1 month"
</IfModule>

Cache control in mod_headers

<IfModule mod_headers.c>
<FilesMatch "\.(ico|jpe?g|png|gif|swf)$">
Header set Cache-Control "public"
</FilesMatch>
<FilesMatch "\.(css)$">
Header set Cache-Control "public"
</FilesMatch>
<FilesMatch "\.(js)$">
Header set Cache-Control "private"
</FilesMatch>
<FilesMatch "\.(x?html?|php)$">
Header set Cache-Control "private, must-revalidate"
</FilesMatch>
</IfModule>

Turn off ETags

By removing the ETag header, you prevent caches and browsers from validating files, forcing them to rely on your Cache-Control and Expires headers.

<IfModule mod_headers.c>
   Header unset ETag
   Header set Connection keep-alive
</IfModule>
FileETag None

Deflating compression

Compression is implemented by the DEFLATE filter. The following directives enable compression for documents in the container where they are placed; again, make sure the module (mod_deflate) is enabled in Apache.

<IfModule mod_deflate.c>
  # Compress HTML, CSS, JavaScript, Text, XML and fonts
  AddOutputFilterByType DEFLATE application/javascript
  AddOutputFilterByType DEFLATE application/rss+xml
  AddOutputFilterByType DEFLATE application/vnd.ms-fontobject
  AddOutputFilterByType DEFLATE application/x-font
  AddOutputFilterByType DEFLATE application/x-font-opentype
  AddOutputFilterByType DEFLATE application/x-font-otf
  AddOutputFilterByType DEFLATE application/x-font-truetype
  AddOutputFilterByType DEFLATE application/x-font-ttf
  AddOutputFilterByType DEFLATE application/x-javascript
  AddOutputFilterByType DEFLATE application/xhtml+xml
  AddOutputFilterByType DEFLATE application/xml
  AddOutputFilterByType DEFLATE font/opentype
  AddOutputFilterByType DEFLATE font/otf
  AddOutputFilterByType DEFLATE font/ttf
  AddOutputFilterByType DEFLATE image/svg+xml
  AddOutputFilterByType DEFLATE image/x-icon
  AddOutputFilterByType DEFLATE text/css
  AddOutputFilterByType DEFLATE text/html
  AddOutputFilterByType DEFLATE text/javascript
  AddOutputFilterByType DEFLATE text/plain
  AddOutputFilterByType DEFLATE text/xml
  # Remove browser bugs (only needed for really old browsers)
  BrowserMatch ^Mozilla/4 gzip-only-text/html
  BrowserMatch ^Mozilla/4\.0[678] no-gzip
  BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
  Header append Vary User-Agent
</IfModule>
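
To check that compression is actually applied, request a page with an Accept-Encoding header and look at the response headers (example.com is a placeholder):

curl -s -H "Accept-Encoding: gzip" -D - -o /dev/null https://example.com | grep -i content-encoding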

VyOS configuration using Ansible

The goal is to create configurations for VyOS devices and apply them using Ansible. I have used VyOS as my home router, as a VPN endpoint device (with 1300+ IPsec tunnels) and as a datacenter router connected to the AMS-IX using 10Gbps uplinks.

Prerequisites

Make sure you have a VyOS installation (can be virtual, can be a box, can even be a Ubiquiti EdgeRouter) with SSH enabled.

Inventory file

[vyos]
10.0.1.1 ansible_user=ansible-adm ansible_network_os=vyos

Sample Playbook

---

- name: VYOS | Config eth1
  hosts: vyos
  connection: network_cli
  tasks:
    - name: VYOS | BACKUP
      vyos_config:
        backup: yes
    - name: VYOS | Apply eth1 config
      vyos_l3_interface:
        name: eth1
        ipv4: 10.0.2.1/24
        state: present

Run the Ansible playbook:

$ ansible-playbook -i hosts vyos.yml --ask-pass
SSH password:

PLAY [VYOS | Config eth1] *************************************************************************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************************************************************
ok: [10.0.1.1]

TASK [VYOS | BACKUP ] *************************************************************************************************************************************************************
changed: [10.0.1.1]

TASK [VYOS | Apply eth1 config] *************************************************************************************************************************************************************
changed: [10.0.1.1]

PLAY RECAP **************************************************************************************************************************************************************
10.0.1.1               : ok=2    changed=2    unreachable=0    failed=0

Configure DNS server on VyOS

---
- hosts: vyos
  connection: network_cli
  tasks:
  - name: VYOS | DNS servers and hostname
    vyos_system:
      host_name: "{{inventory_hostname}}"
      domain_name: my.vyos.test
      name_server:
        - 1.1.1.1
        - 8.8.4.4
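
Run it the same way as the previous playbook (dns.yml is whatever filename you saved it under):

$ ansible-playbook -i hosts dns.yml --ask-pass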

Manage your datacenter infrastructure with Ansible

Ansible is not only a great tool for simple tasks (like updating servers) but can also be of great help deploying and automating the infrastructure underneath them. Ansible supports building your infrastructure from the ground up.

Ansible is compatible with almost anything: if you can use a CLI, you can use Ansible. Out of the box it has lots of plugins for vendors like Cisco, HP/Aruba, Arista, Juniper, NetApp and many more. Want to take it to a higher level? There is also support for VMware, XenServer, RHEV and more. There is nothing Ansible cannot build for you.

Build our network topology with Ansible

I will show you how to build a leaf-spine topology using Ansible. If you want to know more about leaf-spine network topologies, please refer to this article. In short: leafs are access switches connected to the spines using layer-3 routing (OSPF/BGP).

Ansible Hosts file

In order to manage our entire infrastructure in one place we will create a hosts file with groups (spines, leafs, servers) and children objects (the actual devices). For now I use VyOS as switches, but this can be any Cisco, HP or Juniper switch of course.

[leafs]
leaf01 ansible_host=192.168.136.71 ansible_network_os=vyos
leaf02 ansible_host=192.168.136.76 ansible_network_os=vyos

[spines]
spine01 ansible_host=192.168.136.70 ansible_network_os=vyos
spine02 ansible_host=192.168.136.77 ansible_network_os=vyos

[vms]
server01 ansible_host=192.168.200.100
server02 ansible_host=192.168.200.101

[infrastructure:children]
leafs
spines

[datacenter:children]
leafs
spines
vms

In the above example you can see I have two leaf switches that I want to connect to my two spine switches. I grouped them under the two host categories and then created a new category “infrastructure” linking them together. With that setup I can run tasks on either a set of leafs or on both spines and leafs together. Don’t forget to create a local ansible.cfg pointing to the hosts file:

[defaults]
inventory = ~/ansible-datacenter/hosts
filter_plugins = ~/ansible-datacenter/plugins/
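
With the inventory and ansible.cfg in place, a quick sanity check shows which devices a group resolves to:

$ ansible infrastructure --list-hosts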

Configuring the interfaces of leafs and spines

Let’s start with the easy part: configure all devices to have interfaces in the correct subnet so they can communicate with each other. I am also giving them a loopback address on interface lo, used for internal purposes and management. Let’s create the playbook ipaddr.yml:

---
- hosts: infrastructure
  connection: network_cli
  vars:
    interface_data:
      leaf01:
          - { name: eth1, ipv4: 192.168.11.2/30 }
          - { name: eth2, ipv4: 192.168.21.2/30 }
          - { name: lo, ipv4: 10.0.0.11/32 }
      leaf02:
          - { name: eth1, ipv4: 192.168.12.2/30 }
          - { name: eth2, ipv4: 192.168.22.2/30 }
          - { name: lo, ipv4: 10.0.0.12/32 }
      spine01:
          - { name: eth1, ipv4: 192.168.11.1/30 }
          - { name: eth2, ipv4: 192.168.12.1/30 }
          - { name: lo, ipv4: 10.0.0.1/32 }
      spine02:
          - { name: eth1, ipv4: 192.168.21.1/30 }
          - { name: eth2, ipv4: 192.168.22.1/30 }
          - { name: lo, ipv4: 10.0.0.2/32 }
  tasks:
    - name: VyOS | Configure IPv4
      vyos_l3_interface:
        aggregate: "{{interface_data[inventory_hostname]}}"

Notice that in this case I am using the host group ‘infrastructure’ because I want to set these IP addresses on all the switches (leafs and spines). This saves time, as I can now do this from only one playbook. So, run it and see if the leaf actually has the correct configuration now.

vyos@leaf01:~$ show interfaces
Codes: S - State, L - Link, u - Up, D - Down, A - Admin Down
Interface        IP Address                        S/L  Description
---------        ----------                        ---  -----------
eth0             192.168.136.71/28                 u/u
eth1             192.168.11.2/30                   u/u
eth2             192.168.21.2/30                   u/u
lo               127.0.0.1/8                       u/u
                 10.0.0.11/32
                 ::1/128

Looks great! So now I have 2 spines and 2 leaves that can ping each other and are connected.

vyos@leaf01:~$ ping 192.168.11.1
PING 192.168.11.1 (192.168.11.1) 56(84) bytes of data.
64 bytes from 192.168.11.1: icmp_req=1 ttl=64 time=0.288 ms
64 bytes from 192.168.11.1: icmp_req=2 ttl=64 time=0.305 ms

Automated checking

Ansible is meant to automate stuff, so after applying the interface configuration it would be best to automatically check whether the devices are reachable. Let’s create a task to ping from within the playbook after applying the configuration.

    - name: VyOS | Test IPv4 connectivity
      vyos_command:
        commands:
          - "ping 192.168.11.2 count 5"
          - "ping 192.168.12.2 count 5"
      register: spine01_result
      when: 'inventory_hostname == "spine01"'
    - name: VyOS | Testresults Connectivity
      assert:
        that:
          - "'192.168.11.2' in spine01_result.stdout[0]"
          - "'0% packet loss' in spine01_result.stdout[0]"
          - "'192.168.12.2' in spine01_result.stdout[1]"
          - "'0% packet loss' in spine01_result.stdout[1]"
      when: 'inventory_hostname == "spine01"'

So now we have spines and leaves, and everything is connected, but we still need layer-3 routing. We can use either BGP or OSPF. In this example I will use Ansible to push the OSPF configuration to VyOS and create a new “area 0”. There are two ways to accomplish this: using the CLI, or just pushing the config file itself. I’m going the easy way and pushing the file. So I create a template in Ansible and save it as ospf_conf.j2:

protocols {
    ospf {
        area 0 {
            {% for dict in interface_data[inventory_hostname] -%}
            {% if dict["name"] != "lo" -%}
            network {{ dict["ipv4"] | ipaddr("network") }}/{{ dict["ipv4"] | ipaddr("prefix") }}
            {% else -%}
            network {{ dict["ipv4"] }}
            {% endif -%}
            {% endfor -%}
        }
        parameters {
            {% for dict in interface_data[inventory_hostname] -%}
            {% if dict["name"] == "lo" -%}
            router-id {{ dict["ipv4"] | ipaddr("address") }}
            {% endif -%}
            {% endfor -%}
        }
    }
}

interfaces {
    {% for dict in interface_data[inventory_hostname] -%}
    {% if dict["name"] != "lo" -%}
    ethernet {{ dict["name"] }} {
        ip {
            ospf {
                network point-to-point
            }
        }
    }
    {% endif -%}
    {% endfor -%}
}

What this does is add each range from interface_data to the OSPF networks, and add the OSPF parameters to the interfaces (eth1/2). So add this task to the playbook:

  - name: push ospf configuration to vyos
    vyos_config:
      src: ./ospf_conf.j2
      save: yes

So, run the playbook. After it completes you will see that each device has a working OSPF configuration and the leafs are now redundantly connected to each spine.

vyos@leaf01# show protocols ospf
 area 0 {
     network 10.0.0.11/32
     network 192.168.11.0/30
     network 192.168.21.0/30
 }
 parameters {
     router-id 10.0.0.11
 }
vyos@leaf01# run show ip ospf neighbor

Neighbor ID     Pri State           Dead Time Address         Interface            RXmtL RqstL DBsmL
10.0.0.1          1 Full/DROther      32.379s 192.168.11.1    eth1:192.168.11.2        0     0     0
10.0.0.2          1 Full/DROther      32.369s 192.168.21.1    eth2:192.168.21.2        0     0     0

vyos@leaf01# traceroute 192.168.12.2
traceroute to 192.168.12.2 (192.168.12.2), 30 hops max, 60 byte packets
 1  192.168.11.1 (192.168.11.1)  0.320 ms  0.318 ms  0.313 ms
 2  192.168.12.2 (192.168.12.2)  0.557 ms  0.553 ms  0.584 ms
[edit]
vyos@leaf01# traceroute 192.168.22.2
traceroute to 192.168.22.2 (192.168.22.2), 30 hops max, 60 byte packets
 1  192.168.21.1 (192.168.21.1)  0.254 ms  0.249 ms  0.245 ms
 2  192.168.22.2 (192.168.22.2)  0.503 ms  0.502 ms  0.499 ms

The routing table will show you all four IPs, some directly attached and some learned via OSPF:

vyos@leaf01# run show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

S>* 0.0.0.0/0 [210/0] via 192.168.136.65, eth0, 19:18:47
O>* 10.0.0.1/32 [110/10] via 192.168.11.1, eth1, 18:05:55
O>* 10.0.0.2/32 [110/10] via 192.168.21.1, eth2, 17:20:50
O   10.0.0.11/32 [110/0] is directly connected, lo, 18:06:16
C>* 10.0.0.11/32 is directly connected, lo, 18:06:16
O>* 10.0.0.12/32 [110/20] via 192.168.11.1, eth1, 17:20:50
  *                       via 192.168.21.1, eth2, 17:20:50

We did this in less than 5 minutes; remember the time when we had to configure this by hand? Now if I need a new interface I can edit my playbook, run it and be done in 30 seconds.

Next: divide this playbook into nice roles.

Encrypt email with PGP

One of the most popular methods to encrypt messages is PGP, a cryptography system quite widespread on the Internet. Using PGP we can encrypt a message end-to-end. There are many tools that can help; I use Gpg4win (a free tool that works with Outlook).

Download Gpg4win from https://gpg4win.org.

Once the download is finished, fire up the installer. It’s pretty much next-next-finish. Optionally you can select “browser integration” during the installation process.

After the installation, open it for the first time and click “New Key Pair”; it will request your name and e-mail address. Hit “Create” to start the generation process. It will also ask for a password to secure the private key. Once done, it will tell you “Key pair successfully created” – you are good to go.

To access your public key, right-click anywhere on the bar where it lists your name and email address. Select the option in the drop-down menu that says Export. Save the file somewhere; you can share this with other people you want to communicate with safely.

Now it’s time to find your private key. You will need it to decrypt messages that you receive. Right-click on the bar where your certificate is displayed, then select Export Secret Keys. Save this file in a safe location!
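
If you prefer the command line, GnuPG (bundled with Gpg4win) can do the same exports; the address below is a placeholder for your own key:

gpg --armor --export you@example.com > public.asc
gpg --armor --export-secret-keys you@example.com > private.asc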

In order to communicate safely with somebody you will have to import their public key into Kleopatra. To search for someone’s public key, click Lookup on Server and simply search for e-mail addresses. Found the person you were looking for? Right-click and hit “Import”. It will ask for confirmation; if correct, hit Yes.

Here comes the magic. Open up Outlook and create a new email. In the top bar you will find a new header (“GpgOL”). Add the person you just imported in the “To” field, add some content to the email and hit “Encrypt”. If required, select the certificate that matches the recipient and hit “OK”. Now you will see the message completely encrypted.

Receiving an encrypted email is also very simple: go to the top bar (GpgOL) and hit Decrypt. Remember, the message must have been encrypted with your public key, and you need your private key (and its password) to decrypt it.

Ubuntu 18.04 – Apache2 – HTTP2

Today, I’m going to install the latest Apache2 and PHP7 on an Ubuntu 18.04 server and enable the HTTP/2 protocol. To upgrade an existing Apache system to use HTTP/2, follow these simple instructions:

$ sudo -i
$ apt-get install software-properties-common
$ add-apt-repository -y ppa:ondrej/apache2
$ apt-key update
$ apt-get update

The above commands add the latest Apache2 repository to your system and update the list of available packages your system is aware of.

$ apt-get upgrade

Now your system is up to date with the latest packages. I am assuming you already have Apache/PHP etc. running. Now we can enable the HTTP/2 module in Apache:

a2enmod http2
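
Note that mod_http2 does not work with the prefork MPM used by mod_php, so if a2enmod complains or HTTP/2 is never negotiated, you may need to switch to the event MPM and PHP-FPM first. A sketch, assuming PHP 7.2 from the same PPA:

apt-get install php7.2-fpm
a2dismod php7.2 mpm_prefork
a2enmod mpm_event proxy_fcgi
a2enconf php7.2-fpm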

Now we have to edit the virtual host and add this protocol to it.

<VirtualHost *:443>
 # prefer HTTP/2 over HTTP/1.1
 Protocols h2 http/1.1
 .....
</VirtualHost>

Now restart Apache and you should be good to go!
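
You can verify which protocol is negotiated with curl (example.com is a placeholder; your curl must be built with HTTP/2 support):

curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com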

Ansible – One role to rule them all

An Ansible role is basically another level of abstraction used to organize playbooks. A role provides a skeleton for an independent and reusable collection of variables, tasks, templates, files, and modules which can be automatically loaded into a playbook. Playbooks then become a collection of roles, where every role has a specific functionality.

For example, to install Nginx, we need to add a package repository, install the package and set up configuration. Roles allow us to create very minimal playbooks that then look to a directory structure to determine the configuration steps they need to perform.

Role directory structure

In order for Ansible to correctly handle roles, we should build a directory structure that Ansible can find and understand. We can do this by creating a roles directory in our working directory.

The directory structure for Roles looks like this:

rolename
 - files
 - handlers
 - meta
 - templates
 - tasks
 - vars

A role’s directory structure consists of files, handlers, meta, templates, tasks, and vars. These directories will contain all of the code to implement our configuration. In practice we may not use all of them, so we only need to create the directories our role actually uses.

Ansible will automatically search for and read a YAML file called roles/nginx/tasks/main.yml. Here is that main.yml file:

---
- name: Installs Nginx
  apt: pkg=nginx state=installed update_cache=true
  notify:
    - Start Nginx

- name: Upload default index.php for host
  copy: src=index.php dest=/usr/share/nginx/html/ mode=0644
  register: php
  ignore_errors: True

- name: Remove index.html for host
  command: rm /usr/share/nginx/html/index.html
  when: php|success

- name: Upload default index.html for host
  copy: src=index.html dest=/usr/share/nginx/html/ mode=0644
  when: php|failed

As we can see, the file just lists the steps that are to be performed, which makes it read well.

We also made a change to how we reference external files in our configuration. The src lines previously had to point at a static_files directory; that is unnecessary now, because if we place all of our static files in the files subdirectory of the role, Ansible will find them automatically.

Now that we have the task portion of the playbook in the tasks/main.yml file, we need to move the handlers section into a file located at handlers/main.yml.

- name: Start Nginx
  service: name=nginx state=started

Move index.html and index.php pages out of the static_files directory and put them into the roles/nginx/files directory.

So now we can create a very simple playbook with the following content:

---
- hosts: test_group
  roles:
    - role: nginx

Run it!

$ ansible-playbook -s test.yml

PLAY [test_group] ******************************************************************** 

GATHERING FACTS *************************************************************** 
ok: [127.0.0.1]

TASK: [nginx | Installs Nginx] ************************************************ 
ok: [127.0.0.1]

TASK: [nginx | Upload default index.php for host] ***************************** 
ok: [127.0.0.1]

TASK: [nginx | Remove index.html for host] ************************************ 
changed: [127.0.0.1]

TASK: [nginx | Upload default index.html for host] **************************** 
skipping: [127.0.0.1]

PLAY RECAP ******************************************************************** 
127.0.0.1              : ok=4    changed=1    unreachable=0    failed=0  

Shopware + NGINX

Shopware is widely used professional open-source e-commerce software. Based on bleeding-edge technologies like Symfony 3, Doctrine 2 and Zend Framework, Shopware is a perfect platform for your next e-commerce project.

Set up the timezone and make sure all updates are done and required packages are installed:

sudo dpkg-reconfigure tzdata
sudo apt update && sudo apt upgrade -y
sudo apt install -y curl wget vim git unzip socat apt-transport-https

Install PHP and required packages

sudo apt install -y php7.0 php7.0-cli php7.0-fpm php7.0-common php7.0-mysql php7.0-curl php7.0-json php7.0-zip php7.0-gd php7.0-xml php7.0-mbstring php7.0-opcache

Install database server (MySQL or MariaDB)

sudo apt install -y mariadb-server
sudo mysql_secure_installation
Would you like to setup VALIDATE PASSWORD plugin? N
New password: your_secure_password
Re-enter new password: your_secure_password
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y

Connect and create a user and database:

sudo mysql -u root -p
# Enter password
mysql> CREATE DATABASE dbname;
mysql> GRANT ALL ON dbname.* TO 'username' IDENTIFIED BY 'password';
mysql> FLUSH PRIVILEGES;
exit;

Install and configure NGINX

sudo apt install -y nginx
sudo nano /etc/nginx/sites-available/shopware.conf
server {
    listen 80;
    listen 443 ssl;

    ssl_certificate /etc/letsencrypt/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/example.com/private.key;
    ssl_certificate /etc/letsencrypt/example.com_ecc/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/example.com_ecc/private.key;
    
    server_name example.com;
    root /var/www/shopware;

    index shopware.php index.php;

    location / {
        try_files $uri $uri/ /shopware.php$is_args$args;
    }

    location /recovery/install {
      index index.php;
      try_files $uri /recovery/install/index.php$is_args$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }
}
sudo ln -s /etc/nginx/sites-available/shopware.conf /etc/nginx/sites-enabled
sudo systemctl reload nginx.service
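
If the reload fails, nginx has a built-in syntax check that usually points straight at the problem:

sudo nginx -t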

Now it’s time to install Shopware:

sudo mkdir -p /var/www/shopware
sudo chown -R {your_user}:{your_user} /var/www/shopware
cd /var/www/shopware
wget https://releases.shopware.com/install_5.5.8_d5bf50630eeaacc6679683e0ab0dcba89498be6d.zip?_ga=2.141661361.269357371.1556739808-1418008019.1556603459 -O shopware.zip
unzip shopware.zip
rm shopware.zip
sudo chown -R www-data:www-data /var/www/shopware

You should raise the default PHP limits to memory_limit = 256M and upload_max_filesize = 6M.
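
A sketch of that change, assuming the PHP-FPM 7.0 install from above (the php.ini path may differ on your system):

sudo sed -i 's/^memory_limit = .*/memory_limit = 256M/' /etc/php/7.0/fpm/php.ini
sudo sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 6M/' /etc/php/7.0/fpm/php.ini
sudo systemctl restart php7.0-fpm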

Now fire up a browser to your server and you will see the setup wizard of Shopware, ready to complete.