
Regenerate Dockerfile from a Docker Image

It may happen that you have accidentally deleted your Dockerfile, or that you want to know how a Docker image was built so you can learn from it. In that case it is useful to see what the Dockerfile looked like, which files were modified or copied, and which packages were installed.

With the commands below you can easily view the Dockerfile of an image (run as root):

alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm alpine/dfimage"
dfimage -sV=1.36 nginx:latest

The above commands show the steps from the Dockerfile:

root@host:~# alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm alpine/dfimage"
root@host:~# dfimage -sV=1.36 nginx:latest
Unable to find image 'alpine/dfimage:latest' locally
latest: Pulling from alpine/dfimage
Status: Downloaded newer image for alpine/dfimage:latest
latest: Pulling from library/nginx
[..]
Status: Downloaded newer image for nginx:latest
Analyzing nginx:latest
Docker Version: 20.10.7
GraphDriver: overlay2
Environment Variables
|PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
|NGINX_VERSION=1.21.5
|NJS_VERSION=0.7.1
|PKG_RELEASE=1~bullseye

Open Ports
|80

Image user
|User is root

Potential secrets:
Dockerfile:
CMD ["bash"]
LABEL maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>
ENV NGINX_VERSION=1.21.5
ENV NJS_VERSION=0.7.1
ENV PKG_RELEASE=1~bullseye

COPY file:65504f71f5855ca017fb64d502ce873a31b2e0decd75297a8fb0a287f97acf92 in /
        docker-entrypoint.sh

COPY file:0b866ff3fc1ef5b03c4e6c8c513ae014f691fb05d530257dfffd07035c1b75da in /docker-entrypoint.d
        docker-entrypoint.d/
        docker-entrypoint.d/10-listen-on-ipv6-by-default.sh

COPY file:0fd5fca330dcd6a7de297435e32af634f29f7132ed0550d342cad9fd20158258 in /docker-entrypoint.d
        docker-entrypoint.d/
        docker-entrypoint.d/20-envsubst-on-templates.sh

COPY file:09a214a3e07c919af2fb2d7c749ccbc446b8c10eb217366e5a65640ee9edcc25 in /docker-entrypoint.d
        docker-entrypoint.d/
        docker-entrypoint.d/30-tune-worker-processes.sh

ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 80
STOPSIGNAL SIGQUIT
CMD ["nginx" "-g" "daemon off;"]

root@host:~#

To keep it readable I've cut bits out of the output, but the idea should be clear. The output cannot be copied one-to-one, but it gives you enough information to rebuild a Dockerfile from it.
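
If you only need a quick look at the build steps without running a third-party image, the built-in Docker CLI shows similar (though less readable) information: docker history lists the layer commands and docker image inspect shows metadata such as environment variables and the entrypoint. For example:

docker history --no-trunc nginx:latest
docker image inspect nginx:latest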

Pacemaker and Corosync HA

In this guide we will set up an HA failover solution using Corosync and Pacemaker, in an active/passive configuration.

Installation and Setup

Prerequisites

  • Hostname resolution must work on both nodes, via /etc/hosts or DNS
  • NTP must be installed and configured on all nodes

cat /etc/hosts
10.0.1.10   ha1 server01
10.0.1.11   ha2 server02
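
To sanity-check that resolution works, you can for example run on both nodes:

getent hosts server01 server02
ping -c 1 server02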

Installation
We will install pacemaker; it should pull in corosync as a dependency. If it does not, install corosync as well.

apt-get install pacemaker
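
Depending on the release, the crm shell used later in this guide may be packaged separately. If the crm command is missing after installation, something like this should cover it (package names can differ per release):

apt-get install crmsh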

Edit corosync.conf. The bindnetaddr is the network address, NOT the node's IP address. The mcastaddr is the default, which is fine.

cat /etc/corosync/corosync.conf
interface {
        # The following values need to be set based on your environment
        ringnumber: 0
        bindnetaddr: 10.0.1.0
        mcastaddr: 226.94.1.1
        mcastport: 5405
   }
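
For orientation: the interface block lives inside the totem section of corosync.conf. A minimal sketch of the surrounding stanza, with the remaining totem options left at the packaged defaults, looks roughly like this:

totem {
        version: 2
        secauth: off
        interface {
                ringnumber: 0
                bindnetaddr: 10.0.1.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}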

We also want corosync to start pacemaker automatically. If we do not do this, we will have to start pacemaker manually.
ver: 0 tells corosync to start pacemaker automatically; setting it to 1 requires starting pacemaker by hand!

cat /etc/corosync/corosync.conf
service {
    # Load the Pacemaker Cluster Resource Manager
    ver:       0
    name:      pacemaker
}

Copy/paste the content of corosync.conf, or scp the file to the second node.

scp /etc/corosync/corosync.conf 10.0.1.11:/etc/corosync/corosync.conf

Make sure corosync starts at boot time.

cat /etc/default/corosync
# start corosync at boot [yes|no]
START=yes
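
If you prefer to set this without opening an editor, a one-liner like the following works (assuming the file still contains the default START=no line):

sed -i 's/^START=no/START=yes/' /etc/default/corosync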

Start corosync

/etc/init.d/corosync start
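
Before looking at pacemaker, you can optionally confirm that corosync itself is up and has formed a ring, for example with:

corosync-cfgtool -s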

Check the status of the cluster with crm status:

Last updated: Fri Jun  9 11:02:55 2017          Last change: Wed Jun  7 14:26:06 2017 by root via cibadmin on server01
Stack: corosync
Current DC: server01 (version 1.1.14-70404b0) - partition with quorum
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ server01 ]

Copy the config file to the second node

scp /etc/corosync/corosync.conf server02:/etc/corosync/

Now on the second node, try to start corosync

/etc/init.d/corosync start

Check the status again. We should now see the second node joining. If this fails, check the firewall settings and the hosts file (the nodes must be able to resolve each other).

We will get some warnings, typically about STONITH, since no fencing devices are configured. For this simple two-node setup, disable STONITH and set the quorum policy to ignore, then verify the configuration:

crm configure property stonith-enabled=false
crm configure property no-quorum-policy=ignore
crm_verify -L

Now add a virtual IP to the cluster.

crm configure primitive VIP ocf:heartbeat:IPaddr2 params ip=10.0.1.100 nic=eth0 op monitor interval=10s

We have now added a VIP/floating IP, which we can test with a simple ping. It should keep responding no matter which node currently holds it.
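
To see which node currently holds the VIP, you can for example ping it and check the interface on each node:

ping -c 3 10.0.1.100
ip -4 addr show dev eth0 | grep 10.0.1.100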

Adding Resources: Services

Now we are ready to add a service to our cluster. In this example we use the Postfix service (SMTP), which we want to fail over. Postfix must be installed on both nodes.

crm configure primitive HA-postfix lsb:postfix op monitor interval=15s
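
Because this uses the lsb: resource class, the postfix init script must behave in an LSB-compliant way (correct exit codes for start, stop and status). A quick manual check on both nodes, for example:

/etc/init.d/postfix status; echo $?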

Check the status.

crm status

As we have not linked the IP to the service yet, postfix could be running on server02 while the IP is on server01. We need to put them both in one HA group.

crm configure group HA-Group VIP HA-postfix
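
If you prefer explicit constraints over a group, an equivalent sketch with separate colocation and ordering rules could look like this (the constraint names are just examples):

crm configure colocation postfix-with-vip inf: HA-postfix VIP
crm configure order vip-before-postfix inf: VIP HA-postfix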

If we check the status again, we can see that the two resources are now running on the same server.

Online: [ server01 server02 ]
 Resource Group: HA-Group
     VIP    (ocf::heartbeat:IPaddr2):   Started server01
     HA-postfix (lsb:postfix):  Started server01

Looks good!

If a resource fails, for example because postfix crashes and cannot be started again, we want it to migrate to the other server.
By default the migration-threshold is not defined (infinity), which means the resource will never migrate.

The configuration below migrates the resource after 3 failures and expires the failure count after 60 seconds. This allows the resource to move back to this node automatically.

primitive HA-postfix lsb:postfix \
        op monitor interval="15s" \
        meta target-role="Started" migration-threshold="3" failure-timeout=60s
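
One way to apply these meta attributes to the already-defined resource is through the crm shell's interactive editor, for example:

crm configure edit HA-postfix

This opens the resource definition in your editor so you can add the meta line; depending on your crmsh version you may need to commit the change afterwards.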

Now we are DONE!

Some extra commands that might be useful when managing the cluster:

Deleting a resource

crm resource stop HA-XXXX
crm configure delete HA-XXXX

Where HA-XXXX is the name of the resource.

Migrate / Move Resource

crm_resource --resource HA-Group --move --node server02
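
A manual move leaves a location constraint behind. Once you are happy with where the resource runs, clear it again, for example with:

crm resource unmigrate HA-Group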

View configuration

crm configure show

View status and fail counts

crm_mon -1 --fail
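
After fixing whatever caused a failure, you can reset the fail count for a resource so it becomes eligible to run again, for example:

crm resource cleanup HA-postfix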