Stop and Remove All Docker Containers
Disable the restart policy on all containers, then stop and remove them all:
docker update --restart=no $(docker ps -a -q)
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
SPF and DKIM with Postfix
An SPF (Sender Policy Framework) record specifies which hosts or IP addresses are allowed to send emails on behalf of a domain. You should allow only your own email server or your ISP's server to send emails for your domain.
DKIM (DomainKeys Identified Mail) uses a private key to add a signature to emails sent from your domain. Receiving SMTP servers verify the signature by using the corresponding public key, which is published in your DNS manager.
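For reference, a published DKIM record looks roughly like this (the default selector is used here and the key value is a shortened placeholder):
TXT default._domainkey "v=DKIM1; h=sha256; k=rsa; p=MIIBIjANBgkqh...AB"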
Create SPF record in DNS zone
In your DNS management interface, create a new TXT record like the one below.
TXT @ v=spf1 mx ~all
Some DNS managers require you to wrap the SPF record in quotes, like below.
TXT @ "v=spf1 mx ~all"
Keep in mind that it can take up to an hour for the new record to be available.
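You can check whether the record is visible yet with dig (replace bontekoe.technology, the domain used throughout the examples below, with your own):
dig +short TXT bontekoe.technology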
Configure Postfix for SPF
First, install the required package:
sudo apt install postfix-policyd-spf-python
Edit the Postfix master process configuration file located at /etc/postfix/master.cf. Add these lines to the end:
policyd-spf  unix  -  n  n  -  0  spawn
    user=policyd-spf argv=/usr/bin/policyd-spf
Now open up the configuration file at /etc/postfix/main.cf. Add these lines to the end of the file:
policyd-spf_time_limit = 3600
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    check_policy_service unix:private/policyd-spf
Now restart Postfix:
sudo systemctl restart postfix
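To confirm the policy daemon is active, you can watch the mail log while a message comes in; policyd-spf logs its verdicts there (the log path may differ per distribution):
sudo tail -f /var/log/mail.log | grep -i spf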
Configure DKIM
Install OpenDKIM and its tools:
sudo apt install opendkim opendkim-tools
Add the Postfix user to the OpenDKIM group:
sudo gpasswd -a postfix opendkim
Now open the OpenDKIM configuration file at /etc/opendkim.conf and enable or add these lines:
Canonicalization simple
Mode sv
SubDomains no
AutoRestart yes
AutoRestartRate 10/1M
Background yes
DNSTimeout 5
SignatureAlgorithm rsa-sha256
Go to the end of the file and add these lines:
#OpenDKIM user
# Remember to add user postfix to group opendkim
UserID opendkim
# Map domains in From addresses to keys used to sign messages
KeyTable refile:/etc/opendkim/key.table
SigningTable refile:/etc/opendkim/signing.table
# Hosts to ignore when verifying signatures
ExternalIgnoreList /etc/opendkim/trusted.hosts
# A set of internal hosts whose mail should be signed
InternalHosts /etc/opendkim/trusted.hosts
We will need to create the signing table, key table and the trusted hosts file.
sudo mkdir /etc/opendkim
sudo mkdir /etc/opendkim/keys
sudo chown -R opendkim:opendkim /etc/opendkim
sudo chmod go-rw /etc/opendkim/keys
Now create the signing table, substituting your own domain. Open the file and add the line shown below it:
sudo nano /etc/opendkim/signing.table
*@bontekoe.technology default._domainkey.bontekoe.technology
Now create the key table:
sudo nano /etc/opendkim/key.table
default._domainkey.bontekoe.technology bontekoe.technology:default:/etc/opendkim/keys/bontekoe.technology/default.private
Now create the trusted hosts file:
sudo nano /etc/opendkim/trusted.hosts
127.0.0.1
localhost
*.bontekoe.technology
Generating DKIM Keypair
Create a separate folder for the domain.
sudo mkdir /etc/opendkim/keys/bontekoe.technology
Generate the keys using the opendkim-genkey tool:
sudo opendkim-genkey -b 2048 -d bontekoe.technology -D /etc/opendkim/keys/bontekoe.technology -s default -v
sudo chown opendkim:opendkim /etc/opendkim/keys/bontekoe.technology/default.private
Display the public key that was generated:
sudo cat /etc/opendkim/keys/bontekoe.technology/default.txt
This file contains the entire DNS record that should be published. Copy everything starting with v=DKIM1 and paste it into a new TXT record in your DNS manager. After 15 minutes, test whether the record has been published successfully:
sudo opendkim-testkey -d bontekoe.technology -s default -vvv
Result:
opendkim-testkey: using default configfile /etc/opendkim.conf
opendkim-testkey: checking key 'default._domainkey.bontekoe.technology'
opendkim-testkey: key secure
opendkim-testkey: key OK
Connecting Postfix to OpenDKIM
sudo mkdir /var/spool/postfix/opendkim
sudo chown opendkim:postfix /var/spool/postfix/opendkim
Open the configuration file at /etc/opendkim.conf and replace the Socket setting (or add it if it is not defined):
Socket local:/var/spool/postfix/opendkim/opendkim.sock
Open /etc/postfix/main.cf and add the following to the end:
# Milter configuration
milter_default_action = accept
milter_protocol = 6
smtpd_milters = local:opendkim/opendkim.sock
non_smtpd_milters = $smtpd_milters
Now restart Postfix and OpenDKIM:
sudo systemctl restart opendkim postfix
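To confirm signing works, send a test message and watch the mail log; OpenDKIM typically logs a line like "DKIM-Signature field added (s=default, d=...)":
sudo tail -f /var/log/mail.log | grep -i dkim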
Zabbix Database Partitioning
Due to the growth of our database (> 1 TB), the Zabbix 'housekeeper' no longer worked properly. The best solution to this problem is database partitioning, but with a database of this size that takes a lot of time if you want to keep the data. We tried this in several ways; the approach below was the only one that let us implement partitioning without downtime. Note that the partition_maintenance call assumes you have already installed the stored procedures from one of the commonly used Zabbix partitioning scripts.
The example below must be repeated for each table and takes several hours per table.
# Create temporary partition
CREATE TABLE `history_log_tmp` LIKE `history_log`;
# Apply partitioning
CALL partition_maintenance('zabbix', 'history_log_tmp', 30, 24, 3);
# Rename tables so the new empty table will be used by Zabbix. Leaving the old one as backup
# Rename the tables so the new, empty table is used by Zabbix, keeping the
# old one as a backup. A single multi-table RENAME is atomic; wrapping two
# RENAME statements in BEGIN/COMMIT does not help, because RENAME TABLE
# performs an implicit commit.
RENAME TABLE history_log TO history_backup_log,
             history_log_tmp TO history_log;
# Output all data from backup table to file
SELECT * INTO OUTFILE '/var/lib/mysql-files/history_backup_log.sql' FROM history_backup_log;
# Open MySQL Shell and start import
mysqlsh
shell.connect('localhost:3306')
util.importTable("/var/lib/mysql-files/history_backup_log.sql", {
  schema: "zabbix", table: "history_log",
  columns: ["itemid", "clock", "value", "ns"],
  dialect: "default", skipRows: 0, showProgress: true,
  fieldsOptionallyEnclosed: false, linesTerminatedBy: "\n",
  threads: 2, bytesPerChunk: "50M", maxRate: "10M"
})
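To confirm that the new table is actually partitioned, you can inspect its definition:
SHOW CREATE TABLE zabbix.history_log\G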
Finally, check the table sizes to verify the result:
SELECT
    TABLE_NAME AS `Table`,
    ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024) AS `Size (MB)`
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = "zabbix"
ORDER BY (DATA_LENGTH + INDEX_LENGTH) DESC;
Custom Credential Types in Ansible Tower
Within playbooks you occasionally connect to external applications or services, in my case Zabbix and ServiceNow. Because these connections need login details and I do not want to leave them in plain text in playbooks, I use a 'Custom Credential Type'. The advantage of this is that I can use the login details within a playbook (as variables) and they are stored encrypted in Ansible Tower.
I first create a new credential type by defining the fields it will have and how these will be passed to my playbook. Credential types consist of two parts: "inputs" and "injectors".
- Inputs: define the value types that are used for this credential, such as a username, a password, a token, or any other identifier that is part of the credential.
- Injectors: describe how these credentials are exposed for Ansible (or us) to use; this can be Ansible extra variables, environment variables, or templated file content.
Both of these configurations are specified as YAML or as JSON. In my case, the new credential type is called "ServiceNow" and I'm providing the instance, username and password as part of this credential type:
fields:
  - id: instance
    type: string
    label: ServiceNow Instance
  - id: username
    type: string
    label: ServiceNow Username
  - id: password
    type: string
    label: ServiceNow Password
    secret: true
required:
  - instance
  - username
  - password
Then in the Injector configuration:
extra_vars:
  snow_instance: '{{ instance }}'
  snow_password: '{{ password }}'
  snow_username: '{{ username }}'
Now go to Credentials and add a new one, selecting "ServiceNow" as the Credential Type.
That's it! When you link this credential to your host or playbook, you can use these credentials from within your playbook.
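As an illustration, a minimal task using the injected variables might look like the sketch below (the incident table and payload are just placeholders):
- name: Create an incident in ServiceNow
  uri:
    url: "https://{{ snow_instance }}.service-now.com/api/now/table/incident"
    method: POST
    user: "{{ snow_username }}"
    password: "{{ snow_password }}"
    force_basic_auth: yes
    body_format: json
    body:
      short_description: "Created from Ansible Tower"
  delegate_to: localhost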
Enable ‘Previous Versions’
Anyone who's ever trashed a spreadsheet or accidentally deleted a file will appreciate the 'Previous Versions' function. However, you will only find out that it is not enabled by default when it is already too late.
You can enable Previous Versions by enabling shadow copies at the volume level: Server Manager > Tools > Computer Management > Shared Folders > Configure Shadow Copies > select the volume > Enable. It will take about 15% of your space, so make sure you have enough room.
In my case I want a copy every hour: go to the Advanced Schedule Options interface, select Repeat task, set the frequency to every 1 hour, then select Time and change the time value to 2:58 AM.
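If you prefer the command line over the GUI, the same setup can be sketched with vssadmin and schtasks (the D: volume, task name, and 15% limit are assumptions; adjust to your environment):
vssadmin add shadowstorage /for=D: /on=D: /maxsize=15%
vssadmin create shadow /for=D:
schtasks /create /tn "HourlyShadowCopy" /tr "vssadmin create shadow /for=D:" /sc hourly /st 02:58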
Enable LLDP on Windows Server 2016/2019
The Link Layer Discovery Protocol (LLDP) is a vendor-neutral link layer protocol used by network devices to advertise their identity, capabilities, and neighbors on a local area network based on IEEE 802 technology, principally wired Ethernet. The protocol is formally referred to by the IEEE as Station and Media Access Control Connectivity Discovery, specified in IEEE 802.1AB and IEEE 802.3 section 6 clause 79.
The following installs the DataCenterBridging feature and enables LLDP on all connected Ethernet interfaces:
Enable-WindowsOptionalFeature -Online -FeatureName 'DataCenterBridging'
Get-NetAdapter | Where-Object { $_.Name -like "*Ethernet*" -and $_.Status -eq 'Up' } | ForEach { Enable-NetLldpAgent -NetAdapterName $_.Name -Verbose }
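To verify that the feature was installed, you can check its state:
Get-WindowsOptionalFeature -Online -FeatureName 'DataCenterBridging'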
Clear MySQL Disk Space
When you are running out of disk space, you can purge the MySQL binary logs to free some of it up:
mysql> PURGE BINARY LOGS BEFORE 'yyyy-mm-dd hh:mm:ss';
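For example, first list the current binary logs, then purge everything older than 7 days (adjust the interval to taste):
mysql> SHOW BINARY LOGS;
mysql> PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;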
Sometimes you are already at 99% disk usage and need a more drastic method: removing the log files manually.
systemctl stop mysql
cd /var/log/mysql
# Find the binlog index file, count its entries, and delete the oldest half
a=$(ls | grep -v relay | grep bin.index); b=$(wc -l < "$a"); c=$((b / 2))
head -n "$c" "$a" | cut -d "/" -f2 | xargs rm
# Remove the deleted files from the index so MySQL starts cleanly
sed -i "1,${c}d" "$a"
systemctl start mysql
Enable NTP Server in Windows 2019
The Windows Time service uses the Network Time Protocol (NTP) to help synchronize time across a network. It's as easy as three commands in PowerShell:
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders\NtpServer" -Name "Enabled" -Value 1
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\services\W32Time\Config" -Name "AnnounceFlags" -Value 5
Restart-Service w32Time
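To verify, query the resulting configuration and, from a client, test synchronization against the server (replace your-ntp-server with the server's name):
w32tm /query /configuration
w32tm /stripchart /computer:your-ntp-server /samples:3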
Check NVMe SSD Health with nvme-cli
Make sure nvme-cli is installed:
$ sudo apt install nvme-cli
Check for available NVMe disks:
$ sudo nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 S4EVNFXXXXXXXX9972H Samsung SSD 970 EVO Plus 500GB 1 26,60 GB / 500,11 GB 512 B + 0 B 2B2XXXXXM7
With nvme-cli you can now check the internal temperature, disk usage, power cycles, and much more:
$ sudo nvme smart-log /dev/nvme0
Smart Log for NVME device:nvme0 namespace-id:ffffffff
critical_warning : 0
temperature : 40 C
available_spare : 100%
available_spare_threshold : 10%
percentage_used : 0%
data_units_read : 90935
data_units_written : 119679
host_read_commands : 4491381
host_write_commands : 2370351
controller_busy_time : 8
power_cycles : 34
power_on_hours : 9
unsafe_shutdowns : 1
media_errors : 0
num_err_log_entries : 0
Warning Temperature Time : 0
Critical Composite Temperature Time : 0
Temperature Sensor 1 : 40 C
Temperature Sensor 2 : 38 C
Thermal Management T1 Trans Count : 0
Thermal Management T2 Trans Count : 0
Thermal Management T1 Total Time : 0
Thermal Management T2 Total Time : 0
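To keep an eye on a single value, such as the composite temperature, you can combine the smart-log with watch and grep (a simple sketch):
sudo watch -n 60 "nvme smart-log /dev/nvme0 | grep '^temperature'"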