Automating OS Security Updates
I run a virtual machine used for ad hoc dev testing. My goal is to prevent security vulnerabilities by making sure that security updates are applied automatically. As it's only one server, it would arguably be less tedious to run the updates manually, but I would like a process that lets me practice my Python coding skills.
I initially considered unattended-upgrades, a feature which automatically downloads and applies security updates for your installed packages. It is actually a good option, but unattended-upgrades only applies updates when needed, and I wouldn't get to write any of my own commands.
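For reference, on Debian/Ubuntu systems enabling unattended-upgrades mostly comes down to installing the package and switching on the periodic APT settings; a typical /etc/apt/apt.conf.d/20auto-upgrades looks like this:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

The first line refreshes the package lists daily; the second runs the unattended upgrade itself.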
Cron Jobs
So I looked at Cron jobs, which are tasks scheduled to run automatically by Cron. My Cron job would be a set of commands which Cron would run every day. This lets me write some Python code, which is fantastic!
With Cron, I can also have my script do other tasks such as auto-removing previously installed packages, pruning excess Docker images and sending me alerts via a Mattermost webhook so I know whether my updates ran successfully. I can also have the logs directed into a logfile under /var/log so I can check for errors afterwards. For these reasons, I decided to use Cron.
My Python Script
In my git repo, I created a Python script called server_patch.py which looks like this:
#!/usr/bin/env python
"""
Description: Server patching script.

Usage:
    ./server_patch.py
"""
# standard imports
import os
import sys

# internal imports
from mattersend import MatterMostClient


def run_command(cmd: str) -> None:
    """
    Run a shell command; alert Mattermost and exit on failure.
    """
    try:
        if os.system(cmd) != 0:
            raise RuntimeError(f"Failed to execute {cmd}")
    except RuntimeError as err:
        mclient = MatterMostClient(str(err), section='warning')
        mclient.send_message()
        sys.exit(1)


if __name__ == '__main__':
    # set environment variable first
    os.environ["DEBIAN_FRONTEND"] = 'noninteractive'
    run_command('sudo apt-get update')
    run_command('sudo apt-get upgrade -y')
    run_command('sudo apt-get autoremove -y')  # remove previously installed pkgs
    run_command('echo y | docker system prune')  # prune unused images, containers and networks
    client = MatterMostClient("Successfully patched", section='happy')
    client.send_message()
At the beginning of my script, I needed to import two standard Python modules: the os module (to execute my commands and set my environment variable to non-interactive) and the sys module (to exit the script with a status code of 1 if unsuccessful). I also imported our internal mattersend module (where our MatterMostClient class and send_message function live – see our previous blog if you’d like to know how we created our Mattermost webhooks).
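Our mattersend module is internal, so as a rough illustration only, here is a minimal sketch of what such a client might look like using just the standard library. The webhook URL, the EMOJI mapping and the class internals are all assumptions for this sketch, not our real implementation:

```python
import json
import urllib.request

# Hypothetical stand-in for the internal mattersend module: builds a
# Mattermost incoming-webhook payload and POSTs it as JSON.
WEBHOOK_URL = "https://mattermost.example.com/hooks/your-hook-id"  # placeholder

EMOJI = {"happy": ":white_check_mark:", "warning": ":warning:"}


class MatterMostClient:
    def __init__(self, message: str, section: str = "happy") -> None:
        self.message = message
        self.section = section

    def build_payload(self) -> dict:
        # Mattermost incoming webhooks accept a JSON body with a "text" field
        return {"text": f"{EMOJI.get(self.section, '')} {self.message}".strip()}

    def send_message(self) -> None:
        data = json.dumps(self.build_payload()).encode("utf-8")
        req = urllib.request.Request(
            WEBHOOK_URL, data=data, headers={"Content-Type": "application/json"}
        )
        urllib.request.urlopen(req)  # raises on HTTP errors
```

The section argument just picks an emoji prefix so success and warning messages are easy to tell apart in the channel.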
I then created a function called run_command() which runs each of my commands. If the exit status of a command is not 0, i.e. if the command was unsuccessful, it catches the error as an exception, sends a message to our Mattermost channel and exits the script.
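os.system works fine here, but for anyone who wants direct access to the exit code and captured output, the standard-library subprocess module offers an equivalent check. This is a sketch of an alternative runner, not the version in my script (the Mattermost alert is replaced with a print):

```python
import subprocess
import sys


def run_command(cmd: str) -> None:
    # shell=True keeps pipes like "echo y | docker system prune" working;
    # returncode is the command's real exit status.
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if result.returncode != 0:
        # In the real script, this is where the Mattermost alert is sent
        print(f"Failed to execute {cmd}: {result.stderr}", file=sys.stderr)
        sys.exit(1)
```

The main gain over os.system is that stderr is captured, so the alert can include the actual error text rather than just the command that failed.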
The if __name__ == "__main__": block makes sure that when my script is run directly, my shell environment variable is first set to non-interactive (because my shell will be running commands from a script, not from user input). Then my run_command() function is called to run each of my commands one at a time.
Finally, if all goes well with my OS patch, I will receive a ‘Successfully patched’ message in Mattermost.
My Cron Job
In the same git repo, I created 2 directories to manage the task schedule and the rotation of my patch logs.
The cron.d file
I firstly created the /etc directory where my cron.d file will be located. I then created my cron.d file called patch_cron – this is where my Python script’s running schedule is defined, as below:
SHELL=/bin/bash
TZ=Europe/London
# Daily security updates
# m h dom mon dow user command
0 9 * * * root cd /opt/Git/example_git_repo && pipenv run server_patch.py >> /var/log/server_patch.log
My cron.d file will automatically run the command under root user permissions every day at 9am by:
- firstly changing into my git repo directory (example_git_repo)
- running my Python patch script using the pipenv shell
- and directing the patching logs to a file I named server_patch.log within my /var/log/ directory for me to refer back to in case of errors.
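As a quick sanity check on the schedule, the fields of a cron.d line can be picked apart in Python. This is a toy parser for this specific line format, not a general cron implementation:

```python
# Toy check of a cron.d entry's fields: minute, hour, day-of-month, month,
# day-of-week, then (specifically in cron.d files) the user and the command.
def parse_crond_line(line: str) -> dict:
    minute, hour, dom, mon, dow, user, command = line.split(None, 6)
    return {
        "minute": minute, "hour": hour, "day_of_month": dom,
        "month": mon, "day_of_week": dow, "user": user, "command": command,
    }


entry = parse_crond_line(
    "0 9 * * * root cd /opt/Git/example_git_repo && "
    "pipenv run server_patch.py >> /var/log/server_patch.log"
)
print(entry["hour"], entry["user"])  # → 9 root
```

Splitting at most six times keeps the whole command (including the && and the redirect) together as the final field.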
The logrotate.d file
My patch script will run daily, so my log file will steadily fill with entries. I would like a system that rotates the log files so I don’t have to worry about doing this manually, which is where Logrotate comes in.
I created my /logrotate directory in the same repo, and within this directory I created a logrotate.d file (called patch_rotate) to define how I want the log files to be managed. Here’s what I came up with:
/var/log/server_patch.log {
    rotate 12
    monthly
    compress
    missingok
    notifempty
}
- The patch_rotate file starts off by defining which file I want Logrotate to manage and rotate, which is /var/log/server_patch.log (as defined in my cron.d file above)
- rotate 12: keep at most 12 rotated copies; when a 13th would be created, the oldest is deleted
- monthly: rotate the file once a month
- compress: compress rotated log files (with gzip by default) to save space
- missingok: it’s OK if the log file is missing; don’t raise an error
- notifempty: don’t rotate the log if it is empty
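To make the effect concrete, this toy snippet mimics what a single rotation with compress does to the file names. It is a simplified simulation for illustration, not logrotate itself:

```python
import gzip
import os
import tempfile


# Simulate one rotation with "compress": the live log is archived as
# server_patch.log.1.gz and the live log is truncated to empty.
def rotate_once(log_path: str) -> str:
    rotated = log_path + ".1.gz"
    with open(log_path, "rb") as src, gzip.open(rotated, "wb") as dst:
        dst.write(src.read())
    open(log_path, "w").close()  # truncate the live log
    return rotated


workdir = tempfile.mkdtemp()
log = os.path.join(workdir, "server_patch.log")
with open(log, "w") as f:
    f.write("patched OK\n")

archive = rotate_once(log)
print(os.path.basename(archive))  # → server_patch.log.1.gz
```

Real logrotate also shuffles older archives along (.1.gz becomes .2.gz and so on) until rotate 12 copies exist.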
That’s it, now my patch script, patch schedule and log rotation files are created! Once pushed to git, I get a little ping in Mattermost every day at 9am letting me know whether my patch was successful, which is great.
For continuous integration, I also created a unit test to run via GitHub Actions, which is useful for making sure I follow coding best practices each time I make code changes to my repo.
I hope you found this useful, particularly if you are new to security update automation, and feel free to drop me a comment if you would like to learn more about any of the steps I took.