Intro to Systemd

I have a rocky relationship with Systemd.  On the one hand I love how powerful and extensive it is.  On the other hand, I hate how cumbersome and clunky it can sometimes feel.  There are a TON of moving components and it is very confusing to use if you have no experience with it.  The aim of this post is NOT to debate the relative merits of Systemd but instead to take users through a few basic examples of how to accomplish tasks with Systemd and get familiar with how to manage systems with this framework.

My background is primarily with Debian/Ubuntu so moving over to this init system has been a learning curve.  The problem I had when I first made the transition was that there weren't a lot of great resources for making the change.

Most of my knowledge has been pieced together from various blog posts and sites, the best of which is over at the Arch wiki.  As Systemd continues to mature it is becoming easier to find good guides and resources, but the Arch wiki is still the de facto, go-to place for learning how to use Systemd.

Another thing that has helped is forcing myself to use CoreOS, which, for better or worse, has meant learning how to use Systemd.  It seems that most modern Linux distros are moving towards Systemd, so it's probably worth it to at least begin experimenting with it at this point.  Ubuntu 15.04 as well as Debian 8 have both made the jump, along with the Red Hat-based distros, with more to follow.

While the learning curve can be a little steep to start with, you can learn about 80% of the things that Systemd can do with about 20% of the effort.  Getting the basics down takes a little bit of effort but will be more than enough to manage a system.  Before starting, as a disclaimer, I do not claim to be an expert but I have learned the hard way how things work and sometimes don't work with Systemd.

Basics of Systemd

So now that we have a little bit of background stuff out of the way, we can look at some of the meat and potatoes of Systemd.  The first thing to do is get an idea of what services are running on your system.

systemctl

This will give you a listing of every unit loaded on your system, which is usually a very large number.  Most of the time you don't need all of that; say, for example, you only care about a failed unit.  You can filter systemctl to only spit out failed units.

systemctl --failed

That's pretty cool.  If, for example, there is a failed unit you can drill down into that unit specifically.

systemctl status <unit>

This will give you process and service information for the unit file and the last 10 lines of logs for the unit as well, which can be really handy for troubleshooting.

One of the easier ways to work with unit files is to use the edit subcommand.  This method removes much of the complexity of having to know exactly where all of the unit files live on the system.

systemctl edit --full <unit>

You can manage and interact with Systemd and the unit files in a straightforward way.  We will examine a few examples.

systemctl start <unit> - start a stopped service
systemctl stop <unit> - stop a running service
systemctl restart <unit> - restart a service
systemctl disable <unit> - stop unit from loading on boot
systemctl enable <unit>  - load unit on boot

If you make a change to a service or unit file (we will go over this later) you need to reload the Systemd daemon for it to pick up the changes to the file.

systemctl daemon-reload

This is super important (on CoreOS at least) when troubleshooting because if you don’t run the reload your process will not update and will not change its behavior.

There are many, many more tips and tricks for working with systemctl but this should give users a basic grasp of how to interact with components of the system.  Again, for lots more details I advise that you check out the Arch wiki.

Journalctl

If you are coming from older Debian/Ubuntu distros you will need to get familiar with using journalctl to view and manipulate your log files.

One of the first changes that people notice right away is that the beloved plain-text log files in /var/log aren't there any more with Systemd.  Well, there are still logs, they just aren't in the form most folks are used to.

The logs haven't gone away, they are simply stored in a binary format by journald and are accessed with journalctl.  To check how much space your logs are taking up on disk you can use the following command.

journalctl --disk-usage

Journalctl is a powerful tool for doing just about everything you can think of with log files.  Because journald writes logs as a binary file you need this tool to interact with them.  That's okay though because journalctl has pretty much all of the features needed to look through logs with its built-in flags.  For example, if you want to tail a log, you can use the "-f" flag to follow it and the "-u" flag to specify which Systemd unit to follow, as illustrated below (this is a CoreOS system so the unit name may vary on a different OS).

journalctl -f -u motdgen

To follow everything that journalctl is picking up (roughly the equivalent of tailing all the logs at once) you can just use this.

journalctl -f

Sometimes you only want to view the last X entries for a unit.  The "-n" flag limits the output to that many of the most recent entries.

journalctl -n 100 -u motdgen

You can filter logs to only show messages since the last boot.

journalctl -b

There are many more options and the filtering capabilities of journalctl are very powerful.  Additionally, journalctl offers JSON output, filtering by log priority, granular filtering by unit and time with the --unit and --since flags, filtering on different boots, and more.
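
For example, something like the following combines a few of those filters; the unit name and date here are purely illustrative.

journalctl -u motdgen --since "2015-07-01" -p err -o json-pretty
journalctl -b -1 (messages from the previous boot only)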

I suggest exploring on your own a little bit to see how far the capabilities can go.  Here is a link to all of the various flags that journalctl has as well as a good writeup to get a good idea of some of the more advanced capabilities of journalctl.

Drop-in Units

Writing unit files is somewhat of an art form.  It can quickly become a complicated topic, so I won't discuss it in full detail here.

By default, Systemd unit files are written to /usr/lib/systemd/system and custom-defined unit files are written to /etc/systemd/system.  Systemd looks in both locations when deciding which files to load.  Because of this, you can extend base units that live in the /usr/lib path by augmenting them in /etc/systemd.

This has a few implications.  First, it makes management of unit files a little easier.  It also makes extending units easier: you don't need to worry about manipulating the system-level units, you just add functionality on top of them in /etc/systemd.

Drop-in units are a handy feature if you need to extend a basic unit.  To make the drop-in work, the file must live at a path of the form /etc/systemd/system/<unit>.d/<something>.conf, where <unit> is the full name of the unit you are extending (etcd2.service in the example below).

The following example extends the etcd2 service that is baked in to CoreOS.

/etc/systemd/system/etcd2.service.d/30-configuration.conf

[Service]
# General settings
Environment=ETCD_DEBUG=true
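
To confirm that Systemd actually picked up a drop-in you can print the fully merged unit, which shows the base unit file followed by every drop-in layered on top of it.

systemctl cat etcd2.service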

Replacing Cron

Well, not exactly.  Systemd doesn't use cron jobs, so if you want something to run on a schedule you will need to use a timer unit and associate a service with it.

[Unit]
Description=Log file cleaner (runs daily at midnight)

[Timer]
OnCalendar=daily

Notice that there is a log-cleaner.timer as well as a log-cleaner.service (below).  The easiest way I have found is to create a service/timer pair, which ensures that the timer runs the service based on its schedule.

[Unit]
Description=Log file cleaner

[Service]
ExecStart=/usr/bin/bash -c "truncate -s 1m /data/*.log"

The only thing the service does is run a shell command, and the timer unit decides when that command gets run.
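
To wire the pair together, reload Systemd and then start the timer unit (not the service); the unit names here assume the two files above are saved as log-cleaner.timer and log-cleaner.service.

sudo systemctl daemon-reload
sudo systemctl start log-cleaner.timer
systemctl list-timers (shows the next scheduled run)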

The timer directive in the timer unit is pretty flexible.  For example, you can tell the timer to run based on the calendar (by day, week or month) or, alternatively, you can have the unit run services relative to system events, measured in seconds, minutes or hours.
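
As a sketch of those variations, the [Timer] section could just as easily look like either of the following; the values are only examples.

[Timer]
# Calendar based: run once a week
OnCalendar=weekly

[Timer]
# Monotonic: run 15 minutes after boot and every hour thereafter
OnBootSec=15min
OnUnitActiveSec=1h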

One of the downsides of timers is that there is no easy way to email results, which is easily configurable with cron.  Likewise, timers don't have the kind of random job delay that cron setups often use.  So timers are definitely not a drop-in replacement for cron, but they have a lot of usable functionality; it is just confusing to learn at first, especially if you are used to cron.

Here is a more involved tutorial for working with timers.  Also, the Arch wiki once again is great and has a dedicated section on timers.

Conclusion

This introduction is really just the beginning.  There are many more pieces to the Systemd puzzle; I have only covered the basics.  In fact, I have deliberately chosen not to cover many topics and concepts because they can convolute the learning process and slow things down.  I chose to focus instead on the most basic ideas that are used most frequently in day-to-day administration.  Feel free to do your own research and play around with the more advanced topics.

Systemd aims to solve a lot of problems but brings a lot of complexity along with it.  In my opinion it is definitely worth investing the time and energy to learn this system and service manager; pretty much all of the major Linux distros are moving towards it and you would be doing yourself a disservice by not learning it.

Other areas of Systemd that aren’t covered in this post but are worth researching and playing with are:

  • Targets (analogous to runlevels)
  • Managing NTP with timedatectl
  • Network management with networkd
  • Hostname management with hostnamectl
  • Login and session management with loginctl

One of the easiest ways to get started with Systemd is by grabbing a Vagrant box that leverages it.  I like the CoreOS Vagrant box but there are many other options for getting started.  One of the best ways to learn is by doing, so take some of these commands and resources and go start playing around with Systemd.  Feel free to reach out if you have any additions or need help getting started with Systemd.


CoreOS Tips and Tricks

One thing that was never clear to me when I started learning CoreOS was how to rapidly test out different CoreOS features.  I will spend some time walking folks through a few tips and tricks that I have picked up along the way.

The folks at CoreOS have an awesome repo for testing out features locally, called coreos-vagrant.  If you haven't heard of it or used it, go check it out.  Another great resource for getting started with the CoreOS Vagrant project is the documentation on the CoreOS website; you should be able to find most of the use cases there.

So in this post I will be going over some of what is already detailed in the docs and README but will additionally fill readers in with a few extra tips and tricks I have discovered so far along the way.  I am surprised by all of the hidden secrets I frequently discover buried in CoreOS and its documentation.  It is always fun to find new features and capabilities of the OS that you didn’t know existed.

Vagrant

So to get started we need to briefly cover Vagrant.  Vagrant has made testing so much easier.  If you haven't heard of Vagrant, definitely go check it out and get familiar with it.  It is basically an interface for controlling VMs and their various components locally.

When I first started testing things out with CoreOS I would spin boxes up in either Digital Ocean or AWS with a cloud-config that I would agonize over because I was afraid of screwing up small details or provisioning the server incorrectly.  Provisioning a cloud server is also more of a hassle because it involves extra authentication keys for command line tools or manually creating instances via GUI tools.  When testing, you often destroy and recreate instances, so that additional overhead quickly becomes tedious.

Using Vagrant I can quickly and easily make changes to a configuration or even test out entirely different CoreOS versions in minutes and not care about getting small details wrong since a) it doesn’t cost me anything extra to run the instance locally and b) I can blow out and reprovision in a few seconds.

I think this local Vagrant approach also makes you a better CoreOS citizen: because you iterate more frequently, you look at what you're doing, fix issues sooner, and end up testing more of CoreOS's features and options (at least that has been my experience so far).

cloud-config

Cloud-config was initially a painful part of the learning process for me but I have grown to love it.  I like to test out new cloud-configs quite a bit and at first it was frustrating to screw up configs because that meant I had to redo the entire server bootstrap process in DO or AWS.  Terraform makes the provisioning process less painful but it is still a little bit of a hassle, especially when you are smoke testing configs to make sure they work.

Luckily it is dead simple to set up cloud-configs using Vagrant locally.  The repo comes with a "user-data.sample" file that you can copy to "user-data", and away you go: make any modifications you need or config changes you want to test out.  Discovering local cloud-config testing alone was a game changer for me.
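
The whole workflow boils down to a few commands; this is just a sketch assuming the stock coreos-vagrant repo layout (the core-01 machine name is the default for a single instance).

git clone https://github.com/coreos/coreos-vagrant.git
cd coreos-vagrant
cp user-data.sample user-data
cp config.rb.sample config.rb (optional, covered below)
vagrant up
vagrant ssh core-01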

To fix a problem with your cloud-config you can simply edit the user-data file that was copied into place on the server and then rerun cloud-init to fix the provisioning.  Below is an example of how to do this cloud-init provisioning.

Before you provision any of your cloud-configs though, I recommend running them through the CoreOS cloud-config validator tool to help identify any potential problems before you even run them.  There is also an experimental validation flag in the cloud-init binary shipped with the OS if you want to try it out.  Most of the time I find it just as easy to copy the config into the online checker, but there are definitely scenarios and use cases where it might be a good idea to validate locally; I just haven't needed to yet.
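
If you do want to try that experimental local validation, it looks roughly like the following; treat the exact flag name as an assumption and check the binary's help output on your release.

sudo /usr/bin/coreos-cloudinit -validate --from-file /path/to/user-data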

Next, if you have an existing config on your server and would like to modify it and reprovision the server with the updated cloud-config, without destroying and recreating the server, use the following.

sudo /usr/bin/coreos-cloudinit --from-file /path/to/user-data

Then you can watch the logs to make sure they are doing what you expect.

journalctl -b _EXE=/usr/bin/coreos-cloudinit

If you don't want to muck around with the cloud-config on the server you can easily blow away the server, modify the user-data file on the host and just reprovision the Vagrant machine.  Obviously this method takes a little longer, but it isn't a significant penalty and it is also easier to keep track of, since you know exactly what user-data values are being passed into the Vagrant machine from the host and can more easily stay on top of the changes you are making.
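
In practice that full cycle is just a couple of Vagrant commands run from the coreos-vagrant directory.

vagrant destroy -f
vagrant up (after editing user-data on the host)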

config.rb

The config.rb file gives you a great deal of flexibility when testing CoreOS out locally.  For example, you can control most options that CoreOS gets provisioned with, including the release version with this,

$image_version = "723.1.0"

You can specify any version inside the quotes to bootstrap the CoreOS instance.  This is handy for testing out new alpha features or checking whether something that is broken in one release has been fixed in another, since you can quickly roll the version back or forward.

In the config.rb file you can also specify server level details for things like the hostname,

$instance_name_prefix="core"

How many instances to provision,

$num_instances=1

Custom memory or cpu’s for the instance,

$vm_memory = 1024
$vm_cpus = 1

You can also set up shared folders, forwarded ports, etc.  Granted, these are Vagrant-level configurations, but they still make working with CoreOS much easier in my opinion.
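
For reference, those last two look something like this in config.rb; the variable names come from the coreos-vagrant sample file, and the path and ports here are placeholders (check the comments in config.rb.sample for the exact formats).

$shared_folders = {'/path/on/host' => '/path/on/guest'}
$forwarded_ports = {80 => 8080}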

Additionally, there is an option to provision the instance with an etcd/etcd2 discovery token to bootstrap etcd when the server gets created.  If you have ever dealt with testing out etcd, this is a handy way to quickly bring servers up and down without having to worry about reissuing discovery tokens, etc.
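
The usual pattern is to grab a fresh discovery URL and then reference it in the user-data before bringing the machines up; the size parameter is just the expected cluster size.

curl -w "\n" "https://discovery.etcd.io/new?size=3"

coreos:
  etcd2:
    discovery: https://discovery.etcd.io/<token>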

Tips and Tricks

I have found a few other tips and tricks along the way that can be used when testing CoreOS locally or after it has been deployed.

The first tip is getting the OS to update manually (without reprovisioning via Vagrant).  For most testing purposes I usually turn off automatic reboots using the following key in my cloud-configs.

coreos:
  update:
    group: alpha
    reboot-strategy: off

This will tell CoreOS to try to use the latest alpha (if a version is not specified in your config.rb) and tell CoreOS to not reboot.

Sometimes it is easier to just manually update the OS than to destroy the VM and specify a new version.  To update manually you can run the following commands.

update_engine_client -check_for_update
journalctl -f (this will follow the update progress)
sudo reboot (after the updated version is downloaded)

After you see that the newest release has been downloaded you can reboot the server and it should boot up with the newest updates.

Another cool trick is to customize the toolbox on CoreOS.  I’ve written about this before but figured I might as well mention it again since it is a useful trick.

By default the toolbox runs Fedora, but we are mainly an Ubuntu/Debian shop so we are much more comfortable using the tools bundled with those distros.  It is pretty simple to configure the toolbox to automatically use Ubuntu when the instance is provisioned, using the following section in your cloud-config.

write_files:
  - path: /home/core/.toolboxrc
    owner: core
    content: |
      TOOLBOX_DOCKER_IMAGE=ubuntu
      TOOLBOX_DOCKER_TAG=14.04

When you run the “toolbox” command it will look for Ubuntu instead of the default Fedora image.

Another trick I have used a few times is overriding the update strategy on a server that has already been provisioned using environment variables.

As I have discovered, much of this configuration happens via environment variables.  So to update the reboot strategy you can modify the /etc/coreos/update.conf file.  The contents should look something like this:

GROUP=beta
REBOOT_STRATEGY=off

If you'd like the server to use alpha images, change the key to GROUP=alpha, and so on for the other keys in the configuration.  After making your changes, you will need to restart the update-engine service.

sudo systemctl restart update-engine

The system should pick up the changes you made and you should be good to go.

The last trick I will highlight in this post is how to get "drop-in" services working.  This is a core part of how systemd (especially on CoreOS) works, but few people seem to realize it is there.  By creating a drop-in you are simply extending a service to read in extra bits of configuration.  For example, the following drop-in extends the system etcd2 service.

Create the following file,

/etc/systemd/system/etcd2.service.d/30-configuration.conf

The etcd2 service will look in this location for its extra configuration.

[Service]
# General settings
Environment=ETCD_NAME=etcd-config
Environment=ETCD_VERBOSE=1

Inside the unit file we are just setting some extra environment variables that etcd2 can then use as flags to instruct it how to run.
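
After dropping the file in place you still need to reload Systemd and bounce the service for the new environment variables to take effect.

sudo systemctl daemon-reload
sudo systemctl restart etcd2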

There are obviously a lot more CoreOS tricks; I have just highlighted a few of my favorites here.  I suggest looking at the CoreOS docs, as there is a lot of good information over there.  Feel free to comment with your own tricks and I will be sure to try them out and get them added here.


Test Kitchen style testing for Salt

If you are already familiar with Test Kitchen then a lot of this guide should be straightforward.  ChefDK has most of the needed tools bundled up for you already, so I recommend installing ChefDK and then extending it to work with Salt.

In addition to the Test Kitchen dependencies, you will need to install the following gems to get Test Kitchen working with Salt (see the install sketch after the list):

  • kitchen-vagrant
  • kitchen-salt
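
With ChefDK installed, pulling those two gems into its embedded Ruby looks roughly like this.

chef gem install kitchen-vagrant kitchen-salt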

Then create a “.kitchen.yml” file in your /srv/salt directory.  This file tells Test Kitchen how to load in its configuration so it can test out your Salt configurations.

Here is a sample of what your .kitchen.yml file might look like.

---
driver:
  name: vagrant

provisioner:
  name: salt_solo
  is_file_root: True
  pillars-from-files:
    base.sls: /srv/pillar/base.sls
  pillars:
    top.sls:
      base:
        '*':
          - base

platforms:
  - name: ubuntu-14.04

suites:
  - name: default

There is a good reference that describes the various options in the kitchen-salt docs.

I had to play around with this config to get things working correctly, so you may need to make your own adjustments.  The key components are described in the "provisioner" section.  "is_file_root" is important because it tells the minion where to look for its configuration; it essentially says to use the top.sls file on the machine that runs Test Kitchen.

Use "pillars-from-files" to manually add in any custom pillar data you have.  I had issues getting the default configuration to automatically add in pillar data, so I used this approach as a workaround.

Another caveat to mention here is that in order to get this method working I had to break the best practice of storing external Salt formulas in /srv/formulas and instead copy them directly into the "root" directory of /srv/salt.  So basically all of the logic and formulas will live in this base location.  If this point isn't clear let me know and I can post more details.
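
As a purely hypothetical example, flattening an external formula into the Salt root is just a copy; the nginx-formula name here is only for illustration.

cp -r /srv/formulas/nginx-formula/nginx /srv/salt/nginx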

Vagrant style testing

The next best alternative I have found to using the Salt driver for Test Kitchen is manually spinning up a customized Vagrant box, either to test communication with the salt master or, alternatively, to run commands over salt-ssh.

This method is a great complement if you aren't interested in running Salt in local mode and would rather learn about and test the salt-master and/or salt-ssh.  This method is also straightforward.

Here is what the custom config looks like for Vagrant.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|

 # OS config
 config.vm.box = "ubuntu/trusty64"
 config.vm.hostname = "salt-minion"
 config.vm.network :private_network, ip: "192.168.33.10"

 # Copy Salt master files for masterless provisioning
 config.vm.synced_folder "/srv/salt/", "/srv/salt/"

 # Install/config Salt
 config.vm.provision :salt do |salt|
 salt.minion_config = "/etc/salt/minion"
 salt.run_highstate = false
 salt.install_type = "daily"
 salt.colorize = true

 # For remote master preseeding
 salt.minion_key = "salt-minion.pem"
 salt.minion_pub = "salt-minion.pub"

 # Debugging
 #salt.bootstrap_options = "-D"
 #salt.verbose = true

 end

 # Additional configuration
 config.vm.provision "shell", inline: "echo '192.168.1.170 salt' >> /etc/hosts"
 config.vm.provision "shell", inline: "apt-get install salt-ssh"

end

This config will do a few different things:

  • Configure a static address to make some testing easier
  • Dummy a host entry for your salt master
  • Bootstrap the salt installation
  • Copy over a centrally managed minion file (if you want to customize how the minion behaves)
  • Install salt-ssh if you want to play around with ssh functionality

Note:  To use salt-ssh you will need to create an entry in /etc/salt/roster for the Vagrant machine and set up credentials to connect.  All of the configuration options can be found in the Vagrant docs.  Obviously much more can be done in Vagrant but you will have to test the various options yourself to see what suits your needs.
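
For reference, a roster entry for the Vagrant box above might look something like this; the IP comes from the Vagrantfile and the vagrant/vagrant credentials are an assumption based on Vagrant's defaults.

salt-minion:
  host: 192.168.33.10
  user: vagrant
  passwd: vagrant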

To check the current Salt keys, run the following command on the master.  It should not list our minion yet since we haven't created the keys.

sudo salt-key -L

So with this configuration we are generating a key once and reusing it, which means we only need to accept the key on the Salt master once.  To generate the needed keys, run the following command from the root of the Vagrant project directory.

sudo salt-key --gen-keys=salt-minion

Then to add the new entry on the Master (after bringing up the Vagrant box!):

sudo salt-key -a 'salt-minion'

Once this set of keys has been accepted, we can bring the minion VM up and down without having to worry about adding and deleting keys every time we need to test something.  Obviously this approach should not be carried beyond testing environments into production.

Lastly, use this command to delete an old minion:

sudo salt-key -d salt-minion

Conclusion

Being new to Salt, I found the combination of the custom Vagrant box and the Test Kitchen provisioner a great way to learn Salt and to test Salt configurations.  The best part about this method is that there is no additional work to get the two approaches to work together.  For example, once you have your directory structure set up correctly on the host system (the master config) you already have everything ready to go for the Test Kitchen method as well as the Vagrant box method of testing.

I have found the combination to be very useful in my own learning of Salt so far.  Obviously this won't address all of the complexity of a real deployment, but it is a great and easy way to get introduced to many of the concepts and ideas of Salt.

I am really enjoying Salt so far and I hope that readers can put some of my findings to use in their own learning as well.


Quicktip: Manage Memory Usage with Supervisord

I have been using Supervisord for process management for quite a while now but had no idea it could manage memory usage (among other things) until just recently.

There is a Python project called Superlance which essentially adds some extra functionality to supervisord for managing processes and memory.  The docs are a little thin, so I thought it would be a good idea to highlight some of the functionality for folks who just want a few examples of how it works and how it can be used.

Obviously you will want to have supervisor installed and configured already.  That can be done with pip or via apt-get.  You will also need to make sure you have a proper [unix_http_server] section in your /etc/supervisor/supervisord.conf file.
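
For reference, that section typically looks like the stock Ubuntu configuration below; adjust the socket path if yours differs.

[unix_http_server]
file=/var/run/supervisor.sock
chmod=0700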

To install Superlance (on Ubuntu 14.04).

sudo pip install superlance

This will download and install a handful of Python scripts that can then be plugged into Supervisor.  Check the link above if you are interested in the other plugins.

Then you will need to add a section to your supervisor config for memmon to manage memory usage.

[eventlistener:memmon]
command=memmon -p <program_name>=3GB
events=TICK_60

The "-p <program_name>" flag corresponds to the program section header in your supervisor configuration.  There are other options available to manage process groups, etc. for more advanced use cases, but this should cover most basic scenarios.
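
For example, if the program being watched is defined like this (the myapp name and command are purely illustrative),

[program:myapp]
command=/usr/bin/myapp

then the matching event listener entry would be command=memmon -p myapp=3GB.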

You will need to reload the supervisor configuration after your changes have been made.  Unfortunately the supervisor process needs to be fully reloaded.

sudo supervisorctl reload

If you want to check that the memmon script is available before restarting supervisor you can use reread.

sudo supervisorctl reread

I would suggest reading through the Superlance docs and checking out the other scripts.  These plugins add another layer of functionality to supervisord that I didn't know existed.


Change CoreOS default toolbox

This is a little trick that allows you to override the default base OS in the CoreOS "toolbox".  The toolbox is a neat feature that lets you debug and troubleshoot issues inside containers on CoreOS without having to do any of the work of setting up a container yourself.

The toolbox defaults to Fedora, which we're going to change to Ubuntu.  The toolbox reads its custom configuration from the .toolboxrc file, located at /home/core/.toolboxrc by default.  To keep things simple we will only change the few pieces of the config needed to get the toolbox to behave how we want.  More can be changed, but we don't really need to override anything else.

TOOLBOX_DOCKER_IMAGE=ubuntu
TOOLBOX_DOCKER_TAG=14.04

That’s pretty cool, but what if we want to have this config file be in place for all servers?  We don’t want to have to manually write this config file for every server we log in to.

To fix this issue we will add a simple configuration to the user-data file that gets fed into the CoreOS cloud-config when the server is created.  You can find more information about the CoreOS cloud-configs here.

The bit in the cloud config that needs to change is the following.

write_files:
  - path: /home/core/.toolboxrc
    owner: core
    content: |
      TOOLBOX_DOCKER_IMAGE=ubuntu
      TOOLBOX_DOCKER_TAG=14.04

If you are already using cloud-config then this change should be easy: just add the "- path" entry to your existing "write_files" section.  New servers using this config will have the desired toolbox defaults.

This approach gives us an automated, reproducible way to clone our custom toolbox config to every server that uses cloud-config to bootstrap itself.  Once the config is in place simply run the “toolbox” command and it should use the custom values to pull the desired Ubuntu image.

Then you can run your Ubuntu commands and debugging tools from within the toolbox.  Everything else will be the same; we just use Ubuntu now as our default toolbox OS.  Here is the post that gave me the idea to do this originally.
