Sometimes when provisioning a server you may want to configure and provision storage as part of the bootstrapping and boot process. For example, the other day I ran into an issue where I needed to define a disk, partition it, mount it to a specific location and then create a few directories on it. It turned out to be surprisingly tricky to provision this storage, and I learned quite a few things along the way that I thought were worth sharing.
I’d just like to mention that Ignition works like magic. If you aren’t familiar with it, Ignition is basically a tool to help provision and configure servers, very similar to cloud-config except that by default Ignition only runs once, on first boot. The magic of Ignition is that it injects itself into the initramfs and runs before the OS ever boots, manipulating the system from there. Ignition configs can also be read in from a remote URL, which makes it easy to use in bare metal infrastructures. There were several pieces to this puzzle.
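As a rough sketch of what that remote URL behavior looks like through the Terraform provider (assuming a provider version that supports the “replace” block), an “ignition_config” can simply point at a config hosted somewhere else. The data source name and URL below are placeholders.

data "ignition_config" "remote" {
  # Placeholder URL; point this at wherever the real Ignition config is hosted.
  replace {
    source = "https://example.com/ignition/host.ign"
  }
}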
The first was getting all of the various Ignition configuration components nailed down in Terraform. Nothing was particularly complicated; there was just a lot of trial and error to get everything working. Terraform has some really nice documentation for working with Ignition configurations, so I’d recommend starting there and playing around to figure out the various bits and pieces of configuration that Ignition can handle. There is some documentation on Ignition troubleshooting as well, which I found helpful when things weren’t working correctly.
Below, each portion of the Ignition configuration gets declared inside of an “ignition_config” block. The Ignition configuration then points to each individual component that we want Ignition to configure, e.g. systemd units, filesystems, directories, etc.
data "ignition_config" "staging_rancher_host_stateful" { systemd = [ "${data.ignition_systemd_unit.mount_data.id}", ] filesystems = [ "${data.ignition_filesystem.data_fs.id}", ] directories = [ "${data.ignition_directory.data_dir.id}", ] disks = [ "${data.ignition_disk.data_disk.id}", ] }
This part of the setup is pretty straightforward. Create a data block for each piece of Ignition configuration: partition the disk, format the device if it hasn’t already been formatted, and create the desired directory at the mount location. Then create the Systemd unit that configures the mount point for the OS. Here’s what each of the data blocks might look like.
data "ignition_filesystem" "data_fs" { name = "data" mount { device = "/dev/xvdb1" format = "ext4" } } data "ignition_directory" "data_dir" { filesystem = "data" path = "/data" uid = 500 gid = 500 } data "ignition_disk" "data_disk" { device = "/dev/xvdb" partition { number = 1 start = 0 size = 0 } }
Next, create the Systemd unit.
data "ignition_systemd_unit" "mount_data" { content = "${file("./data.mount")}" name = "data.mount" }
Another challenge was getting the Systemd unit to mount the disk correctly. I don’t work with Systemd frequently, so I initially had some trouble figuring this part out. Basically, Systemd expects the mount unit’s name to EXACTLY match the path declared in the “Where” clause of the unit definition.

For example, the following unit needs to be named data.mount because /data is the path it mounts.
[Unit]
Description=Mount /data
Before=local-fs.target

[Mount]
What=/dev/xvdb1
Where=/data
Type=ext4

[Install]
WantedBy=local-fs.target
After all the kinks have been worked out of the Systemd unit(s) and the Terraform Ignition configuration above, you should be able to deploy this and have Ignition provision disks for you automatically when the OS comes up. This can be extended as much as needed to get initial disks set up correctly and is a huge step toward automating your infrastructure in a nice, repeatable way.
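As a rough sketch of the deployment side (assuming AWS, which the EBS volumes mentioned below imply), the rendered Ignition JSON can be handed to an instance as user data. The resource name, AMI, and instance type here are placeholders.

resource "aws_instance" "staging_rancher_host" {
  # Placeholder AMI and instance type; use an Ignition-aware image (e.g. Container Linux).
  ami           = "ami-xxxxxxxx"
  instance_type = "t2.medium"

  # Hand the rendered Ignition JSON to the instance on first boot.
  user_data = "${data.ignition_config.staging_rancher_host_stateful.rendered}"
}

On first boot, Ignition reads that user data, partitions and formats the disk, and drops in the mount unit before the OS finishes coming up.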
There is currently an open issue with Ignition where it breaks when attempting to re-provision a previously configured disk on a new machine. Basically, the Ignition process chokes because it sees that the device has already been partitioned and formatted and can’t do it again. I ran into this scenario while trying to create a floating, persistent data EBS volume that gets attached to servers in an autoscaling group, where I wanted the volume to be able to move around freely if a server gets killed off.