Create Windows Server 2019 AMIs using Packer

There are quite a few blog posts out there detailing this, but none of them seem to be up to date for the HCL syntax introduced in Packer 1.5. HCL has a number of advantages over the original JSON syntax: it is much more human friendly, more flexible, and easier to work with.

So in this post I will share a very quick and dirty example Packer configuration that uses the HCL syntax to build, provision, and publish a Windows Server AMI to AWS.

You’ll need the following bootstrap_win.txt file in place in order for Packer to be able to provision the server using PowerShell commands over WinRM.

<powershell>

write-output "Running User Data Script"
write-host "(host) Running User Data Script"

Set-ExecutionPolicy Unrestricted -Scope LocalMachine -Force -ErrorAction Ignore

# Don't set this before Set-ExecutionPolicy as it throws an error
$ErrorActionPreference = "stop"

# Remove HTTP listener
Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse

# Create a self-signed certificate to let ssl work
$Cert = New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName "packer"
New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $Cert.Thumbprint -Force

# WinRM
write-output "Setting up WinRM"
write-host "(host) setting up WinRM"

cmd.exe /c winrm quickconfig -q
cmd.exe /c winrm set "winrm/config" '@{MaxTimeoutms="1800000"}'
cmd.exe /c winrm set "winrm/config/winrs" '@{MaxMemoryPerShellMB="1024"}'
cmd.exe /c winrm set "winrm/config/service" '@{AllowUnencrypted="true"}'
cmd.exe /c winrm set "winrm/config/client" '@{AllowUnencrypted="true"}'
cmd.exe /c winrm set "winrm/config/service/auth" '@{Basic="true"}'
cmd.exe /c winrm set "winrm/config/client/auth" '@{Basic="true"}'
cmd.exe /c winrm set "winrm/config/service/auth" '@{CredSSP="true"}'
cmd.exe /c winrm set "winrm/config/listener?Address=*+Transport=HTTPS" "@{Port=`"5986`";Hostname=`"packer`";CertificateThumbprint=`"$($Cert.Thumbprint)`"}"
cmd.exe /c netsh advfirewall firewall set rule group="remote administration" new enable=yes
cmd.exe /c netsh firewall add portopening TCP 5986 "Port 5986"
cmd.exe /c net stop winrm
cmd.exe /c sc config winrm start= auto
cmd.exe /c net start winrm

</powershell>

Here is what the full Packer HCL configuration looks like.

variable "aws_region" {
  type    = string
  default = "us-west-2"
}

variable "instance_type" {
  type    = string
  default = "t3.medium"
}

variable "subnet_id" {
  type = string
}

variable "vpc_id" {
  type = string
}

variable "ami_users" {
  type = string
}

source "amazon-ebs" "windows_server" {
  ami_description             = "A custom Windows Server AMI"
  ami_name                    = "windows-example"
  ami_users                   = ["${var.ami_users}"]
  associate_public_ip_address = true
  communicator                = "winrm"
  instance_type               = "${var.instance_type}"
  region                      = "${var.aws_region}"
  force_deregister            = true
  force_delete_snapshot       = true
  source_ami_filter {
    filters = {
      architecture        = "x86_64"
      name                = "Windows_Server-2019-English-Full-ContainersLatest-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["801119661308"]
  }
  subnet_id      = "${var.subnet_id}"
  user_data_file = "./bootstrap_win.txt"
  vpc_id         = "${var.vpc_id}"
  winrm_insecure = true
  winrm_port     = 5986
  winrm_use_ssl  = true
  winrm_username = "Administrator"
}

build {
  sources = ["source.amazon-ebs.windows_server"]

  # Extra configuration
  provisioner "file" {
    destination = "C:\\ProgramData\\someconfig.txt"
    source      = "./myconfig.txt"
  }

  provisioner "powershell" {
    # Reinitialize the server to generate a random password on first boot
    inline = [
      "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\SendWindowsIsReady.ps1 -Schedule",
      "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule",
      "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\SysprepInstance.ps1 -NoShutdown"
    ]
  }
}

Notice the two provisioners. The first copies local files into the image before it is baked, which is useful for injecting extra configuration. The second is specific to AWS Windows Server images: it schedules the SendWindowsIsReady.ps1, InitializeInstance.ps1, and SysprepInstance.ps1 scripts so that the machine acts as if it is being booted for the first time. These scripts are important for ensuring that this AMI can be created and started the exact same way every time.

Some of the configuration options may not be necessary, so you will need to play around with the configuration to make it suit your needs. For example, you may want to create a unique image on every build, which can be done by working a timestamp into the AMI name. If every build uses a unique name, the force_deregister and force_delete_snapshot options can be omitted.
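As a rough sketch, one way to do this from the command line is with the -var flag. This assumes ami_name has been promoted to a variable in the template (it is hard-coded above), and the windows.pkr.hcl filename is just an example:

# Hypothetical wrapper: pass a unique, timestamped AMI name on each build
# (assumes an "ami_name" variable has been added to the template)
packer build \
  -var "ami_name=windows-example-$(date +%Y%m%d%H%M%S)" \
  windows.pkr.hcl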

Likewise, since this Packer image is specific to Windows, it uses a number of winrm_ options, which would be replaced by ssh_ options if you were provisioning a Linux image.

One other trick that I found to be helpful was setting some of these variables on the fly via a script. Packer picks up any environment variable named PKR_VAR_<var> as the value for the corresponding variable. This is especially useful when the configuration needs to be shared across different environments where things like vpc_id and subnet_id change.
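For example, a minimal wrapper script might look like the following (the IDs and the windows.pkr.hcl filename are placeholders):

#!/usr/bin/env bash
# Placeholder values; Packer maps PKR_VAR_<name> to the variable <name>
export PKR_VAR_vpc_id="vpc-0123456789abcdef0"
export PKR_VAR_subnet_id="subnet-0123456789abcdef0"
export PKR_VAR_ami_users="123456789012"

packer build windows.pkr.hcl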


Immutable WordPress Installations with Kubernetes


In this post I will describe some of the interesting things I discovered while working on a recent side project, including a few nice ways to automate and manage WordPress deployments with Kubernetes.

Automating WordPress

I am very happy with the patterns that have emerged from this project. One thing that I have always struggled with (I’m sure others have as well) in the world of WP has been finding a good way to create completely reproducible and immutable code and configurations. For example, managing plugins and themes has been a painful experience because WP was designed back in a time before configuration as code and infrastructure as code. Due to this different paradigm, WP is usually stood up once, then managed via its UI.

Tools have evolved since then, and a perfect example of one of the bridges between the old and new ways is wp-cli. The wp-cli tool is basically a way to automate all kinds of things you would otherwise do in the UI. For example, wp-cli provides a way to manage plugins and themes, which as I mentioned has been notoriously difficult to do in the past.
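A couple of representative commands (the plugin and theme slugs here are just examples):

# Install and activate a plugin and a theme from the command line
wp plugin install redis-cache --activate
wp theme install twentytwenty --activate

# Keep everything up to date without touching the admin UI
wp plugin update --all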

The next step forward is a tool called bedrock and its accompanying workflow for customizing it and building your own Docker image. The roots/bedrock method provides wp-cli in the build scripts, so if needed you can automate tasks using extra entrypoint scripts and/or wp-cli commands, which is a nice extra touch and shows that the maintainers of the project are putting a lot of effort into it.

A few other bells and whistles include a way to build custom plugins into the Docker image for portability, rather than relying on some external persistent storage solution that can quickly add overhead and complexity to a project, as well as modern tools like PHP Composer and Packagist, which provide a way to install packages (Composer) and a way to manage WP plugins via the Composer package manager (Packagist).

Sidenote

There are several other ways of deploying WP into Kubernetes; unfortunately, most of these methods do not address multitenancy. If multitenancy is needed, a much more complicated approach is required, involving either NFS or some other many-to-many volume mapping.

Deploying with Kubernetes

The tricky part is that I was unable to find any examples of others using Kubernetes to deploy bedrock-managed Docker containers. There is a docker-compose.yaml file in the repo that works perfectly well, but the next step beyond that doesn’t seem to be a topic that has been covered much.

Luckily it is mostly straightforward to bring the docker-compose configuration into Kubernetes; there are just a few minor adjustments that need to be made. The link below should provide the basic scaffolding needed to bring bedrock into a Kubernetes cluster. This method will even expose a way to create and manage WP multisite, another notoriously difficult aspect of WP to manage.

https://gist.github.com/jmreicha/aae3ce024c13be1c561189946f1a0efc

There are a couple of things to note with this configuration. You will need to build and maintain your own Docker image based off the roots/bedrock repo linked above. You will also need to have some knowledge of Kubernetes and a working Kubernetes cluster in place. The configuration requires certificates and DNS, so cert-manager and external-dns will most likely need to be deployed into the cluster.
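If those aren't already running, one common way to install them is with Helm; this is only a hedged sketch, and the chart values (especially for external-dns) will vary by environment:

# cert-manager from the Jetstack chart
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# external-dns from the Bitnami chart, pointed at Route 53 in this example
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install external-dns bitnami/external-dns \
  --namespace external-dns --create-namespace \
  --set provider=aws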

Finally, the password, the domain (example.com), the environment variables for configuring the database, and the Docker image in the configuration will all need to be updated to reflect your own environment. This method assumes that the WordPress database has already been split out to another location, so the Kubernetes cluster will need to be able to communicate with wherever the database is hosted.

To see some of the magic, change the number of replicas in the Kubernetes manifest from 1 to 2 and apply it again; you should see a new, completely identical container come up with all the correct configuration and code and start taking traffic.
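Equivalently, you can scale without editing the manifest; a quick sketch, assuming the Deployment and its label from the gist are both named wordpress (adjust to whatever names you actually used):

# Scale the Deployment and watch the second pod come up
# (the Deployment name and label selector are assumptions)
kubectl scale deployment wordpress --replicas=2
kubectl get pods -l app=wordpress -w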

Conclusion

Switching to the immutable infrastructure approach with WP nets a big win. By adopting these new methods and workflows you can control everything with code, which removes the need for manually managing WP instances and instead allows you to create workflows and pipelines to do all of the heavy lifting.

These benefits include much more visibility into changes, because Git becomes the central source of truth, which gives you a better picture of the what, when, and why of a change than any other system I have found. This new paradigm also enables Continuous Integration as it is intended: automated builds producing immutable artifacts (Docker images) and automated deployments (Kubernetes manifests) create a clean and simple way to manage every aspect of running the WordPress site.


Terraform Testing Tools


What begins with “T”? I have been thinking about various ways of testing infrastructure and resources lately and have been having a difficult time sorting through the tools that are available. This post is meant to be a reference for “finding the right tool for the right job” when testing various kinds of infrastructure.

Often when you start talking about testing, you will hear about the testing pyramid, which is described, along with some other interesting aspects of testing Terraform, in this blog post; it covers a lot of the pitfalls and gotchas you might run into.

My original aim was to find a good tool for unit testing Terraform, and as part of that adventure I uncovered a number of other interesting projects that, while not directly applicable, could be very useful for a number of different testing scenarios. Read the above blog post for more info, but suffice it to say that unit testing (the bottom layer of the testing pyramid) is quite difficult to do with Infrastructure as Code and Terraform, and at this point is mostly not a solved problem.

Here is the list of tools that I uncovered in my research. Please let me know if there are any missing.

Terratest

Terratest is written and maintained by the folks behind Terragrunt, and it provides a comprehensive testing experience: deploying Infrastructure as Code, testing that it works as expected, and then tearing the IaC down when it is finished. From what I can tell this is probably the most comprehensive testing tool out there; it still doesn’t cover unit tests, but it is a great way to ensure things are working end to end.

Kitchen-Terraform

Follows the BDD philosophy and spins up, tests and spins down various Terraform resources. Works in a similar fashion to Terratest to bring up the environment, test and then tear things down.

serverspec

This is one of the original tools I ran across when first mulling over the idea of unit testing infrastructure, back in the days when configuration management tools like Chef, Puppet, and Salt ruled the earth.

inspec (aws-resources)

Very similar to awspec, this tool provides a framework for testing various AWS resources. This one is nice because it builds on InSpec, so it has a lot of extra capabilities.

awspec

Very similar to the other “spec” tools, awspec is built in the same style as serverspec/inspec and provides a very nice interface for testing various AWS resource types. Since it is modeled after serverspec, you will need to deal with Ruby.

Testinfra

Another Python-based, unit-testing-style framework. This tool differs from Terraform Validate in that it mainly focuses on testing the lower-level server and OS, but it does have integrations for testing a number of other things.

terraform-compliance

A nice tool for testing (and enforcing) Terraform compliance rules.

Terraform Validate

This is a nice tool for expressing various test conditions as compliance checks, especially if you are already familiar with Python and its testing landscape. It parses configs using pyhcl and allows you to write familiar unittest-style tests for Terraform configurations.

Pulumi

This one is a bit of a wild card but could prove interesting to some readers. Pulumi provides direct integration with your language of choice (Python for me), which lets you write Pulumi code and test it with language-native unit tests. Obviously Pulumi isn’t Terraform, but I had to mention it here because there is a significant amount of crossover between the tools.


Exploring Linux Terminals

I have been experimenting with Linux on a recently acquired Lenovo X1 Carbon and as part of the process I have been exploring and testing out various productivity tools, including terminal emulators.

The first thing I discovered in this process is that there are a lot of terminals out there. They are all slightly different, and in my experience almost none of them were able to do all of the tweaks I like.

Here is the list of terminals I have tested out (so far).

As you can see, that is a bunch of terminals. To keep things short, in my exploration and testing, the best 3 I found were as follows.

Alacritty – 3rd

It is fast, the defaults are great, and it works perfectly with tiling window managers. This one would probably have been first on the list if it were able to provide a blinking cursor, but as I found with many of the options, there almost always seemed to be a gotcha.

A few high points for this terminal: it is written in Rust, it uses the GPU to offload rendering, it is cross platform, and, maybe my favorite feature, its configuration file is YAML based. The docs are also good and the community is really taking off, so I have a strong feeling this one will continue improving.

urxvt – 2nd

The biggest problem with this terminal for me is that it is painful to configure. There are no preferences configured out of the box, which some people prefer, but digging through old blog posts and Perl scripts is not how I like to spend my time.

This terminal is second on the list because it was the only other terminal that was able to do all of the small tweaks and adjustments that suit my preferences. And it is fast and lightweight, which are good things.

Here is the configuration I ended up with if interested.

Kitty – 1st

This terminal was a clear winner. It was able to do all of my custom tweaks and settings and because it has nice defaults I only needed to add about 10 lines of extra configuration.

This terminal has a lot of other stuff going for it. It is written in C and Python, it uses the GPU to offload rendering, which makes it fast, it is easy to configure, and for the most part it just works. The only gotcha I found was that I needed to explicitly turn on copy-on-select in my configuration, but that was easy.

Here is the configuration I ended up with if interested.

Conclusion

I plan on keeping this list updated to some extent as I find more terminals to try out. As you can see, there are many different options, and there seem to be more all the time.

Your experience may differ so obviously take these musings with a pinch of salt, and please do look through the various options and try things out to see if they will work for you. That said, I do think I have a specific enough use case that these recommendations should be helpful in guiding most users.

Here is the repo with all of my various configurations if you want to check something out. There were a few terminal configs that didn’t end up there just because they were too minimal or I didn’t like them.


Idempotent Shell Scripts with Terraform

One challenge when dealing with Terraform is keeping things clean and repeatable. My current favorite approach for accomplishing this with shell scripts is a combination of null_resource and triggers to control when scripts should be rerun. These controls, combined with Terraform provisioners and template_file data sources, provide a nice, flexible way to deal with otherwise potentially messy scripts.

I like to provide more than one trigger so that the resource is recomputed when either a variable that gets passed into the script changes or the script itself changes. This trick is handy when you need to bend Terraform into doing something that it doesn’t usually handle, like some startup or bootstrap process.

Below is the full example with templated scripts and triggers in Terraform v0.11. The same logic should also work with Terraform v0.12 with minimal changes.

data "template_file" "cool_script" {
  template = "${file("${path.module}/script.sh")}"

  vars {
    my_cool_var = "${var.my_cool_var}"
  }
}

resource "null_resource" "script" {

  # Rerun the script when either the variables or the script itself change
  triggers = {
    my_trigger = "${var.my_cool_var}"
    script_sha = "${sha256(file("${path.module}/script.sh"))}"
  }

  provisioner "local-exec" {
    command   = "${data.template_file.cool_script.rendered}"
    interpreter = ["/bin/bash", "-c"]
  }
}

You can provide as many variables to the template_file as the script needs, and any time those get updated, the null_resource will pick the changes up and rerun your script for you. Notice that the provisioner in the null_resource is basically set up to call bash against the rendered script that we created, using the values supplied in vars.

If you run it again without changing anything, nothing should happen, because we already computed the SHA of the script and told Terraform to keep track of that state. The line below is a simple way of telling Terraform to keep track of changes to the script/template.

script_sha = "${sha256(file("${path.module}/script.sh"))}"

Once the script is updated, the next time it gets called its values and output should update accordingly. Likewise, since we are also using a variable as a trigger, the script will be rerun when that value changes.
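To see the behavior end to end, a quick sketch of the workflow (the resource address matches the example above):

# The first apply renders and runs the script; a second apply with nothing
# changed should report no changes
terraform apply
terraform apply

# Force the script to run again even though nothing has changed
terraform taint null_resource.script
terraform apply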

Terraform is flexible enough to allow us to do things like this because there are often situations and edge cases where Terraform can’t really perform some action natively. The above example is a common workaround for provisioning resources that either don’t have an API Terraform can tap into, or for tasks that are only handled via some startup or bootstrap script/process.
