Deploy AWS SSM agent to CoreOS

If you have been a CoreOS user for long, you will undoubtedly have noticed that there is no real package management system.  If you're not familiar, the philosophy of CoreOS is to avoid a package manager and instead lean heavily on Docker containers, along with a few system-level tools, to manage servers.  The problem I recently stumbled across is that the AWS SSM agent is packaged in deb and RPM formats and is assumed to be installed with a package manager, which obviously won't work on CoreOS.  In the remainder of this post I will describe the steps I took to get the SSM agent working on a CoreOS/Dockerized server.  Overall I am very happy with how well this solution turned out.

To get started, there is a nice tutorial here for using AWS Session Manager through the console.  The most important thing to do before "installing" the SSM agent on the CoreOS host is to set up the AWS instance with the correct permissions for the agent to communicate with AWS.  To accomplish this, I created a new IAM role and attached the AmazonEC2RoleforSSM managed policy to it through the AWS console.
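For readers who prefer the CLI over the console, the same setup can be scripted roughly as follows.  This is a hedged sketch rather than the exact commands I ran: the role, instance profile, and file names as well as the instance ID are placeholders, and it assumes the AWS managed AmazonEC2RoleforSSM policy.

# Trust policy that lets EC2 instances assume the role
cat > ssm-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

# Create the role and attach the managed SSM policy
aws iam create-role --role-name ssm-agent-role \
  --assume-role-policy-document file://ssm-trust.json
aws iam attach-role-policy --role-name ssm-agent-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM

# Wrap the role in an instance profile and attach it to the CoreOS instance
aws iam create-instance-profile --instance-profile-name ssm-agent-profile
aws iam add-role-to-instance-profile --instance-profile-name ssm-agent-profile \
  --role-name ssm-agent-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=ssm-agent-profile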

After this step is done, you can bring up the ssm-agent.

Install the ssm-agent

After ensuring the correct permissions have been applied to the server that is to be managed, the next step is to bring up the agent.  To do this with Docker, there are a few tricks needed to get things working correctly, most notably working around the PID 1 zombie reaping problem that Docker has.

I basically lifted the Dockerfile from here originally and adapted it into my own public Docker image at jmreicha/ssm-agent:latest.  In case readers want to try this, my image is a bit newer than the original source and has a few tweaks.  The Dockerfile itself is mostly straightforward; the main difference is that the ssm-agent process won't reap child processes in the default Debian image.

To work around the child reaping problem I substituted the slick Phusion Docker baseimage, which ships a very simple process manager that allows shells spawned by the ssm-agent to be reaped when they terminate.  I have my Dockerfile hosted here if you want to check out how the phusion baseimage version works.
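For illustration, the image looks roughly like the following.  This is a simplified sketch rather than the exact Dockerfile from the repo; it assumes the publicly documented Debian package URL for the agent and relies on the phusion baseimage init (/sbin/my_init) to reap child processes.

# Sketch of a phusion-based SSM agent image (not the exact published Dockerfile)
FROM phusion/baseimage:0.11

# Install the SSM agent from the Debian package published by AWS
ADD https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb /tmp/amazon-ssm-agent.deb
RUN dpkg -i /tmp/amazon-ssm-agent.deb && rm /tmp/amazon-ssm-agent.deb

# my_init is the phusion process manager, which reaps zombie children
CMD ["/sbin/my_init", "--", "/usr/bin/amazon-ssm-agent"]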

Once the child reaping problem was solved, here is the command I initially used to spin up the container, which of course still didn’t work out of the box.

docker run \
  -v /var/run/dbus:/var/run/dbus \
  -v /run/systemd:/run/systemd \
  jmreicha/ssm-agent:latest

I received the following errors.

2018-11-05 17:42:27 INFO [OfflineService] Starting document processing engine...
2018-11-05 17:42:27 INFO [OfflineService] [EngineProcessor] Starting
2018-11-05 17:42:27 INFO [OfflineService] [EngineProcessor] Initial processing
2018-11-05 17:42:27 INFO [OfflineService] Starting message polling
2018-11-05 17:42:27 INFO [OfflineService] Starting send replies to MDS
2018-11-05 17:42:27 INFO [LongRunningPluginsManager] starting long running plugin manager
2018-11-05 17:42:27 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
2018-11-05 17:42:27 INFO [HealthCheck] HealthCheck reporting agent health.
2018-11-05 17:42:27 INFO [MessageGatewayService] Starting session document processing engine...
2018-11-05 17:42:27 INFO [MessageGatewayService] [EngineProcessor] Starting
2018-11-05 17:42:27 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
2018-11-05 17:42:27 INFO [StartupProcessor] Executing startup processor tasks
2018-11-05 17:42:27 INFO [StartupProcessor] Unable to open serial port /dev/ttyS0: open /dev/ttyS0: no such file or directory
2018-11-05 17:42:27 INFO [StartupProcessor] Attempting to use different port (PV): /dev/hvc0
2018-11-05 17:42:27 INFO [StartupProcessor] Unable to open serial port /dev/hvc0: open /dev/hvc0: no such file or directory
2018-11-05 17:42:27 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory
2018-11-05 17:42:27 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory. Retrying in 5 seconds...
2018-11-05 17:42:27 INFO [MessageGatewayService] Successfully created ssm-user
2018-11-05 17:42:27 ERROR [MessageGatewayService] Failed to add ssm-user to sudoers file: open /etc/sudoers.d/ssm-agent-users: no such file or directory
2018-11-05 17:42:27 INFO [MessageGatewayService] [EngineProcessor] Initial processing
2018-11-05 17:42:27 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0d33006836710e7ef, requestId: 2975fe0d-846d-4256-9d50-57932be03925
2018-11-05 17:42:27 INFO [MessageGatewayService] listening reply.
2018-11-05 17:42:27 INFO [MessageGatewayService] Opening websocket connection to: %!(EXTRA string=wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0d33006836710e7ef?role=subscribe&stream=input)
2018-11-05 17:42:27 INFO [MessageGatewayService] Successfully opened websocket connection to: %!(EXTRA string=wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0d33006836710e7ef?role=subscribe&stream=input)
2018-11-05 17:42:27 INFO [MessageGatewayService] Starting receiving message from control channel
2018-11-05 17:42:32 INFO [StartupProcessor] Unable to open serial port /dev/ttyS0: open /dev/ttyS0: no such file or directory
2018-11-05 17:42:32 INFO [StartupProcessor] Attempting to use different port (PV): /dev/hvc0
2018-11-05 17:42:32 INFO [StartupProcessor] Unable to open serial port /dev/hvc0: open /dev/hvc0: no such file or directory
2018-11-05 17:42:32 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory
2018-11-05 17:42:32 ERROR [StartupProcessor] Error opening serial port: open /dev/hvc0: no such file or directory. Retrying in 5 seconds...
2018-11-05 17:42:35 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.

The first error that jumped out in the logs is "Unable to open serial port".  There is also an error about not being able to add the ssm-user to the sudoers file.

The fix for both issues is to pass the CoreOS serial device through to the container with "--device=/dev/ttyS0" and to add a volume mount for the sudoers path, "-v /etc/sudoers.d:/etc/sudoers.d".  The full Docker run command is shown below.

docker run -d --restart unless-stopped --name ssm-agent \
  --device=/dev/ttyS0 \
  -v /var/run/dbus:/var/run/dbus \
  -v /run/systemd:/run/systemd \
  -v /etc/sudoers.d:/etc/sudoers.d \
  jmreicha/ssm-agent:latest

After fixing the errors found in the logs, and bringing up the containerized SSM agent, go ahead and create a new session in the AWS console.

ssm session

The session should come up pretty much immediately and you should be able to run commands like you normally would.

The last thing to (optionally) do is run the agent as a systemd service so that it starts automatically if it dies or when the server gets rebooted.  You can probably get away with just using the Docker restart policy if you aren't interested in configuring a systemd service, which is what I have chosen to do for now.
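If you do want to go the systemd route, a minimal unit along these lines should work on CoreOS.  This is a sketch: the unit and container names are placeholders, and the docker run flags are just the ones used above.  Note that when systemd owns the restart behavior you would drop the --restart unless-stopped flag.

[Unit]
Description=Containerized AWS SSM agent
After=docker.service
Requires=docker.service

[Service]
# Clean up any leftover container before starting a fresh one
ExecStartPre=-/usr/bin/docker rm -f ssm-agent
ExecStart=/usr/bin/docker run --rm --name ssm-agent \
  --device=/dev/ttyS0 \
  -v /var/run/dbus:/var/run/dbus \
  -v /run/systemd:/run/systemd \
  -v /etc/sudoers.d:/etc/sudoers.d \
  jmreicha/ssm-agent:latest
ExecStop=/usr/bin/docker stop ssm-agent
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target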

You could even adapt this Docker image into a Kubernetes manifest and run it as a DaemonSet on each node of the cluster to simplify things and add another layer of security.  I may return to the systemd unit and/or Kubernetes manifest in the future if readers are interested.

Conclusion

session history

AWS Session Manager is a fantastic tool for troubleshooting and debugging as well as auditing and security.

With SSM you can avoid exposing specific servers to the internet directly, and you can also keep track of what kinds of commands have been run on them.  As a bonus, the AWS console keeps track of all previous sessions, and if you hook up CloudWatch and/or S3 you can see all the commands and the times they were run, with simple links to the log files.

SSM also lets you do a lot of other cool stuff, like running scripts against a subset of servers filtered by tags, or against all servers that SSM recognizes.  I'm sure there are other features as well; I just haven't found them yet.
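For example, running a quick command against all instances carrying a particular tag looks something like the following.  The document name is the built-in AWS-RunShellScript; the tag key/value and the command itself are placeholders.

aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Environment,Values=staging" \
  --parameters 'commands=["uptime"]' \
  --comment "Check uptime on staging hosts"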


Mount a volume using Ignition and Terraform

Sometimes when provisioning a server you may want to configure and provision storage as part of the bootstrapping and boot process.  For example, the other day I ran into an issue where I needed to define a disk, partition it, mount it to a specified location and then create a few directories on it.  It turned out to be surprisingly non-trivial to provision this storage, and I learned quite a few things along the way that I thought were worth sharing.

I'd just like to mention that Ignition works like magic.  If you aren't familiar, Ignition is basically a tool to help provision and configure servers, very similar to cloud-config except that by default Ignition only runs once, on first boot.  The magic of Ignition is that it injects itself into the initramfs before the OS ever boots and manipulates the system from there.  Ignition configuration can also be read in from a remote URL so that it can easily be used in bare metal infrastructures.  There were several pieces to this puzzle.

The first was nailing down all of the various Ignition configuration components in Terraform.  Nothing was particularly complicated; there was just a lot of trial and error to get everything working.  Terraform has some really nice documentation for working with Ignition configurations, so I'd recommend starting there and playing around to figure out the various bits and pieces of configuration that Ignition can handle.  There is some documentation on Ignition troubleshooting as well, which I found helpful when things weren't working correctly.

Below, each portion of the Ignition configuration gets declared inside of an "ignition_config" block.  The Ignition configuration then points to each individual component that we want Ignition to configure, e.g. systemd, filesystems, directories, etc.

data "ignition_config" "staging_rancher_host_stateful" {
  systemd = [
     "${data.ignition_systemd_unit.mount_data.id}",
  ]

  filesystems = [
    "${data.ignition_filesystem.data_fs.id}",
  ]

  directories = [
    "${data.ignition_directory.data_dir.id}",
  ]

  disks = [
    "${data.ignition_disk.data_disk.id}",
  ]
}

This part of the setup is pretty straightforward.  Create a data block with the Ignition configuration needed to mount the disk to the correct location, format the device if it hasn't already been formatted, create the desired directory, and then create the systemd unit that configures the mount point for the OS.  Here's what each of the data blocks might look like.

data "ignition_filesystem" "data_fs" {
   name = "data"

  mount {
    device = "/dev/xvdb1"
    format = "ext4"
  }
}

data "ignition_directory" "data_dir" {
  filesystem = "data"
  path = "/data"
  uid = 500
  gid = 500
}

data "ignition_disk" "data_disk" {
  device = "/dev/xvdb"

  partition {
    number = 1
    start = 0
    size = 0
  }
}

Next, create the Systemd unit.

data "ignition_systemd_unit" "mount_data" {
  content = "${file("./data.mount")}"
  name = "data.mount"
}

Another challenge was getting the systemd unit to mount the disk correctly.  I don't work with systemd frequently, so I initially had some trouble figuring this part out.  Basically, systemd expects the mount unit's name to EXACTLY match the path declared in the "Where" clause of the unit definition.

For example, the following unit needs to be named data.mount because /data is the path defined in its Where clause.

[Unit]
Description=Mount /data
Before=local-fs.target

[Mount]
What=/dev/xvdb1
Where=/data
Type=ext4

[Install]
WantedBy=local-fs.target

After all the kinks have been worked out of the systemd unit(s) and the Terraform Ignition configuration above, you should be able to deploy this and have Ignition provision disks for you automatically when the OS comes up.  This can be extended as much as needed to get initial disks set up correctly and is a huge step toward automating your infrastructure in a nice, repeatable way.
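For completeness, the rendered Ignition config then just gets handed to the instance as user data.  Here is a minimal sketch assuming an aws_instance resource; the resource name is made up and the other instance arguments are elided.

resource "aws_instance" "staging_rancher_host" {
  # ami, instance_type, subnet_id, etc. omitted for brevity
  user_data = "${data.ignition_config.staging_rancher_host_stateful.rendered}"
}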

There is currently an open issue where Ignition breaks when attempting to re-provision a previously configured disk on a new machine.  Basically, the Ignition process chokes because it sees that the device has already been partitioned and formatted and can't do it again.  I ran into this scenario while trying to create a floating persistent data EBS volume that gets attached to servers in an autoscaling group, where I wanted the volume to be able to move around freely if the server gets killed off.


Configure S3 to store load balancer logs using Terraform

If you've ever encountered the following error (or similar) when setting up an AWS load balancer to write its logs to an S3 bucket using Terraform, then you are not alone.  I decided to write a quick note about this problem because it is the second time I have been bitten by it and had to spend time Googling around for an answer.  The AWS documentation for creating and attaching the policy makes sense, but the idea behind why you need to do it is murky at best.

aws_elb.alb: Failure configuring ELB attributes: InvalidConfigurationRequest: Access Denied for bucket: <my-bucket> Please check S3bucket permission
status code: 409, request id: xxxx

For reference, here are the docs for how to manually create the policy by going through the AWS console.  This method works fine for manually creating the policy and attaching it to the bucket.  The problem is that it isn't obvious why this needs to happen in the first place, and it also isn't obvious how to do it in Terraform once you figure out why.  Luckily Terraform has great support for IAM and bucket policies, which makes it easy to configure the policy and attach it to the bucket correctly.

Below is an example of how you can create this policy and attach it to your load balancer log bucket.

data "aws_elb_service_account" "main" {}

data "aws_iam_policy_document" "s3_lb_write" {
    policy_id = "s3_lb_write"

    statement = {
        actions = ["s3:PutObject"]
        resources = ["arn:aws:s3:::<my-bucket>/logs/*"]

        principals = {
            identifiers = ["${data.aws_elb_service_account.main.arn}"]
            type = "AWS"
        }
    }
}
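The policy document above only renders the JSON; it still needs to be attached to the bucket.  Here is a short sketch of the attachment, assuming the bucket is also managed in Terraform as aws_s3_bucket.my_bucket (otherwise just pass the bucket name directly).

resource "aws_s3_bucket_policy" "lb_logs" {
  bucket = "${aws_s3_bucket.my_bucket.id}"
  policy = "${data.aws_iam_policy_document.s3_lb_write.json}"
}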

Notice that you don't need to hard-code the ELB account ID for your region like you do when setting up the policy manually.  Just use the ${data.aws_elb_service_account.main.arn} data source and Terraform will look up the correct ELB service account for the region and put it in the policy.  You can verify this by checking the table from the link above and cross-referencing it with the Terraform output for creating and attaching the policy.

You shouldn't need to update anything in the load balancer config for this to work; just rerun the failed command and it should succeed.  For completeness, here is what that configuration might look like.

...
access_logs {
    bucket = "${var.my_bucket}"
    prefix = "logs"
    enabled = true
}
...

This process is easy enough, but it still raises the question of why this seemingly unnecessary step needs to happen in the first place.  After searching around for a bit I finally found this:

When Amazon S3 receives a request—for example, a bucket or an object operation—it first verifies that the requester has the necessary permissions. Amazon S3 evaluates all the relevant access policies, user policies, and resource-based policies (bucket policy, bucket ACL, object ACL) in deciding whether to authorize the request.

Okay, so it basically looks like the load balancer writes its logs from an AWS-owned account (the ELB service account), which we need to explicitly grant permission to through the bucket policy:

If the request is for an operation on an object that the bucket owner does not own, in addition to making sure the requester has permissions from the object owner, Amazon S3 must also check the bucket policy to ensure the bucket owner has not set explicit deny on the object

Note

A bucket owner (who pays the bill) can explicitly deny access to objects in the bucket regardless of who owns it. The bucket owner can also delete any object in the bucket.

There we go.  There is a little bit more information in the link above but now it makes more sense.


Top Five Reasons to Use a Hybrid Cloud

Hybrid cloud

Guest post by Aventis Systems

Cloud computing is increasing in popularity as business users have become more comfortable with cloud capabilities. According to a recent VMware report, some 15% of workloads currently reside in the public cloud, and 50% are projected to be running in the public cloud by 2030.

Some business customers will move completely to the public cloud, drawn by its ability to help them respond to changing business needs, align costs and stay on the cutting edge of innovation. But a complete move isn’t the best option for all customers. Some processes still simply run more efficiently and securely on on-premise hardware.

For many businesses, there is no one-size-fits-all solution. Ultimately, using a mix of public clouds and private infrastructures is the best way for many companies to make the most of their resources while optimizing performance and productivity.

What Is a Hybrid Cloud?
A hybrid cloud is a combination of a private cloud platform designed for use by a specific organization and a public cloud provider like Amazon Web Services (AWS) or Google Cloud. These public clouds are shared by customers all over the world and are a cheaper alternative to buying physical servers.

Though the public and private cloud platforms operate independently from one another, they can communicate over an encrypted connection.

A hybrid approach enables data and applications to move between the public and private infrastructures. These are independent platforms, so businesses can store protected data on the private cloud while still leveraging applications that rely on that protected data on the public cloud.

In other words, your sensitive data stays out of the public cloud and on the private platform. The challenge is integrating the different public and private clouds and technologies in a way that is seamless for business users.

Here are some of the top benefits of a hybrid cloud approach, according to the experts at Aventis Systems:

1. Workload Flexibility
With hybrid cloud technology, your IT team has the flexibility to match resources with the infrastructure that best serves the needs of your business.

For example, an integrated hybrid cloud approach with VMware Cloud on AWS enables you to decide where to most effectively run workloads based on cost, risk and changing business needs. With the flexibility to move workloads onsite or offsite as needed, IT is able to better serve the business as a whole.

The ability for organizations to easily transition applications without having to re-platform them — along with the ability to effortlessly access and leverage native cloud services — enables businesses to create a flexible infrastructure in a constantly evolving IT landscape. When new technology becomes available or new trends emerge, businesses are agile enough to take advantage of them quickly.

2. Consistency and Scalability
VMware Cloud on AWS enables companies to leverage operational consistency, along with scalability, on one streamlined platform. By maintaining security and networking policies, along with consistent resource utilization both on- and off-premise, businesses can benefit most from a hybrid infrastructure.

Customers can strategically leverage and allocate company resources to get the most out of system functionality, while becoming better positioned for growth. As your business and capacity needs grow, a hybrid cloud infrastructure offers an easy way to scale to fit these complex needs.

3. Improved Security
Maintaining secure customer transaction data and personal information with a hybrid cloud infrastructure also offers a major benefit over an exclusively public platform. The hybrid approach enables specified servers to be isolated from specific security threats by allowing devices to be configured to communicate with them on a private network.

Where some compliance requirements prevent businesses from running payments in the cloud, for example, a hybrid cloud platform allows you to house secure customer data on a dedicated server, while maintaining the flexibility and convenience of online transactions.

4. Maximized Skillsets and Cost Optimization
Not only are hybrid clouds less expensive to manage, with VMware Cloud on AWS, business customers can also reap the benefits of utilizing their existing IT investments.

Hybrid cloud offerings integrate with your existing IT and use many of the same tools as those used on-premise. You can leverage the resources you already have without having to adopt new tools or acquire new hardware.

Additionally, an integrated hybrid cloud approach enables customers to better align their costs to business needs. Upfront costs can be balanced with recurring expenses, depending on the requirements.

5. Innovation
With a hybrid cloud approach, your business will have access to all of the resources on the public cloud without the burden of big upfront investments. With access to all the newest technologies and innovations, you can stay on the forefront of the latest capabilities.

As businesses become more comfortable and reliant on cloud capabilities, more and more companies will look for the right mix of public cloud and on-premise infrastructure models to increase efficiency and performance.


Tips for monitoring Rancher Server

Last week I encountered an interesting bug in Rancher that managed to cause some major problems across my Rancher infrastructure.  Basically, the bug was causing the Rancher agent clients to continuously bounce between disconnected/reconnecting/finished reconnecting states, which only manifested itself either after a 12 hour period or after deactivating/activating agents (for example, adding a new host to an environment).  The only way to temporarily fix the issue was to restart the rancher-server container.

With some help, we were eventually able to resolve the issue.  I picked up a few nice lessons along the way and also became intimately acquainted with some of the inner workings of Rancher.  Through this experience I learned some tips on how to effectively monitor the Rancher server environment that I would otherwise not have been exposed to, which I would like to share with others today.

All said and done, I view this experience as a positive one.  Hitting the bug has not only helped mitigate this specific issue for other users in the future, it also taught me a lot about the inner workings of Rancher.  If you're interested in the full story, you can read all of the details about the incident, including steps to reliably reproduce it and how the issue was ultimately resolved, here.  It was a bug specific to Rancher v1.5.1-3, so upgrading to v1.5.4 should fix the issue if you come across it.

Before diving into the specifics for this post, I just want to give a shout out to the Rancher community, including @cjellik, @ibuildthecloud, @ecliptok and @shakefu.  The Rancher developers, team and community members were extremely friendly and helpful in addressing and fixing the issue.  Between all the late night messages in the Rancher Slack, the many, many logs, and the countless hours of debugging and troubleshooting, I just want to say thank you to everyone for the help.  The small things go a long way, and it just shows how great the growing Rancher community is.

Effective monitoring

I use Sysdig as the main source of container and infrastructure monitoring.  To collect metrics, I run the Sysdig agent as a systemd service when a server starts up, so when a server dies and goes away or a new one is added, Sysdig is automatically started and begins dumping metric data into Sysdig Cloud for consumption through the web interface.

I have used this data to create custom dashboards, which give me a good overview of what is happening in the Rancher server environment (and others) at any given time.

sysdig dashboard

The other important thing I discovered through this process was the role that the Rancher database plays.  For the Rancher HA setup, I am using an externally hosted RDS instance for the Rancher database, and I was able to find some interesting correlations while troubleshooting, thanks to the metrics in Sysdig.  For example, if the database gets stressed it can cause other unintended side effects, so I set up some additional monitors and alerts for the database.

Luckily, Sysdig makes the collection of these additional AWS metrics seamless.  Sysdig offers an AWS integration that pulls in CloudWatch metrics and lets you add them to dashboards and alert on them from Sysdig, which has been very nice so far.

Below are some useful metrics in helping diagnose and troubleshoot various Rancher server issues.

  • Memory usage % (server)
  • CPU % (server)
  • Heap used over time (server)
  • Number of network connections (server)
  • Network bytes by application (server)
  • Freeable memory over time (RDS)
  • Network traffic over time (RDS)

As you can see, there are quite a few things you can measure with metrics alone.  Often though, this isn’t enough to get the entire picture of what is happening in an environment.

Logs

It is also important to have access to (useful) logs in the infrastructure in order to gain insight into WHY metrics look the way they do, and to help correlate log messages and errors with what exactly is going on in an environment when problems occur.  Docker has had the ability for a while now to use log drivers to customize logging, which has been helpful to us.  In the beginning I would just SSH into the server and tail the logs with the "docker logs" command, but we quickly found that to be cumbersome to do manually.

One alternative to tailing the logs manually is to configure the Docker daemon to automatically send logs to a centralized log collection system.  I use Logstash in my infrastructure with the "gelf" log driver as part of the bootstrap command that starts the Rancher server container, but there are other logging systems if Logstash isn't the right fit.  Here is what the relevant configuration looks like.

...
--log-driver=gelf \
--log-opt gelf-address=udp://<logstash-server>:12201 \
--log-opt tag=rancher-server \
...

Just specify the public address of the Logstash log collector and optionally add tags.  The extra tags make filtering the logs much easier, so I definitely recommend adding at least one.

Here are a few of the Logstash filters for parsing the Rancher logs.  Be aware though, it is currently not possible to log full Java stack traces in Logstash using the gelf input.

if [tag] == "rancher-server" {
    mutate { remove_field => "command" }
    grok {
        match => [ "host", "ip-(?<ipaddr>\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3})" ]
    }

    # Various filters for Rancher server log messages
    grok {
        match => [ "message", "time=\"%{TIMESTAMP_ISO8601}\" level=%{LOGLEVEL:debug_level} msg=\"%{GREEDYDATA:message_body}\"" ]
        match => [ "message", "%{TIMESTAMP_ISO8601} %{WORD:debug_level} (?<context>\[.*\]) %{GREEDYDATA:message_body}" ]
        match => [ "message", "%{DATESTAMP} http: %{WORD:http_type} %{WORD:debug_level}: %{GREEDYDATA:message_body}" ]
    }
}

There are some open issues for addressing this, but there doesn't seem to be much movement on the topic, so if you see a lot of individual messages from stack traces, that is the reason.

One option to mitigate the stack trace problem would be to run a local log collection agent (in a container, of course) on the rancher-server host, like Filebeat or Fluentd, which can clean up the logs before sending them on to something like Logstash, Elasticsearch or another centralized logging system.  This approach has the added benefit of allowing the logs to be encrypted in transit, which GELF does not support (currently).

If you don't have a centralized logging solution, or just don't care about shipping the rancher-server logs to one, the easiest option is to tail the logs locally as I mentioned previously, using the json-file log driver.  The only additional configuration I would recommend with the json-file driver is to turn on log rotation, which can be accomplished with the following configuration.

...
 --log-driver=json-file \
 --log-opt max-size=100mb \
 --log-opt max-file=2 \
...

Adding these logging options will ensure that the container logs for rancher-server never fill up the disk on the server.

Bonus: Debug logs

Additional debug logs can be found inside each rancher-server container.  Since these debug logs are typically not needed in day to day operations, they are sort of an Easter egg, tucked away in /var/lib/cattle/logs/ inside the rancher-server container.  The easiest way to analyze the logs is to get them off the server and onto a local machine.

Below is a sample of how to do this.

docker exec -it <rancher-server> bash
cd /var/lib/cattle/logs
cp cattle-debug.log /tmp

Then, from the host that the container is sitting on, you can docker cp the logs out of the container and into the working directory of the host.

docker cp <rancher-server>:/tmp/cattle-debug.log .
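Since docker cp can read any path inside the container, the intermediate copy to /tmp is optional; grabbing the file straight out of the Cattle log directory does the same thing:

docker cp <rancher-server>:/var/lib/cattle/logs/cattle-debug.log .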

From here you can either analyze the logs in a text editor available on the server, or you can copy the logs over to a local machine.  In the example below, the server uses ssh keys for authentication and I chose to copy the logs from the server into my local /tmp directory.

 scp -i ~/.ssh/<rancher-server-pem> user@rancher-server:/tmp/cattle-debug.log /tmp/cattle-debug.log

With a local copy of the logs you can either examine the logs using your favorite text editor or you can upload them elsewhere for examination.

Conclusion

With all of our Rancher server metrics dumping into Sysdig Cloud, along with our logs dumping into Logstash, it has become much easier for multiple people to quickly view and analyze what is going on with the Rancher servers.  In HA Rancher environments with more than one rancher-server running, it also makes filtering logs by server or IP much easier.  Since we use two hosts in our HA setup, we can now easily filter the logs for only the server that is acting as the master.

As these container-based environments grow, they also become much more complicated to troubleshoot.  With better logging and monitoring systems in place it is much easier to tell what is going on at a glance, and with the addition of the monitoring solution we can be much more proactive about finding issues earlier and mitigating potential problems faster.
