Last week I encountered an interesting bug in Rancher that managed to cause some major problems across my Rancher infrastructure. Basically, the bug was causing some of the Rancher agent clients to continuously bounce between disconnected/reconnected/finished and reconnecting states, which only manifested itself either after a 12 hour period or after deactivating/activating agents (for example, adding a new host to an environment). The only way to temporarily fix the issue was to restart the rancher-server container.
With some help, we were eventually able to resolve the issue. I picked up a few nice lessons along the way and also became intimately acquainted with some of the inner workings of Rancher. Through this experience I learned some tips on how to effectively monitor the Rancher server environment that I would otherwise not have been exposed to, which I would like to share with others today.
All said and done, I view this experience as a positive one. Hitting the bug has not only helped mitigate this specific issue for other users in the future but also taught me a lot about the inner workings of Rancher. If you’re interested in the full story, you can read all of the details about the incident, including steps to reliably reproduce it and how the issue was ultimately resolved, here. The bug was specific to Rancher v1.5.1-3, so upgrading to 1.5.4 should fix the issue if you come across it.
Before diving into the specifics for this post, I just want to give a shout out to the Rancher community, including @cjellik, @ibuildthecloud, @ecliptok and @shakefu. The Rancher team and community members were extremely friendly and helpful in addressing and fixing the issue. Between all the late night messages in the Rancher Slack, the many, many logs, and the countless hours of debugging and troubleshooting, I just want to say thank you to everyone for the help. The small things go a long way, and it shows how great the growing Rancher community is.
Effective monitoring
I use Sysdig as the main source of container and infrastructure monitoring. To handle metric collection, I run the Sysdig agent as a systemd service that starts when a server boots, so whenever a server dies and goes away or a new one is added, the agent starts up automatically and begins dumping metric data into Sysdig Cloud for consumption through the web interface.
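For reference, the systemd unit is nothing fancy; it just wraps the agent's docker run command. The sketch below is only an illustration, with the access key, tags and volume mounts as placeholders, so check the Sysdig agent documentation for the exact flags.

# /etc/systemd/system/sysdig-agent.service -- illustrative sketch only
[Unit]
Description=Sysdig Cloud agent container
After=docker.service
Requires=docker.service

[Service]
Restart=always
# Remove any stale container left over from a previous boot, then start the agent.
ExecStartPre=-/usr/bin/docker rm -f sysdig-agent
ExecStart=/usr/bin/docker run --name sysdig-agent --privileged --net host --pid host \
  -e ACCESS_KEY=<your-sysdig-access-key> \
  -e TAGS=role:rancher-server \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro \
  sysdig/agent
ExecStop=/usr/bin/docker stop sysdig-agent

[Install]
WantedBy=multi-user.target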
I have used this data to create custom dashboards, which give me a good overview of what is happening in the Rancher server environment (and others) at any given time.
The other important thing I discovered through this process was the role that the Rancher database plays. For the Rancher HA setup, I am using an externally hosted RDS instance for the Rancher database, and thanks to the metrics in Sysdig I was able to find some interesting correlations as part of troubleshooting. For example, if the database gets stressed it can cause other unintended side effects, so I set up some additional monitors and alerts for the database.
Luckily, Sysdig makes the collection of these additional AWS metrics seamless. Basically, Sysdig offers an AWS integration that pulls in CloudWatch metrics and allows you to add them to dashboards and alert on them from Sysdig, which has been very nice so far.
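The same CloudWatch metrics can also be spot-checked from the command line with the AWS CLI, which is handy for confirming what the dashboards are showing. The DB instance identifier and time range below are just placeholders for this example.

aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name FreeableMemory \
  --dimensions Name=DBInstanceIdentifier,Value=<rancher-db-instance> \
  --start-time 2017-05-01T00:00:00Z --end-time 2017-05-01T06:00:00Z \
  --period 300 \
  --statistics Average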
Below are some metrics that have proven useful in helping diagnose and troubleshoot various Rancher server issues.
- Memory usage % (server)
- CPU % (server)
- Heap used over time (server)
- Number of network connections (server)
- Network bytes by application (server)
- Freeable memory over time (RDS)
- Network traffic over time (RDS)
As you can see, there are quite a few things you can measure with metrics alone. Often though, this isn’t enough to get the entire picture of what is happening in an environment.
Logs
It is also important to have access to (useful) logs in the infrastructure, both to gain insight into WHY metrics look the way they do and to help correlate log messages and errors with what exactly is going on in an environment when problems occur. Docker has had the ability for a while now to use log drivers to customize logging, which has been helpful to us. In the beginning I would just SSH into the server and tail the logs with the “docker logs” command, but we quickly found that to be cumbersome to do manually.
One alternative to tailing the logs manually is to configure the Docker daemon to automatically send logs to a centralized log collection system. I use Logstash in my infrastructure with the “gelf” log driver as part of the bootstrap command that runs to start the Rancher server container, but there are other logging systems if Logstash isn’t the right fit. Here is what the relevant configuration looks like.
...
--log-driver=gelf \
--log-opt gelf-address=udp://<logstash-server>:12201 \
--log-opt tag=rancher-server \
...
Just specify the public address of the Logstash log collector and optionally add tags. The extra tags make filtering the logs much easier, so I definitely recommend adding at least one.
Here are a few of the Logstash filters for parsing the Rancher logs. Be aware though, it is currently not possible to log full Java stack traces in Logstash using the gelf input.
if [tag] == "rancher-server" {
  mutate {
    remove_field => "command"
  }

  grok {
    match => [ "host", "ip-(?<ipaddr>\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3})" ]
  }

  # Various filters for Rancher server log messages
  grok {
    match => [ "message", "time=\"%{TIMESTAMP_ISO8601}\" level=%{LOGLEVEL:debug_level} msg=\"%{GREEDYDATA:message_body}\"" ]
    match => [ "message", "%{TIMESTAMP_ISO8601} %{WORD:debug_level} (?<context>\[.*\]) %{GREEDYDATA:message_body}" ]
    match => [ "message", "%{DATESTAMP} http: %{WORD:http_type} %{WORD:debug_level}: %{GREEDYDATA:message_body}" ]
  }
}
There are some issues open for addressing this, but it doesn’t seem like there is much movement on the topic, so if you see a lot of individual messages from stack traces, that is the reason.
One option to mitigate the stack trace problem is to run a local log collection agent (in a container, of course) on the rancher-server host, such as Filebeat or Fluentd, which can clean up the logs before sending them to Logstash, Elasticsearch or some other centralized logging system. This approach has the added benefit of letting you encrypt log traffic, which GELF does not support (currently).
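As a rough sketch (not a drop-in config), a Filebeat setup along these lines would fold multi-line Java stack traces back into single events before shipping them to Logstash; the log path and Logstash port are assumptions for this example.

filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/rancher/rancher-server.log   # assumed path; point this at wherever the logs land
    # Lines that do not start with a timestamp are treated as part of the
    # previous event, which keeps Java stack traces together as one message.
    multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    multiline.negate: true
    multiline.match: after

output.logstash:
  hosts: ["<logstash-server>:5044"]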
If you don’t have a centralized logging solution, or just don’t care about shipping rancher-server logs to it, the easiest option is to tail the logs locally as I mentioned previously, using the json-file log driver. The only additional configuration I would recommend for the json-file driver is to turn on log rotation, which can be accomplished with the following options.
...
--log-driver=json-file \
--log-opt max-size=100mb \
--log-opt max-file=2 \
...
Adding these logging options will ensure that the container logs for rancher-server never fill up the disk on the server.
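With rotation turned on, tailing the logs locally is then just a matter of something like the following.

docker logs -f --tail 100 <rancher-server>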
Bonus: Debug logs
Additional debug logs can be found inside each rancher-server container. Since these debug logs are typically not needed in day-to-day operations, they are sort of an easter egg, tucked away in /var/lib/cattle/logs/ inside the rancher-server container. The easiest way to analyze the logs is to get them off the server and onto a local machine.
Below is a sample of how to do this.
docker exec -it <rancher-server> bash
cd /var/lib/cattle/logs
cp cattle-debug.log /tmp
Then, from the host that the container is sitting on, you can docker cp the logs out of the container and into the working directory on the host.
docker cp <rancher-server>:/tmp/cattle-debug.log .
From here you can either analyze the logs in a text editor available on the server, or you can copy the logs over to a local machine. In the example below, the server uses ssh keys for authentication and I chose to copy the logs from the server into my local /tmp directory.
scp -i ~/.ssh/<rancher-server-pem> user@rancher-server:/tmp/cattle-debug.log /tmp/cattle-debug.log
With a local copy of the logs you can examine them in your favorite text editor or upload them elsewhere for further analysis.
Conclusion
With all of our Rancher server metrics dumping into Sysdig Cloud and our logs shipping to Logstash, it is much easier for multiple people to quickly view and analyze what is going on with the Rancher servers. In HA Rancher environments with more than one rancher-server running, it also makes filtering the logs by server or IP much easier. Since we use two hosts in our HA setup, we can now easily filter the logs for only the server that is acting as the master.
As these container-based environments grow, they also become much more complicated to troubleshoot. With better logging and monitoring systems in place, it is much easier to tell what is going on at a glance, and with the addition of the monitoring solution we can be much more proactive about finding issues earlier and mitigating potential problems faster.