Writing For Tech

As my career has progressed, I have discovered writing to be an invaluable skill for an engineer to develop and polish.  The skill of writing well translates to a number of areas outside of tech, including writing good emails, networking and chatting over real-time collaboration tools (IRC, Slack, etc.), writing documentation and specs, and even just asking for help on message boards or social media sites.

For example, when asking for help in a public technical forum, e.g. GitHub issues or Stack Overflow, knowing exactly what problem you are having and describing it in a way that makes sense to others (who often don’t speak English as a first language) is much more difficult than it looks.  It takes time and practice to learn how to craft questions well and to frame technical problems in easy-to-understand ways.  In my own experience, people are almost always happy to help, but the many poorly framed questions I’ve seen on Stack Overflow make that help much harder to give.

I recently read two books that have had a tremendous impact on how I think about and approach writing, helping me grow as a writer, engineer, and technical collaborator, and I’d like to share them with readers today.  These books have been around for a long time, so if you’ve already heard of them or it has been a while since you read them, I encourage you to reread or at least skim through them again.

The first book, On Writing Well: The Classic Guide to Writing Nonfiction, is a great book that really forced me to think about my writing and what I could be doing better.  Instead of focusing on the mechanics and building blocks of writing (it does touch on these a little bit), On Writing Well focuses mainly on style and how to make your writing better by making it more interesting and less wasteful.  The book teaches readers that in writing, less is often more, and it focuses on lessons of simplicity as well as brevity: boiling things down to their simplest forms and avoiding common traps and pitfalls.

The second book is called The Elements of Style, Fourth Edition.  There are some really good tricks and tidbits in this book that 100% improved my writing fundamentals and mechanics, even without much practice outside of reading the book.  I would highly recommend this book to anybody who is interested in improving the foundations of their writing, from vocabulary to structure and overall quality.  The book is fairly short, so it doesn’t take long to work through, and you will more than likely find some tips that are immediately useful in your own writing style.

Getting better at writing is a process, just like learning any other skill.  The more time you spend thinking about it and practicing, the better you will get.  In my own experience, having this blog has been a great way to learn and grow my writing skills.  Not every blog post is a success in my eyes, but through repetition I have learned lessons and discovered what works and what doesn’t.

One lesson from On Writing Well that has really stuck with me is the idea that you should write for yourself.  Instead of thinking about what other people want, or what you think they want, just write about things that are interesting or personally meaningful, and the writing process will be much more rewarding.  Applying this concept makes writing much more enjoyable and keeps the gears turning.

Another idea from the book that stuck with me is that everybody has their own style of writing, and none of them are bad.  So if you feel pressure to write or create a certain way, don’t.  The writing process that works best for you is the right one; you just need to find it if you don’t know what it is already.  One of the most important lessons I have discovered over the years is that I’m not interested in writing my blog posts according to any set formula or criteria.

In my own writing process, I usually like to find a problem that is interesting or challenging to me, sit down, and just start writing.  This process helps me internalize and better understand the problem I am attempting to solve, as well as gives me a platform to help others.  I attribute my own process and writing style to a lot of practice and to applying the lessons I have learned until I eventually built up a style that works for me.

Good luck and happy writing.


Top Five Reasons to Use a Hybrid Cloud


Guest post by Aventis Systems

Cloud computing is increasing in popularity as business users have become more comfortable with cloud capabilities. According to a recent VMware report, some 15% of workloads currently reside in the public cloud, and 50% are projected to be running in the public cloud by 2030.

Some business customers will move completely to the public cloud, drawn by its ability to help them respond to changing business needs, align costs and stay on the cutting edge of innovation. But a complete move isn’t the best option for all customers. Some processes still simply run more efficiently and securely on on-premise hardware.

For many businesses, there is no one-size-fits-all solution. Ultimately, using a mix of public clouds and private infrastructures is the best way for many companies to make the most of their resources while optimizing performance and productivity.

What Is a Hybrid Cloud?
A hybrid cloud is a combination of a private cloud platform designed for use by a specific organization and a public cloud provider like Amazon Web Services (AWS) or Google Cloud. These public clouds are shared by customers all over the world and are a cheaper alternative to buying physical servers.

Though the public and private cloud platforms operate independently from one another, they can communicate over an encrypted connection.

A hybrid approach enables data and applications to move between the public and private infrastructures. These are independent platforms, so businesses can store protected data on the private cloud while still leveraging applications that rely on that protected data on the public cloud.

In other words, your sensitive data stays out of the public cloud and on the private platform. The challenge is integrating the different public and private clouds and technologies in a way that is seamless for business users.

Here are some of the top benefits of a hybrid cloud approach, according to the experts at Aventis Systems:

1. Workload Flexibility
With hybrid cloud technology, your IT team has the flexibility to match resources with the infrastructure that best serves the needs of your business.

For example, an integrated hybrid cloud approach with VMware Cloud on AWS enables you to decide where to most effectively run workloads based on cost, risk and changing business needs. With the flexibility to move workloads onsite or offsite as needed, IT is able to better serve the business as a whole.

The ability for organizations to easily transition applications without having to re-platform them — along with the ability to effortlessly access and leverage native cloud services — enables businesses to create a flexible infrastructure in a constantly evolving IT landscape. When new technology becomes available or new trends emerge, businesses are agile enough to take advantage of them quickly.

2. Consistency and Scalability
VMware Cloud on AWS enables companies to leverage operational consistency, along with scalability, on one streamlined platform. By maintaining security and networking policies, along with consistent resource utilization both on- and off-premise, businesses can benefit most from a hybrid infrastructure.

Customers can strategically leverage and allocate company resources to get the most out of system functionality, while becoming better positioned for growth. As your business and capacity needs grow, a hybrid cloud infrastructure offers an easy way to scale to fit these complex needs.

3. Improved Security
Maintaining secure customer transaction data and personal information on a hybrid cloud infrastructure also offers a major benefit over an exclusively public platform. The hybrid approach enables specific servers to be isolated from security threats by configuring devices to communicate with them only over a private network.

Where some compliance requirements prevent businesses from running payments in the cloud, for example, a hybrid cloud platform allows you to house secure customer data on a dedicated server, while maintaining the flexibility and convenience of online transactions.

4. Maximized Skillsets and Cost Optimization
Not only are hybrid clouds less expensive to manage; with VMware Cloud on AWS, business customers can also reap the benefits of their existing IT investments.

Hybrid cloud offerings integrate with your existing IT and use many of the same tools as those used on-premise. You can leverage the resources you already have without having to adopt new tools or acquire new hardware.

Additionally, an integrated hybrid cloud approach enables customers to better align their costs to business needs. Upfront costs can be balanced with recurring expenses, depending on the requirements.

5. Innovation
With a hybrid cloud approach, your business will have access to all of the resources on the public cloud without the burden of big upfront investments. With access to all the newest technologies and innovations, you can stay on the forefront of the latest capabilities.

As businesses become more comfortable and reliant on cloud capabilities, more and more companies will look for the right mix of public cloud and on-premise infrastructure models to increase efficiency and performance.


Remote Jenkins builds using Github auth

Having the ability to call Jenkins jobs remotely is pretty slick, adds some extra flexibility, and allows for some interesting applications.  For example, you could use remote builds to call a script from a chat app or from some other web application.  I have chosen to write a quick bash script as a proof of concept, but this could easily be extended or written in a different language using one of the language-specific Jenkins API libraries.

The instructions for the method I am using assume a Jenkins freestyle job, as I haven’t yet experimented much with pipelines for remote builds.

The first step is to enable remote builds for the Jenkins job that will be triggered.  There is an option in the job for “Trigger builds remotely” which allows the job to be called from a script.

[Screenshot: trigger remote builds]

The authentication token can be any arbitrary string you choose.  Also note the URL below; you will need it later as part of the script to call this job.

With the authentication token configured and the Jenkins URL recorded, you can begin writing the script.  The first step is to populate some variables for kicking off the job.  Below is an example of how you might do this.

jenkins_url="https://jenkins.example.com"
job_name="my-jenkins-job"
job_token="xxxxx"
auth="username:token"
my_repo="some_git_repo"
git_tag="abcd123"

Be sure to fill in these variables with your own corresponding values: job_token should correspond to the string you entered above in the Jenkins job, and auth should correspond to your GitHub username/token combination.  If you are not familiar with GitHub tokens you can find more information about setting them up here.
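As a quick sanity check (my own addition, not part of the original flow), you can verify that the username/token pair authenticates against the Jenkins API before wiring up the rest of the script:

# print only the HTTP status code of an authenticated API call
curl -s -o /dev/null -w "%{http_code}\n" --user "$auth" "$jenkins_url/api/json"

A 200 response means the credentials work; a 401 or 403 means something is off with the username/token pair.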

As part of the script, you will want to create a Jenkins “crumb” using your GitHub credentials, which is used to protect against cross-site request forgery (CSRF) attacks.  Here’s what the creation of the crumb looks like (borrowed from this Stack Overflow post).

# fetch a CSRF crumb of the form Jenkins-Crumb:xxxx, passed as a header below
crumb=$(curl -s 'https://'"$auth"'@jenkins.example.com/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')

Once you have your variables configured and your crumb all set up, you can test out the Jenkins job.

curl -X POST -H "$crumb" "$jenkins_url/job/$job_name/build?token=$job_token" \
  --user "$auth" \
  --data-urlencode json='
  {"parameter":
    [
      {"name":"parameter1", "value":"test1"},
      {"name":"parameter2", "value":"test2"},
      {"name":"git_repo", "value":"'"$my_repo"'"},
      {"name":"git_tag", "value":"'"$git_tag"'"}
    ]
  }'

In the example job above, I am using several Jenkins build parameters; the json name/value pairs correspond to those parameters.  Notice that I am splicing shell variables into a few of the values above; make sure the JSON around those variables stays wrapped in single quotes so it is escaped correctly.  This syntax is slightly more involved, but it allows for additional flexibility in the job configuration and lets the script be called with dynamic values.

If you call this script now, it should kick off a Jenkins job for you with all of the values you have provided.
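To make the trigger a little more robust, you can also capture the HTTP status code from the call and fail loudly when Jenkins rejects it.  This wrapper is my own sketch rather than part of the original example; in my experience Jenkins responds with a 201 when it queues the build.

# capture only the status code from the trigger request
status=$(curl -s -o /dev/null -w "%{http_code}" -X POST -H "$crumb" \
  "$jenkins_url/job/$job_name/build?token=$job_token" \
  --user "$auth" \
  --data-urlencode json='{"parameter":[{"name":"git_repo", "value":"'"$my_repo"'"},{"name":"git_tag", "value":"'"$git_tag"'"}]}')

if [ "$status" -ne 201 ]; then
  echo "Failed to trigger Jenkins job (HTTP $status)" >&2
  exit 1
fi
echo "Jenkins job queued successfully"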


Monitoring email flow with MFM

This is a sponsored post by the folks over at EveryCloud.  They have recently developed and released a new tool to help manage and troubleshoot email issues, which is starting to get some traction, especially among Exchange environments.  As a mail admin in a previous life, I can sympathize with the desire for better monitoring tools.  Here’s their post.


Managing mail flow is a challenge for every systems administrator, and the price of a mistake is very high. Any interruption in mail flow can spell disaster for a company, disrupting daily operations and leaving the management team, the IT team and the systems administrator scrambling for solutions.

While there are a number of mail flow solutions on the market, they tend to be quite pricey, making it difficult for systems administrators, especially those who work for small businesses and start-ups, to justify the cost.

For those who do not already know, the makers behind the EveryCloud mail flow monitor have recently launched a free service – Mail Flow Monitor (MFM). The EveryCloud MFM tool is the only free round-trip mail flow monitor on the market, giving systems administrators the ability to observe their organizations’ email systems 24 hours a day, 7 days a week and 365 days a year, all without spending a penny.

[Screenshot: MFM dashboard]

Some of the features of Mail Flow Monitor include:

  • A full-featured round-trip monitor, with start-to-finish email tracking and monitoring.
  • Real-time text and email alerts whenever a delay or rejection occurs, sent to your cell phone as well as to an alternative email address.
  • Timely monitoring, so issues can be addressed quickly, before they spiral out of control.
  • A test email sent every few minutes to a monitoring mailbox on your server; you set up a forward to send the emails back and the EveryCloud team does the rest (see the sketch after this list for one way to set up the forward).
  • A cloud-based service, which means there is nothing to update or manage.
  • A free Partner Area where MSPs and IT resellers can create an account and manage as many customers as they wish.
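If your monitoring mailbox happens to live on a Postfix server, that forward can be as simple as an alias entry. This is a minimal sketch under my own assumptions; both addresses below are hypothetical placeholders, and the real return address would come from your EveryCloud account setup.

# /etc/aliases -- forward the MFM probe messages back to EveryCloud
# (both addresses are hypothetical placeholders)
mfm-monitor: probe-return@everycloud.example

Run newaliases after editing the file so Postfix picks up the change; Exchange admins would accomplish the same thing with a mailbox forwarding rule.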

When you consider that competing mail filtering solutions generally cost about $30 a month, it is easy to see the savings potential. That $360 in annual savings may not seem like much, but since pricing is typically assessed per domain, the charges can add up quickly. In addition, the per-domain charges can make managing a complex IT operation difficult, an extra level of hard work that systems administrators do not need.

From the smallest startups to the largest multinational corporations, modern businesses live and die on their email. An unexpected email breakdown, significant bottleneck or major failure could make the firm’s email inaccessible and unreliable for hours or even days, and every minute of downtime is costing the company money.


Fix Google Analytics search queries in WordPress

I embarrassingly discovered the other day that I was not receiving metrics or analytics about keyword queries in the Google Analytics console.  It turns out the problem was twofold.  First, I didn’t have the SSL version of my site enabled in the Google webmaster tools, and second, I was serving a cached version of my sitemap that was several months out of date.

To give you an idea of how this issue manifested itself, and how I discovered that there were issues in the first place – here’s what my search keywords were showing as in the Google Analytics console.

[Screenshot: search queries]

Clearly the data is less than useful.  Luckily, this problem is pretty easy to fix.

Fixing the webmaster properties

Open up the Google webmasters site (you should have this set up already; if not, go ahead and get signed up and add your WordPress site).  If you have recently updated your site to use https, make sure you add a new property in the webmaster tools for the https version of your site that matches your http version.

Doing this will tell Google to keep track of search queries for the https version of your site, which should be the default after switching.  After adding the new https property and indexing it, give it a day or two to start collecting metrics, and check back.  Now when you check your search query traffic in the webmaster tools you should start to see all of the search results.

[Screenshot: search queries]

Also be sure to update the properties to use https in both the Webmaster console and the Analytics console.  For example, in the Analytics console under Admin -> Property settings -> default URL, there is an option to use http or https.  Likewise, there is a related Search Console setting buried in the Analytics interface under Admin -> Property settings -> Search Console.  Make sure you update ALL of the site settings to use https.

NOTE: It can take some time for queries to begin showing up in the Google Analytics console (it took about two days for them to start showing up for my site after fixing all the https references).

Fixing sitemaps issues

If you find that Google isn’t indexing and using all of your posts and pages, the next thing to look at is the sitemaps.  A quick way to know if your sitemaps file is doing its job is to pull open the sitemaps view, which can be found under the Crawl -> Sitemaps menu.

[Screenshot: webmaster tools sitemap]

The above shows what a healthy sitemap index looks like (after I corrected the problem).  There is a button located in the top left of this view that can help you test your sitemap while you are updating your settings.  First, check for any items in the “issues” column.  Also, if the “processed” date here isn’t recent, there is probably an issue.  One last thing to check: if there are either no entries in this view or fewer than you expect, something is probably not working.
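You can also spot-check the live sitemap from the command line to see whether a stale cached copy is being served.  Here is a quick sketch of my own, assuming the sitemap lives at the default location (substitute your own domain for example.com):

# print the five most recent lastmod entries from the live sitemap
curl -s https://example.com/sitemap.xml | grep -o "<lastmod>[^<]*</lastmod>" | head -5

If the most recent lastmod dates are months old even though you have published since, you are probably serving a cached sitemap.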

There are many more knobs and dials you can adjust in the webmaster tools, so if you haven’t played with them much I would recommend spending some time and poking around.

I should quickly mention that my solution assumes that you are using the Google XML Sitemaps plugin.  If you’re not using this plugin, and you are either 1) new to WordPress or 2) don’t want to manage your sitemaps file manually, I suggest you enable it.  It makes sitemap management so much easier.

After you have the plugin turned on, navigate to your blog settings for sitemaps, which can be found under Dashboard -> Settings -> XML-Sitemap.  Opening that menu should bring up a page similar to the following.

[Screenshot: XML sitemaps settings]

First, make sure everything looks correct in the settings.  If you are setting this up for the first time you might need to configure some of the settings.  For example, make sure the site name matches the listing, and the options to notify search indexers are all turned on.

When I was troubleshooting the search queries not showing up, I navigated to this menu and immediately noticed that the plugin was showing a warning about using a cached version of my sitemaps.xml file.  To fix this warning, there should be an option to remove the cached versions.

Next, there should be an option near the top of the sitemap settings to “notify search engines about your sitemap”.  After you have adjusted all the sitemap settings and cleared the cached sitemaps file, click that link to trigger a ping to the search indexers.

Be aware that the crawling process may take up to a few days to index and update, so be patient.
