Book Recommendations

Books I’ve read recently and would recommend:

Nudge: Improving Decisions about Health, Wealth, and Happiness

Culture Code: The Secrets of Highly Successful Groups

The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win

Rework

A Seat at the Table: IT Leadership in the Age of Agility

Configuring Persistent Storage with Docker and Kubernetes

With DevOps becoming one of the most widely used buzzwords in the industry, automated configuration management tools like Docker, Ansible, and Chef have been rapidly increasing in popularity and adoption. Docker in particular allows teams to create containerized versions of their applications that can be run without additional virtualization overhead, and it is widely supported by PaaS providers. Rather than revisit the value and challenges of using Docker, which are widely written about on the web, I’ll talk about a specific aspect of using Docker that can be tricky depending on where your Docker container is running – persistent storage.

If you’ve worked with Docker in the past, you know that one of its big advantages is the ability to deploy managed, self-contained deployments of your application. In this scenario, services like Kubernetes, Docker Swarm, and Apache Mesos can be used to build elastic infrastructure – infrastructure that scales automatically under peak load and contracts when idle, thereby meeting the demands of customers while utilizing infrastructure very efficiently. One thing to note when using Docker is that while it’s very easy to roll out upgrades and changes to containers, when a container is upgraded, it is recreated from scratch. This means anything saved to disk that is not part of the Docker image is deleted. Depending on the container manager you’re using, the configuration required to enable persistent storage can vary greatly. In a very simple example, I’ll detail how to enable persistent storage using standalone Docker, as well as while using Kubernetes on the Google Cloud Platform. This example assumes you have Docker installed, and have a basic understanding of Docker and Kubernetes concepts.

For this post, we’ll start with a simple Dockerfile based on the standard httpd image. The code for this example can be found on Github at:


If you’re starting from scratch, create a simple Dockerfile in your project directory:


FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/

RUN mkdir /data

You can see this will create an image based on the standard httpd image, copy the contents of the local public-html folder into the htdocs directory, and then create a folder at the filesystem root called /data.
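Note that the COPY step requires a public-html folder to exist in the project directory before building. If you don’t already have one, a placeholder page will do (the contents here are just an example):

```shell
# Create the folder the Dockerfile's COPY step expects,
# with a minimal placeholder page (contents are arbitrary)
mkdir -p public-html
echo '<html><body><h1>It works!</h1></body></html>' > public-html/index.html
```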

From our project directory, we can build an image based on this Dockerfile named “docker-storage-test” using the following command:

docker build -t docker-storage-test .

We can create a container using that image and run it on the fly using the following command:

docker run -t -i --name docker-storage-test-container docker-storage-test bash

That will create a container named “docker-storage-test-container” using our image named “docker-storage-test”. Because the -i and -t flags give us an interactive terminal, after executing that command we should be greeted with a command prompt inside the container. At that prompt, if we navigate to /data, we should find an empty directory.

root@c1522a53c755:/# cd data
root@c1522a53c755:/data# ls -a
.  ..

Let’s say we wanted to create some files in that /data folder and preserve them when upgrading our image. We’ll simulate that by doing the following:

root@c1522a53c755:/data# touch important-file.txt
root@c1522a53c755:/data# ls -a
.  ..  important-file.txt

To preserve our important files between upgrades, we’ll need to create persistent storage for our image. One way to do that with standalone Docker is to create a data volume container. We’ll reuse the same image from our original container and create a data volume container named “docker-storage-test-volume” mapped to the /data folder using the following command (the /bin/true at the end simply gives the container a harmless no-op command, since it exists only to hold the volume):

docker create -v /data --name docker-storage-test-volume docker-storage-test /bin/true

Before we can use our new data volume, we have to remove our old container using the following command (note that this also deletes the important-file.txt we just created – it lived only in that container’s writable layer):

docker rm docker-storage-test-container

To attach that data volume container to a new instance of our base container, we use the following command:

docker run -t -i --volumes-from docker-storage-test-volume --name docker-storage-test-container docker-storage-test bash
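As a sanity check, you can confirm the volume is attached by inspecting the new container from another terminal on the host (a sketch – this requires a running Docker daemon, and the template output can vary slightly by Docker version):

```shell
# Print the destination path of each mount attached to the container;
# /data should appear in the output
docker inspect -f '{{ range .Mounts }}{{ .Destination }} {{ end }}' docker-storage-test-container
```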

Same as before, we can navigate to our /data directory and create our important file using the following commands:

root@b170d2f08ff3:/# cd /data/
root@b170d2f08ff3:/data# touch important-file.txt
root@b170d2f08ff3:/data# ls -a
.  ..  important-file.txt

Now, we can upgrade the docker-storage-test image and create new containers based on it, and that file will be preserved:

docker rm docker-storage-test-container
docker run -t -i --volumes-from docker-storage-test-volume --name docker-storage-test-container docker-storage-test bash
root@00f17622393f:/# cd /data
root@00f17622393f:/data# ls -a
.  ..  important-file.txt


Google Cloud Platform’s Container Engine can be used to run Docker containers. As Google’s documentation states, Container Engine is powered by Kubernetes, an open-source container cluster manager originally written by Google. As previously mentioned, Kubernetes can be used to easily create scalable container-based solutions. This portion of the example assumes you have a Google Cloud Platform account with the gcloud and kubectl tools installed; if you don’t, directions can be found in Google’s documentation for each tool.

For this example, I’ll be using a project called “docker-storage-test-project”. I’ll call out where project names are to be used in the examples below. To enable persistent storage on the Google Cloud Platform’s Container Engine, we must first create a new container cluster.

From the Google Cloud Platform Container Engine view, click “Create Cluster”.


For this example, my cluster’s name will be “docker-storage-test-cluster”, with a cluster size of 1, using machines with 1 vCPU.

After creating the cluster, we’ll prepare our image for upload to Google Cloud Platform’s private Container Registry by tagging it with the registry path (substitute your own project name for “docker-storage-test-project”):

docker tag docker-storage-test gcr.io/docker-storage-test-project/docker-storage-test

After tagging, push the image to your private Google Cloud container registry using the following command:

gcloud docker push gcr.io/docker-storage-test-project/docker-storage-test

Create a persistent disk named “docker-storage-test-disk” using the gcloud SDK command below:

gcloud compute disks create --size 10GB docker-storage-test-disk

Verify the kubectl tool is configured correctly to connect to your newly created cluster. To do this, I used the following command:

gcloud container clusters get-credentials docker-storage-test-cluster

Run the image we’ve uploaded in our newly created cluster (again substituting your own project name):

kubectl run docker-storage-test --image=gcr.io/docker-storage-test-project/docker-storage-test --port=80

At this point, a Kubernetes deployment was created for us automatically. To mount the persistent disk we created earlier, we have to edit that deployment. The easiest way to do this is to bring up the current deployment in an editor (kubectl edit opens it in vim by default) and create a local copy from its contents. To do that, use the following command:

kubectl edit deployment docker-storage-test

Copy and paste that content into a new file. For this example, I’ve pasted the contents into a file named “kubernetes_deployment.yml” in my project folder.

Add a volumes entry to the pod spec – this should be at the same level as “containers:”. I added mine at the bottom. Note that “pdName” must equal the name of the persistent disk you created earlier, and “name” is what the volume mount we’ll create next refers to:

      volumes:
      - name: docker-storage-test-disk
        gcePersistentDisk:
          # This disk must already exist.
          pdName: docker-storage-test-disk
          fsType: ext4

Now add a volumeMounts entry to the container config:

        volumeMounts:
        # This name must match the volume name defined above.
        - name: docker-storage-test-disk
          mountPath: /data
        resources: {}
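Putting both edits together, the relevant portion of kubernetes_deployment.yml ends up looking roughly like this (a sketch – your generated deployment will contain additional fields such as metadata and labels, which should be left as generated, and the image path assumes the example project name):

```yaml
    spec:
      containers:
      - name: docker-storage-test
        image: gcr.io/docker-storage-test-project/docker-storage-test
        ports:
        - containerPort: 80
        resources: {}
        volumeMounts:
        # Must match the volume name below
        - name: docker-storage-test-disk
          mountPath: /data
      volumes:
      - name: docker-storage-test-disk
        gcePersistentDisk:
          # This disk must already exist
          pdName: docker-storage-test-disk
          fsType: ext4
```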

Delete and recreate our deployments, this time using the new kubernetes deployment file we’ve created, by using the following commands:

kubectl delete service,deployments docker-storage-test
kubectl create -f kubernetes_deployment.yml

Now, let’s test our configuration by attaching to the docker-storage-test container in the pod we’ve just created, creating a file in the /data directory, recreating the deployment, and checking for the file’s presence, using the following commands:

First, get your pod name:

kubectl get pods

Then attach to the pod and container. My pod’s name is “docker-storage-test-846338785-jbjk8”

kubectl exec -it docker-storage-test-846338785-jbjk8 -c docker-storage-test bash

root@docker-storage-test-846338785-jbjk8:/usr/local/apache2# cd /data
root@docker-storage-test-846338785-jbjk8:/data# touch important-file.txt
root@docker-storage-test-846338785-jbjk8:/data# ls -l
total 16
-rw-r--r-- 1 root root     0 Jun  6 04:04 important-file.txt
drwx------ 2 root root 16384 Jun  6 04:02 lost+found
root@docker-storage-test-846338785-jbjk8:/data# exit

We’ve got an important file – now delete the deployment and recreate it, simulating the effect that upgrading your container’s image would have:

kubectl delete service,deployments docker-storage-test
kubectl create -f kubernetes_deployment.yml

Get your pod name again. Mine is “docker-storage-test-846338785-u2jji”. Connect to the pod and browse to the data directory. We’ll see if our file is there:

kubectl exec -it docker-storage-test-846338785-u2jji -c docker-storage-test bash

root@docker-storage-test-846338785-u2jji:/usr/local/apache2# cd /data
root@docker-storage-test-846338785-u2jji:/data# ls -l
total 16
-rw-r--r-- 1 root root     0 Jun  6 04:04 important-file.txt
drwx------ 2 root root 16384 Jun  6 04:02 lost+found


These are just two of the many ways to configure persistent storage using Docker and container-related technologies – they happen to be the two I figured out in my recent explorations. Many more can be found in both the Docker and Kubernetes documentation.
The next post may not be out for a while, but based on the trends of my current work, it’s sure to be IoT-based. Stay tuned for more.

Discovering ievms For Testing With IE

I know it, you know it. We don’t like to talk about it. The app is slow in IE. We need to support a particular version of IE. The app doesn’t function properly in IE. Who checked in the trailing comma?

You’re developing in OSX, and you need to test your app using IE. While there are plenty of ways to accomplish this, thanks to ievms there is at least one easy, free, recently-improved way to do it.

It’s all on the page – all you need is curl and VirtualBox to get started. By default, the author provides an automated, copy/paste, single-command way to install WinXP with IE6, IE7, and IE8, Win7 with IE9, and Win8 with IE10 in a vanilla VirtualBox install. It’s literally a copy, a paste, and a press of Enter in a Terminal window. It’s beautiful. While the default command reuses the WinXP VM for IE6, IE7, and IE8 (thus saving disk space), the page also has instructions for installing each version of Windows with its debut version of IE – which additionally gets you IE7 on Vista and IE8 on Win7 – controlled via the REUSE_XP environment variable.

It’s working great so far for me.


Using Jenkins build parameters in SCM fields

At my current client, I’m doing some prototype work and controlling my CI process with Jenkins. I’ve been using Jenkins/Hudson for years, but usually with a group of folks, and not while flying solo. After installing the requisite Chuck Norris plugin (5 projects in a row!) and setting up all of the typical jobs, I thought it might be handy to have a job that would prepare a ‘release’ – a checkout and build of a specific tag from CVS, where the tag is provided via Jenkins’ parameterized build feature.

So, I set up a parameterized build and created a string parameter called ‘TAG’. I thought I’d just drop ‘${TAG}’ into the ‘Branch’ field in the Source Code Management section of my job config. Shortly thereafter, I ran into my first unresolved Jenkins issue – JENKINS-3230: build parameters aren’t available in the Source Code Management section of the job config. The ticket doesn’t appear to have moved in a while, and being in a jam, I thought I’d work around it first, then revisit contributing a fix or finding a more permanent solution later.

Fortunately, the workaround is simple. Just set the Source Code Management section of the job config to ‘None’ and replace the checkout with equivalent (in my case) ‘Execute Windows batch command’ build steps.

For me, the command ended up looking like this:

cvs -Q -z3 -d :pserver:me@myserver:2402/cvs/repositories/myrepository co -P -r %TAG%

If it’s a maven build and you’re executing top-level POMs, you’ll also have to change subsequent ‘Invoke top-level Maven targets’ sections in your config to specify the path to the parent project’s POM in the POM field, instead of just using the default (pom.xml).

It’s just a bit more maintenance (and kind of a hassle to have to remember all of the CVS parameters), but it works!

…on the virtues of numbers divisible by many other numbers (especially 960)

Many months ago, I came across an article about the 960 grid system. I excitedly read more about it, thinking that it could help me modernize my page design skills relatively easily as well as provide a quick way to get prototype layouts up. I’m finally getting a chance to use it now while working on a site for a relative’s small business in my spare time. While it’s great to write the comprehensive, go-to post for quick reference on an emerging tool or technology, for the 960 grid system it’s already been done.

I’ll link to it instead. From Six Revisions:

The 960 Grid System Made Easy

Between this article and the examples that come with the download – which includes templates in a whole slew of formats – you design-challenged back-end developers should be in luck. I certainly was.