Getting Started with Docker on Azure

Intro & Prereqs
We’re seeing a pretty significant uptick in the use of Azure as the cloud provider of choice among clients. As organizations move to hybrid and/or multi-provider clouds, Docker plays a key role in abstracting underlying platform configuration details away from implementations, allowing developers to build consistently-functioning solutions that can be tested and run in identical configurations, and that can be reliably deployed to disparate environments. In this post we’ll cover running Docker containers on Azure using Docker Machine and using Docker storage volumes for persistent storage.

Since Azure doesn’t yet have a dedicated container service like AWS and GCP, we’ll need to rely on Docker Machine to get the job done. Docker Machine lets us install and control the Docker service on local and remote VMs. We’ll configure a Docker host and use the same Dockerfile we’ve used in previous posts to test our solution. Before we jump in, we’ll need to have the following items installed:

Docker: https://docs.docker.com/engine/installation/

  • (the docker and docker-machine commands should be available from your CLI)

Mongo: https://docs.mongodb.com/manual/installation/

  • (the mongod command should be available from your CLI)

MEANjs.org Yeoman Generator: http://meanjs.org/generator.html

  • (the yo command should be available from your CLI, and you should have a generator named ‘meanjs’ installed)

Setting up the project
To get started, navigate to an empty directory and generate a new MEAN-stack app using the MEANjs.org Yeoman generator and the following command and options:

Command:

yo meanjs

Options:

You're using the official MEAN.JS generator.
? What mean.js version would you like to generate? 0.4.2
0.4.2
? In which folder would you like the project to be generated? This can be changed later. mean
Cloning the MEAN repo.......
? What would you like to call your application? mean-test
? How would you describe your application? Full-Stack JavaScript with MongoDB, Express, AngularJS, and Node.js
? How would you describe your application in comma seperated key words? MongoDB, Express, AngularJS, Node.js
? What is your company/author name? Justin Rodenbostel
? Would you like to generate the article example CRUD module? Yes
? Would you like to generate the chat example module? No
Running npm install for you....
This may take a couple minutes.

------------------------------------------
Your MEAN.js application is ready!

To Get Started, run the following command:

cd mean && grunt

Happy Hacking!
------------------------------------------

Next, we need to create directories to house our Mongo data, and we need to start the Mongo server. Navigate to your project directory (using the names above, it should be in the ‘mean’ directory relative to where you ran the last command) and create directories so that your project contains the following folders:

  • <project dir>/data
  • <project dir>/data/db
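
If you’re on a Unix-like system, a single command run from the project directory creates both:

mkdir -p data/db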

While in your project directory, start the Mongo server using the following command:

mongod --dbpath data/db

To confirm your database has started properly, you should see output similar to the following:

2016-08-24T22:03:19.039-0500 I JOURNAL  [initandlisten] journal dir=data/db/journal
2016-08-24T22:03:19.040-0500 I JOURNAL  [initandlisten] recover : no journal files present, no recovery needed
2016-08-24T22:03:19.054-0500 I JOURNAL  [durability] Durability thread started
2016-08-24T22:03:19.054-0500 I JOURNAL  [journal writer] Journal writer thread started
2016-08-24T22:03:19.054-0500 I CONTROL  [initandlisten] MongoDB starting : pid=7300 port=27017 dbpath=data/db 64-bit host=Justins-MacBook-Pro.local
2016-08-24T22:03:19.054-0500 I CONTROL  [initandlisten]
2016-08-24T22:03:19.054-0500 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
2016-08-24T22:03:19.054-0500 I CONTROL  [initandlisten] db version v3.0.7
2016-08-24T22:03:19.054-0500 I CONTROL  [initandlisten] git version: nogitversion
2016-08-24T22:03:19.054-0500 I CONTROL  [initandlisten] build info: Darwin elcapitanvm.local 15.0.0 Darwin Kernel Version 15.0.0: Wed Aug 26 16:57:32 PDT 2015; root:xnu-3247.1.106~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
2016-08-24T22:03:19.054-0500 I CONTROL  [initandlisten] allocator: system
2016-08-24T22:03:19.054-0500 I CONTROL  [initandlisten] options: { storage: { dbPath: "data/db" } }
2016-08-24T22:03:19.060-0500 I INDEX    [initandlisten] allocating new ns file data/db/local.ns, filling with zeroes...
2016-08-24T22:03:19.093-0500 I STORAGE  [FileAllocator] allocating new datafile data/db/local.0, filling with zeroes...
2016-08-24T22:03:19.093-0500 I STORAGE  [FileAllocator] creating directory data/db/_tmp
2016-08-24T22:03:19.190-0500 I STORAGE  [FileAllocator] done allocating datafile data/db/local.0, size: 64MB,  took 0.096 secs
2016-08-24T22:03:19.215-0500 I NETWORK  [initandlisten] waiting for connections on port 27017

Now that the database server is running we can start our new application using the following command (again from the project directory):

grunt

To confirm the application is up and running, open a browser window and navigate to http://localhost:3000, where you should see something similar to the screenshot below:

[Screenshot: the MEAN.js sample application running at http://localhost:3000]

Now that we have a running application, we can start to build out the Docker Machine we’ll deploy it to.

Provisioning the Azure VM
To start using Azure with Docker, we need to create a Docker Machine configuration that uses a virtual machine in Azure. Using the docker-machine command line tools, we’ll create an Azure Resource Group (complete with a virtual machine, NIC, network security group, public IP, virtual network, storage account, and availability set) using the “Azure driver”. It’s pretty simple. Just use the command below:

docker-machine create -d azure \
  --azure-ssh-user ops \
  --azure-subscription-id <azure subscription id> \
  --azure-open-port 3000 \
  --azure-image canonical:UbuntuServer:14.04.3-LTS:latest \
  azure-test

This will take a few minutes!

Some items to note in the command above:

  • You must replace <azure subscription id> in the command above with your own Azure subscription ID
  • We’ve chosen to leave port 3000 open for our application’s development mode
  • We’ve chosen Ubuntu 14.04 LTS as the image for our host machine
  • We’ve named the Docker Machine ‘azure-test’

When the machine creation is complete, use the following command to verify that the Docker Machine is available for configuration:

docker-machine ls

Also make sure your local Docker client is pointed at the newly created machine with the following command:

eval $(docker-machine env azure-test)
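
As a quick sanity check, you can confirm which machine your shell is now pointed at:

# Print the name of the machine docker-machine considers active
docker-machine active

# Show the environment variables the eval command exported
env | grep DOCKER_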

Deployment
Before we go further with Docker, let’s make a few changes to our app’s Dockerfile so we can run Mongo in our container, and start both the MEANjs.org app and Mongo with one command. In your project’s Dockerfile, replace this line:

FROM node:0.12

which indicates which base image to start with, with these lines:

FROM node:0.12.14-wheezy

# Install Mongo
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/debian wheezy/mongodb-org/3.2 main" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update

RUN apt-get install -y mongodb-org

# Install Supervisord
RUN apt-get install -y supervisor

In this block, we’re performing the following activities:

  • Adding the apt-get repo and relevant keys for the Mongo binary repository.
  • Installing MongoDB using apt-get
  • Installing Supervisord (a lightweight process control system that we’ll configure later) using apt-get

Near the middle of the same file, replace this line:

WORKDIR /home/mean

With these:

WORKDIR /home/mean

RUN mkdir -p /var/log/supervisor
RUN mkdir -p /data/db
RUN mkdir ./logs

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

In this block, we’re performing the following activities:

  • Creating the log directory for Supervisord
  • Creating the data directory for MongoDB
  • Creating the logs directory for the MEANjs.org application

Near the bottom of the same file, replace these lines:

# Port 3000 for server
# Port 35729 for livereload
EXPOSE 3000 35729
CMD ["grunt"]

With these:

EXPOSE 3000
CMD ["/usr/bin/supervisord"]

In this block, we’re performing the following activities:

  • Telling Docker to open port 3000 for this image
  • Running Supervisord to “start” the image

You may have noticed that one of the commands we added earlier is copying a file named supervisord.conf from the root of our local project directory to the docker container. Let’s create that file, and add the following content:

[supervisord]
nodaemon=true

[program:mongod]
command=mongod --dbpath /data/db
redirect_stderr=true

[program:grunt]
command=grunt prod --force
redirect_stderr=true

Supervisord is a lightweight process control system commonly used in Linux environments. In this file, we’re instructing Supervisord to run in the foreground (nodaemon=true) and to manage both mongod and our MEANjs.org Node app as child processes, redirecting each process’s stderr to its stdout log.

With a configured docker-machine, we’re ready to build and deploy our containers. Start by using the following command to build your docker image:

docker build -t mean-test .

Next, tag the current build with the following command, replacing <username> with your Docker Hub username. This will assign the image we just built the ‘latest’ tag.

docker tag mean-test <username>/mean-test:latest

Next, push the newly tagged image to your repository on Docker Hub using the following command:

docker push <username>/mean-test:latest

Now we can create our storage volume and application container. Use the following command to create a named data volume container using the image you’ve just pushed to Docker Hub, again replacing <username> with your Docker Hub username:

docker create -v /data --name mean-test-storage <username>/mean-test:latest /bin/true

Create the application container, linking it to the previously created storage volume and mapping our previously opened HTTP port (3000), using the command below, again replacing <username> with your Docker Hub username:

docker run -d -p 3000:3000 --volumes-from mean-test-storage --name mean-test <username>/mean-test:latest
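
If the container doesn’t come up as expected, the combined output of Supervisord, Mongo, and grunt is available through the Docker logs. The container name below is the one we assigned in the run command above:

# Follow the combined output of supervisord and its child processes
docker logs -f mean-test

# Or open a shell inside the running container to investigate
docker exec -it mean-test bash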

Test whether your app is up and running by navigating to it in a browser. If you are not sure what the public IP of your machine is, you can print its configuration details in the console by running the following command:

docker-machine env azure-test
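
Alternatively, docker-machine can print just the machine’s public IP:

docker-machine ip azure-test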

Navigate to your site using the public IP and port 3000, and you should see the same screen we saw when we ran the app locally. Pretty easy!

Conclusion
I hope this post has provided a simple overview of how to get started with Docker Machine on Azure, and I hope the use of a full-stack application as an example provides insight beyond the basic tutorials available elsewhere on the web.

Code for this tutorial is available on Github at: https://github.com/jrodenbostel/getting-started-with-docker-on-azure

Time to wrap up some side projects and get back to learning more about using the ELK stack, Elm, and functional programming patterns! Stay tuned.

 

Configuring Persistent Storage with Docker and Kubernetes

With DevOps becoming one of the most widely-used buzzwords in the industry, automated configuration management tools like Docker, Ansible, and Chef have been rapidly increasing in popularity and adoption. Docker in particular allows teams to create containerized versions of their applications that can be run without additional virtualization overhead, and it is widely supported by PaaS providers. Rather than revisit the value and challenges of using Docker, which is widely written about on the web (good example here: http://venturebeat.com/2015/04/07/docker-in-the-enterprise-the-value-and-the-challenges/), I’ll talk about a specific aspect of using Docker that can be tricky depending on where your Docker container is running – Persistent Storage.

If you’ve worked with Docker in the past or followed the link above, you’d know that one of the big advantages of using Docker is the ability to deploy managed, self-contained deployments of your application. In this scenario, services like Kubernetes, Docker Swarm, and Apache Mesos can be used to build elastic infrastructure – infrastructure that scales automatically when under peak loads, and that contracts when idle, thereby meeting the demands of customers while utilizing infrastructure in a very efficient manner. One thing to note when using Docker is that while it’s very easy to roll out upgrades and changes to containers, when a container is upgraded, it is recreated from scratch. This means anything that is saved to disk that is not part of the Docker manifest is deleted. Depending on the container manager you’re using, the configuration required to enable persistent storage can vary greatly. In a very simple example, I’ll detail how to enable persistent storage using standalone Docker, as well as while using Kubernetes on the Google Cloud Platform. This example assumes you have Docker installed, and have a basic understanding of Docker and Kubernetes concepts.

For this post, we’ll start with a simple Dockerfile based on the standard httpd image. The code for this example can be found on Github at: https://github.com/jrodenbostel/persistent-storage-with-docker.

Docker

If you’re starting from scratch, create a simple Dockerfile in your project directory:

Dockerfile

FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/

RUN mkdir /data

You can see this will create an image based on the standard httpd image, copy the contents of the local public-html folder into the htdocs directory, and then create a folder at the filesystem root called /data.

From our project directory, we can build an image based on this Dockerfile named “docker-storage-test” using the following command:

docker build -t docker-storage-test .

We can create a container using that image and run it on the fly using the following command:

docker run -t -i --name docker-storage-test-container docker-storage-test

That will create a container named “docker-storage-test-container” using our image named “docker-storage-test”. Because the -i flag puts us in interactive mode, after executing that command, we should be greeted with a command prompt inside the container. At that prompt, if we navigate to /data, we should find an empty directory.

root@c1522a53c755:/# cd data
root@c1522a53c755:/data# ls -a
.  ..
root@c1522a53c755:/data#

Let’s say we wanted to create some files in that /data folder and preserve them when upgrading our image. We’ll simulate that by doing the following:

root@c1522a53c755:/data# touch important-file.txt
root@c1522a53c755:/data# ls -a
.  ..  important-file.txt
root@c1522a53c755:/data#

To preserve our important files between upgrades, we’ll need to create persistent storage for our image. One way to do that with standalone Docker is to create a data volume container. We’ll reuse the same image from our original container, and create a data volume container named “docker-storage-test-volume” mapped to the /data folder using the following command:

docker create -v /data --name docker-storage-test-volume docker-storage-test /bin/true

Before we can use our new data volume, we have to remove our old container using the following command:

docker rm docker-storage-test-container

To attach that data volume container to a new instance of our base container, we use the following command:

docker run -t -i --volumes-from docker-storage-test-volume --name docker-storage-test-container docker-storage-test

Same as before, we can navigate to our /data directory and create our important file using the following commands:

root@b170d2f08ff3:/# cd /data/
root@b170d2f08ff3:/data# touch important-file.txt
root@b170d2f08ff3:/data# ls -a
.  ..  important-file.txt

Now, we can upgrade the docker-storage-test image and create new containers based on it, and that file will be preserved:

docker rm docker-storage-test-container
docker run -t -i --volumes-from docker-storage-test-volume --name docker-storage-test-container docker-storage-test
root@00f17622393f:/# cd /data
root@00f17622393f:/data# ls -a
.  ..  important-file.txt
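
As an aside, you can peek at where Docker is keeping this data on the host. The volume backing /data shows up in the volume list (its name will be an auto-generated hash), and inspecting the data volume container reveals the host path behind the mount:

# List volumes managed by this Docker host
docker volume ls

# Show the mounts attached to the data volume container, including the host path backing /data
docker inspect -f '{{ json .Mounts }}' docker-storage-test-volume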

Kubernetes

Google Cloud Platform’s Container Engine can be used to run Docker containers. As Google’s documentation states, the Container Engine is powered by Kubernetes. Kubernetes is an open-source container cluster manager originally written by Google. As previously mentioned, Kubernetes can be used to easily create scalable container based solutions. This portion of the example assumes you have a Google Cloud Platform account with the appropriate gcloud and kubectl tools installed. If you don’t, directions can be found at the links below:

https://cloud.google.com/sdk/

https://cloud.google.com/container-registry/docs/before-you-begin

For this example, I’ll be using a project called “docker-storage-test-project”. I’ll call out where project names are to be used in the examples below. To enable persistent storage on the Google Cloud Platform’s Container Engine, we must first create a new container cluster.

From the Google Cloud Platform Container Engine view, click “Create Cluster”.

[Screenshot: the “Create Cluster” button in the Google Cloud Platform Container Engine console]

For this example, my cluster’s name will be “docker-storage-test-cluster”, with a size of 1, using 1 vCPU machines.

After creating the cluster, we’ll prepare our image for upload to Google Cloud Platform’s private Container Registry by tagging it using the following command:

docker tag docker-storage-test gcr.io/docker-storage-test-project/docker-storage-test

After tagging, push the image to your private Google Cloud container registry using the following command:

gcloud docker push gcr.io/docker-storage-test-project/docker-storage-test

Create a persistent disk named “docker-storage-test-disk” using the gcloud SDK command below:

gcloud compute disks create --size 10GB docker-storage-test-disk
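
You can confirm the disk was created, and note which zone it landed in, with:

gcloud compute disks list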

Verify the kubectl tool is configured correctly to connect to your newly created cluster. To do this, I used the following command:

gcloud container clusters get-credentials docker-storage-test-cluster
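
To double-check that kubectl is now talking to the new cluster, the following commands are a quick sanity check:

kubectl config current-context
kubectl cluster-info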

Run the image we’ve uploaded in our newly created cluster:

kubectl run docker-storage-test --image=gcr.io/docker-storage-test-project/docker-storage-test:latest --port=80

At this point, a Kubernetes deployment is created for us automatically. To mount the persistent disk we created earlier, we have to edit the deployment. The easiest way to do this is to open the deployment with kubectl and create a local copy of it using the current contents. To do that, bring up the contents of the current deployment using the following command:

kubectl edit deployment docker-storage-test

Copy and paste that content into a new file. For this example, I’ve pasted the contents into a file named “kubernetes_deployment.yml” in my project folder.

Add a volumes entry to the spec config – this should be at the same level as “containers:”. I added mine at the bottom. Note that “pdName” must equal the name of the persistent disk you created earlier, and “name” must map to the section we’ll create next:

volumes:
  - name: docker-storage-test-disk
    gcePersistentDisk:
      # This disk must already exist.
      pdName: docker-storage-test-disk
      fsType: ext4

Now add a volumeMount entry to the container config:

        volumeMounts:
          # This name must match the volumes.name below.
          - name: docker-storage-test-disk
            mountPath: /data
        resources: {}

Delete and recreate our deployments, this time using the new kubernetes deployment file we’ve created, by using the following commands:

kubectl delete service,deployments docker-storage-test
kubectl create -f kubernetes_deployment.yml

Now let’s test our configuration: we’ll attach to the docker-storage-test container in the pod we’ve just created, create a file in the /data directory, recreate the deployment, and check for the file’s presence using the following commands:

First, get your pod name:

kubectl get pods

Then attach to the pod and container. My pod’s name is “docker-storage-test-846338785-jbjk8”

kubectl exec -it docker-storage-test-846338785-jbjk8 -c docker-storage-test bash

root@docker-storage-test-846338785-jbjk8:/usr/local/apache2# cd /data
root@docker-storage-test-846338785-jbjk8:/data# touch important-file.txt
root@docker-storage-test-846338785-jbjk8:/data# ls -l
total 16
-rw-r--r-- 1 root root     0 Jun  6 04:04 important-file.txt
drwx------ 2 root root 16384 Jun  6 04:02 lost+found
root@docker-storage-test-846338785-jbjk8:/data# exit

We’ve got an important file – now delete the deployment and recreate it; this simulates the effect that upgrading your container’s image would have:

kubectl delete service,deployments docker-storage-test
kubectl create -f kubernetes_deployment.yml

Get your pod name again. Mine is “docker-storage-test-846338785-u2jji”. Connect to the pod and browse to the data directory. We’ll see if our file is there:

kubectl exec -it docker-storage-test-846338785-u2jji -c docker-storage-test bash

root@docker-storage-test-846338785-u2jji:/usr/local/apache2# cd /data
root@docker-storage-test-846338785-u2jji:/data# ls -l
total 16
-rw-r--r-- 1 root root     0 Jun  6 04:04 important-file.txt
drwx------ 2 root root 16384 Jun  6 04:02 lost+found
root@docker-storage-test-846338785-u2jji:/data#

Conclusion

These are just two of the many ways to configure persistent storage using Docker and container-related technologies – they happen to be the two I had to figure out in my recent explorations. Many more can be found in both the Docker and Kubernetes documentation.
The next post may not be out for a while, but based on the trends of my current work, it’s sure to be IoT-based. Stay tuned for more.

Building a Simple REST API with Scala & Play! (Part 3)

In this 3 part series, we’ll cover creating a basic Play! REST API on top of Reactive Mongo. Full source code for this tutorial is available at https://github.com/jrodenbostel/getting-started-play-scala.

Welcome back!

We finished parts 1 & 2, which started with a description of the tools we’ll be using, and concluded with a fully functioning REST API built in Play! on top of a Reactive Mongo back-end. In part 3, we’ll cover the use of Spec2 and Mockito to write automated unit and integration tests for our application.

Integration Testing

specs2 provides a DSL for BDD-style test specs. One variation of these specs, WithBrowser, actually runs the code in a headless browser, allowing you to end-to-end test your application in an automated fashion. To get started, open the ‘/test/IntegrationSpec.scala’ file, which was created as part of our seed project. Update it to include the following:

import org.junit.runner.RunWith
import org.specs2.mutable.Specification
import org.specs2.runner.JUnitRunner
import play.api.test.WithBrowser

@RunWith(classOf[JUnitRunner])
class IntegrationSpec extends Specification {

  "Application" should {

    "work from within a browser" in new WithBrowser {

      browser.goTo("http://localhost:" + port)

      browser.pageSource must contain("Your database is ready.")
    }

    "remove data through the browser" in new WithBrowser {

      browser.goTo("http://localhost:" + port + "/cleanup")

      browser.pageSource must contain("Your database is clean.")
    }
  }
}

You’ll notice a spec for each Application Controller function, which simply visits the relevant URI and matches the response string. Tests can be executed from the Activator UI or by running ‘activator test’ from the command line from within your project folder.
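
For reference, the command-line variants look like this (the testOnly form is standard sbt usage passed through Activator, and works here because IntegrationSpec lives in the default package):

# Run the entire test suite
activator test

# Run a single spec
activator "testOnly IntegrationSpec"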

Unit Testing

Unit testing is a bit more complex because we’ll test each controller by mocking the relevant repository method. Assertions for unit tests are more straightforward, as in most cases, we’re simply inspecting HTTP status codes. Update the ‘test/ApplicationSpec.scala’ file to include the following:

import controllers.{routes, Widgets}
import org.junit.runner.RunWith
import org.specs2.mock.Mockito
import org.specs2.mutable.Specification
import org.specs2.runner.JUnitRunner
import play.api.libs.json.{JsArray, Json}
import play.api.mvc.{Result, _}
import play.api.test.Helpers._
import play.api.test.{WithApplication, _}
import play.modules.reactivemongo.ReactiveMongoApi
import reactivemongo.api.commands.LastError
import reactivemongo.bson.BSONDocument
import repos.WidgetRepoImpl

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{ExecutionContext, Future}

@RunWith(classOf[JUnitRunner])
class ApplicationSpec extends Specification with Results with Mockito {

  val mockRecipeRepo = mock[WidgetRepoImpl]
  val reactiveMongoApi = mock[ReactiveMongoApi]
  val documentId = "56a0ddb6c70000c700344254"
  val lastRequestStatus = new LastError(true, None, None, None, 0, None, false, None, None, false, None, None)

  val oatmealStout = Json.obj(
        "name" -> "Widget One",
        "description" -> "My first widget",
        "author" -> "Justin"
      )

  val posts = List(
    oatmealStout,
    Json.obj(
      "name" -> "Widget Two: The Return",
      "description" -> "My second widget",
      "author" -> "Justin"
    ))

  class TestController() extends Widgets(reactiveMongoApi) {
    override def widgetRepo: WidgetRepoImpl = mockRecipeRepo
  }

  val controller = new TestController()

  "Application" should {

    "send 404 on a bad request" in {
      new WithApplication() {
        route(FakeRequest(GET, "/boum")) must beSome.which(status(_) == NOT_FOUND)
      }
    }

    "Recipes#delete" should {
      "remove recipe" in {
        mockRecipeRepo.remove(any[BSONDocument])(any[ExecutionContext]) returns Future(lastRequestStatus)

        val result: Future[Result] = controller.delete(documentId).apply(FakeRequest())

        status(result) must be equalTo ACCEPTED
        there was one(mockRecipeRepo).remove(any[BSONDocument])(any[ExecutionContext])
      }
    }

    "Recipes#list" should {
      "list recipes" in {
        mockRecipeRepo.find()(any[ExecutionContext]) returns Future(posts)

        val result: Future[Result] = controller.index().apply(FakeRequest())

        contentAsJson(result) must be equalTo JsArray(posts)
        there was one(mockRecipeRepo).find()(any[ExecutionContext])
      }
    }

    "Recipes#read" should {
      "read recipe" in {
        mockRecipeRepo.select(any[BSONDocument])(any[ExecutionContext]) returns Future(Option(oatmealStout))

        val result: Future[Result] = controller.read(documentId).apply(FakeRequest())

        contentAsJson(result) must be equalTo oatmealStout
        there was one(mockRecipeRepo).select(any[BSONDocument])(any[ExecutionContext])
      }
    }

    "Recipes#create" should {
      "create recipe" in {
        mockRecipeRepo.save(any[BSONDocument])(any[ExecutionContext]) returns Future(lastRequestStatus)

        val request = FakeRequest().withBody(oatmealStout)
        val result: Future[Result] = controller.create()(request)

        status(result) must be equalTo CREATED
        there was one(mockRecipeRepo).save(any[BSONDocument])(any[ExecutionContext])
      }
    }

    "Recipes#update" should {
      "update recipe" in {
        mockRecipeRepo.update(any[BSONDocument], any[BSONDocument])(any[ExecutionContext]) returns Future(lastRequestStatus)

        val request = FakeRequest().withBody(oatmealStout)
        val result: Future[Result] = controller.update(documentId)(request)

        status(result) must be equalTo ACCEPTED
        there was one(mockRecipeRepo).update(any[BSONDocument], any[BSONDocument])(any[ExecutionContext])
      }
    }
  }
}

You’ll notice the class starts with the creation of Mocks and example data – this is very straightforward and was pulled from examples of real BSON data from Mongo. In each spec, you’ll also notice the same pattern: mocks are configured, methods are exercised, and assertions are made. Very similar to other BDD-style test frameworks like Jasmine and Rspec.

Conclusion

Reactive programming and related frameworks, especially those of the functional variety, represent a significant shift in the way we write applications. There are many new patterns, tools, and programming styles that a developer must become familiar with in order to write applications in the reactive style effectively. Many of us, myself included, are just starting to get opportunities to do so. Although these tools may not be appropriate for every solution, becoming familiar with the underlying concepts will help individuals become more well-rounded as developers and help ensure that scalable, resilient, responsive architectures are adopted when necessary.

Building a Simple REST API with Scala & Play! (Part 2)

In this 3 part series, we’ll cover creating a basic Play! REST API on top of Reactive Mongo.  Full source code for this tutorial is available at https://github.com/jrodenbostel/getting-started-play-scala.

Welcome back!

If you’re coming in fresh, and need some instructions on getting the appropriate tools installed and creating a shell of an environment, please refer to part 1.  In part 2, we’ll cover adding our first API functions as asynchronous actions in a Play! controller, as well as define our first data access functions.
Because we started with a seed project, we got some bonus cruft in our application in the form of a views package.  While Scala templates are a fine way to create views for your application, for this tutorial, we’ll be building a RESTful API that responds with JSON strings. Start by deleting the ‘views’ folder at ‘app/views’ such that:
[Screenshot: the project structure after deleting the app/views folder]
Note that this will break the default controller that came as part of the project seed.  To remedy this, update the default controller action response to simply render text, by replacing:
Ok(views.html.index("Your new application is ready."))
with:
Ok("Your new application is ready.")

Creating the controller

Next, we’ll create our controller.  Create a new file in the ‘app/controllers’ folder named ‘Widgets.scala’.  This will house the RESTful actions associated with Widgets.  We’ll also add default methods to this controller for the RESTful actions that we’ll implement later.
package controllers

import play.api.mvc._

class Widgets extends Controller {

  def index = TODO

  def create = TODO

  def read(id: String) = TODO

  def update(id: String) = TODO

  def delete(id: String) = TODO
}
Play! comes with a default “TODO” page for controller actions that have not yet been implemented.  It’s a nice way to keep your app functioning and reloading while you’re building out functionality incrementally.  Before we can see that default “TODO” page, we must first add routes to the application configuration that reference our new controller and its not-yet-implemented actions.
Update /conf/routes with paths to the Widgets controller such that the following is true:
# Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~

# Home page
GET        /                    controllers.Application.index
GET        /cleanup             controllers.Application.cleanup

#Widgets
GET        /api/widgets         controllers.Widgets.index
GET        /api/widget/:id      controllers.Widgets.read(id: String)
POST       /api/widget          controllers.Widgets.create
DELETE     /api/widget/:id      controllers.Widgets.delete(id: String)
PATCH      /api/widget/:id      controllers.Widgets.update(id: String)
Now, we can visit one of these new paths to view the default “TODO” screen.
[Screenshot: the default Play “TODO” page]

Data access

Before we can build out our controller actions, we’ll add a data access layer to surface some data to our controller.  Let’s start by creating the trait, which will define the contract for our data access layer.  (Traits are similar to Interfaces in Java – more info on that here: http://docs.scala-lang.org/tutorials/tour/traits.html)
Create a new folder called ‘repos’ such that a directory at ‘app/repos’ exists.  In that directory, create a new Scala file named WidgetRepo.scala with the following content:
app/repos/WidgetRepo.scala:
package repos

import javax.inject.Inject

import play.api.libs.json.{JsObject, Json}
import play.modules.reactivemongo.ReactiveMongoApi
import play.modules.reactivemongo.json._
import play.modules.reactivemongo.json.collection.JSONCollection
import reactivemongo.api.ReadPreference
import reactivemongo.api.commands.WriteResult
import reactivemongo.bson.{BSONDocument, BSONObjectID}

import scala.concurrent.{ExecutionContext, Future}

trait WidgetRepo {
  def find()(implicit ec: ExecutionContext): Future[List[JsObject]]

  def select(selector: BSONDocument)(implicit ec: ExecutionContext): Future[Option[JsObject]]

  def update(selector: BSONDocument, update: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult]

  def remove(document: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult]

  def save(document: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult]
}
There should be no surprises there – normal CRUD operations for data access and manipulation.  One thing you may notice is special here: the return types.  They’re all asynchronous Futures.  The read operations return JsObjects (https://www.playframework.com/documentation/2.0/api/scala/play/api/libs/json/JsObject.html) and the write operations return WriteResults (http://reactivemongo.org/releases/0.11/documentation/tutorial/write-documents.html).
Let’s continue by adding the implementations for each of these methods in the form of a Scala class in the same source file:
app/repos/WidgetRepo.scala:
class WidgetRepoImpl @Inject() (reactiveMongoApi: ReactiveMongoApi) extends WidgetRepo {

  def collection = reactiveMongoApi.db.collection[JSONCollection]("widgets");

  override def find()(implicit ec: ExecutionContext): Future[List[JsObject]] = {
    val genericQueryBuilder = collection.find(Json.obj());
    val cursor = genericQueryBuilder.cursor[JsObject](ReadPreference.Primary);
    cursor.collect[List]()
  }

  override def select(selector: BSONDocument)(implicit ec: ExecutionContext): Future[Option[JsObject]] = {
    collection.find(selector).one[JsObject]
  }

  override def update(selector: BSONDocument, update: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult] = {
    collection.update(selector, update)
  }

  override def remove(document: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult] = {
    collection.remove(document)
  }

  override def save(document: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult] = {
    collection.update(BSONDocument("_id" -> document.get("_id").getOrElse(BSONObjectID.generate)), document, upsert = true)
  }

}

Again, there shouldn’t be much that’s surprising here other than the normal, somewhat complex (IMO) Scala syntax.  You can see the implicit ExecutionContext, which, for asynchronous code, basically lets Scala decide where in the thread pool to execute the related function.  You may also notice the ReadPreference in the find() function.  This tells Mongo that our repo would like to read its results from the primary Mongo node.

Back to the controller

At this point, we can return to our controller to round out the implementation details there.  Let’s start simple.
Before we can start adding implementation details, we need to configure some dependency injection details.  We’ll inject the ReactiveMongoApi into our Controller, then use that to create our repository.  Update the signature of the Widgets controller and imports so the following is true:
package controllers

import javax.inject.Inject

import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.libs.json.Json
import play.api.mvc._
import play.modules.reactivemongo.{MongoController, ReactiveMongoApi, ReactiveMongoComponents}
import reactivemongo.api.commands.WriteResult
import reactivemongo.bson.{BSONObjectID, BSONDocument}
import repos.WidgetRepoImpl

class Widgets @Inject()(val reactiveMongoApi: ReactiveMongoApi) extends Controller
    with MongoController with ReactiveMongoComponents {

  def widgetRepo = new WidgetRepoImpl(reactiveMongoApi)

In ‘app/controllers/Widgets.scala’, we have an unimplemented ‘index’ function.  This is meant to display a list of all widgets in the database, selected without parameters.  The implementation for this function is very straightforward. Update the index function such that:

  def index = Action.async { implicit request =>
    widgetRepo.find().map(widgets => Ok(Json.toJson(widgets)))
  }
In this implementation, when we execute ‘map’ on the Future returned by find(), we take the resulting list of widgets and render it directly to our JSON response.
The ‘read’ method, which is very similar to ‘index’, will take a single String parameter (the id of the document being searched for) and return a single result as a JSON string. Update the read function such that:
  def read(id: String) = Action.async { implicit request =>
    widgetRepo.select(BSONDocument(Id -> BSONObjectID(id))).map(widget => Ok(Json.toJson(widget)))
  }

 The ‘delete’ method is similarly straightforward in that it takes a String id for the document to be deleted, and returns an HTTP status code 202 (Accepted) with no body.

  def delete(id: String) = Action.async {
    widgetRepo.remove(BSONDocument(Id -> BSONObjectID(id)))
        .map(result => Accepted)
  }

The ‘create’ and ‘update’ methods introduce a small amount of complexity in that they require the request body to be parsed. Since we’ll have to reference the field names in two places, we’ll create a companion object to hold our field names. Create a ‘WidgetFields’ companion object in the Widgets controller source file:

object WidgetFields {
  val Id = "_id"
  val Name ="name"
  val Description = "description"
  val Author = "author"
}

In the body of the Widget controller, add a scoped import for the companion object:

import controllers.WidgetFields._

For the ‘create’ method, we’ll apply a JSON Body Parser to an implicit request, parsing out the relevant content needed to build a BSONDocument that can be persisted via the Repo:

  def create = Action.async(BodyParsers.parse.json) { implicit request =>
    val name = (request.body \ Name).as[String]
    val description = (request.body \ Description).as[String]
    val author = (request.body \ Author).as[String]
    widgetRepo.save(BSONDocument(
      Name -> name,
      Description -> description,
      Author -> author
    )).map(result => Created)
  }

Execution of this method returns HTTP status code 201.  For the ‘update’ method, we’ll perform largely the same operation – applying the JSON Body Parser to an implicit request – but this time we’ll call a different repo method, building a BSONDocument to select the relevant document and then passing in the new field values:

  def update(id: String) = Action.async(BodyParsers.parse.json) { implicit request =>
    val name = (request.body \ Name).as[String]
    val description = (request.body \ Description).as[String]
    val author = (request.body \ Author).as[String]
    widgetRepo.update(BSONDocument(Id -> BSONObjectID(id)),
      BSONDocument("$set" -> BSONDocument(Name -> name, Description -> description, Author -> author)))
        .map(result => Accepted)
  }
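
With all five actions implemented, you can exercise the API from the command line. Below are a few example curl requests, assuming Play’s default port of 9000; the document id shown is a placeholder, so substitute an _id returned by the list call:

# List all widgets
curl http://localhost:9000/api/widgets

# Create a widget
curl -X POST -H "Content-Type: application/json" \
  -d '{"name":"Widget Three","description":"Created from curl","author":"Justin"}' \
  http://localhost:9000/api/widget

# Read, update, and delete a single widget by id (placeholder id shown)
curl http://localhost:9000/api/widget/56a0ddb6c70000c700344254
curl -X PATCH -H "Content-Type: application/json" \
  -d '{"name":"Widget Three","description":"Updated from curl","author":"Justin"}' \
  http://localhost:9000/api/widget/56a0ddb6c70000c700344254
curl -X DELETE http://localhost:9000/api/widget/56a0ddb6c70000c700344254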

Testing!

Part 3 of this series will cover testing using the specs2 library. In the meantime… we have a fully functioning REST API – but testing it manually requires configuration and the execution of HTTP operations. Many web frameworks are packed with functionality that allows a developer to ‘bootstrap’ an application – adding seed data to a local environment for testing as an example. Recent changes in the Play! framework’s GlobalSettings have changed the way developers do things like seed test databases (https://www.playframework.com/documentation/2.4.x/GlobalSettings). While the dust settles, and while we wait for part 3, I created some helper functions in the Application controller that will create and remove some test data:

package controllers

import javax.inject.{Inject, Singleton}

import play.api.Logger
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.libs.json.Json
import play.api.mvc.{Action, Controller}
import play.modules.reactivemongo.json.collection.JSONCollection
import play.modules.reactivemongo.{MongoController, ReactiveMongoApi, ReactiveMongoComponents}
import reactivemongo.api.collections.bson.BSONCollection
import reactivemongo.api.commands.bson.BSONCountCommand.{ Count, CountResult }
import reactivemongo.api.commands.bson.BSONCountCommandImplicits._
import reactivemongo.bson.BSONDocument

import scala.concurrent.Future

@Singleton
class Application @Inject()(val reactiveMongoApi: ReactiveMongoApi) extends Controller
    with MongoController with ReactiveMongoComponents {

  def jsonCollection = reactiveMongoApi.db.collection[JSONCollection]("widgets");
  def bsonCollection = reactiveMongoApi.db.collection[BSONCollection]("widgets");

  def index = Action {
    Logger.info("Application startup...")

    val posts = List(
      Json.obj(
        "name" -> "Widget One",
        "description" -> "My first widget",
        "author" -> "Justin"
      ),
      Json.obj(
        "name" -> "Widget Two: The Return",
        "description" -> "My second widget",
        "author" -> "Justin"
      ))

    val query = BSONDocument("name" -> BSONDocument("$exists" -> true))
    val command = Count(query)
    val result: Future[CountResult] = bsonCollection.runCommand(command)

    result.map { res =>
      val numberOfDocs: Int = res.value
      if(numberOfDocs < 1) {
        jsonCollection.bulkInsert(posts.toStream, ordered = true).foreach(i => Logger.info("Record added."))
      }
    }

    Ok("Your database is ready.")
  }

  def cleanup = Action {
    jsonCollection.drop().onComplete {
      case _ => Logger.info("Database collection dropped")
    }
    Ok("Your database is clean.")
  }
}

… and routes …

# Home page
GET        /                    controllers.Application.index
GET        /cleanup             controllers.Application.cleanup

 


Building a Simple REST API with Scala & Play! (Part 1)

In this 3 part series, we’ll cover creating a basic Play! REST API on top of Reactive Mongo.  Full source code for this tutorial is available at https://github.com/jrodenbostel/getting-started-play-scala.

Why Scala & Play!?

There has been a lot of buzz recently in the industry and at some of our clients around Reactive Programming and related frameworks.   Reactive Programming is a movement based around building applications that can meet the diverse demands of modern environments.  The four main characteristics are:
  • Responsiveness:  high-performance, with consistent and fast response times
  • Resilience: fails gracefully, and remains responsive during failures
  • Elasticity: remains responsive during varying workload
  • Message Driven: non-blocking, back-pressure capable, asynchronous
The moves toward cloud infrastructure, microservices architecture, and DevOps-style tools (which have made conveniences of previously cumbersome tasks) all lend themselves to supporting reactive systems: systems composed of many small pieces, possibly distributed, expanding and contracting based on load, location agnostic, and relying on messaging to communicate.  Maybe you’ve seen parts of this and haven’t realized it has a name.  Systems designed based on these principles can be considered reactive.
Like many things in development, learning a new tool or technique often requires not only reading, but also coding a live example. In this blog series, I’ll cover creating a small, simple reactive system – a simple RESTful API using the Play! Framework and Reactive Mongo.  Because reactive programming is often associated with functional programming, I’ll also be writing this example in Scala.
The Play! Framework (https://playframework.com) is an open source web app framework that’s been around since 2007.  In many ways, it’s similar to other web application frameworks you may be familiar with like Spring MVC and Rails: it’s MVC-based, it comes with a lot of built-in support tooling (scaffolding, execution, dependency management, etc), and it’s based on the principle of convention over configuration.  In other ways, mostly ways that indicate how it fits into the world of reactive systems, it differs from those frameworks: it’s 100% stateless and is built to run on Netty (http://netty.io) – a non-blocking, asynchronous application framework.
Behind Play!, Reactive Mongo (http://reactivemongo.org) will give us non-blocking and asynchronous access to a Mongo document store through a Scala driver and a Play! module for easy integration into Play! apps.  The Reactive Mongo API exposes most normal data access functions you’d come to expect, but returns results as Scala Futures, and provides translation utilities for translating the Mongo document format (BSON) to JSON, and many functional helper methods for dealing with result sets.
To wrap it all up, we’ll be using specs2 (https://etorreborre.github.io/specs2/) to unit and integration test our application.  specs2 allows us to write test cases in the style of behavior-driven development and highlights the flexibility of Scala and how it can easily be used to create a domain-specific language.
You may find the tools in this tutorial more difficult to get started with than others you may be used to – there are many new concepts in the mix here, and that’s to be expected.  These tools have a place in creating highly-available, fault-tolerant systems capable of handling web-scale traffic.  If you were actually building what we’ll build in this tutorial, this might not be the right tool set.

The Setup

To get started, make sure you have Scala installed.  There are many ways to install Scala.  My favorite is by using the Homebrew package manager available for the Mac (http://brewformulas.org/Scala). You can also download, unpack binaries, and update paths by following the instructions here (http://www.scala-lang.org/download/install.html).
You’re also going to want to have Mongo installed.  Again, this is something that I normally install using the Homebrew package manager.  More detailed instructions can be found here: https://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x/.  You should obviously have Mongo running during this tutorial.
Assuming Scala is installed, next download and install the Typesafe Activator, available here (https://www.typesafe.com/activator/download).  This will give us a nice UI for generating our application, running and viewing the results of tests, and running our application.
After installing the Typesafe Activator, open a command line prompt and start it up:
Justins-MacBook-Pro:Projects justin$ activator ui
From the Typesafe Activator window, under the ‘Templates’ menu item, select ‘Seeds’, then select ‘Play Scala Seed’.  Don’t forget to give your application a name and location before hitting the ‘Create app’ button.
[Screenshot: creating a new app from the ‘Play Scala Seed’ template in the Typesafe Activator UI]
After pressing the ‘Create app’ button, you should be greeted with a message indicating that your application was created successfully.  From this window, we’ll be able to start/stop, compile, and test our new application.  Remember, Play! supports hot-swapping of code, so we’ll be doing a lot of viewing results in this window.
[Screenshot: the Activator window for managing the newly created application]

Installing Dependencies

Play! applications generated from the Play! Scala seed that we just used come packed with a pre-defined build script written using SBT.  SBT is the de-facto build tool for Scala applications from Typesafe.  More information on SBT can be found here (http://www.scala-sbt.org). Our new application has a build.sbt file that we’ll need to update with a dependency for Reactive Mongo.  Update the library dependencies sequence in build.sbt accordingly:
libraryDependencies ++= Seq(
  jdbc,
  cache,
  ws,
  specs2 % Test,
  "org.reactivemongo" %% "play2-reactivemongo" % "0.11.7.play24"
)
Much like Maven or Bundler, this will automatically download and install the Reactive Mongo Play! module, which will in turn download the necessary dependent Reactive Mongo and Mongo libraries.
Next, we’ll update our application.conf file to include configuration information about our Mongo instance.  The application.conf file is found at /conf/application.conf and contains general configuration settings for your application.  We have two lines to add to this file.  Add the following at the end of application.conf and save your changes:
play.modules.enabled += "play.modules.reactivemongo.ReactiveMongoModule"

mongodb.uri = "mongodb://localhost:27017/getting-started-play-scala"
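
If you want to confirm that Play will be able to reach the Mongo instance the URI above points at, a quick check from the command line is to ping it with the mongo shell:

mongo --eval 'db.runCommand({ ping: 1 })' localhost:27017/getting-started-play-scala
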
At this point it’s probably worth noting that if you’re interested in exploring or manipulating a Mongo instance, I recommend using Robomongo (http://robomongo.org).
To conclude part 1, using the Typesafe Activator, run your application.  If we’ve installed our dependencies correctly, we should be greeted with the default Welcome to Play screen, as seen below:
[Screenshot: the default ‘Welcome to Play’ page]
Please continue to part 2, where we’ll begin to define our REST API by creating a Play! controller with asynchronous actions, and then move on to creating our data access layer using Reactive Mongo.

Moving to Spring Boot

The Spring framework has been the de facto standard framework to build a Java application on for some time now. Providing an IoC container and performing dependency injection was just the start. Since its initial release in 2002, Spring has expanded and matured, providing developers with familiar, patterns-based abstractions for common components throughout an application’s layers. As Spring grew, the configuration became more and more unwieldy and the framework became known as one that involved a fair amount of effort to set up and get going, and even the most trivial of projects came with a fair amount of boilerplate configuration. There was not an easy place to start. Maven filled this gap in the early days, pushing the community toward the concepts of convention over configuration and dependency management through the use of project archetypes, but the same problem eventually cropped up – repetitive, difficult-to-manage configuration.

With the first release of Rails in 2005, the developer community saw what was possible in terms of a developer-friendly framework that all but eliminated the perceived shortcomings of frameworks like Spring. Frameworks like Rails came to be known as Rapid Application Development (RAD) frameworks. These frameworks shared many of the same characteristics – a well-defined convention, opinionated default configurations, scaffolding tools used to quickly create pre-configured components. In 2009, the Spring developers responded to the trend of RAD frameworks with the release of Spring Roo. Spring Roo was never billed as an attempt to replace Spring, only enhance it by eliminating the shortcomings of vanilla Spring. Spring Roo provided a well-defined convention and scaffolding tools, but was driven by AspectJ and relied on a significant amount of code generation to eliminate boilerplate code. This led to difficulty troubleshooting configuration problems, and a steeper learning curve for developers new to Java and Spring.

Enter Spring Boot…
In 2014, the Spring development team released a next-generation take on Spring named Spring Boot. Spring Boot provides many of the same RAD-like features of frameworks like Rails, and goes a step further than Roo by eliminating cumbersome XML-based configuration and the mystery of generated code. This is accomplished through the use of auto-configuration classes. Each Spring Boot module is packaged with a default configuration – the boilerplate code developers used to be responsible for creating. These auto-configuration classes provide the opinionated configuration familiar to users of other RAD frameworks, and that follows the basic best practices familiar to users of traditional Spring. In the next few sections, we’ll get a new project up and running from scratch and see auto-configuration in action.

Creating A New Project
A newer feature of the Spring Boot project is the Spring Initializr, a website (http://start.spring.io) that allows a developer to choose a starting point for their application with a one-page form that concludes with a ‘Generate Project’ button and a download of the shell project. Below, you can find the choices I used to configure a basic project:

[Screenshot: project options selected on the Spring Initializr form]

These choices produced the following project structure:

[Screenshot: the generated project structure]

This project can be built and run, but without at least one controller we’ll have no page to display other than a default error page. Let’s create a controller and test out the app.

Create a file in the root package of the project – in my case it’s /src/main/java/demo/SampleController.java:


package demo;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

/**
 * Created by justin on 2/15/15.
 */
@Controller
public class SampleController {
    @RequestMapping("/")
    @ResponseBody
    String home() {
        return "Hello World!";
    }
}

Next, start the server from the root of your project using the pre-configured Gradle script provided to us by Spring Initializr.

Justins-MacBook-Pro:demo justin$ gradle bootRun

You can see in the console output that the app started an already-configured Tomcat server running on port 8080. If we browse to http://localhost:8080 in a browser, we should see the following:

[Screenshot: “Hello World!” rendered in the browser at http://localhost:8080]

That’s a full-fledged Spring app with a wired controller and a base configuration – counting everything, only about 30 lines of code!
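
You can verify the same response from the command line with curl:

curl http://localhost:8080/
# Hello World!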

Adding A Template
You may remember that we chose a templating library (Thymeleaf) as part of our initial configuration on the Spring Initializr page. Let’s add a page template to our example to show how simple it is to set that up as well. To do this, we’ll have to create the template itself and change our controller slightly. In the earlier screenshot of our project, you’ll see we have a ‘templates’ directory in our ‘src/main/resources’ directory. Create a file there named ‘hello.html’:

/src/main/resources/templates/hello.html:


<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
<head lang="en">
    <meta charset="UTF-8" />
    <title>HELLO</title>
</head>
<body>
<p th:text="${message}"></p>
</body>
</html>

You can see we’ve added a placeholder for a string named ‘message’ that we’ll supply from our controller.

Next, let’s update our controller to populate the ‘message’ element:

/src/main/java/demo/SampleController.java:


package demo;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

/**
 * Created by justin on 2/15/15.
 */
@Controller
public class SampleController {

    @RequestMapping("/")
    public String index(Model model) {
        model.addAttribute("message", "HELLO WORLD!");
        return "hello";
    }
}


Now, when we run our app, we should see a different result – one that builds a page using the template we just created:

[Screenshot: the templated page rendering “HELLO WORLD!”]

Who’s Behind The Curtain?
You can see we’ve created a full Spring app with a basic configuration, configured a controller, and started using a template engine to render our pages. If there is no generated code driving this, where is the configuration coming from?

If we take a closer look at the console output from the server starting you can see several references to a class named ‘org.springframework.boot.autoconfigure.web.WebMvcAutoConfiguration’. This is where the magic is happening. Let’s take a look at the source on Github (https://github.com/spring-projects/spring-boot/blob/master/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/web/WebMvcAutoConfiguration.java). Browsing through this source, we can see references to familiar bean configurations, the paths to pre-configured property file locations, the base configuration to enable Spring MVC, and much more.

Continued Reading
The Spring Boot docs contain great resources for getting started. The ‘Getting Started’ section provides a nice foundation for those new to the tool. http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#getting-started

While the Spring Boot docs contain a lot of valuable information, several of the examples fall a bit short of what we commonly see at clients. Last year, I wrote a 5 part series that expands on the Getting Started guides provided by Spring. https://justinrodenbostel.com/2014/04/08/beyond-the-examples/. This explores some common problems that Spring easily solves: nested form binding, security integration, internationalization, and more.