Configuring Persistent Storage with Docker and Kubernetes

With DevOps becoming one of the most widely used buzzwords in the industry, automated configuration management tools like Docker, Ansible, and Chef have been rapidly gaining popularity and adoption. Docker in particular, which lets teams create containerized versions of their applications that run without additional virtualization overhead, is widely supported by PaaS providers. Rather than revisit the value and challenges of using Docker, which are widely written about on the web (good example here: http://venturebeat.com/2015/04/07/docker-in-the-enterprise-the-value-and-the-challenges/), I’ll talk about a specific aspect of using Docker that can be tricky depending on where your Docker container is running – persistent storage.

If you’ve worked with Docker in the past or followed the link above, you know that one of the big advantages of using Docker is the ability to deploy managed, self-contained deployments of your application. In this scenario, services like Kubernetes, Docker Swarm, and Apache Mesos can be used to build elastic infrastructure – infrastructure that scales automatically under peak load and contracts when idle, meeting customer demand while utilizing infrastructure efficiently. One thing to note when using Docker is that while it’s very easy to roll out upgrades and changes to containers, when a container is upgraded, it is recreated from scratch. This means anything saved to disk that is not part of the Docker image is deleted. Depending on the container manager you’re using, the configuration required to enable persistent storage can vary greatly. In a very simple example, I’ll detail how to enable persistent storage using standalone Docker, as well as while using Kubernetes on the Google Cloud Platform. This example assumes you have Docker installed and have a basic understanding of Docker and Kubernetes concepts.

For this post, we’ll start with a simple Dockerfile based on the standard httpd image. The code for this example can be found on Github at: https://github.com/jrodenbostel/persistent-storage-with-docker.

Docker

If you’re starting from scratch, create a simple Dockerfile in your project directory:

Dockerfile

FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/

RUN mkdir /data

You can see this will create an image based on the standard httpd image, copy the contents of the local public-html folder into the htdocs directory, and then create a folder called /data at the filesystem root.
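The COPY instruction assumes a public-html directory exists next to the Dockerfile in the build context. If you’re starting completely from scratch, a minimal one can be created like this (the index.html filename and its contents are just placeholders):

```shell
# Create the directory the Dockerfile's COPY instruction expects,
# with a placeholder page for httpd to serve.
mkdir -p public-html
echo '<html><body><h1>docker-storage-test</h1></body></html>' > public-html/index.html
ls public-html
```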

From our project directory, we can build an image based on this Dockerfile named “docker-storage-test” using the following command:

docker build -t docker-storage-test .

We can create a container using that image and run it on the fly using the following command:

docker run -t -i --name docker-storage-test-container docker-storage-test

That will create a container named “docker-storage-test-container” using our image named “docker-storage-test”. Because the -i flag puts us in interactive mode, after executing that command, we should be greeted with a command prompt inside the running container. At that prompt, if we navigate to /data, we should find an empty directory.

root@c1522a53c755:/# cd data
root@c1522a53c755:/data# ls -a
.  ..
root@c1522a53c755:/data#

Let’s say we wanted to create some files in that /data folder and preserve them when upgrading our image. We’ll simulate that by doing the following:

root@c1522a53c755:/data# touch important-file.txt
root@c1522a53c755:/data# ls -a
.  ..  important-file.txt
root@c1522a53c755:/data#

To preserve our important files between upgrades, we’ll need to create persistent storage for our image. One way to do that with standalone Docker is to create a data volume container. We’ll reuse the same image from our original container, and create a data volume container named “docker-storage-test-volume” mapped to the /data folder using the following command:

docker create -v /data --name docker-storage-test-volume docker-storage-test /bin/true

Before we can use our new data volume, we have to remove our old container using the following command:

docker rm docker-storage-test-container

To attach that data volume container to a new instance of our base container, we use the following command:

docker run -t -i --volumes-from docker-storage-test-volume --name docker-storage-test-container docker-storage-test

Same as before, we can navigate to our /data directory and create our important file using the following commands:

root@b170d2f08ff3:/# cd /data/
root@b170d2f08ff3:/data# touch important-file.txt
root@b170d2f08ff3:/data# ls -a
.  ..  important-file.txt

Now, we can upgrade the docker-storage-test image and create new containers based on it, and that file will be preserved:

docker rm docker-storage-test-container
docker run -t -i --volumes-from docker-storage-test-volume --name docker-storage-test-container docker-storage-test
root@00f17622393f:/# cd /data
root@00f17622393f:/data# ls -a
.  ..  important-file.txt

Kubernetes

Google Cloud Platform’s Container Engine can be used to run Docker containers. As Google’s documentation states, Container Engine is powered by Kubernetes, an open-source container cluster manager originally written by Google. As previously mentioned, Kubernetes can be used to easily create scalable container-based solutions. This portion of the example assumes you have a Google Cloud Platform account with the appropriate gcloud and kubectl tools installed. If you don’t, directions can be found at the links below:

https://cloud.google.com/sdk/

https://cloud.google.com/container-registry/docs/before-you-begin

For this example, I’ll be using a project called “docker-storage-test-project”. I’ll call out where project names are to be used in the examples below. To enable persistent storage on the Google Cloud Platform’s Container Engine, we must first create a new container cluster.

From the Google Cloud Platform Container Engine view, click “Create Cluster”.


For this example, my cluster’s name will be “docker-storage-test-cluster”, with a size of 1, using 1 vCPU machines.

After creating the cluster, we’ll prepare our image for upload to Google Cloud Platform’s private Container Registry by tagging it using the following command:

docker tag docker-storage-test gcr.io/docker-storage-test-project/docker-storage-test

After tagging, push the image to your private Google Cloud container registry using the following command:

gcloud docker push gcr.io/docker-storage-test-project/docker-storage-test

Create a persistent disk named “docker-storage-test-disk” using the gcloud SDK command below:

gcloud compute disks create --size 10GB docker-storage-test-disk

Verify the kubectl tool is configured correctly to connect to your newly created cluster. To do this, I used the following command:

gcloud container clusters get-credentials docker-storage-test-cluster

Run the image we’ve uploaded in our newly created cluster:

kubectl run docker-storage-test --image=gcr.io/docker-storage-test-project/docker-storage-test:latest --port=80

At this point, a Kubernetes deployment is created for us automatically. To mount the persistent disk we created earlier, we have to edit that deployment. The easiest way to do this is to open it in an editor and save a local copy of its contents. To bring up the current deployment configuration, use the following command:

kubectl edit deployment docker-storage-test

Copy and paste that content into a new file. For this example, I’ve pasted the contents into a file named “kubernetes_deployment.yml” in my project folder.

Add a volumes entry to the spec config – this should be at the same level as “containers:”. I added mine at the bottom. Note that “pdName” must equal the name of the persistent disk you created earlier, and “name” must match the volumeMounts entry we’ll create next:

volumes:
  - name: docker-storage-test-disk
    gcePersistentDisk:
      # This disk must already exist.
      pdName: docker-storage-test-disk
      fsType: ext4

Now add a volumeMounts entry to the container config:

        volumeMounts:
          # This name must match the volumes.name below.
          - name: docker-storage-test-disk
            mountPath: /data
        resources: {}
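For context, here is a sketch of how those two pieces sit in the full deployment file. Everything outside the volumeMounts and volumes entries (API version, labels, image path, and so on) will come from your own generated deployment, so treat those values as illustrative:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: docker-storage-test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: docker-storage-test
    spec:
      containers:
        - name: docker-storage-test
          image: gcr.io/docker-storage-test-project/docker-storage-test:latest
          ports:
            - containerPort: 80
          volumeMounts:
            # This name must match the volumes.name below.
            - name: docker-storage-test-disk
              mountPath: /data
      volumes:
        - name: docker-storage-test-disk
          gcePersistentDisk:
            # This disk must already exist.
            pdName: docker-storage-test-disk
            fsType: ext4
```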

Delete and recreate our deployment, this time using the new Kubernetes deployment file we’ve created, by using the following commands:

kubectl delete service,deployments docker-storage-test
kubectl create -f kubernetes_deployment.yml

Now, let’s test our configuration: we’ll attach to the docker-storage-test container in the pod we’ve just created, create a file in the /data directory, recreate the deployment, and then check for the file’s presence using the following commands.

First, get your pod name:

kubectl get pods

Then attach to the pod and container. My pod’s name is “docker-storage-test-846338785-jbjk8”

kubectl exec -it docker-storage-test-846338785-jbjk8 -c docker-storage-test bash

root@docker-storage-test-846338785-jbjk8:/usr/local/apache2# cd /data
root@docker-storage-test-846338785-jbjk8:/data# touch important-file.txt
root@docker-storage-test-846338785-jbjk8:/data# ls -l
total 16
-rw-r--r-- 1 root root     0 Jun  6 04:04 important-file.txt
drwx------ 2 root root 16384 Jun  6 04:02 lost+found
root@docker-storage-test-846338785-jbjk8:/data# exit

We’ve got an important file – now delete the deployment and recreate it; this simulates the effect that upgrading your container’s image would have:

kubectl delete service,deployments docker-storage-test
kubectl create -f kubernetes_deployment.yml

Get your pod name again. Mine is “docker-storage-test-846338785-u2jji”. Connect to the pod and browse to the /data directory to see if our file is there:

kubectl exec -it docker-storage-test-846338785-u2jji -c docker-storage-test bash

root@docker-storage-test-846338785-u2jji:/usr/local/apache2# cd /data
root@docker-storage-test-846338785-u2jji:/data# ls -l
total 16
-rw-r--r-- 1 root root     0 Jun  6 04:04 important-file.txt
drwx------ 2 root root 16384 Jun  6 04:02 lost+found
root@docker-storage-test-846338785-u2jji:/data#

Conclusion

These are just two of the many ways to configure persistent storage using Docker and container-related technologies – the two I had to figure out in my recent explorations. Many more can be found in both the Docker and Kubernetes documentation.
The next post may not be out for a while, but based on the trends of my current work, it’s sure to be IoT-based. Stay tuned for more.


Building a Simple REST API with Scala & Play! (Part 3)

In this 3 part series, we’ll cover creating a basic Play! REST API on top of Reactive Mongo. Full source code for this tutorial is available at https://github.com/jrodenbostel/getting-started-play-scala.

Welcome back!

Parts 1 and 2 started with a description of the tools we’d be using and concluded with a fully functioning REST API built in Play! on top of a Reactive Mongo back-end. In part 3, we’ll cover the use of specs2 and Mockito to write automated unit and integration tests for our application.

Integration Testing

specs2 provides a DSL for BDD-style test specs. Combined with Play’s WithBrowser helper, which runs each test against the application in a headless browser, it allows you to end-to-end test your application in automated fashion. To get started, open the ‘/test/IntegrationSpec.scala’ file, which was created as part of our seed project, and update it to include the following:

import org.junit.runner.RunWith
import org.specs2.mutable.Specification
import org.specs2.runner.JUnitRunner
import play.api.test.WithBrowser

@RunWith(classOf[JUnitRunner])
class IntegrationSpec extends Specification {

  "Application" should {

    "work from within a browser" in new WithBrowser {

      browser.goTo("http://localhost:" + port)

      browser.pageSource must contain("Your database is ready.")
    }

    "remove data through the browser" in new WithBrowser {

      browser.goTo("http://localhost:" + port + "/cleanup")

      browser.pageSource must contain("Your database is clean.")
    }
  }
}

You’ll notice a spec for each Application controller function, which simply visits the relevant URI and matches against the response string. Tests can be executed from the Activator UI or by running ‘activator test’ from the command line within your project folder.

Unit Testing

Unit testing is a bit more complex because we’ll test each controller by mocking the relevant repository method. Assertions for unit tests are more straightforward, as in most cases, we’re simply inspecting HTTP status codes. Update the ‘test/ApplicationSpec.scala’ file to include the following:

import controllers.{routes, Widgets}
import org.junit.runner.RunWith
import org.specs2.mock.Mockito
import org.specs2.mutable.Specification
import org.specs2.runner.JUnitRunner
import play.api.libs.json.{JsArray, Json}
import play.api.mvc.{Result, _}
import play.api.test.Helpers._
import play.api.test.{WithApplication, _}
import play.modules.reactivemongo.ReactiveMongoApi
import reactivemongo.api.commands.LastError
import reactivemongo.bson.BSONDocument
import repos.WidgetRepoImpl

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{ExecutionContext, Future}

@RunWith(classOf[JUnitRunner])
class ApplicationSpec extends Specification with Results with Mockito {

  val mockRecipeRepo = mock[WidgetRepoImpl]
  val reactiveMongoApi = mock[ReactiveMongoApi]
  val documentId = "56a0ddb6c70000c700344254"
  val lastRequestStatus = new LastError(true, None, None, None, 0, None, false, None, None, false, None, None)

  val oatmealStout = Json.obj(
        "name" -> "Widget One",
        "description" -> "My first widget",
        "author" -> "Justin"
      )

  val posts = List(
    oatmealStout,
    Json.obj(
      "name" -> "Widget Two: The Return",
      "description" -> "My second widget",
      "author" -> "Justin"
    ))

  class TestController() extends Widgets(reactiveMongoApi) {
    override def widgetRepo: WidgetRepoImpl = mockRecipeRepo
  }

  val controller = new TestController()

  "Application" should {

    "send 404 on a bad request" in {
      new WithApplication() {
        route(FakeRequest(GET, "/boum")) must beSome.which(status(_) == NOT_FOUND)
      }
    }

    "Recipes#delete" should {
      "remove recipe" in {
        mockRecipeRepo.remove(any[BSONDocument])(any[ExecutionContext]) returns Future(lastRequestStatus)

        val result: Future[Result] = controller.delete(documentId).apply(FakeRequest())

        status(result) must be equalTo ACCEPTED
        there was one(mockRecipeRepo).remove(any[BSONDocument])(any[ExecutionContext])
      }
    }

    "Recipes#list" should {
      "list recipes" in {
        mockRecipeRepo.find()(any[ExecutionContext]) returns Future(posts)

        val result: Future[Result] = controller.index().apply(FakeRequest())

        contentAsJson(result) must be equalTo JsArray(posts)
        there was one(mockRecipeRepo).find()(any[ExecutionContext])
      }
    }

    "Recipes#read" should {
      "read recipe" in {
        mockRecipeRepo.select(any[BSONDocument])(any[ExecutionContext]) returns Future(Option(oatmealStout))

        val result: Future[Result] = controller.read(documentId).apply(FakeRequest())

        contentAsJson(result) must be equalTo oatmealStout
        there was one(mockRecipeRepo).select(any[BSONDocument])(any[ExecutionContext])
      }
    }

    "Recipes#create" should {
      "create recipe" in {
        mockRecipeRepo.save(any[BSONDocument])(any[ExecutionContext]) returns Future(lastRequestStatus)

        val request = FakeRequest().withBody(oatmealStout)
        val result: Future[Result] = controller.create()(request)

        status(result) must be equalTo CREATED
        there was one(mockRecipeRepo).save(any[BSONDocument])(any[ExecutionContext])
      }
    }

    "Recipes#update" should {
      "update recipe" in {
        mockRecipeRepo.update(any[BSONDocument], any[BSONDocument])(any[ExecutionContext]) returns Future(lastRequestStatus)

        val request = FakeRequest().withBody(oatmealStout)
        val result: Future[Result] = controller.update(documentId)(request)

        status(result) must be equalTo ACCEPTED
        there was one(mockRecipeRepo).update(any[BSONDocument], any[BSONDocument])(any[ExecutionContext])
      }
    }
  }
}

You’ll notice the class starts with the creation of mocks and example data – this is very straightforward and was pulled from examples of real BSON data from Mongo. In each spec, you’ll also notice the same pattern: mocks are configured, methods are exercised, and assertions are made. This is very similar to other BDD-style test frameworks like Jasmine and RSpec.

Conclusion

Reactive programming and related frameworks, especially those of the functional variety, represent a significant shift in the way we write applications. There are many new patterns, tools, and programming styles that a developer must become familiar with in order to write applications in the reactive style effectively. Many of us, myself included, are just starting to get opportunities to do so. Although these tools may not be appropriate for every solution, becoming familiar with the underlying concepts will help individuals become more well-rounded as developers and help ensure that scalable, resilient, responsive architectures are adopted when necessary.

Building a Simple REST API with Scala & Play! (Part 2)

In this 3 part series, we’ll cover creating a basic Play! REST API on top of Reactive Mongo.  Full source code for this tutorial is available at https://github.com/jrodenbostel/getting-started-play-scala.

Welcome back!

If you’re coming in fresh, and need some instructions on getting the appropriate tools installed and creating a shell of an environment, please refer to part 1.  In part 2, we’ll cover adding our first API functions as asynchronous actions in a Play! controller, as well as define our first data access functions.
Because we started with a seed project, we got some bonus cruft in our application in the form of a views package.  While Scala templates are a fine way to create views for your application, for this tutorial we’ll be building a RESTful API that responds with JSON strings. Start by deleting the ‘views’ folder at ‘app/views’.
Note that this will break the default controller that came with the project seed.  To remedy this, update the default controller action to simply render text by replacing:
Ok(views.html.index("Your new application is ready."))
with:
Ok("Your new application is ready.")

Creating the controller

Next, we’ll create our controller.  Create a new file in the ‘app/controllers’ folder named ‘Widgets.scala’.  This will house the RESTful actions associated with Widgets.  We’ll also add default methods to this controller for the RESTful actions that we’ll implement later.
package controllers

import play.api.mvc._

class Widgets extends Controller {

  def index = TODO

  def create = TODO

  def read(id: String) = TODO

  def update(id: String) = TODO

  def delete(id: String) = TODO
}
Play! comes with a default “TODO” page for controller actions that have not yet been implemented.  It’s a nice way to keep your app functioning and reloading while you build out functionality incrementally.  Before we can see that default “TODO” page, we must first add routes to the application configuration that reference our new controller and its not-yet-implemented actions.
Update /conf/routes with paths to the Widgets controller such that the following is true:
# Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~

# Home page
GET        /                    controllers.Application.index
GET        /cleanup             controllers.Application.cleanup

#Widgets
GET        /api/widgets         controllers.Widgets.index
GET        /api/widget/:id      controllers.Widgets.read(id: String)
POST       /api/widget          controllers.Widgets.create
DELETE     /api/widget/:id      controllers.Widgets.delete(id: String)
PATCH      /api/widget/:id      controllers.Widgets.update(id: String)
Now, we can visit one of these new paths to view the default “TODO” screen.

Data access

Before we can build out our controller actions, we’ll add a data access layer to surface some data to our controller.  Let’s start by creating the trait, which will define the contract for our data access layer.  (Traits are similar to interfaces in Java – more on that here: http://docs.scala-lang.org/tutorials/tour/traits.html)
Create a new folder called ‘repos’ such that a directory at ‘app/repos’ exists.  In that directory, create a new Scala file named WidgetRepo.scala with the following content:
app/repos/WidgetRepo.scala:
package repos

import javax.inject.Inject

import play.api.libs.json.{JsObject, Json}
import play.modules.reactivemongo.ReactiveMongoApi
import play.modules.reactivemongo.json._
import play.modules.reactivemongo.json.collection.JSONCollection
import reactivemongo.api.ReadPreference
import reactivemongo.api.commands.WriteResult
import reactivemongo.bson.{BSONDocument, BSONObjectID}

import scala.concurrent.{ExecutionContext, Future}

trait WidgetRepo {
  def find()(implicit ec: ExecutionContext): Future[List[JsObject]]

  def select(selector: BSONDocument)(implicit ec: ExecutionContext): Future[Option[JsObject]]

  def update(selector: BSONDocument, update: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult]

  def remove(document: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult]

  def save(document: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult]
}
There should be no surprises there – normal CRUD operations for data access and manipulation.  The one special thing to notice here is the return types: they’re all asynchronous Futures.  The read operations return JsObjects (https://www.playframework.com/documentation/2.0/api/scala/play/api/libs/json/JsObject.html) and the write operations return WriteResults (http://reactivemongo.org/releases/0.11/documentation/tutorial/write-documents.html).
Let’s continue by adding the implementations for each of these methods in the form of a Scala class in the same source file:
app/repos/WidgetRepo.scala:
class WidgetRepoImpl @Inject() (reactiveMongoApi: ReactiveMongoApi) extends WidgetRepo {

  def collection = reactiveMongoApi.db.collection[JSONCollection]("widgets");

  override def find()(implicit ec: ExecutionContext): Future[List[JsObject]] = {
    val genericQueryBuilder = collection.find(Json.obj());
    val cursor = genericQueryBuilder.cursor[JsObject](ReadPreference.Primary);
    cursor.collect[List]()
  }

  override def select(selector: BSONDocument)(implicit ec: ExecutionContext): Future[Option[JsObject]] = {
    collection.find(selector).one[JsObject]
  }

  override def update(selector: BSONDocument, update: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult] = {
    collection.update(selector, update)
  }

  override def remove(document: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult] = {
    collection.remove(document)
  }

  override def save(document: BSONDocument)(implicit ec: ExecutionContext): Future[WriteResult] = {
    collection.update(BSONDocument("_id" -> document.get("_id").getOrElse(BSONObjectID.generate)), document, upsert = true)
  }

}

Again, there shouldn’t be much that’s surprising here other than the normal, somewhat complex (IMO) Scala syntax.  You can see the implicit ExecutionContext, which, for asynchronous code, basically lets Scala decide where in the thread pool to execute the related function.  You may also notice the ReadPreference in the find() function.  This tells Mongo that our repo would like to read its results from the primary Mongo node.

Back to the controller

At this point, we can return to our controller to round out the implementation details there.  Let’s start simple.
Before we can start adding implementation details, we need to configure dependency injection.  We’ll inject the ReactiveMongoApi into our controller, then use it to create our repository.  Update the Widgets controller’s signature and imports so the following is true:
package controllers

import javax.inject.Inject

import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.libs.json.Json
import play.api.mvc._
import play.modules.reactivemongo.{MongoController, ReactiveMongoApi, ReactiveMongoComponents}
import reactivemongo.api.commands.WriteResult
import reactivemongo.bson.{BSONObjectID, BSONDocument}
import repos.WidgetRepoImpl

class Widgets @Inject()(val reactiveMongoApi: ReactiveMongoApi) extends Controller
    with MongoController with ReactiveMongoComponents {

  def widgetRepo = new WidgetRepoImpl(reactiveMongoApi)

In ‘app/controllers/Widgets.scala’, we have an unimplemented ‘index’ function.  This is meant to display a list of all widgets in the database, selected without parameters.  The implementation for this function is very straightforward. Update the index function such that:

  def index = Action.async { implicit request =>
    widgetRepo.find().map(widgets => Ok(Json.toJson(widgets)))
  }
In this implementation, find() returns a Future; when it completes, we map over it and render the resulting list of widgets directly to our JSON response.
The ‘read’ method, which is very similar to ‘index’, will take a single String parameter (the id of the document being searched for) and return a single result as a JSON string. Update the read function such that:
  def read(id: String) = Action.async { implicit request =>
    widgetRepo.select(BSONDocument(Id -> BSONObjectID(id))).map(widget => Ok(Json.toJson(widget)))
  }

The ‘delete’ method is similarly straightforward in that it takes a String id for the document to be deleted, and returns an HTTP status code 202 (Accepted) with no body.

  def delete(id: String) = Action.async {
    widgetRepo.remove(BSONDocument(Id -> BSONObjectID(id)))
        .map(result => Accepted)
  }

The ‘create’ and ‘update’ methods introduce a small amount of complexity in that they require the request body to be parsed using Play’s JSON body parser. Since we’ll have to reference the field names in two places, we’ll create an object to hold them. Create a ‘WidgetFields’ object in the Widgets controller source file:

object WidgetFields {
  val Id = "_id"
  val Name = "name"
  val Description = "description"
  val Author = "author"
}

In the body of the Widgets controller, add a scoped import for that object’s fields:

import controllers.WidgetFields._

For the ‘create’ method, we’ll apply a JSON Body Parser to an implicit request, parsing out the relevant content needed to build a BSONDocument that can be persisted via the Repo:

  def create = Action.async(BodyParsers.parse.json) { implicit request =>
    val name = (request.body \ Name).as[String]
    val description = (request.body \ Description).as[String]
    val author = (request.body \ Author).as[String]
    widgetRepo.save(BSONDocument(
      Name -> name,
      Description -> description,
      Author -> author
    )).map(result => Created)
  }

Execution of this method returns the HTTP status code 201.  For the ‘update’ method, we’ll perform largely the same operation – applying the JSON body parser to an implicit request – but call a different repo method, first building a BSONDocument to select the relevant document, then passing in the new field values:

  def update(id: String) = Action.async(BodyParsers.parse.json) { implicit request =>
    val name = (request.body \ Name).as[String]
    val description = (request.body \ Description).as[String]
    val author = (request.body \ Author).as[String]
    widgetRepo.update(BSONDocument(Id -> BSONObjectID(id)),
      BSONDocument("$set" -> BSONDocument(Name -> name, Description -> description, Author -> author)))
        .map(result => Accepted)
  }

Testing!

Part 3 of this series will cover testing using the specs2 library. In the meantime, we have a fully functioning REST API – but testing manually requires configuration and the execution of HTTP operations. Many web frameworks are packed with functionality that allows a developer to ‘bootstrap’ an application – adding seed data to a local environment for testing, for example. Recent changes to the Play! framework’s GlobalSettings have changed the way developers do things like seed test databases (https://www.playframework.com/documentation/2.4.x/GlobalSettings). While the dust settles, and while we wait for part 3, I created some helper functions in the Application controller that will create and remove some test data:

package controllers

import javax.inject.{Inject, Singleton}

import play.api.Logger
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.libs.json.Json
import play.api.mvc.{Action, Controller}
import play.modules.reactivemongo.json.collection.JSONCollection
import play.modules.reactivemongo.{MongoController, ReactiveMongoApi, ReactiveMongoComponents}
import reactivemongo.api.collections.bson.BSONCollection
import reactivemongo.api.commands.bson.BSONCountCommand.{ Count, CountResult }
import reactivemongo.api.commands.bson.BSONCountCommandImplicits._
import reactivemongo.bson.BSONDocument

import scala.concurrent.Future

@Singleton
class Application @Inject()(val reactiveMongoApi: ReactiveMongoApi) extends Controller
    with MongoController with ReactiveMongoComponents {

  def jsonCollection = reactiveMongoApi.db.collection[JSONCollection]("widgets");
  def bsonCollection = reactiveMongoApi.db.collection[BSONCollection]("widgets");

  def index = Action {
    Logger.info("Application startup...")

    val posts = List(
      Json.obj(
        "name" -> "Widget One",
        "description" -> "My first widget",
        "author" -> "Justin"
      ),
      Json.obj(
        "name" -> "Widget Two: The Return",
        "description" -> "My second widget",
        "author" -> "Justin"
      ))

    val query = BSONDocument("name" -> BSONDocument("$exists" -> true))
    val command = Count(query)
    val result: Future[CountResult] = bsonCollection.runCommand(command)

    result.map { res =>
      val numberOfDocs: Int = res.value
      // Seed the collection only when it hasn't been populated yet.
      if (numberOfDocs < 1) {
        jsonCollection.bulkInsert(posts.toStream, ordered = true).foreach(i => Logger.info("Record added."))
      }
    }

    Ok("Your database is ready.")
  }

  def cleanup = Action {
    jsonCollection.drop().onComplete {
      case _ => Logger.info("Database collection dropped")
    }
    Ok("Your database is clean.")
  }
}

… and routes …

# Home page
GET        /                    controllers.Application.index
GET        /cleanup             controllers.Application.cleanup
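With all of this in place and the app running (‘activator run’ serves on Play’s default port, 9000), the create endpoint can be exercised by POSTing a JSON body whose field names match the WidgetFields object. Here’s a sample payload, written to a hypothetical widget.json file and sanity-checked as valid JSON:

```shell
# Write a sample widget payload; the field names (name, description, author)
# must match the WidgetFields object in the Widgets controller.
cat > widget.json <<'EOF'
{
  "name": "Widget Three",
  "description": "A widget created over HTTP",
  "author": "Justin"
}
EOF

# Confirm the payload parses as valid JSON before sending it anywhere.
python3 -m json.tool widget.json
```

From there, something like `curl -X POST -H 'Content-Type: application/json' -d @widget.json http://localhost:9000/api/widget` should return a 201, and `curl http://localhost:9000/api/widgets` should list the seeded and created widgets.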

Building a Simple REST API with Scala & Play! (Part 1)

In this 3 part series, we’ll cover creating a basic Play! REST API on top of Reactive Mongo.  Full source code for this tutorial is available at https://github.com/jrodenbostel/getting-started-play-scala.

Why Scala & Play!?

There has been a lot of buzz recently in the industry and at some of our clients around Reactive Programming and related frameworks.   Reactive Programming is a movement based around building applications that can meet the diverse demands of modern environments.  The four main characteristics are:
  • Responsiveness:  high-performance, with consistent and fast response times
  • Resilience: fails gracefully, and remains responsive during failures
  • Elasticity: remains responsive during varying workload
  • Message Driven: non-blocking, back-pressure capable, asynchronous
Moves toward cloud infrastructure, microservices architecture, and DevOps-style tooling (which has turned previously cumbersome tasks into conveniences) all lend themselves to supporting reactive systems: systems composed of many small, possibly distributed pieces that expand and contract based on load, are location agnostic, and rely on messaging to communicate.  Maybe you’ve seen parts of this without realizing it has a name.  Systems designed around these principles can be considered reactive.
Like many things in development, learning a new tool or technique often requires not only reading, but also coding a live example. In this blog series, I’ll cover creating a small reactive system – a simple RESTful API built using the Play! Framework and Reactive Mongo.  Because reactive programming is often associated with functional programming, I’ll also be writing this example in Scala.
The Play! Framework (https://playframework.com) is an open source web app framework that’s been around since 2007.  In many ways, it’s similar to other web application frameworks you may be familiar with like Spring MVC and Rails: it’s MVC-based, it comes with a lot of built-in support tooling (scaffolding, execution, dependency management, etc), and it’s based on the principle of convention over configuration.  In other ways, mostly ways that indicate how it fits into the world of reactive systems, it differs from those frameworks: it’s 100% stateless and is built to run on Netty (http://netty.io) – a non-blocking, asynchronous application framework.
Behind Play!, Reactive Mongo (http://reactivemongo.org) will give us non-blocking, asynchronous access to a Mongo document store through a Scala driver, along with a Play! module for easy integration into Play! apps.  The Reactive Mongo API exposes the data access functions you’d expect, but returns results as Scala Futures.  It also provides utilities for translating between the Mongo document format (BSON) and JSON, and many functional helper methods for dealing with result sets.
To wrap it all up, we’ll be using Specs2 (https://etorreborre.github.io/specs2/) to unit and integration test our application.  Specs2 allows us to write test cases in the style of behavior-driven development, and highlights the flexibility of Scala and how easily it can be used to create a domain-specific language.
You may find the tools in this tutorial more difficult to get started with than others you may be used to – there are many new concepts in the mix here, and that’s to be expected.  These tools have a place in creating highly-available, fault-tolerant systems capable of handling web-scale traffic.  If you were actually building the small app we build in this tutorial, this may not be the right tool set.

The Setup

To get started, make sure you have Scala installed.  There are many ways to install Scala; my favorite is the Homebrew package manager for the Mac (http://brewformulas.org/Scala). You can also download and unpack the binaries and update your path by following the instructions here (http://www.scala-lang.org/download/install.html).
You’re also going to want to have Mongo installed.  Again, this is something that I normally install using the Homebrew package manager; more detailed instructions can be found here: https://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x/.  You should have Mongo running throughout this tutorial.
Assuming Scala is installed, next download and install the Typesafe Activator, available here (https://www.typesafe.com/activator/download).  This will give us a nice UI for generating our application, running tests and viewing their results, and running our application.
After installing the Typesafe Activator, open a command line prompt and start it up:
Justins-MacBook-Pro:Projects justin$ activator ui
From the Typesafe Activator window, under the ‘Templates’ menu item, select ‘Seeds’, then select ‘Play Scala Seed’.  Don’t forget to give your application a name and location before hitting the ‘Create app’ button.
Screen Shot 2016-01-25 at 9.33.33 PM
After pressing the ‘Create app’ button, you should be greeted with a message indicating that your application was created successfully.  From this window, we’ll be able to start/stop, compile, and test our new application.  Remember, Play! supports hot-swapping of code, so we’ll be doing a lot of viewing results in this window.
 Screen Shot 2016-01-25 at 9.29.27 PM

Installing Dependencies

Play! applications generated from the Play! Scala seed we just used come with a pre-defined build script written for SBT, the de facto build tool for Scala applications.  More information on SBT can be found here (http://www.scala-sbt.org). Our new application has a build.sbt file that we’ll need to update with a dependency on Reactive Mongo.  Update the library dependencies sequence in build.sbt accordingly:
libraryDependencies ++= Seq(
  jdbc,
  cache,
  ws,
  specs2 % Test,
  "org.reactivemongo" %% "play2-reactivemongo" % "0.11.7.play24"
)
Much like Maven or Bundler, this will automatically download and install the Reactive Mongo Play! module, which will in turn download the necessary dependent Reactive Mongo and Mongo libraries.
Next, we’ll update our application.conf file to include configuration information about our Mongo instance.  The application.conf file is found at /conf/application.conf and contains general configuration settings for your application.  We have two lines to add to this file.  Add the following at the end of application.conf and save your changes:
play.modules.enabled += "play.modules.reactivemongo.ReactiveMongoModule"

mongodb.uri = "mongodb://localhost:27017/getting-started-play-scala"
At this point it’s probably worth noting that if you’re interested in exploring or manipulating a Mongo instance, I recommend using Robomongo (http://robomongo.org).
To conclude part 1, run your application using the Typesafe Activator.  If we’ve installed our dependencies correctly, we should be greeted with the default Welcome to Play screen, as seen below:
Screen Shot 2016-01-26 at 8.31.54 PM.png
Please continue to part 2, where we’ll begin to define our REST API by creating a Play! controller with asynchronous actions, and then move on to creating our data access layer using Reactive Mongo.

Moving to Spring Boot

The Spring framework has been the de facto standard framework for building Java applications for some time now. Providing an IoC container and performing dependency injection was just the start. Since its initial release in 2002, Spring has expanded and matured, providing developers with familiar, patterns-based abstractions for common components throughout an application’s layers. As Spring grew, its configuration became more and more unwieldy: the framework became known as one that involved a fair amount of effort to set up and get going, and even the most trivial of projects came with a fair amount of boilerplate configuration. There was no easy place to start. Maven filled this gap in the early days, pushing the community toward convention over configuration and dependency management through the use of project archetypes, but the same problem eventually cropped up – repetitive, difficult-to-manage configuration.

With the first release of Rails in 2005, the developer community saw what was possible in terms of a developer-friendly framework that all but eliminated the perceived shortcomings of frameworks like Spring. Frameworks like Rails came to be known as Rapid Application Development (RAD) frameworks. These frameworks shared many of the same characteristics – a well-defined convention, opinionated default configurations, and scaffolding tools used to quickly create pre-configured components. In 2009, the Spring developers responded to the trend of RAD frameworks with the release of Spring Roo. Spring Roo was never billed as an attempt to replace Spring, only to enhance it by eliminating the shortcomings of vanilla Spring. Spring Roo provided a well-defined convention and scaffolding tools, but was driven by AspectJ and relied on a significant amount of code generation to eliminate boilerplate code. This led to difficulty troubleshooting configuration problems, and a steeper learning curve for developers new to Java and Spring.

Enter Spring Boot…
In 2014, the Spring development team released a next-generation take on Spring named Spring Boot. Spring Boot provides many of the same RAD-like features as frameworks like Rails, and goes a step further than Roo by eliminating cumbersome XML-based configuration and the mystery of generated code. This is accomplished through the use of auto-configuration classes. Each Spring Boot module is packaged with a default configuration – the boilerplate code developers used to be responsible for creating. These auto-configuration classes provide the opinionated configuration familiar to users of other RAD frameworks, while following the basic best practices familiar to users of traditional Spring. In the next few sections, we’ll get a new project up and running from scratch and see auto-configuration in action.

Creating A New Project
A newer feature of the Spring Boot project is the Spring Initializr, a website (http://start.spring.io) that allows a developer to choose a starting point for their application with a one-page form that concludes with a ‘Generate Project’ button and a download of the shell project. Below, you can find the steps I used to configure a basic project:

Screen Shot 2015-02-15 at 4.12.48 PM

These choices produced the following project structure:

Screen Shot 2015-02-15 at 3.08.51 PM

This project can be built and run, but without at least one controller we’ll have no page to display other than a default error page. Let’s create a controller and test out the app.

Create a file in the root package of the project – in my case it’s /src/main/java/com/spr/demo/SampleController:


package demo;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

/**
 * Created by justin on 2/15/15.
 */
@Controller
public class SampleController {
    @RequestMapping("/")
    @ResponseBody
    String home() {
        return "Hello World!";
    }
}

Next, start the server from the root of your project using the pre-configured Gradle script provided to us by Spring Initializr.

Justins-MacBook-Pro:demo justin$ gradle bootRun

You can see in the console output that the app started an already-configured Tomcat server running on port 8080. If we browse to http://localhost:8080 in a browser, we should see the following:

Screen Shot 2015-02-15 at 3.42.49 PM

That’s a full-fledged Spring app with a wired controller and a base configuration. Counting every line, that’s only 30 lines of code!

Adding A Template
You may remember that we chose a templating library (Thymeleaf) as part of our initial configuration on the Spring Initializr page. Let’s add a page template to our example to show how simple it is to set that up as well. To do this, we’ll have to create the template itself and change our controller slightly. In the earlier screenshot of our project, you’ll see we have a ‘templates’ directory in our ‘src/main/resources’ directory. Create a file there named ‘hello.html’:

/src/main/resources/hello.html:


<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
<head lang="en">
    <meta charset="UTF-8" />
    <title>HELLO</title>
</head>
<body>
<p th:text="${message}"></p>
</body>
</html>

You can see we’ve added a placeholder for a string named ‘message’ that we’ll supply from our controller.

Next, let’s update our controller to populate the ‘message’ element:

/src/main/java/demo/SampleController.java:


package demo;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

/**
 * Created by justin on 2/15/15.
 */
@Controller
public class SampleController {

    @RequestMapping("/")
    public String index(Model model) {
        model.addAttribute("message", "HELLO WORLD!");
        return "hello";
    }
}


Now, when we run our app, we should see a different result – one that builds a page using the template we just created:

Screen Shot 2015-02-15 at 4.02.52 PM

Who’s Behind The Curtain?
You can see we’ve created a full Spring app with a basic configuration, configured a controller, and started using a template engine to render our pages. If there is no generated code driving this, where is the configuration coming from?

If we take a closer look at the console output from the server starting, we can see several references to a class named ‘org.springframework.boot.autoconfigure.web.WebMvcAutoConfiguration’. This is where the magic happens. Let’s take a look at the source on Github (https://github.com/spring-projects/spring-boot/blob/master/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/web/WebMvcAutoConfiguration.java). Browsing through this source, we can see references to familiar bean configurations, the paths to pre-configured property file locations, the base configuration to enable Spring MVC, and much more.

Continued Reading
The Spring Boot docs contain great resources for getting started. The ‘Getting Started’ section provides a nice foundation for those new to the tool. http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#getting-started

While the Spring Boot docs contain a lot of valuable information, several of the examples fall a bit short of what we commonly see at clients. Last year, I wrote a 5 part series that expands on the Getting Started guides provided by Spring. https://justinrodenbostel.com/2014/04/08/beyond-the-examples/. This explores some common problems that Spring easily solves: nested form binding, security integration, internationalization, and more.

Getting Started With Ionic & NgCordova

In my most recent engagement, I’ve been working on a hybrid mobile app built using Ionic and ngCordova. Functionality-wise, the app itself is fairly straightforward, but since this is my first project that directly targets mobile devices (as opposed to responsive web), I’ve learned a few things that I think are worth sharing. Like most posts, the information contained here has been cobbled together from many different sources during my time on this project. The purpose of this post is to walk through how to configure a new, fully testable project using Ionic and ngCordova. As always, the code for this post is on Github (https://github.com/jrodenbostel/getting-started-with-ionic-and-ngcordova).

Ionic (http://ionicframework.com) Hybrid mobile app development frameworks have been around for quite some time now, and the Ionic Framework is one of the better entries I’ve seen to date. Based on current web technologies and frameworks (HTML5, CSS3, AngularJS (https://angularjs.org)), and leveraging a tried-and-true native container that runs on many devices (http://cordova.apache.org), Ionic provides a mostly-familiar starting point for folks new to mobile development. On top of that, Ionic is also packaged with a nice set of UI components and icons that help applications look nice as well as function smoothly.

ngCordova (http://ngcordova.com) The ngCordova project wraps the Cordova API to make it more Angular-friendly, giving you the ability to inject Cordova components as dependencies into your Angular controllers, services, etc. This project is still new and changing rapidly, but it simplifies development greatly and makes code that calls Cordova from within Ionic more readable and more easily testable.
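To see why that wrapping matters, here is a minimal, framework-free sketch of the idea (illustrative only – this is not ngCordova’s actual source, and the function names are made up): instead of reaching for a Cordova global, consuming code receives the device object as a dependency, so a fake can be swapped in when testing.

```javascript
// Simplified sketch of the wrapper idea (not ngCordova's actual source).
// Cordova normally exposes a global like `window.device`; a wrapper hands
// it to consumers as an injectable dependency instead.
function makeCordovaDevice(device) {
  return {
    getPlatform: function () { return device.platform; },
    getUUID: function () { return device.uuid; }
  };
}

// A controller-like function that depends on the wrapper, not the global:
function describeDevice(cordovaDevice) {
  return 'Running on ' + cordovaDevice.getPlatform();
}

// In production the real global would be passed in; in a test, a stub:
var stubDevice = { platform: 'ios', uuid: 'test-uuid' };
console.log(describeDevice(makeCordovaDevice(stubDevice))); // prints "Running on ios"
```

Because the dependency arrives as a parameter, nothing in the consuming code knows (or cares) whether it is talking to a real device or a stand-in – which is exactly the property ngCordova gives Angular code.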

Others (Yeoman, Grunt, Bower, Karma, Protractor, Mocha, Chai) These are the tools we’ll use to build our app. They are used for a variety of things, and all are introduced from the same source – Yeoman (http://yeoman.io). Remember how revolutionary the scaffolding features of Rails were when they first surfaced? Yeoman provides scaffolding like that, and anyone can write a generator. There happens to be a generator for Ionic, and in my opinion, it’s all but necessary to use. Out of the box, you get a working app shell, a robust Grunt (http://gruntjs.com) script for app assembly, packaging, emulation, etc., dependency management via Bower (http://bower.io), and example Mocha (http://mochajs.org) tests running via Karma (http://karma-runner.github.io/0.12/index.html). The only item we’ll add is support for end-to-end integration tests with Protractor (https://github.com/angular/protractor).

Prerequisites Before we start, you’ll need to have node.js (http://nodejs.org) and npm (https://www.npmjs.org) installed on your machine. Installation instructions can be found here (http://nodejs.org/download/) and here (http://blog.npmjs.org/post/85484771375/how-to-install-npm).

Step 1 – Scaffold Install Yeoman using the following command:

npm install -g yo

Install the Ionic Generator for Yeoman using the following command:

npm install -g generator-ionic

Create a folder for your project and use the newly installed generator to build the shell of an Ionic app. Be sure you’re executing these commands in the root of your project folder. You can answer the questions however you’d like. If you’re interested in following along, I’ve included the answers I used and the relevant output below:

Justins-MacBook-Pro:getting-started-with-ionic-and-ngcordova justin$ yo ionic
    _             _
   (_)           (_)
    _  ___  _ __  _  ___
   | |/ _ \| '_ \| |/ __|
   | | (_) | | | | | (__
   |_|\___/|_| |_|_|\___|

[?] Would you like to use Sass with Compass (requires Ruby)? Yes
Created a new Cordova project with name "GettingStartedWithIonicAndNgcordova" and id "com.example.GettingStartedWithIonicAndNgcordova"
[?] Which Cordova plugins would you like to include? org.apache.cordova.console, org.apache.cordova.device
[?] Which starter template [T] or example app [A] would you like to use? [T] Tabs

Install plugins registered at plugins.cordova.io: grunt plugin:add:org.apache.cordova.globalization
Or install plugins direct from source: grunt plugin:add:https://github.com/apache/cordova-plugin-console.git

Installing selected Cordova plugins, please wait.
Installing starter template. Please wait

     info ... Fetching http://github.com/diegonetto/ionic-starter-tabs/archive/master.tar.gz ...
     info This might take a few moments

Step 2 – Run! Validate there weren’t any issues running the generator by starting the app. The Yeoman generator we’ve used includes a full-featured build script with a variety of ways to start up our app. We’ll use more features of the script later, but for a complete list of available commands, visit the generator’s Github page (https://github.com/diegonetto/generator-ionic). For now, we’ll serve the app with the simple http server included as part of our sample app (courtesy of the Yeoman generator) using the following command (from the root of your project folder):

grunt serve

This should have started the server and opened your default browser. In the browser, you should see something similar to the screenshot below:

Screen Shot 2015-02-04 at 2.39.34 PM

Step 3 – ngCordova We’re off to a nice start – a fully functional app, running in the browser, with automated chai tests (using Karma (http://karma-runner.github.io/0.12/index.html) via grunt test) and some static code analysis (using jshint (http://jshint.com/) via grunt jshint) in place, all as a result of our Yeoman generator. If we explore the generated code, we notice that the app itself is very simple. As soon as we start writing code that depends on device APIs (checking for a network connection, identifying the current device, etc), we run into a problem: there’s only a global reference to Cordova, and there isn’t a nice way to inject Cordova into our Angular controllers, especially for testing. This is where ngCordova comes into play. Here, we’ll write some simple code that checks the device platform the app is currently running on and displays it on the opening screen. Let’s start by writing a test that looks for an object in the scope of the DashCtrl called ‘devicePlatform’. First, there are a few different ways to run the tests. One enables watching, but doesn’t run the tests immediately (you have to leave it on, and it runs tests as/when files in your project change); the other just runs the tests on demand. With watching:

grunt test

On demand:

grunt karma

At the bottom of ‘/test/spec/controllers.js’, add a test for the DashCtrl with the following code:

describe('Controller: DashCtrl', function () {

  var should = chai.should();

  // load the controller's module
  beforeEach(module('GettingStartedWithIonicAndNgcordova'));

  var DashCtrl,
    scope;

  // Initialize the controller and a mock scope
  beforeEach(inject(function ($controller, $rootScope) {
    scope = $rootScope.$new();
    DashCtrl = $controller('DashCtrl', {
      $scope: scope
    });
  }));

  it('should inspect the current devicePlatform', function () {
    scope.devicePlatform.should.equal('ios');
  });

});

Immediately after adding that code to our test file (if you’re using ‘grunt test’) or running the tests on demand (using ‘grunt karma’), we should see results in our terminal window, and we should see that this test has failed because ‘devicePlatform’ is undefined in the DashCtrl’s scope.

PhantomJS 1.9.8 (Mac OS X) Controller: DashCtrl should inspect the current devicePlatform FAILED
TypeError: 'undefined' is not an object (evaluating 'scope.devicePlatform.should')

Next, we’ll install ngCordova and implement the logic this test is exercising. Detailed instructions on installing ngCordova can be found here (http://ngcordova.com/docs/install/). The simplest install is via Bower using the following command:

bower install ngCordova

Add a reference to the newly installed ngCordova to your app/index.html file, above the reference to cordova, such that:

    <script src="lib/ngCordova/dist/ng-cordova.js"></script>
    <!-- cordova script (this will be a 404 during development) -->
    <script src="cordova.js"></script>

To get the device OS, we’ll need to use Cordova’s Device plugin. If you haven’t already, we’ll need to make sure that it’s installed. Use the following command to install it:

cordova plugin add org.apache.cordova.device

Next, we’ll add ngCordova to our project as a module. In app/scripts/app.js, change this line:

angular.module('GettingStartedWithIonicAndNgcordova', ['ionic', 'config', 'GettingStartedWithIonicAndNgcordova.controllers', 'GettingStartedWithIonicAndNgcordova.services'])

to:

angular.module('GettingStartedWithIonicAndNgcordova', ['ionic', 'config', 'GettingStartedWithIonicAndNgcordova.controllers', 'GettingStartedWithIonicAndNgcordova.services', 'ngCordova'])

Next, let’s write the code that adds the device platform to the DashCtrl scope. Start by injecting the device plugin into the DashCtrl using the code below:

.controller('DashCtrl', function($scope, $cordovaDevice) {
})

Then create the devicePlatform scope variable and set its value to the device’s actual platform using the following code:

.controller('DashCtrl', function($scope, $cordovaDevice) {
     $scope.devicePlatform = $cordovaDevice.getPlatform();
})

Finally, add a reference to the device plugin to templates/tab-dash.html:

<ion-view title="Dashboard">
  <ion-content class="has-header padding">
    <h1>Dash</h1>
     <h2>{{devicePlatform}}</h2>
  </ion-content>
</ion-view>

You’ll notice that when we run our tests again, they still fail. This is because karma tests run in the browser, and the browser doesn’t interact with Cordova plugins – there’s no platform for the browser, no model, no device for Cordova to plug in to. We’ll need to add a few more things to get this working. At this point, if you’re interested in continuing on under the assumption that you’ll only be unit testing, and never testing in the browser (which includes automated end-to-end testing) prior to testing on the device, you can simply mock any calls to Cordova using spies/doubles. I think there’s value in automated end-to-end testing and manual browser testing prior to testing on devices – it’s an easy and efficient way to troubleshoot your code in an environment isolated from platform dependencies. In that case, we’ll use ngCordovaMocks (and some grunt scripting) to make our unit tests pass in our development environment, we’ll add Protractor so we can test our app end-to-end prior to running on the device, and finally, we’ll run the app in the iOS emulator to complete our validation.
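For reference, the spy/double approach mentioned above can be sketched in plain JavaScript (a hand-rolled double, no mocking framework involved – the controller shape here is simplified and hypothetical): the controller logic takes its Cordova-facing dependency as a parameter, and the test passes in a recording fake.

```javascript
// Hand-rolled test double sketch (framework-free) for the spy/double
// approach. The controller logic takes its $cordovaDevice-like
// dependency as a parameter, so a fake can stand in for the real plugin.
function dashCtrl(scope, cordovaDevice) {
  scope.devicePlatform = cordovaDevice.getPlatform();
}

// A minimal spy: records how often it was called and returns a canned value.
function makeSpy(returnValue) {
  var spy = { calls: 0 };
  spy.fn = function () { spy.calls += 1; return returnValue; };
  return spy;
}

var platformSpy = makeSpy('ios');
var scope = {};
dashCtrl(scope, { getPlatform: platformSpy.fn });

console.log(scope.devicePlatform); // prints "ios"
console.log(platformSpy.calls);    // prints 1
```

This is all a library like ngCordovaMocks automates for you: canned return values plus injectable stand-ins, minus the hand-rolling.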

ngCordovaMocks You might notice in the ngCordova Bower package that there is an additional set of files named ‘ng-cordova-mocks’. These complement ngCordova by providing empty implementations of the services that ngCordova wraps, which can be injected in place of the standard ngCordova implementations for testing purposes. First, we’ll need to add references in two places – our application config and our test config. For the application configuration, update our app’s module definition in /scripts/app.js:

angular.module('GettingStartedWithIonicAndNgcordova', ['ionic', 'config', 'GettingStartedWithIonicAndNgcordova.controllers', 'GettingStartedWithIonicAndNgcordova.services', 'ngCordovaMocks'])

The test config can be found in our Grunt script. In /Gruntfile.js, find the karma task. In the karma task, you should see a configuration option named ‘files’. Add a line to update it to the following:

        files: [
          '<%= yeoman.app %>/lib/angular/angular.js',
          '<%= yeoman.app %>/lib/angular-animate/angular-animate.js',
          '<%= yeoman.app %>/lib/angular-sanitize/angular-sanitize.js',
          '<%= yeoman.app %>/lib/angular-ui-router/release/angular-ui-router.js',
          '<%= yeoman.app %>/lib/ionic/release/js/ionic.js',
          '<%= yeoman.app %>/lib/ionic/release/js/ionic-angular.js',
          '<%= yeoman.app %>/lib/angular-mocks/angular-mocks.js',
          '<%= yeoman.app %>/lib/ngCordova/dist/ng-cordova-mocks.js',
          '<%= yeoman.app %>/<%= yeoman.scripts %>/**/*.js',
          'test/mock/**/*.js',
          'test/spec/**/*.js'
        ],

Now, we’ll update our test to use the new ngCordovaMocks library. We’ll add a reference to the ngCordovaMocks module, we’ll inject a decorated version of our $cordovaDevice plugin into our DashCtrl, and we’ll update our test condition accordingly.

describe('Controller: DashCtrl', function () {

  var should = chai.should(), $cordovaDevice = null, $httpBackend, DashCtrl, scope;

  beforeEach(module('GettingStartedWithIonicAndNgcordova'));
  beforeEach(module('ngCordovaMocks'));

  beforeEach(inject(function (_$cordovaDevice_) {
    $cordovaDevice = _$cordovaDevice_;
  }));

  // Initialize the controller and a mock scope
  beforeEach(inject(function ($controller, $rootScope, _$httpBackend_) {
    $httpBackend = _$httpBackend_;
    $httpBackend.when('GET', /templates\S/).respond("");
    $cordovaDevice.platform = 'TEST VALUE';
    scope = $rootScope.$new();
    DashCtrl = $controller('DashCtrl', {
      $scope: scope
    });
    $httpBackend.flush();
  }));

  it('should inspect the current devicePlatform', function () {
    scope.devicePlatform.should.equal('TEST VALUE');
  });

});

You can see we’re now decorating $cordovaDevice, supplying it with a value for its platform property, and asserting that the $cordovaDevice.getPlatform() method returns the correct value via our $scope.devicePlatform variable. We’ve also added a mock $httpBackend (and a subsequent flush) that will listen for and ignore any template requests triggered by our controller initializing. In this way, we can simulate a specific platform and exercise our code in unit tests, AND our app still runs in the browser. At this point, running in the browser without ngCordovaMocks would cause failures. To really see the value of ngCordovaMocks, we’ll add support for Protractor tests.

Protractor (https://github.com/angular/protractor) First, we’ll need to install two node modules that give us new grunt tasks: one to control a Selenium Webdriver (http://www.seleniumhq.org), and one to run our protractor tests.

npm install grunt-protractor-webdriver --save-dev
npm install grunt-protractor-runner --save-dev

While grunt-protractor-runner installs a controller for Selenium Webdriver, we still need a Selenium server. We can install a standalone Selenium server by running the following from the root of our project:

node_modules/protractor/bin/webdriver-manager update

Next, we’ll update our grunt script to include configurations for the new tasks, and add a new task of our own. Include these new tasks somewhere in your grunt.initConfig object:

    protractor_webdriver: {
      all: {
        command: 'webdriver-manager start'
      }
    },
    protractor: {
      options: {
        keepAlive: true, // If false, the grunt process stops when the test fails.
        noColor: false // If true, protractor will not use colors in its output.
      },
      all: {
        options: {
          configFile: 'test/protractor-conf.js'
        }
      }
    },

Then register our custom task somewhere after grunt.initConfig:

  grunt.registerTask('test_e2e', [
    'protractor_webdriver',
    'protractor'
  ]);

We’re not doing anything special in this config – we’re basically using a grunt task to control the Selenium server we could otherwise control from the CLI, and we’re offloading much of our protractor config to a properties file. Next, create the properties file at the path listed above (test/protractor-conf.js):

exports.config = {
    seleniumAddress: 'http://localhost:4444/wd/hub',

    specs: [
        'e2e/**/*.js'
    ],

    framework: 'mocha',

    capabilities: {
        'browserName': 'chrome',
        'chromeOptions': {
          args: ['--args','--disable-web-security']
        }
    },

    /**
     * This should point to your running app instance, for relative path resolution in tests.
     */
    baseUrl: 'http://localhost:8100',
};

Last, we’ll write a new end to end test case and execute it. Create a file at /test/e2e (which is the directory we included in our protractor configuration above). I named mine ‘tabs.js’. Add the content below to the file:

var chai = require('chai');
var chaiAsPromised = require('chai-as-promised');

chai.use(chaiAsPromised);
var expect = chai.expect;

describe('Ionic Dash Tab', function() {

  var decoratedModule = function() {
    var ngCordovaMocks = angular.module('ngCordovaMocks');
    var injector = angular.injector(['ngCordovaMocks', 'ng']);
    ngCordovaMocks.service('$cordovaDevice', function() {
      var cordovaDevice = injector.get('$cordovaDevice');
      cordovaDevice.platform = 'ios';
      return cordovaDevice;
    });
  };

  it('should have the correct heading', function() {
    browser.addMockModule('ngCordovaMocks', decoratedModule);
    browser.get('http://localhost:8100');

    var heading = element(by.css('h2'));
    expect(heading.getText()).to.eventually.equal('ios');
  });
});

In the code above, we’re decorating the $cordovaDevice service (much like we did in the unit tests) by first getting a reference to the ngCordovaMocks module, then getting a handle on an injector instance, then getting the $cordovaDevice service itself, and finally decorating the service by setting the platform to our desired value. In the test itself, we’re adding our newly decorated ngCordovaMocks module to Protractor’s browser instance. At this point, running the tests should yield positive results. You can run them using the custom task we registered by using the following command (be sure your server is running with ‘grunt serve’):

grunt test_e2e

Dynamic Configuration

Since we’ve updated our app to use ngCordovaMocks instead of ngCordova, we need the ability to switch between the two seamlessly. Inspiration for this portion of the post comes from this article: http://www.ecofic.com/about/blog/getting-started-with-ng-cordova-mocks. To do this, we’ll use the grunt-text-replace Grunt task. Install the grunt-text-replace node package using the following command:

npm install grunt-text-replace --save-dev
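Note that unless your Gruntfile auto-loads tasks listed in package.json (via something like load-grunt-tasks, which the Yeoman generator typically wires up), you’ll also need to load the task manually. A minimal sketch, assuming the standard Gruntfile structure:

```javascript
module.exports = function (grunt) {
  // Only needed if tasks aren't auto-loaded; this makes the 'replace'
  // task from grunt-text-replace available to registerTask calls below.
  grunt.loadNpmTasks('grunt-text-replace');

  // ...existing grunt.initConfig and task registrations...
};
```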

Next, add the following config to Gruntfile.js somewhere in your grunt.initConfig object:

replace: {
  production: {
    src: [
      '<%= yeoman.app %>/index.html',
      '<%= yeoman.app %>/<%= yeoman.scripts %>/app.js'
    ],
    overwrite: true,
    replacements: [
      { from: 'lib/ngCordova/dist/ng-cordova-mocks.js', to: 'lib/ngCordova/dist/ng-cordova.js' },
      { from: '\'ngCordovaMocks\'', to: '\'ngCordova\'' }
    ]
  },
  development: {
    src: [
      '<%= yeoman.app %>/index.html',
      '<%= yeoman.app %>/<%= yeoman.scripts %>/app.js'
    ],
    overwrite: true,
    replacements: [
      { from: 'lib/ngCordova/dist/ng-cordova.js', to: 'lib/ngCordova/dist/ng-cordova-mocks.js' },
      { from: '\'ngCordova\'', to: '\'ngCordovaMocks\'' }
    ]
  }
},
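To make the substitutions concrete, here’s a quick sketch (plain Node, not part of the build — the function name is just for illustration) of what the development target does to each source file: swap the real ngCordova script for the mocks, and the module name along with it.

```javascript
// Sketch of the substitutions replace:development performs on each file.
function toDevelopment(source) {
  return source
    .replace('lib/ngCordova/dist/ng-cordova.js', 'lib/ngCordova/dist/ng-cordova-mocks.js')
    .replace("'ngCordova'", "'ngCordovaMocks'");
}

console.log(toDevelopment('<script src="lib/ngCordova/dist/ng-cordova.js"></script>'));
// <script src="lib/ngCordova/dist/ng-cordova-mocks.js"></script>

console.log(toDevelopment("angular.module('app', ['ionic', 'ngCordova'])"));
// angular.module('app', ['ionic', 'ngCordovaMocks'])
```

(The grunt task applies its replacement throughout each file; the sketch above only touches the first match, which is enough to illustrate the mapping.)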

Now we’ll add calls to these tasks to our existing Grunt tasks, as well as create a new init-development task, as seen below:

  grunt.registerTask('test', [
    'replace:development',
    'clean',
    'concurrent:test',
    'autoprefixer',
    'karma',
    'karma:unit:start',
    'watch:karma'
  ]);

  grunt.registerTask('serve', function (target) {
    if (target === 'compress') {
      return grunt.task.run(['compress', 'ionic:serve']);
    }

    grunt.config('concurrent.ionic.tasks', ['ionic:serve', 'watch']);
    grunt.task.run(['init-development', 'concurrent:ionic']);
  });

  grunt.registerTask('init', [
    'replace:production',
    'clean',
    'wiredep',
    'concurrent:server',
    'autoprefixer',
    'newer:copy:app',
    'newer:copy:tmp'
  ]);

  grunt.registerTask('init-development', [
    'replace:development',
    'clean',
    'wiredep',
    'concurrent:server',
    'autoprefixer',
    'newer:copy:app',
    'newer:copy:tmp'
  ]);

  grunt.registerTask('test_e2e', [
    'replace:development',
    'protractor_webdriver',
    'protractor'
  ]);

When we run ‘grunt serve’, the application starts in the web server running with ngCordovaMocks. From there, we can run our automated end-to-end tests using ‘grunt test_e2e’, or run the unit tests standalone using ‘grunt test’. You can see how the tasks above changed to make that possible: each now calls ‘replace:development’ before executing. The ‘test’ task was also altered slightly to include the ‘karma’ task, which runs the tests once on initial invocation, followed by a test watcher. At this point, we can also run in our emulator without issue. To do that, we’ll add the iOS platform to our project and kick off the emulator to see the ‘real’ platform displayed on the screen.

grunt platform:add:ios

…followed by:

grunt emulate:ios

At this point, we’ll start to see a slight divergence between the way the app functions on the web and in the emulator. In some cases, the device becomes available before all of the Cordova plugins are loaded, and the way the screen refreshes as a result also differs slightly. To counter this, we’ll add a bit of logic to our controller to wait for the device to be ready. Update the DashCtrl with the following code:

.controller('DashCtrl', function($scope, $cordovaDevice, $ionicPlatform) {
  $ionicPlatform.ready(function() {
    $scope.devicePlatform = $cordovaDevice.getPlatform();
  });
})

Run the emulator again, and we should see the app functioning properly:

[Screenshot: the app running in the iOS emulator, displaying the real device platform]

That’s it! We can now run in the browser, in the emulator, and through our automated tests with consistency. This setup has paid efficiency dividends for me on my current project, and I hope it helps folks get started on the right foot. This post ended up a lot longer than I thought it would be!

*As previously stated, I ran the Yeoman generator with the ‘Tabs’ example project option. It turns out it came with a broken test; I added a question about this to an open issue on the ngCordova project’s GitHub page (https://github.com/driftyco/ng-cordova). You can find the fixed test below:

'use strict';

describe('Controller: FriendsCtrl', function () {

  var should = chai.should();

  // load the controller's module
  beforeEach(module('GettingStartedWithIonicAndNgcordova'));

  var FriendsCtrl,
    scope;

  // Initialize the controller and a mock scope
  beforeEach(inject(function ($controller, $rootScope) {
    scope = $rootScope.$new();
    FriendsCtrl = $controller('FriendsCtrl', {
      $scope: scope
    });
  }));

  it('should attach a list of pets to the scope', function () {
    scope.friends.should.have.length(4);
  });
});