Friday, 22 July 2016

Docker & Spring Boot

Docker allows you to package an application with its dependencies into a lightweight, portable container that can run on almost any environment. You can think of a Docker container as a runtime, a mini virtual machine that encapsulates your application and its dependencies.
In order to run a container you need a Docker image. An image is like a template that defines everything that will exist within the container. You can think of a container as a runtime instance of the image it was created from. In this post we'll define and build 3 slightly different Docker images that run a simple Java app.

Installing Docker 

If you haven't already done so you'll need to install Docker. The official documentation is pretty good so following it step by step should see you up and running in about 15 minutes.
I've installed Docker on Windows and Ubuntu but to be honest I prefer running it on Ubuntu and have found it a bit more reliable than with Docker Toolbox on Windows.    
Follow the links above to install Docker on your OS of choice. Once you're done, open a terminal window and run docker run hello-world to check that your Docker install is working as expected.

brianh@brianh-VirtualBox:~/apps$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world

c04b14da8d14: Already exists 
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

This command pulls the hello-world image from the Docker Hub repository and uses it to start a container. If you see the output shown above you'll know your Docker installation was successful.

Sample Code 

We're going to look at 3 slightly different ways of building a Docker image to run a simple Spring Boot app. We'll start a container from each of the 3 images and call the application health check to make sure the app is up and running.

The Boot app itself couldn't be simpler, consisting of just an Application class annotated with @SpringBootApplication. This is enough to enable auto-configuration and act as the application entry point. We don't even need to define our own health check as Boot provides one out of the box.

package com.blog.samples.boot;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
   
}
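The out-of-the-box health check mentioned above comes from Spring Boot's Actuator module. Assuming a Maven build, it's pulled in with a single dependency (the version is inherited from the Boot parent POM):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>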



Dockerfile Definition

A Dockerfile is a set of instructions that tells Docker how to build an image. The Dockerfile below defines the steps to build, package and run our app.

FROM anapsix/docker-oracle-java8

# Install maven
RUN apt-get update -y
RUN apt-get install -y maven

# Creating working directory
WORKDIR /app

# Add src to working directory
ADD pom.xml /app/pom.xml
ADD src /app/src

# Build JAR
RUN mvn package -DskipTests=true

# Start app
ENTRYPOINT ["java","-jar","/app/target/docker-sample-1-0.1.0.jar"]

We'll walk through the Dockerfile line by line and explain what's happening.
  • Line 1 - The FROM instruction tells Docker what base image to use as a starting point for our image. I've used anapsix/docker-oracle-java8, which is a lightweight image for Java 8 running on Ubuntu.
  • Line 4 - The RUN instruction tells Docker to run a command, in this case apt-get update -y to update the apt package list in preparation for the next step.
  • Line 5 - tells Docker to run apt-get install -y maven to download and install Maven on the image. The image will use Maven to build our app.
  • Line 8 - The WORKDIR instruction tells Docker to create a working directory on the image. This directory will be used by the ADD and RUN commands defined below.
  • Line 10 - The ADD instruction tells Docker to add the application POM from the host machine to the /app directory on the image.
  • Line 11 - tells Docker to add the application source from the host machine to /app/src on the image. At this point the image has everything it needs to build the project.
  • Line 15 - tells Docker to run mvn package -DskipTests=true from the /app directory on the image. Maven will download all required dependencies and build an executable JAR in the /app/target directory.
  • Line 18 - ENTRYPOINT tells Docker what command to run when the container is started. The comma-separated list of values consists of an executable (java in our case) and a number of parameters. The entry point defined here tells Docker to run the executable JAR from the /app/target directory.
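One small point worth noting: when you run docker build, everything in the build directory is sent to the Docker daemon as build context, so it pays to exclude anything the build doesn't need. A minimal .dockerignore for this first example (my own suggestion, not part of the sample project - example 2 below copies the JAR from target, so it wouldn't suit that variant) might look like this:

# .dockerignore - keep the build context small
.git
target
*.log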



Creating the Image 

Now that we've defined a Dockerfile, let's put it to work by building an image. Run the docker build command, specifying an image name and the location of the Dockerfile. For example docker build -t "docker-sample-1" .

brianh@brianh-VirtualBox:~/apps/docker-spring-boot/docker-sample-1$ docker build -t "docker-sample-1" .
Sending build context to Docker daemon 12.83 MB
Step 1 : FROM anapsix/docker-oracle-java8
 ---> a8a9dcb0ac64
Step 2 : RUN apt-get update -y
 ---> Running in 5d477ccb8f46
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://ppa.launchpad.net trusty InRelease [15.5 kB]
Get:2 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Get:3 http://archive.ubuntu.com trusty-security InRelease [65.9 kB]
Hit http://archive.ubuntu.com trusty Release.gpg
Hit http://archive.ubuntu.com trusty Release

// Lots of output from apt update and maven build removed for brevity

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9:06:42.103s
[INFO] Finished at: Thu Jul 21 16:24:04 UTC 2016
[INFO] Final Memory: 21M/107M
[INFO] ------------------------------------------------------------------------
 ---> b362beb8e90d
Removing intermediate container 512fd61457e6
Step 8 : RUN ls /app/target
 ---> Running in 7eba00207f50
classes
docker-sample-1-0.1.0.jar
docker-sample-1-0.1.0.jar.original
generated-sources
maven-archiver
maven-status
 ---> 613a5f6ef5bb
Removing intermediate container 7eba00207f50
Step 9 : ENTRYPOINT java -jar /app/target/docker-sample-1-0.1.0.jar
 ---> Running in e43707589d45
 ---> 2e4f625b102e
Removing intermediate container e43707589d45
Successfully built 2e4f625b102e

Docker runs each instruction in the Dockerfile step by step. Step 1 in this instance runs quickly because I already have the base image cached locally. When you're building this for the first time you likely won't have the anapsix/docker-oracle-java8 image, so Docker will pull it from the Docker Hub repository. Subsequent builds will use the locally cached image and as a result will run much quicker. For each step Docker does the following:
  • creates a new intermediate container
  • runs the command inside that container 
  • commits the change as a new image layer 
  • removes the intermediate container and moves to the next step
The new image consists of multiple layers stacked one on top of the other, one for each instruction in the Dockerfile. Run the docker images command to see the newly created image.

brianh@brianh-VirtualBox:~/apps$ docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
docker-sample-1               latest              2e4f625b102e        33 minutes ago      936 MB
anapsix/docker-oracle-java8   latest              a8a9dcb0ac64        3 weeks ago         784.5 MB
                                                              
Note the Image ID is the same as that output at the end of the build. To see the various layers that make up the new image run the docker history command as follows. 

brianh@brianh-VirtualBox:~/apps$ docker history docker-sample-1
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
2e4f625b102e        37 minutes ago      /bin/sh -c #(nop) ENTRYPOINT ["java" "-jar" "   0 B                 
613a5f6ef5bb        37 minutes ago      /bin/sh -c ls /app/target                       0 B                 
b362beb8e90d        37 minutes ago      /bin/sh -c mvn package -DskipTests=true         37.51 MB            
0f3103ac18be        9 hours ago         /bin/sh -c #(nop) ADD dir:82830cfed5011783b44   1.276 kB            
73ccd6348460        9 hours ago         /bin/sh -c #(nop) ADD file:cbad7ca7f8efa76f28   1.349 kB            
22f2ab199dd7        9 hours ago         /bin/sh -c #(nop) WORKDIR /app                  0 B                 
5e3e8435f2b4        9 hours ago         /bin/sh -c apt-get install -y maven             92.07 MB            
6c352c184d38        9 hours ago         /bin/sh -c apt-get update -y                    21.9 MB             
a8a9dcb0ac64        3 weeks ago         /bin/sh -c #(nop) ENV JAVA_HOME=/usr/lib/jvm/   0 B                 
<missing>           3 weeks ago         /bin/sh -c apt-get update && DEBIAN_FRONTEND=   583.6 MB            
<missing>           3 weeks ago         /bin/sh -c apt-key adv --keyserver keyserver.   25.18 kB            
<missing>           3 weeks ago         /bin/sh -c echo "deb http://ppa.launchpad.net   65 B                
<missing>           3 weeks ago         /bin/sh -c echo "oracle-java8-installer share   2.677 MB            
<missing>           3 weeks ago         /bin/sh -c #(nop) ENV LC_ALL=en_US.UTF-8        0 B                 
<missing>           3 weeks ago         /bin/sh -c #(nop) ENV LANG=en_US.UTF-8          0 B                 
<missing>           3 weeks ago         /bin/sh -c locale-gen en_US.UTF-8               1.621 MB            
<missing>           3 weeks ago         /bin/sh -c #(nop) MAINTAINER Anastas Dancha "   0 B                 
<missing>           3 weeks ago         /bin/sh -c #(nop) CMD ["/bin/bash"]             0 B                 
<missing>           3 weeks ago         /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/   1.895 kB            
<missing>           3 weeks ago         /bin/sh -c rm -rf /var/lib/apt/lists/*          0 B                 
<missing>           3 weeks ago         /bin/sh -c set -xe   && echo '#!/bin/sh' > /u   8.841 MB 
   
Note that lines 3 to 11 list the image layers that were added as a result of each instruction executed in our Dockerfile.

Running the Container 

Now that we've built the image we're ready to start a container using the docker run command. When the container starts it will launch the Java app on container port 8080. We need to tell Docker to map container port 8080 to a port on the host machine so that we can access the application running inside the container. We do this using the -p 8080:8080 argument as part of the docker run command.

brianh@brianh-VirtualBox:~/apps/docker-spring-boot/docker-sample-1$ docker run -p 8080:8080 docker-sample-1

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.2.7.RELEASE)

3051 [main] INFO  com.blog.samples.boot.Application - Starting Application v0.1.0 on 81828366ca4e with PID 1 (/app/target/docker-sample-1-0.1.0.jar started by root in /app) 
3382 [main] INFO  o.s.b.c.e.AnnotationConfigEmbeddedWebApplicationContext - Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@66429cee: startup date [Thu Jul 21 17:15:31 UTC 2016]; root of context hierarchy 
6707 [main] INFO  o.s.b.f.s.DefaultListableBeanFactory - Overriding bean definition for bean 'beanNameViewResolver': replacing [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration.class]] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter.class]] 
8176 [main] INFO  o.h.validator.internal.util.Version - HV000001: Hibernate Validator 5.1.3.Final 
10328 [main] INFO  o.s.b.c.e.t.TomcatEmbeddedServletContainer - Tomcat initialized with port(s): 8080 (http) 
11235 [main] INFO  o.a.catalina.core.StandardService - Starting service Tomcat 

When the container starts it runs the ENTRYPOINT command java -jar /app/target/docker-sample-1-0.1.0.jar specified in the Dockerfile. The application will start on container port 8080 and Docker will bind it to port 8080 on the host.
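A few standard docker run flags (not specific to this sample) are worth knowing: -d runs the container in the background, --name gives it a friendly name, and the first number in -p maps the app to a different host port. For example:

docker run -d -p 80:8080 --name sample-1 docker-sample-1   # detached, mapped to host port 80
docker logs -f sample-1                                    # follow the app logs
docker stop sample-1                                       # stop the container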

Testing the Application

To test that the app is up and running we can call the app health check using a simple curl command. We should see some activity in the logs and receive a HTTP 200 response.

brianh@brianh-VirtualBox:~/apps/docker-spring-boot/docker-sample-1$ curl -i localhost:8080/health
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
X-Application-Context: application
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Thu, 21 Jul 2016 17:15:54 GMT



Other Examples  

Next we'll look at 2 more Docker examples that are slight variations of the one above. I won't describe these in the same detail but feel free to pull them from Github and have a play around.

Example two is a simplified version of the first example; it simply adds the app JAR to the image and runs it. In this instance you build and package the app on the host and copy the JAR into the image. This keeps the image lighter as it doesn't have to install Maven or build up a local Maven repository like the first example did.

FROM anapsix/docker-oracle-java8

# Creating working directory
WORKDIR /app

# Add src to working directory
ADD target/docker-sample-2-0.1.0.jar /app/docker-sample-2-0.1.0.jar

# Start app
ENTRYPOINT ["java","-jar","/app/docker-sample-2-0.1.0.jar"]

Example 3 is a slightly different variation again. Rather than using the ADD command to copy the application artifact from the host machine, we pass a URL to the ADD command to pull the artifact from a remote location. I've used S3 in this example but you could pull your app from a CI server like TeamCity or Jenkins, or anywhere else you please.

FROM anapsix/docker-oracle-java8

# Creating working directory
WORKDIR /app

# Pull artifact from repo and add to working directory
ADD https://s3-us-west-2.amazonaws.com/docker-boot-artifact/docker-sample-3-0.1.0.jar /app/docker-sample-3-0.1.0.jar

# Start app
ENTRYPOINT ["java","-jar","/app/docker-sample-3-0.1.0.jar"]



Source Code  

The source code for each of these examples is on Github and split into 3 separate projects. Pull the code down, play around with it and if you have any comments, questions or suggestions just leave a note below.

Tuesday, 17 May 2016

An Introduction to Wiremock

This post provides a brief introduction to Wiremock, showing how it can be used to quickly and easily mock remote API calls. We'll use Wiremock to write some integration tests for a simple Dropwizard app and show how it can be put to use in a real world scenario.

Why would I need to mock external API calls? 

There are a number of scenarios where it makes sense to mock an external API rather than call a live service.
  • The external API may still be in development and not yet available for integration. In this instance, as long as a data contract has been defined (e.g. Swagger spec, WSDL), the remote API can be stubbed based on that contract. Stubbed endpoints allow a team to continue development even when an external API isn't fully implemented.
  • You may have little or no control over the external API's uptime in development or test. As a result you cannot guarantee it will be available when running integration tests. In this instance it makes sense to use mocked responses so that your tests don't fail because an external dependency is down. This is especially important if integration tests are run as part of your Continuous Integration pipeline.
  • You may want to test fault tolerance scenarios that aren't particularly easy to reproduce against a live API. You may want the API to behave badly, but in a very specific way, in order to test how your application deals with a remote failure. A remote call timing out is one example that isn't easy to set up on a live API. Using mocked responses allows you to easily test a variety of failure scenarios and ensure your application behaves as expected.

How does it work? 

Wiremock uses a Jetty servlet container to expose HTTP endpoints that can be configured to behave in a specific way. Stubbed endpoints can be configured to return any HTTP response code, header and body, allowing you to test a wide variety of integration scenarios. Even though stubbed responses are being returned, from the client application's perspective the remote calls appear authentic, so the client behaves exactly as it would when integrating with a live API.
Wiremock can be deployed as a standalone server, returning mocked responses for preconfigured endpoints. It can also be started and stopped on the fly as part of an integration test suite, which is the approach we're going to take in this post. I like the idea of being able to start the server, configure a stubbed response, run an integration test and then tear down the stub when we're done.
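Wiremock is pulled into the project as a plain test dependency. With Maven the coordinates look like this (1.58 was a current release at the time of writing; check for the latest version):

<dependency>
    <groupId>com.github.tomakehurst</groupId>
    <artifactId>wiremock</artifactId>
    <version>1.58</version>
    <scope>test</scope>
</dependency>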

Sample Code

The sample code is a small Dropwizard app that exposes a single endpoint. An integration test will call the test endpoint, triggering a remote API call which will be serviced by a Wiremock stub.  
  

Customer Resource

The CustomerResource class is a Jersey managed resource that exposes a simple endpoint. In the constructor we pass the external API URL and configure the Client that will be used for the remote call.
On line 22 the Customer that was sent to the endpoint is used to issue a HTTP POST to the credit check API. A CreditCheckResult JSON response is expected from the API, which is then returned to the client.

@Path("/customer")
public class CustomerResource {

    private String creditCheckServerUrl;
    private Client client;
 
    public CustomerResource(String creditCheckServerUrl) {
     
        this.creditCheckServerUrl = creditCheckServerUrl; 
        client = ClientBuilder.newClient();
        client.property(ClientProperties.CONNECT_TIMEOUT, 1000);
        client.property(ClientProperties.READ_TIMEOUT,    3000);
    }
    
    @POST
    @Path("/perform-customer-credit-check")
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes({MediaType.APPLICATION_JSON})
    public Response performCustomerCreditCheck(Customer customer) throws Exception {
     
        WebTarget webTarget = client.target(creditCheckServerUrl + "/credit-check-api");      
        Response response = webTarget.request().post(Entity.entity(customer, MediaType.APPLICATION_JSON));
     
        if(response.getStatus() == 200){
            return Response.ok(response.readEntity(CreditCheckResult.class), MediaType.APPLICATION_JSON).build();      
        }
     
        throw new CreditCheckFailedException("Error occurred calling Check Service");
    }
}
Figure 1.0 - customer credit check endpoint

Another component worth mentioning is ApplicationExceptionMapper. This class is used to translate application exceptions into HTTP responses. If a SocketTimeoutException is thrown a 503 Service Unavailable is returned along with a message to say the credit check service call timed out. We'll see this in action later with a fault tolerance integration test. Other exception types result in a 500 Internal Server Error and a generic error message.

@Provider
public class ApplicationExceptionMapper implements ExceptionMapper<Throwable>
{  
    @Override
    public Response toResponse(Throwable exception)
    {          
     if(exception.getCause() instanceof SocketTimeoutException){
      return Response.status(Response.Status.SERVICE_UNAVAILABLE).
                      entity("Credit Check Service Timed Out").
                      type(MediaType.APPLICATION_JSON).build();
     }
     else{
      return Response.status(Response.Status.INTERNAL_SERVER_ERROR).
                      entity("Error occurred calling Check Service").
                      type(MediaType.APPLICATION_JSON).build();      
     }      
    }
}

Figure 1.1 - exception mapper


Integration Test Configuration

To see Wiremock in action we're going to create 3 integration tests, all of which will call the performCustomerCreditCheck endpoint defined above. Before any tests can be written we need to do some general configuration.
  • DropwizardAppRule starts the Dropwizard application before the tests run and stops it again after they finish. This is handy as it saves us having to manually start and stop the server each time we want to run our tests.
  • WireMockRule starts a Jetty container so that Wiremock can serve the mock HTTP responses defined in our tests. Jetty is started on the port specified in the WireMockRule constructor. 
  • Client object is used to call the Jersey endpoint.
  • ObjectMapper is used to serialize/deserialize JSON. 

public class CustomerResourceTest {

 @ClassRule    
 public static final DropwizardAppRule<WireMockDemoAppConfig> RULE = 
                 new DropwizardAppRule<WireMockDemoAppConfig>(WireMockDemoApp.class, 
                    ResourceHelpers.resourceFilePath("config.yml"));
 @Rule
 public WireMockRule wireMockRule = new WireMockRule(8090);
 
 private Client client = ClientBuilder.newClient();
 private ObjectMapper mapper = new ObjectMapper();


Test 1 - Credit Check Success

Let's start with a happy path test that expects a successful response from the credit check API. Lines 4 to 10 configure a stub that will expose an endpoint at /credit-check-api. The stub expects a HTTP POST request with an application/json content type and a request body containing Customer JSON. If Wiremock receives a request matching these criteria it will return a HTTP 200, an application/json content type and a response body containing the supplied CreditCheckResult JSON. Note that getCustomerJson, getCreditCheckJson and getCreditCheckResult are local helper methods used to build the mock request and response data.
 
Now that a stub has been configured for the remote API call, we can call the endpoint we created earlier at /customer/perform-customer-credit-check. On line 13 we do a HTTP POST with a request body containing Customer JSON.

    @Test
    public void testCustomerCreditCheckSuccessResponse() throws Exception {

     stubFor(post(urlEqualTo("/credit-check-api"))
                .withHeader("Content-Type", WireMock.equalTo("application/json"))
                .withRequestBody(WireMock.equalTo(getCustomerJson()))
                .willReturn(aResponse()
                    .withStatus(200)
                    .withHeader("Content-Type", "application/json")
                    .withBody(getCreditCheckJson())));
         
     WebTarget webTarget = client.target("http://localhost:8080/customer/perform-customer-credit-check");
     Response response = webTarget.request(MediaType.APPLICATION_JSON).
                                  post(Entity.entity(getCustomer(), MediaType.APPLICATION_JSON));
     
     assertThat(response.getStatus(), equalTo(200));     
     assertThat(response.readEntity(CreditCheckResult.class), equalTo(getCreditCheckResult()));
    }

Let's quickly revisit the endpoint we created earlier (figure 1.0 above). The HTTP POST on line 13 above will be handled by the performCustomerCreditCheck endpoint. This endpoint will then make a call out to the credit check API (figure 1.0, line 22), which we configured above as a Wiremock stub. The stub will return a CreditCheckResult response and a HTTP 200, resulting in a similar response being returned by performCustomerCreditCheck.
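As well as stubbing responses, Wiremock can verify that the remote call actually happened. If you wanted to tighten the test above you could add a verification step after the assertions; a minimal sketch using Wiremock's verify API:

     /* assert that exactly one POST reached the stubbed credit check endpoint */
     verify(1, postRequestedFor(urlEqualTo("/credit-check-api"))
                .withHeader("Content-Type", WireMock.equalTo("application/json")));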

Test 2 - Credit Check Failure

Next we'll look at a failure scenario that expects an error response from the credit check API. This time the Wiremock stub is configured to return a HTTP 503, testing how our endpoint handles an error response from the remote API.

    @Test
    public void testCustomerCreditCheckErrorResponse() throws Exception {

     stubFor(post(urlEqualTo("/credit-check-api"))
                .withHeader("Content-Type", WireMock.equalTo("application/json"))
                .withRequestBody(WireMock.equalTo(getCustomerJson()))
                .willReturn(aResponse()
                    .withStatus(503)
                    .withHeader("Content-Type", "application/json")));
                    
     WebTarget webTarget = client.target("http://localhost:8080/customer/perform-customer-credit-check");
     Response response = webTarget.request(MediaType.APPLICATION_JSON).
                                  post(Entity.entity(getCustomer(), MediaType.APPLICATION_JSON));
     
     assertThat(response.getStatus(), equalTo(500));     
     assertThat(response.readEntity(String.class), equalTo("Error occurred calling Check Service"));
    }

This test sends a HTTP POST to /customer/perform-customer-credit-check, which results in an external call out to the credit check API (figure 1.0, line 22). The stub will return a HTTP 503, which will result in the endpoint throwing a CreditCheckFailedException. The ApplicationExceptionMapper defined earlier (figure 1.1) will translate the CreditCheckFailedException into a HTTP 500 response which is returned to the client.

Test 3 - Credit Check Service Timeout

Our final test will look at a fault tolerance scenario. The Wiremock stub is configured to return a successful HTTP 200, but this time the response is delayed by 6 seconds. This simulates a slow or unresponsive API and allows us to test how our application handles such a scenario. It's good practice to terminate external calls if the remote endpoint does not respond within a reasonable period.
In the CustomerResource defined earlier (figure 1.0) we configured the Client with a read timeout of 3 seconds. This means that if the credit check API doesn't respond within 3 seconds the call will time out and fail. The stub configuration below tests this behaviour by making the endpoint wait 6 seconds before responding, more than enough time for the client application to time out.

    @Test
    public void testCustomerCreditCheckServiceTimeout() throws Exception {

     int creditCheckServiceDelayMillis = 6000;
     
     stubFor(post(urlEqualTo("/credit-check-api"))
                .withHeader("Content-Type", WireMock.equalTo("application/json"))
                .withRequestBody(WireMock.equalTo(getCustomerJson()))
                .willReturn(WireMock.aResponse()
                    .withStatus(200)
                    .withHeader("Content-Type", "application/json")                    
                    .withBody(getCreditCheckJson())
                    .withFixedDelay(creditCheckServiceDelayMillis)));
     
     WebTarget webTarget = client.target("http://localhost:8080/customer/perform-customer-credit-check");     
     long startMillis = DateTime.now().getMillis();
     Response response = webTarget.request(MediaType.APPLICATION_JSON).
              post(Entity.entity(getCustomer(), MediaType.APPLICATION_JSON));
     long endMillis = DateTime.now().getMillis();
     
     assertThat((int)(endMillis - startMillis), is(lessThan(creditCheckServiceDelayMillis)));     
     assertThat(response.getStatus(), equalTo(503));
     assertThat(response.readEntity(String.class), equalTo("Credit Check Service Timed Out"));
    }

This test sends a HTTP POST to /customer/perform-customer-credit-check, which results in an external call out to the credit check API (figure 1.0, line 22). The stub will hang for 6 seconds, causing the Client to time out and throw a SocketTimeoutException. The ApplicationExceptionMapper defined earlier (figure 1.1) will translate the SocketTimeoutException into a HTTP 503 response and include a message saying that the credit check service timed out.
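Timeouts aren't the only failure mode Wiremock can simulate. It can also inject low-level faults such as a dropped connection or garbage data, which is useful for checking that the client fails gracefully. A sketch using Wiremock's Fault enum (not part of the sample project):

     stubFor(post(urlEqualTo("/credit-check-api"))
                .willReturn(aResponse()
                    .withFault(Fault.CONNECTION_RESET_BY_PEER)));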

Running the Tests

To see the sample code in action you'll need to pull it down and run it as follows.
  • git clone https://github.com/briansjavablog/wiremock-demo.git
  • cd wiremock-demo
  • mvn test 

What else can Wiremock do?

Wiremock has a number of other interesting features that are worth looking at.
  • Standalone Deployment - Wiremock can be deployed as a standalone web app rather than being started and stopped with each integration test.
  • Mapping files can be used to configure stubs as an alternative to using the API directly. This is useful if you have lots of stubs and want to decouple their configuration from your tests. Mappings can be placed in a directory on your deployed Wiremock instance or registered by posting them to a Wiremock endpoint - see the sketch after this list.
  • Proxying - Wiremock can be used to proxy calls to a live API. Say, for example, you want to run the majority of your integration tests against a live API, but you'd like to use Wiremock stubs for testing fault tolerance scenarios. Wiremock can be used as a proxy so that it forwards some requests to the live API and responds with stubs for others. This is a powerful feature as it allows you to mix stubbed and real responses in the same suite of tests.
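As a rough illustration of the mapping file format, the credit check stub from the tests above could be expressed as JSON along these lines (the file name and response body are my own invention; see the Wiremock documentation for the full syntax):

{
  "request": {
    "method": "POST",
    "url": "/credit-check-api"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "body": "{\"creditCheckResult\": \"PASS\"}"
  }
}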

Wrapping Up 

This post looked at how Wiremock can be used to stub remote endpoints and provide a simple way to test a variety of integration scenarios. From what I've seen Wiremock isn't an alternative to live end-to-end integration testing, but rather something that can complement it. As always, if you have any questions or suggestions, please leave a comment below.

Tuesday, 3 May 2016

Spring Boot & Amazon Web Services (EC2, RDS & S3)

This post will take you through a step by step guide to building and deploying a simple Java application to the AWS cloud platform. The application will use a few well known AWS services which I'll describe along the way. There is quite a bit of material to cover in this post so the overview of the AWS services will be light. For those interested in finding out more I'll link to the appropriate section of the AWS documentation. Amazon have done a fine job documenting their platform so I'd encourage you to have a read if time permits.      

Prerequisites 

In order to get the sample application up and running you'll need access to AWS. If you don't already have access you can register for a free account which includes access to a bunch of great services and some pretty generous allowances. I'd encourage you to get an account set up now before going any further.

What will the sample application look like? 

The application we're going to build is a very simple customer management app consisting of a Spring Boot web tier and an AngularJS front end. We'll deploy the application to AWS and make use of the following services.
  • EC2 - Amazon's Elastic Compute Cloud provides on-demand virtual server instances that can be quickly provisioned with the operating system and software stack of your choice. We'll be using Amazon's own Linux machine image to deploy our application.
  • Relational Database Service - Amazon's database-as-a-service allows developers to provision Amazon-managed database instances in the cloud. A number of common database platforms are supported but we'll be using a MySQL instance.
  • S3 Storage - Amazon's Simple Storage Service provides simple key-value data storage which we'll be using to store image files.
We're going to build a simple CRUD style customer management app to create, view and delete customer details. Below is a high level overview of each of the screens and how they interact with other components.
  • Create customer - An Angular managed view will capture and post customer data to a Spring Boot managed endpoint. When a customer is added the endpoint will save the customer data to a MySQL database instance on RDS. The customer image will be saved to S3 storage which will generate a unique key and a public URL to the image. The key and public URL will be saved in the database as part of the customer data. 
Create Customer View
  • View customer - An Angular managed view will issue a GET request to an endpoint for a specific customer. The endpoint will retrieve customer data from the MySQL database instance on RDS and return it to the client. The response data will include a publicly accessible URL which will be used to reference the customer image directly from S3 storage.
View Customer View
  • View all customers - An Angular managed view will issue a GET request for all customers to a Spring Boot managed endpoint. Customers will be displayed in a simple table and users will have the ability to view or delete customer rows. The endpoint will retrieve all customer data from the MySQL database instance on RDS and return it to the client. Images will be referenced from S3 in the same way as the View Customer screen. 
View All Customers View

Part 1 - Building the application  

The first part of this post will focus on building the demo application. In the second part we'll look at configuring the various services on AWS, running the application locally and then deploying it in the cloud.

Source Code  

The full source code for this tutorial is available on github at https://github.com/briansjavablog/spring-boot-aws. You may find it useful to pull the code locally so that you can experiment with it as you work through the tutorial.

Application Structure



In the sections that follow we'll look at some of the most important components in detail. The focus of this post isn't Spring Boot so I won't describe every class in detail, as I've covered quite a bit of this already in a separate post. We'll focus more on AWS integration and making our app cloud ready.

Domain Model

The domain model for the demo app is very simple and consists of just 3 entities - a Customer, an Address and a CustomerImage. The Customer entity is defined below.

@Entity(name="app_customer")
public class Customer{

    public Customer(){}
 
    public Customer(String firstName, String lastName, Date dateOfBirth, CustomerImage customerImage, Address address) {
       super();
       this.firstName = firstName;
       this.lastName = lastName;
       this.dateOfBirth = dateOfBirth;
       this.customerImage = customerImage;
       this.address = address;
    }

    @Id
    @Getter
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
 
    @Setter
    @Getter
    @Column(nullable = false, length = 30)
    private String firstName;
 
    @Setter
    @Getter
    @Column(nullable = false, length = 30)
    private String lastName;
 
    @Setter 
    @Getter
    @Column(nullable = false)
    private Date dateOfBirth;
 
    @Setter
    @Getter
    @OneToOne(cascade = {CascadeType.ALL})
    private CustomerImage customerImage;
 
    @Setter
    @Getter
    @OneToOne(cascade = {CascadeType.ALL})
    private Address address;
}

Address is defined as follows.

@Entity(name="app_address")
public class Address{

    public Address(){}
 
    public Address(String street, String town, String county, String postCode) {
       this.street = street;
       this.town = town;
       this.county = county;
       this.postcode = postCode;
    }

    @Id
    @Getter
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
 
    @Setter
    @Getter
    @Column(name = "street", nullable = false, length=40)
    private String street;
 
    @Setter
    @Getter
    @Column(name = "town", nullable = false, length=40)
    private String town;
 
    @Setter 
    @Getter
    @Column(name = "county", nullable = false, length=40)
    private String county;

    @Setter
    @Getter
    @Column(name = "postcode", nullable = false, length=40)
    private String postcode;
}

And finally CustomerImage is defined as follows.

@Entity(name="app_customer_image")
public class CustomerImage {

    public CustomerImage(){}
 
    public CustomerImage(String key, String url) {
       this.key = key;
       this.url =url;  
    }

    @Id
    @Getter
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
 
    @Setter
    @Getter
    @Column(name = "s3_key", nullable = false, length=200)
    private String key;
 
    @Setter
    @Getter
    @Column(name = "url", nullable = false, length=1000)
    private String url;
 
}


Customer Controller

The CustomerController exposes endpoints for creating, retrieving and deleting customers and is called from an Angular front end that we'll create later.

@RestController
public class CustomerController {

 @Autowired
 private CustomerRepository customerRepository;
 
 @Autowired
 private FileArchiveService fileArchiveService; 
  
 
        @RequestMapping(value = "/customers", method = RequestMethod.POST)
        public @ResponseBody Customer createCustomer(            
                @RequestParam(value="firstName", required=true) String firstName,
                @RequestParam(value="lastName", required=true) String lastName,
                @RequestParam(value="dateOfBirth", required=true) @DateTimeFormat(pattern="yyyy-MM-dd") Date dateOfBirth,
                @RequestParam(value="street", required=true) String street,
                @RequestParam(value="town", required=true) String town,
                @RequestParam(value="county", required=true) String county,
                @RequestParam(value="postcode", required=true) String postcode,
                @RequestParam(value="image", required=true) MultipartFile image) throws Exception {
    
             CustomerImage customerImage = fileArchiveService.saveFileToS3(image);         
             Customer customer = new Customer(firstName, lastName, dateOfBirth, customerImage, 
                                              new Address(street, town, county, postcode));
     
             customerRepository.save(customer);
             return customer;            
    }

The code snippet above does a few things:
  • Injects a CustomerRepository for saving and retrieving customer entities and a FileArchiveService for saving and retrieving customer images in S3 storage.
  • Takes posted form data including an image file and maps it to method parameters. 
  • Uses the FileArchiveService service to save the uploaded file to S3 storage. The returned CustomerImage object contains a key and public URL returned from S3.
  • Creates a Customer entity and saves it to the database. Note that the CustomerImage is saved as part of Customer so that the customer entity has a reference to the image stored on S3.
@RequestMapping(value = "/customers/{customerId}", method = RequestMethod.GET)
public Customer getCustomer(@PathVariable("customerId") Long customerId) {
  
    /* validate customer Id parameter */
    if (null==customerId) {
       throw new InvalidCustomerRequestException();
    }
  
    Customer customer = customerRepository.findOne(customerId);
  
    if(null==customer){
       throw new CustomerNotFoundException();
    }
  
    return customer;
}

The method above provides an endpoint that takes a customer Id via a HTTP GET, retrieves the customer from the database and returns a JSON representation to the client.

@RequestMapping(value = "/customers", method = RequestMethod.GET)
public List<Customer> getCustomers() {
  
    return (List<Customer>) customerRepository.findAll();
}

The method above provides an endpoint for retrieving all customers via a HTTP GET.

@RequestMapping(value = "/customers/{customerId}", method = RequestMethod.DELETE)
public void removeCustomer(@PathVariable("customerId") Long customerId, HttpServletResponse httpResponse) {

    if(customerRepository.exists(customerId)){
        Customer customer = customerRepository.findOne(customerId);
        fileArchiveService.deleteImageFromS3(customer.getCustomerImage());
        customerRepository.delete(customer); 
    }
  
    httpResponse.setStatus(HttpStatus.NO_CONTENT.value());
}

The method above exposes an endpoint for deleting customers using a HTTP DELETE. The CustomerImage associated with the Customer is passed to the FileArchiveService to remove the customer image from S3 storage. The Customer is then removed from the database and a HTTP 204 is returned to the client.
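The CustomerRepository injected into the controller isn't shown above. With Spring Data JPA it needs no implementation at all; an interface along these lines (a sketch consistent with the methods used in the controller) is enough:

import org.springframework.data.repository.CrudRepository;

public interface CustomerRepository extends CrudRepository<Customer, Long> {
    /* save, findOne, findAll, exists and delete are all inherited from CrudRepository */
}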

File Archive Service

As mentioned above, we're going to save uploaded images to S3 storage. Thankfully AWS provides an SDK that makes it easy to integrate with S3, so all we need to do is write a simple Service that uses that SDK to save and retrieve files.

@Service
public class FileArchiveService {

    @Autowired
    private AmazonS3Client s3Client;

    private static final String S3_BUCKET_NAME = "brians-java-blog-aws-demo";


    /**
     * Save image to S3 and return CustomerImage containing key and public URL
     * 
     * @param multipartFile
     * @return
     * @throws IOException
     */
    public CustomerImage saveFileToS3(MultipartFile multipartFile) throws FileArchiveServiceException {

        try{
            File fileToUpload = convertFromMultiPart(multipartFile);
            String key = Instant.now().getEpochSecond() + "_" + fileToUpload.getName();

            /* save file */
            s3Client.putObject(new PutObjectRequest(S3_BUCKET_NAME, key, fileToUpload));

            /* get signed URL (valid for one year) */
            GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(S3_BUCKET_NAME, key);
            generatePresignedUrlRequest.setMethod(HttpMethod.GET);
            generatePresignedUrlRequest.setExpiration(DateTime.now().plusYears(1).toDate());

            URL signedUrl = s3Client.generatePresignedUrl(generatePresignedUrlRequest); 

            return new CustomerImage(key, signedUrl.toString());
        }
        catch(Exception ex){   
            throw new FileArchiveServiceException("An error occurred saving file to S3", ex);
        }  
    }
  • Line 5 - AmazonS3Client is provided by the AWS SDK and allows us to read from and write to S3. This component gets the credentials necessary to connect to S3 from aws-config.xml, which we'll define later.
  • Line 7 - The name of the S3 bucket that the application will read from and write to. You can think of a bucket as a storage container into which you can save resources. We'll look at how to define an S3 bucket later in the post.
  • Lines 20 & 21 - The MultipartFile uploaded from the client is converted to a File and a key is generated using the file name and a timestamp. The combination of file name and timestamp is important so that multiple files can be uploaded with the same name.
  • Line 24 - The S3 client saves the file to the specified bucket using the generated key.
  • Lines 27 to 31 - Using the bucket name and key to uniquely identify this resource, a pre-signed public-facing URL is generated that can later be used to retrieve the image. The expiration is set to one year from now, telling S3 to make the resource available via this URL for no more than one year.
  • Line 33 - The generated key and public-facing URL are wrapped in a CustomerImage and returned to the controller. CustomerImage is saved to the database as part of the Customer persist and is the link between the Customer stored in the database and the customer's image file on S3. When a client issues a GET request for a specific customer, the public-facing URL to the customer image is returned, allowing the client application to reference the image directly from S3.
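The convertFromMultiPart helper called on line 20 isn't shown in the snippet above. A minimal version (my own sketch; the project's actual implementation may differ) simply writes the uploaded bytes to a temporary file:

private File convertFromMultiPart(MultipartFile multipartFile) throws IOException {

    /* write the uploaded bytes to a temp file named after the original upload */
    File file = new File(System.getProperty("java.io.tmpdir"), multipartFile.getOriginalFilename());
    try (FileOutputStream outputStream = new FileOutputStream(file)) {
        outputStream.write(multipartFile.getBytes());
    }

    return file;
}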
/**
 * Delete image from S3 using specified key
 * 
 * @param customerImage
 */
public void deleteImageFromS3(CustomerImage customerImage){
    s3Client.deleteObject(new DeleteObjectRequest(S3_BUCKET_NAME, customerImage.getKey())); 
}

The method above uses the key from CustomerImage to delete the specific resource from the brians-java-blog-aws-demo bucket on S3. This is the key that was used to save the image to S3 in the saveFileToS3 method described above.

Java Resource Configuration for AWS

The AwsResourceConfig class handles configuration required for integration with S3 storage and the MySQL instance running on RDS. The contents of this class are explained in detail below.

@Configuration
@ImportResource("classpath:/aws-config.xml")
@EnableRdsInstance(databaseName = "${database-name:}", 
                   dbInstanceIdentifier = "${db-instance-identifier:}", 
                   password = "${rdsPassword:}")
public class AwsResourceConfig {

}
  • @Configuration indicates that this class contains configuration and should be processed as part of component scanning.  
  • @ImportResource tells Spring to load the XML configuration defined in aws-config.xml. We'll cover the contents of this file later.
  • @EnableRdsInstance is provided by Spring Cloud AWS as a convenient way of configuring an RDS instance. The databaseName, dbInstanceIdentifier and password are defined when setting up the RDS instance in the AWS console. We'll look at RDS set up later.


XML Resource Configuration for AWS

In order to access protected resources using Amazon's SDK, an access key and a secret key must be supplied. Spring Cloud AWS provides an XML namespace for configuring both values so that they are available to the SDK at runtime.


<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aws-context="http://www.springframework.org/schema/cloud/aws/context"
       xmlns:jdbc="http://www.springframework.org/schema/cloud/aws/jdbc"
       xsi:schemaLocation="http://www.springframework.org/schema/beans 
                           http://www.springframework.org/schema/beans/spring-beans-4.1.xsd
                           http://www.springframework.org/schema/cloud/aws/context
                           http://www.springframework.org/schema/cloud/aws/context/spring-cloud-aws-context-1.0.xsd
                           http://www.springframework.org/schema/cloud/aws/jdbc             
                           http://www.springframework.org/schema/cloud/aws/jdbc/spring-cloud-aws-jdbc-1.0.xsd">

  <aws-context:context-credentials>
     <aws-context:simple-credentials access-key="${accessKey:}" secret-key="${secretKey:}"/>
  </aws-context:context-credentials> 
  
  <aws-context:context-resource-loader/>

</beans>
  • The aws-context:simple-credentials element sets the access key and secret key required by the SDK. It's important to note that these values should not be set directly in your configuration or properties files; they should be passed to the application on start up (via environment or system variables). The secret key, as the name suggests, is very sensitive and if compromised will give access to all AWS services on your account. Make sure this value is not checked into source control, especially if your code is in a public repository. It's common for bots to trawl public repositories looking for keys that are then used to compromise AWS accounts.
  • The context-resource-loader is required to access S3 storage. You'll remember that we injected an instance of AmazonS3Client into the FileArchiveService earlier. The context-resource-loader ensures that an instance of AmazonS3Client is available with the credentials supplied in context-credentials.
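The placeholders used in the configuration above (${accessKey:}, ${secretKey:}, ${database-name:} and so on) are standard Spring property placeholders, so the values can be supplied when the application starts rather than stored with the code. For example, as JVM system properties (the values below are obviously placeholders, and the JAR name will depend on your build):

java -DaccessKey=YOUR_ACCESS_KEY \
     -DsecretKey=YOUR_SECRET_KEY \
     -Ddatabase-name=YOUR_DB_NAME \
     -Ddb-instance-identifier=YOUR_DB_INSTANCE_ID \
     -DrdsPassword=YOUR_DB_PASSWORD \
     -jar spring-boot-aws-0.1.0.jar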


Front End - AngularJS

Now that the core server side components are in place it's time to look at some of the client side code. I'm not going to cover it in detail as the focus of this post is integrating with AWS, not the ins and outs of AngularJS. The AngularJS logic is wrapped up in app.js as follows.

(function () {
    var springBootAws = angular.module('SpringBootAwsDemo', ['ngRoute', 'angularUtils.directives.dirPagination']);

    springBootAws.directive('active', function ($location) {
        return {
            link: function (scope, element) {
                function makeActiveIfMatchesCurrentPath() {
                    if ($location.path().indexOf(element.find('a').attr('href').substr(1)) > -1) {
                        element.addClass('active');
                    } else {
                        element.removeClass('active');
                    }
                }

                scope.$on('$routeChangeSuccess', function () {
                    makeActiveIfMatchesCurrentPath();
                });
            }
        };
    });
    
    springBootAws.directive('fileModel', [ '$parse', function($parse) {
     return {
      restrict : 'A',
      link : function(scope, element, attrs) {
       var model = $parse(attrs.fileModel);
       var modelSetter = model.assign;

       element.bind('change', function() {
        scope.$apply(function() {
         modelSetter(scope, element[0].files[0]);
        });
       });
      }
     };
    } ]);
    
    springBootAws.controller('CreateCustomerCtrl', function ($scope, $location, $http) {
        var self = this;
        
        self.add = function () {            
         var customerModel = self.model;         
         var savedCustomer;
         
         var formData = new FormData();
         formData.append('firstName', customerModel.firstName);
         formData.append('lastName', customerModel.lastName);
         formData.append('dateOfBirth', customerModel.dateOfBirth.getFullYear() + '-' + (customerModel.dateOfBirth.getMonth() + 1) + '-' + customerModel.dateOfBirth.getDate());
         formData.append('image', customerModel.image);
         formData.append('street', customerModel.address.street);
         formData.append('town', customerModel.address.town);
         formData.append('county', customerModel.address.county);
         formData.append('postcode', customerModel.address.postcode);
          
         $scope.saving=true;
         $http.post('/spring-boot-aws/customers', formData, { 
             transformRequest : angular.identity,
       headers : {
        'Content-Type' : undefined
       }
      }).success(function(savedCustomer) {
       $scope.saving=false;
       $location.path("/view-customer/" + savedCustomer.id);       
      }).error(function(data) {
       $scope.saving=false; 
      });
        };
    });
    
    springBootAws.controller('ViewCustomerCtrl', function ($scope, $http, $routeParams) {
        
     var customerId = $routeParams.customerId;             
     $scope.currentPage = 1;
     $scope.pageSize = 10;
     
     $scope.dataLoading = true;
        $http.get('/spring-boot-aws/customers/' + customerId).then(function onSuccess(response) {
         $scope.customer = response.data;
         $scope.dataLoading = false;
        }, function onError(response) {
         $scope.customer = response.statusText;
         $scope.dataLoading = false;
        });
    });
    
    springBootAws.controller('ViewAllCustomersCtrl', function ($scope, $http) {
     
     var self = this;
     $scope.customers = []; 
     $scope.searchText;
        
        $scope.dataLoading = true;
        $http.get('/spring-boot-aws/customers').then(function mySucces(response) {
         $scope.customers = response.data;
         $scope.dataLoading = false;
        }, function myError(response) {
         $scope.customer = response.statusText;
         $scope.dataLoading = false;
        });        
        
        self.add = function (customerId) {
         $scope.selectedCustomer = customerId;
         $scope.customerDelete = true;
         $http.delete('/spring-boot-aws/customers/' + customerId).then(function onSucces(response) {
             $scope.customers = _.without($scope.customers, _.findWhere($scope.customers, {id: customerId}));
             $scope.customerDelete = false;
            }, function onError(){
             
            });
        },
        
        $scope.searchFilter = function (obj) {
            var re = new RegExp($scope.searchText, 'i');
            return !$scope.searchText || re.test(obj.firstName) || re.test(obj.lastName.toString());
        };
    });
    
    springBootAws.filter('formatDate', function() {
     return function(input) {
      return moment(input).format("DD-MM-YYYY");
     };
    });
    
    springBootAws.config(function ($routeProvider) {
        $routeProvider.when('/home', {templateUrl: 'pages/home.tpl.html'});
        $routeProvider.when('/create-customer', {templateUrl: 'pages/createCustomer.tpl.html'});
        $routeProvider.when('/view-customer/:customerId', {templateUrl: 'pages/viewCustomer.tpl.html'});
        $routeProvider.when('/view-all-customers', {templateUrl: 'pages/viewAllCustomers.tpl.html'});
        $routeProvider.otherwise({redirectTo: '/home'});
    });
    
}());

The controller logic handles the 3 main views in the application - create customer, view customer and view all customers.
  • CreateCustomerCtrl uses model data populated in the view to build a FormData object and performs an HTTP POST to the create customer endpoint defined earlier. In the success callback there is a transition to the view customer route, passing the target customer Id in the URL.
  • ViewCustomerCtrl uses the customer Id passed in the URL and issues an HTTP GET to the getCustomer endpoint defined earlier. The response JSON is added to scope for display.
  • ViewAllCustomersCtrl issues an HTTP GET to the getAllCustomers endpoint to retrieve all customers. The response JSON is added to scope for display in a tabular view. The delete method takes the selected customer Id and issues an HTTP DELETE to the removeCustomer endpoint to remove the customer from the database and to remove the uploaded image from S3 (a stripped-down sketch of that endpoint is shown below).
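For reference, the server side of that delete call is an ordinary Spring MVC mapping. The full version appears earlier in the post; the sketch below is trimmed to the essentials, and the deleteImageFromS3 helper name is illustrative rather than a verbatim copy of FileArchiveService.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {

    @Autowired
    private CustomerRepository customerRepository;

    @Autowired
    private FileArchiveService fileArchiveService;

    /* Remove the customer row from RDS and the associated image from S3 */
    @RequestMapping(value = "/customers/{customerId}", method = RequestMethod.DELETE)
    public ResponseEntity<Void> removeCustomer(@PathVariable Long customerId) {
        Customer customer = customerRepository.findOne(customerId);
        fileArchiveService.deleteImageFromS3(customer.getCustomerImage());
        customerRepository.delete(customer);
        return new ResponseEntity<>(HttpStatus.OK);
    }
}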
The demo app is now complete so it's time to turn our attention to AWS so that we can configure the RDS database instance and S3 resources needed.

Part 2 - Relational Database Service & S3 Storage Setup

In this section you'll need access to the AWS console. If you haven't already done so you should register for a free account. We're going to step through setting up the RDS database instance and creating a new storage bucket in S3. By the end of this section you should have the application running locally, hooked up to an RDS database instance and S3 storage.

Creating a Security Group to access RDS

Security groups provide a means of granting granular access to AWS services. Before creating a database instance on RDS we need to create a security group that will make the database accessible from the internet. This is required so that the application running on your local machine will be able to connect to the database instance on RDS.
Note: in a production environment your database would never be publicly accessible and would only be accessible from EC2 instances within your Virtual Private Cloud.

1. Log into the AWS console and on the landing page select EC2.
AWS Console - Landing Screen
2. Select Security Groups from the menu on the left hand side.
EC2 Landing Screen
3. Click Create Security Group.
Security Groups Screen

4. Enter a security group name and a meaningful description. Next select the default VPC (denoted with a *). A VPC (Virtual Private Cloud) allows users to configure a logically isolated network infrastructure for their applications to run on. Each AWS account comes with a default VPC so you don't have to define one to get started. For the sake of this demo we'll stick with the default VPC.
Next we'll specify rules that define the type of inbound and outbound traffic permitted by the security group. We need to define a single inbound rule that allows TCP traffic on port 3306 (the port used by MySQL). In the rule config below I've set the inbound Source to Anywhere, meaning that the database instance will accept connections from any source IP. This is handy if you're connecting to a development database instance from public Wi-Fi where your IP will vary. In most cases we'd obviously narrow this to a specified IP range. The default outbound rule allows all traffic to all IP addresses.
Create Security Group
5. From the main AWS dashboard click RDS. On the main RDS dashboard click Launch a DB Instance.
RDS Dashboard Landing Screen

6. Select MySQL as the DB engine.
RDS - Select Database Engine

7. Select the Dev/Test option as we don't need advanced features like multi availability zone deployments for our demo.
Select Database Type

8. In the next section we define the main database instance settings. We'll retain most of the default settings so I'll describe only the most relevant settings below.
  • DB Instance Class - the size of the DB instance to launch. Choose T2 Micro as this is currently the smallest available and is free as part of free tier usage. 
  • Multi AZ Deployment - indicates whether or not we want the DB deployed across multiple availability zones for high availability. We don't need this for a simple demo. 
  • Storage Type - the underlying persistence storage type used by the instance. General purpose Solid State Drives are now available by default so we'll use those. 
  • Allocated Storage - the amount of physical storage available to the database. 5GB is sufficient for this demo.  
  • DB Instance Identifier - the name that will uniquely identify this database instance. This value is used by the AWSResourceConfig class we looked at earlier.
  • Master Username - the username we'll use to connect to the database.
  • Master Password - the password we'll use to authenticate with.
Database Instance Settings
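As a reminder of how these settings tie back to the code: the DB Instance Identifier and Master Password are exactly what spring-cloud-aws needs to locate the instance and build a DataSource. A minimal configuration class along these lines is sketched below; the attribute values here are hard-coded placeholders, whereas the demo's AWSResourceConfig (covered earlier) injects them from external properties.

import org.springframework.cloud.aws.jdbc.config.annotation.EnableRdsInstance;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableRdsInstance(
        dbInstanceIdentifier = "rds-demo",    // DB Instance Identifier defined above
        password = "your-master-password")    // Master Password defined above
public class AwsResourceConfig {
    /* spring-cloud-aws resolves the instance endpoint from the identifier
       and exposes a ready-made DataSource to the rest of the application */
}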
9. Next we'll configure some of the advanced settings. Again we'll be able to use many of the default values here so I'll only describe the settings that are most relevant.
  • VPC - Select the default VPC. We haven't defined a custom VPC as part of this demo so select the default VPC option.
  • Subnet Group - As we're using the default VPC we'll also use the default subnet group.
  • Publicly Accessible - Set to true so that we can connect to the DB from our local dev environment.
  • Availability Zone - Select No Preference and allow AWS to decide which AZ the DB instance will reside in. 
  • VPC Security Groups - Select the Security Group we defined earlier, in this case demo-rds-sec-group. This will apply the defined inbound and outbound TCP rules to the database instance.
  • Database Name - select a name for the database. This will be used along with the database identifier we defined in the last section to connect to the database. 
  • Database Port - Use default MySQL port 3306.
  • The remaining settings in the Database Options section should use the defaults as shown below.
  • Backup - Use default retention period of 7 days and No Preference for backup window. Carefully considered backup settings are obviously very important for a production database but for this demo we'll stick with the defaults. 
  • Monitoring & Maintenance - Again these values aren't important for our demo app so we'll use the defaults shown below.  
Database Instance Advanced Settings
10. Click Launch DB Instance and wait for a few moments while the instance is brought up. Click View Your DB Instance to see the configured instance in the RDS instance screen. 
Database Instance Created
11. In the RDS instances view the newly created instance should be displayed with status Available. If you expand the instance view you'll see a summary of the configuration details we defined above.
Configured DB Instance


Connecting to the database & creating the schema


Now that the database instance is up and running we can connect from the command line. You'll need the MySQL client locally for this section, so if you don't already have it installed you can get it here.
  • cd to MY_SQL_INSTALL_DIRECTORY\mysql-5.7.11-winx64\bin. 
  • Here is a sample connection command mysql -u briansjavablog1 -h rds-sample-db2.cg29ws2p7rim.us-west-2.rds.amazonaws.com -p
  • Replace the value following -u with the username you defined as part of the DB instance configuration. 
  • Replace the value following -h with the DB host of the instance you created above.  The host is displayed as Endpoint on your newly created DB instance (see screenshot above). Note: The Endpoint displayed in the console includes the port number (3306). When connecting from the command line you should drop this portion of the endpoint as MySQL will use 3306 by default (see screenshot below).    
  • When prompted enter the master password that you defined as part of the DB instance configuration above. 
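If you'd rather sanity-check connectivity from Java than from the shell, a throwaway JDBC snippet does the same job. This is just a sketch: the endpoint and username are the sample values from above, the password is a placeholder, and you'll need the MySQL Connector/J driver on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;

public class RdsConnectionTest {

    public static void main(String[] args) throws Exception {
        // Connect to the instance endpoint without selecting a database;
        // the tables don't exist yet, we just want to prove connectivity
        String url = "jdbc:mysql://rds-sample-db2.cg29ws2p7rim.us-west-2.rds.amazonaws.com:3306/";
        try (Connection connection = DriverManager.getConnection(url, "briansjavablog1", "your-master-password")) {
            System.out.println("Connected to " + connection.getMetaData().getDatabaseProductVersion());
        }
    }
}

The rest of this section sticks with the interactive mysql client.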

Once connected run the show databases command and you will see the rds_demo instance we created in the AWS console. Running use rds_demo and then show tables should return no results as the schema is empty. We can now create the schema by running the SchemaScript.sql from src/main/resources.  SchemaScript.sql creates 3 tables that correspond to the 3 JPA entities created earlier and is defined as follows.
DROP SCHEMA IF EXISTS rds_demo;
CREATE SCHEMA IF NOT EXISTS rds_demo DEFAULT CHARACTER SET utf8;
USE rds_demo;

CREATE TABLE IF NOT EXISTS rds_demo.app_address (
  id INT NOT NULL AUTO_INCREMENT,  
  street VARCHAR(40) NOT NULL,
  town VARCHAR(40) NOT NULL,
  county VARCHAR(40) NOT NULL,
  postcode VARCHAR(8) NOT NULL,
  PRIMARY KEY (id));
  
CREATE TABLE IF NOT EXISTS rds_demo.app_customer_image (
  id INT NOT NULL AUTO_INCREMENT,
  s3_key VARCHAR(200) NOT NULL,
  url VARCHAR(1000) NOT NULL,
  PRIMARY KEY (id));  
  
CREATE TABLE IF NOT EXISTS rds_demo.app_customer (
  id INT NOT NULL AUTO_INCREMENT,
  first_name VARCHAR(30) NOT NULL,
  last_name VARCHAR(30) NOT NULL,
  date_of_birth DATE NOT NULL,
  customer_image_id INT NOT NULL,
  address_id INT NOT NULL,
  PRIMARY KEY (id),
  CONSTRAINT FK_ADDRESS_ID
    FOREIGN KEY (address_id)
    REFERENCES rds_demo.app_address (id),
  CONSTRAINT FK_CUSTOMER_IMAGE_ID
    FOREIGN KEY (customer_image_id)
    REFERENCES rds_demo.app_customer_image (id));    

Run the script with source ROOT\spring-boot-aws\src\main\resources\SchemaScript.sql. Running show tables again should display 3 new tables as shown below.
Create Database Schema
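As a quick reminder of how these tables line up with the JPA entities from earlier, the app_customer table maps onto a Customer entity roughly like the sketch below; field names are inferred from the columns, getters and setters are omitted, and the CustomerImage and Address entities map onto the other two tables in the same fashion.

import java.util.Date;
import javax.persistence.CascadeType;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.OneToOne;
import javax.persistence.Table;

@Entity
@Table(name = "app_customer")
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "first_name", nullable = false)
    private String firstName;

    @Column(name = "last_name", nullable = false)
    private String lastName;

    @Column(name = "date_of_birth", nullable = false)
    private Date dateOfBirth;

    /* one-to-one mappings mirror the FK_CUSTOMER_IMAGE_ID and FK_ADDRESS_ID constraints */
    @OneToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "customer_image_id")
    private CustomerImage customerImage;

    @OneToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "address_id")
    private Address address;
}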

Creating an S3 storage bucket


Now that the database instance is up and running we can look at setting up the S3 storage. On the main AWS management console select S3 under the storage and content delivery section. When the S3 management console loads, click Create Bucket.
S3 Management Console
Enter a bucket name and ensure it matches the name specified in FileArchiveService.java we defined earlier. If you're running the sample code straight from github then the bucket name should be brians-java-blog-demo as shown below.
Create S3 Bucket
Click Create and the new bucket will be displayed as shown below.
New S3 Bucket
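To recap the link between this bucket and the code: FileArchiveService, which we defined earlier, pushes each uploaded customer image to the bucket through the AWS SDK. Reduced to its essentials, that upload is a one-liner; the key and file below are placeholders (the real service builds the key at runtime).

import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

public class S3UploadSketch {

    public static void main(String[] args) {
        // Credentials are picked up from the default provider chain
        AmazonS3 s3Client = new AmazonS3Client();
        s3Client.putObject("brians-java-blog-demo", "customer-images/1/image.jpg", new File("image.jpg"));
    }
}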

Running the application locally


It's preferable to run the application locally before attempting to deploy it to EC2, as it helps iron out any issues with RDS or S3 connectivity.

In order to run the application we need to supply application properties on start-up.  The properties are defined below and are set based on the values used to create the database instance and the access keys associated with your account.

{
 "database-name": "rds_demo",
 "db-instance-identifier": "rds-demo",
 "rdsPassword": "rds-sample-db",
 "accessKey": "XXXXXXXXXXXXXXXXXXXX",
 "secretKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}

Boot allows you to supply configuration on the command line via the spring.application.json system property.

java -Dspring.application.json='{"database-name": "rds_demo","db-instance-identifier": "rds-demo","rdsPassword": "rds-sample-db","accessKey": "XXXXXXXXXXXXXXXX","secretKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"}' -jar target/spring-boot-aws-0.1.0.jar

You can also supply configuration via the SPRING_APPLICATION_JSON environment variable. An example of supplying the environment variable and running the application in STS is shown below.

Environment Variable Configuration
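Whichever route you use, once the application starts the JSON keys behave like ordinary Spring properties, so they can be injected with @Value. A minimal sketch, using the property names from the JSON above:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Properties supplied via spring.application.json (or the SPRING_APPLICATION_JSON
// environment variable) resolve like any other property in the environment
@Component
public class AwsProperties {

    @Value("${database-name}")
    private String databaseName;

    @Value("${db-instance-identifier}")
    private String dbInstanceIdentifier;

    @Value("${rdsPassword}")
    private String rdsPassword;

    @Value("${accessKey}")
    private String accessKey;

    @Value("${secretKey}")
    private String secretKey;
}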
At this point you should have the application up and running. When the application starts it will establish a connection with the database instance on RDS. Navigate to http://localhost:8080/spring-boot-aws/#/home and you should see the home screen.

Home Screen
Check that everything is working by clicking the Create New Customer link in the header to add a new customer.
Create Customer View
After saving the new customer you'll be taken to the view customer screen.
View Customer
Clicking the customer image will open a new tab where you'll see that the image is served directly from S3 storage.
Customer Image From S3 Storage
Note the structure of the URL is as follows:

https://<s3_bucket_name>.s3-<region>.amazonaws.com/<item_key>?AWSAccessKeyId=....
  • Bucket name - the value used to create the bucket in the AWS console. 
  • Region - the region associated with your AWS account.
  • Item Key - the key we construct at runtime while saving the customer image. We looked at this logic earlier in the FileArchiveService.
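The AWSAccessKeyId query parameter suggests a pre-signed URL, i.e. a time-limited link signed with the account's credentials. Generating one with the AWS SDK for Java is a one-liner; in this sketch the bucket, key and one-hour expiry are placeholder choices.

import java.net.URL;
import java.util.Date;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

public class PresignedUrlSketch {

    public static void main(String[] args) {
        AmazonS3 s3Client = new AmazonS3Client();
        // Sign a GET request for the object, valid for one hour
        Date expiry = new Date(System.currentTimeMillis() + 60 * 60 * 1000);
        URL url = s3Client.generatePresignedUrl("brians-java-blog-demo", "customer-images/1/image.jpg", expiry);
        System.out.println(url);
    }
}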
To view all customers click the View All icon at the top of the screen.
View All Customers
Here you can search for customers, view a specific customer or delete a customer using the icons on the right hand side.

Deploying the application to EC2

Once everything is working locally you should be ready to deploy the application to the cloud. This section provides a step-by-step guide to creating a new EC2 instance and deploying the application. Let's get started.

Create a role for EC2

  • Before we create the EC2 instance we'll create a Role through Identity & Access Management (IAM). The role will be granted to the EC2 instance as part of the setup and will allow access to the database instance on RDS and S3 storage.  
  • Log into the AWS console and navigate to Identity Access Management.
Identity Access Management Console
  • On the left hand side select Roles and click Create New Role  
Create New Role
  • Enter the role name rds-and-s3-access-role  
Set Role Name
  • Select Role Type Amazon EC2 
Select Role Type
  • Attach AmazonS3FullAccess and AmazonRDSFullAccess policies to the role to allow read/write access to RDS and S3. 
Attach Policies for RDS and S3
  • Review the role configuration and click Create Role.
Create Role
Creating an EC2 Instance

Now that we've created a role that will provide read/write access to RDS and S3, we're ready to create the EC2 instance.
  • Navigate to the EC2 console and click launch instance.
  • Choose the AWS Linux AMI. This is the base server image we'll use to create the EC2 instance.   
Select Amazon Machine Image
  • To keep costs down select t2.micro as the instance type. This is a pretty lightweight instance with limited resources but is sufficient for running our demo app.  
Select EC2 Instance Type
  • We only need one instance for the demo and can deploy it to the default VPC. Ensure that Auto-assign Public IP is enabled, either via Use subnet setting or explicitly. This is required so that the instance can be accessed from the internet. Select the rds-and-s3-access-role IAM role we created earlier so that RDS and S3 services can be accessed from the instance. The remaining settings can be left at their defaults as shown below. When all values have been selected click Next: Add Storage.
Configure EC2 Instance
  • Use the default storage settings for this instance and click Next:Tag Instance
Add Storage to EC2 Instance
  • Add a single tag to store the instance name and click Next:Configure Security Group
Tag EC2 Instance
The security group settings define what type of traffic is allowed to access your instance. We need to configure SSH access so that we can SSH onto the box to set it up and run the application. We also need HTTP access so that we can reach the application once it's up and running. The Source value specifies which IPs the instance will accept traffic from. I spend quite a bit of time on the train (public Wi-Fi) where the IP address changes regularly, so for handiness I'm leaving the Source open. Ordinarily we'd want to limit this value so that the instance is not open to the world.
Configure EC2 Security Group
  • The final step is to review the configuration settings and click Launch. 
Review and Launch Instance
  • You'll be prompted to select a key pair that will be used to SSH onto the EC2 instance. If you don't already have a key pair you can create one now. 
Select Key Pair
  • Click Launch Instance to display the launch status screen shown below. At this point the instance is being created so you can navigate back to the EC2 instance landing screen.    
Launch Instance Summary
  • Returning to the instance landing screen you should see the instance with state initializing. It may take a few minutes before the instance state changes to running and is ready to use.
Instance Initializing
  • When the instance state changes to running the instance is ready to use. Open the description tab and get the public IP that will be used to SSH onto the instance.   
Instance Running
  • Open a command prompt and SSH onto the instance with ssh <ip_address> -l ec2-user -i <my_private_key>.pem as shown below.
SSH onto Instance
  • Once we're connected to the instance we need to do some basic setup. Switch to the root user, remove the default Java 7 JDK that comes bundled with the Amazon Machine Image and install the Java 8 JDK.
sudo su
yum remove java-1.7.0-openjdk -y
yum install java-1.8.0
  • The EC2 instance should now be ready to use, so all that remains is to copy up our application JAR and run it. On the command line use SCP to copy the JAR to the EC2 instance.
Copy Application JAR to EC2 Instance
  • When you SSH onto the EC2 instance the spring-boot-aws-0.1.0.jar should be in /home/ec2-user/. Launch the application by running the same command you ran locally, not forgetting to supply the application config JSON.
Running the Application on EC2 Instance
  • When the application starts you should be able to access it on port 8080 using the public DNS displayed in description tab of the EC2 instances page. 
Access Application on EC2
In a production environment we wouldn't access the application directly on the EC2 instance. Instead we'd configure an Elastic Load Balancer to route and distribute incoming traffic across multiple EC2 instances. That however is a story for another day.

Summary

We've covered quite a bit in this post and hopefully provided a decent introduction to building and deploying a simple application on AWS. EC2, RDS and S3 are just the tip of the iceberg in terms of AWS services, so I'd encourage you to dive in and experiment with some of the others. You could even use the demo app created here as a starting point for playing around with the likes of SQS or ElastiCache. As always I'm keen to hear some feedback, so if you have any questions on this post or have suggestions for future posts please let me know.