Tuesday, 3 May 2016

Spring Boot & Amazon Web Services (EC2, RDS & S3)

This post will take you through a step by step guide to building and deploying a simple Java application to the AWS cloud platform. The application will use a few well known AWS services which I'll describe along the way. There is quite a bit of material to cover in this post so the overview of the AWS services will be light. For those interested in finding out more I'll link to the appropriate section of the AWS documentation. Amazon have done a fine job documenting their platform so I'd encourage you to have a read if time permits.      

Prerequisites 

In order to get the sample application up and running you'll need access to AWS. If you don't already have access you can register for a free account which includes access to a bunch of great services and some pretty generous allowances. I'd encourage you to get an account set up now before going any further.

What will the sample application look like? 

The application we're going to build is a very simple customer management app, consisting of a Spring Boot web tier and an AngularJS front end. We'll deploy the application to AWS and make use of the following services.
  • EC2 - Amazon's Elastic Compute Cloud provides on demand virtual server instances that can be quickly provisioned with the operating system and software stack of your choice. We'll be using Amazon's own Linux machine image to deploy our application. 
  • Relational Database Service - Amazon's database as a service allows developers to provision Amazon-managed database instances in the cloud. A number of common database platforms are supported, but we'll be using a MySQL instance.
  • S3 Storage - Amazon's Simple Storage Service provides simple key-value data storage, which we'll be using to store image files. 
We're going to build a simple CRUD style customer management app to create, view and delete customer details. Below is a high level overview of each of the screens and how they interact with other components.
  • Create customer - An Angular managed view will capture and post customer data to a Spring Boot managed endpoint. When a customer is added the endpoint will save the customer data to a MySQL database instance on RDS. The customer image will be saved to S3 storage which will generate a unique key and a public URL to the image. The key and public URL will be saved in the database as part of the customer data. 
Create Customer View
  • View customer - An Angular managed view will issue a GET request to an endpoint for a specific customer. The endpoint will retrieve customer data from the MySQL database instance on RDS and return it to the client. The response data will include a publicly accessible URL which will be used to reference the customer image directly from S3 storage.
View Customer View
  • View all customers - An Angular managed view will issue a GET request for all customers to a Spring Boot managed endpoint. Customers will be displayed in a simple table and users will have the ability to view or delete customer rows. The endpoint will retrieve all customer data from the MySQL database instance on RDS and return it to the client. Images will be referenced from S3 in the same way as the View Customer screen. 
View All Customers View

Part 1 - Building the application  

The first part of this post will focus on building the demo application. In the second part we'll look at configuring the various services on AWS, running the application locally and then deploying it in the cloud.

Source Code  

The full source code for this tutorial is available on github at https://github.com/briansjavablog/spring-boot-aws. You may find it useful to pull the code locally so that you can experiment with it as you work through the tutorial.

Application Structure



In the sections that follow we'll look at some of the most important components in detail. The focus of this post isn't Spring Boot, so I won't describe every class in detail, as I've covered quite a bit of this already in a separate post. We'll focus more on AWS integration and making our app cloud ready.

Domain Model

The domain model for the demo app is very simple and consists of just 3 entities - a Customer, Address and CustomerImage. The Customer entity is defined below.

@Entity(name="app_customer")
public class Customer{

    public Customer(){}
 
    public Customer(String firstName, String lastName, Date dateOfBirth, CustomerImage customerImage, Address address) {
       super();
       this.firstName = firstName;
       this.lastName = lastName;
       this.dateOfBirth = dateOfBirth;
       this.customerImage = customerImage;
       this.address = address;
    }

    @Id
    @Getter
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
 
    @Setter
    @Getter
    @Column(nullable = false, length = 30)
    private String firstName;
 
    @Setter
    @Getter
    @Column(nullable = false, length = 30)
    private String lastName;
 
    @Setter 
    @Getter
    @Column(nullable = false)
    private Date dateOfBirth;
 
    @Setter
    @Getter
    @OneToOne(cascade = {CascadeType.ALL})
    private CustomerImage customerImage;
 
    @Setter
    @Getter
    @OneToOne(cascade = {CascadeType.ALL})
    private Address address;
}

Address is defined as follows.

@Entity(name="app_address")
public class Address{

    public Address(){}
 
    public Address(String street, String town, String county, String postCode) {
       this.street = street;
       this.town = town;
       this.county = county;
       this.postcode = postCode;
    }

    @Id
    @Getter
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
 
    @Setter
    @Getter
    @Column(name = "street", nullable = false, length=40)
    private String street;
 
    @Setter
    @Getter
    @Column(name = "town", nullable = false, length=40)
    private String town;
 
    @Setter 
    @Getter
    @Column(name = "county", nullable = false, length=40)
    private String county;

    @Setter
    @Getter
    @Column(name = "postcode", nullable = false, length=40)
    private String postcode;
}

And finally CustomerImage is defined as follows.

@Entity(name="app_customer_image")
public class CustomerImage {

    public CustomerImage(){}
 
    public CustomerImage(String key, String url) {
       this.key = key;
       this.url =url;  
    }

    @Id
    @Getter
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
 
    @Setter
    @Getter
    @Column(name = "s3_key", nullable = false, length=200)
    private String key;
 
    @Setter
    @Getter
    @Column(name = "url", nullable = false, length=1000)
    private String url;
 
}
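The @Getter and @Setter annotations on these entities come from Project Lombok, which generates the accessor methods at compile time. If you're building the project from scratch rather than cloning the sample repo, you'll need Lombok on the classpath; a typical Maven dependency looks like this (the version shown is illustrative).

```xml
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.16.8</version>
    <scope>provided</scope>
</dependency>
```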


Customer Controller

The CustomerController exposes endpoints for creating, retrieving and deleting customers and is called from an Angular front end that we'll create later.

@RestController
public class CustomerController {

    @Autowired
    private CustomerRepository customerRepository;

    @Autowired
    private FileArchiveService fileArchiveService;

    @RequestMapping(value = "/customers", method = RequestMethod.POST)
    public @ResponseBody Customer createCustomer(
            @RequestParam(value="firstName", required=true) String firstName,
            @RequestParam(value="lastName", required=true) String lastName,
            @RequestParam(value="dateOfBirth", required=true) @DateTimeFormat(pattern="yyyy-MM-dd") Date dateOfBirth,
            @RequestParam(value="street", required=true) String street,
            @RequestParam(value="town", required=true) String town,
            @RequestParam(value="county", required=true) String county,
            @RequestParam(value="postcode", required=true) String postcode,
            @RequestParam(value="image", required=true) MultipartFile image) throws Exception {

        CustomerImage customerImage = fileArchiveService.saveFileToS3(image);
        Customer customer = new Customer(firstName, lastName, dateOfBirth, customerImage,
                                         new Address(street, town, county, postcode));

        customerRepository.save(customer);
        return customer;
    }

The code snippet above does a few different things:
  • Injects a CustomerRepository for saving and retrieving customer entities and a FileArchiveService for saving and retrieving customer images in S3 storage.
  • Takes posted form data including an image file and maps it to method parameters. 
  • Uses the FileArchiveService service to save the uploaded file to S3 storage. The returned CustomerImage object contains a key and public URL returned from S3.
  • Creates a Customer entity and saves it to the database. Note that the CustomerImage is saved as part of Customer so that the customer entity has a reference to the image stored on S3.
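The CustomerRepository injected above isn't shown in the post; in a Spring Data JPA project it is typically just a CrudRepository interface, with Spring generating the implementation at runtime. A minimal sketch, assuming the sample project follows this convention:

```java
import org.springframework.data.repository.CrudRepository;

// Spring Data JPA derives save/findOne/findAll/delete/exists implementations
// automatically from this declaration - no implementation class is needed.
public interface CustomerRepository extends CrudRepository<Customer, Long> {
}
```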
@RequestMapping(value = "/customers/{customerId}", method = RequestMethod.GET)
public Customer getCustomer(@PathVariable("customerId") Long customerId) {
  
    /* validate customer Id parameter */
    if (null==customerId) {
       throw new InvalidCustomerRequestException();
    }
  
    Customer customer = customerRepository.findOne(customerId);
  
    if(null==customer){
       throw new CustomerNotFoundException();
    }
  
    return customer;
}

The method above provides an endpoint that takes a customer Id via an HTTP GET, retrieves the customer from the database and returns a JSON representation to the client.
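The InvalidCustomerRequestException and CustomerNotFoundException types used here aren't shown in the post. A common pattern, and a reasonable guess at how the sample project defines them, is to annotate the exceptions with @ResponseStatus so that Spring translates them into the appropriate HTTP status codes; these sketches are hypothetical:

```java
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

// Hypothetical sketches - the annotations let Spring map the exceptions
// to 400 and 404 responses without any extra handler code.
@ResponseStatus(HttpStatus.BAD_REQUEST)
class InvalidCustomerRequestException extends RuntimeException {
}

@ResponseStatus(HttpStatus.NOT_FOUND)
class CustomerNotFoundException extends RuntimeException {
}
```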

@RequestMapping(value = "/customers", method = RequestMethod.GET)
public List<Customer> getCustomers() {
  
    return (List<Customer>) customerRepository.findAll();
}

The method above provides an endpoint for retrieving all customers via an HTTP GET.

@RequestMapping(value = "/customers/{customerId}", method = RequestMethod.DELETE)
public void removeCustomer(@PathVariable("customerId") Long customerId, HttpServletResponse httpResponse) {

    if(customerRepository.exists(customerId)){
        Customer customer = customerRepository.findOne(customerId);
        fileArchiveService.deleteImageFromS3(customer.getCustomerImage());
        customerRepository.delete(customer); 
    }
  
    httpResponse.setStatus(HttpStatus.NO_CONTENT.value());
}

The method above exposes an endpoint for deleting customers via an HTTP DELETE. The CustomerImage associated with the Customer is used to call the FileArchiveService to remove the customer image from S3 storage. The Customer is then removed from the database and an HTTP 204 is returned to the client.

File Archive Service

As mentioned above, we're going to save uploaded images to S3 storage. Thankfully AWS provides an SDK that makes it easy to integrate with S3, so all we need to do is write a simple Service that uses that SDK to save and retrieve files.

@Service
public class FileArchiveService {

    @Autowired
    private AmazonS3Client s3Client;

    private static final String S3_BUCKET_NAME = "brians-java-blog-aws-demo";


    /**
     * Save image to S3 and return a CustomerImage containing the key and public URL
     * 
     * @param multipartFile the uploaded image file
     * @return CustomerImage containing the S3 key and public URL
     * @throws FileArchiveServiceException if the file cannot be saved
     */
    public CustomerImage saveFileToS3(MultipartFile multipartFile) throws FileArchiveServiceException {

        try{
            File fileToUpload = convertFromMultiPart(multipartFile);
            String key = Instant.now().getEpochSecond() + "_" + fileToUpload.getName();

            /* save file */
            s3Client.putObject(new PutObjectRequest(S3_BUCKET_NAME, key, fileToUpload));

            /* get signed URL (valid for one year) */
            GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(S3_BUCKET_NAME, key);
            generatePresignedUrlRequest.setMethod(HttpMethod.GET);
            generatePresignedUrlRequest.setExpiration(DateTime.now().plusYears(1).toDate());

            URL signedUrl = s3Client.generatePresignedUrl(generatePresignedUrlRequest); 

            return new CustomerImage(key, signedUrl.toString());
        }
        catch(Exception ex){   
            throw new FileArchiveServiceException("An error occurred saving file to S3", ex);
        }  
    }
  • AmazonS3Client is provided by the AWS SDK and allows us to read and write to S3. This component gets the credentials necessary to connect to S3 from aws-config.xml, which we'll define later.
  • S3_BUCKET_NAME is the name of the S3 bucket that the application will read from and write to. You can think of a bucket as a storage container into which you can save resources. We'll look at how to define an S3 bucket later in the post. 
  • In saveFileToS3, the MultipartFile uploaded from the client is converted to a File and a key is generated from the file name and a timestamp. The combination of file name and timestamp is important so that multiple files can be uploaded with the same name.
  • The putObject call saves the file to the specified bucket using the generated key. 
  • Using the bucket name and key to uniquely identify this resource, a pre-signed, public-facing URL is generated that can later be used to retrieve the image. The expiration is set to one year from today, telling S3 to make the resource available at this public URL for no more than one year.  
  • The generated key and public-facing URL are wrapped in a CustomerImage and returned to the controller. The CustomerImage is saved to the database as part of the Customer persist and is the link between the Customer stored in the database and the customer's image file on S3. When a client issues a GET request for a specific customer, the public-facing URL to the customer image is returned, allowing the client application to reference the image directly from S3. 
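The convertFromMultiPart helper called in saveFileToS3 isn't shown in the post. Below is a minimal, standalone sketch of what it might look like, together with the key scheme described above; the class and method names are hypothetical, and a plain filename plus byte array stands in for MultipartFile.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.time.Instant;

// Hypothetical sketch of the convertFromMultiPart helper: it writes the
// uploaded bytes to a temp file so the S3 client, which expects a
// java.io.File, can read the content.
public class MultipartConverterSketch {

    static File toTempFile(String originalFilename, byte[] bytes) throws IOException {
        File file = new File(System.getProperty("java.io.tmpdir"), originalFilename);
        try (OutputStream out = new FileOutputStream(file)) {
            out.write(bytes);
        }
        return file;
    }

    // Same key scheme as saveFileToS3: epoch seconds + "_" + file name,
    // so two uploads named "avatar.png" still get distinct keys.
    static String s3Key(File file) {
        return Instant.now().getEpochSecond() + "_" + file.getName();
    }

    public static void main(String[] args) throws IOException {
        File f = toTempFile("avatar.png", new byte[]{1, 2, 3});
        System.out.println(f.getName() + " " + f.length());
        System.out.println(s3Key(f).endsWith("_avatar.png"));
    }
}
```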
/**
 * Delete image from S3 using specified key
 * 
 * @param customerImage
 */
public void deleteImageFromS3(CustomerImage customerImage){
    s3Client.deleteObject(new DeleteObjectRequest(S3_BUCKET_NAME, customerImage.getKey())); 
}

The method above uses the key from CustomerImage to delete the specific resource from the brians-java-blog-aws-demo bucket on S3. This is the key that was used to save the image to S3 in the saveFileToS3 method described above.

Java Resource Configuration for AWS

The AwsResourceConfig class handles configuration required for integration with S3 storage and the MySQL instance running on RDS. The contents of this class are explained in detail below.

@Configuration
@ImportResource("classpath:/aws-config.xml")
@EnableRdsInstance(databaseName = "${database-name:}", 
                   dbInstanceIdentifier = "${db-instance-identifier:}", 
                   password = "${rdsPassword:}")
public class AwsResourceConfig {

}
  • @Configuration indicates that this class contains configuration and should be processed as part of component scanning.  
  • @ImportResource tells Spring to load the XML configuration defined in aws-config.xml. We'll cover the contents of this file later. 
  • @EnableRdsInstance is provided by Spring Cloud AWS as a convenient way of configuring an RDS instance. The databaseName, dbInstanceIdentifier and password are defined when setting up the RDS instance in the AWS console. We'll look at RDS set up later.
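Placeholders like ${database-name:} are resolved from external properties at startup. As a hypothetical example, you might supply the database settings in an application.properties file or as system properties; the values below are made up, and the sensitive AWS keys should be passed via environment or system variables rather than a file:

```
database-name=rds_demo
db-instance-identifier=rds-sample-db2
rdsPassword=your-master-password
```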


XML Resource Configuration for AWS

In order to access protected resources using Amazon's SDK, an access key and a secret key must be supplied. Spring Cloud AWS provides an XML namespace for configuring both values so that they are available to the SDK at runtime.


<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aws-context="http://www.springframework.org/schema/cloud/aws/context"
       xmlns:jdbc="http://www.springframework.org/schema/cloud/aws/jdbc"
       xsi:schemaLocation="http://www.springframework.org/schema/beans 
                           http://www.springframework.org/schema/beans/spring-beans-4.1.xsd
                           http://www.springframework.org/schema/cloud/aws/context
                           http://www.springframework.org/schema/cloud/aws/context/spring-cloud-aws-context-1.0.xsd
                           http://www.springframework.org/schema/cloud/aws/jdbc             
                           http://www.springframework.org/schema/cloud/aws/jdbc/spring-cloud-aws-jdbc-1.0.xsd">

  <aws-context:context-credentials>
     <aws-context:simple-credentials access-key="${accessKey:}" secret-key="${secretKey:}"/>
  </aws-context:context-credentials> 
  
  <aws-context:context-resource-loader/>

</beans>
  • The context-credentials element sets the access key and secret key required by the SDK. It's important to note that these values should not be set directly in your configuration or properties files, and should instead be passed to the application on start up (via environment or system variables). The secret key, as the name suggests, is very sensitive; if compromised it will provide a user with access to all AWS services on your account. Make sure this value is not checked into source control, especially if your code is in a public repository. It's common for attackers to trawl public repositories looking for keys that are subsequently used to compromise AWS accounts.
  • The context-resource-loader element is required to access S3 storage. You'll remember that we injected an instance of AmazonS3Client into the FileArchiveService earlier. The context-resource-loader ensures that an instance of AmazonS3Client is available with the credentials supplied in context-credentials.


Front End - AngularJS

Now that the core server side components are in place it's time to look at some of the client side code. I'm not going to cover it in detail as the focus of this post is integrating with AWS, not the ins and outs of AngularJS. The AngularJS logic is wrapped up in app.js as follows.

(function () {
    var springBootAws = angular.module('SpringBootAwsDemo', ['ngRoute', 'angularUtils.directives.dirPagination']);

    springBootAws.directive('active', function ($location) {
        return {
            link: function (scope, element) {
                function makeActiveIfMatchesCurrentPath() {
                    if ($location.path().indexOf(element.find('a').attr('href').substr(1)) > -1) {
                        element.addClass('active');
                    } else {
                        element.removeClass('active');
                    }
                }

                scope.$on('$routeChangeSuccess', function () {
                    makeActiveIfMatchesCurrentPath();
                });
            }
        };
    });
    
    springBootAws.directive('fileModel', [ '$parse', function($parse) {
     return {
      restrict : 'A',
      link : function(scope, element, attrs) {
       var model = $parse(attrs.fileModel);
       var modelSetter = model.assign;

       element.bind('change', function() {
        scope.$apply(function() {
         modelSetter(scope, element[0].files[0]);
        });
       });
      }
     };
    } ]);
    
    springBootAws.controller('CreateCustomerCtrl', function ($scope, $location, $http) {
        var self = this;
        
        self.add = function () {            
         var customerModel = self.model;         
         var savedCustomer;
         
         var formData = new FormData();
         formData.append('firstName', customerModel.firstName);
         formData.append('lastName', customerModel.lastName);
         formData.append('dateOfBirth', customerModel.dateOfBirth.getFullYear() + '-' + (customerModel.dateOfBirth.getMonth() + 1) + '-' + customerModel.dateOfBirth.getDate());
         formData.append('image', customerModel.image);
         formData.append('street', customerModel.address.street);
         formData.append('town', customerModel.address.town);
         formData.append('county', customerModel.address.county);
         formData.append('postcode', customerModel.address.postcode);
          
         $scope.saving=true;
         $http.post('/spring-boot-aws/customers', formData, { 
             transformRequest : angular.identity,
       headers : {
        'Content-Type' : undefined
       }
      }).success(function(savedCustomer) {
       $scope.saving=false;
       $location.path("/view-customer/" + savedCustomer.id);       
      }).error(function(data) {
       $scope.saving=false; 
      });
        };
    });
    
    springBootAws.controller('ViewCustomerCtrl', function ($scope, $http, $routeParams) {
        
     var customerId = $routeParams.customerId;             
     $scope.currentPage = 1;
     $scope.pageSize = 10;
     
     $scope.dataLoading = true;
        $http.get('/spring-boot-aws/customers/' + customerId).then(function onSuccess(response) {
         $scope.customer = response.data;
         $scope.dataLoading = false;
        }, function onError(response) {
         $scope.customer = response.statusText;
         $scope.dataLoading = false;
        });
    });
    
    springBootAws.controller('ViewAllCustomersCtrl', function ($scope, $http) {
     
     var self = this;
     $scope.customers = []; 
     $scope.searchText;
        
        $scope.dataLoading = true;
        $http.get('/spring-boot-aws/customers').then(function mySucces(response) {
         $scope.customers = response.data;
         $scope.dataLoading = false;
        }, function myError(response) {
         $scope.customer = response.statusText;
         $scope.dataLoading = false;
        });        
        
        self.add = function (customerId) {
         $scope.selectedCustomer = customerId;
         $scope.customerDelete = true;
         $http.delete('/spring-boot-aws/customers/' + customerId).then(function onSucces(response) {
             $scope.customers = _.without($scope.customers, _.findWhere($scope.customers, {id: customerId}));
             $scope.customerDelete = false;
            }, function onError(){
             
            });
        };
        
        $scope.searchFilter = function (obj) {
            var re = new RegExp($scope.searchText, 'i');
            return !$scope.searchText || re.test(obj.firstName) || re.test(obj.lastName.toString());
        };
    });
    
    springBootAws.filter('formatDate', function() {
     return function(input) {
      return moment(input).format("DD-MM-YYYY");
     };
    });
    
    springBootAws.config(function ($routeProvider) {
        $routeProvider.when('/home', {templateUrl: 'pages/home.tpl.html'});
        $routeProvider.when('/create-customer', {templateUrl: 'pages/createCustomer.tpl.html'});
        $routeProvider.when('/view-customer/:customerId', {templateUrl: 'pages/viewCustomer.tpl.html'});
        $routeProvider.when('/view-all-customers', {templateUrl: 'pages/viewAllCustomers.tpl.html'});
        $routeProvider.otherwise({redirectTo: '/home'});
    });
    
}());

The controller logic handles the 3 main views in the application - create customer, view customer and view all customers.
  • CreateCustomerCtrl uses model data populated in the view to build a FormData object and performs an HTTP POST to the create customer endpoint defined earlier. In the success callback there is a transition to the view customer route, passing the target customer Id in the URL.
  • ViewCustomerCtrl uses the customer Id passed in the URL and issues an HTTP GET to the getCustomer endpoint defined earlier. The response JSON is added to scope for display.
  • ViewAllCustomersCtrl issues an HTTP GET to the getAllCustomers endpoint to retrieve all customers. The response JSON is added to scope for display in a tabular view. The delete method takes the selected customer Id and issues an HTTP DELETE to the removeCustomer endpoint, removing the customer from the database and the uploaded image from S3. 
The demo app is now complete, so it's time to turn our attention to AWS and configure the RDS database instance and S3 resources we need.

Part 2 - Relational Database Service & S3 Storage Setup

In this section you'll need access to the AWS console. If you haven't already done so you should register for a free account. We're going to step through the RDS database instance set up and the creation of a new storage bucket in S3. By the end of this section you should have the application running locally, hooked up to an  RDS database instance and S3 storage.

Creating a Security Group to access RDS

Security groups provide a means of granting granular access to AWS services. Before creating a database instance on RDS we need to create a security group that will make the database accessible from the internet. This is required so that the application running on your local machine will be able to connect to the database instance on RDS.
Note: in a production environment your database would never be publicly accessible and would only be accessible from EC2 instances within your Virtual Private Cloud.

1. Log into the AWS console and on the landing page select EC2.
AWS Console - Landing Screen
2. Select Security Groups from the menu on the left hand side.
EC2 Landing Screen
3. Click Create Security Group.
Security Groups Screen

4. Enter a security group name and a meaningful description. Next select the default VPC (denoted with a *). A VPC (Virtual Private Cloud) allows users to configure a logically isolated network infrastructure for their applications to run on. Each AWS account comes with a default VPC, so you don't have to define one to get started. For the sake of this demo we'll stick with the default VPC.
Next we'll specify rules that define the type of inbound and outbound traffic permitted by the security group. We need to define a single inbound rule that allows TCP traffic on port 3306 (the port used by MySQL). In the rule config below I've set the inbound Source to Anywhere, meaning that the database instance will accept connections from any source IP. This is handy if you're connecting to a development database instance from public WiFi where your IP will vary. In most cases we'd obviously narrow this to a specified IP range. The default outbound rule allows all traffic to all IP addresses.
Create Security Group
5. From the main AWS dashboard click RDS. On the main RDS dashboard click Launch a DB Instance.
RDS Dashboard Landing Screen

6. Select MySQL as the DB engine.
RDS - Select Database Engine

7. Select the Dev/Test option, as we don't need advanced features like multi availability zone deployments for our demo.
Select Database Type

8. In the next section we define the main database instance settings. We'll retain most of the default settings, so I'll describe only the most relevant settings below.
  • DB Instance Class - the size of the DB instance to launch. Choose T2 Micro, as this is currently the smallest available and is free as part of free tier usage. 
  • Multi AZ Deployment - indicates whether or not we want the DB deployed across multiple availability zones for high availability. We don't need this for a simple demo. 
  • Storage Type - the underlying persistent storage type used by the instance. General purpose Solid State Drives are now available by default, so we'll use those. 
  • Allocated Storage - the amount of physical storage available to the database. 5GB is sufficient for this demo.  
  • DB Instance Identifier - the name that will uniquely identify this database instance. This value is used by the AwsResourceConfig class we looked at earlier.
  • Master Username - the username we'll use to connect to the database.
  • Master Password - the password we'll use to authenticate with.
Database Instance Settings
9. Next we'll configure some of the advanced settings. Again we'll be able to use many of the default values, so I'll only describe the settings that are most relevant.
  • VPC - Select the default VPC. We haven't defined a custom VPC as part of this demo, so select the default VPC option.
  • Subnet Group - As we're using the default VPC we'll also use the default subnet group.
  • Publicly Accessible - Set to true so that we can connect to the DB from our local dev environment.
  • Availability Zone - Select No Preference and allow AWS to decide which AZ the DB instance will reside in. 
  • VPC Security Groups - Select the security group we defined earlier, in this case demo-rds-sec-group. This will apply the defined inbound and outbound TCP rules to the database instance.
  • Database Name - Select a name for the database. This will be used along with the database identifier we defined in the last section to connect to the database. 
  • Database Port - Use the default MySQL port 3306.
  • The remaining settings in the Database Options section should use the defaults as shown below.
  • Backup - Use the default retention period of 7 days and No Preference for the backup window. Carefully considered backup settings are obviously very important for a production database, but for this demo we'll stick with the defaults. 
  • Monitoring & Maintenance - Again these values aren't important for our demo app, so we'll use the defaults shown below.  
Database Instance Advanced Settings
10. Click Launch DB Instance and wait a few moments while the instance is brought up. Click View Your DB Instance to see the configured instance in the RDS instance screen. 
Database Instance Created
11. In the RDS instances view the newly created instance should be displayed with status Available. If you expand the instance view you'll see a summary of the configuration details we defined above.
Configured DB Instance


Connecting to the database & creating the schema


Now that the database instance is up and running we can connect from the command line. You'll need a MySQL client locally for this section, so if you don't already have one installed you can get it here.
  • cd to MY_SQL_INSTALL_DIRECTORY\mysql-5.7.11-winx64\bin. 
  • Here is a sample connection command mysql -u briansjavablog1 -h rds-sample-db2.cg29ws2p7rim.us-west-2.rds.amazonaws.com -p
  • Replace the value following -u with the username you defined as part of the DB instance configuration. 
  • Replace the value following -h with the DB host of the instance you created above.  The host is displayed as Endpoint on your newly created DB instance (see screenshot above). Note: The Endpoint displayed in the console includes the port number (3306). When connecting from the command line you should drop this portion of the endpoint as MySQL will use 3306 by default (see screenshot below).    
  • When prompted enter the master password that you defined as part of the DB instance configuration above. 

Once connected run the show databases command and you will see the rds_demo instance we created in the AWS console. Running use rds_demo and then show tables should return no results as the schema is empty. We can now create the schema by running the SchemaScript.sql from src/main/resources.  SchemaScript.sql creates 3 tables that correspond to the 3 JPA entities created earlier and is defined as follows.
DROP SCHEMA IF EXISTS rds_demo;
CREATE SCHEMA IF NOT EXISTS rds_demo DEFAULT CHARACTER SET utf8;
USE rds_demo;

CREATE TABLE IF NOT EXISTS rds_demo.app_address (
  id INT NOT NULL AUTO_INCREMENT,  
  street VARCHAR(40) NOT NULL,
  town VARCHAR(40) NOT NULL,
  county VARCHAR(40) NOT NULL,
  postcode VARCHAR(8) NOT NULL,
  PRIMARY KEY (id));
  
CREATE TABLE IF NOT EXISTS rds_demo.app_customer_image (
  id INT NOT NULL AUTO_INCREMENT,
  s3_key VARCHAR(200) NOT NULL,
  url VARCHAR(1000) NOT NULL,
  PRIMARY KEY (id));  
  
CREATE TABLE IF NOT EXISTS rds_demo.app_customer (
  id INT NOT NULL AUTO_INCREMENT,
  first_name VARCHAR(30) NOT NULL,
  last_name VARCHAR(30) NOT NULL,
  date_of_birth DATE NOT NULL,
  customer_image_id INT NOT NULL,
  address_id INT NOT NULL,
  PRIMARY KEY (id),
  CONSTRAINT FK_ADDRESS_ID
    FOREIGN KEY (address_id)
    REFERENCES rds_demo.app_address (id),
  CONSTRAINT FK_CUSTOMER_IMAGE_ID
    FOREIGN KEY (customer_image_id)
    REFERENCES rds_demo.app_customer_image (id));    

Run the script with source ROOT\spring-boot-aws\src\main\resources\SchemaScript.sql. Running show tables again should display 3 new tables as shown below.
Create Database Schema

Creating an S3 storage bucket


Now that the database instance is up and running we can look at setting up the S3 storage. On the main AWS management console select S3 under the storage and content delivery section. When the S3 management console loads, click Create Bucket.
S3 Management Console
Enter a bucket name and ensure it matches the name specified in the FileArchiveService.java we defined earlier. If you're running the sample code straight from GitHub then the bucket name should be brians-java-blog-demo as shown below.
Create S3 Bucket
Click Create and the new bucket will be displayed as shown below.
New S3 Bucket

Running the application locally


It's preferable to run the application locally before attempting to deploy it to EC2, as it helps iron out any issues with RDS or S3 connectivity.

In order to run the application we need to supply application properties on start-up.  The properties are defined below and are set based on the values used to create the database instance and the access keys associated with your account.

{
 "database-name": "rds_demo",
 "db-instance-identifier": "rds-demo",
 "rdsPassword": "rds-sample-db",
 "accessKey": "XXXXXXXXXXXXXXXXXXXX",
 "secretKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
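For reference, these JSON keys are merged into the Spring Environment just like ordinary properties, so the same values could equally be supplied as flat properties (shown here as an application.properties style fragment; keep real credentials out of source control):

```properties
database-name=rds_demo
db-instance-identifier=rds-demo
rdsPassword=rds-sample-db
accessKey=XXXXXXXXXXXXXXXXXXXX
secretKey=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```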

Boot allows you to supply configuration on the command line via the -Dspring.application.json system property.

java -Dspring.application.json='{"database-name": "rds_demo","db-instance-identifier": "rds-demo","rdsPassword": "rds-sample-db","accessKey": "XXXXXXXXXXXXXXXX","secretKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"}' -jar target/spring-boot-aws-0.1.0.jar

You can also supply configuration via the SPRING_APPLICATION_JSON environment variable. An example of supplying the environment variable and running the application in STS is shown below.

Environment Variable Configuration
At this point you should have the application up and running. When the application starts it will establish a connection with the database instance on RDS. Navigate to http://localhost:8080/spring-boot-aws/#/home and you should see the home screen.

Home Screen
Check that everything is working by clicking the Create New Customer link in the header to add a new customer.
Create Customer View
After saving the new customer you'll be taken to the view customer screen.
View Customer
Clicking the customer image will open a new tab where you'll see that the image is referenced directly from S3 storage.
Customer Image From S3 Storage
Note that the URL is structured as follows.

https://<s3_bucket_name>.s3.<region>.amazonaws.com/<item_key>?AWSAccessKeyId=....
  • Bucket name - the value used to create the bucket in the AWS console. 
  • Region - the region associated with your AWS account
  • Item Key - the key we construct at runtime while saving the customer image. We looked at this logic earlier in the FileArchiveService.
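To make that structure concrete, here's a tiny illustrative sketch (not code from the demo app) that assembles a virtual-hosted style S3 object URL from those three parts. The exact host format can vary by region and SDK version, so treat it as an approximation:

```java
public class S3UrlSketch {

    // Builds https://<bucket>.s3.<region>.amazonaws.com/<key>
    // (signing query parameters like AWSAccessKeyId are appended by the SDK).
    static String objectUrl(String bucket, String region, String key) {
        return String.format("https://%s.s3.%s.amazonaws.com/%s", bucket, region, key);
    }

    public static void main(String[] args) {
        System.out.println(objectUrl("brians-java-blog-demo", "us-west-2", "1/profile-image.png"));
    }
}
```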
To view all customers click the View All icon at the top of the screen.
View All Customers
Here you can search for customers, view a specific customer or delete a customer using the icons on the right hand side.

Deploying the application to EC2

Once everything is working locally you should be ready to deploy the application to the cloud. This section takes you through a step by step guide to creating a new EC2 instance and deploying the application. Let's get started.

Create a role for EC2

  • Before we create the EC2 instance we'll create a Role through Identity and Access Management (IAM). The role will be granted to the EC2 instance as part of the setup and will allow access to the database instance on RDS and to S3 storage.  
  • Log into the AWS console and navigate to Identity and Access Management.
Identity Access Management Console
  • On the left hand side select Roles and click Create New Role  
Create New Role
  • Enter the role name rds-and-s3-access-role  
Set Role Name
  • Select Role Type Amazon EC2 
Select Role Type
  • Attach AmazonS3FullAccess and AmazonRDSFullAccess policies to the role to allow read/write access to RDS and S3. 
Attach Policies for RDS and S3
  • Review the role configuration and click Create Role.
Create Role
Creating an EC2 Instance

Now that we've created a role that will provide read/write access to RDS and S3, we're ready to create the EC2 instance.
  • Navigate to the EC2 console and click launch instance.
  • Choose the AWS Linux AMI. This is the base server image we'll use to create the EC2 instance.   
Select Amazon Machine Image
  • To keep costs down select the t2.micro instance type. This is a pretty lightweight instance with limited resources, but it's sufficient for running our demo app.  
Select EC2 Instance Type
  • We only need one instance for the demo and can deploy it to the default VPC. Ensure that auto assign public IP is enabled, either via Use Subnet Setting or explicitly. This is required so that the instance can be accessed from the internet. Select the rds-and-s3-access-role IAM role we created earlier so that RDS and S3 services can be accessed from the instance. The remaining settings can be left defaulted as shown below. When all values have been selected click Next:Add Storage.
Configure EC2 Instance
  • Use the default storage settings for this instance and click Next:Tag Instance
Add Storage to EC2 Instance
  • Add a single tag to store the instance name and click Next:Configure Security Group
Tag EC2 Instance
The security group settings define what type of traffic is allowed to access your instance. We need to configure SSH access to the instance so that we can SSH onto the box to set it up and run the application. We also need HTTP access so that we can access the application once it's up and running. The Source value specifies which IPs the instance will accept traffic from. I spend quite a bit of time on the train (public Wi-Fi) where the IP address changes regularly, so for handiness I'm leaving the Source open. Ordinarily we'd want to limit this value so that the instance is not open to the world.
Configure EC2 Security Group
  • The final step is to review the configuration settings and click Launch. 
Review and Launch Instance
  • You'll be prompted to select a key pair that will be used to SSH onto the EC2 instance. If you don't already have a key pair you can create one now. 
Select Key Pair
  • Click Launch Instance to display the launch status screen shown below. At this point the instance is being created so you can navigate back to the EC2 instance landing screen.    
Launch Instance Summary
  • Returning to the instance landing screen you should see the instance with state initializing. It may take a few minutes before the instance state changes to running and is ready to use.
Instance Initializing
  • When the instance state changes to running the instance is ready to use. Open the description tab and get the public IP that will be used to SSH onto the instance.   
Instance Running
  • Open a command prompt and SSH onto the instance with ssh <ip_address> -l ec2-user -i <my_private_key>.pem as shown below.
SSH onto Instance
  • Once we're connected to the instance we need to do some basic setup. Switch to the root user, remove the default Java 7 JDK that comes bundled with the Amazon Machine Image and install the Java 8 JDK.
sudo su
yum remove java-1.7.0-openjdk -y
yum install java-1.8.0
  • The EC2 instance should now be ready to use, so all that remains is to copy up our application JAR and run it. On the command line use SCP to copy the application JAR to the EC2 instance, e.g. scp -i <my_private_key>.pem target/spring-boot-aws-0.1.0.jar ec2-user@<ip_address>:/home/ec2-user/
Copy Application JAR to EC2 Instance
  • When you SSH onto the EC2 instance the spring-boot-aws-0.1.0.jar should be in /home/ec2-user/. Launch the application by running the same command you ran locally, not forgetting to supply the application config JSON.
Running the Application on EC2 Instance
  • When the application starts you should be able to access it on port 8080 using the public DNS displayed in the description tab of the EC2 instances page. 
Access Application on EC2
In a production environment we wouldn't access the application directly on the EC2 instance. Instead we'd configure an Elastic Load Balancer to route and distribute incoming traffic across multiple EC2 instances. That however is a story for another day.

Summary

We've covered quite a bit in this post and hopefully provided a decent introduction to building and deploying a simple application on AWS.  EC2, RDS and S3 are just the tip of the iceberg in terms of AWS services, so I'd encourage you to dive in and experiment with some of the others. You could even use the demo app created here as a starting point for playing around with the likes of SQS or Elastic Cache.  As always I'm keen to hear some feedback, so if you have any questions on this post or have suggestions for future posts please let me know.

Tuesday, 1 December 2015

Spring Boot REST Tutorial

Spring Boot makes it easier to build Spring based applications by focusing on convention over configuration.  Following standard Spring Boot conventions we can minimize the configuration required to get an application up and running. The use of an embedded servlet container allows us to package the application as an executable JAR and simply invoke it on the command line to launch the application.
One of my favorite things about Boot is its emphasis on production readiness. Out of the box it provides a number of key non functional features, such as metrics, health checks and externalized configuration. In the past these types of features would have been written from scratch (or more worrying, not at all), before an application could be considered production ready.

This tutorial is an introduction to Spring Boot and describes the steps required to build and test a simple JPA backed REST service. The code is intended to be simple, easy to follow and provide readers with a template for building more elaborate services.

Source Code  

The full source code for this tutorial is available on github at https://github.com/briansjavablog/spring-boot-rest-tutorial. You may find it useful to have the code locally so that you can experiment with it as you work through the tutorial.

Main Application Class

We'll start off by looking at the main Application class. The first thing you'll probably notice is a main method that calls SpringApplication.run. This is used to launch the application and apply configuration based on the annotation values specified at the top of the class.

package com.blog.samples;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}

The presence of a main method in a web application may seem a little strange at first, but this allows us to run the application as a simple executable JAR. No longer do we need to build a WAR and deploy it to a servlet container. Instead we can simply execute the JAR, which will launch and run the application in an embedded servlet container. By default Boot uses Tomcat, but it's possible to use Jetty instead if that's your preference. For the sake of this tutorial we'll stick with Tomcat.

Now let's look at the class annotations that provide the base configuration for our application.
  • @SpringBootApplication - a wrapper annotation that automatically includes the following common configuration annotations
    • @Configuration - registers the class as a source of beans for Spring's Application Context. 
    • @EnableAutoConfiguration - Boot uses this to configure the application based on the JAR dependencies we've added to the POM.
    • @ComponentScan - tells Spring to look for and register beans from the base package com.blog.samples and all of its sub packages.

Domain Model

Before we can create our REST endpoints we need to define a domain model. To keep things really simple we'll have just 2 entities, a Customer and their Address.

package com.blog.samples.boot.rest.model;

import java.util.Date;

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.OneToOne;

import lombok.Getter;
import lombok.Setter;

@Entity
public class Customer{

    public Customer(){}
 
    public Customer(String firstName, String lastName, Date dateOfBirth, Address address) {
        super();
        this.firstName = firstName;
        this.lastName = lastName;
        this.dateOfBirth = dateOfBirth;
        this.address = address;
    }


    @Id
    @Getter
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
 
    @Setter
    @Getter
    private String firstName;
 
    @Setter
    @Getter
    private String lastName;
 
    @Setter 
    @Getter
    private Date dateOfBirth;

    @Setter
    @Getter
    @OneToOne(cascade = {CascadeType.ALL})
    private Address address;
}

package com.blog.samples.boot.rest.model;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

import lombok.Getter;
import lombok.Setter;

@Entity
public class Address{

    public Address(){}
 
    public Address(String street, String town, String county, String postcode) {
        this.street = street;
        this.town = town;
        this.county = county;
        this.postcode = postcode;
    }

    @Id
    @Getter
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
 
    @Setter
    @Getter
    private String street;
 
    @Setter
    @Getter
    private String town;
 
    @Setter   
    @Getter
    private String county;

    @Setter
    @Getter
    private String postcode;
}

Our sample application will use a simple HSQL in-memory database to store customer data. We'll create a simple JPA repository later, but first we need to configure the domain objects so that they can be mapped to database tables.
  • @Entity - registers the bean as a JPA entity. By default this entity is mapped to a database table with the same name. If we wanted to map this entity to a database table with a different name we would specify the table name with @Entity(name="app_customer")
  • @Id - marks the id instance variable as the primary key field.
  • @GeneratedValue(strategy=GenerationType.AUTO) - indicates that primary keys will be generated automatically when a new instance is persisted. As a result, the application is not responsible for setting the entity Id before persisting the object.
  • @OneToOne(cascade = {CascadeType.ALL}) - maps a one to one relationship between Customer and Address. CascadeType.ALL allows us to create a new Customer with a new Address, and persist both with a single save. The same cascading action applies when deleting a Customer (the associated Address is also deleted).  
  • @Setter/@Getter - these have nothing to do with JPA, they're just convenient Lombok annotations for generating getters and setters.

JPA Repository

Next we'll create a JPA repository for persisting the entities created above. Creating a repository couldn't be simpler, as Spring Data does all the heavy lifting for us. We simply create an interface that extends CrudRepository and supply our Customer target type. We don't need to create an implementation, as Spring Data will use the information supplied to route requests to the appropriate JPA CRUD repository implementation on our behalf. As a result we get a fully functional CRUD repository with a bunch of common data access methods implemented for us.

package com.blog.samples.boot.rest.repository;

import java.util.List;

import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;

import com.blog.samples.boot.rest.model.Customer;


public interface CustomerRepository extends CrudRepository<Customer, Long> {

    public List<Customer> findByFirstName(String firstName); 
}

I've added a custom query called findByFirstName. Spring Data recognizes that firstName is an instance variable on Customer and so provides us with an implementation for this query.
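The convention is purely name-based: Spring Data parses the method name, matches the part after findBy against an entity property, and generates the query for us. A rough plain-Java illustration of that name-to-property step (a simplified sketch of the idea, not Spring Data's actual parser):

```java
public class FinderNameSketch {

    // e.g. "findByFirstName" -> "firstName", which must match a Customer field
    static String propertyFor(String finderMethodName) {
        String property = finderMethodName.replaceFirst("^findBy", "");
        return Character.toLowerCase(property.charAt(0)) + property.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(propertyFor("findByFirstName"));
    }
}
```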

REST Controller

Now that most of the supporting components are in place, we can start looking at the REST endpoints. The controller we're going to build will expose the following CRUD endpoints.

  • GET http://localhost:8080/rest/customers/1 - Returns a JSON representation of the Customer resource with the Id specified in the URL (in this case 1).
  • GET http://localhost:8080/rest/customers - Returns a JSON representation of all available Customer resources.
  • POST http://localhost:8080/rest/customers - Creates a new Customer resource using the JSON representation supplied in the HTTP request body. The path to the newly created Customer resource is returned in a HTTP header as Location: /rest/customers/6.
  • PUT http://localhost:8080/rest/customers/1 - Updates an existing Customer resource with the JSON representation supplied in the HTTP request body.
  • DELETE http://localhost:8080/rest/customers/4 - Deletes the Customer specified by the Id in the URL.


The first step is to create a Controller class and annotate it with @RestController.

package com.blog.samples.boot.rest.controller;

import java.util.List;

import javax.servlet.http.HttpServletResponse;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.WebRequest;

import com.blog.samples.boot.rest.exception.CustomerNotFoundException;
import com.blog.samples.boot.rest.exception.InvalidCustomerRequestException;
import com.blog.samples.boot.rest.model.Customer;
import com.blog.samples.boot.rest.repository.CustomerRepository;

/**
 * Customer Controller exposes a series of RESTful endpoints
 */
@RestController
public class CustomerController {

    @Autowired
    private CustomerRepository customerRepository;

 
  • @RestController is a convenience annotation that registers the class as a controller for handling incoming HTTP requests, similar to @Controller. It has the added benefit of automatically applying @ResponseBody to each of the endpoint methods that return an entity. @ResponseBody is responsible for serializing the response object so that its JSON representation can be written to the HTTP response body. 
  • @Autowired CustomerRepository is the JPA repository we created earlier. We'll use it later to retrieve and persist Customer entities.

Get Customer Endpoint

Below is the endpoint definition for handling a HTTP Get request for a specific Customer resource.

    @RequestMapping(value = "/rest/customers/{customerId}", method = RequestMethod.GET)
    public Customer getCustomer(@PathVariable("customerId") Long customerId) {
  
        /* validate customer Id parameter */
        if (null==customerId) {
            throw new InvalidCustomerRequestException();
        }
  
        Customer customer = customerRepository.findOne(customerId);
  
        if(null==customer){
            throw new CustomerNotFoundException();
        }
  
        return customer;
    }
  • The @RequestMapping annotation supplies 2 important pieces of information. The value attribute defines the URL pattern supported and the method attribute defines the HTTP method supported. Spring's dispatcher servlet inspects incoming HTTP requests and uses both pieces of configuration to route the appropriate requests to this endpoint.   
  • The @PathVariable annotation strips the customer Id from the request URL and maps it to the customerId method parameter.
  • The first check throws a custom InvalidCustomerRequestException if no customer Id has been supplied. Later we'll create a custom exception handler to convert this application exception to an appropriate HTTP response code.
  • The CustomerRepository is then used to retrieve the specified Customer entity from the database.
  • Finally, if the requested Customer was not found we throw a custom CustomerNotFoundException.

Endpoint Exception Handling

In the endpoint method above we came across 2 custom exceptions. Given that this is a RESTful endpoint, its important we return the appropriate HTTP response code in the event of an error. Spring provides a neat way of mapping application exceptions to HTTP response codes, by allowing us to define a custom exception handler outside of the controller.

package com.blog.samples.boot.rest.exception;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseStatus;

import lombok.extern.slf4j.Slf4j;

@Slf4j
@ControllerAdvice
public class ControllerExceptionHandler {

    @ResponseStatus(HttpStatus.NOT_FOUND) // 404
    @ExceptionHandler(CustomerNotFoundException.class)
    public void handleNotFound() {
        log.error("Resource not found");
    }

    @ResponseStatus(HttpStatus.BAD_REQUEST) // 400
    @ExceptionHandler(InvalidCustomerRequestException.class)
    public void handleBadRequest() {
        log.error("Invalid Customer Request");
    }
 
    @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR) // 500
    @ExceptionHandler(Exception.class)
    public void handleGeneralError(Exception ex) {
        log.error("An error occurred processing request", ex);
    }
}

This class handles all application exceptions thrown by the controller endpoints. Each method corresponds to a particular type of exception and contains 2 key pieces of configuration.
  • @ExceptionHandler - defines the application exception that this method handles. We can define any exception type that extends Throwable. Spring calls the handleNotFound method when CustomerNotFoundException is thrown by an endpoint.
  • @ResponseStatus - defines the HTTP response code returned to the client when this type of exception is thrown. This allows us to map any kind of application exception to the most suitable HTTP response code. The handleNotFound method has been configured so that a HTTP 404 is returned when CustomerNotFoundException is thrown by an endpoint. This provides the client application with the correct semantic context when the requested resource is not available.  
The thrown exception is passed to the method as an argument so we're free to use it as we please. In this instance I simply log the error, but in a production application we may want to do something more interesting, like gather metrics for different types of failure.
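Conceptually, @ControllerAdvice plus @ExceptionHandler behave like a lookup from exception type to HTTP status code. This plain-Java sketch (illustrative only, no Spring involved) captures that idea, falling back to 500 for unmapped exceptions:

```java
import java.util.HashMap;
import java.util.Map;

public class ExceptionStatusSketch {

    static class CustomerNotFoundException extends RuntimeException {}
    static class InvalidCustomerRequestException extends RuntimeException {}

    private static final Map<Class<?>, Integer> STATUS_BY_EXCEPTION = new HashMap<>();
    static {
        STATUS_BY_EXCEPTION.put(CustomerNotFoundException.class, 404);
        STATUS_BY_EXCEPTION.put(InvalidCustomerRequestException.class, 400);
    }

    // Known exceptions map to specific statuses; anything else falls
    // through to the catch-all 500 handler.
    static int statusFor(Throwable t) {
        return STATUS_BY_EXCEPTION.getOrDefault(t.getClass(), 500);
    }

    public static void main(String[] args) {
        System.out.println(statusFor(new CustomerNotFoundException()));
    }
}
```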

Get All Customers Endpoint

Below is the endpoint definition for handling HTTP Get requests for all Customer resources.

@RequestMapping(value = "/rest/customers", method = RequestMethod.GET)
public List<Customer> getCustomers() {
  
    return (List<Customer>) customerRepository.findAll();
}

This endpoint method is very simple indeed and uses the CustomerRepository to return all Customer resources from the database. The request mapping configuration is similar to that defined earlier for the Get Customer by Id endpoint. The obvious difference is that the URL template and the method signature do not expect an Id parameter.

Create Customer Endpoint

Now let's take a look at how we create a new Customer resource. The endpoint defined below does exactly that.

@RequestMapping(value = { "/rest/customers" }, method = { RequestMethod.POST })
public Customer createCustomer(@RequestBody Customer customer, HttpServletResponse httpResponse, WebRequest request) {

    Customer createdCustomer = customerRepository.save(customer);
    httpResponse.setStatus(HttpStatus.CREATED.value());
    httpResponse.setHeader("Location", String.format("%s/rest/customers/%s", request.getContextPath(), createdCustomer.getId()));
 
    return createdCustomer;
}
  • @RequestMapping - contains the URL pattern that this endpoint will handle requests for. The method attribute is POST, indicating that this method will process incoming HTTP POST requests. 
  • @RequestBody - requests are expected to contain a JSON representation of a Customer resource in the request body. We use the @RequestBody annotation on the method signature to map the incoming JSON payload to a Customer POJO.
  • The Customer entity is saved to the database using the JPA repository we created earlier.
  • A HTTP 201 Created status is set on the response so that the client knows the resource was created successfully.
  • A Location header is set on the HTTP response with a URL providing the client with a handle on the resource they've just created.
  • Finally we return an updated representation of the resource that's just been created. 
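The Location header value is just string assembly, so it's easy to see what the client receives. Here's a minimal sketch of the same String.format pattern used in the endpoint (the context path is shown empty, as when the app runs at the root context):

```java
public class LocationHeaderSketch {

    // Same pattern as the endpoint: <contextPath>/rest/customers/<id>
    static String locationFor(String contextPath, long customerId) {
        return String.format("%s/rest/customers/%s", contextPath, customerId);
    }

    public static void main(String[] args) {
        System.out.println(locationFor("", 6L));
    }
}
```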

Update Customer Endpoint

Next we'll create an update endpoint so that clients can apply updates to an existing Customer resource.

@RequestMapping(value = { "/rest/customers/{customerId}" }, method = { RequestMethod.PUT })
public void updateCustomer(@RequestBody Customer customer, @PathVariable("customerId") Long customerId, HttpServletResponse httpResponse) {

    if(!customerRepository.exists(customerId)){
        httpResponse.setStatus(HttpStatus.NOT_FOUND.value());
    }
    else{
        customerRepository.save(customer);
        httpResponse.setStatus(HttpStatus.NO_CONTENT.value()); 
    }
}
  • @RequestMapping - contains the URL pattern that this endpoint will handle requests for. The method attribute is PUT, indicating that this method will process incoming HTTP PUT requests. 
  • @RequestBody - Requests are expected to contain a JSON representation of a Customer resource in the request body. We use the @RequestBody annotation on the method signature to map the incoming JSON payload to a Customer POJO.
  • We first check that the supplied entity already exists and, if it doesn't, set a HTTP 404 response code.
  • Otherwise we save the supplied entity; JPA will use the Customer Id to update the existing row. Finally we set a HTTP 204 No Content status to let the client know the operation succeeded, but that we haven't returned anything in the response body.

Delete Customer Endpoint

The final endpoint we're going to create is for deleting an existing entity.

@RequestMapping(value = "/rest/customers/{customerId}", method = RequestMethod.DELETE)
public void removeCustomer(@PathVariable("customerId") Long customerId, HttpServletResponse httpResponse) {

    if(customerRepository.exists(customerId)){
        customerRepository.delete(customerId); 
    }
  
    httpResponse.setStatus(HttpStatus.NO_CONTENT.value());
}
  • @RequestMapping - contains the URL pattern that this endpoint will handle requests for. The method attribute is DELETE, indicating that this method will process incoming HTTP DELETE requests. 
  • The @PathVariable annotation strips the customer Id from the request URL and maps it to the customerId method parameter.
  • We then check that the specified entity exists and, if it does, use the JPA repository to delete it.
  • Finally we set a HTTP 204 No Content status to let the client know the operation succeeded, but that we haven't returned anything in the response body.

Test Data

We've now created all 5 endpoints, but before we can run the service we need to set up some test data. We'll use 2 classes, one to supply the test data and another to load that data on application startup. I've deliberately split this into two classes as I want to reuse the data provider later as part of the integration tests. The data provider simply returns a list of Customer objects as shown below.

package com.blog.samples.boot.rest.data;

import java.util.Arrays;
import java.util.List;

import org.joda.time.DateTime;
import org.springframework.stereotype.Component;

import com.blog.samples.boot.rest.model.Address;
import com.blog.samples.boot.rest.model.Customer;

@Component
public class DataBuilder {
 
    public List<Customer> createCustomers() {

        Customer customer1 = new Customer("Joe", "Smith", DateTime.parse("1982-01-10").toDate(),
             new Address("High Street", "Belfast", "Down", "BT893PY"));

        Customer customer2 = new Customer("Paul", "Jones", DateTime.parse("1973-01-03").toDate(),
             new Address("Main Street", "Lurgan", "Armagh", "BT283FG"));

        Customer customer3 = new Customer("Steve", "Toale", DateTime.parse("1979-03-08").toDate(),
             new Address("Main Street", "Newry", "Down", "BT359JK"));
  
        return Arrays.asList(customer1, customer2, customer3);
    }
}

Next we'll create a DataLoader and configure it to run on application startup.

package com.blog.samples.boot.rest.data;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;

import com.blog.samples.boot.rest.repository.CustomerRepository;

import lombok.extern.slf4j.Slf4j;

@Component
@Slf4j
public class DataLoader implements ApplicationListener<ContextRefreshedEvent>{

    @Autowired
    private DataBuilder dataBuilder;
 
    @Autowired
    private CustomerRepository customerRepository;

    @Override
    public void onApplicationEvent(ContextRefreshedEvent contextRefreshedEvent) {

        log.debug("Loading test data...");
        dataBuilder.createCustomers().forEach(customer -> customerRepository.save(customer));
        log.debug("Test data loaded...");
    }
}

On application startup Spring fires a ContextRefreshedEvent, which invokes the onApplicationEvent method; this gets test data from the DataBuilder and saves it to the database. 
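
Spring Boot also offers a slightly simpler hook for this kind of startup work: the CommandLineRunner interface, whose run method is called once the application has started. A sketch of the same loader using it (functionally equivalent for our purposes) might look like this:

```java
@Component
public class CommandLineDataLoader implements CommandLineRunner {

    @Autowired
    private DataBuilder dataBuilder;

    @Autowired
    private CustomerRepository customerRepository;

    @Override
    public void run(String... args) throws Exception {
        /* called by Spring Boot once the application context is ready */
        dataBuilder.createCustomers().forEach(customer -> customerRepository.save(customer));
    }
}
```

Either approach works fine here; the ApplicationListener version above is what's used in the sample project.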


Endpoint Integration Tests

Now that we have everything in place, it's time to write some integration tests to prove it all works. I like integration tests for REST endpoints as they allow us to test the HTTP endpoint, controller logic and data access as a single unit. If this were production code we would of course supplement the integration tests with a suite of fine grained unit tests using mocked dependencies. For the sake of this tutorial though, we'll stick with the integration tests.

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
@IntegrationTest({"server.port=0"})
public class CustomerControllerIT {

 @Value("${local.server.port}")
 private int port;
 private URL base;
 private RestTemplate template;

 @Autowired
 private DataBuilder dataBuilder;
 
 @Autowired
 private CustomerRepository customerRepository;
 
 private static final String JSON_CONTENT_TYPE = "application/json;charset=UTF-8"; 
 

In the class header we use a number of annotations to configure the tests. We tell Spring that this is an integration test that requires a WebApplicationContext to run, and server.port=0 tells Spring Boot to start the embedded container on a random free port, which is then injected into the port field.
A RestTemplate and URL are used to build the HTTP requests we'll send to the endpoints. We've also injected the DataBuilder, which we'll use below to set up test data prior to each test.

Next we'll add a setup method that will run before each test, to build the base URL and set up test data.

@Before
public void setUp() throws Exception {
    this.base = new URL("http://localhost:" + port + "/rest/customers");
    template = new TestRestTemplate();  
  
    /* remove and reload test data */
    customerRepository.deleteAll();  
    dataBuilder.createCustomers().forEach(customer -> customerRepository.save(customer));  
}


Get All Customers Test

@Test
public void getAllCustomers() throws Exception {
    ResponseEntity<String> response = template.getForEntity(base.toString(), String.class);  
    assertThat(response.getStatusCode(), equalTo(HttpStatus.OK));
  
    List<Customer> customers = convertJsonToCustomers(response.getBody());  
    assertThat(customers.size(), equalTo(3));  
}

This test sends an HTTP GET to the /rest/customers endpoint and verifies that the HTTP response code is 200. We use a convenience method to convert the JSON payload in the response body to a list of Customer objects, and check that 3 objects have been returned.
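
The convertJsonToCustomers and convertJsonToCustomer helpers aren't shown above. The sample project presumably implements them with Jackson's ObjectMapper, which is already on the classpath via Spring Boot; a minimal sketch, assuming the helpers live in the test class, might look like this:

```java
/* imports: com.fasterxml.jackson.databind.ObjectMapper,
            com.fasterxml.jackson.core.type.TypeReference */

private List<Customer> convertJsonToCustomers(String json) throws IOException {
    ObjectMapper mapper = new ObjectMapper();
    /* TypeReference carries the generic List<Customer> type to Jackson */
    return mapper.readValue(json, new TypeReference<List<Customer>>() {});
}

private Customer convertJsonToCustomer(String json) throws IOException {
    return new ObjectMapper().readValue(json, Customer.class);
}
```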

Get Customer By Id Test

@Test
public void getCustomerById() throws Exception {
  
    Long customerId = getCustomerIdByFirstName("Joe");
    ResponseEntity<String> response = template.getForEntity(String.format("%s/%s", base.toString(), customerId), String.class);
    assertThat(response.getStatusCode(), equalTo(HttpStatus.OK));
    assertThat(response.getHeaders().getContentType().toString(), equalTo(JSON_CONTENT_TYPE));
  
    Customer customer = convertJsonToCustomer(response.getBody());
  
    assertThat(customer.getFirstName(), equalTo("Joe"));
    assertThat(customer.getLastName(), equalTo("Smith"));
    assertThat(customer.getDateOfBirth().toString(), equalTo("Sun Jan 10 00:00:00 GMT 1982"));
    assertThat(customer.getAddress().getStreet(), equalTo("High Street"));
    assertThat(customer.getAddress().getTown(), equalTo("Belfast"));
    assertThat(customer.getAddress().getCounty(), equalTo("Down"));
    assertThat(customer.getAddress().getPostcode(), equalTo("BT893PY"));
}

We start by looking up the target customer's Id using the JPA repository created earlier. We then send an HTTP GET to /rest/customers/{id} and verify the HTTP response code and content type. We convert the JSON payload in the response body to a Customer object and check that it is populated as expected.
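
getCustomerIdByFirstName is another helper that isn't shown above. A simple sketch, assuming CustomerRepository extends JpaRepository (so that findAll returns a List), is to stream over the repository contents:

```java
private Long getCustomerIdByFirstName(String firstName) {
    /* assumes findAll() returns a List, as it does for JpaRepository */
    return customerRepository.findAll().stream()
            .filter(customer -> customer.getFirstName().equals(firstName))
            .findFirst()
            .map(Customer::getId)
            .orElseThrow(() -> new IllegalArgumentException("No customer named " + firstName));
}
```

A Spring Data derived query such as findByFirstName on the repository interface would do the same job more efficiently.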

Create Customer Test

@Test
public void createCustomer() throws Exception {

    Customer customer = new Customer("Gary", "Steale", DateTime.parse("1984-03-08").toDate(),
       new Address("Main Street", "Portadown", "Armagh", "BT359JK"));

    ResponseEntity<String> response = template.postForEntity("http://localhost:" + port + "/rest/customers", customer, String.class);
    assertThat(response.getStatusCode(), equalTo(HttpStatus.CREATED));
    assertThat(response.getHeaders().getContentType().toString(), equalTo(JSON_CONTENT_TYPE));
    assertThat(response.getHeaders().getFirst("Location"), containsString("/rest/customers/"));
  
    Customer returnedCustomer = convertJsonToCustomer(response.getBody());  
    assertThat(customer.getFirstName(), equalTo(returnedCustomer.getFirstName()));
    assertThat(customer.getLastName(), equalTo(returnedCustomer.getLastName()));
    assertThat(customer.getDateOfBirth(), equalTo(returnedCustomer.getDateOfBirth()));
    assertThat(customer.getAddress().getStreet(), equalTo(returnedCustomer.getAddress().getStreet()));
    assertThat(customer.getAddress().getTown(), equalTo(returnedCustomer.getAddress().getTown()));
    assertThat(customer.getAddress().getCounty(), equalTo(returnedCustomer.getAddress().getCounty()));
    assertThat(customer.getAddress().getPostcode(), equalTo(returnedCustomer.getAddress().getPostcode()));
}

In this test we create a Customer object and POST it as JSON to /rest/customers. We verify the response status, the content type and the existence of a Location header, then convert the JSON response body back to a Customer object to check that it contains the expected representation of the new entity.
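
For reference, a typical Boot 1.x controller method that would satisfy these assertions (201 status, Location header, JSON body) might look like the sketch below. This is an illustration, not necessarily the exact code in the sample project:

```java
@RequestMapping(value = "/rest/customers", method = RequestMethod.POST)
public ResponseEntity<Customer> createCustomer(@RequestBody Customer customer) {

    Customer savedCustomer = customerRepository.save(customer);

    /* Location header points at the newly created resource */
    HttpHeaders headers = new HttpHeaders();
    headers.setLocation(ServletUriComponentsBuilder.fromCurrentRequest()
            .path("/{id}").buildAndExpand(savedCustomer.getId()).toUri());

    return new ResponseEntity<>(savedCustomer, headers, HttpStatus.CREATED);
}
```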

Update Customer Test

@Test
public void updateCustomer() throws Exception {

    Long customerId = getCustomerIdByFirstName("Joe");
    ResponseEntity<String> getCustomerResponse = template.getForEntity(String.format("%s/%s", base.toString(), customerId), String.class);
    assertThat(getCustomerResponse.getStatusCode(), equalTo(HttpStatus.OK));
    assertThat(getCustomerResponse.getHeaders().getContentType().toString(), equalTo(JSON_CONTENT_TYPE));
  
    Customer returnedCustomer = convertJsonToCustomer(getCustomerResponse.getBody());
    assertThat(returnedCustomer.getFirstName(), equalTo("Joe"));
    assertThat(returnedCustomer.getLastName(), equalTo("Smith"));
    assertThat(returnedCustomer.getDateOfBirth().toString(), equalTo("Sun Jan 10 00:00:00 GMT 1982"));
    assertThat(returnedCustomer.getAddress().getStreet(), equalTo("High Street"));
    assertThat(returnedCustomer.getAddress().getTown(), equalTo("Belfast"));
    assertThat(returnedCustomer.getAddress().getCounty(), equalTo("Down"));
    assertThat(returnedCustomer.getAddress().getPostcode(), equalTo("BT893PY"));
  
    /* convert JSON response to Java and update name */
    ObjectMapper mapper = new ObjectMapper();
    Customer customerToUpdate = mapper.readValue(getCustomerResponse.getBody(), Customer.class);
    customerToUpdate.setFirstName("Wayne");
    customerToUpdate.setLastName("Rooney");

    /* PUT updated customer */
    HttpHeaders headers = new HttpHeaders();
    headers.setContentType(MediaType.APPLICATION_JSON); 
    HttpEntity<Customer> entity = new HttpEntity<Customer>(customerToUpdate, headers); 
    ResponseEntity<String> response = template.exchange(String.format("%s/%s", base.toString(), customerId), HttpMethod.PUT, entity, String.class, customerId);
  
    assertThat(response.getBody(), nullValue());
    assertThat(response.getStatusCode(), equalTo(HttpStatus.NO_CONTENT));

    /* GET updated customer and ensure name is updated as expected */
    ResponseEntity<String> getUpdatedCustomerResponse = template.getForEntity(String.format("%s/%s", base.toString(), customerId), String.class);
    assertThat(getUpdatedCustomerResponse.getStatusCode(), equalTo(HttpStatus.OK));  
    assertThat(getUpdatedCustomerResponse.getHeaders().getContentType().toString(), equalTo(JSON_CONTENT_TYPE));
  
    Customer updatedCustomer = convertJsonToCustomer(getUpdatedCustomerResponse.getBody());
    assertThat(updatedCustomer.getFirstName(), equalTo("Wayne"));
    assertThat(updatedCustomer.getLastName(), equalTo("Rooney"));
    assertThat(updatedCustomer.getDateOfBirth().toString(), equalTo("Sun Jan 10 00:00:00 GMT 1982"));
    assertThat(updatedCustomer.getAddress().getStreet(), equalTo("High Street"));
    assertThat(updatedCustomer.getAddress().getTown(), equalTo("Belfast"));
    assertThat(updatedCustomer.getAddress().getCounty(), equalTo("Down"));
    assertThat(updatedCustomer.getAddress().getPostcode(), equalTo("BT893PY"));
}

This test has a number of steps. We begin by issuing an HTTP GET to retrieve an existing Customer resource. We then update that entity and issue an HTTP PUT request with the updated Customer passed as JSON in the request body. Finally we issue another HTTP GET request to retrieve the updated representation of the Customer resource and verify that it has been updated as expected.

Delete Customer Test

@Test
public void deleteCustomer() throws Exception {

    Long customerId = getCustomerIdByFirstName("Joe");  
    ResponseEntity<String> response = template.getForEntity(String.format("%s/%s", base.toString(), customerId), String.class);
    assertThat(response.getStatusCode(), equalTo(HttpStatus.OK));
    assertThat(response.getHeaders().getContentType().toString(), equalTo(JSON_CONTENT_TYPE));
  
    Customer customer = convertJsonToCustomer(response.getBody());
    assertThat(customer.getFirstName(), equalTo("Joe"));
    assertThat(customer.getLastName(), equalTo("Smith"));
    assertThat(customer.getDateOfBirth().toString(), equalTo("Sun Jan 10 00:00:00 GMT 1982"));
    assertThat(customer.getAddress().getStreet(), equalTo("High Street"));
    assertThat(customer.getAddress().getTown(), equalTo("Belfast"));
    assertThat(customer.getAddress().getCounty(), equalTo("Down"));
    assertThat(customer.getAddress().getPostcode(), equalTo("BT893PY"));
  
    /* delete customer */
    template.delete(String.format("%s/%s", base.toString(), customerId));
  
    /* attempt to get customer and ensure we get a 404 */
    ResponseEntity<String> secondCallResponse = template.getForEntity(String.format("%s/%s", base.toString(), customerId), String.class);
    assertThat(secondCallResponse.getStatusCode(), equalTo(HttpStatus.NOT_FOUND));
}

This test begins by retrieving an existing Customer resource with an HTTP GET request. We then issue an HTTP DELETE request to remove the entity, followed by another HTTP GET to check that the entity is no longer available.
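
One quirk worth noting: RestTemplate.delete returns void, so the test above can't assert anything about the DELETE response itself. If you want to verify the status code of the DELETE call, exchange works instead (this sketch assumes the endpoint returns 204 on success):

```java
ResponseEntity<String> deleteResponse = template.exchange(
        String.format("%s/%s", base.toString(), customerId),
        HttpMethod.DELETE, null, String.class);
assertThat(deleteResponse.getStatusCode(), equalTo(HttpStatus.NO_CONTENT));
```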

Running from the Command Line

The integration tests we created above are handy, but it's also useful to exercise the service from the command line with curl. To run the sample code from the command line follow the instructions below.
  1. cd to the spring-boot-rest directory
  2. run mvn clean package
  3. run java -jar target/spring-boot-rest-0.1.0.jar and the application should start up

Testing from the Command Line

Below are a few sample curl commands for calling the service from the command line. This is a handy way of sending HTTP requests to an endpoint without the hassle of creating a client application.

Get Customer by Id

curl -i localhost:8080/rest/customers/1

Get All Customers

curl -i localhost:8080/rest/customers

Create New Customer (POST)

curl -i -H "Content-Type: application/json" -X POST -d '{"firstName":"JoeXXXXXXXXXXXXXXX","lastName":"SmithXXXXXXXXXXXXX","dateOfBirth":379468800000,"address":{"street":"High Street","town":"Belfast","county":"Down","postcode":"BT893PY"}}' localhost:8080/rest/customers

Update Customer (PUT)

curl -i -H "Content-Type: application/json" -X PUT -d '{"id":3,"firstName":"Joe","lastName":"Smith333333","dateOfBirth":379468800000,"address":{"id":3,"street":"High Street","town":"Belfast","county":"Down","postcode":"BT893PY"}}' localhost:8080/rest/customers/3

Delete Customer

curl -i -X DELETE localhost:8080/rest/customers/2

Summary

After reading this post you should have the knowledge required to build and test a simple RESTful service with Spring Boot. The full source code for this tutorial is available on GitHub at https://github.com/briansjavablog/spring-boot-rest-tutorial, so feel free to download it and have a play around. If you found this material useful, please share it with others or leave a comment below.