Deploy a Java app as Docker container on AWS with Maven (Part 2)
Introduction
In part 1 of this series, we published a docker image containing a simple microservice to the AWS Elastic Container Registry (ECR).
ECR can be used like any other Docker registry to pull images and run them on any authenticated EC2 instance.
This article describes the necessary steps to provision a minimal EC2 instance that can run a docker container from an image in ECR. The instance is set up via the AWS CLI tools rather than the Web Console, on purpose: as soon as you get serious about your AWS endeavours, the UI won’t be of much use, since interaction with it cannot feasibly be automated. The CLI, on the other hand, can be used in your setup scripts.
Prerequisite: An existing ECR registry with images
This article assumes that you have already pushed a docker image to an ECR registry in your AWS account. You should read part 1 of the series, where you will create a maven project that pushes docker images to ECR. You can also read the getting started guide on AWS ECR and push an image to your registry.
Provide an AWS EC2 key pair
Our docker container will be deployed on EC2, so naturally, we have to provide an instance. But there’s more to it: the EC2 instance must be accessible to us via SSH and HTTP, it must run the docker daemon, and it must have access to the ECR registry where our docker images reside.
First things first: to be able to log in to our instance later on, we first have to create a key pair. If you already have a key pair registered on your AWS account, you can safely skip this section.
To create an AWS key pair for your account, follow along the instructions given by the AWS documentation. On UNIX-like systems, the command is as follows:
$ aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text > MyKeyPair.pem
Remember the location where you stored the key file. You will need it later.
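One detail worth doing right away: ssh refuses to use a private key file that other users can read. A minimal sketch of locking the file down, assuming the key was saved as MyKeyPair.pem in the current directory (a placeholder file is created here only so the sketch runs on its own):

```shell
# The .pem file produced by create-key-pair; a placeholder is created
# here only so this sketch is runnable on its own.
KEY_FILE=MyKeyPair.pem
[ -f "$KEY_FILE" ] || echo "placeholder key material" > "$KEY_FILE"

# SSH refuses private keys that other users can read, so restrict
# the file to owner read-only before using it with ssh -i.
chmod 400 "$KEY_FILE"

# Show the resulting permission bits.
stat -c '%a' "$KEY_FILE"
```

Without this step, ssh aborts with an “unprotected private key file” warning later on.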
Create a security group
By default, any EC2 instance you create won’t be accessible from your local machine. We will have to make sure you can log in to the instance via SSH to install docker and run images. Access must be granted over port 22 for SSH and, in case you use the docker image created in the last article, port 8080 for the web application.
This article will not go into great detail on AWS networking. The instance as well as all associated networking resources will be deployed in the account’s default VPC. If you don’t yet know what a VPC is, that’s fine - think of it as an isolated network in the AWS cloud where you can safely run your EC2 instances.
To open up the necessary ports for access from your local machine, we will use security groups. A security group in AWS behaves like a firewall preventing or granting access to your instances to a range of source IP addresses, protocols and ports. In our example, we need TCP over port 22 for SSH and TCP over port 8080 for HTTP from the IP address of your computer - or, if you don’t know your IP address and are lazy, for the whole world.
To create a security group, use the following command:
$ aws ec2 create-security-group --group-name DockerHostSecurityGroup --description "A security group for the docker tutorial on aerben.me"
Then go ahead and allow access on ports 22 and 8080 over TCP. The commands below allow access from every source IP address by specifying the CIDR block 0.0.0.0/0. If you know your public IP address, or a range it falls into, you can and should substitute the appropriate CIDR range.
$ aws ec2 authorize-security-group-ingress --group-name DockerHostSecurityGroup --protocol tcp --port 22 --cidr "0.0.0.0/0"
$ aws ec2 authorize-security-group-ingress --group-name DockerHostSecurityGroup --protocol tcp --port 8080 --cidr "0.0.0.0/0"
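If you do want to restrict access to just your machine, a /32 CIDR block matches exactly one address. A sketch using a placeholder address (203.0.113.7 is from a reserved documentation range; substitute your real public IP, e.g. the output of `curl -s https://checkip.amazonaws.com`):

```shell
# Placeholder address for illustration only; substitute your actual
# public IP (for example: curl -s https://checkip.amazonaws.com).
MY_IP="203.0.113.7"

# A /32 suffix means "exactly this one address".
CIDR="${MY_IP}/32"
echo "$CIDR"

# The tightened ingress rule would then be (not executed here):
# aws ec2 authorize-security-group-ingress --group-name DockerHostSecurityGroup \
#   --protocol tcp --port 22 --cidr "$CIDR"
```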
That’s all. We don’t have to explicitly set egress (“outbound”) rules because security groups in AWS are stateful. In simple terms, that means that all traffic that has been allowed in is also allowed back out.
The security group is now ready to use for the EC2 instance.
Run an EC2 instance
Now that we have all prerequisites in place, actually creating the instance is pleasantly simple.
All you need is the key and the security group you just created and a so-called Amazon Machine Image (AMI) ID. This is the image used to create the boot volume of your instance. The example below uses the AMI for Amazon Linux in the EU (Frankfurt) region. You can look up the Amazon Machine Image for your region here. Be sure to use the “HVM (SSD) EBS-backed 64 Bit” AMI.
The command to run a free-tier eligible instance is as follows:
$ aws ec2 run-instances --image-id ami-5652ce39 --count 1 --instance-type t2.micro --key-name MyKeyPair --security-groups DockerHostSecurityGroup
We will now have to wait until the instance comes alive. The following command will tell you the current state of your instance as well as its public IP address and Instance ID, both of which we will soon need:
$ aws ec2 describe-instances --query "Reservations[*].Instances[*].[InstanceId,PublicIpAddress,State.Name]"
As soon as the EC2 instance state switches to “running”, you can continue with the tutorial.
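Instead of re-running describe-instances by hand, the AWS CLI also ships waiters that block until a state is reached. A sketch wrapped in a helper function, assuming configured AWS credentials and the instance ID from the previous step (the function is only defined here, not executed, since it needs a live AWS account):

```shell
# Block until the given instance reaches the "running" state,
# then print its public IP address. Requires AWS credentials.
wait_until_running() {
  local instance_id="$1"
  aws ec2 wait instance-running --instance-ids "$instance_id"
  aws ec2 describe-instances --instance-ids "$instance_id" \
    --query "Reservations[0].Instances[0].PublicIpAddress" --output text
}

# Usage (with your own instance id):
# wait_until_running i-0123456789abcdef0
```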
Connect to the instance and install docker
The EC2 instance is now running, so we can go ahead and connect to it via SSH. Note that the “running” state of an EC2 instance just means that the virtual machine has started - the OS itself might still be booting. You might therefore have to wait a few minutes before you can connect to the server.
The default user for Amazon Linux is ec2-user, so given the private key file you created above and the public IP address of your instance, you can open the connection as follows:
$ ssh -i MyKeyPair.pem ec2-user@35.158.97.195
You should now run sudo yum update -y to install all available updates.
Next, we need to install and start docker on our EC2 instance. Using yum, this isn’t hard at all:
$ sudo yum install -y docker
$ sudo service docker start
This installs docker on your machine. Now, the last thing we have to do is download and run the image, and we’re done… right?
It’s not that simple. If we try to run the image from ECR (remember to replace the account id), docker will fail to pull it with an authentication error:
$ sudo docker run -p 8080:8080 [[ACCOUNT_ID]].dkr.ecr.eu-central-1.amazonaws.com/spark-sample-service:1.0
Oh yeah, we’ve seen that error already in the last article: we need to perform a docker login against ECR before we can pull an image.
Authenticate docker with ECR and run the image
If you have read the last article, this is no news for you: for the instance to gain access to ECR, you must first authenticate docker against the registry.
To that end, use the AWS ECR tools to retrieve credentials for logging in. The get-login command prints a ready-to-use docker login command; execute its output (prefixed with sudo, since docker needs root privileges on this instance) to authenticate:
$ aws ecr get-login --no-include-email
Then, at last, start a container from your image
$ sudo docker run -p 8080:8080 [[ACCOUNT_ID]].dkr.ecr.eu-central-1.amazonaws.com/spark-sample-service:1.0
and verify your setup by calling your service via the instance’s public IP address:
$ curl 35.158.97.195:8080
And that’s it!
Don’t forget to terminate your instance as soon as you are done:
$ aws ec2 terminate-instances --instance-ids i-0408fef6a295da99e
Replace the instance id with the one of your own instance.
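Note that terminate-instances only removes the instance; the security group and key pair from this tutorial stick around. A sketch of a full cleanup, wrapped in a helper function (only defined here, not executed, since it needs a live AWS account; the security group can only be deleted once no instance references it, hence the waiter):

```shell
# Tear down everything created in this tutorial. The security group
# cannot be deleted while the instance still references it, so wait
# for termination first. Requires AWS credentials.
cleanup() {
  local instance_id="$1"
  aws ec2 terminate-instances --instance-ids "$instance_id"
  aws ec2 wait instance-terminated --instance-ids "$instance_id"
  aws ec2 delete-security-group --group-name DockerHostSecurityGroup
  aws ec2 delete-key-pair --key-name MyKeyPair
}

# Usage (with your own instance id):
# cleanup i-0123456789abcdef0
```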
Conclusion
Over the course of the last two articles, we’ve learned a lot:
- Packaging a Java application as a docker image with the Spotify Dockerfile Maven Plugin
- Authenticating Docker against the AWS Elastic Container Registry (ECR)
- Pulling and running images from ECR
- Creating AWS EC2 key pairs
- Granting network access to EC2 instances via security groups
- Starting and terminating EC2 instances via the command line
I hope you’ve enjoyed the series and would be glad to hear your feedback in the comments section below!