How to deploy to AWS EC2 with GitHub Actions, secrets and Docker Compose



In this post we will learn how to deploy to AWS EC2 with a GitHub Actions pipeline, GitHub secrets (for sensitive credentials) and Docker Compose.

Prerequisites:

  • Knowledge of AWS

  • Linux knowledge

  • Basic GitHub Actions knowledge

  • An AWS account (and a running EC2 instance)

  • A repo containing your basic app (we will be using a NestJS app in this lesson)

  • Basic knowledge of Docker and YAML (you can find links in my article on how to set up Docker Compose for a development environment)

The first step is to use GitHub secrets to store our important credentials, which in this case will be a MongoDB URL. You can store any secret your application depends on.

On GitHub, navigate to your repository, then click Settings > Secrets and variables > Actions > New repository secret.

I will be storing my Mongo URI as MONGO_URI in GitHub secrets.

The next step is to create an Elastic Container Registry (ECR) repository on AWS.

To do this, follow the steps below:

  • Navigate to your AWS web console

  • Type ECR in the search box

  • Click Elastic Container Registry

  • Click Create repository

  • Select Private

  • Enter your repository name (note it down, as I will refer to it as yourecrname in the instructions below; substitute it accordingly)

  • Click Create repository

Now store the registry URI (the part of the repository URI before the slash) as ECR_REGISTRY in GitHub secrets.

Then add the following secrets:

  • AWS_SECRET_ACCESS_KEY: your IAM secret access key

  • AWS_ACCESS_KEY_ID: your IAM access key ID

  • EC2_HOST: your EC2 instance's public IPv4 address

  • EC2_USERNAME: ubuntu if you used an Ubuntu AMI for your EC2; otherwise look up the appropriate default username for your AMI. Preferably use an Ubuntu AMI, as the instructions below are written for one.

  • EC2_PRIVATE_KEY: your .pem key (the one you registered with the EC2 instance; copy it and paste it into the secret box)

  • AWS_REGION: your AWS region
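Note that the ECR registry URI always follows the same pattern, so you can derive the ECR_REGISTRY value from your account ID and region. A quick shell sketch, where the account ID and region are placeholder values:

```shell
# Placeholder values; substitute your own account ID and region
AWS_ACCOUNT_ID="123456789012"
AWS_REGION="us-east-1"

# An ECR registry URI always has this shape:
ECR_REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
echo "$ECR_REGISTRY"   # 123456789012.dkr.ecr.us-east-1.amazonaws.com
```

Store exactly this hostname (without the repository name after it) in the ECR_REGISTRY secret.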

Next, we will create the docker-compose file that we will use for deployment in our pipeline.

In the root of your project, create a file named docker-compose.yourecrname.prod.yml

Paste the following code into it:

services:
    api:
        image: ${ECR_REGISTRY}/yourecrname:1.0.0
        command: npm run start:prod
        environment:
            - MONGO_URI=${MONGO_URI}
        ports:
            - 8000:8000

Now let's explain what is going on here. The services block lets us specify multiple services, since docker-compose can manage a multi-container application. In this case we are only running one service, api.

The image key specifies the image our container will use. ${ECR_REGISTRY}/yourecrname:1.0.0 references your ECR registry, which we stored in secrets, and specifically your repository with the tag 1.0.0. Once we start writing the pipeline you will see how this fits together.

The command key overrides the default command defined in the Dockerfile used to build the image. Since we are using a NestJS application, we run npm run start:prod to spin up the application in production mode.
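For reference, the Dockerfile for a NestJS app might look something like the sketch below. This is an illustrative example, not a prescribed setup: it assumes a standard nest build that compiles to dist/, so adjust the Node version, port and paths to your own project.

```dockerfile
# Minimal sketch of a NestJS Dockerfile (assumed project layout; adjust as needed)
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci

# Copy the source and compile TypeScript to dist/
COPY . .
RUN npm run build

EXPOSE 8000

# Default command; our compose file overrides it with npm run start:prod
CMD ["node", "dist/main.js"]
```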

Then we set environment variables for the container from the environment variables that will be exported after we SSH into the EC2 instance.

Then we use ports to map port 8000 in the container to port 8000 on the host.
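One detail worth spelling out: the ${ECR_REGISTRY} and ${MONGO_URI} placeholders are not filled in by Docker itself. docker-compose substitutes them from the shell environment at the moment you run it. The same substitution in plain shell, with a placeholder registry value:

```shell
# Placeholder value standing in for the real secret
export ECR_REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"

# This is what docker-compose resolves ${ECR_REGISTRY}/yourecrname:1.0.0 to:
IMAGE="${ECR_REGISTRY}/yourecrname:1.0.0"
echo "$IMAGE"   # 123456789012.dkr.ecr.us-east-1.amazonaws.com/yourecrname:1.0.0
```

As a sanity check, running docker-compose -f docker-compose.yourecrname.prod.yml config on a machine with these variables exported prints the fully resolved file.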

With that said, the next step is to create our GitHub Actions pipeline to automate deployment to EC2. Carefully follow the steps below:

  • On your GitHub repository, click Actions > Skip this and set up a workflow yourself

  • Then paste the contents of the YAML file below into the editor provided.

  • The comment on each step explains what it does.

  • Try to get an idea of what each step does for understanding's sake.

name: Deploy to AWS EC2 # Name of the action

# Specifying the action trigger
on:
    push:
        branches:
            - main

jobs: # Set of operations that will run in a GitHub-hosted virtual environment
    Build_and_push_image: # First job
        runs-on: ubuntu-latest # The OS the job will run on
        steps: # List of operations that will run
            - name: Checkout repository # Name of the first operation
              uses: actions/checkout@v4 # Uses an existing GitHub action to check out the repository (like git checkout branchname)

            - name: Install Docker # Operation to install Docker
              run: | # run executes commands in the virtual environment (here, installing Docker so we can build and push images to our ECR)
                sudo apt-get update
                sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
                curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
                echo "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg.d/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
                sudo apt-get update
                sudo apt-get -y install docker-ce docker-ce-cli containerd.io

            - name: Configure AWS credentials # To push images to ECR we need to configure AWS credentials so we can log in
              uses: aws-actions/configure-aws-credentials@v4 # Uses an existing GitHub action, with credentials from our secrets
              with:
                aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
                aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
                aws-region: ${{ secrets.AWS_REGION }}

            - name: Login to AWS ECR # Logs into ECR with an existing GitHub action after configuring credentials
              id: login-ecr
              uses: aws-actions/amazon-ecr-login@v2

            - name: Build and push Docker image # Builds the image and pushes it to ECR with the docker commands under run
              env:
                ECR_REGISTRY: ${{ secrets.ECR_REGISTRY }}
                IMAGE_TAG: 1.0.0
              run: |
                docker build -t $ECR_REGISTRY/yourecrname:$IMAGE_TAG .
                docker push $ECR_REGISTRY/yourecrname:$IMAGE_TAG

            - name: Copy file via SSH key # We need the docker-compose file on the EC2 instance, so we copy it there ahead of time with an existing scp action
              uses: appleboy/scp-action@v0.1.7 # The GitHub action we are using
              with: # Passing the required credentials from our secrets
                host: ${{ secrets.EC2_HOST }}
                username: ${{ secrets.EC2_USERNAME }}
                key: ${{ secrets.EC2_PRIVATE_KEY }}
                port: 22
                source: "docker-compose.yourecrname.prod.yml"
                target: "/home/ubuntu"

    pull_and_deploy_in_ec2: # After pushing to ECR we need to pull on the EC2 instance and spin up the container, so we define a separate job for it
        runs-on: ubuntu-latest # Specifying the OS for the job to run in
        needs: Build_and_push_image
        steps:
            - name: Deploy to EC2
              uses: appleboy/ssh-action@master
              with:
                host: ${{ secrets.EC2_HOST }}
                username: ${{ secrets.EC2_USERNAME }}
                key: ${{ secrets.EC2_PRIVATE_KEY }}
                script: |
                    # Install the AWS CLI
                    sudo apt-get update
                    sudo apt-get install -y awscli

                    # Configure AWS credentials
                    aws configure set aws_access_key_id ${{ secrets.AWS_ACCESS_KEY_ID }}
                    aws configure set aws_secret_access_key ${{ secrets.AWS_SECRET_ACCESS_KEY }}
                    aws configure set default.region ${{ secrets.AWS_REGION }}

                    # Install Docker inside the remote EC2 instance
                    sudo apt-get update
                    sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
                    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
                    echo "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg.d/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
                    sudo apt-get update
                    sudo apt-get -y install docker-ce docker-ce-cli containerd.io

                    # Authenticate to ECR
                    aws ecr get-login-password --region ${{ secrets.AWS_REGION }} | docker login --username AWS --password-stdin ${{ secrets.ECR_REGISTRY }}

                    # Export the environment variables the compose file substitutes
                    export ECR_REGISTRY=${{ secrets.ECR_REGISTRY }}
                    export MONGO_URI="${{ secrets.MONGO_URI }}" # Quoted in case the URI contains characters the shell would otherwise interpret

                    # Add the SSH user to the docker group
                    sudo usermod -aG docker $USER

                    # Install Docker Compose inside the remote EC2 instance
                    sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
                    sudo chmod +x /usr/local/bin/docker-compose

                    # Clear existing containers and images, scoped by project name so containers from other compose files are untouched
                    docker-compose -f ~/docker-compose.yourecrname.prod.yml -p yourecrname down --rmi all

                    # Run the deployment with the same project name; a future down command will clear the containers and images under this project name
                    docker-compose -f ~/docker-compose.yourecrname.prod.yml -p yourecrname up -d

Afterwards, merge the changes into your main branch and click Actions to check the progress. Once deployed, you can SSH into your EC2 instance from your terminal, then run docker ps to view the running container. You can also run ls to see your docker-compose.yourecrname.prod.yml file.

I hope this helps. Thanks.