AWS Automated Deployment With CI/CD Pipeline

by Pedro Alvarez

Hey guys! As a DevOps engineer, I've found that automating deployments is a game-changer. This guide walks through setting up a CI/CD pipeline that automatically deploys successful builds to AWS, so releases happen smoothly without any manual intervention. We're talking zero-downtime deployments and easy rollbacks. Let's get started!

Acceptance Criteria

Before we jump into the technical stuff, let’s define what we want to achieve. Our automated deployment pipeline needs to meet the following criteria:

  • Deployment triggered on the main branch: Any push to the main branch should kick off the deployment process.
  • Only deploys if all tests pass: We want to ensure our code is solid before it goes live, so deployments should only proceed if all tests pass.
  • Zero-downtime deployment: No one likes downtime! Our deployment strategy must ensure that the application remains available during the deployment process.
  • Rollback capability: If something goes wrong, we need a quick and easy way to roll back to the previous version.

Deployment triggered on main branch

To trigger deployments automatically, we configure our CI/CD system to listen for push events on the main branch. Every merge or direct commit to main then starts a new deployment cycle, keeping production up to date with the latest changes and giving us rapid feedback: code is built, tested, and deployed in one streamlined process.

We'll configure our GitHub Actions workflow to monitor the main branch specifically, so no manual intervention is needed to start a deployment after code is integrated. This saves time and reduces the risk of human error, because the deployment process is standardized and consistently executed. Tying deployments directly to the main branch also leaves a clear, auditable trail of changes, which makes releases easier to track and manage. We'll dive into the specifics of our .github/workflows/deploy.yml file shortly, including how we instruct GitHub Actions to watch for push events on main; that trigger is the cornerstone of the whole automated deployment strategy.

Only deploys if all tests pass

Deploying only tested code is a crucial part of a robust CI/CD pipeline: we want to prevent code that could introduce bugs or instability from reaching production. We achieve this by running a testing phase before the deployment phase; if any test fails, the pipeline stops and the flawed code is never deployed.

Our tests typically include unit tests, integration tests, and sometimes end-to-end tests, depending on the complexity and needs of the project. Unit tests verify individual components, integration tests check that different parts of the application work together correctly, and end-to-end tests simulate user interactions in a production-like environment. Making tests a mandatory gate doesn't just catch bugs early; it builds the confidence to deploy more frequently and with less risk, which is a key principle of continuous delivery. In our GitHub Actions workflow, this gate is the test job: the deploy job only proceeds if the test suite succeeds. We'll see exactly how this is wired up in the technical implementation section.

Zero-downtime deployment

Zero-downtime deployment is a top priority for any modern application: users expect continuous availability, and downtime can mean lost revenue, a damaged reputation, and frustrated users. Instead of taking the whole application offline to push an update (the traditional, disruptive approach), we use a rolling strategy: the new version is deployed alongside the old one, and traffic is gradually shifted over once the new version is up and healthy.

Three components make this work. First, a health check endpoint lets us verify that the application is ready to serve traffic; it is polled during the deployment so the new version only receives traffic once it is confirmed healthy. Second, a load balancer distributes traffic across multiple instances, so we can update instances one at a time without affecting overall availability. Third, we keep the old and new versions backward compatible, so either version can handle requests during the transition. The technical implementation section details how we use scp and ssh to ship new JAR files and restart the Spring Boot application with minimal interruption. Zero downtime isn't just a technical requirement; it's a business imperative.
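To make the health-check idea concrete, here is a minimal sketch of a gate we could run on the EC2 host right after a restart. It assumes Spring Boot Actuator is enabled and the application listens on port 8080; neither appears in the workflow later in this post, so treat the endpoint and port as placeholders for your own setup.

#!/bin/bash
# Hypothetical health gate run on the EC2 host after a restart.
# Assumes Spring Boot Actuator is enabled and the app listens on port 8080.
for attempt in $(seq 1 30); do
  if curl -sf http://localhost:8080/actuator/health | grep -q '"status":"UP"'; then
    echo "New version is healthy (attempt $attempt)"
    exit 0
  fi
  sleep 5
done
echo "Application did not become healthy within 150 seconds" >&2
exit 1

If the loop exits non-zero, the deployment step fails and the pipeline surfaces the problem instead of quietly leaving an unhealthy version in place.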

Rollback capability

Even with rigorous testing and careful deployment procedures, issues can still surface in production, so a rollback capability is essential for addressing problems quickly and minimizing downtime. A rollback lets us revert to the previous working version if a new deployment misbehaves; it is the safety net that keeps a bad release from turning into a prolonged outage.

Our strategy is to keep the previous version readily available, either by holding on to the previous deployment artifact or by reverting to the previous commit in version control. In our implementation we keep the previous JAR file on the server and have a simple mechanism to switch back to it if necessary. The rollback itself should be quick, easy, and as automated as possible, since it will typically be executed under pressure; scripting it removes most of the opportunity for human error. Finally, test your rollbacks just as you test your deployments: practicing them regularly ensures the process is well understood and the team can execute it when it matters. We'll see how our scripts are structured to support this in the technical implementation section.
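As a preview, here is a minimal sketch of what such a rollback helper could look like on the EC2 instance. It assumes the deploy step copies the outgoing version to /opt/app/app-previous.jar (a variation of the deploy commands that does this is sketched after the workflow breakdown below) and that the application runs as the springboot systemd service; adjust paths and the service name to your environment.

#!/bin/bash
# rollback.sh: hypothetical rollback helper kept on the EC2 instance.
# Assumes the deploy step saves the outgoing version as /opt/app/app-previous.jar
# and that the application runs as the "springboot" systemd service.
set -euo pipefail

if [ ! -f /opt/app/app-previous.jar ]; then
  echo "No previous version found at /opt/app/app-previous.jar; cannot roll back" >&2
  exit 1
fi

sudo cp /opt/app/app-previous.jar /opt/app/app.jar
sudo systemctl restart springboot
echo "Rolled back to the previous version"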

Technical Implementation

Alright, let's get into the nitty-gritty! Here's how we'll implement our CI/CD pipeline using GitHub Actions and AWS.

1. GitHub Actions Deployment

GitHub Actions is our weapon of choice for automating our workflow. We'll create a deploy.yml file in the .github/workflows directory of our repository. This file will define our deployment pipeline.

# .github/workflows/deploy.yml
name: Deploy to AWS

on:
  push:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run tests
        run: mvn clean test
  
  deploy:
    needs: test
    runs-on: ubuntu-latest
    if: success()
    
    steps:
    - uses: actions/checkout@v3
    
    - name: Build application
      run: mvn clean package -DskipTests
    
    - name: Deploy to AWS
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        EC2_HOST: ${{ secrets.EC2_HOST }}
        EC2_USER: ${{ secrets.EC2_USER }}
        PRIVATE_KEY: ${{ secrets.EC2_PRIVATE_KEY }}
      run: |
        echo "$PRIVATE_KEY" > private_key.pem
        chmod 600 private_key.pem
        
        # Copy JAR to EC2
        scp -i private_key.pem -o StrictHostKeyChecking=no \
          target/*.jar ${EC2_USER}@${EC2_HOST}:/opt/app/app-new.jar
        
        # Deploy with zero downtime
        ssh -i private_key.pem -o StrictHostKeyChecking=no \
          ${EC2_USER}@${EC2_HOST} << 'EOF'
          sudo mv /opt/app/app-new.jar /opt/app/app.jar
          sudo systemctl restart springboot
        EOF

This YAML file defines our workflow. Let's break it down:

  • name: The name of our workflow, which will appear in the GitHub Actions UI.
  • on: Specifies the events that trigger the workflow. In our case, it's a push to the main branch.
  • jobs: Defines the jobs that make up our workflow. We have two jobs: test and deploy.
    • test: Runs our tests using Maven.
      • runs-on: Specifies the runner environment (Ubuntu in this case).
      • steps: Defines the steps to be executed.
        • uses: actions/checkout@v3: Checks out our code.
        • name: Run tests: A descriptive name for the step.
        • run: mvn clean test: Executes our tests using Maven.
    • deploy: Deploys our application to AWS.
      • needs: test: Specifies that this job depends on the test job. It will only run if the test job succeeds.
      • runs-on: Specifies the runner environment.
      • if: success(): Ensures that this job only runs if the previous jobs were successful.
      • steps: Defines the deployment steps.
        • uses: actions/checkout@v3: Checks out our code.
        • name: Build application: Builds our application using Maven.
        • run: mvn clean package -DskipTests: Builds our application and skips tests (since they were already run in the test job).
        • name: Deploy to AWS: Deploys our application to AWS.
          • env: Defines environment variables that will be used in the step.
            • AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY: AWS credentials for accessing our AWS resources.
            • EC2_HOST: The hostname or IP address of our EC2 instance.
            • EC2_USER: The username for connecting to our EC2 instance.
            • PRIVATE_KEY: The private key for SSH access to our EC2 instance.
          • run: The commands to be executed for deployment.
            • echo "$PRIVATE_KEY" > private_key.pem: Writes the private key to a file.
            • chmod 600 private_key.pem: Sets the correct permissions on the private key file.
            • scp -i private_key.pem -o StrictHostKeyChecking=no target/*.jar ${EC2_USER}@${EC2_HOST}:/opt/app/app-new.jar: Copies the JAR file to our EC2 instance.
            • ssh -i private_key.pem -o StrictHostKeyChecking=no ${EC2_USER}@${EC2_HOST} << 'EOF' ... EOF: Executes commands on our EC2 instance.
              • sudo mv /opt/app/app-new.jar /opt/app/app.jar: Moves the new JAR file to the active application path.
              • sudo systemctl restart springboot: Restarts the Spring Boot application.

This workflow first runs the tests. If the tests pass, it builds the application and then deploys it to our EC2 instance. The deployment process involves copying the JAR file to the EC2 instance and then restarting the Spring Boot application. This is a basic example, and you might need to adjust it based on your specific needs.
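One adjustment worth considering for the rollback story described earlier: have the remote commands keep the outgoing JAR before swapping in the new one, so the rollback.sh sketch from the acceptance criteria section has something to restore. Here is a hedged variation of the remote portion of the deploy step, under the same assumptions as the workflow above (/opt/app as the install directory and a springboot systemd unit):

# Variation of the remote deploy commands that preserves the running version
ssh -i private_key.pem -o StrictHostKeyChecking=no \
  ${EC2_USER}@${EC2_HOST} << 'EOF'
  # Keep the current JAR so rollback.sh can restore it if needed
  if [ -f /opt/app/app.jar ]; then
    sudo cp /opt/app/app.jar /opt/app/app-previous.jar
  fi
  sudo mv /opt/app/app-new.jar /opt/app/app.jar
  sudo systemctl restart springboot
EOF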

2. Configure GitHub Secrets

Security is paramount, so we'll store our sensitive information as GitHub Secrets. These secrets are encrypted and only accessible within our workflow.

We'll need to configure the following secrets:

  • AWS_ACCESS_KEY_ID: Your AWS access key ID.
  • AWS_SECRET_ACCESS_KEY: Your AWS secret access key.
  • EC2_HOST: The hostname or IP address of your EC2 instance.
  • EC2_USER: The username for connecting to your EC2 instance.
  • EC2_PRIVATE_KEY: The private key for SSH access to your EC2 instance.

To add these secrets, go to your repository on GitHub, click on Settings > Secrets and variables > Actions, and add each value as a new repository secret.
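If you prefer the command line, the GitHub CLI can set the same secrets from the repository root. The values below are placeholders, so substitute your own credentials, host, user, and key path.

# Hypothetical example using the GitHub CLI (gh); the web UI works just as well.
gh secret set AWS_ACCESS_KEY_ID --body "your-access-key-id"
gh secret set AWS_SECRET_ACCESS_KEY --body "your-secret-access-key"
gh secret set EC2_HOST --body "ec2-203-0-113-10.compute-1.amazonaws.com"
gh secret set EC2_USER --body "ubuntu"
gh secret set EC2_PRIVATE_KEY < ~/.ssh/ec2_deploy_key.pem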