DevOps & Deployment

AWS Deployment Strategies

Deploy applications to AWS using EC2, ECS, Lambda, and other AWS services with best practices. Master cloud deployment strategies for scalable applications.

TechDevDex Team
12/1/2024
28 min
#AWS #Cloud Deployment #EC2 #ECS #Lambda #DevOps #Scalability

AWS Deployment Strategies

Amazon Web Services (AWS) provides a comprehensive suite of services for deploying and managing applications in the cloud. This guide covers various deployment strategies, from simple EC2 instances to serverless architectures.

AWS Deployment Overview

Core Services

Compute Services

  • EC2: Virtual servers in the cloud
  • ECS: Container orchestration service
  • EKS: Managed Kubernetes service
  • Lambda: Serverless compute service
  • Fargate: Serverless containers

Storage Services

  • S3: Object storage service
  • EBS: Block storage for EC2
  • EFS: Managed NFS file system for shared Linux workloads
  • FSx: Managed third-party file systems (Windows File Server, Lustre, NetApp ONTAP, OpenZFS)

Database Services

  • RDS: Managed relational databases
  • DynamoDB: NoSQL database service
  • ElastiCache: In-memory caching
  • Redshift: Data warehouse service

Deployment Strategies

Traditional Deployment

  • EC2 Instances: Virtual machines
  • Load Balancers: Application Load Balancer (ALB) or Network Load Balancer (NLB)
  • Auto Scaling Groups: Automatic scaling
  • VPC: Virtual private cloud

Container Deployment

  • ECS: Container orchestration
  • EKS: Managed Kubernetes
  • Fargate: Serverless containers
  • ECR: Container registry

Serverless Deployment

  • Lambda: Function-as-a-Service
  • API Gateway: API management
  • S3: Static website hosting
  • CloudFront: Content delivery network

EC2 Deployment

Launching EC2 Instances

Basic EC2 Setup

bash
# Create key pair
aws ec2 create-key-pair --key-name my-key-pair --query 'KeyMaterial' --output text > my-key-pair.pem
chmod 400 my-key-pair.pem

# Launch instance
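# Note: AMI IDs are region-specific; ami-0c02fb55956c7d316 is the Amazon Linux 2 AMI in us-east-1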
aws ec2 run-instances \
  --image-id ami-0c02fb55956c7d316 \
  --count 1 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --security-group-ids sg-12345678 \
  --subnet-id subnet-12345678

User Data Script

bash
#!/bin/bash
yum update -y
yum install -y docker
systemctl start docker
systemctl enable docker
usermod -a -G docker ec2-user

Auto Scaling Groups

Launch Template

json
{
  "LaunchTemplateName": "my-launch-template",
  "LaunchTemplateData": {
    "ImageId": "ami-0c02fb55956c7d316",
    "InstanceType": "t2.micro",
    "KeyName": "my-key-pair",
    "SecurityGroupIds": ["sg-12345678"],
    "UserData": "IyEvYmluL2Jhc2gKeXVtIHVwZGF0ZSAtcQ=="
  }
}
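
The template can be created straight from this JSON with the AWS CLI; launch-template.json below is an assumed filename.

bash
# Create the launch template from the JSON above
aws ec2 create-launch-template --cli-input-json file://launch-template.json

# Confirm it exists
aws ec2 describe-launch-templates --launch-template-names my-launch-template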

Auto Scaling Group

json
{
  "AutoScalingGroupName": "my-asg",
  "LaunchTemplate": {
    "LaunchTemplateName": "my-launch-template",
    "Version": "$Latest"
  },
  "MinSize": 1,
  "MaxSize": 10,
  "DesiredCapacity": 2,
  "VPCZoneIdentifier": "subnet-12345678,subnet-87654321"
}
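
A minimal sketch of creating the group and attaching a target-tracking scaling policy; asg.json is an assumed filename holding the JSON above, and the 50% CPU target is an example value.

bash
# Create the Auto Scaling group
aws autoscaling create-auto-scaling-group --cli-input-json file://asg.json

# Scale on average CPU utilization
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'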

Load Balancing

Application Load Balancer

json
{
  "LoadBalancerName": "my-alb",
  "Scheme": "internet-facing",
  "Type": "application",
  "Subnets": ["subnet-12345678", "subnet-87654321"],
  "SecurityGroups": ["sg-12345678"]
}

Target Group

json
{
  "Name": "my-target-group",
  "Protocol": "HTTP",
  "Port": 80,
  "VpcId": "vpc-12345678",
  "HealthCheckPath": "/health",
  "HealthCheckIntervalSeconds": 30,
  "HealthCheckTimeoutSeconds": 5,
  "HealthyThresholdCount": 2,
  "UnhealthyThresholdCount": 3
}
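
The load balancer and target group still need a listener and a link to the Auto Scaling group. A sketch of that wiring with the CLI; the ARN placeholders are values returned by the create commands.

bash
# Create the ALB and target group (values mirror the JSON configs above)
aws elbv2 create-load-balancer \
  --name my-alb --scheme internet-facing --type application \
  --subnets subnet-12345678 subnet-87654321 \
  --security-groups sg-12345678

aws elbv2 create-target-group \
  --name my-target-group --protocol HTTP --port 80 \
  --vpc-id vpc-12345678 --health-check-path /health

# Forward HTTP :80 to the target group (replace the placeholder ARNs)
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>

# Register Auto Scaling group instances as targets
aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name my-asg \
  --target-group-arns <target-group-arn>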

ECS Deployment

ECS Cluster Setup

Cluster Configuration

json
{
  "clusterName": "my-cluster",
  "capacityProviders": ["FARGATE", "FARGATE_SPOT"],
  "defaultCapacityProviderStrategy": [
    {
      "capacityProvider": "FARGATE",
      "weight": 1
    }
  ]
}
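
Assuming the configuration is saved as cluster.json (an assumed filename), the cluster can be created directly from it.

bash
# Create the Fargate-backed cluster from the JSON above
aws ecs create-cluster --cli-input-json file://cluster.json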

Task Definition

json
{
  "family": "my-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-container",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:latest",
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-task",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
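
A short sketch of registering this task definition with the CLI; task-definition.json is an assumed filename.

bash
# Register the task definition (each registration creates a new revision)
aws ecs register-task-definition --cli-input-json file://task-definition.json

# List revisions of the family
aws ecs list-task-definitions --family-prefix my-task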

Service Definition

json
{
  "serviceName": "my-service",
  "cluster": "my-cluster",
  "taskDefinition": "my-task",
  "desiredCount": 2,
  "launchType": "FARGATE",
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-12345678", "subnet-87654321"],
      "securityGroups": ["sg-12345678"],
      "assignPublicIp": "ENABLED"
    }
  },
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-target-group/1234567890123456",
      "containerName": "my-container",
      "containerPort": 80
    }
  ]
}
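
Creating the service and rolling out a new image later are both CLI calls; service.json is an assumed filename for the JSON above.

bash
# Create the service behind the ALB target group
aws ecs create-service --cli-input-json file://service.json

# After registering a new task definition revision, roll it out
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --task-definition my-task \
  --force-new-deployment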

Lambda Deployment

Serverless Functions

Basic Lambda Function

python
import json
import boto3

def lambda_handler(event, context):
    # Process the event
    print(f"Received event: {json.dumps(event)}")
    
    # Your business logic here
    response = {
        'statusCode': 200,
        'body': json.dumps({
            'message': 'Hello from Lambda!',
            'event': event
        })
    }
    
    return response
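
One way to package and deploy this handler with the CLI, assuming the code lives in handler.py and an execution role already exists (the role ARN below is a placeholder):

bash
# Zip the handler and create the function
zip function.zip handler.py

aws lambda create-function \
  --function-name my-function \
  --runtime python3.9 \
  --handler handler.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-execution-role

# Invoke it with a test event
aws lambda invoke \
  --function-name my-function \
  --cli-binary-format raw-in-base64-out \
  --payload '{"key": "value"}' \
  response.json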

Lambda with API Gateway

yaml
# serverless.yml
service: my-serverless-app

provider:
  name: aws
  runtime: python3.9
  region: us-west-2
  environment:
    STAGE: ${opt:stage, 'dev'}

functions:
  hello:
    handler: handler.lambda_handler
    events:
      - http:
          path: /hello
          method: get
          cors: true
      - http:
          path: /hello
          method: post
          cors: true

plugins:
  - serverless-python-requirements
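
With the Serverless Framework and the plugin installed, deployment and log tailing are single commands.

bash
# Install the framework and the Python requirements plugin, then deploy
npm install -g serverless
serverless plugin install -n serverless-python-requirements
serverless deploy --stage dev

# Tail the deployed function's logs
serverless logs -f hello --tail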

Lambda Layers

bash
# Create layer
mkdir python
pip install requests -t python/
zip -r requests-layer.zip python/

# Upload layer
aws lambda publish-layer-version \
  --layer-name requests \
  --zip-file fileb://requests-layer.zip \
  --compatible-runtimes python3.9
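
The published layer version can then be attached to a function; the layer version ARN below is a placeholder for the value returned by publish-layer-version.

bash
# Attach the layer to an existing function
aws lambda update-function-configuration \
  --function-name my-function \
  --layers arn:aws:lambda:us-west-2:123456789012:layer:requests:1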

Database Deployment

RDS Setup

RDS Instance

json
{
  "DBInstanceIdentifier": "my-database",
  "DBInstanceClass": "db.t3.micro",
  "Engine": "postgres",
  "MasterUsername": "admin",
  "MasterUserPassword": "password123",
  "AllocatedStorage": 20,
  "VpcSecurityGroupIds": ["sg-12345678"],
  "DBSubnetGroupName": "my-db-subnet-group",
  "BackupRetentionPeriod": 7,
  "MultiAZ": false,
  "PubliclyAccessible": false
}
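
A sketch of creating the instance from this configuration; db-instance.json is an assumed filename, and in a real deployment the master password should come from AWS Secrets Manager rather than being hard-coded.

bash
# Create the instance and wait until it is available
aws rds create-db-instance --cli-input-json file://db-instance.json
aws rds wait db-instance-available --db-instance-identifier my-database

# Fetch the endpoint for application configuration
aws rds describe-db-instances \
  --db-instance-identifier my-database \
  --query 'DBInstances[0].Endpoint.Address' \
  --output text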

RDS Proxy

json
{
  "DBProxyName": "my-db-proxy",
  "EngineFamily": "POSTGRESQL",
  "Auth": [
    {
      "AuthScheme": "SECRETS",
      "SecretArn": "arn:aws:secretsmanager:us-west-2:123456789012:secret:my-db-secret"
    }
  ],
  "RoleArn": "arn:aws:iam::123456789012:role/rds-proxy-role",
  "VpcSubnetIds": ["subnet-12345678", "subnet-87654321"],
  "TargetGroupName": "my-target-group"
}
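
Database targets are registered on the proxy after it is created; a short sketch with the CLI, where db-proxy.json is an assumed filename for the JSON above.

bash
# Create the proxy, then point it at the RDS instance
aws rds create-db-proxy --cli-input-json file://db-proxy.json
aws rds register-db-proxy-targets \
  --db-proxy-name my-db-proxy \
  --db-instance-identifiers my-database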

DynamoDB Setup

Table Creation

json
{
  "TableName": "my-table",
  "KeySchema": [
    {
      "AttributeName": "id",
      "KeyType": "HASH"
    }
  ],
  "AttributeDefinitions": [
    {
      "AttributeName": "id",
      "AttributeType": "S"
    }
  ],
  "BillingMode": "PAY_PER_REQUEST"
}
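
Creating the table and exercising it with a sample item; table.json is an assumed filename.

bash
# Create the table and wait for it to become ACTIVE
aws dynamodb create-table --cli-input-json file://table.json
aws dynamodb wait table-exists --table-name my-table

# Write and read a sample item
aws dynamodb put-item \
  --table-name my-table \
  --item '{"id": {"S": "user-1"}, "name": {"S": "Alice"}}'

aws dynamodb get-item \
  --table-name my-table \
  --key '{"id": {"S": "user-1"}}'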

CI/CD with AWS

CodePipeline Setup

Pipeline Configuration

json
{
  "pipeline": {
    "name": "my-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",
    "artifactStore": {
      "type": "S3",
      "location": "my-pipeline-artifacts"
    },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "SourceAction",
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "provider": "S3",
              "version": "1"
            },
            "configuration": {
              "S3Bucket": "my-source-bucket",
              "S3ObjectKey": "source.zip"
            },
            "outputArtifacts": [
              {
                "name": "SourceOutput"
              }
            ]
          }
        ]
      },
      {
        "name": "Build",
        "actions": [
          {
            "name": "BuildAction",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "configuration": {
              "ProjectName": "my-build-project"
            },
            "inputArtifacts": [
              {
                "name": "SourceOutput"
              }
            ],
            "outputArtifacts": [
              {
                "name": "BuildOutput"
              }
            ]
          }
        ]
      }
    ]
  }
}
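
The definition is passed to the CLI as-is (the top-level "pipeline" key is expected); pipeline.json is an assumed filename.

bash
# Create the pipeline, then trigger and inspect an execution
aws codepipeline create-pipeline --cli-input-json file://pipeline.json
aws codepipeline start-pipeline-execution --name my-pipeline
aws codepipeline get-pipeline-state --name my-pipeline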

CodeBuild Configuration

Buildspec

yaml
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - REPOSITORY_URI=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $REPOSITORY_URI:$IMAGE_TAG
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $REPOSITORY_URI:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - docker push $REPOSITORY_URI:latest
      - echo Writing image definitions file...
      - printf '[{"name":"%s","imageUri":"%s"}]' $CONTAINER_NAME $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
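
The buildspec relies on environment variables that CodeBuild does not set automatically (AWS_ACCOUNT_ID, IMAGE_REPO_NAME, CONTAINER_NAME); they are normally defined on the build project or overridden per build, as in this sketch with placeholder values.

bash
# Start a build, overriding the variables the buildspec expects
aws codebuild start-build \
  --project-name my-build-project \
  --environment-variables-override \
    name=AWS_ACCOUNT_ID,value=123456789012,type=PLAINTEXT \
    name=IMAGE_REPO_NAME,value=my-app,type=PLAINTEXT \
    name=CONTAINER_NAME,value=my-container,type=PLAINTEXT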

Monitoring and Logging

CloudWatch Setup

Log Groups

json
{
  "logGroupName": "/aws/lambda/my-function",
  "retentionInDays": 14
}
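
Both settings map directly to CLI calls.

bash
# Create the log group and apply the retention policy shown above
aws logs create-log-group --log-group-name /aws/lambda/my-function
aws logs put-retention-policy \
  --log-group-name /aws/lambda/my-function \
  --retention-in-days 14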

CloudWatch Alarms

json
{
  "AlarmName": "High-CPU-Utilization",
  "ComparisonOperator": "GreaterThanThreshold",
  "EvaluationPeriods": 2,
  "MetricName": "CPUUtilization",
  "Namespace": "AWS/EC2",
  "Period": 300,
  "Statistic": "Average",
  "Threshold": 80.0,
  "ActionsEnabled": true,
  "AlarmActions": [
    "arn:aws:sns:us-west-2:123456789012:my-topic"
  ]
}
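
The alarm can be created from the JSON (alarm.json is an assumed filename); for EC2 metrics a Dimensions entry is normally added as well, for example scoping the alarm to the Auto Scaling group as in the flag-based form below.

bash
# Create or update the alarm from the JSON above
aws cloudwatch put-metric-alarm --cli-input-json file://alarm.json

# Equivalent flag-based form, scoped to the Auto Scaling group
aws cloudwatch put-metric-alarm \
  --alarm-name High-CPU-Utilization \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=my-asg \
  --statistic Average --period 300 \
  --evaluation-periods 2 --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-west-2:123456789012:my-topic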

X-Ray Tracing

python
import json
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# Patch supported libraries (boto3, requests, etc.) so their calls are traced
patch_all()

@xray_recorder.capture('my_function')
def lambda_handler(event, context):
    # Your function logic here
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
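
For traces to appear, active tracing must be enabled on the function and its execution role needs X-Ray write permissions (e.g. the AWSXRayDaemonWriteAccess managed policy); the aws-xray-sdk package must also be bundled with the deployment artifact.

bash
# Enable active tracing so Lambda sends segments to X-Ray
aws lambda update-function-configuration \
  --function-name my-function \
  --tracing-config Mode=Active

# Bundle the SDK with the deployment package
pip install aws-xray-sdk -t package/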

Security Best Practices

IAM Roles and Policies

EC2 Role

json
{
  "RoleName": "EC2-S3-Role",
  "AssumeRolePolicyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "Service": "ec2.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
      }
    ]
  },
  "Policies": [
    {
      "PolicyName": "S3ReadOnly",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "s3:GetObject",
              "s3:ListBucket"
            ],
            "Resource": [
              "arn:aws:s3:::my-bucket",
              "arn:aws:s3:::my-bucket/*"
            ]
          }
        ]
      }
    }
  ]
}
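
Creating the role from this definition takes a few CLI steps, since EC2 attaches roles through an instance profile; trust-policy.json and s3-readonly.json are assumed filenames holding the AssumeRolePolicyDocument and PolicyDocument sections above.

bash
# Create the role with the EC2 trust policy
aws iam create-role \
  --role-name EC2-S3-Role \
  --assume-role-policy-document file://trust-policy.json

# Attach the inline S3 read-only policy
aws iam put-role-policy \
  --role-name EC2-S3-Role \
  --policy-name S3ReadOnly \
  --policy-document file://s3-readonly.json

# Wrap the role in an instance profile for EC2
aws iam create-instance-profile --instance-profile-name EC2-S3-Role
aws iam add-role-to-instance-profile \
  --instance-profile-name EC2-S3-Role \
  --role-name EC2-S3-Role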

VPC Security

Security Groups

json
{
  "GroupName": "web-servers",
  "Description": "Security group for web servers",
  "VpcId": "vpc-12345678",
  "SecurityGroupRules": [
    {
      "IpPermissions": [
        {
          "IpProtocol": "tcp",
          "FromPort": 80,
          "ToPort": 80,
          "IpRanges": [
            {
              "CidrIp": "0.0.0.0/0",
              "Description": "HTTP access from anywhere"
            }
          ]
        },
        {
          "IpProtocol": "tcp",
          "FromPort": 443,
          "ToPort": 443,
          "IpRanges": [
            {
              "CidrIp": "0.0.0.0/0",
              "Description": "HTTPS access from anywhere"
            }
          ]
        }
      ]
    }
  ]
}
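
The same rules expressed as CLI calls: create the group, then authorize each ingress rule (the group ID placeholder is returned by the create call).

bash
# Create the group, then open HTTP and HTTPS to the world
aws ec2 create-security-group \
  --group-name web-servers \
  --description "Security group for web servers" \
  --vpc-id vpc-12345678

aws ec2 authorize-security-group-ingress \
  --group-id <sg-id> --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id <sg-id> --protocol tcp --port 443 --cidr 0.0.0.0/0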

Cost Optimization

Reserved Instances

bash
# Purchase reserved instances
aws ec2 purchase-reserved-instances-offering \
  --reserved-instances-offering-id 12345678-1234-1234-1234-123456789012 \
  --instance-count 1

Spot Instances

json
{
  "SpotPrice": "0.05",
  "InstanceCount": 1,
  "Type": "one-time",
  "ValidFrom": "2024-01-01T00:00:00Z",
  "ValidUntil": "2024-12-31T23:59:59Z",
  "LaunchSpecification": {
    "ImageId": "ami-0c02fb55956c7d316",
    "InstanceType": "t2.micro",
    "KeyName": "my-key-pair",
    "SecurityGroups": ["sg-12345678"]
  }
}

Conclusion

AWS provides a comprehensive platform for deploying applications at any scale. By understanding the various services and deployment strategies, you can choose the right approach for your specific needs, from simple EC2 deployments to complex serverless architectures.

The key to successful AWS deployment is starting with your requirements and choosing the appropriate services and patterns. With proper planning and implementation, AWS can provide a robust, scalable, and cost-effective platform for your applications.