
Introduction

This is a modified version of the spring-petclinic-microservices Spring Boot sample application. Our modifications focus on showcasing the capabilities of Application Signals within a Spring Boot environment. If your interest lies in exploring the broader aspects of the Spring Boot stack, we recommend visiting the original repository at spring-petclinic-microservices.

The following sections describe how to set up this sample application to explore the features of Application Signals.

Disclaimer

The code in this sample application is intended for demonstration purposes only. It should not be used in a production environment or in any setting where reliability or security is a concern.

Prerequisite

EKS Demo

Deploy via Shell Scripts

Note that if you want to run the scripts in a shell inside an AWS Cloud9 environment, you need to ensure the environment has sufficient disk space, as building images can consume a lot of space. To increase the disk size to 50 GB, follow the instructions here: Resize Cloud9 Environment. Additionally, you need to disable AWS managed temporary credentials to avoid credential renewal interfering with script execution. Instructions can be found here: Disable AWS managed temporary credentials.

Build the sample application images and push to ECR

  1. Build container images for each microservice application:

     ./mvnw clean install -P buildDocker

  2. Create an ECR repository for each microservice and push the images to the relevant repositories. Replace the AWS account ID and AWS Region with your own:

     export ACCOUNT=`aws sts get-caller-identity | jq .Account -r`
     export REGION='us-east-1'
     ./push-ecr.sh
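As a sketch of how these exports and `push-ecr.sh` fit together (the script itself is authoritative; the sample identity JSON and service name below are only illustrative), the account ID is parsed out of the `get-caller-identity` response and combined with the Region into the ECR image URI:

```shell
# Illustrative sketch only; push-ecr.sh is authoritative.
# The sample JSON stands in for `aws sts get-caller-identity`; the service
# name is one example module from this repo.
IDENTITY='{"UserId":"AIDAEXAMPLE","Account":"123456789012","Arn":"arn:aws:iam::123456789012:user/demo"}'
ACCOUNT=$(echo "$IDENTITY" | jq .Account -r)
REGION='us-east-1'
SERVICE='spring-petclinic-api-gateway'

# ECR image URIs have the form <account>.dkr.ecr.<region>.amazonaws.com/<repo>
ECR_URI="$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$SERVICE"
echo "$ECR_URI"

# Per service, the push script would then roughly run:
#   aws ecr create-repository --repository-name "$SERVICE" --region "$REGION"
#   docker tag "$SERVICE:latest" "$ECR_URI:latest"
#   docker push "$ECR_URI:latest"
```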

Try Application Signals with the sample application

  1. Create an EKS cluster, enable Application Signals, and deploy the sample application to your EKS cluster. Replace new-cluster-name with the name that you want to use for the new cluster, and region-name with the same Region used in the previous section, "Build the sample application images and push to ECR".

     cd scripts/eks/appsignals/one-step && ./setup.sh new-cluster-name region-name

  2. Clean up all the resources. Replace new-cluster-name and region-name with the same values that you used in the previous step.

     cd scripts/eks/appsignals/one-step && ./cleanup.sh new-cluster-name region-name

Please be aware that this sample application includes a publicly accessible Application Load Balancer (ALB), enabling easy interaction with the application. If you perceive this public ALB as a security risk, consider restricting access by employing security groups.
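For example (a sketch with placeholder values; the group ID must be the security group actually attached to the ALB, and the port should match your listener), you could swap the world-open ingress rule for one scoped to a trusted CIDR. The block below only prints the two CLI calls rather than executing them:

```shell
# Placeholders -- substitute the ALB's real security group and your CIDR.
ALB_SG='sg-0123456789abcdef0'
TRUSTED_CIDR='203.0.113.0/24'

# Drop the open rule first, then admit only the trusted range.
revoke_cmd="aws ec2 revoke-security-group-ingress --group-id $ALB_SG --protocol tcp --port 80 --cidr 0.0.0.0/0"
authorize_cmd="aws ec2 authorize-security-group-ingress --group-id $ALB_SG --protocol tcp --port 80 --cidr $TRUSTED_CIDR"

# Printed for review, not run:
echo "$revoke_cmd"
echo "$authorize_cmd"
```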

Deploy via Terraform

  1. Go to the terraform directory under the project. Prepare the Terraform S3 backend and set the required environment variables:

    cd terraform/eks
    
    aws s3 mb s3://tfstate-$(uuidgen | tr A-Z a-z)
    
    export AWS_REGION=us-east-1
    export TFSTATE_KEY=application-signals/demo-applications
    export TFSTATE_BUCKET=$(aws s3 ls --output text | awk '{print $3}' | grep tfstate-)
    export TFSTATE_REGION=$AWS_REGION
  2. Deploy the EKS cluster and an RDS PostgreSQL database.

    export TF_VAR_cluster_name=app-signals-demo
    export TF_VAR_cloudwatch_observability_addon_version=v2.1.0-eksbuild.1
    
    terraform init -backend-config="bucket=${TFSTATE_BUCKET}" -backend-config="key=${TFSTATE_KEY}" -backend-config="region=${TFSTATE_REGION}"
    
    terraform apply --auto-approve

    The deployment takes 20 to 25 minutes.

  3. Build and push Docker images

    cd ../.. 
    
    ./mvnw clean install -P buildDocker
    
    export ACCOUNT=`aws sts get-caller-identity | jq .Account -r`
    export REGION=$AWS_REGION
    
    ./push-ecr.sh
  4. Deploy Kubernetes resources

     Change the cluster name, alias, and region if you configured them differently.

     aws eks update-kubeconfig --name $TF_VAR_cluster_name --kubeconfig ~/.kube/config --region $AWS_REGION --alias $TF_VAR_cluster_name
     ./scripts/eks/appsignals/tf-deploy-k8s-res.sh
    
  5. Create Canaries and SLOs

    endpoint="http://$(kubectl get ingress -o json  --output jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')"
    cd scripts/eks/appsignals/
    ./create-canaries.sh $AWS_REGION create $endpoint
    ./create-slo.sh $TF_VAR_cluster_name $AWS_REGION
  6. Visit Application

    endpoint="http://$(kubectl get ingress -o json  --output jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')"
    
    echo "Visit the following URL to see the sample app running: $endpoint"
  7. Cleanup

     Delete the ALB ingress, SLOs, and canaries before destroying the Terraform stack.

    kubectl delete -f ./scripts/eks/appsignals/sample-app/alb-ingress/petclinic-ingress.yaml
    
    ./cleanup-slo.sh $REGION
    
    ./create-canaries.sh $REGION delete
    
    cd ../../../terraform/eks
    terraform destroy --auto-approve
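The two text-extraction pipelines used in the steps above — recovering the tfstate bucket name from `aws s3 ls` output (step 1) and pulling the ALB hostname out of the ingress JSON (steps 5 and 6) — can be exercised offline on sample output. The bucket name and hostname below are made up, and jq is shown as an equivalent to kubectl's jsonpath:

```shell
# Offline sketch of the extraction pipelines above; the sample strings
# stand in for live `aws s3 ls` / `kubectl get ingress -o json` responses.

# `aws s3 ls` prints "<date> <time> <bucket>", so awk takes field 3:
s3_ls_output='2024-01-15 10:30:00 tfstate-1a2b3c4d-demo'
TFSTATE_BUCKET=$(echo "$s3_ls_output" | awk '{print $3}' | grep tfstate-)
echo "$TFSTATE_BUCKET"

# The ALB hostname lives at .items[0].status.loadBalancer.ingress[0].hostname:
ingress_json='{"items":[{"status":{"loadBalancer":{"ingress":[{"hostname":"k8s-demo-123.us-east-1.elb.amazonaws.com"}]}}}]}'
endpoint="http://$(echo "$ingress_json" | jq -r '.items[0].status.loadBalancer.ingress[0].hostname')"
echo "$endpoint"
```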

EC2 Demo

The following instructions describe how to set up the PetClinic sample application on EC2 instances. You can run these steps in your personal AWS account to follow along (not recommended for production use).

  1. Create resources and deploy the sample app. Replace region-name with the Region you choose.

    cd scripts/ec2/appsignals/ && ./setup-ec2-demo.sh --region=region-name
    
  2. Clean up after you are done with the sample app. Replace region-name with the same value that you used in the previous step.

    cd scripts/ec2/appsignals/ && ./setup-ec2-demo.sh --operation=delete --region=region-name
    

K8s Demo

The following instructions set up a Kubernetes cluster on two EC2 instances (one master and one worker node) with kubeadm and deploy the PetClinic sample application to the cluster. You can run these steps in your personal AWS account to follow along (not recommended for production use).

  1. Build container images and push them to a public ECR repository

    ./mvnw clean install -P buildDocker && ./push-public-ecr.sh
  2. Set up a Kubernetes cluster and deploy the sample app. Replace region-name with the Region you choose.

    cd scripts/k8s/appsignals/ && ./setup-k8s-demo.sh --region=region-name
  3. Clean up after you are done with the sample app. Replace region-name with the same value that you used in the previous step.

    cd scripts/k8s/appsignals/ && ./setup-k8s-demo.sh --operation=delete --region=region-name
    
    
    

ECS Demo

The following instructions set up an ECS cluster with all services running on Fargate. You can run these steps in your personal AWS account to follow along (not recommended for production use).

  1. Build container images and push them to private ECR repositories. Replace region-name with the Region you choose.

    export ACCOUNT=`aws sts get-caller-identity | jq .Account -r`
    export REGION=region-name
    ./mvnw clean install -P buildDocker && ./push-ecr.sh
  2. Set up an ECS cluster and deploy the sample app. Replace region-name with the Region you choose.

    cd scripts/ecs/appsignals && ./setup-ecs-demo.sh --region=region-name
  3. Clean up after you are done with the sample app. Replace region-name with the same value that you used in the previous step.

    cd scripts/ecs/appsignals/ && ./setup-ecs-demo.sh --operation=delete --region=region-name