Automate the creation of a Consul cluster
- We use AWS for hosting our solution.
- For operating our infrastructure we need an HA Consul cluster distributed over at least 3 availability zones.
- The setup and maintenance should be fully automated.
- Consul instances should discover themselves using DNS (see the configuration sketch after this list).
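To illustrate the DNS-based discovery requirement: a Consul server agent can be told to join the cluster through a DNS name that resolves to the server instances (for example a Route 53 record or a load balancer alias). The sketch below is a minimal, assumed server configuration, not the repository's actual one; the domain name, datacenter name, and data directory are placeholders.

```hcl
# Hypothetical Consul server configuration (HCL) showing DNS-based join.
# "consul.internal.example.com" is an assumed DNS name that resolves to
# the Consul server instances.
server           = true
bootstrap_expect = 3                   # matches the 3-AZ, 3-server requirement
datacenter       = "eu-central-1"      # placeholder datacenter name
data_dir         = "/opt/consul/data"  # placeholder data directory

# Keep retrying the join until the DNS name resolves and a server answers.
retry_join = ["consul.internal.example.com"]
```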
Prerequisites:

- You must have Terraform installed on your computer.
- You must have Packer installed on your computer.
- You must have Docker installed on your computer.
- You must have an Amazon Web Services (AWS) account.
Configure AWS access keys as environment variables:
```bash
export AWS_ACCESS_KEY_ID=(access key id)
export AWS_SECRET_ACCESS_KEY=(secret access key)
```
Build the Consul AMI with Packer:

```bash
# cd packer
# packer build consul.json
```
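The packer/consul.json template itself is not shown here. As a rough, assumed sketch of what such a build does (expressed in Packer's HCL2 syntax rather than the repository's JSON), it bakes Consul into a base image and produces an AMI; the base image filter, instance type, and provisioning steps below are all assumptions:

```hcl
# Hypothetical sketch of a Packer build that bakes Consul into an AMI.
source "amazon-ebs" "consul" {
  region        = "eu-central-1"
  instance_type = "t2.micro"
  ssh_username  = "ubuntu"
  ami_name      = "consul-${formatdate("YYYYMMDDhhmmss", timestamp())}"

  # Assumed base image: latest Ubuntu server AMI published by Canonical.
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-*-amd64-server-*"
      virtualization-type = "hvm"
      root-device-type    = "ebs"
    }
    owners      = ["099720109477"] # Canonical
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.consul"]

  # The real template installs and configures Consul here; the exact
  # installation steps are omitted in this sketch.
  provisioner "shell" {
    inline = ["sudo apt-get update -y"]
  }
}
```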
Export the variables used by the Terraform commands below, using the AMI ID printed by the Packer build:

```bash
# export AWS_REGION="eu-central-1"
# export AMI_ID="<ami_id_from_packer_output>"
# export S3_BUCKET="unique-bucket-name"
# export DYNAMODB_TABLE="some-table-name"
```
Create the S3 bucket that will hold the Terraform remote state:

```bash
# cd terraform/global/s3
# terraform init
# terraform plan -var region_name=$AWS_REGION -var bucket_name=$S3_BUCKET
# terraform apply -var region_name=$AWS_REGION -var bucket_name=$S3_BUCKET
```
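The contents of terraform/global/s3 are not shown in this section. Assuming it provisions a versioned bucket for the remote state, a minimal sketch could look like this (resource names are illustrative; the variable names match the -var flags above):

```hcl
variable "region_name" {}
variable "bucket_name" {}

provider "aws" {
  region = var.region_name
}

# Bucket that will hold the Terraform state files.
resource "aws_s3_bucket" "terraform_state" {
  bucket = var.bucket_name

  # Protect the state bucket from accidental deletion.
  lifecycle {
    prevent_destroy = true
  }
}

# Keep old state versions so a broken apply can be rolled back.
# With AWS provider v4+ versioning is a separate resource; older providers
# used an inline versioning block on aws_s3_bucket instead.
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}
```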
Create the DynamoDB table used for state locking:

```bash
# cd terraform/global/dynamodb
# terraform init
# terraform plan -var region_name=$AWS_REGION -var dynamodb_name=$DYNAMODB_TABLE
# terraform apply -var region_name=$AWS_REGION -var dynamodb_name=$DYNAMODB_TABLE
```
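Similarly, a minimal sketch of what terraform/global/dynamodb might contain, assuming it creates the lock table for the S3 backend (Terraform's S3 backend requires a string hash key named LockID; the resource name and billing mode are assumptions):

```hcl
variable "region_name" {}
variable "dynamodb_name" {}

provider "aws" {
  region = var.region_name
}

# Lock table for the Terraform S3 backend; the backend stores its lock
# records under the "LockID" key.
resource "aws_dynamodb_table" "terraform_lock" {
  name         = var.dynamodb_name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```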
Configure the remote state backend for the stacks:

```bash
# make configure-remote-state
```
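The Makefile target is not reproduced here; configuring remote state usually amounts to a backend block like the following, where the state key path is an assumption and the other values mirror the environment variables exported above:

```hcl
terraform {
  backend "s3" {
    bucket         = "unique-bucket-name"                          # value of $S3_BUCKET
    key            = "staging/services/consul/terraform.tfstate"   # assumed state path
    region         = "eu-central-1"                                # value of $AWS_REGION
    dynamodb_table = "some-table-name"                             # value of $DYNAMODB_TABLE
    encrypt        = true
  }
}
```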
Lint the Terraform code:

```bash
# make lint
```
Deploy the Consul cluster to the staging environment:

```bash
# cd terraform/staging/services/consul
# terraform init
# terraform plan -var region=$AWS_REGION -var ami=$AMI_ID
# terraform apply -var region=$AWS_REGION -var ami=$AMI_ID
```
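The contents of terraform/staging/services/consul are not shown in this section. Given the requirements above, a plausible shape is a launch configuration built from the Packer AMI plus an auto-scaling group of three servers spread across availability zones; the sketch below is an assumption about that shape, not the repository's actual code (the variable names match the -var flags above):

```hcl
variable "region" {}
variable "ami" {}

provider "aws" {
  region = var.region
}

# Spread the servers over the availability zones in the region
# (at least three in eu-central-1).
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_launch_configuration" "consul" {
  name_prefix   = "consul-"
  image_id      = var.ami       # AMI produced by the Packer build
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "consul" {
  name_prefix          = "consul-"
  launch_configuration = aws_launch_configuration.consul.name
  availability_zones   = data.aws_availability_zones.available.names
  min_size             = 3
  max_size             = 3
  desired_capacity     = 3

  tag {
    key                 = "Name"
    value               = "consul-server"
    propagate_at_launch = true
  }
}
```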
When you no longer need the environment, destroy it from the same directory:

```bash
terraform destroy -var region=$AWS_REGION -var ami=$AMI_ID
```
Possible future improvements:

- Configure CloudWatch to collect logs from the Consul servers
- Configure CloudWatch to monitor the Consul servers' load
- Implement dynamic scaling for the auto-scaling group (see the sketch after this list)
- Migrate to spot instances to save money
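For the dynamic scaling item, one option is a target-tracking policy attached to the auto-scaling group. The sketch below is only an illustration: the resource names and target value are assumptions, and growing a Consul server cluster dynamically also requires care around quorum size.

```hcl
# Hypothetical target-tracking scaling policy for the Consul auto-scaling
# group; "aws_autoscaling_group.consul" is an assumed resource name.
resource "aws_autoscaling_policy" "consul_cpu" {
  name                   = "consul-cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.consul.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    # Assumed target: keep average CPU around 60%.
    target_value = 60
  }
}
```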
Estimated monthly AWS cost:

| Resource | Number | Details | Monthly cost |
|----------|--------|---------|--------------|
| Classic Load Balancer (CLB) | 1 | 10 GB of processed bytes per month | 21.98 USD |
| EC2 t2.micro Linux instances (consistent workload) | 3 | Amazon Elastic Block Store: 30 GB General Purpose SSD (gp2) | 31.73 USD |

Total: approximately 53.71 USD per month.