Django Rest Framework and Celery using Docker

Django Rest Framework, Celery, Redis and PostgreSQL, with sensible defaults, running in Docker containers, with AWS config and helpers.

Dev

We use Docker and docker-compose in development to package and deploy our application.

To decrypt the env vars in env.dev and start the dev environment, execute ./dev.sh. To see which containers are running, run docker ps. To stop all containers, run docker-compose stop.

To blow containers away and build them from scratch, use docker-compose rm then ./dev.sh.
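Putting that together, a full rebuild from scratch looks like this:

# stop and remove the existing containers, then rebuild and start them
docker-compose stop
docker-compose rm
./dev.sh
# confirm everything is up
docker ps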

Prerequisites

SSH into the db container, docker exec -it db /bin/bash, then run psql -U postgres to log into the database. Run the commands in citext.sql.
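If you'd rather not open an interactive shell, something like the following should be equivalent (an assumption: citext.sql enables the citext extension for case-insensitive text columns; check the file for the exact statements):

# assumption: citext.sql enables the citext extension; verify against the actual file
docker exec db psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS citext;"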

SSH into the api container, docker exec -it api /bin/ash. Source the env vars by running source .env. Then create and run the initial migrations.

python manage.py makemigrations users main
python manage.py migrate users
python manage.py migrate main
python manage.py migrate

Logs

PostgreSQL: open /var/lib/postgresql/data/postgresql.conf in the db container (also visible on the host at dbdata/postgresql.conf) and add the following:

logging_collector = 'on'
log_statement = 'all'
log_line_prefix = '%t'

Then restart the db container, docker-compose restart db. Tail logs like this:

less +F dbdata/pg_log/`ls -1 dbdata/pg_log/ | tail -1`

API: docker-compose logs -f api.

Celery: Go to http://localhost:5555/tasks, log in with (CUSTOM_FLOWER_USERNAME, CUSTOM_FLOWER_PASSWORD).
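Flower also exposes an HTTP API behind the same credentials, which is handy for quick checks from the shell (a sketch, assuming Flower's standard API routes; source .env first so the credentials are set):

# list recent Celery tasks via Flower's HTTP API, using the same basic-auth credentials
curl -u "$CUSTOM_FLOWER_USERNAME:$CUSTOM_FLOWER_PASSWORD" http://localhost:5555/api/tasks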

Deploy

For devs only. Run ./build.sh. This will decrypt production env vars and build the backend_base and nginx images.

Then run ./deploy.sh <image_name>, and finish from the AWS Elastic Beanstalk console.

Rollback

Run ./retag_latest.sh <image_name> <tag> to tag an (old) image with the latest tag, then deploy from the AWS Elastic Beanstalk console. All images are tagged with the git hash of the source repo commit used to build them. These git hashes also live in the appversion table.
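For example, to roll the backend image back to an earlier build (the git hash here is hypothetical):

# retag the backend image built from commit abc1234 as latest (hash is a placeholder)
./retag_latest.sh backend_base abc1234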

Manual deploy

Tag and push these images to ECR, then deploy from the AWS Elastic Beanstalk console.

# log in (get-login prints a docker login command; run its output)
$(aws ecr get-login --no-include-email --region us-west-2)

# example: push the latest nginx image to ECR
docker tag nginx:latest 306439459454.dkr.ecr.us-west-2.amazonaws.com/nginx:latest
docker push 306439459454.dkr.ecr.us-west-2.amazonaws.com/nginx:latest

Make sure the Elastic Beanstalk IAM user has permission to read from ECR. If it doesn't, the deploy will fail.
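One way to grant that access is to attach the AWS-managed read-only ECR policy to the instance role used by the Elastic Beanstalk environment (a sketch; aws-elasticbeanstalk-ec2-role is the default instance profile role and may differ in your setup):

# grant the Beanstalk instance role read access to ECR
# (role name is the AWS default; adjust if your environment uses a custom instance profile)
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly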

Documentation

Additional documentation is generated programmatically.

Database schema UML

Using the graph_models extension.

  • on your local machine, run brew install graphviz
  • in Docker container, run python manage.py graph_models -a -X BaseModel > /tmp/uml.dot
  • copy the dot file to your machine (see the snippet after this list)
  • run dot uml.dot -Tpng -o uml.png to generate image
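For the copy step, something like this works from your machine (assuming the container is named api, as above):

# copy the generated dot file out of the api container and render it locally
docker cp api:/tmp/uml.dot .
dot uml.dot -Tpng -o uml.png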

Browsable API docs

Using this DRF -> OpenAPI tool. See https://SITE_URL/api/swagger/ or https://SITE_URL/api/redoc/.

Endpoints

NGINX forces HTTPS for requests to these endpoints. It also compresses responses.
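A quick way to sanity-check both behaviours from the command line (a sketch; substitute your actual host for SITE_URL):

# plain HTTP should be redirected to HTTPS (expect a Location: https://... header)
curl -sI http://SITE_URL/api/swagger/ | grep -i '^location'

# responses should come back gzip-compressed when the client accepts it
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' https://SITE_URL/api/swagger/ | grep -i '^content-encoding'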

AWS

We manage our infrastructure with AWS. We deploy to Elastic Beanstalk multi-container Docker environments. This means the same Dockerfiles that build our images in dev also build them in production.

SSL

We use ACM to generate certificates. ACM certificates can only be used with AWS load balancers and CloudFront distributions, which is what we use to serve both our web app and our API. Here's an excellent tutorial on how to set up CloudFront for sites served by S3.

The certificate must be created in us-east-1 region to be used with CloudFront. This isn't well-documented. See certificate here.
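For reference, requesting a certificate in the right region looks something like this (the domain is a placeholder):

# the certificate must live in us-east-1 to be attached to a CloudFront distribution
aws acm request-certificate \
  --domain-name example.com \
  --validation-method DNS \
  --region us-east-1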

Traffic between our backend AWS instances is not encrypted, because it's not necessary.

Terminating secure connections at the load balancer and using HTTP on the backend may be sufficient for your application. Network traffic between AWS resources cannot be listened to by instances that are not part of the connection, even if they are running under the same account.

RDS and ElastiCache

Our PostgreSQL and Redis instances are managed by RDS and ElastiCache.

There are many ways to grant RDS/ElastiCache ingress access to EC2 instances, for example:

  • find the default VPC security group that is assigned to both ElastiCache and RDS instances
  • add ingress rules to this group on ports 5432 and 6379 for api and celery-beat environment security groups

However, this creates an explicit dependency between the EC2 instances and our data stores. This means we won't be able to rebuild or terminate our environment.

If you try to do so, you're entering a world of pain. Instead of failing fast, AWS kills your EC2 instances, then hangs for an hour or more while it periodically informs you that your security groups can't be deleted.

Here's how to do it right: Grant Elastic Beanstalk environment access to RDS and ElastiCache automatically.

Basically, create a security group called redis-postgres-read. Then find the default VPC security group for RDS/ElastiCache and add ingress rules on ports 5432 and 6379 for the redis-postgres-read security group. Finally, add this group to EC2 security groups in Configuration > Instances in the Elastic Beanstalk environment console.
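The same setup, sketched with the AWS CLI (the security group and VPC ids are placeholders; the final step of attaching redis-postgres-read to the environment still happens in the Elastic Beanstalk console as described above):

# create the shared group (placeholder vpc id)
aws ec2 create-security-group \
  --group-name redis-postgres-read \
  --description "Read access to RDS and ElastiCache" \
  --vpc-id <vpc_id>

# allow members of redis-postgres-read to reach the default group on 5432 and 6379
aws ec2 authorize-security-group-ingress --group-id <default_sg_id> \
  --protocol tcp --port 5432 --source-group <redis_postgres_read_sg_id>
aws ec2 authorize-security-group-ingress --group-id <default_sg_id> \
  --protocol tcp --port 6379 --source-group <redis_postgres_read_sg_id>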

Testing connections

To test connectivity between EC2 instances and data stores:

# ssh into ec2 box and test connectivity
nc -v <pg_url> 5432
nc -v <redis_url> 6379

# to test credentials for connecting to postgres
sudo yum install postgresql-devel
PGPASSWORD=password psql -h <pg_url> -U postgres -d postgres

Env vars

Application configuration is stored in environment variables, which live in the env.dev and env.prod files.

dev.sh decrypts env vars in env.dev then runs our containers. These env vars are sourced by docker-compose.yml and our containers.

Editing env vars

Make sure you have the .vault-password file, with the correct password, in the root of the repo. To decrypt env vars, run python3 tools/vault.py --infile=env.<dev|prod>. To encrypt them again run python3 tools/vault.py --infile=env.<dev|prod> --encrypt.
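For example, to edit the dev vars and re-encrypt them afterwards:

python3 tools/vault.py --infile=env.dev            # decrypt for editing
# ... edit env.dev ...
python3 tools/vault.py --infile=env.dev --encrypt  # re-encrypt before committing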

Committing

To make sure unencrypted env vars don't get committed, run cd .git/hooks && ln -s ../../pre-commit && cd - from the root of this repo. The pre-commit hook fails if env files are not encrypted, or if code doesn't pass mypy checks.

Create StaffUser

# ssh into ec2 instance

# ssh into docker container
sudo docker exec -it <container_id> /bin/ash

source .env
python manage.py staffuser --email <email> --password <password> --full_name <full_name>
# create superuser, or change superuser password
python manage.py staffuser --superuser --email <email> --password <password>

Linting and Code Style

Enforced by the flake8 linter.

pip install flake8
pip install flake8-commas

Check .flake8 for rules. Run linter on all files, ignoring line length warnings: flake8 . | grep -v E501.

Type Checking with Mypy

pip install mypy

Check mypy.ini for config options.

Mypy cheat sheet: http://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html.

  • mypy src
  • mypy src --check-untyped-defs