Deploy Django with Zappa + AWS

Saúl GN
Dec 28, 2021 · 14 min read


A quick guide to deploying a Django application to AWS using Zappa

What this article is

A guide to the basic configuration needed to easily deploy a Django application to AWS Lambda + API Gateway using Zappa, a serverless framework.

What this article is not

A “how-to” on Django or Python programming, a “how-to” on using AWS services, or an argument for why this is the best way to deploy an application.

AWS

If you are already comfortable with AWS, jump to the Network configuration section.

According to Wikipedia, Amazon Web Services provides:

on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis

Among the many services AWS provides, several help us host an application written in almost any programming language on demand, and AWS also spares us many of the configurations usually required for a production-grade application, for example managing a server.

In this article, we are using the Lambda and S3 services to host and deploy the main functionality of a basic application, with RDS as the database.

S3 (Amazon Simple Storage Service)

AWS S3 is an object storage service that, as its name says, lets us store objects in the cloud. Objects can be media files, configuration files, static web files such as JS and HTML, and also compressed packages consumed by other AWS services such as Lambda.

We will use it to store our application package, media files, and static files.

Lambda

AWS defines Lambda as a compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring, and logging.

Lambda is one of the most popular serverless options. It takes the responsibility for resource maintenance and configuration off our hands, which is why it can be one of the easiest ways to deploy a fully featured application with little effort.

RDS (Amazon Relational Database Service)

RDS provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups.

We will use it as the database service for our project; it's a solid option for production because it allows us to scale our database with minimal effort.

Account creation

AWS has a free tier that covers almost all the basics we need; if you want to know more about the free tier, look at this link. To create your account you need at least two things:

  • A valid and active credit card: this lets AWS verify your identity and prevents creating a new account every time the free tier runs out.
  • A valid email: this doesn’t need an explanation

To create an account go to this page:

The process is straightforward, so I won't cover it in this article, but don't hesitate to post your questions in the comments or email me.

Configure the AWS account on the local machine

It's time to get our hands dirty. First, we must configure the AWS credentials on our local machine; all the steps will work on Linux, macOS, and decent Windows terminals such as Git Bash, Cmder, or WSL.

Sign in to your AWS account, go to the IAM panel through the Account menu -> Security credentials -> Access keys, and create a new access key.

[Screenshot: creating a new access key]

Download the file and keep it in a secure location. If you want better security control, it's better to create an IAM role and give it only the permissions it needs.

Create a file named credentials inside the .aws folder in your home directory; the path should look like ~/.aws/credentials (use whatever text editor you like). Put the created keys in it:

[default]
aws_access_key_id = __your_access_key__
aws_secret_access_key = __your_secret_key__

To run Zappa we need to install the AWS CLI. It is really easy; follow the steps in this link for your operating system. Once it is installed, run the following command:

aws --version

And the output should look like this

aws-cli/2.2.20 Python/3.8.8 Darwin/20.6.0 exe/x86_64 prompt/off

With our local machine configured, we now need to create some resources in AWS so our project can run properly.

Network configuration: VPC

An AWS VPC is a virtual network inside our cloud account, and it's very useful for keeping things grouped and isolated. For this project we need a VPC and all its related components; let's start with the VPC itself, created with the following values (a CLI sketch follows the list):

  • Name: identifier for our VPC; it should be descriptive (perhaps sharing a common prefix with the other resources)
  • IPv4 CIDR: the primary IP block from which the subnets will be carved
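
For reference, the same VPC can be created with the AWS CLI we installed earlier. This is only a sketch: the 10.0.0.0/16 block and the my-project-vpc name are example values, so adjust them to your own addressing and naming plan.

aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-tags --resources vpc-xxxxxxxx --tags Key=Name,Value=my-project-vpc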

Network configuration: Public subnet

To allow traffic from outside our VPC we need resources that are reachable from the public internet. For this we create a public subnet with the following data (again, a CLI sketch follows the list):

  • VPC ID: Previously created VPC
  • Name: Identifier for the subnet
  • Availability zone: Zone that determines where the subnet will live
  • IPv4 CIDR: the IP block for this subnet; it must fall inside the primary VPC IP block
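
A rough CLI equivalent, where the VPC ID, CIDR, and availability zone are placeholders for your own values:

aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24 --availability-zone us-east-2a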

With this the subnet is created, but it still needs an internet gateway to be publicly reachable.

Network configuration: Private subnets

The purpose of these subnets is to keep the RDS database isolated from the public internet, adding a security layer for our data store. The configuration is very similar to the public subnet, just with different CIDR blocks (for RDS we will need private subnets in at least two availability zones).
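
As a sketch, assuming the example VPC above, two private subnets in different availability zones could be created like this (IDs and CIDRs are placeholders):

aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.2.0/24 --availability-zone us-east-2a
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.3.0/24 --availability-zone us-east-2b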

Network configuration: Route tables

A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed

Network configuration: Route tables Association

The public route table should be associated with the public subnet, and the private route table with the private subnets.
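
A hedged CLI sketch of both steps, with placeholder IDs; run create-route-table twice, once per tier, then associate each table with its subnet(s):

aws ec2 create-route-table --vpc-id vpc-xxxxxxxx
aws ec2 associate-route-table --route-table-id rtb-public-xxxx --subnet-id subnet-public-xxxx
aws ec2 associate-route-table --route-table-id rtb-private-xxxx --subnet-id subnet-private-a-xxxx
aws ec2 associate-route-table --route-table-id rtb-private-xxxx --subnet-id subnet-private-b-xxxx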

Network configuration: Internet gateway

An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.

We need to attach the internet gateway to the VPC in order to allow internet communication.

Then edit the public route table to add a route through the internet gateway.
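
Roughly, with placeholder IDs, the gateway is created, attached to the VPC, and then the public route table gets a default route pointing at it:

aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-public-xxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx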

Network configuration: Elastic IPs

An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is allocated to your AWS account, and is yours until you release it
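
We only need one Elastic IP here, for the NAT gateway in the next step. A minimal CLI sketch:

aws ec2 allocate-address --domain vpc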

Network configuration: NAT gateway

A NAT gateway is a Network Address Translation (NAT) service. You can use a NAT gateway so that instances in a private subnet can connect to services outside your VPC but external services cannot initiate a connection with those instances.

We need this so that resources in the private subnets can reach services outside our VPC, for example the S3 bucket.

Then add a route to the NAT gateway in the private route table.
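
As a sketch with placeholder IDs: the NAT gateway lives in the public subnet and uses the Elastic IP allocated above, and the private route table gets a default route pointing at it.

aws ec2 create-nat-gateway --subnet-id subnet-public-xxxx --allocation-id eipalloc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-private-xxxx --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxxxxxx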

Network configuration: Security Groups

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups.
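
One reasonable layout for this project (not the only one) is a security group for the Lambda function and another for RDS that only accepts PostgreSQL traffic from the Lambda group. A hedged sketch, with placeholder names and IDs:

aws ec2 create-security-group --group-name my-project-lambda-sg --description "Lambda functions" --vpc-id vpc-xxxxxxxx
aws ec2 create-security-group --group-name my-project-rds-sg --description "RDS PostgreSQL" --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-rds-xxxx --protocol tcp --port 5432 --source-group sg-lambda-xxxx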

This ends our network configuration; let's continue with our data store.

Create a PostgreSQL DB in RDS

Go to the AWS RDS console, click Create database, and fill the form that opens with the following data (a rough CLI equivalent appears after these lists):

  • Database creation method: Standard create
  • Engine: PostgreSQL
  • Version: 13.3
  • DB instance size: Dev/Test (or if you are ready for prod, choose Production)
  • DB instance identifier: my-project-name-db (identifiers may only contain letters, digits, and hyphens)
  • Master username: my_project_user (PostgreSQL master usernames may not contain hyphens)
  • Master password: [generate a secure password]
  • Instance class: db.t3.micro at minimum
  • Storage: Default settings or what your project needs

Connectivity: here we will use the resources we created previously

  • VPC: Previously created
  • Subnet group: Create new
  • Public access: No
  • VPC Security group: Choose existing
  • Availability zone: No preference
  • Database authentication: Password
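
If you prefer the CLI, a roughly equivalent pair of calls looks like this. The identifier, credentials, subnet IDs, and security group are placeholders, and the "create new subnet group" console step becomes an explicit create-db-subnet-group referencing the private subnets:

aws rds create-db-subnet-group --db-subnet-group-name my-project-db-subnets --db-subnet-group-description "Private subnets for RDS" --subnet-ids subnet-private-a-xxxx subnet-private-b-xxxx
aws rds create-db-instance \
  --db-instance-identifier my-project-name-db \
  --engine postgres --engine-version 13.3 \
  --db-instance-class db.t3.micro --allocated-storage 20 \
  --master-username my_project_user --master-user-password 'change-me' \
  --db-subnet-group-name my-project-db-subnets \
  --vpc-security-group-ids sg-rds-xxxx \
  --no-publicly-accessible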

Click the Create button. After a while (creation can take a few minutes) the instance will appear in the RDS console; click through to its details, which should show an endpoint like this:

my-project-database.cfbndsdfs89tmivjn.us-east-2.rds.amazonaws.com

Keep this endpoint for the next sections. The next steps are to create a Django project, configure the AWS resources, and create the Zappa configuration.

Create a Django project

For this project we will use pipenv, but if you are more comfortable with pip or another package manager you can use that instead. To install pipenv, follow the instructions on this page:

Once pipenv is installed, create a folder where our project will live:

mkdir my-project-name && cd my-project-name

In the project root, create a file named Pipfile with the following content:

Initial Pipfile
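
The original post embeds this file as a gist; a minimal Pipfile along these lines should work, with the Python version pinned to 3.8 for the reason explained below:

[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]

[dev-packages]

[requires]
python_version = "3.8"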

To use Lambda we need Python 3.8, because Python 3.9 was not yet supported at the time of writing. After creating the file, we can install the basic required libraries with this command:

pipenv install django zappa django-storages boto3

After the install, the Pipfile's [packages] section should list django, zappa, django-storages, and boto3 (your versions may differ).

Once Django is installed, we create the project:

django-admin startproject my_project_name .

This command generates a file structure like this:

📦my-project-name
┣ 📂my_project_name
┃ ┣ 📜__init__.py
┃ ┣ 📜asgi.py
┃ ┣ 📜settings.py
┃ ┣ 📜urls.py
┃ ┗ 📜wsgi.py
┣ 📜.gitignore
┣ 📜LICENSE
┣ 📜Pipfile
┣ 📜Pipfile.lock
┣ 📜README.md
┗ 📜manage.py

After the project has been created successfully, start the server with the following commands and open the default page at http://localhost:8000:

pipenv shell
python manage.py migrate
python manage.py runserver

[Screenshot: the initial Django welcome page]

To test the upcoming Zappa deployments, let's enable the Django administration panel. To do this we need to create a superuser with this command:

python manage.py createsuperuser
Username (leave blank to use 'my_user'): my_super_user
Email address: my_super_user@gmail.com
Password:
Password (again):
Superuser created successfully.

Go to http://localhost:8000/admin and log in with the user you just created.

[Screenshots: the Django admin login form and the initial admin index]

This ends the basic configuration of our Django project; let's start with Zappa.

Zappa

Zappa is great for deploying serverless Python microservices with frameworks such as Flask and Bottle, and for hosting larger web applications and CMSes with Django. You can also deploy any Python WSGI application.

I think that leaves little doubt about what Zappa is, but in my own words: it is a tool that allows us to deploy Python web applications to the cloud with minimal effort.

Configure project for Zappa deploy

We already installed Zappa earlier, so the next step is to create the default configuration:

pipenv shell
zappa init

This will ask a few questions in order to build the default configuration:

What do you want to call this environment (default 'dev'): prod
AWS Lambda and API Gateway are only available in certain regions. Let's check to make sure you have a profile set up in one that will work.
We found the following profiles: default, and my_another_profile. Which would you like us to use? (default 'default'):default
Your Zappa deployments will need to be uploaded to a private S3 bucket.
If you don't have a bucket yet, we'll create one for you too.
What do you want to call your bucket? (default 'zappa-8j4plosoe'): my-zappa-s3-bucket
It looks like this is a Django application!
What is the module path to your projects's Django settings?
We discovered: my_project_name.settings
Where are your project's settings? (default 'my_project_name.settings'):my_project_name.settings
You can optionally deploy to all available regions in order to provide fast global service.
If you are using Zappa for the first time, you probably don't want to do this!
Would you like to deploy this application globally? (default 'n') [y/n/(p)rimary]:n

After all the questions are answered, the Zappa CLI will show us the configuration summary as JSON; confirm it and that's it!

We can now try our first deployment with the following command:

zappa deploy prod

But... what happened? The deployment failed. We can inspect it with the following command:

zappa tail prod

It shows us there is an error in the database configuration. To solve this we need two things: environment variables, and a project configured to work with the previously created database.

Create environment configuration

First we need a new S3 bucket to store our environment variables file. Go to the S3 console, click the Create bucket button, pick a name that isn't already taken, and keep all the default values.

Once the bucket is created, go inside it and create a folder called my-project-configs; we will put the file inside this folder.

Create a JSON file with the following content and save it as prod.json:

{
  "RDS_DB_NAME": "my_project_name_db",
  "RDS_USERNAME": "my-project-name-user",
  "RDS_PASSWORD": "my-project-name-db-pass",
  "RDS_HOSTNAME": "my-project-name-rds.rds.amazonaws.com",
  "RDS_PORT": "5432",
  "DJANGO_DEBUG": "False",
  "ENVIRONMENT": "production",
  "USE_S3": "True"
}

Click Upload, choose the configuration file, and once it's uploaded click on it to open its details. Copy the S3 URI; we will use it in the project configuration.

Modify Zappa and Django configuration

Open the Zappa settings file zappa_settings.json and add the following properties to the existing JSON.
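
The exact snippet lives in an embedded gist in the original post; merged into the file that zappa init generated, the additions look roughly like this. The subnet IDs, security group ID, and config bucket below are placeholders: vpc_config puts the Lambda function inside our VPC, and remote_env points Zappa at the prod.json file we uploaded to S3.

{
  "prod": {
    "django_settings": "my_project_name.settings",
    "profile_name": "default",
    "project_name": "my-project-name",
    "runtime": "python3.8",
    "s3_bucket": "my-zappa-s3-bucket",
    "vpc_config": {
      "SubnetIds": ["subnet-xxxxxxxx", "subnet-yyyyyyyy"],
      "SecurityGroupIds": ["sg-xxxxxxxx"]
    },
    "remote_env": "s3://my-config-bucket/my-project-configs/prod.json"
  }
}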

SubnetIds and SecurityGroupIds are the attributes from our RDS database's network configuration and are necessary to connect Lambda with our instance; you can find them in the RDS console by clicking into the database instance details.

With this the infrastructure configuration is done; let's modify the Django settings. Open settings.py and modify the DATABASES section. At the top of the file add the following import:

import os

And change the database configuration to match this:
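
The exact gist is not reproduced in this copy; a configuration consistent with the environment variables defined in prod.json (delivered to Lambda through Zappa's remote_env) would look roughly like this, keeping SQLite as a fallback for local development:

# Use RDS PostgreSQL in production; the values come from prod.json via remote_env.
if os.environ.get("ENVIRONMENT") == "production":
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": os.environ.get("RDS_DB_NAME"),
            "USER": os.environ.get("RDS_USERNAME"),
            "PASSWORD": os.environ.get("RDS_PASSWORD"),
            "HOST": os.environ.get("RDS_HOSTNAME"),
            "PORT": os.environ.get("RDS_PORT", "5432"),
        }
    }
else:
    # Default SQLite database for local development.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": BASE_DIR / "db.sqlite3",
        }
    }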

To allow Python to connect to RDS PostgreSQL we need one more dependency. We will use aws-psycopg2, a build of psycopg2 packaged for Lambda (handy especially if you develop on macOS); on a Linux distribution you can install psycopg2-binary instead.

pipenv install aws-psycopg2

Once the module is installed, we can retry the deployment

If the deployment succeeds it will show the URL of your application (if an error appears you can use zappa tail prod), something like:

https://sssdfcsx5zp78sdfihyh.execute-api.us-east-2.amazonaws.com/prod

You may see an error about a disallowed host; to fix this, modify the following two settings in settings.py.
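
The two settings in question are DEBUG and ALLOWED_HOSTS; a minimal sketch follows. Allowing every host is the quick fix used here; for a real deployment you should restrict the list to the API Gateway domain or your own domain.

# Debug is driven by the DJANGO_DEBUG variable from prod.json ("False" in production).
DEBUG = os.environ.get("DJANGO_DEBUG", "True") == "True"

# Quick fix: accept any host. Safer: list only the execute-api host and your own domain.
ALLOWED_HOSTS = ["*"]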

This configuration deactivates the debug features that come with Django. Retry the deployment, but this time, because the app is already deployed, we need to update it:

zappa update prod

It works! But… no styles are displayed, and if we try to log in an error appears. To fix this we need to run the migrations and collect the static files.

Configure migrations and collectstatic steps

First we need to install a utility module that adds some helpful Django management commands for Zappa deployments:

pipenv install zappa-django-utils

Once this module is installed, we will configure the Django project to collect static files into an S3 bucket.

Go to your S3 console and create a new public S3 bucket. You can choose the name, but something descriptive is welcome (staticandmediafiles or similar). Then add the following variables to the prod.json file and upload it again:

"AWS_STORAGE_BUCKET_NAME": "staticandmediafiles",
"USE_S3": "True"

Create a new file named storages_backend.py inside our main project package and put the following content in it:
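
The original file was embedded as a gist; a typical version built on django-storages' S3Boto3Storage looks like the sketch below (the class names and folder prefixes are just a convention):

from storages.backends.s3boto3 import S3Boto3Storage


class StaticStorage(S3Boto3Storage):
    """Stores collectstatic output under static/ in the S3 bucket."""

    location = "static"
    default_acl = "public-read"


class MediaStorage(S3Boto3Storage):
    """Stores uploaded media under media/ and never overwrites existing files."""

    location = "media"
    default_acl = "public-read"
    file_overwrite = False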

Open the settings.py file again, add storages and zappa_django_utils to the installed applications, then locate the static files configuration and modify it to match the following:
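
Again, the original snippet was an embedded gist; the settings below are a sketch that assumes the storages_backend.py classes above live in the my_project_name package and that USE_S3 and AWS_STORAGE_BUCKET_NAME come from prod.json:

INSTALLED_APPS = [
    # ... the default Django apps ...
    "storages",
    "zappa_django_utils",
]

USE_S3 = os.environ.get("USE_S3", "False") == "True"

if USE_S3:
    # Serve static and media files from the public S3 bucket.
    AWS_STORAGE_BUCKET_NAME = os.environ.get("AWS_STORAGE_BUCKET_NAME")
    AWS_S3_CUSTOM_DOMAIN = f"{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com"
    AWS_DEFAULT_ACL = "public-read"
    STATICFILES_STORAGE = "my_project_name.storages_backend.StaticStorage"
    DEFAULT_FILE_STORAGE = "my_project_name.storages_backend.MediaStorage"
    STATIC_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/static/"
    MEDIA_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/media/"
else:
    STATIC_URL = "/static/"
    MEDIA_URL = "/media/"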

Update the application and execute collectstatic with these Zappa commands:

zappa update prod
zappa manage prod "collectstatic --noinput"
[Screenshot: the Django admin with static files loading correctly]

In order to be able to log in to the application, run the following commands:

zappa manage prod create_pg_db
zappa manage prod migrate
zappa manage prod create_admin_user

These commands create the database, apply the migrations, and generate a generic admin user we can use to log in to the application. Once they finish, use the credentials printed in the console to log in.

Then open the admin and that's it: the app is up and running.

And that's it. This configuration is the minimum for a production-like application; it can be improved with additional security practices.

In the next article we will use Terraform to automate the creation of the AWS resources.

GitHub

PR with the changes in this article:

Next articles (Django + Zappa DevOps series):

  • Automate AWS Resources with Terraform (In progress)
  • Integrate Django with Circle CI (Pending)
  • Sentry for Error Report in Django (Pending)
  • Django and Datadog (Pending)
