Full Guide to Deploy Production NodeJS Web App on AWS EC2 with SSL, Nginx, Docker | Updated 2024
Ready to deploy your production NodeJS Web App?
In this guide, we will cover deploying a NodeJS application onto an AWS EC2 server (or any server of your choice). This guide covers:
- Understanding the infrastructure
- Dockerising your application with Docker Compose
- Getting an SSL Certificate from Cloudflare
- Installing Docker and Nginx on EC2 Amazon Linux 2
- Setting up nginx for SSL and port forwarding on your server
- Deploying on EC2
Prerequisites:
- An EC2 instance you can SSH into
- A domain registered on Cloudflare
Understanding the Infrastructure
The diagram above illustrates how this setup will be configured. Starting from the client: when they make a request to your domain, it is proxied through Cloudflare to your EC2 instance's public IP. The client-Cloudflare connection is encrypted with Cloudflare's free edge certificate, and Cloudflare acts as a reverse proxy that hides the IP address of your EC2 instance. The Cloudflare-origin server connection is encrypted with a Cloudflare Origin Certificate, which we will set up in this guide.
When the request hits your server, it will reach Nginx, which acts as a reverse proxy and port forwarder. It takes the request to the correct port on your system, where your Dockerised application is listening for incoming requests.
Dockerising your Application
Docker is a service that allows you to run code within isolated containers. Containers are similar to lightweight virtual machines: each provides a sandboxed environment with its own filesystem and dependencies in which to run code. Docker Compose is a tool for defining and running multi-container applications. It automates the build process and lets us run the program with features such as restart on failure, similar to pm2 or Node forever.
Dockerfile
To Dockerise your application, first create a `Dockerfile` in the root of your project:
# syntax=docker/dockerfile:1
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --global nodemon
RUN npm install
COPY . .
ENV NODE_ENV=production
EXPOSE 8000
CMD ["npm", "start"]
The `FROM` instruction builds the container from a base image. Docker provides many images with different preinstalled software and operating systems. In this example, we use the Node image, as we want to run a NodeJS API, and this image has NodeJS and npm preinstalled.
We then set our working directory with `WORKDIR`, which is where the code will live inside the container.
First, we `COPY` the package.json files from our machine into the container and `RUN` the `npm install` commands. Here we also install `nodemon` globally, which is what we are using to run the NodeJS application.
Then, we `COPY` the rest of the code into the container.
`EXPOSE 8000` documents that the container listens on port 8000 - make sure this matches the port configured in your program. Note that `EXPOSE` on its own does not publish the port; the port mapping is done in the Docker Compose file.
Finally, `CMD ["npm", "start"]` runs the code - change this to whatever command your program uses to start.
It is also important to specify a `.dockerignore` file, which tells Docker not to copy over certain files:
# .dockerignore
node_modules
dist
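The `CMD ["npm", "start"]` instruction assumes your `package.json` defines a `start` script. A minimal sketch - the package name and the `index.js` entry file are assumptions, so adjust them to your project:

```json
{
  "name": "my-app",
  "scripts": {
    "start": "nodemon index.js"
  }
}
```

Since the Dockerfile installs `nodemon` globally, the `start` script can invoke it directly.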
Docker Compose
Now let's set up Docker Compose. To do this, we create a `docker-compose.yml` file:
services:
  my-app:
    image: node:22-alpine
    expose:
      - "8000"
    ports:
      - "8000:8000"
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
      - /app/node_modules
    working_dir: /app/
    env_file:
      - .env
    restart: on-failure
In this file we list the 'services'; here we have named ours 'my-app'. We specify the base image and which ports to expose - the same as in the Dockerfile. We then provide build information: the Dockerfile path and the 'context', which is the path to your code relative to the `docker-compose.yml` file.
Volumes map data into the container and persist it across restarts: the first entry bind-mounts your project directory to `/app`, and the second is an anonymous volume that keeps the container's `/app/node_modules` from being hidden by the host's directory.
We then specify the working directory and the location of the env file and tell the program to restart on failure.
Congrats! You have successfully Dockerised your application. If you have Docker on your local machine, you can test it with:
docker-compose build
docker-compose up
Getting an SSL Certificate from Cloudflare
Edge Certificate
The first step is to secure the client-Cloudflare connection. Cloudflare already has this set up, so we just have to enforce it. On your Cloudflare dashboard, navigate to the SSL/TLS section on the sidebar, and go to Overview:
In the first setting, you will be able to change your encryption mode. Set it to Full (strict):
This enforces strict encryption across the entire connection. Now, on the sidebar, navigate to the Edge Certificates tab. Scroll down and enable the Always Use HTTPS option:
This redirects all HTTP requests to HTTPS.
Origin Certificate
The next step is to secure the Cloudflare-origin server connection. This process requires Nginx, so in this step we will just download the certificate and key; later you will see how to register them with Nginx.
On the sidebar, under SSL/TLS, go to Origin Server.
Under the Origin Certificates section, click the Create Certificate button.
Leave the default settings, but add additional hostnames if needed - if you will be using subdomains, make sure you have *.your-domain.com and your-domain.com listed as hostnames.
Click Create and you will be taken to a page which displays your certificate and private key.
Copy your certificate and private key and make note of them in a password manager or protected file - do not share these with anyone.
Installing Docker and Nginx on EC2
Now we need to install Docker and Nginx on our server. The EC2 instance used in this tutorial runs on Amazon Linux 2, so depending on your OS you may have to adjust the commands.
SSH into your server and let's begin.
Installing Docker and Docker Compose on Amazon Linux 2
Adapted from: Vivek Gite.
Start by updating `yum`:
sudo yum update
Install Docker:
sudo yum install docker
Add group membership for the default ec2-user so you can run all docker commands without using the sudo command:
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
Install Docker Compose:
wget https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)
sudo mv docker-compose-$(uname -s)-$(uname -m) /usr/local/bin/docker-compose
sudo chmod -v +x /usr/local/bin/docker-compose
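The download URL above is assembled from your OS and architecture via `uname`, so the same commands work across platforms. You can preview the release asset name the `wget` command will fetch:

```shell
# Build the docker-compose release asset name the same way the wget URL does.
# On a typical Amazon Linux 2 x86 EC2 instance this is docker-compose-Linux-x86_64.
ASSET="docker-compose-$(uname -s)-$(uname -m)"
echo "$ASSET"
```
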
Enable Docker service to start on system boot:
sudo systemctl enable docker.service
Start Docker:
sudo systemctl start docker.service
Installing Nginx on Amazon Linux 2
Install with `yum`:
sudo yum install nginx
To start the nginx service:
sudo systemctl start nginx
When you make changes to the nginx config, you will have to do:
sudo systemctl restart nginx
SSL and Port Forwarding with Nginx
Nginx is a powerful web server that can be used as a reverse proxy and load balancer, amongst other capabilities. In this solution, we will use it as a proxy and port forwarder, to direct all incoming requests to the correct port.
First, SSH into your EC2 instance (this tutorial uses Amazon Linux 2) and save your certificate and private key:
Create the certificate file (this guide uses `.pem` for the certificate and `.key` for the key; replace `your-domain.com` with your own domain):
sudo touch /etc/ssl/www.your-domain.com.pem
Edit the file:
sudo nano /etc/ssl/www.your-domain.com.pem
Paste your certificate into the terminal editor and save the file (^X, return, return). Do the same with your key file, but set the filename as: /etc/ssl/www.your-domain.com.key
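The certificate and key paths follow one naming convention that the nginx config must match exactly. A small sketch of that convention - the `your-domain.com` value and the `.pem`/`.key` extensions are placeholders for your own setup:

```shell
# Derive the certificate and key paths from the domain name,
# mirroring the /etc/ssl/www.<domain> convention used in this guide.
DOMAIN="your-domain.com"   # placeholder - use your own domain
CERT="/etc/ssl/www.${DOMAIN}.pem"
KEY="/etc/ssl/www.${DOMAIN}.key"
echo "$CERT"
echo "$KEY"
```
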
To configure nginx, we use a configuration file. To access this file do:
sudo nano /etc/nginx/nginx.conf
Now you will be able to edit the configuration. Scroll down to the HTTP block and replace the current `server` section with:
server {
    # Listen on the HTTPS port - port 443
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name your-domain.com;

    ssl_certificate /etc/ssl/www.your-domain.com.pem;
    ssl_certificate_key /etc/ssl/www.your-domain.com.key;

    access_log /var/log/nginx/nginx.vhost.access.log;
    error_log /var/log/nginx/nginx.vhost.error.log;

    location / {
        proxy_pass http://0.0.0.0:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Replace `your-domain.com` with your domain name, and the port in `proxy_pass` with the port your application listens on (8000 in this guide). This tells Nginx to listen on the default HTTPS port, 443, and forward these requests to the port on which your application is listening. For each program you deploy and every subdomain you associate with it, you will need to add a new server block and define the subdomain under the `server_name` section.
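For example, a second server block for a hypothetical `api.your-domain.com` subdomain, forwarding to another app on port 9000 (both the subdomain and the port are assumptions), might look like this - the wildcard origin certificate from earlier covers subdomains, so the same files can be reused:

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name api.your-domain.com;

    ssl_certificate /etc/ssl/www.your-domain.com.pem;
    ssl_certificate_key /etc/ssl/www.your-domain.com.key;

    location / {
        proxy_pass http://0.0.0.0:9000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}
```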
Exit the terminal editor, and run the following command to check your Nginx configuration for any errors:
sudo nginx -t
If all is good, restart the Nginx service to apply the changes:
sudo systemctl restart nginx
Now you are ready to deploy the application.
Deploying on EC2
The first thing to check is to make sure your EC2 Security Groups are configured correctly. Make sure you have created a security group that allows incoming requests on port 443 from all IPv4 and IPv6 addresses. This makes your HTTPS port publicly available.
To deploy, the first thing to do is get your code onto the EC2 instance, whether you use GitHub and `git clone` or an SSH copy command.
When the code is ready, `cd` into the code's root directory and build the Docker container:
docker-compose build
Once it has built, you can now run the container in the background (`-d` flag):
docker-compose up -d
To view the application logs, simply:
docker-compose logs
And there you go! Your program should be available on your domain. 🥳