In this note, I create a highly available environment: Amazon EC2 instances spread across multiple availability zones behind an application load balancer. A link to my GitHub repository with the code is included. In an earlier note, I showed how to toggle traffic between three EC2 instances in three availability zones using an application load balancer; however, those EC2 instances sat in a public subnet. This note repeats the exercise, except that a private subnet hosts the EC2 instances. I'll demonstrate how to create EC2 instances in a private subnet (no access to the internet via an internet gateway) and then attach them to an application load balancer in a public subnet that toggles traffic to the EC2 instances.

Before that, let me quickly explain the fundamental difference between a public and a private subnet. Any subnet whose route table has a route to 0.0.0.0/0 via an internet gateway is a public subnet. A private subnet, by contrast, has no route to 0.0.0.0/0 via an internet gateway in its route table; it can instead have a route to 0.0.0.0/0 via a NAT gateway that sits in a public subnet.
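In Terraform terms, the distinction comes down to a single route entry. Here is a minimal sketch (the resource names `main`, `public`, `private`, and `nat` are illustrative, not taken from the repository):

```hcl
# A subnet associated with this route table is "public":
# 0.0.0.0/0 goes out through the internet gateway.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
}

# A subnet associated with this route table is "private":
# 0.0.0.0/0 goes out through a NAT gateway (outbound-only internet access).
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}
```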

In the following paragraphs, I'll briefly describe all the resources I created and how they are linked. If you want to follow along, here is a link to the GitHub repository: add-aws-elb-ec2-private-subnet-terraform. Please note that the code is on the ec2-lb branch. You can broadly classify the activities into four steps:

1. Create the network resources UPDATE: As of March 2025, the network stack consumes a VPC module instead of the hand-written Terraform configuration described below. Passing variables to the VPC module achieves the same functionality as the Terraform code written earlier for the network resources.

I created a VPC with a /24 CIDR, which was sufficient for me: 256 IP addresses. I then broke that CIDR into six /27 subnets (three public and three private); a /27 holds 32 IP addresses. I used my favorite CIDR tool for that. Next, I created an internet gateway in the VPC; the (external) application load balancer requires an internet gateway to communicate with the internet. I also added a route to 0.0.0.0/0 via the internet gateway in the public subnets' route table. The user data script that the EC2 instances run requires internet access, so I created three elastic IP addresses in the VPC and three NAT gateways in the public subnets of the VPC, attaching each NAT gateway to an elastic IP address. The EC2 instances in the private subnets use the NAT gateways to communicate with the internet. Then, I created a route to 0.0.0.0/0 via the NAT gateway in each of the three private subnets' route tables. Please note that if the EC2 instances do not need internet access to install anything, there is no need to create the three elastic IP addresses and three NAT gateways.
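The per-availability-zone NAT setup described above can be sketched as follows. This is an illustrative outline, not the exact code from the repository; the resource names and the assumption that the public subnets, private route tables, and NAT gateways are indexed 0 through 2 are mine:

```hcl
# One elastic IP per NAT gateway (one per availability zone).
resource "aws_eip" "nat" {
  count  = 3
  domain = "vpc"
}

# NAT gateways live in the public subnets, each bound to an elastic IP.
resource "aws_nat_gateway" "this" {
  count         = 3
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
}

# Each private route table sends 0.0.0.0/0 to its zone's NAT gateway.
resource "aws_route" "private_internet" {
  count                  = 3
  route_table_id         = aws_route_table.private[count.index].id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.this[count.index].id
}
```

One NAT gateway per availability zone keeps each zone's outbound traffic independent, so the loss of one zone does not take out internet access for the others.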

2. Create the compute resources I created three EC2 instances in three subnets spread across three availability zones. I also created two separate security groups: one for the application load balancer, with ingress and egress open to all IP addresses, and one for the EC2 instances, with ingress allowed only from the load balancer's security group and egress open to all IP addresses. This approach tightens the security of the EC2 instances.
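The two security groups might look like the following sketch. The names and the choice of port 80 are assumptions for illustration:

```hcl
# Load balancer security group: open to the internet.
resource "aws_security_group" "alb" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# EC2 security group: ingress only from the ALB's security group,
# so the instances are unreachable except through the load balancer.
resource "aws_security_group" "ec2" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Referencing the ALB security group as the ingress source (rather than a CIDR range) is what enforces that only load balancer traffic reaches the instances.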

3. Create an Amazon S3 bucket for load balancer access logs Per AWS-Docs, access logs contain detailed information about requests sent to the load balancer: the time each request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Following the steps outlined at enable-access-logging, I created the S3 bucket, enabled server-side encryption, and attached a permissions policy. Amazon S3-managed keys (SSE-S3) are the only server-side encryption option supported for these logs. Please pay special attention to the policy, since it depends on the region where the resources are created.
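A sketch of the bucket and its policy is below. The bucket name is a placeholder, and the principal shown (127311923021) is the ELB account ID for us-east-1; it differs per region, which is exactly why the policy depends on where you deploy:

```hcl
resource "aws_s3_bucket" "alb_logs" {
  bucket = "my-alb-access-logs-example" # placeholder: use a globally unique name
}

# Allow the regional ELB service account to write log objects.
resource "aws_s3_bucket_policy" "alb_logs" {
  bucket = aws_s3_bucket.alb_logs.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::127311923021:root" } # us-east-1 ELB account
      Action    = "s3:PutObject"
      Resource  = "${aws_s3_bucket.alb_logs.arn}/*"
    }]
  })
}
```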

4. Create the application load balancer Finally, I created a target group and attached the three EC2 instances to it. I then created an application load balancer in the VPC's public subnets and a listener whose default action forwards requests to the target group. I also automated the process using GitHub Actions pipelines with Terraform, Checkov, and Infracost. You can access the pipeline YAML files in the .github/workflows folder of the GitHub repository. GitHub Actions deployed the Terraform code in this repository using a pull request-based workflow. You can read more about that at implement pull request-based workflow using Terraform, Infracost, Checkov, and GitHub Actions.
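The load-balancer step can be outlined as below. Resource names are illustrative, and I assume the instances and public subnets are available as indexed/splatted references as in the earlier sketches:

```hcl
resource "aws_lb_target_group" "app" {
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

# Attach each of the three EC2 instances to the target group.
resource "aws_lb_target_group_attachment" "app" {
  count            = 3
  target_group_arn = aws_lb_target_group.app.arn
  target_id        = aws_instance.web[count.index].id
}

# The ALB itself lives in the public subnets and ships access logs to S3.
resource "aws_lb" "app" {
  load_balancer_type = "application"
  subnets            = aws_subnet.public[*].id
  security_groups    = [aws_security_group.alb.id]

  access_logs {
    bucket  = aws_s3_bucket.alb_logs.id
    enabled = true
  }
}

# Listener with a default action of forwarding to the target group.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```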

And that is all it takes to create an AWS application load balancer with EC2 instances in a private subnet. I hope this note was helpful. Let me know if you have any questions.