A lot of `ci.yml` is self-explanatory, but here are some notes to help clarify some of the steps:
* Does an install and build for the Lambda function.
* And finally zips up everything.
* **Update Lambda migration function**: Updates the AWS Lambda function code for database migrations using the AWS CLI. The workflow waits for the update to be completed before the function is invoked.
* **Build and push frontend Docker image**: Note that this step uses a Dockerfile in the root of the project, rather than the Dockerfile in the frontend directory. I wanted to keep the Dockerfile in the frontend directory for local development purposes, e.g., `frontend/Dockerfile` uses the Angular development server on port 4200.
The diagram above illustrates the major parts of the application infrastructure. Here are some descriptions and notes:
* **Interview Prep VPC**: "The Virtual Private Cloud (VPC) is a logically isolated network within the AWS cloud where we can launch and manage AWS resources. It provides a secure environment to group and connect related resources and services, such as EC2 instances, RDS databases, and ECS clusters. The VPC allows us to define our own IP address range, create subnets, and configure route tables and network gateways, ensuring that our infrastructure is both secure and scalable." (GitHub Copilot came up with such a great explanation here that I'm just going to use it as-is.)
* **Availability zones A and B**: `us-east-1a` and `us-east-1b`. These zones, along with their corresponding public and private subnets, enhance the app's resilience. Currently, one task each for the ECS frontend and backend is deployed, but this can be scaled to distribute tasks across both availability zones.
* **Public route table**: The public routing table is associated with the public subnets and directs traffic to the internet through the Internet Gateway. This allows resources in the public subnets, such as the load balancer and bastion host, to communicate with the internet.
* **Private route table**: The private routing table is associated with the private subnets and directs traffic to the internet through the NAT Gateway. This allows resources in the private subnets, such as the ECS services and RDS database, to access the internet for updates and patches while keeping them isolated from direct internet access.
* **Internet gateway**: Allows resources within the VPC to communicate with the internet.
* **NAT gateway**: The NAT gateway is in public subnet A, but both private subnets can use it via the private route table. We *could* add a NAT gateway to public subnet B to ensure higher availability and fault tolerance. The NAT gateway is used, for example, by ECS tasks to pull Docker images from ECR. It's also used by the migrate Lambda function to get values from AWS Systems Manager Parameter Store. (See the Terraform sketch after this list.)
* **API gateway**: This serves as a proxy that forwards the path, data, method, and other request details to the backend API server. This allows the backend to handle the actual processing of the requests. The API gateway is set up to handle CORS, limiting web browser requests to pages served from our frontend host. In the future we'll add an API key and authorization to restrict usage of the API.
* **ECR for hosting frontend and backend Docker images**: Our GitHub workflow builds and pushes Docker images of our frontend and backend apps to the AWS Elastic Container Registry, tagging the latest build as... "latest."
* **ECS cluster with frontend and backend ECS services**: The GitHub workflow updates the backend and frontend services in the AWS Elastic Container Service. These services are where our frontend and backend servers actually run, providing the necessary environments for our applications to operate.
* **RDS-hosted Postgres database instance**: The application uses an instance of Postgres hosted by the AWS Relational Database Service. It runs in the private subnets.
* **Migrate Lambda function**: This is a function run by the GitHub workflow. A workflow step packages up the migration files with the Lambda function itself and then invokes the function.
* **Route 53-hosted domains**: `dev.interviewprep.onyxdevtutorials.com` and `api.dev.interviewprep.onyxdevtutorials.com`. The DNS configuration in Route 53 connects the frontend and backend domains to the load balancer. This is achieved using alias records that point to the load balancer's DNS name and zone ID (see the sketch after this list).
* **CIDR Blocks**: CIDR (Classless Inter-Domain Routing) blocks are used to define IP address ranges within the VPC (see the Terraform sketch after this list).
  * **VPC CIDR Block**: This is set to `10.0.0.0/16`, allowing for 65,536 possible IP addresses -- which is plenty for this project.
  * **Subnet CIDR Blocks**: Each subnet gets 256 IP addresses:
    * **Public Subnet A**: `10.0.1.0/24`
    * **Public Subnet B**: `10.0.2.0/24`
    * **Private Subnet A**: `10.0.3.0/24`
    * **Private Subnet B**: `10.0.4.0/24`
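
To tie the networking pieces together, here is a minimal Terraform sketch of the VPC, subnets, NAT gateway, and private route table described above. The CIDR blocks and availability zones match the list, but the resource names (`main`, `public_a`, and so on) are illustrative and may differ from the actual configuration in this repo.

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # 65,536 addresses for the whole VPC.
}

# Public subnets, one per availability zone.
resource "aws_subnet" "public_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "public_b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}

# Private subnets for the ECS services and the RDS instance.
resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private_b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.4.0/24"
  availability_zone = "us-east-1b"
}

# A single NAT gateway in public subnet A, shared by both private subnets.
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id
}

# Private route table: outbound traffic from the private subnets goes through the NAT gateway.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "private_a" {
  subnet_id      = aws_subnet.private_a.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "private_b" {
  subnet_id      = aws_subnet.private_b.id
  route_table_id = aws_route_table.private.id
}
```

Adding a second NAT gateway in public subnet B, with its own route table for private subnet B, would remove the single point of failure mentioned above.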
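Similarly, here is a rough sketch of the Route 53 alias records that point the two domains at the load balancer. The hosted zone and load balancer references (`aws_route53_zone.main`, `aws_lb.main`) are assumptions, not the repo's actual names.

```hcl
# Frontend: dev.interviewprep.onyxdevtutorials.com -> load balancer.
resource "aws_route53_record" "frontend" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "dev.interviewprep.onyxdevtutorials.com"
  type    = "A"

  alias {
    name                   = aws_lb.main.dns_name
    zone_id                = aws_lb.main.zone_id
    evaluate_target_health = true
  }
}

# Backend API: api.dev.interviewprep.onyxdevtutorials.com -> load balancer.
resource "aws_route53_record" "backend" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "api.dev.interviewprep.onyxdevtutorials.com"
  type    = "A"

  alias {
    name                   = aws_lb.main.dns_name
    zone_id                = aws_lb.main.zone_id
    evaluate_target_health = true
  }
}
```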
## Costs
The API Gateway is configured in Terraform as a catch-all proxy in front of the backend. Here are some annotated excerpts:
```hcl
resource "aws_api_gateway_resource" "proxy" {
  # ...
  path_part = "{proxy+}" # Path part that acts as a catch-all proxy for any request path.

  depends_on = [aws_api_gateway_rest_api.api] # Ensure the API is created before creating the resource.
}

resource "aws_api_gateway_method" "proxy_method" {
  rest_api_id      = aws_api_gateway_rest_api.api.id
  resource_id      = aws_api_gateway_resource.proxy.id
  http_method      = "ANY"  # Handle every type of HTTP request
  authorization    = "NONE" # No authorization required (yet)
  api_key_required = false  # No API key required (yet)

  request_parameters = {
    "method.request.path.proxy" = true
  }

  # This configuration allows the API Gateway to serve as a proxy for my actual backend
  # application, handling all types of HTTP requests and forwarding them to the backend.
}
```
The API Gateway configuration also sets up the proxy integration and the CORS preflight handling:

* An OPTIONS method is defined on the proxy resource for CORS preflight requests. API Gateway handles it with a MOCK integration that generates a mock response.
* An integration is defined between the proxy resource and the backend application; basically, the API Gateway forwards all requests to the backend application.
* In Amazon API Gateway, an `aws_api_gateway_method_response` specifies the possible responses from the API Gateway, while an `aws_api_gateway_integration_response` maps the response from an integration to the API Gateway response.
* The integration response specifies the response parameters (headers) that the integration should return. It is part of the integration setup and tells API Gateway what to include in the response when the OPTIONS method is called.
* The method response specifies the response parameters (headers) that the method should return. It is part of the method setup and ensures that the headers specified in the integration response are actually included in the final response sent to the client.
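
To make those points concrete, here is a rough Terraform sketch of an OPTIONS/MOCK CORS setup of this kind. It is not the repo's actual code: the resource names, the allowed headers and methods, and the allowed origin are all assumptions.

```hcl
# OPTIONS method on the proxy resource, used for CORS preflight requests.
resource "aws_api_gateway_method" "proxy_options" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.proxy.id
  http_method   = "OPTIONS"
  authorization = "NONE"
}

# MOCK integration: API Gateway answers the preflight itself instead of calling the backend.
resource "aws_api_gateway_integration" "proxy_options" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.proxy.id
  http_method = aws_api_gateway_method.proxy_options.http_method
  type        = "MOCK"

  request_templates = {
    "application/json" = "{\"statusCode\": 200}"
  }
}

# Method response: declares which headers the method may return.
resource "aws_api_gateway_method_response" "proxy_options_200" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.proxy.id
  http_method = aws_api_gateway_method.proxy_options.http_method
  status_code = "200"

  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = true
    "method.response.header.Access-Control-Allow-Methods" = true
    "method.response.header.Access-Control-Allow-Origin"  = true
  }
}

# Integration response: supplies the actual header values returned to the client.
resource "aws_api_gateway_integration_response" "proxy_options_200" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.proxy.id
  http_method = aws_api_gateway_method.proxy_options.http_method
  status_code = aws_api_gateway_method_response.proxy_options_200.status_code

  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = "'Content-Type,Authorization'"
    "method.response.header.Access-Control-Allow-Methods" = "'GET,POST,PUT,DELETE,OPTIONS'"
    "method.response.header.Access-Control-Allow-Origin"  = "'https://dev.interviewprep.onyxdevtutorials.com'"
  }

  depends_on = [aws_api_gateway_integration.proxy_options]
}
```

The method response declares which headers may appear; the integration response supplies their values, which is the split described in the notes above.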
In the deployment resource:

```hcl
  # This effectively triggers a redeployment whenever I do `terraform apply`, even if there
  # are no actual changes to the configuration. I need to experiment with this setting.
  triggers = {
    redeployment = "${timestamp()}"
  }

  # Minimize downtime by creating the new deployment before destroying the old one. And...
  # because I don't think AWS would let me destroy the API given that it's in use by the
  # load balancer.
  lifecycle {
    create_before_destroy = true
  }
}
```
An API Gateway stage is a logical reference to a lifecycle state of your API (for example, dev, test, prod). Stages are used to manage and deploy different versions of your API, allowing you to test changes in a development environment before promoting them to production.

In the method settings resource:

```hcl
  # ...
  # The path and method for which these settings apply. The format is HTTP_METHOD/RESOURCE_PATH.
  # You can use */* to apply the settings to all methods and resources.
  method_path = "*/*"

  settings {
    metrics_enabled    = true   # Enable CloudWatch metrics for the method.
    logging_level      = "INFO" # E.g., INFO, ERROR
    data_trace_enabled = true   # Can generate a large volume of log data, especially for
                                # APIs with high traffic or large payloads.
  }
```
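
The stage resource itself is not shown in these excerpts. For reference, a minimal stage definition looks something like the following; the resource names (`deployment`, `dev`) are assumptions and may not match the repo.

```hcl
resource "aws_api_gateway_stage" "dev" {
  deployment_id = aws_api_gateway_deployment.deployment.id # Deployment resource name is assumed.
  rest_api_id   = aws_api_gateway_rest_api.api.id
  stage_name    = "dev"
}
```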