Production infrastructure & migrations
How the Info PPAAI App runs in AWS, including Aurora, ECS, secrets, and migrations.
This project runs the Info PPAAI App on AWS using:
- VPC + public subnets for the ALB and ECS Fargate tasks
- Aurora PostgreSQL Serverless v2 for the database
- ECS Fargate for the Next.js web app and a one-off `db_migrate` task
- Secrets Manager for `DATABASE_URL` and `BETTER_AUTH_SECRET`
- CloudWatch Logs for the migration task only
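The pieces above can be sanity-checked from a terminal with the AWS CLI. The snippet below is only an orientation sketch; it lists resources generically rather than using the exact names chosen by the Terraform code.

```bash
# Sketch: confirm the main moving parts exist (read-only AWS CLI calls).
aws ecs list-clusters                               # ECS cluster hosting the web + db_migrate tasks
aws rds describe-db-clusters \
  --query 'DBClusters[].{id:DBClusterIdentifier,engine:Engine}'   # Aurora PostgreSQL cluster
aws secretsmanager list-secrets \
  --query 'SecretList[].Name'                       # DATABASE_URL / BETTER_AUTH_SECRET secrets
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[].DNSName'                 # ALB in front of the web service
```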
Database connectivity
- The Aurora cluster is not publicly accessible and lives in the VPC.
- A dedicated `db` security group only allows port 5432 from the ECS tasks' security group.
- The `DATABASE_URL` secret stored in Secrets Manager looks like `postgresql://appuser:<password>@<aurora-endpoint>:5432/info_ppaai_app?ssl=true` (a way to read it back is shown after this list).
- Both the web task and the `db_migrate` task receive this secret as the `DATABASE_URL` environment variable.
- Because the base image does not ship the AWS RDS CA bundle, the tasks set `NODE_TLS_REJECT_UNAUTHORIZED=0` so that Node/bun accepts the RDS certificate. This is acceptable here because the database is only reachable from within the VPC via security group rules, not from the public internet.
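To check what the tasks actually receive, the secret can be read back from Secrets Manager. The secret name below is an assumption; substitute the name created by `infra/secrets.tf`.

```bash
# Sketch: read the DATABASE_URL secret back from Secrets Manager.
# "info-ppaai-app-prod-database-url" is an assumed name; use the real secret ID from Terraform.
aws secretsmanager get-secret-value \
  --secret-id info-ppaai-app-prod-database-url \
  --query SecretString \
  --output text
```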
Migrations inside ECS (db_migrate task)
Migrations are run inside AWS, not from your laptop:
- A dedicated `db_migrate` ECS Fargate task runs `cd packages/db && bun run db:push`.
- It uses the same image as the web service and the same `DATABASE_URL` secret.
- The task is triggered via the Makefile (a rough sketch of the underlying ECS call follows this list):
  - `make ecs-db-migrate` runs the migrations once.
  - `make deploy` builds, pushes, applies the infra, and then runs `db_migrate` as part of a full deploy.
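The `ecs-db-migrate` target presumably wraps something like an ECS `run-task` call. A hedged sketch of that call is shown below; the cluster name, task definition name, and network IDs are placeholders, so check the Makefile and Terraform for the real values.

```bash
# Sketch of what `make ecs-db-migrate` roughly does (names and IDs are placeholders).
aws ecs run-task \
  --cluster info-ppaai-app-prod \
  --task-definition info-ppaai-app-prod-db-migrate \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-xxxxxxxx],securityGroups=[sg-xxxxxxxx],assignPublicIp=ENABLED}'
```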
Debugging migrations
- The `db_migrate` task writes logs to CloudWatch Logs:
  - Log group: `/ecs/info-ppaai-app-prod-db-migrate`
  - Retention: 7 days
- If a migration fails (for example due to SSL / certificate configuration or schema issues), check this log group for the Drizzle / Postgres error message (a tail command is shown below).
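With AWS CLI v2, the log group can be tailed directly from a terminal:

```bash
# Tail the db_migrate log group named above; --follow keeps streaming new events.
aws logs tail /ecs/info-ppaai-app-prod-db-migrate --since 1h --follow
```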
Environment configuration & deploy pipeline
Production configuration is driven by standard env files and passed into AWS via OpenTofu/Terraform and ECS environment variables.
Env files
- `.env`: local development configuration (not used directly in production).
- `.env.production`: authoritative production app configuration (gitignored). Example:

        APP_NAME=info-ppaai
        APP_HOST=ppaai.info.nl
        EMAIL_FROM=no-reply@ppaai.info.nl
        GOOGLE_CLIENT_ID=...
        GOOGLE_CLIENT_SECRET=...
        MICROSOFT_CLIENT_ID=...
        MICROSOFT_CLIENT_SECRET=...

Do not store the production `DATABASE_URL` here. The database URL is generated and stored in AWS Secrets Manager by Terraform (`infra/secrets.tf`) and injected into ECS as a secret.

The root Makefile loads env files in this order for infra commands (a shell sketch of the effective order is shown below):

1. `infra/.env`: AWS credentials for local OpenTofu
2. `.env`: baseline env
3. `.env.production`: overrides `.env` when present

This means local `make plan` / `make apply` / `make deploy` can use the same production configuration as CI.
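As a minimal sketch, and assuming the Makefile simply sources each file when it exists, the effective loading order behaves like this (the real Makefile may differ in detail):

```bash
# Sketch: effective env loading order for infra commands; later files override earlier ones.
set -a                                              # export everything that gets sourced
[ -f infra/.env ]      && . infra/.env              # AWS credentials for local OpenTofu
[ -f .env ]            && . .env                    # baseline env
[ -f .env.production ] && . .env.production         # production overrides
set +a
```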
GitHub Actions: Deploy to AWS
The Deploy to AWS workflow (`.github/workflows/deploy.yml`) expects a single repository secret:

- `PROD_ENV_B64`: base64-encoded contents of `.env.production`.
During the deploy job, the workflow:
- Decodes `PROD_ENV_B64` into `.env.production` inside the `infra` directory.
- Sources this file into the shell (`set -a; . ./.env.production; set +a`).
- Exports selected variables as Terraform inputs (`TF_VAR_*`), which the ECS task definition uses to set environment variables for the app (see the sketch below).
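In shell terms those steps look roughly like the snippet below. The specific `TF_VAR_*` names are illustrative assumptions; check `deploy.yml` for the actual mapping.

```bash
# Sketch of the deploy job's shell steps (run inside the infra directory).
echo "$PROD_ENV_B64" | base64 -d > .env.production   # 1. decode the repository secret

set -a; . ./.env.production; set +a                  # 2. source it into the shell

export TF_VAR_app_name="$APP_NAME"                   # 3. expose selected values to OpenTofu/Terraform
export TF_VAR_app_host="$APP_HOST"                   #    (assumed variable names), which the ECS task
export TF_VAR_email_from="$EMAIL_FROM"               #    definition turns into container env vars
```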
Better Auth and its social providers read from `process.env` (for example `GOOGLE_CLIENT_ID`, `MICROSOFT_CLIENT_ID`) and only enable a provider when the relevant variables are present.
Helper: encoding .env.production for CI
Use the `b64-encode-prod-env` Make target to generate the payload for `PROD_ENV_B64`:

    make b64-encode-prod-env

- On macOS, the base64 string is copied to the clipboard via `pbcopy`.
- On other platforms, the string is printed to stdout.
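For reference, a minimal shell equivalent of this target, assuming it only base64-encodes the file, could look like:

```bash
# Sketch: encode .env.production for the PROD_ENV_B64 secret (mirrors the described Make target).
if [ "$(uname)" = "Darwin" ]; then
  base64 < .env.production | tr -d '\n' | pbcopy     # macOS: copy to clipboard
else
  base64 < .env.production | tr -d '\n'; echo        # other platforms: print to stdout
fi
```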
Paste this value into the `PROD_ENV_B64` repository secret in GitHub.
With this setup, local `make deploy` and the GitHub Actions pipeline both use the same `.env.production` configuration to provision ECS environment variables.