Terraform¶
Overview¶
Infrastructure is managed with Terraform (>= 1.6.0) using the AWS provider (~> 5.0). State is stored remotely in S3 with file-based locking and encryption.
Each environment lives in its own AWS account and has its own state file. Changes are currently applied locally — CI-based apply is planned but not yet implemented.
Repository Structure¶
infra/
├── live/
│   ├── prod/eu-central-1/    # Production root module
│   └── stage/eu-central-1/   # Staging root module
└── modules/
    ├── core/                 # Core API service
    │   ├── api/
    │   └── ecs/
    ├── atrax/                # Atrax crawler/controller
    │   ├── api/
    │   ├── ecr/
    │   ├── ecs/
    │   └── node/
    └── vault/                # Vault analytics pipeline
        ├── clickhouse/
        ├── ecs/
        ├── etl/
        ├── grafana/
        └── vault-api/
Each live/<env>/<region>/ directory is an independent root module with its own backend configuration. You run
terraform init and terraform apply from within that directory.
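Inside a root module, each service module is instantiated with the shared inputs. A minimal sketch of what this looks like (the module name, and the network module whose outputs are referenced, are hypothetical):

```hcl
# live/stage/eu-central-1/main.tf (sketch)
module "vault_etl" {
  source = "../../../modules/vault/etl"

  # Common variables every module receives
  name_prefix = local.name_prefix
  environment = var.environment
  region      = var.region
  group       = "vault"
  vpc_id      = module.network.vpc_id             # hypothetical network module
  subnet_ids  = module.network.private_subnet_ids # hypothetical output name
  base_tags   = local.base_tags
}
```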
State Backend¶
State is stored in per-account S3 buckets with encryption and locking:
| Environment | Bucket | Key |
|---|---|---|
| prod | cookiehub-terraform-state-prod-759286286879 | prod/eu-central-1/terraform.tfstate |
| stage | cookiehub-terraform-state-stage-258618559895 | stage/eu-central-1/terraform.tfstate |
Both backends set use_lockfile = true and encrypt = true (AES256).
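Put together, the backend block in each root module looks roughly like this (stage values shown, taken from the table above; the region argument is an assumption):

```hcl
terraform {
  backend "s3" {
    bucket       = "cookiehub-terraform-state-stage-258618559895"
    key          = "stage/eu-central-1/terraform.tfstate"
    region       = "eu-central-1"
    encrypt      = true # AES256 server-side encryption
    use_lockfile = true # S3-native state locking
  }
}
```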
Naming Convention¶
All resource names follow the pattern {env}-{region_short}-{group}-{component}. The prefix is generated in locals.tf in each root module:
locals {
  # Short region codes used in resource names
  region_short = {
    "eu-central-1" = "euc1"
    "eu-west-1"    = "euw1"
  }

  name_prefix = "${var.environment}-${local.region_short[var.region]}-${var.group}"

  base_tags = {
    Environment = var.environment
    Region      = var.region
    Group       = var.group
    ManagedBy   = "terraform"
  }
}
The name_prefix and base_tags are passed down to every module.
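Each module then appends its own component name to complete the pattern. A hypothetical resource inside one of the atrax modules might look like:

```hcl
resource "aws_ecr_repository" "this" {
  # Yields e.g. "stage-euc1-atrax-api", matching {env}-{region_short}-{group}-{component}
  name = "${var.name_prefix}-api"
  tags = var.base_tags
}
```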
Module Conventions¶
Each module follows a standard file layout:
| File | Purpose |
|---|---|
| main.tf | Resource definitions |
| variables.tf | Input variables |
| outputs.tf | Output values |
| versions.tf | Provider constraints |
| data.tf | Data sources (when needed) |
Modules are small and purpose-specific. A typical service module manages:
- ECS task definition (with SSM secrets)
- ALB target group and listener rules
- Route53 DNS record
- IAM roles and security group
- CloudWatch log group
All modules receive common variables: name_prefix, environment, region, group, vpc_id, subnet_ids, and
base_tags.
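Declared once per module, these common inputs could be sketched in variables.tf like so (types are assumptions, not taken from the repo):

```hcl
variable "name_prefix" { type = string }
variable "environment" { type = string }
variable "region"      { type = string }
variable "group"       { type = string }
variable "vpc_id"      { type = string }
variable "subnet_ids"  { type = list(string) }
variable "base_tags"   { type = map(string) }
```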
How to Apply Changes¶
Prerequisites¶
- AWS CLI configured with named profiles for each account
- Terraform >= 1.6.0 installed
Workflow¶
# 1. Set the right AWS profile
export AWS_PROFILE=stage # or prod
# 2. Navigate to the environment
cd live/stage/eu-central-1
# 3. Initialize (first time or after adding modules)
terraform init
# 4. Review changes
terraform plan
# 5. Apply
terraform apply
Manual applies only
There is no CI pipeline for Terraform yet. All changes are applied locally. Always run
terraform plan first and review the output carefully before applying.
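Because applies are manual, saving the reviewed plan and applying exactly that plan file guarantees nothing changes between review and apply (the tfplan filename is just a convention):

```shell
terraform plan -out=tfplan   # write the reviewed plan to a file
terraform apply tfplan       # apply exactly what was reviewed
```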
Environment Differences¶
| | Production | Stage |
|---|---|---|
| AWS Account | 759286286879 | 258618559895 |
| Vault services | Yes | Yes |
| Atrax services | No | Yes |
| Core API | Not yet | Yes (created manually) |
| Public ALB | No (internal only) | Yes |
| ECS instance type | c7i.xlarge | t3.small / t3.medium |
What's Not in Terraform¶
Some resources were created manually and don't have corresponding .tf files:
- Core API ECS service in stage — task definition, service, target group, and ALB rules were created by hand (2026-03-25). The plan is to codify these into modules/core/.
- GitHub Actions OIDC provider and deploy roles — created via console.
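When codifying the hand-made resources, Terraform's import blocks (available since 1.5) let the configuration adopt them without destroying and recreating anything. A sketch with hypothetical resource addresses and names:

```hcl
import {
  # aws_ecs_service imports use the "cluster-name/service-name" ID format;
  # the names below are placeholders, not the real cluster/service names
  to = aws_ecs_service.core_api
  id = "stage-euc1-core/stage-euc1-core-api"
}
```

Running terraform plan after adding the import block shows the resource being brought under management alongside any diff against its live configuration.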