Environments: Dev, Staging, Production
This chapter covers the two dominant patterns for managing multiple environments, when each fits, and the trade-offs nobody warns you about.
The Problem
You have one Terraform config. You want to run it in dev, staging, and production. Each environment has:
- Different sizes (prod uses bigger instances).
- Different names (dev bucket is `notes-dev`, prod is `notes-prod`).
- Different credentials, possibly different AWS accounts.
- Different state (you don't want `terraform apply` on dev to even see prod).
Terraform doesn't prescribe an answer. The community has converged on two patterns. Both work; one is more widely recommended.
Pattern 1: Workspaces
One Terraform config, multiple Terraform workspaces. Each workspace has its own state key in the same backend.
```shell
terraform workspace new dev
terraform workspace new staging
terraform workspace new prod

terraform workspace select dev
terraform apply -var-file=envs/dev.tfvars

terraform workspace select prod
terraform apply -var-file=envs/prod.tfvars
```
Directory structure:
```
notes/
├── main.tf
├── variables.tf
├── outputs.tf
├── versions.tf
└── envs/
    ├── dev.tfvars
    ├── staging.tfvars
    └── prod.tfvars
```
Inside the config, reference the workspace:
```hcl
locals {
  environment = terraform.workspace
}

resource "aws_s3_bucket" "notes" {
  bucket = "notes-${local.environment}"
}
```
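Per-environment differences beyond the name usually end up in a lookup map keyed by the workspace, rather than scattered conditionals. A sketch (the instance sizes are hypothetical):

```hcl
locals {
  # Hypothetical sizing; the third argument to lookup() covers
  # ephemeral workspaces (e.g. pr-123) that have no entry in the map.
  instance_types = {
    dev     = "t3.micro"
    staging = "t3.small"
    prod    = "m5.large"
  }

  instance_type = lookup(local.instance_types, terraform.workspace, "t3.micro")
}
```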
Pros:
- One copy of the code.
- Easy to add ephemeral environments (`terraform workspace new pr-123`).
Cons:
- All environments use the same code at the same moment. You can't stage a change in dev and leave prod alone.
- One bad apply in "prod" workspace hits production immediately.
- Rolling back requires state surgery.
- Cross-environment differences get crammed into conditionals (`var.environment == "prod" ? big : small`).
- If prod and dev need different provider versions (rare but real), workspaces can't do it.
Workspaces work best for truly ephemeral environments: feature branches, PR previews, short-lived test stacks.
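One mitigation for the wrong-workspace footgun is a wrapper that refuses to apply when the selected workspace and the tfvars file disagree. A minimal sketch (the helper name is made up; a real wrapper would obtain the workspace from `terraform workspace show` rather than take it as an argument):

```shell
#!/bin/sh
# Sketch: compare the selected workspace against the tfvars file
# about to be applied, and complain on mismatch. A real wrapper
# would get the workspace from `terraform workspace show`.
check_workspace() {
  # $1 = selected workspace, $2 = tfvars file being applied
  case "$2" in
    *"$1".tfvars) echo "ok" ;;
    *) echo "mismatch: workspace $1 vs var file $2" ;;
  esac
}

check_workspace dev envs/dev.tfvars    # prints "ok"
check_workspace prod envs/dev.tfvars   # prints a mismatch warning
```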
Pattern 2: Directory per Environment
Separate directory per environment. Each is its own Terraform config with its own state.
```
notes/
├── modules/
│   └── notes/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
└── environments/
    ├── dev/
    │   ├── main.tf            # module "notes" { source = "../../modules/notes" ... }
    │   ├── backend.tf         # backend "s3" { key = "notes/dev.tfstate" ... }
    │   └── terraform.tfvars
    ├── staging/
    │   ├── main.tf
    │   ├── backend.tf
    │   └── terraform.tfvars
    └── prod/
        ├── main.tf
        ├── backend.tf
        └── terraform.tfvars
```
Each environment's main.tf:
```hcl
# environments/prod/main.tf
provider "aws" {
  region = "us-east-1"
}

module "notes" {
  source         = "../../modules/notes"
  environment    = "prod"
  retention_days = 365
  instance_type  = "m5.large"
}
```
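The module side declares the inputs each environment sets. A sketch of what `modules/notes/variables.tf` might look like (the names match the call above; the defaults are illustrative):

```hcl
# modules/notes/variables.tf -- sketch; defaults are illustrative
variable "environment" {
  type = string
}

variable "retention_days" {
  type    = number
  default = 30
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}
```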
To deploy to prod:
```shell
cd environments/prod
terraform init
terraform apply
```
Pros:
- Environments are truly isolated.
- Changes to one env don't touch others until the env's own directory is applied.
- You can have different provider versions, different modules, different code per env.
- Git diffs per environment are obvious.
- Blast radius of a mistake is one environment.
Cons:
- More files and directories. Some duplication in backend/provider config.
- Promoting a change from dev → prod means applying in each directory.
Why Directory per Environment Usually Wins
The isolation is what you want in practice. "I can change dev without risking prod" is the whole point. Workspaces give you the opposite: any change to the config is a change to all environments, the moment you terraform apply in that workspace.
Most production-grade teams use directory per environment.
Reducing Duplication
The directory-per-env pattern repeats some boilerplate. Options:
Shared Modules, Thin Environment Configs
Put as much as possible in shared modules. Each environment is a thin wrapper that sets variables.
```hcl
# environments/prod/main.tf (maybe 20 lines)
terraform {
  required_version = ">= 1.6"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    key = "notes/prod.tfstate"
    # ...
  }
}

provider "aws" {
  region = "us-east-1"
}

module "platform" {
  source      = "../../modules/platform"
  environment = "prod"
  # ...prod-specific settings
}
```
Terragrunt
A wrapper tool that removes the duplication without abandoning directory-per-env. Chapter 11 covers it.
Symlinks (Hack)
Some teams symlink versions.tf across environments so there's one file. Works. Most engineers dislike it.
Separate AWS Accounts per Environment
Production-grade setups often use a separate AWS account per environment:
- `account-dev`: IAM users, dev resources, dev state bucket.
- `account-staging`: staging resources, staging state bucket.
- `account-prod`: prod resources, prod state bucket.
Benefits:
- True blast radius isolation. Can't accidentally delete prod from dev.
- Billing per environment.
- Separate IAM, separate CloudTrail, separate everything.
With directory per env, each directory assumes its own AWS account. Your AWS CLI profile or IAM role determines the account. Provider config:
```hcl
provider "aws" {
  region  = "us-east-1"
  profile = "prod" # or assume_role for automation
}
```
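For automation, the named profile is typically replaced with an `assume_role` block in the provider. A sketch (the account ID and role name are made up):

```hcl
# Sketch: CI assumes a per-environment IAM role instead of using
# a local named profile. The role ARN is hypothetical.
provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/terraform-prod"
  }
}
```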
State Separation
Whatever pattern you pick, each environment must have its own state. Not optional.
- With workspaces: automatic (each workspace has its own state key).
- With directory per env: each directory's `backend` block has a distinct `key`.
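Concretely, for the directory-per-env case, a sketch of `environments/dev/backend.tf` (the bucket name and region are hypothetical; only the key differs between environments):

```hcl
# environments/dev/backend.tf -- sketch; bucket and region are
# placeholders. Only the key varies per environment.
terraform {
  backend "s3" {
    bucket = "notes-terraform-state"
    key    = "notes/dev.tfstate" # prod uses notes/prod.tfstate
    region = "us-east-1"
  }
}
```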
Never one state for all environments. You'll regret it.
tfvars per Environment
With directory per env, each directory has its own terraform.tfvars (auto-loaded). Optionally, you can keep a shared file and override:
```
environments/
├── common.tfvars            # applies to every env
├── dev/
│   └── terraform.tfvars     # dev-specific overrides
└── prod/
    └── terraform.tfvars     # prod-specific overrides
```
Apply with both files passed explicitly. Command-line `-var-file` arguments override the auto-loaded `terraform.tfvars`, and later `-var-file` arguments override earlier ones, so pass the env-specific file last if its values should win:

```shell
cd environments/prod
terraform apply -var-file=../common.tfvars -var-file=terraform.tfvars
```
Or use Terragrunt for this explicitly (Chapter 11).
Promoting Changes
Workflow for a change to infrastructure:
1. Edit the shared module (or copy the change into each env if not shared).
2. Apply in dev. Verify.
3. Apply in staging. Verify.
4. Apply in prod.
PR workflow: CI plans in all environments; reviewer checks both the diff and the plan output.
Don't rubber-stamp environments. The whole point of multi-env is to catch issues before prod.
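The promotion order can be made explicit in CI. A sketch that only prints the commands a pipeline would run, in order (directory names follow this chapter's layout; a real job would execute these and attach each plan to the PR):

```shell
#!/bin/sh
# Sketch: emit plan commands in promotion order (dev -> staging -> prod).
# A real CI job would run these instead of printing them.
plan_order() {
  for env in dev staging prod; do
    echo "cd environments/$env && terraform plan -input=false"
  done
}

plan_order
```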
What About Branches per Environment?
Some teams suggest: main is prod, staging branch is staging, dev branch is dev. Apply from each branch.
Don't. Problems:
- Merge conflicts between branches accumulate.
- "Promoting" a change is a git merge, which can do surprising things.
- Rolling back is a revert across multiple branches.
- No clear "what's in prod" commit.
Use one branch (main), with directories per environment. The directory you apply in is the environment.
Common Pitfalls
**One state for all environments.** A bad plan in dev can touch prod resources. Fatal. Separate state per env, always.

**Hardcoded environment in shared modules.** A module that checks `var.environment == "prod"` everywhere is a wrapper around two distinct configs. Split it.

**Workspaces plus a team of 10.** The "select the right workspace" step is a footgun. Directory per env is more explicit.

**Copy-pasting changes to each env directory.** Put shared logic in a module. Environments should be thin.

**Using the same backend key for different envs.** Self-explanatory.

**Mixing up dev and prod credentials on your laptop.** Use AWS profiles and set them explicitly per shell session (`export AWS_PROFILE=prod` at the start of a prod session); mistakes become harder to make silently.
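A cheap safety net for the credentials pitfall is to surface the active profile loudly, for example from your shell prompt. A sketch (the function name is made up; the prompt wiring is left out):

```shell
# Sketch: a helper that shouts when the prod profile is active.
# Hook it into your shell prompt so it is visible before every command.
aws_profile_warn() {
  case "${AWS_PROFILE:-}" in
    prod) echo "!! PROD !!" ;;
    "")   echo "no-profile" ;;
    *)    echo "$AWS_PROFILE" ;;
  esac
}

AWS_PROFILE=dev
aws_profile_warn    # prints "dev"
```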
Next Steps
Continue to 09-ci-cd.md to automate plan and apply with a real CI pipeline.