# Remote Backends: Sharing State Safely
This chapter shows you how to store state in S3 with DynamoDB locking so a team can collaborate without overwriting each other's work.
## Why Remote State
Local state works until two people touch it:
- Ada applies on her laptop. State is on her laptop.
- Grace pulls latest, runs plan. Terraform doesn't know what exists; state isn't in the repo (and shouldn't be). Grace's plan proposes to recreate everything.
Remote state solves this: state lives in a shared location both engineers can access, with locking so they can't collide.
Three benefits:
- Shared. Everyone sees the same state.
- Locked. Concurrent applies fail fast instead of corrupting state.
- Durable. Cloud storage with versioning survives laptop failures.
## The S3 Backend

AWS-native and the most common choice. State file in an S3 bucket, lock in DynamoDB.

Config goes in the terraform block:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-company-tfstate"
    key            = "projects/notes/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "tf-locks"
  }
}
```

- `bucket`: the S3 bucket holding state.
- `key`: the path within the bucket. Convention: one key per project or environment.
- `region`: where the bucket lives.
- `encrypt`: server-side encryption. Always `true`.
- `dynamodb_table`: the lock table. Without it, no locking.
After changing the backend, run:

```sh
terraform init
```

Terraform will detect the backend change and offer to migrate.
## The Bootstrap Problem
The S3 bucket and DynamoDB table need to exist before you can use them as a backend. Chicken and egg.
Solutions:
### Bootstrap by Hand
Create the bucket and DynamoDB table manually (or with a small throwaway Terraform config using local state). One-time cost.
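The manual route looks roughly like this with the AWS CLI, reusing this chapter's example names (adjust bucket, table, and region to taste):

```sh
# Create the state bucket (us-east-1 needs no LocationConstraint)
aws s3api create-bucket --bucket my-company-tfstate --region us-east-1

# Turn on versioning and default encryption
aws s3api put-bucket-versioning --bucket my-company-tfstate \
  --versioning-configuration Status=Enabled
aws s3api put-bucket-encryption --bucket my-company-tfstate \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Create the lock table; LockID is the hash key Terraform expects
aws dynamodb create-table --table-name tf-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```

These commands require AWS credentials with permission to create the bucket and table; they are the one-time cost mentioned above.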
### A Separate "Bootstrap" Terraform Config
Keep a small repo (or directory) that manages the state bucket and lock table with local state. Apply it once, then forget.
```hcl
# bootstrap/main.tf
resource "aws_s3_bucket" "tfstate" {
  bucket = "my-company-tfstate"
}

resource "aws_s3_bucket_versioning" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_dynamodb_table" "tf_locks" {
  name         = "tf-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```
Apply once with local state. The bucket and table now exist. Every other project uses them as a backend.
### Use HCP Terraform / Terraform Cloud
Their hosted backend is a managed alternative. Zero bootstrap.
## Versioning
Enable S3 bucket versioning on the state bucket. Why:
- Every state write is a new object version.
- If state gets corrupted (bug, bad manual edit), restore a previous version.
- The overhead is trivial.
Shown above in the bootstrap example. Don't skip it.
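If a bad write does land, recovery is a couple of CLI calls. A sketch using this chapter's example names (`VERSION_ID` is a placeholder from the listing output):

```sh
# List the versions of the state object
aws s3api list-object-versions --bucket my-company-tfstate \
  --prefix projects/notes/terraform.tfstate

# Download a known-good version for inspection
aws s3api get-object --bucket my-company-tfstate \
  --key projects/notes/terraform.tfstate \
  --version-id VERSION_ID terraform.tfstate.restored
```

After verifying the restored file, upload it back as the current object (or use `terraform state push` with care).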
## Migrating from Local to Remote
Already have a local terraform.tfstate? Add the backend block, init, migrate:
```sh
# edit main.tf to add the backend block
terraform init
```

```
Initializing the backend...

Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "s3" backend. No existing state was found in the newly
  configured "s3" backend. Do you want to copy this state to the new "s3"
  backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes
```
Answer yes. Local state is uploaded. Terraform is now using remote state.
You can safely delete terraform.tfstate and terraform.tfstate.backup locally after migration. Commit the change to the .tf files.
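One way to sanity-check the migration before deleting anything (bucket and key are this chapter's example names):

```sh
# Should list the same resources as before the migration
terraform state list

# The state object should now exist in the bucket
aws s3 ls s3://my-company-tfstate/projects/notes/
```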
## Partial Backend Config
Hardcoding the backend in .tf files is awkward when bucket names differ per environment. Use a backend block with minimal config, and pass the rest at init time:
```hcl
terraform {
  backend "s3" {
    # config comes from -backend-config
  }
}
```
At init:
```sh
terraform init \
  -backend-config="bucket=my-company-tfstate" \
  -backend-config="key=projects/notes/terraform.tfstate" \
  -backend-config="region=us-east-1" \
  -backend-config="dynamodb_table=tf-locks"
```
Or via a file:
```hcl
# backends/prod.conf
bucket         = "my-company-tfstate"
key            = "projects/notes/prod.tfstate"
region         = "us-east-1"
dynamodb_table = "tf-locks"
```

```sh
terraform init -backend-config=backends/prod.conf
```
## Workspaces
Workspaces let one backend hold multiple states. Switch between them:
```sh
terraform workspace list
terraform workspace new staging
terraform workspace select staging
terraform apply    # applies the staging state

terraform workspace select production
terraform apply    # applies the production state
```
With the S3 backend, each workspace gets its own state key:
```
env:/staging/projects/notes/terraform.tfstate
env:/production/projects/notes/terraform.tfstate
```
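Inside the config, the current workspace name is available as `terraform.workspace`, which is how a single config varies per environment. A minimal sketch (AMI and instance sizes are illustrative placeholders):

```hcl
locals {
  # "default", "staging", "production", ... whatever is currently selected
  env = terraform.workspace
}

resource "aws_instance" "web" {
  ami = "ami-0abc1234def567890" # illustrative placeholder

  # illustrative: size instances down outside production
  instance_type = local.env == "production" ? "t3.large" : "t3.micro"

  tags = {
    Environment = local.env
  }
}
```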
### Why Workspaces Are Often Not the Answer
Workspaces share the same .tf code. That sounds like a feature: same config, different environments. In practice:
- All environments live in one config. A bug shipped to dev ships the same config to prod.
- You can't easily pin different provider versions per environment.
- "Which environments does this PR touch?" is invisible in review.
- It's easy to forget which workspace is selected and run `terraform apply` against the wrong one.
For real multi-environment work, many teams prefer separate directories per environment, each with its own backend key. Chapter 8 covers this.
Workspaces are fine for simple cases (a short-lived ephemeral environment per PR). For dev/staging/prod, think twice.
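The directory-per-environment alternative looks roughly like this (names illustrative; Chapter 8 goes deeper):

```
environments/
  dev/
    main.tf        # backend key: projects/notes/dev.tfstate
  staging/
    main.tf        # backend key: projects/notes/staging.tfstate
  production/
    main.tf        # backend key: projects/notes/prod.tfstate
modules/
  notes/           # shared module every environment consumes
```

Each directory has its own backend key and its own plan/apply, so a PR diff shows exactly which environments change.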
## Other Backends
S3 is common. Others exist:
- `azurerm`: Azure Blob Storage.
- `gcs`: Google Cloud Storage.
- `remote`: HCP Terraform / Terraform Enterprise.
- `consul`: HashiCorp Consul.
- `etcdv3`: etcd cluster (removed in Terraform 1.3; relevant only on older versions).
- `kubernetes`: a Kubernetes Secret.
- `http`: a generic HTTP endpoint.
Pick based on your cloud. AWS → S3. GCP → GCS. Azure → azurerm. Mixed cloud → S3 still works, or use HCP.
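For comparison, the GCS equivalent of the S3 block is nearly identical (bucket name illustrative; GCS object locking is built in, so there is no lock-table argument):

```hcl
terraform {
  backend "gcs" {
    bucket = "my-company-tfstate"  # illustrative bucket name
    prefix = "projects/notes"      # like `key`, but a prefix; workspaces nest under it
  }
}
```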
## The HCP Terraform Backend
Managed, hosted by HashiCorp.
```hcl
terraform {
  cloud {
    organization = "your-org"

    workspaces {
      name = "notes-prod"
    }
  }
}
```
Benefits: UI for runs, built-in policy checks (Sentinel), run history, variable management, VCS integration.
Costs: pricing past the free tier, vendor lock-in to HashiCorp.
Fine choice if you want someone else to operate the state infrastructure.
## IAM Permissions for State
The identity running Terraform needs:
- `s3:GetObject`, `s3:PutObject`, `s3:DeleteObject` on the state key.
- `s3:ListBucket` on the bucket.
- `dynamodb:GetItem`, `dynamodb:PutItem`, `dynamodb:DeleteItem` on the lock table.
Plus whatever permissions Terraform needs for the resources it manages (much broader).
A minimal state-access policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-company-tfstate"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-company-tfstate/projects/notes/*"
    },
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:*:table/tf-locks"
    }
  ]
}
```
Scope by key prefix so each project only touches its own state.
## Reading Other Configs' State
Sometimes project A needs an output of project B. For that, use the `terraform_remote_state` data source:
```hcl
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-company-tfstate"
    key    = "projects/network/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "web" {
  subnet_id = data.terraform_remote_state.network.outputs.public_subnet_id
}
```
Use sparingly. It creates a run-time dependency between projects; if project B changes its outputs, project A breaks. Prefer explicit interfaces (SSM Parameter Store, Secrets Manager) when coupling matters.
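The SSM Parameter Store alternative makes the interface explicit. A sketch (the parameter name and resource references are illustrative): project B publishes the value, project A reads it without ever touching project B's state:

```hcl
# In project B (network): publish the subnet ID as a parameter
resource "aws_ssm_parameter" "public_subnet_id" {
  name  = "/network/public_subnet_id"  # illustrative parameter name
  type  = "String"
  value = aws_subnet.public.id
}

# In project A (notes): read the parameter, not the other project's state
data "aws_ssm_parameter" "public_subnet_id" {
  name = "/network/public_subnet_id"
}

resource "aws_instance" "web" {
  subnet_id = data.aws_ssm_parameter.public_subnet_id.value
}
```

The parameter name becomes a deliberate contract between the two projects, rather than an incidental coupling to state file internals.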
## Common Pitfalls

- **No locking.** S3 alone gives you no locking. Add DynamoDB.
- **No versioning.** Without versioning, a bad state write is unrecoverable. Enable it.
- **Public state bucket.** State is sensitive. Enable the public access block. Deny public ACLs.
- **Single state for everything.** One giant state for all infrastructure means every apply locks the world. Split by project or environment.
- **Mixing local and remote state.** Keep it one or the other. Migrations work; oscillating doesn't.
- **Sharing backends across teams with no isolation.** One team's bad plan can lock another team's state. Separate keys, separate IAM scopes.
## Next Steps
Continue to 08-environments.md to manage dev, staging, and production without copy-pasting your repo three times.