Introduction: What Terraform Is and Why It Exists
This chapter explains what Terraform is, what Infrastructure as Code solves, and walks you through your first apply on AWS.
The Problem Infrastructure as Code Solves
You log into the AWS console to set up a new service. You click: create a VPC, add subnets, configure a NAT gateway, launch some EC2s, put a load balancer in front, open some security groups, point a Route 53 record at the load balancer.
Three months later, you need to do it again for staging. You open the console. You click. You remember most of it. You get the subnet CIDRs slightly wrong. The load balancer has a different idle timeout than prod.
Six months later, a new engineer joins and asks, "how did this get built?" You don't remember. Nobody remembers. The console has no audit log of why you chose an m5.large over a t3.medium.
This is the problem Infrastructure as Code solves. Instead of clicking, you write code that describes the infrastructure you want. You commit it. You review it. You apply it deterministically. Staging is a copy of prod that differs only in the variables you passed. New engineers read the code.
IaC is not optional at scale. It is optional for hobby projects. It pays for itself the first time you need to rebuild something.
What Terraform Is
Terraform is a tool from HashiCorp (with a community fork, OpenTofu, covered below) that reads a declarative configuration (a description of what you want the world to look like) and makes the cloud match.
Key properties:
- Declarative. You describe the desired state. Terraform figures out what API calls to make.
- Stateful. Terraform keeps a record of what it has created, so it knows what to change, add, or delete.
- Provider-based. A plugin system means Terraform can manage AWS, GCP, Azure, Cloudflare, GitHub, Kubernetes, Datadog, and hundreds of other things through one interface.
- Plan-before-apply. Terraform always shows you what it will change before it changes it.
You write files ending in .tf. Each file is a chunk of configuration. Terraform reads them all together, builds a dependency graph, and makes the world match.
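Cross-resource references are what create the edges in that graph. A sketch (resource names are illustrative, not from a real project):

```hcl
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket" # illustrative name
}

resource "aws_s3_bucket_versioning" "logs" {
  # Referencing the bucket's Terraform address creates a dependency edge:
  # Terraform now knows it must create the bucket before this resource.
  bucket = aws_s3_bucket.logs.id

  versioning_configuration {
    status = "Enabled"
  }
}
```

You rarely declare dependencies by hand; references like aws_s3_bucket.logs.id are how Terraform infers ordering.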
Terraform vs Alternatives
The shape of the decision:
- CloudFormation (AWS-only): similar to Terraform, tightly integrated with AWS. Use it if you're 100% AWS and want no extra dependency. Terraform wins when you have multiple providers or want a larger community.
- AWS CDK (TypeScript, Python, etc.): real programming languages compile to CloudFormation. Good if your team prefers code over HCL. Less declarative; more runtime magic.
- Pulumi: like CDK but multi-cloud. Same trade-offs.
- Ansible, Chef, Puppet: configuration management for servers. Complementary, not competitive; you often use Terraform to create VMs and Ansible to configure them.
- Kubernetes manifests, Helm: for workloads on Kubernetes, not for provisioning the cluster itself.
Terraform is the most widely used IaC tool. The job market reflects that. It's the default unless you have a specific reason to pick something else.
OpenTofu
In August 2023, HashiCorp changed Terraform's license from the MPL 2.0 (a standard open-source license) to the BSL (Business Source License, which restricts some competing commercial uses). The community forked the last MPL release as OpenTofu, an open-source drop-in replacement.
OpenTofu 1.6 and Terraform 1.6 are compatible. Configurations that work on one work on the other. Commands are identical (tofu init vs terraform init). The ecosystem is gradually recognizing both.
For this tutorial, we'll say "terraform" throughout. If you're using OpenTofu, substitute tofu and everything else is the same.
Which should you use? OpenTofu if you want true open source and community governance. Terraform (HashiCorp) if you want HCP Terraform (the hosted service) or you're in an enterprise already on that path. For learning, either works.
Installing
macOS
brew install terraform # or: brew install opentofu
Linux (Ubuntu/Debian)
wget -O- https://apt.releases.hashicorp.com/gpg | \
sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
Version Management with tfenv
For switching between versions (you'll need this sooner than you think):
brew install tfenv
tfenv install 1.6.0
tfenv use 1.6.0
Check:
terraform version
# Terraform v1.6.0 on darwin_arm64
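tfenv can also pin a version per project: if a .terraform-version file exists in the directory, tfenv switches to that version automatically. A sketch:

```shell
echo "1.6.0" > .terraform-version
terraform version   # tfenv resolves 1.6.0 for this directory
```

Commit .terraform-version so the whole team runs the same version.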
AWS Credentials
Terraform uses the standard AWS credential chain: environment variables, a credentials file, or IAM role (on EC2, EKS, etc.).
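For reference, the environment-variable form of that chain looks like this (placeholder values; never commit real keys):

```shell
export AWS_ACCESS_KEY_ID="AKIA..."   # placeholder
export AWS_SECRET_ACCESS_KEY="..."   # placeholder
export AWS_DEFAULT_REGION="us-east-1"
```

Environment variables take precedence over the credentials file, which is handy for switching accounts in a single shell session.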
The Quick Way
Install the AWS CLI, run aws configure:
aws configure
# AWS Access Key ID: AKIA...
# AWS Secret Access Key: ...
# Default region name: us-east-1
# Default output format: json
Verify:
aws sts get-caller-identity
For Production
Use IAM Identity Center (SSO), not long-lived access keys. Or, for CI, use OIDC federation so no secret is ever stored. Chapter 9 covers this properly.
For learning, a plain IAM user with programmatic access is fine.
Your First Config
Create a directory, make a file, run a command. That's it.
mkdir tf-intro && cd tf-intro
main.tf:
terraform {
required_version = ">= 1.6"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {
region = "us-east-1"
}
resource "aws_s3_bucket" "notes" {
bucket = "ada-notes-2026-04-19" # must be globally unique; edit this
}
Replace the bucket name with something unique (S3 bucket names are global).
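One common way to sidestep manual renaming is a random suffix. A sketch, assuming you also add hashicorp/random to required_providers:

```hcl
resource "random_id" "suffix" {
  byte_length = 4 # yields 8 hex characters
}

resource "aws_s3_bucket" "notes" {
  # e.g. "ada-notes-1a2b3c4d"; bucket names must still be lowercase
  bucket = "ada-notes-${random_id.suffix.hex}"
}
```

The suffix is generated once at first apply and stored in state, so the bucket name stays stable across subsequent runs.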
The Core Loop: init, plan, apply, destroy
init
Terraform downloads the AWS provider plugin and initializes the working directory.
terraform init
Output is verbose. The key line is "Terraform has been successfully initialized!"
A .terraform/ directory appears, plus a .terraform.lock.hcl file (commit this to git).
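This is a good moment to set up a .gitignore; a minimal one for a Terraform directory:

```
.terraform/
*.tfstate
*.tfstate.backup
```

The .terraform/ directory is a local cache and should never be committed; the lock file, by contrast, should be.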
plan
Terraform compares your config to what exists (nothing, yet) and prints a diff.
terraform plan
Terraform will perform the following actions:
# aws_s3_bucket.notes will be created
+ resource "aws_s3_bucket" "notes" {
+ bucket = "ada-notes-2026-04-19"
+ ...
}
Plan: 1 to add, 0 to change, 0 to destroy.
Read every plan carefully. A plan that surprises you is a plan you shouldn't apply.
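When the gap between planning and applying matters (code review, CI pipelines), you can save the plan and apply exactly that; both flags below are standard Terraform CLI:

```shell
terraform plan -out=tfplan   # save the computed plan to a file
terraform apply tfplan       # apply exactly what was planned
```

Applying a saved plan skips the interactive confirmation, because you already reviewed that exact plan.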
apply
terraform apply
Terraform prints the same plan and asks for confirmation. Type yes. In a few seconds:
aws_s3_bucket.notes: Creating...
aws_s3_bucket.notes: Creation complete after 2s [id=ada-notes-2026-04-19]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
You now have an S3 bucket. Check the AWS console; it's there.
Terraform also wrote a file called terraform.tfstate. That's its record of what exists. Chapter 5 covers state in detail.
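You can inspect that record without opening the file; both commands below are built in:

```shell
terraform state list   # addresses of everything Terraform manages
terraform show         # recorded attributes, human-readable
```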
destroy
terraform destroy
Same pattern: shows the plan, asks for confirmation, deletes the bucket.
Always destroy what you don't need. Unused AWS resources cost money.
What Just Happened
Behind the scenes, Terraform:
- Read your .tf files.
- Loaded the AWS provider.
- Built a dependency graph (trivial here: one node).
- Called the AWS API (CreateBucket).
- Recorded the result in terraform.tfstate.
- Later, compared the config to state and destroyed the resource.
Every future chapter is variations and extensions of this loop.
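You can even look at the dependency graph Terraform built: terraform graph emits DOT format, and rendering it assumes Graphviz is installed:

```shell
terraform graph | dot -Tpng > graph.png   # 'dot' comes from graphviz
```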
Common Pitfalls
S3 bucket name already taken. Bucket names are global. Edit until it's unique.
Wrong region. If you omit region in the provider block, Terraform falls back to your environment (AWS_REGION or the AWS CLI's configured default). Watch the plan output to make sure resources land where you expect.
"Access denied" errors. Your IAM user needs permissions for whatever resource you're creating. For learning, PowerUserAccess is fine; in production, scope permissions tighter.
Running apply without reading the plan. Don't. The plan is the single most important output Terraform gives you. Read it.
Committing terraform.tfstate to git. Don't. It contains sensitive data and the whole team will fight over it. Chapters 5 and 7 cover where state belongs.
Next Steps
Continue to 02-hcl-basics.md to read and write the configuration language.