Chapter 3: Compute

Azure offers multiple compute options to match different workload types, from raw VMs to fully managed platforms.

Compute Options (control vs. managed spectrum)
┌───────────────────────────────────────────────────────────────┐
│  More control ◄──────────────────────────────► More managed   │
│                                                               │
│  Bare Metal → VMs → Container Instances → AKS → App Service  │
│                                                 → Functions   │
└───────────────────────────────────────────────────────────────┘

Azure Virtual Machines

Virtual Machines (VMs) are IaaS: you manage the OS and everything on top of it. Use them when you need full control over the OS, when you run legacy applications, or for lift-and-shift migrations of on-premises servers.

VM Series

| Series | Purpose | Example SKUs |
|--------|---------|--------------|
| B (Burstable) | Dev/test, low-traffic apps | B2s, B4ms |
| D (General Purpose) | Web servers, small databases | D2s_v5, D4s_v5 |
| E (Memory Optimised) | In-memory databases, analytics | E4s_v5, E8s_v5 |
| F (Compute Optimised) | CPU-intensive batch jobs | F4s_v2, F8s_v2 |
| N (GPU) | ML training, rendering | NC6s_v3, ND40rs_v2 |
| M (Memory Heavy) | SAP HANA, very large DBs | M128ms |
| L (Storage Optimised) | Big data, NoSQL | L8s_v3 |
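As a rough rule of thumb, the table above can be encoded as a small helper. The mapping and example SKUs come straight from the table; the workload labels are illustrative, and the real availability check is `az vm list-sizes --location <region> --output table`.

```shell
# Map a workload type to a VM series, mirroring the table above.
# Illustrative only: confirm regional availability and current pricing first.
recommend_series() {
  case "$1" in
    dev-test)     echo "B (e.g. Standard_B2s)" ;;
    web)          echo "D (e.g. Standard_D2s_v5)" ;;
    in-memory-db) echo "E (e.g. Standard_E4s_v5)" ;;
    batch-cpu)    echo "F (e.g. Standard_F4s_v2)" ;;
    ml-training)  echo "N (e.g. Standard_NC6s_v3)" ;;
    sap-hana)     echo "M (e.g. Standard_M128ms)" ;;
    big-data)     echo "L (e.g. Standard_L8s_v3)" ;;
    *)            echo "unknown workload" ;;
  esac
}

recommend_series web   # → D (e.g. Standard_D2s_v5)
```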

Creating a VM

# Create a Linux VM
az vm create \
  --resource-group myapp-rg \
  --name mywebvm \
  --image Ubuntu2204 \
  --size Standard_B2s \
  --admin-username azureuser \
  --generate-ssh-keys \
  --output json

# The command outputs the public IP
# Connect via SSH
ssh azureuser@<public-ip>

# Create a Windows VM
az vm create \
  --resource-group myapp-rg \
  --name mywinvm \
  --image Win2022Datacenter \
  --size Standard_D2s_v5 \
  --admin-username adminuser \
  --admin-password "MyP@ssword123!"   # demo only; don't hard-code real passwords in scripts

Useful VM Operations

# List VMs and their status
az vm list -d --output table

# Start / Stop / Restart
az vm start   --resource-group myapp-rg --name mywebvm
az vm stop    --resource-group myapp-rg --name mywebvm   # OS shuts down, but compute stays allocated and billed
az vm deallocate --resource-group myapp-rg --name mywebvm  # releases compute (no compute charge; disks still billed)
az vm restart --resource-group myapp-rg --name mywebvm

# Resize a VM
az vm resize \
  --resource-group myapp-rg \
  --name mywebvm \
  --size Standard_D4s_v5

# Show VM details (IP, disk, OS)
az vm show -d \
  --resource-group myapp-rg \
  --name mywebvm \
  --output table

# Open a port in the NSG
az vm open-port \
  --resource-group myapp-rg \
  --name mywebvm \
  --port 80

# Run a command on a VM without SSH (very useful for bootstrapping)
az vm run-command invoke \
  --resource-group myapp-rg \
  --name mywebvm \
  --command-id RunShellScript \
  --scripts "apt-get update && apt-get install -y nginx"

VM Disks

| Disk Type | Max IOPS | Use Case |
|-----------|----------|----------|
| Standard HDD | 500 | Dev/test, infrequent access |
| Standard SSD | 6,000 | Web servers, light databases |
| Premium SSD | 20,000 | Production databases, I/O intensive |
| Premium SSD v2 | 80,000 | High-performance databases |
| Ultra Disk | 160,000 | Extreme I/O (SAP HANA, SQL Server) |

# Attach a new managed disk
az vm disk attach \
  --resource-group myapp-rg \
  --vm-name mywebvm \
  --name myDataDisk \
  --size-gb 128 \
  --sku Premium_LRS \
  --new
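To sanity-check a tier choice against a target IOPS figure, the table can be turned into a quick lookup. The thresholds are the table's published maxima (always verify current limits in the Azure docs before sizing production disks); the helper itself is just an illustration.

```shell
# Pick the lowest disk tier from the table above whose max IOPS covers
# the target. Thresholds mirror the table; verify against current Azure docs.
pick_disk_tier() {
  target=$1
  if   [ "$target" -le 500 ];    then echo "Standard HDD"
  elif [ "$target" -le 6000 ];   then echo "Standard SSD"
  elif [ "$target" -le 20000 ];  then echo "Premium SSD"
  elif [ "$target" -le 80000 ];  then echo "Premium SSD v2"
  elif [ "$target" -le 160000 ]; then echo "Ultra Disk"
  else echo "beyond a single disk: stripe multiple disks"
  fi
}

pick_disk_tier 15000   # → Premium SSD
```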

VM Scale Sets (VMSS)

Scale Sets let you run many identical VMs that auto-scale based on demand.

# Create a scale set
az vmss create \
  --resource-group myapp-rg \
  --name myScaleSet \
  --image Ubuntu2204 \
  --vm-sku Standard_B2s \
  --instance-count 2 \
  --admin-username azureuser \
  --generate-ssh-keys

# Configure autoscale: 2–10 VMs based on CPU
az monitor autoscale create \
  --resource-group myapp-rg \
  --resource myScaleSet \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name autoscale-rules \
  --min-count 2 \
  --max-count 10 \
  --count 2

# Scale out when CPU > 70%
az monitor autoscale rule create \
  --resource-group myapp-rg \
  --autoscale-name autoscale-rules \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 2

# Scale in when CPU < 30%
az monitor autoscale rule create \
  --resource-group myapp-rg \
  --autoscale-name autoscale-rules \
  --condition "Percentage CPU < 30 avg 5m" \
  --scale in 1
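The two rules above create deliberate hysteresis: scale out above 70% average CPU, scale in below 30%, and hold steady in between so the set doesn't flap. A sketch of that decision logic (thresholds and step sizes mirror the CLI rules; the function itself is illustrative):

```shell
# Hysteresis band from the two autoscale rules above:
# >70% → add 2 instances, <30% → remove 1, otherwise do nothing.
autoscale_decision() {
  cpu=$1   # average CPU % over the 5-minute window
  if   [ "$cpu" -gt 70 ]; then echo "scale out +2"
  elif [ "$cpu" -lt 30 ]; then echo "scale in -1"
  else echo "hold"
  fi
}

autoscale_decision 85   # → scale out +2
autoscale_decision 50   # → hold
```

The gap between the two thresholds matters: if they were both 50%, a set hovering around that load would add and remove instances continuously.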

Azure App Service

App Service is a fully managed PaaS for hosting web applications and APIs. No OS patching, no server management. Supports Node.js, Python, .NET, Java, PHP, Ruby, and containers.

App Service Plans

The plan defines the compute resources your app runs on:

| Tier | Name | Use Case | Auto-scale |
|------|------|----------|------------|
| Free | F1 | Dev/test, 60 CPU min/day | No |
| Shared | D1 | Dev/test | No |
| Basic | B1–B3 | Dev/test, low traffic | No |
| Standard | S1–S3 | Production | Yes (up to 10 instances) |
| Premium | P1v3–P3v3 | High-traffic production | Yes (up to 30 instances) |
| Isolated | I1v2–I3v2 | VNet isolation, compliance | Yes (up to 100 instances) |

# Create a plan and web app
az appservice plan create \
  --name myapp-plan \
  --resource-group myapp-rg \
  --sku S1 \
  --is-linux

az webapp create \
  --resource-group myapp-rg \
  --plan myapp-plan \
  --name myapp-api \
  --runtime "NODE:20-lts"

# Set environment variables (app settings)
az webapp config appsettings set \
  --resource-group myapp-rg \
  --name myapp-api \
  --settings \
    NODE_ENV=production \
    DATABASE_URL="postgresql://..." \
    API_KEY="@Microsoft.KeyVault(SecretUri=https://...)"

# Connect the app to a GitHub repo
# (--manual-integration does a one-time sync; omit it for continuous deployment)
az webapp deployment source config \
  --resource-group myapp-rg \
  --name myapp-api \
  --repo-url https://github.com/myorg/myrepo \
  --branch main \
  --manual-integration

# Stream live logs
az webapp log tail \
  --resource-group myapp-rg \
  --name myapp-api

# Scale out to 5 instances
az appservice plan update \
  --resource-group myapp-rg \
  --name myapp-plan \
  --number-of-workers 5

Deployment Slots

Slots are separate instances of your app (staging, canary) with their own URLs. You can swap slots with zero downtime.

# Create a staging slot
az webapp deployment slot create \
  --resource-group myapp-rg \
  --name myapp-api \
  --slot staging

# Deploy to staging first
az webapp deploy \
  --resource-group myapp-rg \
  --name myapp-api \
  --slot staging \
  --src-path ./app.zip \
  --type zip

# Swap staging → production (zero downtime)
az webapp deployment slot swap \
  --resource-group myapp-rg \
  --name myapp-api \
  --slot staging \
  --target-slot production
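A common pattern is to gate the swap on a staging smoke test. A minimal sketch, with the actual swap left as a comment so the script runs anywhere; the `/health`-style 200 check and the argument convention are assumptions, not App Service behaviour:

```shell
# Swap only if the staging slot's smoke test returned HTTP 200.
# In practice the status would come from e.g.
#   curl -s -o /dev/null -w '%{http_code}' https://<app>-staging.azurewebsites.net/health
swap_if_healthy() {
  status=$1   # HTTP status from the staging smoke test
  if [ "$status" = "200" ]; then
    echo "swap"
    # az webapp deployment slot swap --resource-group myapp-rg \
    #   --name myapp-api --slot staging --target-slot production
  else
    echo "abort"
  fi
}

swap_if_healthy 200   # → swap
swap_if_healthy 503   # → abort
```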

Azure Container Instances (ACI)

ACI runs containers without managing any VMs or orchestrators. Ideal for short-lived jobs, batch processing, or testing.

# Run a container (billed per second)
az container create \
  --resource-group myapp-rg \
  --name mycontainer \
  --image nginx:latest \
  --ports 80 \
  --dns-name-label myapp-container-$RANDOM \
  --cpu 1 \
  --memory 1.5

# Get the FQDN
az container show \
  --resource-group myapp-rg \
  --name mycontainer \
  --query ipAddress.fqdn \
  --output tsv

# Stream container logs
az container logs \
  --resource-group myapp-rg \
  --name mycontainer \
  --follow

# Delete the container
az container delete \
  --resource-group myapp-rg \
  --name mycontainer \
  --yes

ACI supports private container images from Azure Container Registry (see DevOps chapter).

Azure Kubernetes Service (AKS)

AKS is a fully managed Kubernetes service. Azure handles the control plane (API server, etcd, scheduler), so you only pay for the worker nodes.

When to use AKS vs App Service

| Scenario | Use AKS | Use App Service |
|----------|---------|-----------------|
| Microservices with complex dependencies | ✓ | |
| Need sidecar containers / service mesh | ✓ | |
| Run any container workload | ✓ | |
| Simple web app / API | | ✓ |
| No Kubernetes expertise on team | | ✓ |
| Need deployment slots | | ✓ |

Creating an AKS Cluster

# Create the cluster (2 node default pool)
az aks create \
  --resource-group myapp-rg \
  --name myapp-aks \
  --node-count 2 \
  --node-vm-size Standard_D2s_v5 \
  --enable-managed-identity \
  --generate-ssh-keys

# Get kubectl credentials
az aks get-credentials \
  --resource-group myapp-rg \
  --name myapp-aks

# Verify connection
kubectl get nodes

Deploying to AKS

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry.azurecr.io/myapp:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "250m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080

# Apply the manifests, then watch until the LoadBalancer gets an external IP
kubectl apply -f deployment.yaml
kubectl get service myapp-service --watch

AKS Scaling

# Manual scale: change number of nodes
az aks scale \
  --resource-group myapp-rg \
  --name myapp-aks \
  --node-count 5

# Enable cluster autoscaler (1–10 nodes)
az aks update \
  --resource-group myapp-rg \
  --name myapp-aks \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 10

# Horizontal Pod Autoscaler (kubectl)
kubectl autoscale deployment myapp --cpu-percent=60 --min=2 --max=20
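The HPA's core formula is `desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric)`, clamped to the `--min`/`--max` bounds. An integer-only sketch of that calculation, using the 60% CPU target from the command above:

```shell
# HPA formula: desired = ceil(replicas * current_cpu / target_cpu).
# (a + b - 1) / b is the standard integer ceiling trick.
hpa_desired() {
  replicas=$1; cpu=$2; target=$3
  echo $(( (replicas * cpu + target - 1) / target ))
}

hpa_desired 3 90 60   # → 5  (3 * 90/60 = 4.5, rounded up)
hpa_desired 4 30 60   # → 2  (scales in when load drops)
```

The real controller additionally applies a tolerance band and stabilisation windows so small metric fluctuations don't cause constant resizing.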

Choosing the Right Compute

Decision Tree: Which compute should I use?

            Need full OS control?
             ┌───────┴───────┐
            Yes              No
             ▼               ▼
         Azure VM      Run containers?
                        ┌───────┴───────┐
                       Yes              No
                        ▼               ▼
              Need orchestration?   Event-driven /
               ┌───────┴──────┐     short-lived?
              Yes             No     ┌──────┴──────┐
               ▼              ▼     Yes            No
              AKS            ACI     ▼             ▼
                                 Functions    App Service
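The same tree can be written as a function, which makes the branch order explicit. Answers are the literal strings "yes"/"no" (use "-" for branches that don't apply); this is just the chapter's decision tree restated, not an official Azure rule:

```shell
# Walk the decision tree above.
# Args: need_full_os containers need_orchestration event_driven
pick_compute() {
  if [ "$1" = "yes" ]; then echo "Azure VM"; return; fi
  if [ "$2" = "yes" ]; then
    [ "$3" = "yes" ] && echo "AKS" || echo "ACI"
  else
    [ "$4" = "yes" ] && echo "Functions" || echo "App Service"
  fi
}

pick_compute no yes yes -   # → AKS
pick_compute no no - yes    # → Functions
```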

Next Steps

Continue to 04-storage.md to learn about Azure's storage services: blobs, files, queues, and tables.