# Human-in-the-Loop: Interrupts and Approvals
This chapter shows how to pause a graph for approval, edit the state, and resume: the pattern behind any agent you actually trust with real work.
## Why Pause
A fully autonomous agent is a fine demo. It's scary in production. Before an agent:
- Sends an email.
- Commits code.
- Makes a payment.
- Posts to Slack.
You want a human to see what's about to happen and say yes. That's human-in-the-loop.
LangGraph gives you two tools for this: interrupts and state editing. Both require a checkpointer (Chapter 5).
## `interrupt()`: Pause Inside a Node
The interrupt() function pauses the graph. Whatever you pass to interrupt() is surfaced to the caller as the pause reason.
```python
from langgraph.types import interrupt, Command

def email_draft(state: State) -> State:
    body = compose_email(state)
    approval = interrupt({
        "action": "send_email",
        "to": state["recipient"],
        "body": body,
    })
    if approval["approved"]:
        return {"messages": state["messages"] + [f"Sent: {body}"]}
    else:
        return {"messages": state["messages"] + [f"Cancelled: {approval['reason']}"]}
```
When execution hits interrupt(...), the graph pauses. The caller sees what the node wanted to do. The caller then decides.
To resume, pass `Command(resume=...)`:
```python
config = {"configurable": {"thread_id": "user-42"}}

# Run until the interrupt
result = graph.invoke(initial_state, config=config)
print(result)
# Returns the state as of the pause, noting that an interrupt happened.

# Inspect the pending interrupt
state = graph.get_state(config)
print(state.tasks)  # shows pending tasks with their interrupt payloads

# Resume with the user's decision
graph.invoke(
    Command(resume={"approved": True}),
    config=config,
)
```
The interrupt(...) call inside the node returns whatever you passed to resume. The node continues from there.
## Interrupt Before and After Nodes
Alternative pattern: interrupt at node boundaries, not inside the node.
```python
graph = builder.compile(
    checkpointer=memory,
    interrupt_before=["send_email", "make_payment"],
)
```
The graph pauses before running send_email or make_payment. The caller sees the pending node, can inspect state, optionally modify state, then resume.
```python
# Run; pauses before send_email
graph.invoke(initial, config=config)

# Inspect
state = graph.get_state(config)
print(state.next)  # ('send_email',)

# Optionally edit state
graph.update_state(config, {"messages": [edited_message]})

# Resume
graph.invoke(None, config=config)
```
`interrupt_after=["critical_node"]` works the same way: the graph pauses after the node runs, before the next one starts. Use it to review results before they propagate.
## `interrupt` vs. `interrupt_before`: Which One?
- `interrupt()` inside a node: when the pause reason depends on runtime data computed by that node. "Here's the email draft; should I send it?" needs the draft.
- `interrupt_before`: when the pause is always at the same step, with no data computation needed. "Always pause before calling the external payment API."
interrupt() is more flexible. interrupt_before is simpler when you know exactly where to stop.
## Editing State Mid-Run
update_state() (from Chapter 5) is how you edit state while paused. The graph continues from the edited state.
```python
# Agent drafts an email; interrupt pauses
state = graph.get_state(config)
draft = state.values["draft"]

# Human edits the draft
edited = ask_human_to_edit(draft)

# Push the edit into state
graph.update_state(config, {"draft": edited})

# Resume
graph.invoke(None, config=config)
```
Use for:
- User editing a draft before it's sent.
- User correcting a fact the agent got wrong.
- Tuning agent behavior live.
## A Full Approval Example
```python
from typing import TypedDict, Annotated

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt, Command
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage

class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
    draft: str

def draft_email(state: State) -> State:
    # In real code, call an LLM here.
    draft = f"Subject: Update\n\nHi, quick status update: {state['messages'][-1].content}"
    return {"draft": draft}

def approve(state: State) -> State:
    decision = interrupt({
        "draft": state["draft"],
        "question": "Approve and send?",
    })
    if decision == "approve":
        return {"messages": [AIMessage(content=f"Email sent: {state['draft']}")]}
    return {"messages": [AIMessage(content="Email cancelled.")]}

memory = MemorySaver()
graph = (
    StateGraph(State)
    .add_node("draft", draft_email)
    .add_node("approve", approve)
    .add_edge(START, "draft")
    .add_edge("draft", "approve")
    .add_edge("approve", END)
    .compile(checkpointer=memory)
)

config = {"configurable": {"thread_id": "email-1"}}

# Run until interrupt
graph.invoke(
    {"messages": [HumanMessage(content="Project finished on time.")], "draft": ""},
    config=config,
)

# Check what the agent wants
state = graph.get_state(config)
print("Waiting at:", state.next)
print("Draft:", state.values["draft"])

# Simulate approval
graph.invoke(Command(resume="approve"), config=config)

# Final state
print(graph.get_state(config).values["messages"][-1].content)
# "Email sent: Subject: Update\n\nHi, quick status update: Project finished on time."
```
Replace the approval with anything: Slack bot, web UI, CLI prompt, email round-trip.
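For example, a minimal CLI prompt; `ask_approval` is a hypothetical helper written for this sketch, not part of LangGraph:

```python
def ask_approval(payload: dict, read=input) -> str:
    """Show a pending action and collect a yes/no decision from the terminal.

    `read` is injectable so the prompt can be tested without a real terminal.
    """
    print("Agent wants:", payload.get("question", "to proceed"))
    if "draft" in payload:
        print("Draft:\n" + payload["draft"])
    while True:
        reply = read("Approve? [y/n] ").strip().lower()
        if reply in ("y", "yes"):
            return "approve"
        if reply in ("n", "no"):
            return "reject"
```

Wired into the example above, you would feed it the paused interrupt's payload (in current LangGraph versions, reachable via `graph.get_state(config).tasks[0].interrupts[0].value`) and resume with `Command(resume=ask_approval(payload))`.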
## Patterns
### Approval Gate
Binary yes/no before a risky action.
```python
decision = interrupt({"action": "...", "args": {...}, "question": "Proceed?"})
if decision != "yes":
    return {"cancelled": True}
```
### Review-and-Edit
User can tweak the draft before it's used.
```python
edited = interrupt({"draft": state["draft"], "editable": True})
return {"draft": edited}
```
### Clarification
Agent asks a question; user answers before the agent continues.
```python
question = generate_question(state)
answer = interrupt({"question": question, "context": state["..."]})
return {"messages": [HumanMessage(content=answer)]}
```
### Multi-Step Approval
Several approvals in a row, each pausing.
```python
def approval_chain(state: State):
    if state["risk_level"] > 5:
        interrupt({"require": "manager_approval"})
    if state["cost"] > 1000:
        interrupt({"require": "finance_approval"})
    # If we get here, both approved.
    return {"approved": True}
```
Each interrupt is independent; resuming passes through each in turn. On resume, the node re-runs from the top, and interrupts that were already answered return their saved values instead of pausing again.
## Resuming Across Sessions
The whole point of persistence plus interrupts: the graph can pause, the process can die, a new process can resume days later.
- A first process starts the graph, hits the interrupt, and stores the `thread_id` in a DB or queue.
- A UI asks the human, possibly days later.
- Any process (possibly a new one) loads the checkpointer and calls `graph.invoke(Command(resume=...), config={"configurable": {"thread_id": "..."}})`.
Durability is SQLite-backed or Postgres-backed (Chapter 5). The graph doesn't care about process boundaries.
## Common Pitfalls
- **Using interrupts without a checkpointer.** There's no state to resume from; LangGraph raises an error.
- **Expecting `invoke` to return the pause data.** It returns the state as of the pause; inspect `get_state(config).tasks` for interrupt details.
- **Side effects before the interrupt.** If your node sends the email and then asks for approval, you've already sent the email. Interrupt first, act after.
- **Forgetting to pass the config on resume.** LangGraph needs the thread ID. Without it, it starts a fresh invocation.
- **Interrupts in a loop with no progress.** Every iteration pauses; you approve; the next iteration pauses again. Design the loop so one approval covers subsequent iterations or accumulates into a list.
## Next Steps
Continue to 07-streaming.md to show a running graph to your users.