Introduction: What LangGraph Is and Why Graphs
This chapter explains what LangGraph is, why it uses graphs instead of chains, and walks you through running your first graph.
The Limits of Chains
LangChain, when it was just "chains", was a pipeline metaphor. Input goes in at one end, each step transforms it, output comes out the other end.
That shape fits some tasks: a RAG query, a summarization, a translation. It doesn't fit most real agent work.
Consider what you actually want an agent to do:
- Read a user message.
- Decide whether to call a tool.
- If so, call it. Observe the result.
- Decide whether to call another tool.
- If not, respond to the user.
That's a loop. Specifically, a loop with conditional branches. A chain can't express that naturally. You end up writing custom orchestration code around the chain, and at that point you're building an ad-hoc graph.
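That orchestration code is worth seeing once. Here is a minimal sketch of the loop in plain Python, with a stub standing in for the LLM's decision (every name in it is hypothetical):

```python
def run_agent(user_message: str) -> str:
    """The tool-call loop a linear chain can't express: decide, act, observe, repeat."""
    observations: list[str] = []

    def decide(message: str, observations: list[str]) -> tuple[str, str]:
        # Stub standing in for an LLM deciding whether to call a tool.
        if not observations:
            return ("tool", "search")
        return ("respond", f"answer based on {observations[0]}")

    while True:
        action, payload = decide(user_message, observations)
        if action == "tool":
            # Call the tool, observe the result, loop again.
            observations.append(f"result of {payload}")
        else:
            # Done: respond to the user.
            return payload

print(run_agent("What's the weather?"))  # answer based on result of search
```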
LangGraph is LangChain's answer: a framework where the orchestration is the code. Your agent is a graph with nodes (steps), edges (transitions), and shared state (the conversation, the scratchpad). The LLM is one node among many. The "tool call loop" is an actual cycle in the graph.
What LangGraph Is
LangGraph is a Python library (with a TypeScript version) that gives you three things:
- State. A typed dictionary (or Pydantic model) that every node reads and writes. Changes merge via reducers, so "add a message" doesn't overwrite previous messages.
- Nodes. Plain functions that take the current state and return a partial update. A node can call an LLM, call a tool, run business logic, anything.
- Edges. Rules for which node runs next. Edges can be fixed (always go from A to B) or conditional (go to B or C depending on state). Cycles are allowed and normal.
You build the graph, compile it, and invoke it. LangGraph handles the execution, state merging, and (if you want) persistence, streaming, and interrupts.
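A sketch of what a reducer-backed state declaration looks like; `Annotated` with `operator.add` is the usual way to get append-on-merge for a list of messages:

```python
from operator import add
from typing import Annotated, TypedDict

class State(TypedDict):
    # Reducer attached via Annotated: updates to this key are merged with
    # operator.add, so a node returning {"messages": [m]} appends m.
    messages: Annotated[list, add]
    # No reducer: the default is "replace", so the last write wins.
    user_name: str
```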
LangGraph vs Alternatives
Rough shape of the decision:
- LangChain chains. Linear pipelines. Use when your workload is "one LLM call with pre- and post-processing", not a loop.
- LangGraph. Stateful, branching, looping. Use for agents and multi-step workflows.
- CrewAI. Higher-level, role-based ("CEO agent delegates to researcher agent"). Opinionated. Use when its abstractions fit.
- AutoGen. Conversation-based multi-agent. Agents chat with each other. Different mental model.
- No framework. A `while` loop, an LLM call, and some routing code. Viable for small agents; gets painful around persistence and streaming.
LangGraph sits at a useful level: general-purpose enough to build anything, structured enough to avoid reinventing orchestration.
Installing

```shell
pip install langgraph langchain-anthropic
```

That's the minimum. For persistence with SQLite:

```shell
pip install langgraph-checkpoint-sqlite
```

For Postgres:

```shell
pip install langgraph-checkpoint-postgres
```
Claude API Credentials
Get an API key from console.anthropic.com. Set it in your environment:
```shell
export ANTHROPIC_API_KEY="sk-ant-..."
```
Or use a `.env` file plus `python-dotenv`. Whatever you're used to.
Test it:
```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-sonnet-4-5")
print(llm.invoke("Say hello in three words.").content)
```
If that prints something, you're set.
Your First Graph
A graph with one node that appends a message:
```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    messages: list[str]

def greet(state: State) -> State:
    return {"messages": state["messages"] + ["hello from the graph"]}

builder = StateGraph(State)
builder.add_node("greet", greet)
builder.add_edge(START, "greet")
builder.add_edge("greet", END)

graph = builder.compile()
result = graph.invoke({"messages": ["starting"]})
print(result["messages"])
# ['starting', 'hello from the graph']
```
What just happened:

- You declared a `State` type (a TypedDict with one field, `messages`).
- You wrote a node (`greet`) that takes state and returns a partial update.
- You built a graph: START → greet → END.
- You compiled it and invoked it with initial state.
- LangGraph ran the `greet` node, merged its return into state, and returned the final state.
This is the whole model. Every chapter extends it.
A Slightly More Interesting Graph
One node that calls Claude:
```python
from typing import TypedDict

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    messages: list

llm = ChatAnthropic(model="claude-sonnet-4-5")

def chat(state: State) -> State:
    response = llm.invoke(state["messages"])
    return {"messages": state["messages"] + [response]}

graph = (
    StateGraph(State)
    .add_node("chat", chat)
    .add_edge(START, "chat")
    .add_edge("chat", END)
    .compile()
)

result = graph.invoke({
    "messages": [HumanMessage(content="What's the capital of France?")]
})
print(result["messages"][-1].content)
# Paris.
```
Same shape, real LLM. From here, the rest of the tutorial is:
- Add more nodes (Chapter 2).
- Add branching (Chapter 3).
- Let the LLM call tools (Chapter 4).
- Remember conversations (Chapter 5).
- Pause for human input (Chapter 6).
- Stream the output (Chapter 7).
The Vocabulary
A few terms you'll see constantly.
- `StateGraph`: the builder object. You add nodes and edges to it.
- Node: a function that takes state and returns a partial update.
- Edge: a transition from one node to another. Fixed or conditional.
- Compile: turn the builder into an executable graph.
- Invoke: run the graph to completion and return the final state.
- Stream: run the graph and yield intermediate state as it progresses.
- Checkpointer: something that saves state between invocations (Chapter 5).
- Thread: a conversation identifier. Used with checkpointers to resume.
- Reducer: a function that merges state updates. Defaults to "replace"; "add" is common for lists.
What's Different from LangChain
If you've used LangChain already:
- Graph nodes are like chain runnables, but they compose differently.
- State replaces passing a dict through a chain.
- Cycles and branches are first-class.
- Messages are the same (langchain-core's `BaseMessage` subclasses).
- Tools are the same.
LangGraph is additive. You still use LangChain's primitives (LLMs, tools, messages); LangGraph is the orchestration layer.
Common Pitfalls
Confusing state with inputs. Every node gets the full state. What you pass to invoke is the initial state. What a node returns is a partial update, not the new state.
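To make that concrete, here's a pure-Python sketch of the merge, mimicking the default per-key replace behavior (no LangGraph involved):

```python
state = {"messages": ["hi"], "count": 0}

def bump(state: dict) -> dict:
    # A node returns only the keys it changed: a partial update.
    return {"count": state["count"] + 1}

update = bump(state)           # {'count': 1}
state = {**state, **update}    # default merge: replace each returned key
print(state)                   # {'messages': ['hi'], 'count': 1}
```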
Forgetting the edge to END. A node with no outgoing edge hangs. Every path must terminate.
Mutating state. Node functions should return partial updates, not mutate the state dict. Mutations may or may not persist depending on the reducer.
Ignoring types. TypedDict gives you editor support and catches bugs. Use it.
Running without a checkpointer for a conversation. If state doesn't persist, the "conversation" is fresh every invoke. Chapter 5 covers checkpointing.
Next Steps
Continue to 02-state-and-graphs.md to understand the core abstraction.