Tools and Agents: The Basic Agent Loop

This chapter adds tool-calling to a graph, wires up ToolNode, and builds the classic agent loop: plan, call tool, observe, repeat.

The Agent Loop, Conceptually

Every agent does roughly this:

1. Receive user message.
2. Send conversation to LLM.
3. If LLM responded with a tool call, execute the tool and append the result.
4. Go back to step 2.
5. If LLM responded with a normal message, return it.

A chain can't express step 4 (go back). A graph can. This chapter builds that loop.
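
The five steps above can be sketched in plain Python before any framework gets involved. Everything here is a stand-in: `fake_llm` and the `TOOLS` registry are placeholders for a real model client and real tools, not actual APIs.

```python
# A framework-free sketch of the agent loop. fake_llm and TOOLS are
# stand-ins for a real LLM client and real tools.

TOOLS = {"get_weather": lambda city: f"It's sunny in {city}."}

def fake_llm(messages):
    # Pretend the model asks for a tool once, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "ai",
                "tool_call": {"name": "get_weather", "args": {"city": "London"}}}
    return {"role": "ai", "content": "It's sunny in London."}

def agent_loop(user_text):
    messages = [{"role": "user", "content": user_text}]        # 1. receive
    while True:
        response = fake_llm(messages)                          # 2. send to LLM
        messages.append(response)
        call = response.get("tool_call")
        if call is None:                                       # 5. normal message
            return response["content"]
        result = TOOLS[call["name"]](**call["args"])           # 3. execute tool
        messages.append({"role": "tool", "content": result})   # 4. loop back

print(agent_loop("What's the weather in London?"))
```

The `while True` is exactly the back-edge a chain can't express; in LangGraph it becomes an edge from the tools node back to the agent node.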

Defining Tools

A LangChain tool is a function with metadata (name, description, argument schema). Claude uses the metadata to decide when to call the tool and what arguments to pass.

The easiest way: the @tool decorator.

from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: The name of the city.
    """
    # In a real tool, you'd call a weather API here.
    return f"It's 18°C and sunny in {city}."

@tool
def get_time(timezone: str) -> str:
    """Get the current time in a specific timezone.

    Args:
        timezone: An IANA timezone string, e.g. 'Europe/London'.
    """
    from datetime import datetime
    import zoneinfo
    now = datetime.now(zoneinfo.ZoneInfo(timezone))
    return now.strftime("%Y-%m-%d %H:%M:%S %Z")

The docstring becomes the tool's description. The function signature becomes the argument schema. Both are sent to Claude so it knows how to call the tool.
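
Conceptually, the decorator reads the same things you could read yourself with inspect. This sketch illustrates the idea only; it is not LangChain's actual implementation.

```python
import inspect

def describe_tool(fn):
    # Build a name/description/schema triple from an ordinary function,
    # roughly the way @tool does.
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "args": {
            name: getattr(param.annotation, "__name__", str(param.annotation))
            for name, param in sig.parameters.items()
        },
    }

def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"It's 18°C and sunny in {city}."

print(describe_tool(get_weather))
```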

For more control, use StructuredTool or subclass BaseTool. @tool covers 90% of cases.

Binding Tools to the LLM

bind_tools tells the LLM which tools exist.

from langchain_anthropic import ChatAnthropic

tools = [get_weather, get_time]
llm = ChatAnthropic(model="claude-sonnet-4-5").bind_tools(tools)

Now when you invoke the LLM, it can decide to return tool calls instead of (or alongside) text.

from langchain_core.messages import HumanMessage

response = llm.invoke([HumanMessage(content="What's the weather in London?")])
print(response.tool_calls)
# [{'name': 'get_weather', 'args': {'city': 'London'}, 'id': 'toolu_...', ...}]

The response has tool_calls (a list) and sometimes content (text). Whether content is present varies; Claude sometimes says something before calling a tool, sometimes not.

ToolNode: The Prebuilt Handler

Manually executing tool calls is tedious. LangGraph ships ToolNode: a node that:

  1. Reads the last message in state.
  2. Finds tool_calls on it.
  3. Executes each tool with the provided args.
  4. Returns a list of ToolMessage results to append to state.

from langgraph.prebuilt import ToolNode

tool_node = ToolNode(tools)

You add it as a node, and route to it when the agent wants tools.

Building the Agent Loop

Full example:

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_core.tools import tool

# Tools

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"It's 18°C and sunny in {city}."

@tool
def get_time(timezone: str) -> str:
    """Get the current time in an IANA timezone."""
    from datetime import datetime
    import zoneinfo
    return datetime.now(zoneinfo.ZoneInfo(timezone)).strftime("%H:%M %Z")

tools = [get_weather, get_time]
tool_node = ToolNode(tools)

# LLM

llm = ChatAnthropic(model="claude-sonnet-4-5").bind_tools(tools)

# State

class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]

# Nodes

def call_llm(state: State) -> State:
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# Routing

def route(state: State) -> str:
    last = state["messages"][-1]
    if last.tool_calls:
        return "tools"
    return END

# Graph

graph = (
    StateGraph(State)
    .add_node("agent", call_llm)
    .add_node("tools", tool_node)
    .add_edge(START, "agent")
    .add_conditional_edges("agent", route, {"tools": "tools", END: END})
    .add_edge("tools", "agent")    # loop back
    .compile()
)

# Invoke

result = graph.invoke({
    "messages": [HumanMessage(content="What's the weather in Tokyo, and what time is it there?")]
})

for msg in result["messages"]:
    print(f"[{type(msg).__name__}] {msg.content or getattr(msg, 'tool_calls', '')}")

Expected output shape:

[HumanMessage] What's the weather in Tokyo, and what time is it there?
[AIMessage] [{'name': 'get_weather', 'args': {'city': 'Tokyo'}, ...},
             {'name': 'get_time', 'args': {'timezone': 'Asia/Tokyo'}, ...}]
[ToolMessage] It's 18°C and sunny in Tokyo.
[ToolMessage] 19:15 JST
[AIMessage] The weather in Tokyo is 18°C and sunny, and the current time is 19:15 JST.

Claude asked for two tools, ToolNode ran both, results came back, Claude used them to answer.

The Prebuilt create_react_agent

All of the above is packaged in create_react_agent:

from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    model="claude-sonnet-4-5",
    tools=[get_weather, get_time],
)

result = agent.invoke({
    "messages": [HumanMessage(content="Weather in Tokyo?")]
})

When to use it:

  • You want the standard agent loop without writing it.
  • You don't need custom nodes between LLM and tools.

When not to:

  • You need custom routing (e.g. conditional tool access based on user role).
  • You need intermediate processing between LLM and tools (validation, audit log).
  • You're building something beyond the standard loop (multi-agent, human-in-the-loop with custom interrupts).

For anything non-trivial, write the loop explicitly. It's not much code and you control the shape.

Tool Execution Errors

Tools can raise exceptions. By default, ToolNode catches them and returns the error as a ToolMessage with status="error". The LLM then sees the error and decides what to do (retry, try a different tool, give up).

@tool
def divide(a: float, b: float) -> float:
    """Divide two numbers."""
    return a / b

Call it with b=0 and Claude gets back a ToolMessage describing the ZeroDivisionError, which it can react to.
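
The behaviour is easy to picture in plain Python. This is a rough approximation of the catch-and-report idea, not ToolNode's actual code or exact message format.

```python
def safe_invoke(fn, args):
    # Errors become strings instead of crashes, so the
    # conversation can continue.
    try:
        return str(fn(**args)), "success"
    except Exception as exc:
        return f"Error: {type(exc).__name__}({exc})", "error"

def divide(a: float, b: float) -> float:
    """Divide two numbers."""
    return a / b

print(safe_invoke(divide, {"a": 1.0, "b": 0.0}))
# ('Error: ZeroDivisionError(float division by zero)', 'error')
```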

To customize, pass handle_tool_errors to ToolNode:

tool_node = ToolNode(
    tools,
    handle_tool_errors="Tool call failed. Please try different arguments.",
)

Or a function:

def on_error(exc: Exception) -> str:
    return f"Tool error: {type(exc).__name__}: {exc}"

tool_node = ToolNode(tools, handle_tool_errors=on_error)

Forcing a Tool Call

Sometimes you want the LLM to always call a specific tool.

llm = ChatAnthropic(model="claude-sonnet-4-5").bind_tools(
    tools,
    tool_choice={"type": "tool", "name": "get_weather"},
)

Claude will call get_weather on its next turn. This is common when you want structured output: define a tool whose arguments are the structured type, force the call, and read the args.
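
A sketch of the extraction step, using a hand-written list in the tool_calls shape shown earlier. The extract_person tool and its fields are made up for illustration.

```python
# Structured output via a forced tool call: the tool's args ARE the data.
# fake_tool_calls mimics response.tool_calls after forcing "extract_person".
fake_tool_calls = [{
    "name": "extract_person",
    "args": {"name": "Ada Lovelace", "birth_year": 1815},
    "id": "toolu_abc123",
}]

# Because the call was forced, exactly one entry is guaranteed;
# its args are the structured result.
person = fake_tool_calls[0]["args"]
print(person)  # {'name': 'Ada Lovelace', 'birth_year': 1815}
```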

Custom Tool Execution

For edge cases, write your own tool node:

from langchain_core.messages import ToolMessage

def custom_tool_node(state: State) -> State:
    last = state["messages"][-1]
    tool_calls = last.tool_calls or []

    results = []
    for call in tool_calls:
        if call["name"] not in {t.name for t in tools}:
            results.append(ToolMessage(
                tool_call_id=call["id"],
                content=f"Unknown tool: {call['name']}",
                status="error",
            ))
            continue
        tool = next(t for t in tools if t.name == call["name"])
        try:
            output = tool.invoke(call["args"])
            results.append(ToolMessage(
                tool_call_id=call["id"],
                content=str(output),
            ))
        except Exception as exc:
            results.append(ToolMessage(
                tool_call_id=call["id"],
                content=f"Error: {exc}",
                status="error",
            ))

    return {"messages": results}

Use when ToolNode's defaults don't fit. For most cases, they do.

Limiting the Tool Loop

A buggy LLM could call tools forever. Besides recursion_limit, you can guard in your routing:

from langchain_core.messages import ToolMessage

def route(state: State) -> str:
    last = state["messages"][-1]
    if not last.tool_calls:
        return END

    # Limit: stop after N tool calls total
    tool_count = sum(1 for m in state["messages"] if isinstance(m, ToolMessage))
    if tool_count >= 10:
        return END

    return "tools"

For production agents, always cap the loop.

Common Pitfalls

Forgetting bind_tools. The LLM doesn't know the tools exist; it responds with text.

Not using the add_messages reducer. Without it, each node's return overwrites the messages list instead of appending to it, and the loop breaks.

Tool that blocks indefinitely. A tool making an HTTP call without a timeout hangs the graph. Always set timeouts.
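
If a tool's underlying client doesn't support timeouts, you can impose one yourself. A stdlib sketch (this is your own wrapper, not a ToolNode feature; note the worker thread isn't killed, only abandoned):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def with_timeout(fn, args, seconds):
    # Run the tool in a worker thread and stop waiting after `seconds`.
    # The thread itself keeps running to completion in the background.
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn, **args)
        try:
            return future.result(timeout=seconds)
        except FutureTimeout:
            return f"Error: tool timed out after {seconds}s"

def slow_tool(delay: float) -> str:
    time.sleep(delay)
    return "done"

print(with_timeout(slow_tool, {"delay": 0.01}, seconds=1.0))  # done
print(with_timeout(slow_tool, {"delay": 0.5}, seconds=0.05))  # Error: tool timed out ...
```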

Ignoring tool errors. If you swallow them in the tool and return "success", the LLM thinks the call worked. Return errors honestly; the LLM handles them well.

Treating every AIMessage as final. An AIMessage with tool_calls means "call these tools"; it's not the final answer. Only an AIMessage without tool_calls is the answer.

Next Steps

Continue to 05-persistence.md so your agent remembers what happened last time.