LangGraph provides two types for advanced graph control flow:
Command — Combines a state update with routing in a single return value. Instead of using conditional edges, a node can return Command(update={...}, goto="next_node") to update state and control which node runs next.
Send — Enables map-reduce (fan-out) patterns by dispatching work to a node with custom state. Returning a list of Send objects from an edge function runs the target node once per Send, in parallel.
Both types are useful for building dynamic, multi-path agents in LangGraph.
Galileo’s GalileoCallback and GalileoAsyncCallback handle Command and Send automatically. No additional configuration is needed — just pass the callback as usual and all control flow data is captured.
Command lets a node update state and route to the next node in one step, replacing the need for separate conditional edges.
```python
from typing import Literal, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import Command

from galileo.handlers.langchain.handler import GalileoCallback


class State(TypedDict):
    query: str
    category: str
    response: str


def classify(state: State) -> Command[Literal["billing_handler", "general_handler"]]:
    """Classify the query and route to the appropriate handler."""
    query = state["query"]
    # In practice, you might use an LLM to classify the query
    if "refund" in query.lower():
        return Command(update={"category": "billing"}, goto="billing_handler")
    return Command(update={"category": "general"}, goto="general_handler")


def billing_handler(state: State) -> dict:
    return {"response": f"Billing support for: {state['query']}"}


def general_handler(state: State) -> dict:
    return {"response": f"General support for: {state['query']}"}


# Build the graph
graph = StateGraph(State)
graph.add_node("classify", classify)
graph.add_node("billing_handler", billing_handler)
graph.add_node("general_handler", general_handler)
graph.add_edge(START, "classify")
graph.add_edge("billing_handler", END)
graph.add_edge("general_handler", END)
app = graph.compile()

# Run with Galileo logging — Command routing is captured automatically
galileo_callback = GalileoCallback(project="langgraph-command-example")
result = app.invoke(
    {"query": "I need a refund for my last order"},
    config={"callbacks": [galileo_callback]},
)
print(result["response"])
```

Note the `Command[Literal[...]]` return annotation on `classify`: it tells LangGraph which nodes the Command can route to, so the handler nodes are treated as reachable at compile time and drawn correctly in graph visualizations.
In this example, the classify node returns a Command that sets the category field and routes to either billing_handler or general_handler. Galileo captures the full Command return value, including the routing decision.
Send enables fan-out patterns where multiple instances of a node run in parallel, each with different input state.
```python
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import Send

from galileo.handlers.langchain.handler import GalileoCallback


class OverallState(TypedDict):
    topics: list[str]
    summaries: Annotated[list[str], operator.add]


class SummaryState(TypedDict):
    topic: str


def fan_out(state: OverallState) -> list[Send]:
    """Dispatch each topic to a summarize node in parallel."""
    return [Send("summarize", {"topic": t}) for t in state["topics"]]


def summarize(state: SummaryState) -> dict:
    """Summarize a single topic."""
    # In practice, you would call an LLM here
    return {"summaries": [f"Summary of {state['topic']}"]}


# Build the graph
graph = StateGraph(OverallState)
graph.add_node("summarize", summarize)
graph.add_conditional_edges(START, fan_out)
graph.add_edge("summarize", END)
app = graph.compile()

# Run with Galileo logging — each Send is captured automatically
galileo_callback = GalileoCallback(project="langgraph-send-example")
result = app.invoke(
    {"topics": ["AI safety", "Quantum computing", "Climate change"]},
    config={"callbacks": [galileo_callback]},
)
print(result["summaries"])
```
Here, fan_out returns a list of Send objects — one per topic. LangGraph runs the summarize node for each topic in parallel, and the results are aggregated via the operator.add reducer. Galileo logs each parallel execution.
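The reduce step relies on nothing LangGraph-specific: `operator.add` on lists is plain concatenation, so the `Annotated[list[str], operator.add]` reducer concatenates each branch's `summaries` return into the overall state. A minimal illustration of that fold (the literal branch results below stand in for what the parallel `summarize` runs would return):

```python
import operator

# Each parallel summarize run returns {"summaries": [<one summary>]}.
# The operator.add reducer folds those returns into the overall state
# by list concatenation:
branch_results = [
    ["Summary of AI safety"],
    ["Summary of Quantum computing"],
    ["Summary of Climate change"],
]
summaries: list[str] = []
for result in branch_results:
    summaries = operator.add(summaries, result)  # same as summaries + result
print(summaries)
```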
When a tool returns a Command, it can include a ToolMessage in the state update. Galileo automatically extracts the ToolMessage for proper tool-call logging.
```python
from typing import Annotated

from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langgraph.types import Command


@tool
def lookup_order(
    order_id: str, tool_call_id: Annotated[str, InjectedToolCallId]
) -> Command:
    """Look up an order by ID and return the result as a Command."""
    # Simulate an order lookup
    order_data = {"order_id": order_id, "status": "shipped", "total": 49.99}
    # Return a Command that includes a ToolMessage in the state update.
    # Galileo automatically extracts the ToolMessage for proper logging.
    return Command(
        update={
            "messages": [
                ToolMessage(
                    content=(
                        f"Order {order_id}: status={order_data['status']}, "
                        f"total=${order_data['total']}"
                    ),
                    # InjectedToolCallId injects the calling tool call's real ID
                    # at runtime; the LLM never sees this parameter
                    tool_call_id=tool_call_id,
                )
            ]
        }
    )
```
This pattern is useful when a tool needs to both return a result to the LLM (via ToolMessage) and update other parts of the graph state at the same time.