
Overview

GalileoMiddleware is a middleware component that integrates with LangGraph agents to provide comprehensive tracing and logging. Unlike the callback-based approach, middleware automatically intercepts agent execution at key points:
  • Agent lifecycle: Tracks when an agent starts and completes
  • Model calls: Logs all LLM invocations with prompts, responses, and metadata
  • Tool calls: Captures tool invocations including function names, arguments, and outputs
  • Async support: Full support for both synchronous and asynchronous agent execution

Basic usage

To use GalileoMiddleware, add it to the middleware parameter when creating an agent; a minimal example follows the list below. The middleware automatically handles all logging internally. When the agent is invoked:
  1. An agent node is created to track the overall execution
  2. Each model call creates an LLM node with prompt and response details
  3. Each tool call creates a tool node with function name, arguments, and output
  4. All nodes are linked hierarchically under the agent node
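A minimal sketch of this setup, using LangChain's create_agent; the model, the tool functions, and the GalileoMiddleware import are placeholders to adapt to your project:
Python
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage

# GalileoMiddleware is imported from the Galileo SDK; the exact import path
# depends on your installed SDK version.

agent = create_agent(
    model,                                 # any chat model supported by create_agent
    tools=[get_weather, get_stock_price],  # your tool functions
    middleware=[GalileoMiddleware()],      # drop-in tracing and logging
)

result = agent.invoke(
    {"messages": [HumanMessage(content="What's the weather in Paris?")]}
)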

Configuration options

GalileoMiddleware accepts the following parameters:
  • galileo_logger (optional): A custom GalileoLogger instance. If not provided, a default logger is created.
  • start_new_trace (default: True): Whether to start a new trace on agent invocation. Set to False to add to an existing trace.
  • flush_on_chain_end (default: True): Whether to flush logs to Galileo when the agent completes.
  • ingestion_hook (optional): A callback function that receives TracesIngestRequest objects before they’re sent to Galileo.
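A sketch combining these options; the ingestion hook is a hypothetical example, and the attribute it reads from the TracesIngestRequest is an assumption:
Python
# Hypothetical hook: inspect each batch before it is sent to Galileo.
def inspect_batch(request):
    # `request` is a TracesIngestRequest; the `traces` attribute name is assumed here.
    print(f"About to send {len(request.traces)} trace(s)")

middleware = GalileoMiddleware(
    galileo_logger=GalileoLogger(project_name="my-agent-project"),  # optional custom logger
    start_new_trace=True,          # start a fresh trace on each agent invocation
    flush_on_chain_end=True,       # flush logs to Galileo when the agent completes
    ingestion_hook=inspect_batch,  # called with each TracesIngestRequest before sending
)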

Custom logger

You can provide a custom logger instance to integrate with existing logging infrastructure:
Python
from galileo.logger import GalileoLogger

# Create a custom logger
logger = GalileoLogger(
    project_name="my-agent-project",
    console_output=True
)

# Use it with middleware
agent = create_agent(
    model,
    tools=[get_weather, get_stock_price],
    middleware=[GalileoMiddleware(galileo_logger=logger)]
)

Trace management

By default, each agent invocation creates a new trace. You can control trace behavior:

Add to existing trace

To add agent execution to an existing trace, use a shared logger with start_new_trace=False:
Python
# Create a logger and start a trace
logger = GalileoLogger()
session_id = logger.create_session()
trace_id = logger.create_trace(session_id)

# Create middleware that adds to existing trace
middleware = GalileoMiddleware(
    galileo_logger=logger,
    start_new_trace=False
)

# The agent execution will be added to the existing trace
agent = create_agent(model, tools=[...], middleware=[middleware])
agent.invoke({"messages": [...]})

Manual flush control

If you want to control when logs are flushed (e.g., for batch processing):
Python
# Disable automatic flushing
middleware = GalileoMiddleware(
    galileo_logger=logger,
    flush_on_chain_end=False
)

# Execute multiple agent calls
agent.invoke({"messages": [...]})
agent.invoke({"messages": [...]})

# Manually flush when ready
logger.flush()

What gets logged

GalileoMiddleware captures the following information:

Agent node

  • Input state (messages)
  • Output state (final messages)
  • Execution time

Model call nodes

  • Model name and configuration (temperature, etc.)
  • Input messages (including system message if present)
  • Output response
  • Tools available to the model
  • Timing metrics (start time, time to first token if available)

Tool call nodes

  • Tool/function name
  • Tool arguments (serialized)
  • Tool output
  • Execution time
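For example, a tool defined with LangChain's @tool decorator would produce a tool node roughly like this (a hypothetical tool for illustration):
Python
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a short weather summary for a city."""
    return f"It is sunny in {city}."

# When the agent calls get_weather with {"city": "Paris"}, the tool node records:
#   name:      "get_weather"
#   arguments: {"city": "Paris"} (serialized)
#   output:    "It is sunny in Paris."
#   and the execution time of the call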

Comparison with GalileoCallback

GalileoMiddleware and GalileoCallback provide similar functionality but use different approaches:
| Feature | GalileoMiddleware | GalileoCallback |
| --- | --- | --- |
| Integration point | LangGraph agents via the middleware parameter | LangChain components via the callbacks parameter |
| Setup complexity | Simple: add to the middleware list | Manual: pass to each component |
| Agent support | Native support for LangGraph agents | Requires callback setup |
| Flexibility | Automatic agent-level tracing | Fine-grained control over individual components |
| Use case | LangGraph agents with minimal setup | Complex LangChain applications with custom needs |
Use GalileoMiddleware when:
  • You’re building LangGraph agents
  • You want automatic, drop-in logging
  • You prefer simpler setup
Use GalileoCallback when:
  • You need fine-grained control over logging
  • You’re working with complex LangChain applications
  • You want to log specific components selectively
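For comparison, a rough sketch of the callback-based approach, where the callback is supplied per component or per invocation; the chat model and import paths here are assumptions to adapt to your setup:
Python
from galileo.handlers.langchain import GalileoCallback
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI  # any LangChain chat model works here

llm = ChatOpenAI(model="gpt-4o-mini")
callback = GalileoCallback()

# Callbacks are passed at invocation time rather than once at agent creation.
response = llm.invoke(
    [HumanMessage(content="What's the weather in Paris?")],
    config={"callbacks": [callback]},
)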

Async support

GalileoMiddleware fully supports asynchronous execution. The middleware automatically handles both sync and async contexts:
Python
# Async agent usage
async def main():
    agent = create_agent(
        model,
        tools=[get_weather, get_stock_price],
        middleware=[GalileoMiddleware()]
    )
    result = await agent.ainvoke({
        "messages": [HumanMessage(content="...")]
    })
    print(result)
The middleware uses the appropriate handler (GalileoBaseHandler or GalileoAsyncBaseHandler) based on the execution context.

Best practices

  1. Use middleware for LangGraph agents: For LangGraph-based agents, middleware provides the simplest integration
  2. Add meaningful metadata: Include relevant project and session information in your logger configuration
  3. Configure flush behavior: For high-volume applications, consider disabling auto-flush and batching your logs
  4. Share loggers: Use the same logger instance across middleware for unified trace management (see the sketch after this list)
  5. Monitor execution: Review the hierarchical traces in Galileo to understand agent behavior
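A minimal sketch of best practice 4, sharing one logger across two agents; the agent and tool names are placeholders:
Python
shared_logger = GalileoLogger(project_name="my-agent-project")

research_agent = create_agent(
    model,
    tools=[search_docs],
    middleware=[GalileoMiddleware(galileo_logger=shared_logger, flush_on_chain_end=False)],
)
summary_agent = create_agent(
    model,
    tools=[summarize_text],
    middleware=[GalileoMiddleware(galileo_logger=shared_logger, flush_on_chain_end=False)],
)

research_agent.invoke({"messages": [...]})
summary_agent.invoke({"messages": [...]})

# One flush sends the traces from both agents together.
shared_logger.flush()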

Example

You can find a complete example of using GalileoMiddleware with a LangGraph agent in the LangChain Middleware Example.

Next steps

Cookbooks