How LangChain’s DeepAgents Bring Real Planning and Memory to LLM Workflows
Why shallow agents fall short
Basic LLM agents that repeatedly call external tools are simple to build, but they struggle with long, multi-step tasks. Without the ability to plan ahead, delegate, and persist intermediate state, these agents tend to be ‘shallow’: they react step-by-step but can lose context, repeat work, or fail to coordinate complex workflows.
DeepAgents architecture: adding depth to agents
The deepagents library addresses these limitations by providing an architecture inspired by advanced systems such as Deep Research and Claude Code. It equips agents with four foundational capabilities that let them operate more like project managers than single-step tool users:
- A Planning Tool: lets the agent break a complex task into a sequence of manageable steps before executing them.
- Sub-Agents: lets the main agent spawn smaller, specialized agents to handle focused parts of a larger task.
- File System Access: a persistent workspace for notes, drafts, and artifacts so the agent can save progress and resume later.
- A Detailed System Prompt: clear, long-term instructions that guide the agent’s behavior, priorities, and output format.
These components combine to let developers build agents capable of planning, state management, and modular execution.
Core capabilities in practice
Planning and task breakdown
DeepAgents include a built-in write_todos-style planning tool that helps transform a high-level request into ordered subtasks. As the agent completes work or discovers new information, it can update the plan and track progress.
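As an illustration only (the real `write_todos` tool lives inside the library and its exact schema may differ), a plan can be modeled as an ordered list of steps, each with a mutable status the agent flips as it works:

```python
# Illustrative plan structure; the field names "task" and "status" are
# our assumption, not the library's internal schema.
todos = [
    {"task": "Save the user's question to question.txt", "status": "pending"},
    {"task": "Research via policy-research-agent", "status": "pending"},
    {"task": "Draft final_report.md", "status": "pending"},
]

def mark_done(todos, task_prefix):
    """Flip the first matching pending step to 'completed', mimicking
    how an agent updates its plan as work finishes."""
    for step in todos:
        if step["task"].startswith(task_prefix) and step["status"] == "pending":
            step["status"] = "completed"
            return step
    return None

mark_done(todos, "Save")
remaining = [s["task"] for s in todos if s["status"] == "pending"]
```

Keeping the plan as explicit state, rather than only in the conversation history, is what lets the agent track progress across many tool calls.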
Context management and file tools
Rather than stuffing everything into the LLM context, DeepAgents rely on file tools (ls, read_file, write_file, edit_file) to persist intermediate information. This prevents context overflow and enables agents to work on large or detailed projects without losing important details.
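These file tools operate on an in-state virtual workspace rather than your real disk. A toy sketch of that workspace follows; the method names mirror the tool names, but this is illustrative, not the library's implementation:

```python
class VirtualFS:
    """Toy in-memory workspace mimicking the ls/read_file/write_file/
    edit_file tools (illustrative only, not the library's code)."""

    def __init__(self):
        self.files = {}

    def ls(self):
        return sorted(self.files)

    def write_file(self, path, content):
        self.files[path] = content

    def read_file(self, path):
        return self.files[path]

    def edit_file(self, path, old, new):
        # Replace a snippet in place, as an agent does when revising.
        self.files[path] = self.files[path].replace(old, new)

fs = VirtualFS()
fs.write_file("question.txt", "What are the latest updates on the EU AI Act?")
fs.edit_file("question.txt", "latest", "2024")
```

Because the workspace lives outside the prompt, the agent can write a long draft once and later read back only the parts it needs.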
Sub-agent orchestration
Subagents allow delegation: the main agent keeps the overall view while subagents focus on research, drafting, editing, or other narrow jobs. This modular approach preserves clarity in the main agent’s context while enabling deep, specialist work inside each sub-agent.
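Conceptually, delegation is a lookup-and-dispatch over the subagent registry. In this sketch, `run_llm` is a stand-in for the real LLM loop the library manages, and `delegate` is a hypothetical helper, not a library function:

```python
def run_llm(system_prompt, task):
    # Stand-in for the actual model call; returns a canned reply here.
    return f"report on: {task}"

def delegate(subagents, name, task):
    """Find a registered subagent by name and run the task under its
    own prompt, returning only its final message to the caller."""
    config = next(a for a in subagents if a["name"] == name)
    return run_llm(config["system_prompt"], task)

subagents = [{"name": "policy-research-agent",
              "system_prompt": "You are a specialized AI policy researcher."}]
answer = delegate(subagents, "policy-research-agent", "EU AI Act status")
```

The key property is that only the subagent's final message crosses back into the main agent's context; the intermediate searches and drafts stay contained.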
Long-term memory
With integrations like LangGraph’s Store, agents can intentionally persist knowledge across sessions. That enables continuity: picking up prior work, recalling past decisions, or building cumulative research.
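The Store abstraction files values under a namespace and key. A minimal stand-in showing the shape of that API (a real Store adds search, persistence backends, and more; check the LangGraph docs for the exact signatures):

```python
class ToyStore:
    """Minimal stand-in for a LangGraph-style Store: values are filed
    under a (namespace, key) pair and survive across agent turns. A
    real Store would also persist across processes."""

    def __init__(self):
        self._data = {}

    def put(self, namespace, key, value):
        self._data[(namespace, key)] = value

    def get(self, namespace, key):
        return self._data.get((namespace, key))

store = ToyStore()
store.put(("research", "eu-ai-act"), "summary", {"text": "Risk-based framework."})
item = store.get(("research", "eu-ai-act"), "summary")
```

Namespacing by topic or session is what lets a later run recall exactly the prior work it needs without scanning everything.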
Setup and dependencies
Install the required packages before running the example:
!pip install deepagents tavily-python langchain-google-genai langchain-openai
Environment variables
This tutorial uses the OpenAI API key to run the Deep Agent, but DeepAgents work with other providers (Gemini, Anthropic, etc.). Set environment variables for the providers you plan to use:
import os
from getpass import getpass
os.environ['TAVILY_API_KEY'] = getpass('Enter Tavily API Key: ')
os.environ['OPENAI_API_KEY'] = getpass('Enter OpenAI API Key: ')
os.environ['GOOGLE_API_KEY'] = getpass('Enter Google API Key: ')
Importing the necessary libraries
Load the Python libraries and create a Tavily client instance used later by the example tools:
import os
from typing import Literal
from tavily import TavilyClient
from deepagents import create_deep_agent
tavily_client = TavilyClient()
Tools: adding web search
Deep Agents can use tools just like conventional tool-using agents. In this example, the agent gets a Tavily-powered internet_search helper to gather real-time documents from the web:
from typing import Literal
from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent
def internet_search(
    query: str,
    max_results: int = 5,
    topic: Literal["general", "news", "finance"] = "general",
    include_raw_content: bool = False,
):
    """Run a web search"""
    search_docs = tavily_client.search(
        query,
        max_results=max_results,
        include_raw_content=include_raw_content,
        topic=topic,
    )
    return search_docs
Sub-Agents: specialized workers
A key advantage of DeepAgents is the ability to spawn subagents with their own role, tools, and prompts. In the example, two subagents are defined:
- policy-research-agent: conducts deep research into AI policy, regulations, and ethics; it uses internet_search and returns a structured research output.
- policy-critique-agent: reviews the draft report for accuracy, citation quality, tone, and completeness without editing the file directly.
Example subagent prompts and definitions:
sub_research_prompt = """
You are a specialized AI policy researcher.
Conduct in-depth research on government policies, global regulations, and ethical frameworks related to artificial intelligence.
Your answer should:
- Provide key updates and trends
- Include relevant sources and laws (e.g., EU AI Act, U.S. Executive Orders)
- Compare global approaches when relevant
- Be written in clear, professional language
Only your FINAL message will be passed back to the main agent.
"""
research_sub_agent = {
    "name": "policy-research-agent",
    "description": "Used to research specific AI policy and regulation questions in depth.",
    "system_prompt": sub_research_prompt,
    "tools": [internet_search],
}
sub_critique_prompt = """
You are a policy editor reviewing a report on AI governance.
Check the report at `final_report.md` and the question at `question.txt`.
Focus on:
- Accuracy and completeness of legal information
- Proper citation of policy documents
- Balanced analysis of regional differences
- Clarity and neutrality of tone
Provide constructive feedback, but do NOT modify the report directly.
"""
critique_sub_agent = {
    "name": "policy-critique-agent",
    "description": "Critiques AI policy research reports for completeness, clarity, and accuracy.",
    "system_prompt": sub_critique_prompt,
}
System prompt and workflow guidance
DeepAgents ship with a robust default system prompt, but custom prompts tuned to your task will yield better results. In the example, the custom policy_research_instructions prompt turns the agent into an expert policy researcher and prescribes a clear workflow: save the question, use a research subagent, draft the report, optionally request critique, and revise.
policy_research_instructions = """
You are an expert AI policy researcher and analyst.
Your job is to investigate questions related to global AI regulation, ethics, and governance frameworks.
1️⃣ Save the user's question to `question.txt`
2️⃣ Use the `policy-research-agent` to perform in-depth research
3️⃣ Write a detailed report to `final_report.md`
4️⃣ Optionally, ask the `policy-critique-agent` to critique your draft
5️⃣ Revise if necessary, then output the final, comprehensive report
When writing the final report:
- Use Markdown with clear sections (## for each)
- Include citations in [Title](URL) format
- Add a ### Sources section at the end
- Write in professional, neutral tone suitable for policy briefings
"""
Defining the main Deep Agent
The create_deep_agent helper ties the model, tools, system prompt, and subagents together. In the example, the model is initialized with OpenAI’s gpt-4o, but the code also shows how to switch to Google Gemini if desired. DeepAgents defaults to Claude Sonnet 4.5 when no model is provided, while still supporting many backends via LangChain.
model = init_chat_model(model="openai:gpt-4o")
# model = init_chat_model(model="google_genai:gemini-2.5-flash")
agent = create_deep_agent(
    model=model,
    tools=[internet_search],
    system_prompt=policy_research_instructions,
    subagents=[research_sub_agent, critique_sub_agent],
)
Invoking the agent
Once configured, invoke the agent with a user query and let it orchestrate planning, sub-agent calls, file operations, and revisions:
query = "What are the latest updates on the EU AI Act and its global impact?"
result = agent.invoke({"messages": [{"role": "user", "content": query}]})
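The returned state is a dict carrying the conversation and, in DeepAgents, the agent's virtual filesystem. Assuming the common shape (a messages list plus a files mapping; verify the keys against your installed version), the final answer and the saved report can be read like this:

```python
# Hypothetical result shape for illustration; agent.invoke(...) returns
# the real equivalent. The "messages" and "files" keys are assumptions.
result = {
    "messages": [{"role": "assistant", "content": "## EU AI Act: 2024 Update ..."}],
    "files": {"final_report.md": "## EU AI Act: 2024 Update ..."},
}

final_answer = result["messages"][-1]["content"]
report = result["files"].get("final_report.md", "")
```

Reading the report out of the files mapping, rather than the chat transcript, is how you retrieve the full artifact the agent wrote during its workflow.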
Next steps
This example demonstrates how DeepAgents let you build modular, persistent, plan-driven LLM workflows. To adapt this pattern to your needs, customize the system prompt, add or swap tools, and define domain-specific subagents. The approach scales from research briefs to multi-step automation pipelines.