[ PLAYBOOK · 09 ] · MAY 13, 2026 · 7 min

LangChain in 2026: what to keep, what to skip.

LangChain v1.0 rewrote the framework on top of LangGraph. Keep LCEL for linear pipelines, create_agent for stateful work, the partner packages as adapters, and LangSmith for tracing. Skip the rest and learn the model SDKs.


The take

If you choose LangChain in 2026, you are choosing LangGraph. That is the load-bearing fact most teams still miss. The LangChain and LangGraph 1.0 release in October 2025 collapsed the framework onto a single graph runtime. What used to be a separate "agents library" is now LangGraph with conventions on top. The original chain abstractions (the AgentExecutor, the langchain.agents imports the way teams remember them) were renamed into the langchain-classic package and put on a slower release cadence.

The framework you bet on in 2023 got rewritten under you, twice. The right move in 2026 is to keep the parts that earn their keep, skip the parts that have since been deprecated, and drop to the model SDKs when neither path fits.

What v1.0 actually is

A practical map of what lives where, because the package names changed and most second-page Google results are still pre-v1.

langchain v1.x. Thinner than v0.x. Contains LCEL (the prompt | model | parser expression language), the new create_agent entry point that returns a LangGraph graph, a content-block message format that normalizes reasoning traces, citations, tool calls, and multimodal parts across providers, plus a few utility surfaces. This is the layer most application code should import from.

langgraph v1.x. The durable-execution runtime. State, checkpointing, time travel, human-in-the-loop interrupts, persistence, and streaming all live here. LangChain agents are LangGraph graphs underneath, so this dependency is non-optional for anything beyond a one-shot pipeline.

langchain-classic. The old surface (the v0.x AgentExecutor, the legacy chain helpers) renamed and frozen-ish. Receives security fixes and minimal patches. New code should not import from it.

Partner packages. The provider adapters (langchain-anthropic, langchain-openai, langchain-google-genai, langchain-aws, langchain-mistralai, and around fifty others) live in their own repositories with their own release cadence. They are the reason the framework still offers portability across providers.

LangSmith. The tracing, evaluation, and prompt-management service. Same company. Optional, but tightly integrated.

What earns its place in 2026

Four pieces of the v1 surface are worth the dependency.

LCEL for linear pipelines. RAG retrievers, document Q&A, structured-output pipelines, simple prompt-and-parse flows: LCEL is fast to write, easy to debug, and ships with batching, streaming, and async out of the box. A typical RAG chain is six lines. We use LCEL anywhere the work is genuinely a pipeline (input flows forward, no loops, no branching, no waiting for a human). The moment the flow needs state across turns, LCEL stops paying.

create_agent for agents over a tool surface. The new create_agent takes a model, a tool list, and an optional system prompt, and returns a LangGraph graph ready to invoke. In our audits, most agent paths (read tools, query a database, call an API, return an answer) fit this level. Beneath it sits a LangGraph state machine you can step into when the abstraction does not fit. Above it sits the same content-block message format the partner packages emit. That layering is the part of v1.0 that genuinely improves on the v0.x agent experience.

Partner packages as the adapter layer. If you have a real reason to switch providers (cost shifts, capability shifts, vendor outages, regulatory constraints), the partner packages are the cheapest way to keep the option open. One model.invoke() works against Claude Opus 4.7, GPT-5, Gemini 3, Bedrock-hosted Llama, and Mistral models with the same interface. The cost is a thin abstraction over each provider's SDK; the upside is a one-line provider swap when conditions change.
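What the one-line swap looks like in practice, as a hedged sketch: the model ids are placeholders, init_chat_model is the v1 helper that dispatches a "provider:model" string to the matching partner package, and nothing contacts a provider unless a key is set.

```python
# The provider swap is this string, not the call sites below it.
import os

MODEL_ID = os.getenv("MODEL_ID", "anthropic:claude-sonnet-4-5")
# e.g. "openai:gpt-5" or "google_genai:gemini-3" to move providers.

if os.getenv("ANTHROPIC_API_KEY") or os.getenv("OPENAI_API_KEY"):
    from langchain.chat_models import init_chat_model

    model = init_chat_model(MODEL_ID)
    print(model.invoke("Say hi in five words.").content)
```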

LangSmith for production tracing. If the stack is LangChain or LangGraph, LangSmith is the path of least resistance for tracing, dataset management, and prompt versioning. The integration is one environment variable. The alternative (rolling your own tracing layer, or wiring Langfuse through the SDK) is fine, but it costs setup time that LangSmith does not. The pick depends on data sovereignty: LangSmith is hosted, Langfuse self-hosts via Docker Compose in minutes.
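The wiring is a handful of environment variables. A sketch; variable names follow recent LangSmith docs, so verify against the current ones for your version.

```shell
# Turn on tracing for any LangChain / LangGraph process in this shell.
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="..."        # from the LangSmith settings page
export LANGSMITH_PROJECT="my-agent"   # optional: group traces by project
# Older snippets use LANGCHAIN_TRACING_V2 / LANGCHAIN_API_KEY instead;
# check the docs for which names your installed versions accept.
```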

What to skip

Five patterns we routinely flag in audits.

AgentExecutor and anything in langchain-classic. The original agent loop has been superseded by create_agent and the LangGraph runtime. The classic package gets fewer fixes than core, and every tutorial older than late 2025 still teaches it. New code that does from langchain.agents import AgentExecutor is shipping legacy on day one. The migration guide covers the swap; in the migrations we have run, the diff for a stock agent is small and mechanical.

MemorySaver in production. The default in-memory checkpointer is for prototypes. State lives in process memory, evaporates on restart, and shares nothing across replicas. Production runs on the Postgres or Redis checkpointer, with persistence keys you control, and with serialization tests for every state type the graph touches. The persistence layer is the most footgun-prone part of the runtime, and the failure mode (lost session state at the worst possible moment) is invisible until it bites a real user.

LCEL for stateful workflows. Conditional branching, loops, human-in-the-loop interrupts, persistent sessions: these belong in LangGraph. Forcing them into LCEL with custom runnables and side-effecting parsers produces code where every chain has a bespoke wrapper and nobody can find the bug. The line between LCEL and LangGraph is roughly the line between a function and a state machine. Pick by what the work actually is.

Multi-agent decomposition until you have earned it. Most multi-agent setups should not exist. They rarely beat a single agent with the right tool surface, they multiply the cost (one inference call becomes five), and they fail in ways that are hard to debug because the failure shape moves across agents between runs. Anthropic's own write-up on a research agent that genuinely needed multi-agent decomposition is worth reading; the takeaway is that the cases where the pattern earns its complexity are narrower than the marketing suggests.

Pre-v1 tutorials. Anything written before October 2025 is talking about a framework that no longer exists. The official changelog and the LangChain v1.x docs are the only safe starting points. Stack Overflow answers from 2024 and YouTube videos from before the rewrite are background noise at best, dangerous at worst.

When we replace the framework with raw SDK

The framework's value is portability and conventions. When the portability is not paying and the conventions are getting in the way, the right move is to drop to the model SDK directly.

Three signals show up.

Bleeding-edge model features. Extended thinking on Claude, computer use, prompt caching, batch processing, structured tool inputs, multimodal streaming: these features arrive in Anthropic's SDK and OpenAI's SDK weeks before the partner packages catch up. Code that depends on the latest capability either waits or drops out of the framework. We drop out.
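What dropping out looks like for one such feature, extended thinking, as a hedged sketch: the parameter names follow Anthropic's API docs, the model id is a placeholder, and the request only fires when a key is present. The request shape is built in a pure helper so it is visible offline.

```python
# Raw Anthropic SDK call with extended thinking enabled.
import os


def thinking_request_kwargs(budget_tokens: int = 2048) -> dict:
    # max_tokens must exceed the thinking budget.
    return {
        "model": "claude-sonnet-4-5",  # placeholder model id
        "max_tokens": 4096,
        "thinking": {"type": "enabled", "budget_tokens": budget_tokens},
        "messages": [{"role": "user", "content": "Plan the migration."}],
    }


if os.getenv("ANTHROPIC_API_KEY"):
    import anthropic

    client = anthropic.Anthropic()
    response = client.messages.create(**thinking_request_kwargs())
    print(response.content)
```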

You are not switching providers. The abstraction tax is justified by the option to swap. If the codebase has been on Anthropic for three years and the team has no plan to move, the abstraction is overhead with no benefit. We have seen 800-line wrappers around langchain-anthropic that exist only to push the same parameters through the framework when a 200-line direct integration would do the same thing and let new engineers read the code.

The codebase has accumulated framework debt. Imports from three major versions of langchain. Custom runnables with no documentation. RunnablePassthrough chains five levels deep. When the framework code is more complex than the business logic, the framework is no longer paying. Migrating one path at a time to the raw SDK is usually faster than a full v0-to-v1 rewrite, because the provider SDKs have stayed far more stable over the years than the framework has.

A 5-step rollout for an existing codebase

If you are on a pre-v1 LangChain codebase, this is the order we recommend.

Step 1. Inventory imports. Grep the codebase for from langchain and group by module. Any import of AgentExecutor from langchain.agents, anything from langchain.chains, and the pre-v1 memory classes go on the migration list. Anything from langchain_core, the partner packages, or v1 langchain stays.
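The grouping rule can be sketched as a small stdlib script. The classification mirrors the rule above; the patterns are ours and should be adjusted to your codebase.

```python
# Classify `from langchain...` import lines into migrate / keep buckets.
import re

MIGRATE = re.compile(r"from langchain\.(agents|chains|memory)\b")
KEEP = re.compile(r"from (langchain_core|langchain_\w+|langchain)\b")


def classify_import(line: str) -> str:
    if MIGRATE.search(line):
        return "migrate"  # v0.x surface, now in langchain-classic
    if KEEP.search(line):
        return "keep"  # langchain_core, partner packages, v1 langchain
    return "ignore"  # not a langchain import


for line in [
    "from langchain.agents import AgentExecutor",
    "from langchain_core.prompts import ChatPromptTemplate",
    "from langchain.chat_models import init_chat_model",
]:
    print(classify_import(line), "<-", line)
```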

Step 2. Mark legacy paths. Add a comment # classic on every import from langchain-classic or its v0.x equivalent. This is a navigation aid for the team, not a refactor yet. The goal is to make the migration boundary visible in PRs.

Step 3. Swap one agent at a time. Pick the lowest-traffic agent path. Migrate it from AgentExecutor to create_agent. Run the same eval set against both code paths in parallel for a week. Promote when the v1 path matches or beats the v0 path on your binary pass-fail rubric.
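The promotion gate in that step reduces to a pass-rate comparison. A stdlib sketch: run_v0 and run_v1 are stand-ins for the AgentExecutor and create_agent code paths, each returning True or False per case under the binary rubric.

```python
# Compare two code paths on the same eval set; promote v1 when it
# matches or beats v0 on pass rate.
def pass_rate(run, eval_set) -> float:
    results = [bool(run(case)) for case in eval_set]
    return sum(results) / len(results)


def ready_to_promote(run_v0, run_v1, eval_set) -> bool:
    return pass_rate(run_v1, eval_set) >= pass_rate(run_v0, eval_set)
```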

Step 4. Move state to a real checkpointer. Replace any MemorySaver usage with the Postgres or Redis checkpointer. Test serialization for every state type the graph touches. Add a chaos test that kills the process mid-graph and resumes; if state does not round-trip cleanly, the persistence layer is not ready for production.
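The round-trip check in that step can be sketched framework-agnostically. json stands in for the checkpointer's serializer here, which is an assumption; run the same shape of check through your real checkpointer in CI. The sketch also shows the classic silent failure: tuples come back as lists.

```python
# Does this state survive serialize -> simulated crash -> deserialize?
import json


def round_trips(state: dict) -> bool:
    try:
        blob = json.dumps(state)  # the only thing that survives the "crash"
    except TypeError:
        return False  # not serializable at all (e.g. sets, custom objects)
    return json.loads(blob) == state


# Plain message state survives.
print(round_trips({"messages": [{"role": "user", "content": "hi"}], "step": 3}))
# Tuples silently come back as lists, so equality fails.
print(round_trips({"pair": (1, 2)}))
```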

Step 5. Add LangSmith or equivalent tracing. If there is no tracing yet, this is the highest-leverage half-day of work in the migration. The traces are what let you compare v0 and v1 agent behavior side by side. Without them, the migration is opinion. With them, it is measurement.

The teams that come out of this in good shape are not the ones that rewrote everything. They are the ones that drew a line, kept what still pays, replaced what does not, and dropped to the SDK where the framework stopped helping.