Description
Checked other resources
- This is a feature request, not a bug report or usage question.
- I added a clear and descriptive title that summarizes the feature request.
- I used the GitHub search to find a similar feature request and didn't find it.
- I checked the LangChain documentation and API reference to see if this feature already exists.
- This is not related to the langchain-community package.
Package (Required)
- langchain
- langchain-openai
- langchain-anthropic
- langchain-classic
- langchain-core
- langchain-cli
- langchain-model-profiles
- langchain-tests
- langchain-text-splitters
- langchain-chroma
- langchain-deepseek
- langchain-exa
- langchain-fireworks
- langchain-groq
- langchain-huggingface
- langchain-mistralai
- langchain-nomic
- langchain-ollama
- langchain-perplexity
- langchain-prompty
- langchain-qdrant
- langchain-xai
- Other / not sure / general
Feature Description
I would like to request the addition of an OutputFixingParser (or a similar fail-safe parsing utility) within LangChain’s core parsing framework.
This component would automatically correct or normalize LLM outputs so they match the required schema before raising an error.
Use Case
Problem / Motivation
When working with structured outputs (e.g., StructuredOutputParser, Pydantic schemas, JSON responses), LLMs often return:
- Slightly malformed JSON
- Missing or extra keys
- Format deviations that break strict parsers
Currently, developers need to:
- Write custom validation and retry logic
- Create ad-hoc wrappers around existing parsers
- Manually prompt-engineer to avoid formatting issues
This adds unnecessary overhead and makes pipelines less robust.
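To illustrate the overhead, here is a minimal sketch of the kind of ad-hoc repair loop developers currently have to write by hand. The `repair_llm` callable is hypothetical, standing in for a real chat-model invocation:

```python
import json


def parse_with_repair(raw: str, repair_llm, max_retries: int = 2) -> dict:
    """Try to parse JSON; on failure, ask an LLM to repair it.

    `repair_llm` is a hypothetical callable (prompt -> str) standing in
    for a real chat model call such as `chat_model.invoke(...)`.
    """
    attempt = raw
    for _ in range(max_retries + 1):
        try:
            return json.loads(attempt)
        except json.JSONDecodeError as err:
            # Feed the error and the bad output back to the model.
            attempt = repair_llm(
                f"Fix this so it is valid JSON. Error: {err}\n{attempt}"
            )
    raise ValueError(f"Could not repair output after {max_retries} retries")
```

Every project ends up re-implementing some variant of this loop, with slightly different retry limits, prompts, and error handling.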
Proposed Solution
Introduce a parser—similar to OutputFixingParser from earlier LangChain versions—that:
- Takes a target parser and expected schema
- Attempts to parse the raw output
- If parsing fails, automatically uses an LLM to “repair” the output
- Re-parses until valid (or until a retry limit is reached)
Example desired interface:

```python
parser = OutputFixingParser.from_llm(
    llm=chat_model,
    parser=structured_parser,
)
```

Or an alternative such as:

```python
parser = AutoRepairingParser(
    llm=chat_model,
    schema=my_schema,
)
```

Benefits
- More robust structured-output workflows
- Less boilerplate for parsing + correction
- Higher reliability across model providers
- Easier migration from older LangChain workflows
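To make the proposed behavior concrete, here is a minimal sketch of how such a parser could work. This is not a real LangChain API; the class name, the `llm` callable (prompt -> str, standing in for a chat model), and the `parse` callable wrapping the target parser are all illustrative assumptions:

```python
import json


class AutoRepairingParser:
    """Hypothetical sketch of the requested parser (not a real LangChain API).

    Wraps a target `parse` callable; on failure, asks `llm` to repair the
    text, then re-parses, up to `max_retries` times.
    """

    def __init__(self, llm, parse, max_retries: int = 2):
        self.llm = llm          # callable: prompt -> str (stand-in for a chat model)
        self.parse = parse      # callable: str -> parsed object (the target parser)
        self.max_retries = max_retries

    def invoke(self, text: str):
        last_error = None
        for _ in range(self.max_retries + 1):
            try:
                return self.parse(text)
            except Exception as err:
                last_error = err
                # Hand the failing output and the error back to the model.
                text = self.llm(
                    f"The output below failed to parse ({err}). "
                    f"Return a corrected version only:\n{text}"
                )
        raise ValueError(
            f"Repair failed after {self.max_retries} retries"
        ) from last_error
```

For example, `AutoRepairingParser(llm=chat_model_call, parse=json.loads).invoke(raw)` would transparently repair malformed JSON before giving up.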
Alternatives Considered
Manual retry-with-fix logic around each parser, which must be reimplemented per project and is easy to get subtly wrong.
Additional Context
This feature existed in earlier versions of LangChain, and many users relied on it. Bringing back an equivalent parser would greatly help maintain compatibility and simplify structured output pipelines.