
LangChain 1.0 Alpha Release Testing & Review

RiyazPrynAI edited this page Sep 18, 2025 · 2 revisions

LangChain 1.0

Package Installation:

  • pip install --pre -U langchain

New Introductions:

  • create_agent
  • Structured output logic
  • New middleware API

Sample code for short-term memory (thread-level persistence):

from dotenv import load_dotenv
from langchain.agents import create_agent
from langgraph.checkpoint.memory import InMemorySaver
load_dotenv(dotenv_path="../.env", verbose=True)

agent = create_agent(
    "openai:gpt-5-nano",
    tools=[],
    checkpointer=InMemorySaver(),
)

THREAD_ID = "user_123"

while True:
    user_question = input("Enter your question: ")
    if user_question.lower() == "quit":
        break
    response = agent.invoke(
        {"messages": [("user", user_question)]},
        config={"configurable": {"thread_id": THREAD_ID}}
    )
    print(response['messages'][-1].content)

Structured output logic:

  • LangChain’s prebuilt ReAct agent create_agent() handles structured output automatically. The user sets their desired structured output schema, and when the model generates the structured data, it’s captured, validated, and returned in the 'structured_response' key of the agent’s state.
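The access pattern can be sketched with a hand-written state dict standing in for a real agent run (the ContactInfo schema here is illustrative; a real call would populate the state via agent.invoke):

```python
from pydantic import BaseModel, Field

class ContactInfo(BaseModel):
    name: str = Field(description="Person's name")
    email: str = Field(description="Email address")

# A real run would be: result = agent.invoke({"messages": [...]})
# Here we fake the returned state to show where the parsed object lands.
result = {
    "messages": [],
    "structured_response": ContactInfo(name="John Doe", email="john@email.com"),
}

# The validated Pydantic object is available under 'structured_response':
print(result["structured_response"].email)
```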

Error Handling:

Issue 1: Multiple Structured Outputs Error

  • When a model incorrectly calls multiple structured output tools, the agent provides error feedback in a ToolMessage and prompts the model to retry:
from pydantic import BaseModel, Field
from typing import Union
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy

class ContactInfo(BaseModel):
    name: str = Field(description="Person's name")
    email: str = Field(description="Email address")

class EventDetails(BaseModel):
    event_name: str = Field(description="Name of the event")
    date: str = Field(description="Event date")

class AllInfo(BaseModel):
    name: str = Field(description="Person's name")
    email: str = Field(description="Email address")
    event_name: str = Field(description="Name of the event")
    date: str = Field(description="Event date")


agent = create_agent(
    model="openai:gpt-5",
    tools=[],
    response_format=ToolStrategy(Union[ContactInfo, EventDetails]),
)

agent.invoke({
    "messages": [{"role": "user", "content": "Extract info: get all fields {name}, {email}, {event_name}, {date} from John Doe (john@email.com) is organizing Tech Conference on March 15th"}]
})

Output of the code run (screenshot):

[image]

Review: Multiple Structured Outputs Issue

Observed Behavior:
When requesting all fields from both ContactInfo and EventDetails, the model returned:
structured_response: EventDetails(event_name='Tech Conference', date='March 15th')

Analysis:
Error handling is functioning as intended: the tool message correctly prompts the model to select a single tool. In practice, however, the model picks one schema (tool) seemingly at random across runs, rather than producing a combined result.

Review Question:
Why is the model unable to invoke two tools simultaneously and return a combined schema containing fields from both ContactInfo and EventDetails in a single structured response?
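The AllInfo class defined above (but unused in the example) suggests one workaround: pass a single combined schema instead of the Union, so the model has exactly one structured-output tool and never has to choose. A minimal sketch of the combined object follows; the agent call itself is omitted, but under the alpha API it would presumably be response_format=ToolStrategy(AllInfo):

```python
from pydantic import BaseModel, Field

# Combined schema: one structured-output tool covering both entities,
# so the model never has to pick between ContactInfo and EventDetails.
class AllInfo(BaseModel):
    name: str = Field(description="Person's name")
    email: str = Field(description="Email address")
    event_name: str = Field(description="Name of the event")
    date: str = Field(description="Event date")

# A single validated object carries every requested field:
info = AllInfo(
    name="John Doe",
    email="john@email.com",
    event_name="Tech Conference",
    date="March 15th",
)
print(info.event_name)
```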

Issue 2: Schema Validation Error

  • When structured output doesn’t match the expected schema, the agent provides specific error feedback:
from pydantic import BaseModel, Field
from typing import Optional
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy

class ProductRating(BaseModel):
    rating: Optional[int] = Field(description="Rating from 1-5", ge=1, le=5)
    comment: str = Field(description="Review comment")


agent = create_agent(
    model="openai:gpt-5",
    tools=[],
    response_format=ToolStrategy(
        schema=ProductRating,
        handle_errors="Please provide a valid rating between 1-5 and include a comment.",  # Default: handle_errors=True
    ),
    prompt="You are a helpful assistant that parses product reviews. Do not make any field or value up.",
)

agent.invoke({
    "messages": [{"role": "user", "content": "Parse this: Amazing product, 6/10!"}]
})

Screenshot of the code run output:

[image]

Expected Response

================================= Tool Message =================================
Name: ProductRating

Error: Failed to parse structured output for tool 'ProductRating': 1 validation error for ProductRating.rating
  Input should be less than or equal to 5 [type=less_than_equal, input_value=6, input_type=int].
 Please fix your mistakes.

Review: Schema Validation Handling Issue

Observed Behavior:
The agent did not handle the error by returning a ToolMessage.
Additionally, the model did not attempt a follow-up call to revalidate the response.
Instead, it returned the initial (invalid) response without retrying.

Review Question:
Why did the agent fail to trigger error handling via a tool message and prompt the model to revalidate, rather than accepting and returning the first invalid output?
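The validation failure itself can be reproduced with plain Pydantic, independent of the agent. This sketch shows why a rating of 6 must be rejected by the schema, which is what the expected ToolMessage above reports:

```python
from pydantic import BaseModel, Field, ValidationError
from typing import Optional

class ProductRating(BaseModel):
    rating: Optional[int] = Field(description="Rating from 1-5", ge=1, le=5)
    comment: str = Field(description="Review comment")

# 6 violates the le=5 constraint, so pydantic raises before
# any value is accepted into the model.
try:
    ProductRating(rating=6, comment="Amazing product")
    error_type = None
except ValidationError as exc:
    error_type = exc.errors()[0]["type"]

print(error_type)
```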

langchain-core 1.0

Package Installation

  • pip install --pre -U langchain-core

Core Addition

  • .content_blocks property on message objects

https://docs.langchain.com/oss/python/langchain/messages#standard-content-blocks

LangGraph 1.0

Package Installation:

  • pip install --pre -U langgraph