LangChain 1.0 Alpha Release Testing & Review
Alpha Release Notes: https://blog.langchain.com/langchain-langchain-1-0-alpha-releases/
- pip install --pre -U langchain
- create_agent
- Structured output logic
- New middleware API
create_agent: https://docs.langchain.com/oss/python/langchain/agents
```python
from dotenv import load_dotenv
from langchain.agents import create_agent
from langgraph.checkpoint.memory import InMemorySaver

load_dotenv(dotenv_path="../.env", verbose=True)

agent = create_agent(
    "openai:gpt-5-nano",
    tools=[],
    checkpointer=InMemorySaver(),
)

THREAD_ID = "user_123"

while True:
    user_question = input("Enter your question: ")
    if user_question.lower() == "quit":
        break
    response = agent.invoke(
        {"messages": [("user", user_question)]},
        config={"configurable": {"thread_id": THREAD_ID}},
    )
    print(response["messages"][-1].content)
```
You can find the implementation of this code under Approach 3 at: https://github.com/PrynAI/PrynAI-LangGraph-Agents/blob/main/All-Agents-Notebooks/question_answering_agent.ipynb
- LangChain’s prebuilt ReAct agent create_agent() handles structured output automatically. The user sets their desired structured output schema, and when the model generates the structured data, it’s captured, validated, and returned in the 'structured_response' key of the agent’s state.
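The capture-validate-return flow described above can be sketched in plain Python. This is only an illustration of the logic, not LangChain's actual implementation: the schema is a stand-in dataclass and the validation is hand-rolled instead of Pydantic.

```python
from dataclasses import dataclass


@dataclass
class ContactInfo:
    """Stand-in for a user-defined structured output schema."""
    name: str
    email: str


def capture_structured_response(state: dict, tool_args: dict) -> dict:
    """Validate the model's structured-output tool call and store the
    result under the 'structured_response' key of the agent state."""
    missing = [f for f in ("name", "email") if f not in tool_args]
    if missing:
        # In the real agent this becomes error feedback in a ToolMessage.
        raise ValueError(f"missing fields: {missing}")
    state["structured_response"] = ContactInfo(**tool_args)
    return state


state = capture_structured_response(
    {"messages": []},
    {"name": "John Doe", "email": "john@email.com"},
)
print(state["structured_response"])  # ContactInfo(name='John Doe', email='john@email.com')
```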
- When a model incorrectly calls multiple structured output tools, the agent provides error feedback in a ToolMessage and prompts the model to retry:
```python
from typing import Union

from pydantic import BaseModel, Field

from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy


class ContactInfo(BaseModel):
    name: str = Field(description="Person's name")
    email: str = Field(description="Email address")


class EventDetails(BaseModel):
    event_name: str = Field(description="Name of the event")
    date: str = Field(description="Event date")


class AllInfo(BaseModel):
    name: str = Field(description="Person's name")
    email: str = Field(description="Email address")
    event_name: str = Field(description="Name of the event")
    date: str = Field(description="Event date")


agent = create_agent(
    model="openai:gpt-5",
    tools=[],
    response_format=ToolStrategy(Union[ContactInfo, EventDetails]),
)

agent.invoke({
    "messages": [{
        "role": "user",
        "content": (
            "Extract info: get all fields {name}, {email}, {event_name}, {date} "
            "from John Doe (john@email.com) is organizing Tech Conference on March 15th"
        ),
    }]
})
```
Observed Behavior:
When requesting all fields from both ContactInfo and EventDetails, the model returned:
```
structured_response: EventDetails(event_name='Tech Conference', date='March 15th')
```
Analysis:
Error handling is functioning as intended: the ToolMessage correctly prompts the model to select a single tool. In practice, however, the model ends up choosing one class (tool) essentially at random across multiple executions, rather than consistently returning both.
Review Question:
Why is the model unable to invoke two tools simultaneously and return a combined schema containing fields from both ContactInfo and EventDetails in a single structured response?
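One plausible explanation, worth verifying against the ToolStrategy source: each Union member appears to be registered as its own structured-output tool, and the agent accepts exactly one such tool call per response, so fields can never be merged across alternatives. The combined AllInfo schema defined above sidesteps this by making the union of fields a single tool. The difference can be illustrated with plain stdlib dataclasses (no LangChain involved; this mirrors the schemas above, not the library's internals):

```python
from dataclasses import dataclass, fields


@dataclass
class ContactInfo:
    name: str
    email: str


@dataclass
class EventDetails:
    event_name: str
    date: str


@dataclass
class AllInfo:
    name: str
    email: str
    event_name: str
    date: str


extracted = {
    "name": "John Doe",
    "email": "john@email.com",
    "event_name": "Tech Conference",
    "date": "March 15th",
}

# A Union forces a single schema choice, so either alternative drops fields:
for schema in (ContactInfo, EventDetails):
    names = {f.name for f in fields(schema)}
    print(schema.__name__, "misses", sorted(set(extracted) - names))

# A combined schema covers every field in one structured response:
combined = AllInfo(**extracted)
print(combined.event_name, combined.email)
```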
- When structured output doesn’t match the expected schema, the agent provides specific error feedback:
```python
from typing import Optional

from pydantic import BaseModel, Field

from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy


class ProductRating(BaseModel):
    rating: Optional[int] = Field(description="Rating from 1-5", ge=1, le=5)
    comment: str = Field(description="Review comment")


agent = create_agent(
    model="openai:gpt-5",
    tools=[],
    response_format=ToolStrategy(
        schema=ProductRating,
        handle_errors="Please provide a valid rating between 1-5 and include a comment.",  # Default: handle_errors=True
    ),
    prompt="You are a helpful assistant that parses product reviews. Do not make any field or value up.",
)

agent.invoke({
    "messages": [{"role": "user", "content": "Parse this: Amazing product, 6/10!"}]
})
```
Expected error feedback:
```
================================= Tool Message =================================
Name: ProductRating

Error: Failed to parse structured output for tool 'ProductRating': 1 validation error for ProductRating.rating
  Input should be less than or equal to 5 [type=less_than_equal, input_value=6, input_type=int].
 Please fix your mistakes.
```
Observed Behavior:
The agent did not surface the error via a ToolMessage, and the model made no follow-up call to re-validate its response. Instead, it returned the initial (invalid) output without retrying.
Review Question:
Why did the agent fail to trigger error handling via a tool message and prompt the model to revalidate, rather than accepting and returning the first invalid output?
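For context, the validate-feedback-retry loop that ToolStrategy is expected to drive can be sketched with a stubbed model. Everything here is illustrative: the stub, the hand-rolled range check, and the loop are assumptions about the intended behavior, not LangChain's code (the real agent validates with Pydantic and feeds errors back as ToolMessages).

```python
def stub_model(messages: list[dict]) -> dict:
    """Pretend model: returns rating=6 first, corrects to 5 after error feedback."""
    got_error = any(m["role"] == "tool" and "Error" in m["content"] for m in messages)
    return {"rating": 5 if got_error else 6, "comment": "Amazing product"}


def validate(args: dict) -> "str | None":
    """Hand-rolled stand-in for the Pydantic ge=1/le=5 constraint."""
    r = args.get("rating")
    if r is not None and not (1 <= r <= 5):
        return f"Error: Input should be less than or equal to 5 [input_value={r}]"
    return None


def invoke_with_retries(messages: list[dict], max_retries: int = 2) -> dict:
    for _ in range(max_retries + 1):
        args = stub_model(messages)
        error = validate(args)
        if error is None:
            return args  # valid structured response
        # Feed the validation error back to the model as tool-style feedback.
        messages.append({"role": "tool", "content": error})
    raise RuntimeError("structured output still invalid after retries")


result = invoke_with_retries([{"role": "user", "content": "Parse this: Amazing product, 6/10!"}])
print(result)  # {'rating': 5, 'comment': 'Amazing product'}
```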
- pip install --pre -U langchain-core
- .content_blocks property in "messages"
https://docs.langchain.com/oss/python/langchain/messages#standard-content-blocks
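The idea behind .content_blocks is to normalize provider-specific message content into a standard list of typed blocks. A rough stdlib sketch of that kind of normalization (the block shapes here are assumptions for illustration; see the docs link above for the actual standard block types):

```python
def to_content_blocks(content) -> list:
    """Normalize message content (a plain string or a provider-specific list)
    into a uniform list of {"type": ..., ...} block dicts."""
    if isinstance(content, str):
        return [{"type": "text", "text": content}]
    blocks = []
    for part in content:
        if isinstance(part, str):
            blocks.append({"type": "text", "text": part})
        elif part.get("type") == "text":
            blocks.append({"type": "text", "text": part.get("text", "")})
        else:
            blocks.append(part)  # pass non-text blocks through unchanged
    return blocks


print(to_content_blocks("hello"))
print(to_content_blocks(["a", {"type": "text", "text": "b"}, {"type": "image_url", "image_url": "u"}]))
```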
- pip install --pre -U langgraph