# [BOT ISSUE] LangChain4j instrumentation only supports OpenAI-backed chat models #60
## Summary
The LangChain4j instrumentation only wraps `OpenAiChatModel` and `OpenAiStreamingChatModel`. All other LangChain4j model providers — including Anthropic, Google Gemini, Azure OpenAI, and Mistral — are explicitly skipped with a warning, so calls through those providers produce no LLM-level span data.
Unlike the Spring AI integration, where unsupported providers can fall through to direct SDK auto-instrumentation (e.g. the GenAI module intercepts `Client$Builder.build()`), LangChain4j model providers use their own internal HTTP clients rather than the upstream SDK clients. As a result, no other auto-instrumentation module catches these calls.
## What is missing
`BraintrustLangchain.wrap(OpenTelemetry, AiServices)` (lines 36-51) performs `instanceof` checks against only two model classes:
```java
if (chatModel instanceof OpenAiChatModel oaiModel) {
    aiServices.chatModel(wrap(openTelemetry, oaiModel));
} else {
    log.warn("unsupported model: {}. LLM calls will not be instrumented",
        chatModel.getClass().getName());
}
```

The auto-instrumentation module (`LangchainInstrumentationModule`) similarly registers ByteBuddy advice only for `OpenAiChatModel$OpenAiChatModelBuilder.build()` and `OpenAiStreamingChatModel$OpenAiStreamingChatModelBuilder.build()`.
The most impactful missing model providers are:
| LangChain4j model class | Provider |
|---|---|
| `AnthropicChatModel` | Anthropic Claude |
| `GoogleAiGeminiChatModel` / `GoogleAiGeminiStreamingChatModel` | Google Gemini |
| `VertexAiGeminiChatModel` / `VertexAiGeminiStreamingChatModel` | Google Vertex AI |
| `AzureOpenAiChatModel` / `AzureOpenAiStreamingChatModel` | Azure OpenAI |
| `MistralAiChatModel` / `MistralAiStreamingChatModel` | Mistral AI |
Since the existing OpenAI instrumentation works by wrapping LangChain4j's internal `HttpClient` interface (extracted via reflection from the model's internal client), the same pattern could apply to other providers — each uses the same `dev.langchain4j.http.client.HttpClient` interface internally.
## Braintrust docs status
Braintrust docs do not have a LangChain4j-specific page. The Java SDK is listed as supporting the OpenAI, Anthropic, and Gemini providers at https://www.braintrust.dev/docs/integrations/ai-providers, but LangChain4j as a framework integration is not documented.
## Upstream sources
- LangChain4j supports 20+ model providers as first-party integrations: https://github.com/langchain4j/langchain4j
- LangChain4j model provider modules: `langchain4j-anthropic`, `langchain4j-google-ai-gemini`, `langchain4j-vertex-ai-gemini`, `langchain4j-azure-open-ai`, `langchain4j-mistral-ai`, etc.
- All LangChain4j model providers use their own `dev.langchain4j.http.client.HttpClient` internally (not the upstream provider SDK), so upstream SDK auto-instrumentation does not cover them.
## Local files inspected
- `braintrust-sdk/instrumentation/langchain_1_8_0/src/main/java/dev/braintrust/instrumentation/langchain/v1_8_0/BraintrustLangchain.java` — lines 36-51 (`instanceof` check limits to OpenAI only), lines 88-152 (OpenAI-specific wrapping via internal HttpClient)
- `braintrust-sdk/instrumentation/langchain_1_8_0/src/main/java/dev/braintrust/instrumentation/langchain/v1_8_0/auto/LangchainInstrumentationModule.java` — lines 55-68, 85-100 (auto-instrumentation only targets OpenAI builders)
- `braintrust-sdk/instrumentation/langchain_1_8_0/src/main/java/dev/braintrust/instrumentation/langchain/v1_8_0/WrappedHttpClient.java` — the HTTP client wrapper that could be reused for other providers