docs: Add Humanloop as an observability provider #1
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,265 @@ | ||
| --- | ||
| title: Humanloop | ||
| description: Monitor and trace your AI SDK application with Humanloop, the LLM evals platform for enterprises. | ||
| --- | ||
|
|
||
| # Humanloop Observability | ||
|
|
||
| [Humanloop](https://humanloop.com/) is the LLM evals platform for enterprises, giving you the tools that top teams use to ship and scale AI with confidence. Humanloop integrates with the AI SDK via OpenTelemetry. | ||
|
|
||
| The AI SDK can log to [Humanloop](https://humanloop.com/) via OpenTelemetry. This integration enables trace visualization, cost/latency/error monitoring, and evaluation by code, LLM, or human judges. | ||
|
**Review comment:** remove this paragraph, rephrased version in previous comment
||
|
|
||
| ## Reference | ||
|
|
||
| ### Telemetry Configuration | ||
|
|
||
| The AI SDK supports tracing through the `experimental_telemetry` parameter that can be set on each request. | ||
|
|
||
| ```ts highlight="7" | ||
| import { openai } from '@ai-sdk/openai'; | ||
| import { generateText } from 'ai'; | ||
|  | ||
| const result = await generateText({ | ||
|   model: openai('gpt-4o'), | ||
|   prompt: 'Write a short story about a cat.', | ||
|   experimental_telemetry: { isEnabled: true }, | ||
|
**Review comment:** want highlight on this line
||
| }); | ||
| ``` | ||
|
|
||
| ### Metadata Parameters | ||
|
|
||
| The Humanloop OpenTelemetry Receiver accepts these metadata parameters: | ||
|
|
||
| | Parameter | Required | Description | | ||
| | --------------------- | -------- | ------------------------------------------------------------------------------ | | ||
| | `humanloopPromptPath` | Yes | Path to the prompt on Humanloop. Generation spans create Logs for this Prompt. | | ||
|
**Review comment:** Reminder that this is a first point of contact for new users: they're not yet in on the HL lingo. Alternative descriptions:
|
||
| | `humanloopFlowPath` | No | Path to the flow on Humanloop. Groups steps into a single Flow Log. | | ||
| | `humanloopFlowId` | No | ID of a Flow Log on Humanloop. Groups multiple calls into a single Flow Log. | | ||
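
To make the table concrete, here is a small sketch of how these metadata keys fit together. The `humanloopTelemetry` helper below is hypothetical (not part of the AI SDK or the Humanloop SDK); it just assembles the `experimental_telemetry` object with the keys described above, omitting the optional ones when unset:

```typescript
// Hypothetical helper: builds the experimental_telemetry object with the
// Humanloop metadata keys from the table above. Optional keys are only
// included when a value is provided.
type HumanloopTelemetry = {
  isEnabled: boolean;
  metadata: {
    humanloopPromptPath: string;
    humanloopFlowPath?: string;
    humanloopFlowId?: string;
  };
};

function humanloopTelemetry(
  promptPath: string,
  opts: { flowPath?: string; flowId?: string } = {},
): HumanloopTelemetry {
  return {
    isEnabled: true,
    metadata: {
      humanloopPromptPath: promptPath,
      ...(opts.flowPath ? { humanloopFlowPath: opts.flowPath } : {}),
      ...(opts.flowId ? { humanloopFlowId: opts.flowId } : {}),
    },
  };
}

console.log(humanloopTelemetry('Poets/Edgar Allan Poe', { flowId: 'fl_123' }));
```

You would pass the result as the `experimental_telemetry` option of a `generateText` call.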
|
|
||
| ## Setup | ||
|
|
||
| ### Prerequisites | ||
|
|
||
| - A Humanloop account and API key. | ||
| - [Sign up](https://app.humanloop.com/signup) or [login](https://app.humanloop.com/login) to Humanloop. | ||
| - Create an API key in [Organization Settings](https://app.humanloop.com/account/api-keys). | ||
| - A Vercel AI SDK application. | ||
|
|
||
| ### Telemetry Configuration | ||
|
|
||
| When sending traces to Humanloop, these parameters are added to the telemetry object: | ||
|
|
||
| ```ts | ||
| experimental_telemetry: { | ||
|   isEnabled: true, | ||
|   functionId: 'unique-function-id', // Optional; used to group telemetry data by function | ||
|
**Review comment:** Don't understand this comment; can you try describing why I would add a functionId / what the id does?
||
|   metadata: { | ||
|     humanloopPromptPath: 'Path/To/Prompt', | ||
|     humanloopFlowPath: 'Path/To/Flow', // Optional | ||
|     humanloopFlowId: 'flow-log-id' // Optional | ||
|   }, | ||
| } | ||
| ``` | ||
|
|
||
| ### Environment Variables | ||
|
|
||
| When using OpenTelemetry with Humanloop, the following environment variables configure the OTLP exporter: | ||
|
|
||
| ```bash | ||
| OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel | ||
| OTEL_EXPORTER_OTLP_PROTOCOL=http/json | ||
| OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key | ||
|
**Review comment:** Actually like the xxxxx pattern here; it's a good way to signal you need to add
||
| ``` | ||
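
For reference, `OTEL_EXPORTER_OTLP_HEADERS` is defined by the OpenTelemetry specification as a comma-separated list of `key=value` pairs. The sketch below (illustrative only, not taken from any exporter's source) shows how such a value is interpreted:

```typescript
// Sketch of how OTLP exporters interpret OTEL_EXPORTER_OTLP_HEADERS:
// a comma-separated list of key=value pairs, e.g. "X-API-KEY=hl_sk_...".
function parseOtlpHeaders(raw: string): Record<string, string> {
  const headers: Record<string, string> = {};
  for (const pair of raw.split(',')) {
    const idx = pair.indexOf('=');
    if (idx === -1) continue; // skip malformed entries
    headers[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
  }
  return headers;
}

console.log(parseOtlpHeaders('X-API-KEY=hl_sk_example'));
// → { 'X-API-KEY': 'hl_sk_example' }
```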
|
|
||
| ## Framework Implementation | ||
|
**Review comment:** OpenTelemetry Setup?
||
|
|
||
| <Tabs items={['Next.js', 'Node.js']}> | ||
| <Tab> | ||
| Next.js has support for OpenTelemetry instrumentation on the framework level. Learn more about it in the [Next.js OpenTelemetry guide](https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry). | ||
|
**Review comment:** "Next.js has framework level support for OpenTelemetry instrumentation"
||
|
|
||
| Required dependencies: | ||
|
|
||
| <Tabs items={['pnpm', 'npm', 'yarn']}> | ||
| <Tab> | ||
| <Snippet | ||
| text="pnpm add @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation" | ||
| dark | ||
| /> | ||
| </Tab> | ||
| <Tab> | ||
| <Snippet | ||
| text="npm install @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation" | ||
| dark | ||
| /> | ||
| </Tab> | ||
| <Tab> | ||
| <Snippet | ||
| text="yarn add @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation" | ||
| dark | ||
| /> | ||
| </Tab> | ||
| </Tabs> | ||
|
|
||
| Update your `.env.local` file to configure the OTLP Exporter: | ||
|
|
||
| ```bash filename=".env.local" | ||
| OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel | ||
| OTEL_EXPORTER_OTLP_PROTOCOL=http/json | ||
| OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key | ||
| ``` | ||
|
|
||
| Register the OpenTelemetry SDK `instrumentation.ts` file (in root or src/ dir): | ||
|
|
||
| ```ts filename="instrumentation.ts" | ||
| import { registerOTel } from '@vercel/otel'; | ||
|
|
||
| export function register() { | ||
|   registerOTel({ | ||
|     serviceName: 'humanloop-vercel-ai-nextjs', | ||
|   }); | ||
| } | ||
| ``` | ||
|
|
||
| Your AI SDK project will now log to Humanloop. | ||
|
**Review comment:** "Your AI SDK project will now log to Humanloop" // More confidence, more salesmanship - you are writing from a position of knowledge
||
|
|
||
| </Tab> | ||
| <Tab> | ||
|
|
||
| ### Node.js Implementation | ||
|
|
||
| OpenTelemetry has a package to auto-instrument Node.js applications. Learn more about it in the [OpenTelemetry Node.js guide](https://opentelemetry.io/docs/languages/js/getting-started/nodejs/). | ||
|
**Review comment:** "Add OpenTelemetry to your Node.js project. To learn more about it, check out the [....."
||
|
|
||
| Required dependencies: | ||
|
|
||
| <Tabs items={['pnpm', 'npm', 'yarn']}> | ||
| <Tab> | ||
| <Snippet | ||
| text="pnpm add @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-trace-otlp-http" | ||
| dark | ||
| /> | ||
| </Tab> | ||
| <Tab> | ||
| <Snippet | ||
| text="npm install @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-trace-otlp-http" | ||
| dark | ||
| /> | ||
| </Tab> | ||
| <Tab> | ||
| <Snippet | ||
| text="yarn add @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-trace-otlp-http" | ||
| dark | ||
| /> | ||
| </Tab> | ||
| </Tabs> | ||
|
|
||
| Update your `.env` file to configure the OTLP Exporter: | ||
|
|
||
| ```bash filename=".env" | ||
| OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel | ||
| OTEL_EXPORTER_OTLP_PROTOCOL=http/json | ||
| OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=xxxxxx" # Humanloop API key | ||
| ``` | ||
|
|
||
| Register the OpenTelemetry SDK and add Humanloop metadata to the spans. The `humanloopPromptPath` specifies the [Prompt File](https://humanloop.com/docs/v5/explanation/prompts) in Humanloop to which the spans will be logged. | ||
|
**Review comment:** Oops, localhost link
||
|
|
||
| ```ts highlight="3-13,19" | ||
| import { openai } from '@ai-sdk/openai'; | ||
| import { generateText } from 'ai'; | ||
| import { NodeSDK } from '@opentelemetry/sdk-node'; | ||
| import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node'; | ||
| import dotenv from 'dotenv'; | ||
|
|
||
| dotenv.config(); | ||
|
|
||
| const sdk = new NodeSDK({ | ||
|   instrumentations: [getNodeAutoInstrumentations()], | ||
| }); | ||
|
|
||
| sdk.start(); | ||
|
|
||
| async function main() { | ||
|   // ... Vercel AI SDK calls ... | ||
|
|
||
|   // Must call shutdown to flush traces | ||
|   await sdk.shutdown(); | ||
| } | ||
|
|
||
| main().catch(console.error); | ||
| ``` | ||
|
|
||
| Your calls to the AI SDK should now be logged to Humanloop. | ||
|
|
||
| </Tab> | ||
| </Tabs> | ||
|
|
||
| ## Trace Grouping | ||
|
|
||
| To group multiple AI SDK calls into a single Flow Log, create and pass a Flow Log ID to the telemetry metadata of each AI SDK call. | ||
|
**Review comment:** "To trace a user-LLM session end to end, we will use Humanloop's tracing feature: Flows (add link here). Pass a Flow Log ID to the telemetry metadata on all AI SDK calls."
||
|
|
||
| 1. Create a Flow Log in Humanloop | ||
| 2. Pass the Flow Log ID to each AI SDK call | ||
| 3. Update the Flow Log when all executions are complete | ||
|
**Review comment:** Confused, expected the flow trace to be completed automatically
||
|
|
||
| The Flow Log serves as a parent container for all related Prompt Logs in Humanloop. | ||
|
**Review comment:** drop this paragraph
||
|
|
||
| ```ts | ||
| import { openai } from '@ai-sdk/openai'; | ||
| import { generateText } from 'ai'; | ||
| import { HumanloopClient } from 'humanloop'; | ||
|
|
||
| const humanloop = new HumanloopClient(); | ||
|
|
||
| async function main() { | ||
|   const flow = await humanloop.flows.upsert({ | ||
|     path: 'Plethora of Poetry', | ||
|     attributes: {}, | ||
|   }); | ||
|   const flowLog = await humanloop.flows.log({ | ||
|     id: flow.id, | ||
|   }); | ||
|
|
||
|   const outputs = []; | ||
|
|
||
|   for (const poetName of ['Edgar Allan Poe', 'Mary Shelley', 'Lord Byron']) { | ||
|     const result = await generateText({ | ||
|       model: openai('gpt-3.5-turbo'), | ||
|       maxTokens: 50, | ||
|       prompt: `Write me a poem in the style of ${poetName}.`, | ||
|       experimental_telemetry: { | ||
|         isEnabled: true, | ||
|         // replaceAll so multi-word names become fully hyphenated slugs | ||
|         functionId: `poet-${poetName.toLowerCase().replaceAll(' ', '-')}`, | ||
|         metadata: { | ||
|           humanloopFlowId: flowLog.id, | ||
|           humanloopPromptPath: `Poets/${poetName}`, | ||
|         }, | ||
|       }, | ||
|     }); | ||
|
|
||
|     outputs.push(result.text); | ||
|   } | ||
|
|
||
|   await humanloop.flows.updateLog(flowLog.id, { | ||
|     traceStatus: 'complete', | ||
|     output: outputs.join('\n\n'), | ||
|   }); | ||
|
|
||
|   // `sdk` is the NodeSDK instance registered in the setup above | ||
|   await sdk.shutdown(); | ||
| } | ||
| ``` | ||
|
|
||
| ## Debugging | ||
|
|
||
| This step applies to Next.js only. On Next.js versions below 15, the instrumentation hook is experimental and must be enabled explicitly (available since 13.4) for your `instrumentation.ts` file to be picked up; Next.js 15+ enables instrumentation by default, so no extra configuration is needed. | ||
|
**Review comment:** Need to elaborate this further: it's Next.js only, should mention the hook's benefits, THEN mention you don't need extra configuration for >= 15
||
|
|
||
| ```javascript filename="next.config.js" | ||
| module.exports = { | ||
| experimental: { | ||
| instrumentationHook: true, | ||
| }, | ||
| }; | ||
| ``` | ||
|
|
||
| ## Resources | ||
|
|
||
| To see a full example of instrumenting your application, check out the Humanloop [AI SDK Guides](https://humanloop.com/docs/v5/vercel-ai-sdk). | ||
|
|
||
| After instrumenting your AI SDK application with Humanloop, you can then: | ||
|
|
||
| - Experiment with different [versions of Prompts](https://humanloop.com/docs/v5/guides/evals/comparing-prompts) and try them out in the Editor | ||
|
**Review comment:** "Try tweaking your Prompt in the workspace editor to improve its performance"

**Review comment:** I see your point about provider: they can tweak the Prompt but not call it after optimisation. Let's mention that briefly in the same paragraph: "Our AI SDK Provider implementation is coming soon, allowing you to switch between Prompt versions as you make tweaks"
||
| - Create [custom Evaluators](https://humanloop.com/docs/v5/explanation/evaluators) -- Human, Code, or LLM -- to monitor and benchmark your AI application | ||
|
**Review comment:** I would rather only point to monitoring - they're still dipping their toes and this is the next quantum of utility: BAM, you have your AI Vercel project, now you also have monitoring in HL. That's a workable setup already and makes them come back for more
||
| - Set up [live monitoring](https://humanloop.com/docs/v5/guides/observability/monitoring) of your logs to continuously track your application's performance | ||
| Original file line number | Diff line number | Diff line change |
|---|---|---|
|
|
@@ -8,6 +8,7 @@ description: AI SDK Integration for monitoring and tracing LLM applications | |
| Several LLM observability providers offer integrations with the AI SDK telemetry data: | ||
|
|
||
| - [Braintrust](/providers/observability/braintrust) | ||
| - [Humanloop](/providers/observability/humanloop) | ||
|
**Review comment:** Petition to put ourselves first /s
||
| - [Traceloop](/providers/observability/traceloop) | ||
| - [Langfuse](/providers/observability/langfuse) | ||
| - [LangSmith](/providers/observability/langsmith) | ||
|
|
||
**Review comment:** critique this final version
Humanloop is an enterprise LLMOps platform that helps you confidently evaluate, deploy, and scale AI features.
Our AI SDK integration allows you to seamlessly import telemetry data into Humanloop via the OpenTelemetry protocol.
You can visualize app traces and metrics for latency, cost, and errors. You can then set up automatic monitoring using code, human, and LLM evaluators.