A Model Context Protocol (MCP) server that provides access to Narrative's Data Collaboration Platform APIs through any MCP client. For integrating your favorite data platform with your favorite LLM.
Learn more about Narrative: https://www.narrative.io/
To use this MCP server, configure it in your MCP settings file (e.g. `.cursor/mcp.json` for Cursor or `claude_desktop_config.json` for Claude Desktop).
Add the following configuration to your mcp.json file:
```json
{
  "mcpServers": {
    "narrative": {
      "command": "bun",
      "args": [
        "--cwd",
        "<FULL_PATH_TO>/data-collaboration-mcp",
        "dev"
      ],
      "env": {
        "NARRATIVE_API_URL": "https://api.narrative.io",
        "NARRATIVE_API_TOKEN": "<YOUR_API_TOKEN>"
      }
    }
  }
}
```

Important:
- Replace `<YOUR_API_TOKEN>` with your actual Narrative API token (required)
- Update the path in the `--cwd` argument to point to your local installation of this repository
- Get your Narrative API token from your Narrative account settings at https://www.narrative.io/
After updating your MCP configuration, restart your editor or MCP client for the changes to take effect.
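Before restarting, you can sanity-check the prerequisites from a terminal. This is a minimal sketch; `check_prereq` is a hypothetical helper, and the path passed to it is a placeholder for wherever you cloned this repository:

```shell
# Hypothetical helper: verify the command and repo path referenced in the
# config above actually exist on this machine.
check_prereq() {
  cmd="$1"; dir="$2"
  command -v "$cmd" >/dev/null 2>&1 && echo "$cmd: found" || echo "$cmd: NOT FOUND"
  [ -d "$dir" ] && echo "repo: found at $dir" || echo "repo: missing at $dir"
}

# Replace the path with your local clone of data-collaboration-mcp.
check_prereq bun "$HOME/data-collaboration-mcp"
```

If either line reports a failure, fix the installation or the `--cwd` path before restarting your MCP client.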
This MCP server provides the following tools:
- `search_attributes`: Search Narrative Rosetta Stone attributes with pagination
- `list_datasets`: List all available datasets from the Narrative marketplace
- `list_access_rules`: List access rules with filtering options
- `search_access_rules`: Search access rules with a query
- `dataset_statistics`: Get comprehensive statistics for a dataset
- `dataset_sample`: Retrieve sample records from a dataset
- `nql_execute`: Execute NQL queries asynchronously
- `nql_get_results`: Retrieve results from NQL query jobs
- `echo`: Simple echo tool for testing
This MCP server provides expert guidance prompts:
execute-nql: Expert guidance for executing NQL queries on the Narrative platform. This prompt ensures queries follow all mandatory NQL syntax rules, namespace conventions, and best practices. It validates queries, enforces materialized view patterns, handles Rosetta Stone mappings, and provides post-execution guidance.
- Search for attributes related to "demographics"
- Show me all available datasets
- Use the execute-nql prompt to help me write a query that combines data from dataset 1234
The NQL execution prompt provides expert guidance including:
- Validation of NQL syntax and structure
- Enforcement of materialized view patterns
- Proper namespace and dataset reference handling
- Rosetta Stone integration guidance
- Post-execution result handling
The MCP Inspector is an interactive browser-based tool for testing and debugging MCP servers. It lets you connect to your server and manually invoke tools, browse resources, and test prompts — all without needing an LLM client.
```shell
npm run inspect
```

This loads the server config from `.mcp.json`, passes your `.env` credentials, and opens the Inspector UI in your browser. The terminal output will include a URL with an auth token; make sure to use that full URL:

```
http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=<token>
```
You can also run the inspector directly with full control over options:
```shell
# Using your .mcp.json config (recommended)
source .env && npx @modelcontextprotocol/inspector \
  --config .mcp.json \
  --server data-collaboration \
  -e NARRATIVE_API_URL=$NARRATIVE_API_URL \
  -e NARRATIVE_API_TOKEN=$NARRATIVE_API_TOKEN

# Using an explicit command
source .env && npx @modelcontextprotocol/inspector \
  -e NARRATIVE_API_URL=$NARRATIVE_API_URL \
  -e NARRATIVE_API_TOKEN=$NARRATIVE_API_TOKEN \
  bun src/index.ts

# Against the compiled build
npm run build && source .env && npx @modelcontextprotocol/inspector \
  -e NARRATIVE_API_URL=$NARRATIVE_API_URL \
  -e NARRATIVE_API_TOKEN=$NARRATIVE_API_TOKEN \
  node build/index.js
```

| Flag | Description |
|---|---|
| `--config <path>` | Path to an MCP config file (e.g. `.mcp.json`) |
| `--server <name>` | Server name from the config file to inspect |
| `-e KEY=VALUE` | Pass environment variables to the spawned server |
| `--transport <type>` | Transport type: `stdio`, `sse`, or `http` |
| `--cli` | Run in CLI mode (no browser UI) |
Once connected, the Inspector provides tabs for each MCP capability:
- Tools — List all tools, view their JSON schemas, execute them with custom inputs, and inspect the results
- Resources — Browse available resources, view metadata, and read resource contents
- Prompts — List prompt templates, fill in arguments, and preview the generated messages
- Notifications — View server log messages and notifications in real time
- Start the inspector: `npm run inspect`
- Make changes to server code in `src/`
- The inspector spawns the server fresh on each Connect; click Disconnect, then Connect to pick up changes
- Use the History panel to review the request/response sequence
- Check Server Notifications for any logging messages from your server
The inspector also supports a headless CLI mode (--cli) for one-shot invocations. This is useful for scripting, automated smoke tests, or when AI agents need to verify tool outputs during development.
Each call spawns the server, executes the method, prints JSON to stdout, and exits.
List capabilities:
```shell
# List all tools and their schemas
npm run inspect -- --cli --method tools/list

# List all resources
npm run inspect -- --cli --method resources/list

# List all prompts
npm run inspect -- --cli --method prompts/list
```

Call a tool:
```shell
# Call the echo tool
npm run inspect -- --cli --method tools/call --tool-name echo --tool-arg 'message=hello'

# Search for Rosetta Stone attributes
npm run inspect -- --cli --method tools/call --tool-name search_attributes --tool-arg 'query=email'

# List datasets
npm run inspect -- --cli --method tools/call --tool-name list_datasets
```

Pipe through jq for readable output:
```shell
npm run inspect -- --cli --method tools/call --tool-name search_attributes --tool-arg 'query=age' 2>/dev/null | jq '.content[0].text | fromjson'
```

CLI-specific flags:

| Flag | Description |
|---|---|
| `--method <method>` | MCP method to invoke (e.g. `tools/list`, `tools/call`, `resources/read`, `prompts/get`) |
| `--tool-name <name>` | Tool name (required for `tools/call`) |
| `--tool-arg 'key=value'` | Tool argument as `key=value` (repeatable for multiple args) |
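The one-shot CLI mode lends itself to scripted smoke tests. Below is a sketch: `assert_tools_registered` is a hypothetical helper (not part of this repo), the JSON shape follows the standard MCP `tools/list` response, and `jq` is assumed to be installed as in the examples above.

```shell
# Hypothetical helper: assert that the required tool names appear in a
# tools/list response of the shape {"tools":[{"name":...},...]}.
assert_tools_registered() {
  json="$1"; shift
  names=$(printf '%s' "$json" | jq -r '.tools[].name')
  for tool in "$@"; do
    printf '%s\n' "$names" | grep -qx "$tool" || { echo "FAIL: $tool not registered"; return 1; }
  done
  echo "OK: all tools registered"
}

# In this project you would feed it live CLI output, e.g.:
#   assert_tools_registered "$(npm run inspect -- --cli --method tools/list 2>/dev/null)" echo list_datasets
assert_tools_registered '{"tools":[{"name":"echo"},{"name":"list_datasets"}]}' echo list_datasets
```

Because the helper returns a non-zero status on a missing tool, it drops straight into CI pipelines or pre-commit hooks.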
A Claude Code skill is available for AI agents to test MCP features using the Inspector CLI. Run it from Claude Code with:
/test-mcp-feature list_datasets
This walks agents through the full verification flow: checking feature registration, invoking the feature, parsing responses, and testing edge cases. See .claude/commands/test-mcp-feature.md for the full skill definition.
- **Connect button does nothing**: Make sure the URL includes the `?MCP_PROXY_AUTH_TOKEN=...` query parameter. The Inspector requires this token for authentication.
- **"Connection Error" after clicking Connect**: Check that your `.env` file contains valid `NARRATIVE_API_URL` and `NARRATIVE_API_TOKEN` values. The server will fail to start without them.
- **Wrong command/args shown**: Use `--config .mcp.json --server data-collaboration` to load from your config file rather than manually entering the command.
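Since a missing or malformed `.env` is the most common cause of the "Connection Error" above, a quick check can save a debugging round trip. This is a sketch: `check_narrative_env` is a hypothetical helper that assumes a plain `KEY=value` dotenv format.

```shell
# Hypothetical helper: verify a dotenv-style file defines the variables this
# server requires. Assumes simple KEY=value lines (no quoting edge cases).
check_narrative_env() {
  for key in NARRATIVE_API_URL NARRATIVE_API_TOKEN; do
    grep -q "^${key}=" "$1" || { echo "missing: $key"; return 1; }
  done
  echo "env looks OK"
}

# Demonstrated against a throwaway file; point it at your real .env instead.
printf 'NARRATIVE_API_URL=https://api.narrative.io\nNARRATIVE_API_TOKEN=example\n' > /tmp/example.env
check_narrative_env /tmp/example.env
```

Run `check_narrative_env .env` from the repo root before launching the Inspector.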
Run the test suite:
```shell
bun run test
```

| Script | Description |
|---|---|
| `npm run dev` | Start the server in watch mode (auto-restarts on changes) |
| `npm run build` | Type-check and compile to `build/` |
| `npm run start` | Run the compiled server |
| `npm run test` | Run the test suite |
| `npm run test:watch` | Run tests in watch mode |
| `npm run inspect` | Launch the MCP Inspector (browser UI + CLI mode) |
After configuring the MCP server and restarting your editor, verify it's working by:
- Asking your AI assistant to "List all available datasets"
- Asking it to "Search for attributes related to location"
- The Narrative tools should appear in the available MCP tools list
Check MCP server logs if you encounter issues:
For Cursor:
```shell
# Check Cursor logs for MCP server errors
tail -f ~/Library/Logs/Cursor/logs/*.log
```

For Claude Desktop:

```shell
tail -f ~/Library/Logs/Claude/mcp*.log
```