NetMedEx is an AI-powered knowledge discovery platform designed to transform biomedical literature into actionable insights. Unlike traditional tools that merely extract entities, NetMedEx leverages Hybrid Retrieval-Augmented Generation (Hybrid RAG) to synthesize structured co-mention networks with unstructured text, providing a holistic understanding of biological relationships.
In NetMedEx, the Co-Mention Network serves as a structural "scaffolding." While the network visualizes the landscape of bio-concepts (genes, diseases, chemicals, etc.), the AI-driven Semantic Layer breathes life into these connections by extracting evidence, identifying relationship types, and answering complex natural language queries.
NetMedEx offers flexible ways to interact with the platform:
- Web Application (via Docker) - Recommended
- Web Application (Local)
- Command-Line Interface (CLI)
- Python API
The easiest way to start is using Docker. Run the command below and visit the access URL:
```shell
docker run -d -p 8050:8050 --rm lsbnb/netmedex
```
Important
Access URL: http://localhost:8050
Alternatively, install via PyPI for local hosting or CLI access:
```shell
pip install netmedex
```
Recommended: Python >= 3.11
Launch the interactive dashboard locally:
```shell
netmedex run
```
NetMedEx features an interactive Chat Panel driven by Hybrid RAG, which combines large language models (LLMs) with specialized biomedical knowledge graphs.
Figure 1: NetMedEx Hybrid RAG Architecture combining Text and Graph RAG for chatting with biomedical knowledge.
- Hybrid RAG Chat: Synthesizes unstructured text (abstracts) and structured graph knowledge (paths and neighbors).
- Natural Language & Universal Translation: Ask in English, Japanese, Chinese, or Korean. NetMedEx automatically translates non-English queries into optimized PubTator3 English query syntax before searching.
- ChatGPT-Style Chat Experience: An intuitive, auto-scrolling Chat Panel that follows familiar AI chat layouts (user queries on the right, AI responses on the left), so no manual scrolling is needed.
- Semantic Evidence Extraction: Automatically identifies relationship types (e.g., treats, inhibits) and confidence scores.
- Evidence Confidence Scoring: The AI evaluates its own extraction certainty (0.0 to 1.0) based on the textual strength of the abstract. Users can adjust a Semantic Confidence Threshold to filter for strict clinical evidence (>0.7) or exploratory novel associations (<0.3).
- Contextual Reasoning: Identifies shortest paths and relevant subgraphs to explain hidden connections between entities.
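As an illustration of how the Semantic Confidence Threshold described above could be applied client-side, here is a minimal sketch. The edge schema (`source`, `target`, `relation`, `confidence`) is hypothetical and chosen for readability; it is not NetMedEx's internal representation.

```python
# Hypothetical edge records mirroring the semantic attributes described above:
# a relation type plus a 0.0-1.0 extraction confidence score.
def filter_edges(edges: list[dict], threshold: float = 0.7) -> list[dict]:
    """Keep only edges whose extraction confidence meets the threshold."""
    return [e for e in edges if e["confidence"] >= threshold]

edges = [
    {"source": "Metformin", "target": "NASH", "relation": "treats", "confidence": 0.85},
    {"source": "Metformin", "target": "AMPK", "relation": "activates", "confidence": 0.25},
]
strict = filter_edges(edges, 0.7)                           # strict clinical evidence (>0.7)
exploratory = [e for e in edges if e["confidence"] < 0.3]   # exploratory novel associations (<0.3)
print(len(strict), len(exploratory))  # 1 1
```

Raising the threshold trades recall for precision: strict filtering surfaces well-supported clinical claims, while a low ceiling surfaces speculative associations worth manual review.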
- Obtain an API key from OpenAI or set up a local LLM endpoint (e.g., Ollama).
- Configure via "Advanced Settings" in the web interface or via a `.env` file.
Tip
Connecting to Local LLMs (Ollama/LM Studio):
- Linux/Docker: use your host IP (e.g., `http://192.168.1.100:11434`).
- Windows/macOS (Docker): use `http://host.docker.internal:[PORT]`.
The workspace follows a logical discovery workflow across three main operational panels.
The Search Panel is where you define your research scope and configure the AI engine.
Figure 2: The Search Panel for keyword and natural language querying.
Expand Advanced Settings to configure your LLM provider. This is a crucial first step for enabling semantic analysis.
Figure 3: Configuring the AI Engine (OpenAI or Local) in Advanced Settings.
Figure 4: Selecting a specific model for local AI processing via the dropdown menu.
Users can also upload previously downloaded PubTator format files for re-analysis, or restore a previously exported Graph File (.pkl) to skip re-processing entirely.
Figure 5: Uploading PubTator files for re-analysis, or a Graph File (.pkl) to instantly restore a saved session.
Tip
Graph File Restore: After a time-consuming Semantic Analysis run, export the result as a Graph (.pkl) from the Graph Panel, then reload it later via Search Panel → Source: Graph File (.pkl). The full graph state — including all semantic edges, node metadata, and article abstracts — is restored instantly, allowing you to continue adjusting the network and using the Chat Panel without re-running any analysis.
The Graph Panel visualizes the co-mention or semantically analyzed network, presenting your search results as an interactive graph. Hold the Shift key to select a sub-network; the highlighted nodes and edges become the context for chat in the next step. You can render the network with different layouts and community detection algorithms, and export it in several formats:
| Export Format | Description | Re-importable? |
|---|---|---|
| HTML | Interactive visualization for browsers (example) | ❌ |
| XGMML | Network file for Cytoscape Desktop | ❌ |
| PubTator | Raw annotation file | ✅ Re-upload in Search Panel |
| Graph (.pkl) | Full graph state including semantic analysis results and article abstracts | ✅ Restore in Search Panel → "Graph File" |
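As a quick illustration of the Graph (.pkl) round-trip, the file can be read back with Python's standard `pickle` module. The object layout inside the file is a NetMedEx implementation detail, so treat the loaded value as opaque and pass it back to NetMedEx (e.g., `netmedex chat -g`); as with any pickle, only load files you exported yourself.

```python
import pickle

def load_graph(path: str):
    """Load an exported Graph (.pkl) from disk.

    The concrete object stored inside is NetMedEx-internal; this sketch
    only shows the generic load pattern for a trusted, self-exported file.
    """
    with open(path, "rb") as fh:
        return pickle.load(fh)
```

This is also why the .pkl route restores sessions instantly: the full analyzed graph state is deserialized in one step instead of being recomputed.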
Figure 6: Interactive Knowledge Graph showing Bio-Concept connections.
Figure 7: High-resolution view of the Graph Panel interface.
The top-right corner of the Graph Panel offers several options, including layout, community detection, and save.
Figure 8: Case study: visualizing the Sarcopenia-related network with NetMedEx to depict semantic-level relationships among genes, diseases, chemicals, and species.
- Nodes: Genes, Diseases, Chemicals, and Species.
- Edges: Literature co-occurrence. Thicker edges indicate higher frequency.
- Clusters: Use the Community Detection feature to group related concepts automatically.
Figure 9: Automated community detection for functional clustering.
Figure 10: Selecting a sub-network by holding the Shift key to isolate relevant nodes and edges as the context for Hybrid RAG chat.
The Chat Panel provides the deep semantic layer, interpreting the graph using LLMs.
Figure 11: Hybrid RAG Chat for natural language reasoning over the network.
Figure 12(A): Press the "Analyze Selection" button to build the RAG context for the selected sub-network.
Figure 12(B): RAG construction in progress, preparing the context for subsequent chat.
Figure 13: The Chat History panel for managing and reviewing previous discovery sessions.
Figure 14: Tabular representation of semantic analysis results (e.g., miRNA relationships).
While the Web Interface provides a full "Interactive Discovery" workflow—including dynamic sub-network selection (Shift+Select) and real-time Hybrid RAG chat—the CLI and API are designed for automated batch processing and static graph construction.
- Interactive Discovery (Web Only): Real-time interaction, dynamic graph filtering, and context-aware chat.
- Batch Processing (CLI/API): Static semantic analysis and high-throughput network generation.
For high-throughput analysis, use the NetMedEx CLI.
```shell
# Search articles by keywords
netmedex search -q '"N-dimethylnitrosamine" AND "Metformin"' --sort score
```
Key options for `netmedex search`:
- `-q, --query`: Query string.
- `-p, --pmids`: Comma-separated PMID list (alternative to `--query`).
- `-f, --pmid_file`: Load PMIDs from a file, one per line (alternative to `--query`).
- `-o, --output`: Output `.pubtator` path.
- `-s, --sort {score,date}`: Sort by relevance (`score`) or newest (`date`, default).
- `--max_articles`: Maximum number of articles to request (default: `1000`).
- `--full_text`: Collect full-text annotations when available.
- `--use_mesh`: Use MeSH vocabulary in output.
- `--ai_search`: Enable LLM-based translation of natural language into PubTator boolean queries.
- `--llm_provider {openai,google,local}`: Provider for AI search translation.
- `--llm_api_key`: API key override for the selected provider.
- `--llm_model`: Model override for the selected provider.
- `--llm_base_url`: Base URL override (primarily for local/OpenAI-compatible endpoints).
Optional: enable AI query translation (--ai_search) with the same three providers.
```shell
# OpenAI
netmedex search \
  -q "Find papers about metformin effects in NASH" \
  --ai_search \
  --llm_provider openai \
  --llm_api_key "$OPENAI_API_KEY" \
  --llm_model "gpt-4o-mini"
```

```shell
# Google / Gemini
netmedex search \
  -q "Find papers about metformin effects in NASH" \
  --ai_search \
  --llm_provider google \
  --llm_api_key "$GEMINI_API_KEY" \
  --llm_model "gemini-2.0-flash"
```

```shell
# Local (Ollama / LocalAI / LM Studio OpenAI-compatible endpoint)
netmedex search \
  -q "Find papers about metformin effects in NASH" \
  --ai_search \
  --llm_provider local \
  --llm_base_url "http://localhost:11434/v1" \
  --llm_model "llama3.1"
```

```shell
# Generate HTML network from annotations
netmedex network -i annotations.pubtator -o network.html -w 2 --community

# Generate pickle graph for CLI chat (required for `netmedex chat`)
netmedex network -i annotations.pubtator -o network.pickle -f pickle
```

Use `--edge_method semantic` to enable semantic relationship extraction.
```shell
# OpenAI
netmedex network \
  -i annotations.pubtator \
  -o semantic_openai.html \
  --edge_method semantic \
  --llm_provider openai \
  --llm_api_key "$OPENAI_API_KEY" \
  --llm_model "gpt-4o-mini"
```

```shell
# Google / Gemini
netmedex network \
  -i annotations.pubtator \
  -o semantic_google.html \
  --edge_method semantic \
  --llm_provider google \
  --llm_api_key "$GEMINI_API_KEY" \
  --llm_model "gemini-2.0-flash"
```

```shell
# Local (Ollama / LocalAI / LM Studio OpenAI-compatible endpoint)
netmedex network \
  -i annotations.pubtator \
  -o semantic_local.html \
  --edge_method semantic \
  --llm_provider local \
  --llm_base_url "http://localhost:11434/v1" \
  --llm_model "llama3.1"
```

You can also omit the `--llm_*` flags and configure defaults via `.env` (e.g., `LLM_PROVIDER`, `OPENAI_API_KEY`, `GEMINI_API_KEY`, `LOCAL_LLM_BASE_URL`, `OPENAI_MODEL`, `GOOGLE_MODEL`, `LOCAL_LLM_MODEL`).

Provider consistency note:
- The CLI supports the same three providers (`openai`, `google`, `local`) across `search`, `network`, and `chat`.
- Provider settings are passed via CLI flags or `.env` values; they are not serialized into `.pubtator`/graph outputs automatically.
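If you prefer `.env` defaults over repeating `--llm_*` flags, a minimal file might look like the sketch below. The variable names are the ones documented above; the key and model values are placeholders to replace with your own.

```shell
# .env -- defaults picked up when --llm_* flags are omitted
LLM_PROVIDER=openai                # openai | google | local

# OpenAI
OPENAI_API_KEY=your-openai-key    # placeholder
OPENAI_MODEL=gpt-4o-mini

# Google / Gemini
GEMINI_API_KEY=your-gemini-key    # placeholder
GOOGLE_MODEL=gemini-2.0-flash

# Local (Ollama / LM Studio OpenAI-compatible endpoint)
LOCAL_LLM_BASE_URL=http://localhost:11434/v1
LOCAL_LLM_MODEL=llama3.1
```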
`netmedex chat` uses the pickled graph (`-f pickle`) as Hybrid RAG context and supports the same three providers.
```shell
# One-shot question
netmedex chat \
  -g network.pickle \
  -q "Summarize key evidence and hypotheses for metformin in NASH." \
  --llm_provider openai \
  --llm_api_key "$OPENAI_API_KEY"
```

```shell
# Interactive mode
netmedex chat \
  -g network.pickle \
  --llm_provider local \
  --llm_base_url "http://localhost:11434/v1" \
  --llm_model "llama3.1"
```

Tips:
- Type `exit` or `quit` to leave interactive mode.
- Use `/clear` to clear chat history.
- Use `/stats` to inspect session statistics.
NetMedEx can be integrated directly into your Python pipelines as a library.
```python
# Programmatic Access (API)
from netmedex import search, network
```

If your upstream pipeline finds candidate genes (e.g., the top 5 differentially expressed genes), you can bridge directly into NetMedEx and keep a conversational session in your own UI.

See example: `examples/netmedex_chat_bridge.py`
Minimal flow:

```python
from netmedex.chat_bridge import BridgeConfig, NetMedExChatBridge

cfg = BridgeConfig(provider="google", model="gemini-2.0-flash", edge_method="semantic")
bridge = NetMedExChatBridge(cfg)
bridge.build_context_from_genes(
    genes=["SOST", "LRP5", "TNFRSF11B", "RUNX2", "ALPL"],
    disease="osteoporosis",
)
answer = bridge.ask("What are the strongest evidence links and possible mechanisms?")
print(answer["message"])
```

Install API extras and run:

```shell
pip install -e ".[api]"
python examples/netmedex_fastapi_server.py
```

Endpoints:
- `GET /health`: service health check.
- `POST /sessions`: build Search -> Network -> Chat context and create a chat session.
- `POST /sessions/{session_id}/ask`: send a question in that session.
- `DELETE /sessions/{session_id}`: release session state.
Create a session from genes:
```shell
curl -X POST "http://127.0.0.1:8000/sessions" \
  -H "Content-Type: application/json" \
  -d '{
    "config": {
      "provider": "google",
      "model": "gemini-2.0-flash",
      "edge_method": "semantic",
      "max_articles": 120
    },
    "genes": ["SOST", "LRP5", "TNFRSF11B", "RUNX2", "ALPL"],
    "disease": "osteoporosis"
  }'
```

Ask in-session:

```shell
curl -X POST "http://127.0.0.1:8000/sessions/<SESSION_ID>/ask" \
  -H "Content-Type: application/json" \
  -d '{"question":"Summarize strongest evidence and potential mechanisms."}'
```

Framework-agnostic Python client example:
```python
from examples.netmedex_fastapi_client import NetMedExAPIClient

client = NetMedExAPIClient("http://127.0.0.1:8000")
client.create_session(
    config={
        "provider": "google",
        "model": "gemini-2.0-flash",
        "edge_method": "semantic",
        "max_articles": 120,
    },
    genes=["SOST", "LRP5", "TNFRSF11B", "RUNX2", "ALPL"],
    disease="osteoporosis",
)
resp = client.ask("Summarize strongest evidence and potential mechanisms.")
print(resp["message"])
client.close()
```

Reference client file: `examples/netmedex_fastapi_client.py`
Integration examples for your own app/chat platform:
```python
# Example A: wrap NetMedEx into your backend service function
from examples.netmedex_fastapi_client import NetMedExAPIClient

def run_gene_chat(genes: list[str], user_question: str) -> str:
    client = NetMedExAPIClient("http://127.0.0.1:8000")
    client.create_session(
        config={
            "provider": "google",
            "model": "gemini-2.0-flash",
            "edge_method": "semantic",
            "max_articles": 120,
        },
        genes=genes,
        disease="osteoporosis",
    )
    try:
        resp = client.ask(user_question)
        return resp.get("message", "")
    finally:
        client.close()
```

```javascript
// Example B: call NetMedEx API from any web frontend (React/Vue/plain JS)
async function askNetMedEx(baseUrl, sessionId, question) {
  const resp = await fetch(`${baseUrl}/sessions/${sessionId}/ask`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question })
  });
  if (!resp.ok) throw new Error(await resp.text());
  return await resp.json(); // { success, message, sources, ... }
}
```

Example C: recommended lifecycle for multi-turn chat
1) User selects genes/disease in your app.
2) Backend calls POST /sessions once and stores session_id.
3) Each user message calls POST /sessions/{session_id}/ask.
4) On chat end/timeout, call DELETE /sessions/{session_id}.
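Steps 1-4 above can be sketched as a single helper with the HTTP layer injected, so it adapts to any client (requests, httpx, or a test double). The endpoint paths follow the reference section above; the `session_id` response field name is an assumption about the server's JSON shape.

```python
from typing import Callable, Optional

# request(method, path, json_body) performs the HTTP call and returns parsed JSON.
Request = Callable[[str, str, Optional[dict]], dict]

def chat_lifecycle(request: Request, genes: list[str], disease: str,
                   questions: list[str]) -> list[str]:
    """Create a session, ask each question in turn, then always release it."""
    session = request("POST", "/sessions", {         # step 2: one session per chat
        "config": {"provider": "google", "model": "gemini-2.0-flash",
                   "edge_method": "semantic", "max_articles": 120},
        "genes": genes,
        "disease": disease,
    })
    sid = session["session_id"]  # assumed response field
    answers: list[str] = []
    try:
        for q in questions:                          # step 3: one /ask per message
            resp = request("POST", f"/sessions/{sid}/ask", {"question": q})
            answers.append(resp.get("message", ""))
    finally:
        request("DELETE", f"/sessions/{sid}", None)  # step 4: always release
    return answers
```

The `try/finally` guarantees step 4 runs even if an `/ask` call raises, so server-side session state is not leaked.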
Minimal Web Chat UI (no framework):
```shell
# terminal 1: start FastAPI bridge
python examples/netmedex_fastapi_server.py

# terminal 2: serve the static examples folder
python -m http.server 8080 --directory examples
```

Then open: http://127.0.0.1:8080/minimal_chat_ui.html
Gradio Chat UI:
```shell
# terminal 1: start FastAPI bridge
python examples/netmedex_fastapi_server.py

# terminal 2: launch the Gradio app
pip install -e ".[ui]"
python examples/gradio_chat_ui.py
```

Then open: http://127.0.0.1:7860

In the UI, click `Create Session` first, then ask questions.
© 2026 LSBNB Lab @ IIS, Academia Sinica, Taiwan. Refer to LICENSE for details.