6G-Sandbox Agent

This project provides an intelligent agent for the 6G-Sandbox platform, enabling users to interactively configure, validate, and deploy Trial Networks (TNs) using natural language. The agent leverages LangChain, LangGraph, Azure OpenAI, and GitHub integration to automate the process of selecting components, resolving dependencies, and generating YAML descriptors for deployment, using TNLCM (Trial Network Lifecycle Manager) as the validation and deployment tool.

Project Structure

The project is composed of:

1. app

This folder contains the main API and orchestration logic for the 6G-Sandbox Agent. Its structure is as follows:

  • __init__.py: Marks the folder as a Python package.
  • main.py: Entry point for the FastAPI application, exposing REST endpoints for interacting with the agent. The main endpoints are:
    • GET / – Basic welcome endpoint to verify the API is reachable.
    • GET /health – Liveness/health check for monitoring and deployments.
    • POST /api/login – Simple credential check used by the UI.
    • POST /process_query – Core endpoint that processes a natural-language query and generates/modifies a Trial Network (TN) YAML descriptor.
    • POST /stream – Streaming (SSE) version of POST /process_query, sending incremental updates.
    • POST /get_all_components_from_session – Returns all components discovered for a given session.
    • POST /get_selected_components_from_session – Returns the subset of components selected in the session.
    • POST /get_state_session – Returns the full internal graph state associated with a session.
    • POST /get_messages_session – Returns all messages exchanged between internal agents for a session.
    • GET /get_session_ids – Lists all stored session IDs from the LangGraph checkpointer.
    • DELETE /delete_session_id – Deletes the state associated with a single session ID.
    • DELETE /delete_all_session_ids – Deletes the state for all sessions.
  • models.py: Orchestrates the LangGraph workflow by constructing a state graph with multiple nodes representing different stages of the agent's pipeline. This module:
    • Initializes the database connection and memory checkpointer for maintaining conversation state
    • Defines the complete graph structure with nodes for initial configuration, orchestration, YAML generation, validation, consulting, and TNLCM integration
    • Configures conditional edges that route execution based on user intent (consult, modify, create, validate)
    • Compiles the graph with checkpointing capabilities to enable stateful multi-turn conversations
    • Serves as the central configuration point that brings together all workflow components from sandbox_llm.py

This folder implements the backend logic that powers the agent’s API and workflow orchestration.
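The conditional routing that models.py configures can be sketched without any framework. The snippet below is a minimal, library-free illustration only: node and intent names follow the architecture diagram, while the real code builds a LangGraph StateGraph with conditional edges and a checkpointer.

```python
# Minimal, library-free sketch of the orchestrator's conditional routing.
# The real implementation uses LangGraph's StateGraph; node/intent names
# follow the architecture diagram, the node bodies are illustrative stubs.

def orchestrator(state):
    # The real node uses the LLM to classify user intent; here we read it directly.
    return state

def route_from_orchestrator(state):
    """Map the detected intent to the next node, like the conditional edges do."""
    return {
        "consult": "consulting_node",
        "modify": "modify_yaml_template",
        "create": "check_yaml_example",
        "validate": "tnlcm_agent_node",
    }.get(state.get("intent"), "__end__")

NODES = {
    "orchestrator": orchestrator,
    "consulting_node": lambda s: {**s, "answer": "component info"},
    "modify_yaml_template": lambda s: {**s, "yaml": "modified"},
    "check_yaml_example": lambda s: {**s, "yaml": "example"},
    "tnlcm_agent_node": lambda s: {**s, "validated": True},
}

def run(state):
    state = NODES["orchestrator"](state)
    nxt = route_from_orchestrator(state)
    if nxt != "__end__":
        state = NODES[nxt](state)
    return state

print(run({"intent": "consult"})["answer"])  # component info
```

An unknown intent falls through to `__end__`, mirroring the `end` edges in the graph.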

2. config

This folder contains configuration files and templates used by the 6G-Sandbox Agent.

  • .env: Environment variables for credentials and configuration (such as Azure OpenAI, GitHub, and Ansible tokens). This file is not included in the repository because it contains private information.
  • sample_tnlcm_descriptor.yaml: Example YAML descriptor for a Trial Network, used for testing the Agent.
  • template.yaml: Base YAML template for generating new Trial Network descriptors.

This folder centralizes all configuration and template files required for the agent’s operation and deployment.

3. doc

This folder contains auxiliary documentation (diagrams, images, etc.) for understanding the 6G-Sandbox AI Agent.

4. src

This folder contains the core logic and utilities for the 6G-Sandbox Agent.

  • __init__.py: Marks the folder as a Python package.
  • sandbox_llm.py: Main Python module implementing the agent’s orchestration and reasoning logic. It:
    • Integrates with LangChain, LangGraph, Azure OpenAI, GitHub, Ansible Vault, and TNLCM tools
    • Defines the LangGraph node functions (e.g., check_initial_configuration, orchestrator_node, consulting_node, check_yaml_example_node, parse_components_node, check_dependencies_node, create_yaml_template_node, modify_yaml_template_node, validate_yaml_tnlcm_node, tnlcm_agent_node, tnlcm_tools_node)
    • Drives the full lifecycle of a Trial Network descriptor: component discovery, dependency resolution, YAML example retrieval, template creation, modification, and validation/deployment via TNLCM
    • Provides database helpers (create_database, cleanup_database) used to back LangGraph’s SQLite checkpointer

This folder is the main location for the agent’s business logic, workflow orchestration, and interactive experimentation.
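The database helpers mentioned above (create_database, cleanup_database) back LangGraph's SQLite checkpointer. The following is a hypothetical stdlib-only sketch of such helpers, assuming a local SQLite file and an illustrative schema; the actual functions in sandbox_llm.py may differ.

```python
# Hypothetical sketch of SQLite helpers backing a LangGraph checkpointer.
# DB path and schema here are illustrative assumptions, not the real ones.
import os
import sqlite3

def create_database(path="checkpoints.sqlite"):
    """Open (creating if missing) the SQLite file used by the checkpointer."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")  # safer for concurrent access
    return conn

def cleanup_database(conn, path="checkpoints.sqlite"):
    """Close the connection and remove the checkpoint file."""
    conn.close()
    if os.path.exists(path):
        os.remove(path)

# Demo round-trip with an assumed session table:
conn = create_database("demo.sqlite")
conn.execute("CREATE TABLE IF NOT EXISTS checkpoints (session_id TEXT, state TEXT)")
conn.execute("INSERT INTO checkpoints VALUES (?, ?)", ("abc", "{}"))
rows = conn.execute("SELECT session_id FROM checkpoints").fetchall()
print(rows)  # [('abc',)]
cleanup_database(conn, "demo.sqlite")
```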

5. tools

This folder contains the different tools that the agent can use to interact with external systems like GitHub and TNLCM.

  • git_tools.py: Provides tools for interacting with the 6G-Library and 6G-Sandbox-Sites GitHub repositories.
    • get_libraries(): Fetches the list of available component libraries from the repository.
    • get_components_branch(repo, branch): Retrieves a list of component directories from a specified repository and branch.
    • get_components_from_site(repo, site): Fetches the available components for a specific site, decrypting the configuration using Ansible Vault.
    • yaml_retrieval(repo, branch, components): Retrieves and parses the public.yaml file for a given list of components.
    • yaml_example_retrieval(repo, branch, component): Fetches a sample tnlcm_descriptor.yaml for a component, using an LLM to select the best match if multiple examples exist.
  • other_tools.py: Contains miscellaneous helper tools.
  • tnlcm_tools.py: Provides tools for interacting with the Trial Network Lifecycle Manager (TNLCM) API.
    • tnlcm_validate_yaml(): Validates a given TNLCM descriptor YAML against the TNLCM API.
    • activate_tnlcm_trial_network(): Activates a trial network using its ID.
    • delete_tnlcm_trial_network(): Deletes a trial network.
    • purge_tnlcm_trial_network(): Permanently purges a trial network.
    • get_tnlcm_all_trial_networks(): Lists all existing trial networks.
    • get_tnlcm_trial_network(): Retrieves detailed information for a specific trial network.
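To illustrate how a tool like yaml_retrieval might locate a component's public.yaml on GitHub, here is a sketch that only builds the raw-content URL. The path layout inside the repository is an assumption; the real git_tools.py may use the GitHub API instead.

```python
# Illustrative URL builder for fetching a component's public.yaml.
# The ".tnlcm/public.yaml" layout is an assumption about the 6G-Library
# repository structure; the real tools may query the GitHub API directly.
RAW_BASE = "https://raw.githubusercontent.com"

def public_yaml_url(repo: str, branch: str, component: str) -> str:
    """Build the raw-content URL for a component's public.yaml (assumed layout)."""
    return f"{RAW_BASE}/{repo}/{branch}/{component}/.tnlcm/public.yaml"

url = public_yaml_url("6G-SANDBOX/6G-Library", "main", "tn_vxlan")
print(url)
```

Fetching the file would then be a single `urllib.request.urlopen(url)` call (or an authenticated GitHub API request for private repositories).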

6. ui

ℹ️ Note: The currently deployed UI (http://10.95.82.65:8081/) uses the frontend-chatbot-sandbox repository.

This folder contains the web-based user interface for the 6G-Sandbox Agent, which connects through Telefónica's Model Gateway.

  • Dockerfile: Builds a Docker image that serves the web-based user interface of the 6G-Sandbox Agent with NGINX, using a custom NGINX configuration.
  • docker-compose.yml: Wraps the Dockerfile build, making it easy to build and run the UI container and map it to a local port for development or deployment.
  • nginx/: Used to serve the static web content.
  • www/: Contains the main web assets and frontend code
    • index.html: Entry point for the web application.
    • main.js: Main JavaScript logic for the UI.
    • assets/: CSS files and other static assets for styling the interface.
    • img/: Logos and images used in the UI.
    • skin/: Custom CSS themes for light and dark modes.

This folder provides all the resources needed to run and customize the graphical interface for interacting with the 6G-Sandbox Agent.

7. requirements.txt

This file lists the libraries that must be installed in the environment to run the Sandbox AI Agent.

Architecture

The following section presents the internal architecture of the AI Agent developed for the 6G Sandbox project, as shown in Figure 1. This agent is designed to assist users in creating, modifying, validating, and consulting YAML-based network descriptors used for trial deployments across federated testbed sites.

📌 High-Level Description

In its first iteration, the AI Agent establishes an interaction loop between the user and the system where the user selects:

  • A target branch of the 6G-Library repository.
  • A specific site where the deployment will be executed.

Based on the user's intent and context, the orchestrator component analyzes the prompt and autonomously chooses one of five high-level actions:

  • Lazy Creation – Simplified YAML generation using prefilled component templates.
  • Zero-Touch Creation – Fully automated descriptor generation from high-level user goals.
  • Modify – Update or extend an existing YAML file.
  • Consult – Retrieve information about network components or deployment constraints within 6G-Library.
  • Validate – Check the structure and dependencies of an existing or generated YAML descriptor using TNLCM tools.

graph TD;
   __start__([<p>__start__</p>]):::first
   initial_configuration(initial_configuration)
   orchestrator(orchestrator)
   check_yaml_example(check_yaml_example)
   parse_components(parse_components)
   check_dependencies(check_dependencies)
   create_yaml_template(create_yaml_template)
   consulting_node(consulting_node)
   validate_yaml_tnlcm(validate_yaml_tnlcm)
   modify_yaml_template(modify_yaml_template)
   tnlcm_agent_node(tnlcm_agent_node)
   tnlcm_tools_node(tnlcm_tools_node)
   __end__([<p>__end__</p>]):::last
   __start__ --> initial_configuration;
   check_dependencies --> create_yaml_template;
   check_yaml_example -. &nbsp;end&nbsp; .-> __end__;
   check_yaml_example -.-> parse_components;
   create_yaml_template --> validate_yaml_tnlcm;
   initial_configuration --> orchestrator;
   modify_yaml_template --> validate_yaml_tnlcm;
   orchestrator -. &nbsp;end&nbsp; .-> __end__;
   orchestrator -.-> check_yaml_example;
   orchestrator -. &nbsp;consult&nbsp; .-> consulting_node;
   orchestrator -. &nbsp;modify&nbsp; .-> modify_yaml_template;
   orchestrator -.-> parse_components;
   orchestrator -. &nbsp;tnlcm_agent&nbsp; .-> tnlcm_agent_node;
   parse_components --> check_dependencies;
   tnlcm_agent_node -.-> __end__;
   tnlcm_agent_node -. &nbsp;tools&nbsp; .-> tnlcm_tools_node;
   tnlcm_tools_node --> tnlcm_agent_node;
   validate_yaml_tnlcm -. &nbsp;end&nbsp; .-> __end__;
   validate_yaml_tnlcm -. &nbsp;modify&nbsp; .-> modify_yaml_template;
   consulting_node --> __end__;
   classDef default fill:#f2f0ff,line-height:1.2
   classDef first fill-opacity:0
   classDef last fill:#bfb6fc

Figure 1: Architecture of the Sandbox Agent — this diagram illustrates the core components and data flow.

Getting Started

Prerequisites

  • Python
  • Docker
  • Docker Compose (for containerized deployment)
  • Access to Azure OpenAI and GitHub tokens.

Installation

  1. Clone the repository:
git clone https://github.com/Telefonica/fnl-sandbox-agent.git
cd fnl-sandbox-agent
  2. Create a virtual environment and activate it:
python3 -m venv env
source env/bin/activate  # On Linux/macOS
# Or on Windows: env\Scripts\activate
  3. Install dependencies:
pip install -r requirements.txt

Configuration

Create a .env file in the project root with the following structure:

# Azure OpenAI Configuration
AZURE_OPENAI_API_KEY=your_api_key_here
AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT_NAME=your_deployment_name
AZURE_OPENAI_API_VERSION=2024-02-15-preview

# GitHub Configuration
GITHUB_TOKEN=your_github_personal_access_token

# Ansible Configuration
ANSIBLE_VAULT_PASSWORD=your_vault_password
DEFAULT_SITE=UMA

# TNLCM Configuration
TNLCM_API_URL=http://your-tnlcm-instance
TNLCM_API_TOKEN=your_tnlcm_token

⚠️ Important: Never commit the .env file to version control. It's already included in .gitignore.
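These variables can be loaded with python-dotenv or, as a dependency-free sketch, with a few lines of standard-library Python. The parser below is illustrative; the project may load its configuration differently.

```python
# Minimal sketch of loading a .env file without extra dependencies.
# The project may use python-dotenv instead; this shows the idea only.
import os

def load_env(path=".env"):
    """Parse KEY=VALUE lines (skipping blanks/comments) into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo with a placeholder file (values are dummies, as in the example above):
with open(".env.demo", "w") as fh:
    fh.write("# Azure OpenAI Configuration\nAZURE_OPENAI_API_KEY=your_api_key_here\n")
load_env(".env.demo")
print(os.environ["AZURE_OPENAI_API_KEY"])  # your_api_key_here
```

Using `setdefault` means real environment variables (e.g., set by Docker) take precedence over the file.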

🚀 Deployment Guide

This project includes an API powered by FastAPI and a separate UI component. Below are the steps to deploy each part of the system, both in development mode and using Docker.

🔧 API Deployment

▶️ Run Locally

To start the API server from the root of the project:

uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload

▶️ Run in Background (with Logs)

To run the server in the background and log output to uvicorn.log:

nohup uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload > uvicorn.log 2>&1 &

▶️ Run with Docker

To deploy the API using Docker in daemon mode:

docker-compose -f docker-compose.yml up -d --build

View logs:

docker-compose logs -f

Stop the API:

docker-compose down

🖥️ UI Deployment

The UI has its own docker-compose.yml located in the ui/ directory.

▶️ From the Project Root (/)

Start the UI:

docker-compose -f ui/docker-compose.yml up -d --build

View logs:

docker-compose -f ui/docker-compose.yml logs -f

Stop the UI:

docker-compose -f ui/docker-compose.yml down

▶️ From Inside the /ui/ Folder

Start the UI:

docker-compose up -d --build

View logs:

docker-compose logs -f

Stop the UI:

docker-compose down

Disclaimer

Sites

At this moment, deployments requiring descriptor decryption are limited to the UMA site. The UMA site is available because the corresponding Ansible decryption key is provided via the .env configuration. Other sites are not yet enabled for decryption, so they are currently unavailable for automated deployments and consulting through the agent. To change the site, the Ansible key in the environment configuration must be updated manually.

Model Gateway

The Model Gateway is proprietary software developed by Telefónica. Its interface and connection endpoints are private at this time and cannot be exposed publicly. Any integration in this project assumes access to Telefónica’s internal Model Gateway and valid credentials configured in the .env file.

That said, the agent exposes a REST API, so the Model Gateway can be replaced by other frontend interfaces (e.g., Streamlit, custom web apps, or CLI clients) that consume the agent's endpoints. You can build your own UI that calls the API directly without relying on the Model Gateway.
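As a sketch of such a client, the snippet below builds a request to the agent's API using only the standard library. The payload fields (`session_id`, `query`) are assumptions about what POST /process_query expects; check app/main.py for the actual schema.

```python
# Hypothetical minimal client for the agent's REST API (stdlib only).
# The payload fields are assumptions; see app/main.py for the real schema.
import json
import urllib.request

API_URL = "http://localhost:8000"  # assumed local deployment

def build_query_request(session_id: str, query: str) -> urllib.request.Request:
    """Construct a POST /process_query request with a JSON body."""
    body = json.dumps({"session_id": session_id, "query": query}).encode()
    return urllib.request.Request(
        f"{API_URL}/process_query",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request("demo-session", "Create a TN with a 5G core")
print(req.full_url, req.get_method())
# Sending it (requires the API to be running):
#   with urllib.request.urlopen(req) as resp: print(resp.read())
```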

License

This project is proprietary software developed by Telefónica Innovación Digital (TID). All rights reserved ©.

About

Sandbox AI Agent to generate valid descriptors according to user input in a Chatbot
