Tablepilot provides a REST API for managing AI-generated tables. You can start the server with tablepilot serve. The base URL is:
http://127.0.0.1:8080/api/v1
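All endpoints below are relative to this base URL. As a minimal sketch, requests can be issued with Python's standard library alone (this assumes the server is running locally; the api_request helper is not part of Tablepilot):

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8080/api/v1"

def api_request(method, path, body=None):
    """Build a request against the Tablepilot API; pass a dict body for JSON endpoints."""
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        BASE_URL + path,
        data=data,
        method=method,
        headers={"Content-Type": "application/json"},
    )

# Compose a request for GET /tables; send it with urllib.request.urlopen(req).
req = api_request("GET", "/tables")
print(req.full_url, req.get_method())
```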
POST /tables
The JSON request body follows the same syntax as the CLI schema file; see the examples directory for more examples.
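For instance, the request body below can be assembled and serialized in Python before sending (a sketch; the field names mirror the example that follows):

```python
import json

# Build the POST /tables body; columns use the same fields as the CLI schema file.
payload = {
    "name": "recipes",
    "model": "m1",
    "description": "all recipes",
    "columns": [
        {"name": "recipe_name", "description": "Name of the recipe",
         "type": "string", "fill_mode": "ai"},
    ],
}
body = json.dumps(payload).encode()  # send as the POST /tables request body
print(len(payload["columns"]))
```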
{
"name": "recipes",
"model": "m1",
"description": "all recipes",
"columns": [
{"name": "recipe_name", "description": "Name of the recipe", "type": "string", "fill_mode": "ai"}
]
}

Response:

{
"id": "foo"
}

GET /tables/{table_id or table_name}/schema
Retrieves the schema definition of a specific table. This includes the table's name, description, and a detailed list of its columns with their configurations (name, type, fill mode, source details, etc.). This is useful for understanding the structure and data generation rules for a table.
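The decoded schema can be inspected programmatically; for example, this sketch summarizes how each column is filled (the dict below abbreviates the example response that follows):

```python
# A trimmed schema response, as would be decoded from this endpoint's JSON.
schema = {
    "name": "users",
    "columns": [
        {"name": "user_id", "type": "integer", "fill_mode": "ai"},
        {"name": "email", "type": "string", "fill_mode": "pick",
         "source_id": "dataset_emails", "source_type": "dataset"},
    ],
}

# Map column names to fill modes to see the data generation rule for each column.
fill_modes = {c["name"]: c["fill_mode"] for c in schema["columns"]}
print(fill_modes)
```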
{
"name": "users",
"description": "Table containing user information",
"columns": [
{
"name": "user_id",
"description": "Unique identifier for the user",
"type": "integer",
"fill_mode": "ai",
"random": false,
"replacement": false,
"repeat": false,
"context_length": 0,
"linked_column": "",
"linked_context_columns": [],
"source_id": "",
"source_type": "",
"options": []
},
{
"name": "username",
"description": "User's chosen username",
"type": "string",
"fill_mode": "ai",
"random": false,
"replacement": false,
"repeat": false,
"context_length": 0,
"linked_column": "",
"linked_context_columns": [],
"source_id": "",
"source_type": "",
"options": []
},
{
"name": "email",
"description": "User's email address",
"type": "string",
"fill_mode": "pick",
"random": true,
"replacement": false,
"repeat": false,
"context_length": 0,
"linked_column": "email_address",
"linked_context_columns": ["domain"],
"source_id": "dataset_emails",
"source_type": "dataset",
"options": []
}
]
}

PATCH /tables/{table_id or table_name}
{
"columns": [
{"name": "recipe_name", "description": "Name of the recipe", "type": "string", "fill_mode": "ai"}
],
"sources": [
{"name": "cuisines"}
]
}

Response:

{
"id": "foo"
}

POST /generate/tables/{table_id or table_name}
{
"batch": 2,
"count": 4,
"temperature": 0.56,
"model": "aiai"
}

Response:

{
"data": [
{"recipe_name": "0", "ingredient": "t0"},
{"recipe_name": "1", "ingredient": "t1"}
]
}

POST /autofill/tables/{table_id or table_name}
{
"batch": 2,
"count": 4,
"temperature": 0.56,
"model": "aiai",
"autofill": {"columns": ["ingredients"], "context_columns": ["steps"]}
}

Important: Unlike the autofill CLI command, if context_columns is empty, other columns are not used automatically. You must explicitly specify which context_columns to use.
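A sketch of an autofill request body that follows this rule, naming the context columns explicitly:

```python
# Build the POST /autofill body; context_columns must be listed explicitly,
# since the API (unlike the CLI) does not infer them from the other columns.
body = {
    "batch": 2,
    "count": 4,
    "model": "aiai",
    "autofill": {
        "columns": ["ingredients"],
        "context_columns": ["recipe_name", "steps"],
    },
}
print(body["autofill"]["context_columns"])
```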
{
"data": [
{"recipe_name": "0", "ingredient": "t0"},
{"recipe_name": "1", "ingredient": "t1"}
]
}

POST /regenerate/tables/{table_id or table_name}
{
"batch": 2,
"temperature": 0.56,
"model": "aiai",
"autofill": {"columns": ["ingredients"], "rows": ["rsClYt", "8cR0I7"], "prompt": "foo bar"}
}

Response:

{
"data": [
{"recipe_name": "0", "ingredient": "t0"},
{"recipe_name": "1", "ingredient": "t1"}
]
}

POST /generate/tables/{table_id or table_name}
{
"batch": 2,
"count": 4,
"temperature": 0.56,
"model": "aiai",
"stream": true
}

When "stream": true is set, the response is a Server-Sent Events (SSE) stream:

event:message
data:{"data":[{"recipe_name":"0","ingredient":"t0"}]}
event:message
data:{"data":[{"recipe_name":"1","ingredient":"t1"}]}
event:message
data:[DONE]
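The stream can be consumed line by line; a minimal sketch of decoding the data: lines (parse_sse_data is a hypothetical helper, not part of the API):

```python
import json

def parse_sse_data(lines):
    """Yield decoded payloads from 'data:' lines, stopping at the [DONE] sentinel."""
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip 'event:message' and blank lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)

stream = [
    "event:message",
    'data:{"data":[{"recipe_name":"0","ingredient":"t0"}]}',
    "event:message",
    "data:[DONE]",
]
rows = [chunk["data"][0] for chunk in parse_sse_data(stream)]
print(rows)
```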
GET /tables/{table_id or table_name}/rows
{
"data": [
{"recipe_name": "Spaghetti Bolognese", "ingredient": "Tomato Sauce"},
{"recipe_name": "Pancakes", "ingredient": "Flour"},
{"recipe_name": "Salad", "ingredient": "Lettuce"}
],
"total": 3
}

GET /tables
{
"total": 2,
"tables": [
{"id": "1", "name": "recipes", "description": "Collection of recipes"},
{"id": "2", "name": "ingredients", "description": "Ingredient lists"}
]
}

DELETE /tables/{table_id or table_name}
204 No Content
POST /tables/{table_id or table_name}/truncate
{
"deleted_rows": 5
}

GET /tables/{table_id or table_name}
{
"columns": [
{
"ID": "1",
"Name": "name",
"Type": "string",
"FillMode": "ai",
"Description": "recipe name"
},
{
"ID": "2",
"Name": "description",
"Type": "string",
"FillMode": "ai",
"Description": "recipe description"
}
]
}

GET /models
{
"default": "4o",
"models": ["gemini-2","4o","vllm-llama3"]
}

Manages AI model providers.
GET /providers
Retrieves a list of all configured AI model providers.
[
{
"id": 1,
"name": "openai_official",
"type": "openaicompatible",
"editable": false,
"key": "sk-...",
"base_url": "https://api.openai.com/v1",
"enabled": true,
"models": [
{
"model": "gpt-4",
"alias": "gpt4",
"client": "openai_official",
"max_tokens": 8192,
"rpm": 100,
"image": true,
"default": true
}
]
},
{
"id": 2,
"name": "custom_ollama",
"type": "openaicompatible",
"editable": true,
"key": "ollama_key_if_needed",
"base_url": "http://localhost:11434/v1",
"enabled": true,
"models": [
{
"model": "llama3",
"alias": "",
"client": "custom_ollama",
"max_tokens": 4096,
"rpm": 0,
"image": false,
"default": false
}
]
}
]

POST /providers
Creates a new AI model provider. The id and editable fields in the request body are typically ignored as they are server-assigned. models can be initially empty or provided.
{
"name": "new_provider_xyz",
"type": "openaicompatible",
"key": "provider_api_key",
"base_url": "https://api.example.com/v1",
"enabled": true,
"models": [
{
"model": "custom-model-1",
"alias": "cm1",
"max_tokens": 2048,
"rpm": 50,
"image": false
}
]
}

A 200 OK response with an empty JSON string:
""

PATCH /providers/{provider_id}
Updates an existing AI model provider. The id in the URL specifies the provider to update. The request body should contain the fields to be updated. name and type are usually not changed, but other fields like key, base_url, enabled, and models can be updated.
{
"key": "updated_api_key",
"enabled": false,
"models": [
{
"model": "custom-model-1",
"alias": "cm1_updated",
"max_tokens": 4000,
"rpm": 60,
"image": true
},
{
"model": "new-model-2",
"alias": "nm2",
"max_tokens": 2000,
"rpm": 30,
"image": false
}
]
}

A 200 OK response with an empty JSON string:
""

DELETE /providers/{provider_id}
Deletes an AI model provider specified by its id.
A 200 OK response with an empty JSON string:
""

Manages and executes automated workflows involving table operations and AI tasks.
GET /workflows
Retrieves a list of all saved workflows.
{
"total": 1,
"workflows": [
{
"id": "wf_abc123",
"name": "Daily Data Processing",
"description": "Imports data, generates insights, and exports results."
}
]
}

POST /workflows
Creates a new workflow.
{
"name": "New User Onboarding Workflow",
"description": "Prepares tables and initial data for new users.",
"variables": [
{
"name": "username",
"description": "The name of the new user.",
"type": "string",
"default_value": "guest"
},
{
"name": "user_data_file",
"description": "CSV file containing user details.",
"type": "file"
}
],
"steps": [
{
"name": "Create User Table",
"type": "create_table",
"description": "Creates a table specific for the user {{.username}}.",
"payload": {
"request": {
"name": "user_{{.username}}_details",
"description": "Details for user {{.username}}",
"columns": [
{"name": "email", "type": "string", "fill_mode": "ai"},
{"name": "registration_date", "type": "date", "fill_mode": "ai"}
]
},
"on_exists": "skip"
}
},
{
"name": "Import User Data",
"type": "import",
"description": "Imports data from the provided CSV.",
"payload": {
"table": "user_{{.username}}_details",
"file": "{{.user_data_file}}",
"truncate": true
}
}
]
}

Note on payload: The structure of payload within each step depends on the step's type. Variables defined in the variables section can be referenced in step payloads using Go template syntax (e.g., {{.variable_name}}).
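The template references stay literal in the stored workflow body; the server expands them at run time. A sketch of building such a step in Python:

```python
# Variables are declared once, then referenced in step payloads with Go
# template syntax. The {{.username}} placeholder is stored verbatim here;
# substitution happens server-side when the workflow runs.
workflow = {
    "name": "New User Onboarding Workflow",
    "variables": [
        {"name": "username", "type": "string", "default_value": "guest"},
    ],
    "steps": [
        {
            "name": "Create User Table",
            "type": "create_table",
            "payload": {
                "request": {"name": "user_{{.username}}_details"},
                "on_exists": "skip",
            },
        },
    ],
}
print(workflow["steps"][0]["payload"]["request"]["name"])
```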
{
"id": "wf_xyz789"
}

GET /workflows/{workflow_id}
Retrieves the complete details of a specific workflow by its ID.
{
"id": "wf_xyz789",
"name": "New User Onboarding Workflow",
"description": "Prepares tables and initial data for new users.",
"variables": [
{
"name": "username",
"description": "The name of the new user.",
"type": "string",
"default_value": "guest"
},
{
"name": "user_data_file",
"description": "CSV file containing user details.",
"type": "file"
}
],
"steps": [
{
"name": "Create User Table",
"type": "create_table",
"description": "Creates a table specific for the user {{.username}}.",
"payload": {
"request": {
"name": "user_{{.username}}_details",
"description": "Details for user {{.username}}",
"columns": [
{"name": "email", "type": "string", "fill_mode": "ai"},
{"name": "registration_date", "type": "date", "fill_mode": "ai"}
]
},
"on_exists": "skip"
}
},
{
"name": "Import User Data",
"type": "import",
"description": "Imports data from the provided CSV.",
"payload": {
"table": "user_{{.username}}_details",
"file": "{{.user_data_file}}",
"truncate": true
}
}
]
}

PATCH /workflows/{workflow_id}
Updates an existing workflow. The request body should be a complete workflow.Workflow object containing the desired changes.
(Similar to the Create Workflow request body, with potentially updated fields)
{
"id": "wf_xyz789", // Usually ignored by server, ID from URL is used
"name": "Updated User Onboarding Workflow",
"description": "Prepares tables and initial data for new users, now with more automation.",
"variables": [
{
"name": "username",
"description": "The name of the new user.",
"type": "string",
"default_value": "default_user"
},
{
"name": "user_data_file",
"description": "CSV file containing user details.",
"type": "file"
},
{
"name": "notification_email",
"description": "Email to notify upon completion.",
"type": "string"
}
],
"steps": [
// Potentially updated or new steps
{
"name": "Create User Table",
"type": "create_table",
"description": "Creates a table specific for the user {{.username}}.",
"payload": {
"request": {
"name": "user_{{.username}}_main_data", // Renamed table
"description": "Main data for user {{.username}}",
"columns": [
{"name": "email", "type": "string", "fill_mode": "ai"},
{"name": "registration_date", "type": "date", "fill_mode": "ai"},
{"name": "status", "type": "string", "fill_mode": "ai"} // New column
]
},
"on_exists": "recreate"
}
}
]
}

Response:

{
"id": "wf_xyz789"
}

DELETE /workflows/{workflow_id}
Deletes a workflow specified by its ID.
A 200 OK response with an empty JSON string:
""

POST /workflows/{workflow_id_or_name}/run
Executes a specified workflow. Input variables for the workflow are provided in the request body. The response is a Server-Sent Events (SSE) stream providing real-time updates on the workflow's execution.
{
"variables": {
"username": "john_doe",
"user_data_file": {
"name": "john_doe_details.csv",
"data": "data:text/csv;base64,YXMNCg=="
},
"notification_email": "john.doe@example.com"
},
"model": "gpt-4-turbo",
"image_model": "dall-e-3",
"temperature": 0.7
}

Note on user_data_file: File variables are passed as objects with name and data (a Data URL encoded string). Other variables are passed directly. model, image_model, and temperature are optional overrides for AI steps within the workflow.
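A sketch of preparing a file variable as a Data URL with the standard library (to_file_variable is a hypothetical helper):

```python
import base64

def to_file_variable(name, raw_bytes, mime="text/csv"):
    """Wrap raw file bytes as the {name, data} object expected for file variables."""
    encoded = base64.b64encode(raw_bytes).decode("ascii")
    return {"name": name, "data": f"data:{mime};base64,{encoded}"}

# A real call would read the bytes of the uploaded CSV from disk.
var = to_file_variable("john_doe_details.csv", b"email,date\n")
print(var["data"])
```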
The server responds with a stream of events. Each event has a type and data.
Example events:
event:message
data:{"type":"MESSAGE","data":"Starting step: Create User Table..."}
event:message
data:{"type":"STEP_DONE"}
event:message
data:{"type":"MESSAGE","data":"Starting step: Import User Data..."}
event:message
data:{"type":"ROWS","data":[{"email":{"value":"john.doe@example.com","type":"string"},"registration_date":{"value":"2023-10-26","type":"date"}}]}
event:message
data:{"type":"STEP_DONE"}
event:message
data:{"type":"EXPORT","data":"CSV content here...","path":"/path/to/export/file.csv"}
// The 'path' field in EXPORT is optional and indicates the desired save path if applicable.
event:message
data:{"type":"MESSAGE","data":"Table user_john_doe_main_data exported"}
event:message
data:{"type":"STEP_DONE"}
event:message
data:{"type":"ERROR","data":"Error message if something went wrong."}
event:message
data:{"type":"WORKFLOW_DONE"}
event:message
data:[DONE]
Possible event type values:
- MESSAGE: General message or log from a step.
- ROWS: Data generated by a step (e.g., from a "generate" or "autofill" step), typically an array of row objects.
- EXPORT: Contains exported data (e.g., CSV content) and an optional path.
- ERROR: Indicates an error occurred during a step.
- STEP_DONE: Signals the completion of a workflow step.
- WORKFLOW_DONE: Signals the completion of the entire workflow. The stream is terminated by [DONE].
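A sketch of handling these event types while consuming the run stream (the events list stands in for already-decoded data: payloads):

```python
# Decoded 'data:' payloads from the run stream, in arrival order.
events = [
    {"type": "MESSAGE", "data": "Starting step: Import User Data..."},
    {"type": "ROWS", "data": [{"email": {"value": "a@b.c", "type": "string"}}]},
    {"type": "STEP_DONE"},
    {"type": "WORKFLOW_DONE"},
]

rows, steps_done = [], 0
for event in events:
    kind = event["type"]
    if kind == "ROWS":
        rows.extend(event["data"])      # collect generated row objects
    elif kind == "STEP_DONE":
        steps_done += 1                 # count completed steps
    elif kind == "ERROR":
        raise RuntimeError(event["data"])
    elif kind == "WORKFLOW_DONE":
        break                           # the stream ends after this event

print(len(rows), steps_done)
```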
Manages datasets that can be used as sources in table generation or other operations. Datasets can be of type 'list' (a simple list of strings) or 'csv' (data uploaded from a CSV file).
GET /datasets
Retrieves a list of all available datasets.
{
"total": 2,
"datasets": [
{
"id": "ds_abc123",
"name": "Country Names",
"description": "A list of country names.",
"type": "list",
"column_count": 0,
"value_count": 195,
"data": ["United States", "Canada", "Mexico", "..."],
"columns": []
},
{
"id": "ds_xyz789",
"name": "Product Catalog",
"description": "CSV data for product information.",
"type": "csv",
"column_count": 3,
"value_count": 0,
"data": [],
"columns": ["product_id", "product_name", "price"]
}
]
}

Note: For 'list' type, data contains the list items, column_count is 0, and columns is empty. For 'csv' type, columns contains the header names, data is empty, and value_count is 0 (the actual row count isn't stored in this summary).
POST /datasets
Creates a new dataset. The request must be multipart/form-data.
- For type: "list", provide name, description, type, and data (as form fields; data can be repeated for multiple items).
- For type: "csv", provide name, description, type, and one or more files (as file uploads).
Example for type: "list":
name: (string) "My List Dataset"
description: (string) "A simple list of items"
type: (string) "list"
data: (string) "item1"
data: (string) "item2" (repeat the data field for each item in the list)
Example for type: "csv":
name: (string) "My CSV Dataset"
description: (string) "Data uploaded from a CSV file"
type: (string) "csv"
files: (file) [upload content of data.csv] (can attach multiple files if the server merges them, or just one primary CSV)
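The multipart/form-data body can be assembled by hand with just the standard library; a minimal sketch for the list case (the boundary string is arbitrary; the CSV case would attach files parts with a filename instead):

```python
# Assemble a multipart/form-data body for POST /datasets (list type).
boundary = "tablepilot-boundary"
fields = [
    ("name", "My List Dataset"),
    ("description", "A simple list of items"),
    ("type", "list"),
    ("data", "item1"),
    ("data", "item2"),  # the data field repeats once per list item
]

parts = []
for key, value in fields:
    parts.append(
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{key}"\r\n\r\n'
        f"{value}\r\n"
    )
body = "".join(parts) + f"--{boundary}--\r\n"
content_type = f"multipart/form-data; boundary={boundary}"
print(content_type)
```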
{
"id": "ds_new456",
"name": "My List Dataset"
}

GET /datasets/{dataset_id}
Retrieves detailed information about a specific dataset by its ID.
{
"id": "ds_abc123",
"name": "Country Names",
"description": "A list of country names.",
"type": "list",
"column_count": 0,
"value_count": 195,
"data": ["United States", "Canada", "Mexico", "Brazil", "United Kingdom", "..."],
"columns": []
}

Or for a CSV dataset:
{
"id": "ds_xyz789",
"name": "Product Catalog",
"description": "CSV data for product information.",
"type": "csv",
"column_count": 3,
"value_count": 0,
"data": [],
"columns": ["product_id", "product_name", "price"]
}

PATCH /datasets/{dataset_id}
Updates an existing dataset. The request must be multipart/form-data.
You can update name and description. For type: "list", you can update data. For type: "csv", you can upload new files (this typically replaces the existing CSV data). The dataset type cannot be changed.
Example for updating description and data of a list dataset:
description: (string) "An updated list of items"
data: (string) "updated_item1"
data: (string) "updated_item2" (the provided data fields replace the existing list)
Example for updating files of a csv dataset:
files: (file) [upload content of new_data.csv] (The new file(s) will replace the old CSV data.)
{
"id": "ds_xyz789"
}

DELETE /datasets/{dataset_id}
Deletes a dataset specified by its ID.
A 200 OK response with an empty JSON string:
""

GET /datasets/{dataset_id}/preview
Retrieves a preview of the dataset's content.
- For type: "list", it returns all items.
- For type: "csv", it typically returns the first 100 rows.
For a list dataset:
{
"type": "list",
"data": ["item1", "item2", "item3", "..."],
"rows": null
}

For a csv dataset:
{
"type": "csv",
"data": null,
"rows": [
{"product_id": "P1001", "product_name": "Laptop", "price": "1200.00"},
{"product_id": "P1002", "product_name": "Mouse", "price": "25.00"},
// ... up to 100 rows
]
}

POST /image_import/tables
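A sketch of filling the data field of the request body below, assuming it carries the raw image bytes as a plain base64 string:

```python
import base64
import json

def image_import_body(image_bytes, model, prompt):
    """Build the POST /image_import/tables body with base64-encoded image data."""
    return {
        "data": base64.b64encode(image_bytes).decode("ascii"),
        "model": model,
        "prompt": prompt,
    }

# Stand-in bytes; a real call would read a screenshot or photo of a table.
body = image_import_body(b"\x89PNG", "aiai", "create a table of 4 columns")
print(json.dumps(body)[:30])
```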
{
"data": <base64 encoded image data>,
"model": "aiai",
"prompt": "create a table of 4 columns, ...."
}

Response:

{"id": "cx5zty"}

POST /ai/list_gen
Uses AI to generate a list of string options based on a given prompt. It can optionally take existing options to avoid duplicates and generate new, complementary options.
{
"model": "gpt-4",
"prompt": "Generate a list of common fruits.",
"options": ["apple", "banana"]
}

If no specific AI model is provided, the system's default model will be used. The options field is optional; if provided, the AI will try to generate options not present in this list.
A JSON array of generated string options.
[
"orange",
"grape",
"strawberry",
"mango"
]