Glambdar is a minimal serverless function runtime written in Go for executing Node.js functions with Docker-based isolation.
It is intentionally small and focuses on the core mechanics of a serverless runtime: deployment, invocation, isolation, and IPC.
- A function is uploaded as a zip file
- The zip is extracted into a function-specific directory
- On invocation:
  - A warm Docker container is acquired from the pool (or a new one is started)
  - The function code is mounted
  - A Node.js worker executes the function
  - Communication between the runtime and the worker happens via Unix Domain Sockets (UDS); see the sketch after this list
  - After execution, the container is returned to the pool for reuse
- The response is returned to the client
- Metadata and execution logs are tracked in a SQLite database for each function
- Functions can be queried or deleted via API routes
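The exact runtime-to-worker wire protocol is internal to Glambdar, but the general pattern is easy to sketch. The following is a minimal, hypothetical illustration of a Node.js worker serving requests over a UDS socket; the socket path and the newline-delimited JSON framing are assumptions, not Glambdar's actual protocol:

```js
// Minimal sketch of a UDS worker loop — NOT Glambdar's actual protocol.
// The socket path and newline-delimited JSON framing are assumptions.
const net = require('node:net');
const { handler } = require('./index.js'); // the deployed function's entry point

const SOCKET_PATH = '/tmp/glambdar-worker.sock'; // hypothetical path

const server = net.createServer((conn) => {
  let buf = '';
  conn.on('data', async (chunk) => {
    buf += chunk;
    const nl = buf.indexOf('\n');
    if (nl === -1) return; // wait for a complete message
    const msg = JSON.parse(buf.slice(0, nl));
    buf = buf.slice(nl + 1);

    // Build the request object the handler expects (headers, body, json()).
    const req = {
      headers: msg.headers,
      body: msg.body,
      json: async () => JSON.parse(msg.body),
    };

    try {
      const res = await handler(req);
      conn.write(JSON.stringify(res) + '\n');
    } catch (err) {
      conn.write(JSON.stringify({ statusCode: 500, body: String(err) }) + '\n');
    }
  });
});

server.listen(SOCKET_PATH);
```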
Requirements:

- Docker
- Unix-based environment (Linux/macOS): UDS is used for IPC, so Windows is not supported natively
- Go (for building the runtime)
- Node.js (inside the Docker container, provided by the Node.js container image)
Glambdar relies on Docker for function isolation. Ensure the Docker daemon is running before starting the runtime.

The runtime automatically creates a `.glambdar` directory in your user home directory to store functions, logs, and metadata.
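As a rough picture of that layout, only the `functions/<name>` path is documented (under `POST /deploy` below); everything else here is hypothetical:

```
~/.glambdar/
├── functions/
│   └── myfunc/       # extracted from myfunc.zip on deploy
│       └── index.js  # the function's entry point
└── ...               # SQLite metadata and execution logs (exact layout internal)
```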
```bash
git clone https://github.com/eswar-7116/glambdar.git
cd glambdar
go run ./cmd/glambdar
```

Or build and run a binary:

```bash
go build -o glambdar ./cmd/glambdar
./glambdar
```

The runtime starts an HTTP server on `localhost:8000`.
```bash
curl -X POST \
  -F "file=@/path/to/myfunc.zip" \
  http://localhost:8000/deploy
```

The function name is automatically inherited from the zip file name.
```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"name":"Glambdar"}' \
  http://localhost:8000/invoke/myfunc
```

List metadata for all functions:

```bash
curl http://localhost:8000/info
```

Get metadata for a single function:

```bash
curl http://localhost:8000/info/myfunc
```

Fetch execution logs:

```bash
curl http://localhost:8000/logs/myfunc
```

Delete a function:

```bash
curl -X DELETE http://localhost:8000/del/myfunc
```

`POST /deploy`
- Upload a zip file
- Glambdar extracts the zip file into `GLAMBDAR_DIR/functions/<name>`
- Initializes metadata
- Optional: `rateLimit` (form field) sets the maximum requests per second for this function (default: `0` for unlimited); see the deploy sketch below
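For programmatic deploys, something like the following works with Node 18+'s built-in `fetch` and `FormData`. The `file` field name matches the curl example above; the zip path and rate limit value are placeholders:

```js
// Sketch: programmatic deploy using Node 18+ global fetch/FormData.
// The zip path and rateLimit value are placeholders.
const { readFile } = require('node:fs/promises');

async function deploy(zipPath) {
  const form = new FormData();
  const data = await readFile(zipPath);
  form.append('file', new Blob([data], { type: 'application/zip' }), 'myfunc.zip');
  form.append('rateLimit', '10'); // optional: cap at 10 req/s; 0 (default) = unlimited
  const res = await fetch('http://localhost:8000/deploy', { method: 'POST', body: form });
  console.log(res.status, await res.text());
}

deploy('/path/to/myfunc.zip');
```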
`POST /config/:name`

- Body: `{"rateLimit": number}`
- Updates the rate limit for a deployed function in real time, without redeploying
`POST /invoke/:name`

- Runs the function in an isolated Docker container
- Uses a warm container pool for faster subsequent invocations
- Uses UDS for runtime-worker communication

All invocations are HTTP POST requests.
`GET /logs/:name`

- Returns stdout and stderr execution logs for a single function

`GET /info`

- Returns metadata for all deployed functions

`GET /info/:name`

- Returns metadata for a single function

`DELETE /del/:name`

- Removes function code, metadata, and logs
Each deployed function must export a handler function from an `index.js` file.

- The file name must be `index.js`
- The entry point must be `exports.handler`
- The handler must be an async function
- The handler receives the request object described below in the Function Request Format section
```js
exports.handler = async (req) => {
  const jsonData = await req.json();
  return {
    statusCode: 200,
    body: {
      message: `Hello ${jsonData.name}!`,
    },
  };
};
```

If `handler` is missing or `index.js` is not present, the invocation will fail.
```ts
{
  headers: { [key: string]: string | string[] },
  body: string,
  json(): Promise<any>
}
```

Inside the function:

- `req.headers`: request headers
- `req.body`: raw body string
- `await req.json()`: parsed JSON body
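For example, a handler exercising all three accessors might look like this (the header key casing and response shape are illustrative):

```js
// Sketch: using req.headers, req.body, and req.json() together.
exports.handler = async (req) => {
  const contentType = req.headers['content-type']; // header lookup (key casing assumed)
  const raw = req.body;                            // unparsed body string
  const data = await req.json();                   // parsed JSON body
  return {
    statusCode: 200,
    body: { contentType, rawLength: raw.length, echoed: data },
  };
};
```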
```ts
{
  statusCode?: number,
  headers?: { [key: string]: string | string[] },
  body: any
}
```

- `statusCode` (optional): the HTTP status code of the response (default: `200`)
- `headers` (optional): the response headers
- `body`: any JSON-serializable value, returned as the HTTP response body
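A handler that sets an explicit status code and a custom response header (the header name is illustrative):

```js
// Sketch: returning a non-default status code and a custom response header.
exports.handler = async () => ({
  statusCode: 201,
  headers: { 'X-Function-Name': 'myfunc' }, // illustrative header
  body: { created: true },
});
```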
- Unit tests run by default
- Integration tests (Docker-dependent) are skipped unless enabled
Run only unit tests locally:

```bash
go test ./...
```

Run integration tests locally:

```bash
RUN_INTEGRATION_TESTS=1 go test ./...
```

Glambdar is optimized for low-latency function execution using persistent per-function container pools, Unix Domain Socket (UDS) IPC, and EWMA-based predictive pre-warming.
| Metric | Result |
|---|---|
| Cold Start Latency | ~230 ms |
| Warm Start Latency (Avg) | ~1.06 ms |
| Warm Throughput | ~2,951 req/s |
Benchmark Environment
- Local Linux environment
- Intel i7 13th Gen, 16GB RAM
- Simple "ping" function returning static JSON
- Warm latency measured after worker/container initialization
- Throughput measured under concurrent warm load
- Persistent Docker container pool for reduced latency and auto-scaling
- Intra-Function Concurrency: a multi-request routing threshold (adapted from 2024 IEEE serverless optimization models) drastically reduces cold starts under burst loads while maintaining strict process isolation
- EWMA-Based Predictive Pre-Warming: uses Exponentially Weighted Moving Average traffic prediction with a dynamic alpha to proactively spin up containers before demand spikes, eliminating cold starts under burst loads (see the sketch below)
- UDS instead of TCP for low-latency IPC
- Simple IPC protocol (structured JSON)
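The EWMA recurrence itself is standard: `load = alpha * sample + (1 - alpha) * load`. Below is an illustrative sketch of how such a predictor could drive pre-warming; the alpha values, per-container capacity, and `prewarm` hook are hypothetical, not Glambdar's actual implementation:

```js
// Illustrative EWMA pre-warming loop — NOT Glambdar's internals.
// Alpha values, capacity, and prewarm() are hypothetical.
let predictedLoad = 0; // EWMA of requests per second

function observe(requestsThisSecond) {
  // "Dynamic alpha": weight fresh samples more heavily when traffic is rising,
  // so the predictor reacts quickly to bursts but decays slowly afterwards.
  const alpha = requestsThisSecond > predictedLoad ? 0.5 : 0.2;
  predictedLoad = alpha * requestsThisSecond + (1 - alpha) * predictedLoad;

  const perContainerCapacity = 10; // hypothetical req/s a single worker absorbs
  prewarm(Math.ceil(predictedLoad / perContainerCapacity));
}

function prewarm(n) {
  // Placeholder: a real runtime would grow the warm container pool to size n.
  console.log(`ensure ${n} warm container(s) in the pool`);
}
```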
If you like this project, please consider giving this repo a star 🌟
