
Omni-RLM

A High-Performance Recursive Language Model Framework


Leverage the power of recursive reasoning in AI agents with type-safe, high-performance Zig

Chinese documentation: README_CN.md

Overview • Features • Installation • Quick Start • Examples • Project Structure • Documentation • Roadmap


📖 Overview

Omni-RLM is a high-performance recursive language model framework that enables AI agents to perform complex reasoning tasks through controlled recursive LLM calls. Built with Zig's zero-cost abstractions and memory safety features, it provides a robust foundation for production-grade AI applications.

Why Omni-RLM?

  • 🚀 Blazing Fast: Leveraging Zig's zero-cost abstractions and manual memory management for optimal performance
  • 🔄 Recursive Reasoning: Support for multi-depth language model calls with fine-grained control
  • 📝 Production-Ready Logging: Comprehensive structured logging for debugging and analysis
  • 🔌 Backend Agnostic: Works with any OpenAI-compatible API (OpenAI, Qwen, Anthropic, etc.)
  • 🎯 Type-Safe: Compile-time guarantees prevent runtime errors
  • 💾 Memory Efficient: Explicit allocator control for predictable resource usage
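
Conceptually, recursive reasoning means a model call may hand back a sub-query that is answered by a nested model call, with a depth limit bounding the nesting. The following Python sketch is purely illustrative: the `call_model` stub and the `DELEGATE:`/`final:` markers are invented for this example and are not Omni-RLM's API.

```python
def call_model(prompt: str) -> str:
    """Stub backend: each leading '?' asks for one level of delegation."""
    if prompt.startswith("?"):
        return "DELEGATE:" + prompt[1:]
    return "final:" + prompt

def recursive_completion(prompt: str, max_depth: int, depth: int = 0) -> str:
    """Depth-limited recursion: recurse only while the depth budget allows."""
    response = call_model(prompt)
    if response.startswith("DELEGATE:") and depth < max_depth:
        return recursive_completion(response[len("DELEGATE:"):], max_depth, depth + 1)
    return response  # either a final answer, or recursion was capped

print(recursive_completion("?task", max_depth=2))  # final:task
```

A real framework would route the delegated sub-query back through the full prompt/parse/execute pipeline; the explicit depth check is the part this sketch shares with Omni-RLM's max_depth setting.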

✨ Features

  • Recursive Execution: Execute language models with configurable recursion depth limits
  • Query Tracking: Automatic tracking of context length, type, and metadata
  • Iteration Logging: JSON-formatted logs for every iteration with full traceability
  • Backend Flexibility: Easy integration with OpenAI, Qwen, or any OpenAI-compatible LLM API
  • Memory Safety: Built-in protection against memory leaks and undefined behavior
  • Custom Prompts: Override system prompts for specialized agent behaviors

Installation

Prerequisites

  • Zig 0.15.2 or later
  • Python package dill for code execution in the local environment

Installation Steps

  1. Clone the repository:
git clone https://github.com/Open-Model-Initiative/Omni-RLM.git
cd Omni-RLM
  2. Install Python dependencies:
pip install dill # Only needed if executing code in the local environment
  3. Copy the template and create your .env file:
cp .env.example .env
  4. Fill in your .env values:
OMNIRLM_API_KEY=sk-your-api-key-here
OMNIRLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
OMNIRLM_MODEL_NAME=qwen-flash
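
These keys are read by src/core/config_env.zig at startup. For readers unfamiliar with .env files, the format is plain KEY=VALUE lines; a minimal Python sketch of such a parser (simplified: no quoting or escape handling) looks like this:

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and '#' comments."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore comments and empty lines
        key, sep, value = line.partition("=")
        if sep:  # keep only lines that actually contain '='
            env[key.strip()] = value.strip()
    return env

cfg = parse_env("OMNIRLM_API_KEY=sk-your-api-key-here\nOMNIRLM_MODEL_NAME=qwen-flash\n")
print(cfg["OMNIRLM_MODEL_NAME"])  # qwen-flash
```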

🚀 Quick Start

Here's a simple example to get you started:

zig build run

IMPORTANT: zig build run loads the backend configuration from .env.

const std = @import("std");
const omni = @import("omni-rlm");
const RLM = omni.RLM;
const RLMLogger = omni.RLMLogger;
const config_env = omni.config_env;

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    var backend_cfg = try config_env.load_backend_env_config(allocator, ".env");
    defer backend_cfg.deinit(allocator);

    // Initialize logger
    const logger = try RLMLogger.init("./logs", "run", allocator);

    // Configure RLM instance
    var rlm: RLM = .{
        .backend = "openai",
        .backend_kwargs = .{
            .base_url = backend_cfg.base_url,
            .api_key = backend_cfg.api_key,
            .model_name = backend_cfg.model_name,
        },
        .environment = "local",
        .environment_kwargs = "{}",
        .max_depth = 1,
        .logger = logger,
        .allocator = allocator,
        .max_iterations = 10,
    };

    try rlm.init();
    defer rlm.deinit();

    // Make a completion request
    const prompt = "Print me the first 100 powers of two, each on a newline.";
    const p = try allocator.dupe(u8, prompt);
    defer allocator.free(p);
    
    const result = try rlm.completion(p, null);
    defer allocator.free(result.response);
    
    std.debug.print("Response: {s}\n", .{result.response});
    std.debug.print("Execution Time: {d}ms\n", .{result.execution_time});
}

💡 Usage Examples

OpenClaw-style agent in Zig

A dedicated OpenClaw-style entry point is available at src/example/openclaw.zig. It wires Omni-RLM with an autonomous system prompt and can be run as:

export OPENAI_API_KEY="sk-..."
# Optional overrides:
# export OPENAI_BASE_URL="https://api.openai.com/v1/chat/completions"
# export OPENAI_MODEL="gpt-4o-mini"
zig build openclaw -- "Implement a Fibonacci CLI and test it"

This gives you an agentic coding loop (plan → execute → reflect → final answer) while reusing Omni-RLM's recursion, logging, and environment tooling.
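
That loop shape can be sketched in a few lines of Python. Everything here is a stand-in (the FINAL: marker, the scripted model, the termination rule are invented for illustration); the real prompts and stop conditions live in src/example/openclaw.zig.

```python
def agent_loop(task: str, model, max_iterations: int = 10):
    """Drive an iterative loop until the model emits a FINAL: answer."""
    transcript = [f"TASK: {task}"]
    for _ in range(max_iterations):
        response = model("\n".join(transcript))  # one model call per iteration
        transcript.append(response)
        if response.startswith("FINAL:"):
            return response[len("FINAL:"):].strip()  # extracted final answer
    return None  # iteration budget exhausted without a final answer

# Scripted stand-in model that plans, executes, reflects, then answers.
script = iter(["PLAN: write fib()", "EXECUTE: ran tests", "REFLECT: all pass", "FINAL: done"])
print(agent_loop("Fibonacci CLI", lambda _prompt: next(script)))  # done
```

The bounded iteration count mirrors Omni-RLM's max_iterations field from the Quick Start configuration.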

Configuring Different Backends

OpenAI GPT-4
var rlm: RLM = .{
    .backend = "openai",
    .backend_kwargs = .{
        .base_url = "https://api.openai.com/v1/chat/completions",
        .api_key = "sk-...",
        .model_name = "gpt-4",
    },
    .max_depth = 3,
    .max_iterations = 50,
    .allocator = allocator,
};
Qwen (Alibaba Cloud)
var rlm: RLM = .{
    .backend = "openai",
    .backend_kwargs = .{
        .base_url = "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions",
        .api_key = "sk-...",
        .model_name = "qwen-plus",
    },
    .max_depth = 2,
    .allocator = allocator,
};
Custom Backend
var rlm: RLM = .{
    .backend = "openai",
    .backend_kwargs = .{
        .base_url = "https://your-custom-api.com/v1/chat/completions",
        .api_key = "sk-...",
        .model_name = "your-model",
    },
    .custom_system_prompt = "You are a specialized coding assistant...",
    .allocator = allocator,
};

Working with Logs

The logger creates structured JSON logs that include:

{
  "prompt": [{"role": "system", "content": "system message"}, {"role": "user", "content": "your prompt"}],
  "response": "Model response",
  "code_blocks": [
    {
      "code": "extracted code block, if any",
      "result": {
        "stdout": "output from code execution",
        "stderr": "error output, if any",
        "term": "exit status"
      }
    }
  ],
  "final_answer": "extracted final answer, if any",
  "execution_time": 1234
}

execution_time is reported in milliseconds, and prompt holds the full message list sent to the model (your prompt combined with the system message).
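
Because each iteration log is plain JSON, it can be inspected with standard tooling. As a sketch, here is how one log entry following the schema above could be summarized in Python (the helper name and output format are invented for this example):

```python
import json

def summarize(log_text: str) -> str:
    """Summarize one iteration log: final answer, code-block count, timing."""
    entry = json.loads(log_text)
    n_blocks = len(entry.get("code_blocks", []))
    return (f"answer={entry.get('final_answer')!r} "
            f"blocks={n_blocks} time={entry['execution_time']}ms")

# A sample entry shaped like the schema documented above.
sample = json.dumps({
    "prompt": [{"role": "system", "content": "..."}],
    "response": "Model response",
    "code_blocks": [{"code": "print(1)",
                     "result": {"stdout": "1\n", "stderr": "", "term": "0"}}],
    "final_answer": "1",
    "execution_time": 1234,
})
print(summarize(sample))  # answer='1' blocks=1 time=1234ms
```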

πŸ“ Project Structure

Omni-RLM/
├── src/
│   ├── omni-rlm.zig          # Public exports (RLM, RLMLogger, config_env)
│   ├── core/
│   │   ├── config_env.zig    # .env backend config loader
│   │   ├── rlm.zig           # Core RLM orchestrator
│   │   ├── rlm_logger.zig    # Structured logging system
│   │   ├── types.zig         # Type definitions and structs
│   │   ├── prompt.zig        # Prompt construction utilities
│   │   ├── parsing.zig       # Response parsing (code blocks)
│   │   ├── Model_info.zig    # Model configuration and metadata
│   │   └── environment/
│   │       ├── type.zig      # EnvHandler and env types
│   │       ├── local/        # Local Python environment
│   │       │   └── local.zig # Local runner implementation
│   │       └── daytona/      # Daytona environment
│   │           ├── daytona.zig       # Daytona runner
│   │           └── daytona_script.py # Daytona helper script
│   └── example/
│       ├── quickstart.zig    # Example usage (for debugging and testing)
│       ├── run.zig           # Example runner
│       └── openclaw.zig      # OpenClaw-style autonomous agent
├── API_referance.md     # API reference documentation
├── build.zig
├── build.zig.zon
├── LICENSE
└── README.md            # This file

Key Files

  • src/omni-rlm.zig: Package interface and config_env export
  • src/core/rlm.zig: Main entry point with RLM struct and completion logic
  • src/core/rlm_logger.zig: Handles all logging operations with JSON output
  • src/core/types.zig: Shared type definitions (metadata, message, code blocks)
  • src/core/prompt.zig: System prompt building from query metadata
  • src/core/parsing.zig: Utilities to extract structured data from responses
  • src/core/Model_info.zig: Model configurations and capabilities
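
The extraction step that src/core/parsing.zig performs (pulling fenced code blocks out of a model response) can be approximated with a short Python regex sketch. This illustrates the idea only; it is not the actual Zig implementation.

```python
import re

# Match triple-backtick fences with an optional language tag.
FENCE = re.compile(r"`{3}(?:\w+)?\n(.*?)`{3}", re.DOTALL)

def extract_code_blocks(response: str) -> list[str]:
    """Return the body of every fenced code block in a model response."""
    return [m.strip() for m in FENCE.findall(response)]

tick = "`" * 3  # avoid writing a literal fence inside this example
reply = f"Sure:\n{tick}python\nprint('hi')\n{tick}\nDone."
print(extract_code_blocks(reply))  # ["print('hi')"]
```

Each extracted block would then be handed to the configured environment (local or Daytona) for execution, with stdout/stderr captured into the iteration log.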

🧪 Tests

Run tests:

zig build test

Run the quickstart example:

zig build quickstart

📖 Documentation

See API_referance.md in the repository root for the full API reference.

📖 Roadmap

The team has proposed an initial community roadmap, and wider community input is welcome.