Commit 372b409

Merge pull request #3 from itzlambda/feat/oauth-support
feat(cli): refactor authentication to support OAuth alongside API keys
2 parents ddb692f + 7c310d1 commit 372b409

43 files changed

Lines changed: 4235 additions & 1787 deletions

.gitignore

Lines changed: 1 addition & 0 deletions

```diff
@@ -30,3 +30,4 @@ TODO.md
 .cursor
 .taskmaster/
 .cursorignore
+docs/
```

.justfile

Lines changed: 1 addition & 4 deletions

```diff
@@ -4,7 +4,4 @@ lint:
 
 
 fmt:
-    cargo fmt --check
-
-clippy:
-    cargo clippy
+    cargo fmt
```

AGENTS.md

Lines changed: 55 additions & 0 deletions

New file:

# rullm

## Build Commands

```bash
# Build all crates
cargo build --all

# Lint (format + clippy)
just lint

# Format only
just fmt

# Check examples compile
cargo check --examples

# Run tests
cargo test
```

## Project Structure

This is a Rust workspace with two crates:

- **rullm-core** (`crates/rullm-core/`) - Core library for LLM provider interactions
- **rullm-cli** (`crates/rullm-cli/`) - CLI binary for querying LLMs

### Core Library Architecture

The core library uses a trait-based provider system with two API levels:

1. **Simple API** - String-based, minimal configuration
2. **Advanced API** - Full control with `ChatRequestBuilder`

Key modules:
- `providers/` - Provider implementations (OpenAI, Anthropic, Google, OpenAI-compatible)
- `compat_types.rs` - OpenAI-compatible message/response types used across providers
- `config.rs` - Provider configuration traits and builders
- `error.rs` - `LlmError` enum with comprehensive error variants
- `utils/sse.rs` - Server-sent event parsing for streaming

### CLI Architecture

The CLI is organized by commands in `commands/`:
- `auth.rs` - OAuth and API key management
- `chat.rs` - Interactive chat mode with reedline
- `models.rs` - Model listing and updates
- `alias.rs` - User-defined model aliases
- `templates.rs` - TOML template management

OAuth implementation in `oauth/`:
- `openai.rs`, `anthropic.rs` - Provider-specific OAuth flows
- `server.rs` - Local callback server for OAuth redirects
- `pkce.rs` - PKCE challenge generation
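The two API levels described above can be contrasted with a small builder sketch. Everything here is illustrative: the field and method names are assumptions for the example, not rullm's actual `ChatRequestBuilder` API.

```rust
// Illustrative sketch of the "Advanced API" builder style described above.
// Types, fields, and methods are hypothetical, not rullm's actual API.
#[derive(Debug, Default)]
struct ChatRequest {
    model: String,
    system: Option<String>,
    temperature: f32,
}

#[derive(Default)]
struct ChatRequestBuilder {
    request: ChatRequest,
}

impl ChatRequestBuilder {
    fn model(mut self, m: &str) -> Self {
        self.request.model = m.to_string();
        self
    }
    fn system(mut self, s: &str) -> Self {
        self.request.system = Some(s.to_string());
        self
    }
    fn temperature(mut self, t: f32) -> Self {
        self.request.temperature = t;
        self
    }
    fn build(self) -> ChatRequest {
        self.request
    }
}

fn main() {
    // Full control: each knob is set explicitly, unlike the string-based
    // simple API, which would take just a model name and a prompt.
    let req = ChatRequestBuilder::default()
        .model("openai:gpt-4")
        .system("You are terse.")
        .temperature(0.2)
        .build();
    assert_eq!(req.model, "openai:gpt-4");
    println!("{req:?}");
}
```

The simple API would wrap exactly this kind of builder with defaults, which is why both levels can share one provider layer underneath.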

CLAUDE.md

Lines changed: 1 addition & 165 deletions

The file's previous contents, removed in this commit:

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

`rullm` is a Rust library and CLI for interacting with multiple LLM providers (OpenAI, Anthropic, Google AI, Groq, OpenRouter). The project uses a workspace structure with two main crates:

- **rullm-core**: Core library implementing provider integrations, middleware (Tower-based), and streaming support
- **rullm-cli**: Command-line interface built on top of rullm-core

## Architecture

### Provider System

All LLM providers implement two core traits defined in `crates/rullm-core/src/types.rs`:

- `LlmProvider`: Base trait with provider metadata (name, aliases, env_key, default_base_url, available_models, health_check)
- `ChatCompletion`: Extends `LlmProvider` with chat completion methods (blocking and streaming)

Provider implementations are in `crates/rullm-core/src/providers/`:

- `openai.rs`: OpenAI GPT models
- `anthropic.rs`: Anthropic Claude models
- `google.rs`: Google Gemini models
- `openai_compatible.rs`: Generic provider for OpenAI-compatible APIs
- `groq.rs`: Groq provider (uses `openai_compatible`)
- `openrouter.rs`: OpenRouter provider (uses `openai_compatible`)

The `openai_compatible` provider is a generic implementation that other providers like Groq and OpenRouter extend. It uses a `ProviderIdentity` struct to define provider-specific metadata.

### Middleware Stack

The library uses Tower middleware (see `crates/rullm-core/src/middleware.rs`):

- Rate limiting
- Timeouts
- Connection pooling
- Logging and metrics

Configuration is done via `MiddlewareConfig` and `LlmServiceBuilder`.

### Simple API

`crates/rullm-core/src/simple.rs` provides a simplified string-based API (`SimpleLlmClient`, `SimpleLlmBuilder`) that wraps the advanced provider APIs for ease of use.

### CLI Architecture

The CLI entry point is `crates/rullm-cli/src/main.rs`, which:

1. Parses arguments using clap (see `args.rs`)
2. Loads configuration from `~/.config/rullm/` (see `config.rs`)
3. Dispatches to commands in `crates/rullm-cli/src/commands/`

Key CLI modules:

- `client.rs`: Creates provider clients from model strings (format: `provider:model`)
- `provider.rs`: Resolves provider names and aliases
- `config.rs`: Manages CLI configuration (models list, aliases, default model)
- `api_keys.rs`: Manages API key storage in the system keychain
- `templates.rs`: TOML-based prompt templates with `{{input}}` placeholders
- `commands/chat/`: Interactive chat mode using reedline for advanced REPL features
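The `{{input}}` placeholder substitution that `templates.rs` performs can be sketched in a few lines. This is a minimal illustration assuming plain string replacement; `render_template` is a hypothetical helper, not the crate's actual function.

```rust
/// Minimal sketch of `{{input}}` placeholder substitution as described
/// for `templates.rs`. Hypothetical helper, not the crate's actual API.
fn render_template(template: &str, input: &str) -> String {
    template.replace("{{input}}", input)
}

fn main() {
    let template = "Summarize the following text:\n{{input}}";
    let rendered = render_template(template, "Rust is a systems language.");
    assert_eq!(
        rendered,
        "Summarize the following text:\nRust is a systems language."
    );
    println!("{rendered}");
}
```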
### Model Format

Models are specified using the format `provider:model`:

- Examples: `openai:gpt-4`, `anthropic:claude-3-opus-20240229`, `groq:llama-3-8b`
- The CLI resolves this via `client::from_model()`, which creates the appropriate provider client
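Splitting a `provider:model` string can be sketched as below. `parse_model` is a hypothetical helper for illustration; it is not the signature of the crate's `client::from_model()`, which returns a provider client rather than string parts.

```rust
// Minimal sketch of splitting a `provider:model` spec on the first `:`.
// Hypothetical helper, not the crate's actual `client::from_model()`.
fn parse_model(spec: &str) -> Option<(&str, &str)> {
    let (provider, model) = spec.split_once(':')?;
    if provider.is_empty() || model.is_empty() {
        return None;
    }
    Some((provider, model))
}

fn main() {
    assert_eq!(parse_model("openai:gpt-4"), Some(("openai", "gpt-4")));
    assert_eq!(parse_model("groq:llama-3-8b"), Some(("groq", "llama-3-8b")));
    assert_eq!(parse_model("gpt-4"), None); // missing provider prefix
}
```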
## Common Development Tasks

### Building and Running

```bash
# Build everything
cargo build --all

# Build release binary
cargo build --release

# Run the CLI (from workspace root)
cargo run -p rullm-cli -- "your query"

# Or after building
./target/debug/rullm "your query"
./target/release/rullm "your query"
```

### Testing

```bash
# Run all tests (note: some require API keys)
cargo test --all

# Run tests for a specific crate
cargo test -p rullm-core
cargo test -p rullm-cli

# Run a specific test
cargo test test_name

# Check examples compile
cargo check --examples
```

### Code Quality

```bash
# Format code
cargo fmt

# Check formatting
cargo fmt -- --check

# Run clippy (linter)
cargo clippy --all-targets --all-features -- -D warnings

# Fix clippy suggestions automatically
cargo clippy --fix --all-targets --all-features
```

### Running Examples

```bash
# Run examples from rullm-core (requires API keys)
cargo run --example openai_simple
cargo run --example anthropic_simple
cargo run --example google_simple
cargo run --example openai_stream      # Streaming example
cargo run --example test_all_providers # Test all providers at once
```

### Adding a New Provider

When adding a new provider:

1. **OpenAI-compatible providers**: Use `OpenAICompatibleProvider` with a `ProviderIdentity` in `providers/openai_compatible.rs`. See `groq.rs` or `openrouter.rs` for examples.

2. **Non-compatible providers**: Create a new file in `crates/rullm-core/src/providers/`:
   - Implement the `LlmProvider` and `ChatCompletion` traits
   - Add a provider config struct in `crates/rullm-core/src/config.rs`
   - Export it from `providers/mod.rs` and `lib.rs`
   - Add client creation logic in `crates/rullm-cli/src/client.rs`
   - Update `crates/rullm-cli/src/provider.rs` for CLI support

3. Update `DEFAULT_MODELS` in `crates/rullm-core/src/simple.rs` if adding default model mappings.

### Streaming Implementation

All providers should implement `chat_completion_stream()`, returning `StreamResult<ChatStreamEvent>`. The stream emits:

- `ChatStreamEvent::Token(String)`: each token/chunk
- `ChatStreamEvent::Done`: completion marker
- `ChatStreamEvent::Error(String)`: errors during streaming

See provider implementations for SSE parsing patterns using `utils::sse::sse_lines()`.
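The mapping from SSE wire lines to the stream events listed above can be sketched as follows. This is a simplified illustration, not the crate's `sse_lines()`: real `data:` payloads are JSON chunks that would be deserialized before a token is extracted, and the `[DONE]` sentinel is an OpenAI-style convention.

```rust
// Simplified sketch: map raw SSE lines to the ChatStreamEvent variants
// described above. Illustrative only, not the crate's `sse_lines()`.
#[derive(Debug, PartialEq)]
enum ChatStreamEvent {
    Token(String),
    Done,
    Error(String),
}

fn parse_sse_line(line: &str) -> Option<ChatStreamEvent> {
    // SSE payload lines look like `data: <payload>`; comment lines
    // (starting with `:`) and other fields are ignored here.
    let payload = line.strip_prefix("data:")?.trim();
    match payload {
        "[DONE]" => Some(ChatStreamEvent::Done), // OpenAI-style end marker
        "" => None,
        // A real provider would parse `payload` as JSON and pull out the
        // delta text; we pass it through verbatim for illustration.
        token => Some(ChatStreamEvent::Token(token.to_string())),
    }
}

fn main() {
    assert_eq!(parse_sse_line(": keep-alive"), None);
    assert_eq!(
        parse_sse_line("data: hello"),
        Some(ChatStreamEvent::Token("hello".into()))
    );
    assert_eq!(parse_sse_line("data: [DONE]"), Some(ChatStreamEvent::Done));
}
```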
## Configuration Files

- **User config**: `~/.config/rullm/config.toml` (or system equivalent)
  - Stores: default model, model aliases, cached models list
- **Templates**: `~/.config/rullm/templates/*.toml`
- **API keys**: stored in the system keychain via `api_keys.rs`

## Important Notes

- The project uses Rust edition 2024 (rust-version 1.85+)
- The model separator changed from `/` to `:` (e.g., `openai:gpt-4`, not `openai/gpt-4`)
- Chat history is persisted in `~/.config/rullm/chat_history/`
- The CLI uses `reedline` for advanced REPL features (syntax highlighting, history, multiline editing)
- In chat mode: Alt+Enter for multiline, Ctrl+O for buffer editing, `/edit` to open `$EDITOR`

The removed content was replaced with a single added line pointing at the new doc:

```
@AGENTS.md
```
