🧠 MindOps: Cognitive Load Balancer

MindOps Dashboard

MindOps Demo: watch the demonstration video on Loom.

MindOps is an event-driven, multi-agent cognitive processor. It's designed to take unstructured mental noise (rants, anxieties, chaotic thoughts) and compile it into deterministic, prioritized action plans. Think of it as a load balancer for your brain's working memory.


🎯 High-Value Use Cases

Building a task manager is easy. Organizing chaos is hard. Here is who MindOps is built for:

  • Founders & Indie Hackers: You have 50 things to do across 4 projects and are paralyzed by context-switching. You vent to the bot: "I need to fix the auth bug, but I'm worried about the AWS bill, and I haven't replied to Sarah." MindOps parses this, prioritizes the bug, schedules the bill review, and drafts a reminder for Sarah.
  • Engineers in "The Zone": When you get blocked by side-quests or lose track of architecture thoughts. Brain-dump into Telegram, and let the asynchronous engine structure your next steps without you ever leaving your IDE or losing momentum.
  • ADHD & Neurodivergent Execution: Acts as an external executive function system. It turns emotional, overwhelming states into calm, objective, PENDING database states. If you freeze and ignore the tasks, the Twilio Safety Net handles the edge case: it physically calls your phone to break the paralysis loop.

🚀 The User Loop (How It Works)

  1. Input (The Webhook): You send an audio or text rant to the MindOps Telegram Bot.
  2. Processing (The Engine): An asynchronous n8n pipeline processes the payload. It uses Semantic RAG to find historical context and an LLM to distill the rant into atomic actions.
  3. Execution (The Dashboard): You log into the Next.js portal. Your chaos is now a neat, prioritized Board. You approve, modify, or complete tasks.
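The three-step loop above can be sketched as typed stages. Everything below is illustrative: in the real system the distillation runs inside n8n with RAG and Gemini, so a trivial text-splitting heuristic stands in for it here.

```typescript
// Illustrative sketch of the user loop. All names are hypothetical;
// the actual pipeline lives in n8n, not application code.
interface Rant { userId: string; text: string }          // 1. Input (webhook payload)
interface Task { title: string; status: "PENDING" }      // 2. Processing output

// Stage 2: distill a rant into atomic, pending tasks.
// A naive clause splitter stands in for the RAG + LLM step.
function distill(rant: Rant): Task[] {
  return rant.text
    .split(/,|\band\b|\bbut\b/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .map((title) => ({ title, status: "PENDING" as const }));
}

// Stage 3: the dashboard renders the tasks for human review.
const board = distill({ userId: "u1", text: "fix the auth bug and review the AWS bill" });
```

The key property the sketch preserves is that tasks enter the board as PENDING and wait for human approval.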

⚙️ System Architecture & Engineering

As an engineer, wrapping an LLM API call is trivial. Building a deterministic, resilient, and blazing-fast agentic system is complex. Here is how MindOps solves the hard parts:

```mermaid
graph TD
    User((User)) -->|Telegram/Web| Frontend[Next.js App Router]
    Frontend -->|Auth/State| Supabase[(Supabase & pgvector)]
    User -->|Thoughts/Vents| TelegramBot[Telegram API]
    TelegramBot -->|Webhooks| Orchestrator{n8n Orchestrator}
    Orchestrator -->|Process/Categorize| Supabase
    Orchestrator -->|Redundancy| Twilio[Twilio Voice API]
```

1. Eliminating N+1 Latency via Context Injection Gateway

Instead of the orchestration engine making multiple sequential database calls to fetch user profiles, language preferences, and prompt templates, MindOps uses a Context Injection Gateway. A single, ultra-fast Supabase RPC (get_bot_context) hydrates the entire state into memory in one go. Downstream agents read directly from this $json.context payload, slashing execution latency.
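A minimal sketch of the single-call hydration pattern. The `get_bot_context` RPC name comes from the text above; the payload fields and the stub fetcher are illustrative assumptions, with a round-trip counter standing in for network calls:

```typescript
// Sketch: one round trip hydrates the whole state, so downstream steps
// never re-query the database (avoiding the N+1 pattern).
interface BotContext {
  profile: { userId: string; timezone: string };
  language: string;
  promptTemplate: string;
}

// Stub for the `get_bot_context` Supabase RPC; field values are toy data.
let roundTrips = 0;
function fetchBotContext(userId: string): BotContext {
  roundTrips += 1; // each call models one network round trip
  return {
    profile: { userId, timezone: "UTC" },
    language: "en",
    promptTemplate: "Distill the rant into atomic actions.",
  };
}

// Downstream agents read from the hydrated payload instead of re-querying.
function buildPrompt(ctx: BotContext, rant: string): string {
  return `[${ctx.language}] ${ctx.promptTemplate}\n${rant}`;
}

const ctx = fetchBotContext("user-42");
const prompt = buildPrompt(ctx, "I need to fix the auth bug...");
```

With sequential per-field lookups the same flow would cost one round trip per agent; here every agent reads from the single hydrated payload.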

2. Semantic RAG & Deterministic State Machine

LLMs hallucinate; state machines don't.

  • RAG: MindOps uses pgvector to do k-NN cosine similarity searches in a 768-dimensional latent space. The AI engine understands how you usually behave and responds with personalized context.
  • State Machine: The AI doesn't execute blind system changes. It proposes "Atomic Actions" bounded by strict database constraints (Enums like PENDING, ACTIVE). Human-in-the-loop (HITL) is mandatory for final review.
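The k-NN cosine search itself runs server-side in pgvector; the plain-TypeScript sketch below reproduces the math on toy vectors to make "closest embeddings" concrete (function names and the tiny corpus are illustrative):

```typescript
// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Brute-force k-NN: score every document, keep the k highest.
// pgvector does this with an index instead of a full scan.
function topK(query: number[], corpus: { id: string; vec: number[] }[], k: number) {
  return corpus
    .map((doc) => ({ id: doc.id, score: cosineSimilarity(query, doc.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

In production the vectors are 768-dimensional embeddings of past user context rather than these toy 2-D examples.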

3. Native Server-Side i18n (Zero Flicker)

Bilingual apps often suffer from hydration flicker. MindOps intercepts requests at the Next.js Edge using middleware.ts. It resolves locale via a strict hierarchy (URL > DB Profile > Telegram Session > Browser Header) and passes an x-next-intl-locale header to React Server Components. The result is zero flash of the wrong language on first load.
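A pure-function sketch of that resolution hierarchy, assuming a two-locale app. In the real project this logic lives in middleware.ts at the Edge; the function shape and field names here are illustrative:

```typescript
// Locale resolution: URL > DB profile > Telegram session > browser header.
type Locale = "en" | "es";

interface LocaleSources {
  urlLocale?: Locale;
  dbProfileLocale?: Locale;
  telegramLocale?: Locale;
  acceptLanguageHeader?: string; // e.g. "es-MX,es;q=0.9,en;q=0.8"
}

function resolveLocale(src: LocaleSources, fallback: Locale = "en"): Locale {
  if (src.urlLocale) return src.urlLocale;
  if (src.dbProfileLocale) return src.dbProfileLocale;
  if (src.telegramLocale) return src.telegramLocale;
  // Crude Accept-Language parse: take the primary subtag of the first entry.
  const first = src.acceptLanguageHeader?.split(",")[0]?.slice(0, 2);
  if (first === "en" || first === "es") return first;
  return fallback;
}
// The resolved value would then be forwarded to React Server Components
// as the x-next-intl-locale request header.
```

Because the server resolves the locale before any HTML is streamed, the client never renders the wrong language and re-hydrates.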

4. Infrastructure Architecture

The backend runs on a hyper-lean infrastructure designed to eliminate cold starts entirely while remaining extremely memory-efficient. (For a detailed breakdown of the hosting decisions, the execution model, and why we chose a monolithic approach over queues/workers, see our Infrastructure Logs.)


⚠️ High Priority Roadmap: UX & AI Clarity

1. Transparent Model Execution

Currently, the documentation refers to the reasoning engine generically as an "LLM." To provide technical clarity for developers:

  • Core Model: Google Gemini 2.0 Flash (orchestrated via n8n).
  • Base Prompting: The orchestrator injects strict JSON schema rules, limiting the model's output to format-compliant deterministic actions.
  • RAG Context Injection: Unstructured data is vectorized via pgvector. During execution, the pipeline retrieves the closest embeddings (historical user context) and passes them into the prompt's context window.
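A sketch of the format guard this implies: model output that does not match the schema is rejected before it can touch state. The field names (`title`, `status`, `priority`) are hypothetical, not the project's actual schema:

```typescript
// Strict output contract: only schema-compliant actions survive parsing.
interface AtomicAction {
  title: string;
  status: "PENDING" | "ACTIVE";
  priority: 1 | 2 | 3;
}

// Returns the parsed action, or null if the model's output is non-compliant.
function parseAtomicAction(raw: string): AtomicAction | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // malformed JSON is dropped, never executed
  }
  const a = data as Partial<AtomicAction>;
  const ok =
    typeof a.title === "string" &&
    (a.status === "PENDING" || a.status === "ACTIVE") &&
    (a.priority === 1 || a.priority === 2 || a.priority === 3);
  return ok ? (a as AtomicAction) : null;
}
```

This is the application-side half of the contract; the database enums described earlier enforce the same constraints a second time at the storage layer.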

2. Honest AI Pipeline Visualization

Note: The current Mermaid diagram will soon be replaced with a high-fidelity visual architecture image to improve rendering and cognitive load.

The literal execution pipeline follows this strict flow (understandable in 10 seconds): Telegram → n8n (Orchestrator) → RAG (Supabase Vector) → Gemini 2.0 Flash → Supabase (State Update) → Next.js Dashboard


💻 Local Setup (For Developers)

Want to spin up your own cognitive engine?

Prerequisites

  • Node.js (v18+)
  • A Supabase project
  • A Telegram Bot Token

Run Locally

```bash
git clone https://github.com/aleocampodev/mindops.git
cd mindops-web

# Install dependencies (React 19 requires --legacy-peer-deps for some libs)
npm install --legacy-peer-deps

# Configure environment variables
cp .env.example .env.local
# Edit .env.local to add your NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_ANON_KEY

# Boot the app
npm run dev
```

Visit http://localhost:3000 to see the app running.


Designed for efficiency. Built for the mind. ⚡
