Tokensense-Ai is a privacy-first, professional-grade cost intelligence platform designed for AI developers, prompt engineers, and architects. It provides a "pre-flight" environment to calculate, simulate, and optimize API costs across 50+ major models (GPT-4o, Claude 3.5 Sonnet, Gemini 1.5 Pro, Llama 3) before you make a single API call.
Tokensense-Ai isn't just a simple counter; it's a comprehensive suite of optimization tools:
- Function: Real-time token counting using each provider's exact tokenizer via tiktoken (WASM).
- Key Feature: Side-by-side editing of System and User prompts with instant cost projection.
- Optimizers: Integrated "Token Diet" tools to strip whitespace and compress prompts without losing context.
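The cost projection behind the editor is simple arithmetic once token counts are known. A minimal sketch, using illustrative per-1M-token prices (real prices change often — check each provider's pricing page):

```typescript
// Illustrative per-1M-token prices in USD; NOT live pricing data.
const PRICES: Record<string, { input: number; output: number }> = {
  "gpt-4o":            { input: 2.50, output: 10.00 },
  "claude-3.5-sonnet": { input: 3.00, output: 15.00 },
};

// Cost of one call: (tokens / 1M) * price-per-1M, summed over input and output.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  price: { input: number; output: number }
): number {
  return (inputTokens / 1_000_000) * price.input
       + (outputTokens / 1_000_000) * price.output;
}

// Example: a 1,200-token prompt expected to produce ~400 output tokens.
console.log(estimateCost(1200, 400, PRICES["gpt-4o"]));
```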
- Function: Calculates tokens for image processing based on resolution and detail settings (High/Low).
- Functionality: Supports OpenAI's tile-based pricing and Anthropic's vision padding logic.
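The tile-based math follows the scheme OpenAI documents for GPT-4o-class vision models; the constants below (85 base tokens, 170 per 512px tile) are per-model values that may differ for other models, so treat them as illustrative:

```typescript
// Estimate vision tokens for an image, tile-based (OpenAI-style).
function visionTokens(width: number, height: number, detail: "low" | "high"): number {
  if (detail === "low") return 85; // flat cost regardless of resolution

  // High detail: scale to fit within 2048x2048, then shortest side to 768px.
  let scale = Math.min(1, 2048 / Math.max(width, height));
  let w = width * scale;
  let h = height * scale;
  scale = Math.min(1, 768 / Math.min(w, h));
  w *= scale;
  h *= scale;

  // Count 512px tiles: 170 tokens each, plus an 85-token base.
  const tiles = Math.ceil(w / 512) * Math.ceil(h / 512);
  return 170 * tiles + 85;
}

console.log(visionTokens(1024, 1024, "high")); // → 765 (4 tiles)
```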
- Function: Solves the "Break-Even" math for cached context (Anthropic & Gemini).
- Strategic Insight: Determine exactly how many turns or documents you need to process before the caching premium pays for itself in read-discounts.
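The break-even math reduces to one ratio: the write premium divided by the per-read savings. The multipliers below are illustrative (Anthropic documents roughly 1.25x base input price for cache writes and 0.1x for cache reads):

```typescript
// Number of cached reads needed before the cache-write premium pays for itself.
// writeMultiplier and readMultiplier are relative to the base input price.
function breakEvenReads(writeMultiplier: number, readMultiplier: number): number {
  // Extra cost at write time: (writeMultiplier - 1) * base.
  // Savings per cached read:  (1 - readMultiplier) * base.
  return Math.ceil((writeMultiplier - 1) / (1 - readMultiplier));
}

console.log(breakEvenReads(1.25, 0.1)); // → 1: pays off after a single reuse
```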
- Function: Forecasts the compounded cost of autonomous agent workflows.
- Functionality: Simulates recursive context growth, planning turns, and "worst-case" billing scenarios for loops that retry or branch.
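The compounding effect comes from each turn re-sending the full conversation history as input. A sketch of the worst-case simulation, with all figures as illustrative assumptions rather than measured values:

```typescript
// Worst-case cost of an agent loop where every turn re-sends the full history,
// so the input context grows by the previous output plus tool results.
function agentLoopCost(opts: {
  turns: number;
  systemTokens: number;       // fixed preamble re-sent every turn
  outputPerTurn: number;      // model output appended to history
  toolResultPerTurn: number;  // tool output appended to history
  inputPricePer1M: number;
  outputPricePer1M: number;
}): number {
  let context = opts.systemTokens;
  let cost = 0;
  for (let t = 0; t < opts.turns; t++) {
    cost += (context / 1e6) * opts.inputPricePer1M;            // pay for the whole history
    cost += (opts.outputPerTurn / 1e6) * opts.outputPricePer1M;
    context += opts.outputPerTurn + opts.toolResultPerTurn;    // history compounds
  }
  return cost;
}
```

Because input grows linearly per turn, total input cost grows quadratically with turn count — the core reason long agent loops get expensive.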
- Function: Compares standard API pricing with asynchronous Batch API discounts.
- Functionality: Plan large-scale data processing jobs with projected savings of 50%+.
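The savings projection is a flat-discount calculation (OpenAI's Batch API advertises 50% off both input and output; the rate is kept configurable here since other providers differ):

```typescript
// Project batch-job cost at a flat discount off the standard API price.
function batchSavings(standardCost: number, discount = 0.5): { batchCost: number; saved: number } {
  const batchCost = standardCost * (1 - discount);
  return { batchCost, saved: standardCost - batchCost };
}

// Example: a $1,200 standard-priced classification job.
console.log(batchSavings(1200)); // → { batchCost: 600, saved: 600 }
```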
- Function: Side-by-side cost analysis across 50+ LLMs.
- Key Metric: Real-time calculation of Input vs. Output token ratios and cost-per-1M-tokens.
- Function: Visual timeline of LLM price drops and provider competition.
- Strategic Insight: Track the "Commoditization of Reasoning" as frontier models become progressively cheaper.
- Function: Interactive educational modules (Lessons 1-4) covering the math behind tokenization, multilingual penalties, and agentic architecture.
In the era of sensitive data, Tokensense-Ai operates on a Zero-Server architecture:
- Client-Side Processing: All tokenization and logic happen 100% in your browser.
- No Data Retention: Your prompts, files, and optional API keys are never uploaded or stored.
- WASM Performance: High-speed computation using WebAssembly for local efficiency.
- Framework: Next.js 15 (App Router, RSC)
- UI: Tailwind CSS v4, Lucide Icons, Framer Motion
- State: Zustand (Persistence-enabled)
- Tokenization: Tiktoken (WASM), Google AI Edge
- Deployment: Netlify / Vercel
- Localization: next-intl (Root-level flattened architecture)
Tokensense-Ai is ready for local development or self-hosting:
- Clone & Install:

  ```bash
  git clone https://github.com/artosien/Tokensense-Ai.git
  cd Tokensense-Ai
  npm install
  ```

- Run Development Server:

  ```bash
  npm run dev
  ```

- Build for Production:

  ```bash
  npm run build
  ```
We believe in open-source transparency. If you find a pricing discrepancy or want to add a new model:
- Open a PR following our CONTRIBUTING.md.
- Check the CHANGELOG.md for recent architectural updates.
License: Apache 2.0 (Open Core) — Built by Angelo S. Enriquez.
Architecting the Future with Agentic Intelligence.


