Ethical AI for Neurodivergent Web Accessibility
CogniFlow is a socially responsible AI assistant designed to bridge the gap between complex web content and neurodivergent cognition. Built specifically for the challenges of ADHD and Dyslexia, it replaces visual chaos with a focused, automated, and ethics-driven reading environment.
The core of the project is the "Visual Noise Problem"—the overwhelming density of modern web pages (ads, pop-ups, endless text) that causes cognitive overload for neurodivergent minds.
Our solution uses a Low-Friction Sensory Interface and an Autonomous AI Synthesis Engine to ensure information is processed, simplified, and delivered to the user in a format that respects their cognitive needs.
- Cognitive Equity: An algorithm that prioritizes clarity over engagement, actively stripping away attention-harvesting patterns.
- Sensory Safety: A "Digital Calm" check that allows users to instantly dampen visual stimuli (colors, fonts, animations) that trigger sensory overload.
- Transparent AI: A non-hallucinatory summarization system that always cites its sources directly on the page (Live Page Spotlight).
- Instant Summaries: Generates a concise TL;DR of any webpage using Google Gemini 1.5 Flash.
- Contextual Explanations: Select any text to get a simple, jargon-free definition or explanation instantly.
- Live Page Spotlight: Highlights key phrases from the summary directly on the webpage, bridging the abstract summary with the concrete content.
- Bionic Reading (Focus Mode): Enhances text by bolding the first few letters of every word, guiding the eye's fixation points to increase reading speed and retention.
- Smart Semantic Color: Automatically identifies and highlights Important Words (Concepts, Names, Places) in Warm Rose, helping users scan for meaning rather than just reading linearly.
- Reading Ruler: A digital guide that follows the mouse, dimming the rest of the page to isolate the current line of text.
- Professional Dyslexia Support: Switches page fonts to Verdana or OpenDyslexic with optimized line spacing (1.7) and letter spacing (0.04em) to prevent text from "swimming".
- Adaptive Themes: High Contrast Dark Mode and Sepia Mode to reduce eye strain and blue light exposure.
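The Bionic Reading transform above can be sketched as a pure text function. The 40% bold ratio and the function names here are illustrative assumptions, not the exact algorithm in `textProcessing.ts`:

```typescript
// Bold the first ~40% of each word's letters (illustrative ratio) by wrapping
// them in <b> tags; non-alphabetic chunks pass through unchanged.
export function bionicWord(word: string): string {
  if (!/[A-Za-z]/.test(word)) return word; // leave numbers/punctuation alone
  const boldCount = Math.max(1, Math.ceil(word.length * 0.4));
  return `<b>${word.slice(0, boldCount)}</b>${word.slice(boldCount)}`;
}

export function bionicText(text: string): string {
  // Split on whitespace but keep the separators so spacing survives the rejoin.
  return text
    .split(/(\s+)/)
    .map((chunk) => (/\s/.test(chunk) ? chunk : bionicWord(chunk)))
    .join("");
}
```

In a content script, the returned markup would then replace a text node's contents inside a sanitized container.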
| Layer | Technology | Purpose |
|---|---|---|
| Frontend | React 19, Tailwind CSS v4 | High-performance Extension UI & Side Panel. |
| Build Tool | Vite, CRXJS | Fast HMR and optimized extension bundling. |
| AI Logic | Google Gemini 1.5 Flash | Real-time summarization, entity extraction, and explanation. |
| Content Script | TypeScript | Direct DOM manipulation for Bionic Reading and Highlighting. |
| State | React Hooks | Local state management for sensory preferences. |
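The AI layer above can be sketched as a small service that calls Gemini 1.5 Flash over the public REST endpoint. The function names and prompt wording are illustrative, not the project's actual `services/` API:

```typescript
// Illustrative summarization client for the public Gemini REST API.
const GEMINI_URL =
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent";

export function buildSummaryPrompt(pageText: string): string {
  return `Give a short, plain-language TL;DR of this page:\n\n${pageText}`;
}

export async function summarizePage(pageText: string, apiKey: string): Promise<string> {
  const res = await fetch(`${GEMINI_URL}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{ parts: [{ text: buildSummaryPrompt(pageText) }] }],
    }),
  });
  if (!res.ok) throw new Error(`Gemini request failed: ${res.status}`);
  const data = await res.json();
  // The summary is the first text part of the first candidate.
  return data.candidates[0].content.parts[0].text;
}
```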
cogniflow/
├── components/ # Reusable UI components (ChatInterface, VisionBoard)
├── public/ # Static assets (icons, images)
├── services/ # AI Service integrations (Gemini API)
├── utils/ # Core logic and helpers
│ ├── extensionUtils.ts # DOM manipulation & Content Injection
│ └── textProcessing.ts # Bionic reading algorithms
├── App.tsx # Main Extension Side Panel application
├── background.js # Background service worker (Context Menus)
├── manifest.json # Chrome Extension Configuration
└── vite.config.ts # Build configuration with CRXJS

- Node.js 18+
- npm or yarn
- Clone the repository

  ```bash
  git clone https://github.com/shafayatsaad/cogniflow.git
  cd cogniflow
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Set up environment variables: create a `.env.local` file in the root directory:

  ```
  VITE_GEMINI_API_KEY=your_gemini_api_key_here
  ```

- Start the development server

  ```bash
  npm run dev
  ```

  This runs Vite in watch mode.
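Vite only exposes environment variables prefixed with `VITE_`, so the key from `.env.local` reaches extension code as `import.meta.env.VITE_GEMINI_API_KEY`. A minimal fail-fast guard (illustrative placement, e.g. in the Gemini service module):

```typescript
// Vite inlines VITE_-prefixed variables at build time; fail fast if absent.
const apiKey = import.meta.env.VITE_GEMINI_API_KEY;

if (!apiKey) {
  throw new Error("VITE_GEMINI_API_KEY is missing; add it to .env.local");
}
```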
- Load in Chrome
  - Go to `chrome://extensions`.
  - Enable Developer mode.
  - Click Load unpacked.
  - Select the `dist` folder.
- Documentation: See `docs/blueprint.md` for detailed project specifications
- Issues: Report bugs or request features on GitHub Issues
- Discussions: Join community discussions on GitHub Discussions
We welcome contributions! Please see our Contributing Guidelines for details on how to get involved.
Distributed under the MIT License. See LICENSE for more information.
Shafayat Saad - Project Lead