Pragya is a mobile-first, agent-native computing layer that replaces app-centric interaction with intent-based execution, letting users get things done by expressing goals rather than managing apps.

Pragya rethinks how people interact with smartphones. Instead of opening apps, navigating UIs, and repeating context, users simply express intent, and Pragya plans, executes, and completes tasks across the system.
Pragya is built for:
- Power mobile users
- Privacy-conscious users
- Developers & researchers exploring agent-native systems
- Future OEM / OS innovators
- Smart device & HCI researchers
Modern mobile operating systems are:
- 📱 App-centric
- 🧠 Cognitively expensive
- 🔁 Repetitive
- ☁️ Cloud-dependent
- 🔒 Opaque about data usage
Pragya introduces an agent-first interaction model built for mobile realities.
Smartphones force users to think in apps, not outcomes.
To complete a simple task, users must:
- Decide which app to open
- Navigate multiple screens
- Copy context across apps
- Repeat the same steps daily
Today's voice assistants do not solve this:
- They sit inside app-centric OSes
- They lack system-level orchestration
- They ask too many follow-up questions
- They cannot complete multi-step goals reliably
Pragya makes the agent the interface.
Users do not open apps.
They express intent.
“Send my ETA to my mom.”
Pragya:
- Understands the goal
- Infers required context (location, contacts, preferences)
- Orchestrates system capabilities and skills
- Executes the task end-to-end
- Displays only the UI required for confirmation
No app switching.
No manual steps.
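As a minimal sketch of the flow above, the snippet below walks the "Send my ETA to my mom" example through goal understanding, context inference, and execution. All names (`Intent`, `infer_context`, `execute`) and the stubbed values are illustrative assumptions, not Pragya's real API:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """A parsed user goal plus the context the agent inferred for it."""
    goal: str
    context: dict = field(default_factory=dict)

def infer_context(utterance: str) -> Intent:
    # Toy stand-in for the on-device NLU: a real intent engine would
    # resolve "my mom" via contacts and "ETA" via live location/routing.
    intent = Intent(goal="send_eta")
    if "mom" in utterance.lower():
        intent.context["recipient"] = "Mom"    # contact lookup (stubbed)
    intent.context["eta_minutes"] = 12         # location + routing (stubbed)
    return intent

def execute(intent: Intent) -> str:
    # The agent picks a messaging skill and fills in the message; the
    # user would see only a confirmation UI before it is sent.
    if intent.goal == "send_eta":
        return (f"To {intent.context['recipient']}: "
                f"I'll be there in about {intent.context['eta_minutes']} minutes.")
    raise ValueError(f"no skill for goal {intent.goal!r}")

message = execute(infer_context("Send my ETA to my mom."))
print(message)
```

The point of the sketch is the shape of the interaction: the user supplies one utterance, and everything else (recipient resolution, ETA computation, message composition) happens behind the confirmation step.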
```mermaid
graph TD
    A[User Intent<br/>Text / Voice] --> B[Intent Engine<br/>On-Device LLM]
    B --> C[Planner & Skill Router]
    C --> D[Skill Execution Layer]
    D --> E[Dynamic UI Renderer]
    C --> F[Personal Vault]
    F --> G[Audit Log]
```
```mermaid
graph TD
    A[Intent Input] --> B[Understanding]
    B --> C[Planning]
    C --> D[Execution]
    D --> E[UI + Outcome]
    E --> F[Audit Record]
```
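The stages in this flow compose naturally as a pipeline that ends in an audit record. The sketch below illustrates that composition under stated assumptions; the stage functions and record fields are placeholders, not the real runtime:

```python
from datetime import datetime, timezone

def understand(raw: str) -> dict:
    """Understanding stage: raw text/voice input -> structured goal (stubbed)."""
    return {"goal": raw.strip().lower()}

def plan(intent: dict) -> list[str]:
    """Planning stage: goal -> ordered skill invocations (stubbed)."""
    return [f"skill:{intent['goal']}"]

def execute(steps: list[str]) -> dict:
    """Execution stage: run each step and collect the outcome."""
    return {"steps": steps, "status": "completed"}

def audit(outcome: dict) -> dict:
    """Audit stage: append a human-readable record of what ran and when."""
    return {
        "when": datetime.now(timezone.utc).isoformat(),
        "what": outcome["steps"],
        "status": outcome["status"],
    }

record = audit(execute(plan(understand("set alarm"))))
print(record["what"], record["status"])
```

Every task, successful or not, ends in an audit record, which is what makes the pipeline inspectable by the user.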
🧠 Intent-first interaction model
🧩 Skill-based execution (not apps)
🔁 Cross-skill orchestration
⚡ Low-latency on-device inference
🔐 Tokenized, one-time data access
📜 Human-readable audit logs
🧭 Context-aware execution
📱 Designed specifically for mobile constraints
🧩 Skills, Not Apps

Pragya replaces applications with Skills.

A Skill:
- Exposes intents it can fulfill
- Provides logic + optional UI fragments
- Never receives persistent access to user data
- Is orchestrated exclusively by the agent

Pragya acts as a universal coordination layer between Skills.
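The Skill contract above can be sketched as an interface plus an agent-side router. This is a hypothetical shape, not the real SDK: `Skill`, `CalendarSkill`, and `route` are names invented for illustration, but they mirror the listed properties (declared intents, per-call parameters, agent-only invocation):

```python
from abc import ABC, abstractmethod

class Skill(ABC):
    """A unit of capability: declares the intents it can fulfill and is
    invoked only by the agent, never directly by the user."""
    intents: tuple[str, ...] = ()

    @abstractmethod
    def handle(self, intent: str, params: dict) -> str:
        """Run the skill; params are passed per call, never stored."""

class CalendarSkill(Skill):
    intents = ("create_event", "list_events")

    def handle(self, intent, params):
        if intent == "create_event":
            return f"created event {params['title']!r}"
        return "no events"

def route(skills: list[Skill], intent: str, params: dict) -> str:
    """Agent-side router: finds the one skill that declared this intent."""
    for skill in skills:
        if intent in skill.intents:
            return skill.handle(intent, params)
    raise LookupError(f"no skill exposes {intent!r}")

result = route([CalendarSkill()], "create_event", {"title": "Dentist"})
print(result)
```

Note the inversion relative to apps: the skill never decides when it runs; the router hands it exactly the parameters for one invocation and nothing more.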
Privacy model:
- On-device intent processing by default
- Explicit permission requests per action
- One-time, scoped data tokens
- Full audit log visible to the user
- No silent background access
Privacy is architectural, not policy-based.
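"One-time, scoped data tokens" plus a user-visible audit log can be illustrated with a toy vault. `PersonalVault` and its methods are assumptions made for this sketch; the real Personal Vault would also encrypt at rest:

```python
import secrets

class PersonalVault:
    """Toy vault: data is released only via single-use, scoped tokens,
    and every grant and read is appended to a user-visible audit log."""

    def __init__(self, data: dict):
        self._data = data
        self._tokens: dict[str, str] = {}   # token -> scope (one field name)
        self.audit_log: list[str] = []

    def grant(self, scope: str) -> str:
        """User-approved grant: mint a one-time token for a single field."""
        token = secrets.token_hex(8)
        self._tokens[token] = scope
        self.audit_log.append(f"granted token for {scope!r}")
        return token

    def read(self, token: str) -> str:
        """Redeem a token exactly once; it is invalidated on use."""
        scope = self._tokens.pop(token, None)   # pop => single-use
        if scope is None:
            raise PermissionError("invalid or already-used token")
        self.audit_log.append(f"read {scope!r}")
        return self._data[scope]

vault = PersonalVault({"location": "12.97,77.59"})
token = vault.grant("location")
vault.read(token)        # first use succeeds; a second use raises
```

Because the token names one scope and dies on first use, a skill can consume the data it was granted but can never hold standing access, which is what "architectural, not policy-based" means in practice.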
🛠️ Tech Stack (Planned / In Progress)

Core
- Android (AOSP-compatible layer)
- Kotlin / Java
- Python (agent & experimentation)

AI / ML
- On-device LLMs (quantized)
- Intent parsing & planning models
- Context memory with decay

System Integration
- Android Accessibility APIs
- Notification & system event listeners
- Overlay & dynamic UI rendering
```
Pragya/
├── docs/                # HLD, LLD, architecture, ethics
├── research/            # Related work & experiments
├── product/             # Roadmap, personas, monetization
├── sdk/                 # Skill SDK & specs
├── src/
│   ├── agent_core/      # Core agent runtime
│   ├── intent_engine/   # NLU & planning
│   ├── skill_router/    # Skill orchestration
│   ├── ui_runtime/      # Dynamic UI renderer
│   └── personal_vault/  # Encrypted data & audit logs
├── assets/              # Diagrams, demos
└── README.md
```
| Metric | Target |
| --- | --- |
| Intent latency | < 300 ms |
| On-device inference | Yes |
| Background footprint | Minimal |
| Network dependency | Optional |
| Data persistence | User-controlled |
| Capability | Pragya | Traditional Mobile OS | Voice Assistants |
| --- | --- | --- | --- |
| Intent-first UX | ✅ | ❌ | ❌ |
| Cross-app execution | ✅ | ❌ | |
Roadmap:
- Android agent runtime (v0)
- Intent parsing & planning engine
- Core system skills (phone, messages, calendar)
- Skill SDK v0
- Proactive intent suggestions
- Developer preview
- Mobile beta
Pragya aims to redefine mobile computing by:
- Reducing cognitive load
- Eliminating app switching
- Making privacy transparent
- Treating intent as the primary abstraction
Long-term goal:
Replace app-centric interaction with intent-native computing.
Pragya is open-source and research-driven.
We welcome contributions from:
- Mobile systems engineers
- AI / agent researchers
- Privacy & security experts
- HCI & UX designers
See CONTRIBUTING.md for details.
Apache 2.0 License
If you like this project, ⭐ star the repo and join the journey.