An independent, research-backed evaluation framework for EdTech products
The Educator Index™ is a comprehensive evaluation framework for EdTech products that helps:
- School Districts make evidence-based purchasing decisions
- EdTech Investors predict product-market fit and retention
- EdTech Companies receive independent certification and improvement feedback
Unlike ad-hoc reviews or vendor-funded studies, The Educator Index evaluates products against peer-reviewed research on what drives student learning outcomes.
| Tier | Score | Description |
|---|---|---|
| Exemplary | 90-100 | Best-in-class, proven impact |
| Distinguished | 80-89 | High quality, strong evidence |
| Certified | 70-79 | Meets standards, ready for adoption |
| Developing | 50-69 | Early stage, needs improvement |
| Below Threshold | <50 | Significant gaps, not recommended |
The Educator Index™ Framework Domain & Indicator Naming Conventions (Version 2.0)
DOMAIN 1: STUDENT LEARNING IMPACT
"Does this product support how students actually learn?"
Indicators (4 total = 60 points possible):
- Instructional Fit (0-15 points): Alignment to classroom workflows, instructional priorities, and learning outcomes
- High-Effect Learning Behaviors (0-15 points): Presence of research-backed practices that drive achievement
- Equity & Learner Diversity (0-15 points): Support for student agency, multilingual learners, students with disabilities, and diverse needs
- Learning Data Quality (0-15 points): Richness of evidence that reveals student thinking

DOMAIN 2: INSTRUCTIONAL INTEGRATION & USABILITY
"Can educators actually use this consistently, and does it make them better?"
Indicators (4 total = 60 points possible):
- Routine-Ready Design (0-15 points): Low-friction implementation for typical educators
- Dashboard & Actionability (0-15 points): Real-time insights that enable teacher action
- Culture & Capacity Support (0-15 points): Community, ecosystem, and change management
- Professional Development & Support (0-15 points): Scalability of training and implementation assistance

DOMAIN 3: TRUST & RELIABILITY
"Is this product safe, reliable, and sustainable?"
Indicators (3 total = 45 points possible):
- Technical Foundation (0-15 points): Reliability, interoperability, and security
- AI Quality & Safety (0-15 points): Transparency, bias mitigation, accuracy, and student data protection
- Adoption & Retention (0-15 points): Product stickiness, sustained engagement, and market validation
TOTAL FRAMEWORK: 3 Domains, 11 Indicators, 165 Points Possible
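As a sketch of how raw indicator points might map onto the tier table above: the tiers are stated on a 0-100 scale while the 11 indicators sum to 165 points, so some normalization is implied. The scaling rule below is an assumption for illustration, not the framework's documented method.

```python
def normalize(raw_points: float, max_points: int = 165) -> float:
    """Scale raw indicator points (0-165) onto the 0-100 tier scale.

    Assumption: a simple linear rescale; the framework may weight domains
    differently.
    """
    return 100 * raw_points / max_points

def tier_for(score: float) -> str:
    """Map a 0-100 score to its Educator Index tier."""
    if score >= 90:
        return "Exemplary"
    if score >= 80:
        return "Distinguished"
    if score >= 70:
        return "Certified"
    if score >= 50:
        return "Developing"
    return "Below Threshold"

print(tier_for(normalize(132)))  # 132/165 -> 80.0 -> Distinguished
```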
Every pillar is validated against peer-reviewed research with documented effect sizes:
- Instructional Design: d = 0.50-0.68 (Stockard et al., 2018)
- Feedback: d = 0.70 (Hattie, 2009; Wisniewski et al., 2020)
- Collective Teacher Efficacy: d = 1.57, among the highest effect sizes reported in education research
- Universal Design for Learning: d = 0.35-0.52 (Al-Makhzoomy et al., 2023)
- Formative Assessment: d = 0.40-0.70 (Black & Wiliam, 2018)
See full research documentation: docs/framework/research-basis.md
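The figures above are Cohen's d values, i.e. standardized mean differences between treatment and control groups. For readers unfamiliar with the metric, here is a minimal illustration with toy numbers (not drawn from any cited study):

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) using a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Toy example: both groups have SD 5 and the means differ by 10 points,
# so the effect size is 10 / 5 = 2.0.
print(cohens_d([85, 90, 95], [75, 80, 85]))  # 2.0
```

By convention, d ≈ 0.2 is considered small, 0.5 medium, and 0.8 large, which is why an effect like 1.57 stands out.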
For School Districts:
- Request an evaluation: Submit product details via the vendor questionnaire
- Review the report: Get comprehensive scoring plus implementation guidance
- Make an informed decision: Use data-driven insights for procurement

For EdTech Investors:
- Submit a company for due diligence: Use the investor questionnaire
- Receive retention predictions: ARR/NRR forecasts based on product quality
- Validate with the data room: Compare predictions against actual metrics

For EdTech Companies:
- Self-assess: Complete the vendor questionnaire
- Get certified: Receive the Educator Index badge plus a detailed report
- Improve the product: Use feedback to enhance quality
- Python 3.8+
- pip (Python package manager)
```bash
# Clone the repository
git clone https://github.com/YOUR-USERNAME/educator-index.git
cd educator-index

# Install dependencies
pip install -r requirements.txt

# Run an evaluation using a completed questionnaire JSON file
python scripts/run_evaluation.py --questionnaire data/sample_evaluations/example.json

# Generate an output report
python scripts/generate_report.py --input results.json --format html
```

```
educator-index/
├── docs/                  # Documentation
│   ├── framework/         # Framework design & research
│   ├── questionnaires/    # Intake forms
│   └── examples/          # Sample evaluations
│
├── framework/             # Core scoring logic (Python)
│   ├── pillars.py         # Pillar definitions
│   ├── scoring.py         # Scoring algorithms
│   └── predictions.py     # ARR/NRR models
│
├── data/                  # Data files & samples
├── tests/                 # Unit tests
└── scripts/               # Utility scripts
```
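The `run_evaluation.py` script consumes a completed questionnaire as JSON. The actual schema is defined in `docs/questionnaires/`; the sketch below is purely illustrative, and every field name in it is an assumption, not the real format. It assumes each of the eleven indicators receives a 0-15 score:

```python
import json

# Hypothetical questionnaire shape -- the real schema lives in docs/questionnaires/.
questionnaire = {
    "product": "Example Product",
    "indicator_scores": {  # each indicator scored 0-15; names are guesses
        "instructional_fit": 12,
        "high_effect_learning_behaviors": 11,
        "equity_learner_diversity": 13,
        "learning_data_quality": 10,
        "routine_ready_design": 12,
        "dashboard_actionability": 11,
        "culture_capacity_support": 9,
        "professional_development_support": 12,
        "technical_foundation": 13,
        "ai_quality_safety": 12,
        "adoption_retention": 10,
    },
}

total = sum(questionnaire["indicator_scores"].values())
print(f"{total} of 165 points")                    # 125 of 165 points
print(json.dumps(questionnaire, indent=2)[:30])    # serializes cleanly to a JSON file
```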
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
- Research: Add peer-reviewed citations to strengthen framework
- Code: Improve scoring algorithms or report generation
- Evaluations: Submit product evaluations (with evidence)
- Feedback: Suggest framework improvements
This project is licensed under the MIT License - see LICENSE file for details.
The Educator Index™ is developed by Danelle Almaraz, Merissa Sadler-Holder, and Bonnie Nieves, a research and consulting team focused on evidence-based EdTech evaluation.
- Website: [teachingwithmachines.com]
- Contact: [email]
- Twitter: [@teachingwmachines]
If you use The Educator Index in research or practice, please cite:
The Educator Index™: A Research-Backed Framework for EdTech Evaluation. (2026). GitHub. https://github.com/YOUR-USERNAME/educator-index
- Full Research Documentation
- Framework Overview
- Scoring Methodology
- Example Evaluations
- Setup Guide for Claude Code ← START HERE
If you find The Educator Index useful:
- ⭐ Star this repository
- 🐦 Share on social media
- 📧 Tell your district/company about it
- 🤝 Contribute improvements
Built with ❤️ for educators, by educators