```python
class SachinPaunikar:
    role = "Data Scientist | ML Fairness & Responsible AI Engineer"
    location = "Nagpur, Maharashtra, India 🇮🇳"
    company = "SparrowAI Research and Development Center"
    education = "B.E. Computer Science — Nagpur University | CGPA: 8.2 / 10.0"
    open_to = ["Remote roles", "Full-time positions", "Collaborations"]
    focus_areas = [
        "⚖️ ML Fairness & Bias Detection",
        "📊 Real-time Drift Monitoring",
        "🔍 Explainable AI (SHAP · DiCE · LIME)",
        "🏥 Medical Computer Vision",
        "🎵 Deep Learning · Audio Classification",
        "🗣️ NLP · Generative AI Pipelines",
    ]
    currently = "Building production-grade ethical AI systems aligned with the EU AI Act & EEOC"
    philosophy = "Make AI responsible, transparent, and fair — for everyone."
```

| # | Project | Description | Stack | Demo |
|---|---|---|---|---|
| 🥇 | Bias Drift Guardian | Real-time AI fairness & drift monitoring. EEOC / EU AI Act compliant. Intersectional bias detection across compound subgroups | Python · Streamlit · FastAPI · SHAP · Fairlearn · Docker | 🔴 Live |
| 🥈 | Urban Sound Classifier | 96.63% accuracy on UrbanSound8K (8,732 samples). Hybrid U-Net + CNN ensemble with real-time microphone classification | TensorFlow · U-Net · Librosa · TFLite · Flask | — |
| 🥉 | Transcript → Ad Generator | NLP pipeline: transcript ingestion → NER → LLM ad copy → async video rendering. CI/CD via GitHub Actions | spaCy · Redis Queue · MoviePy · Docker · GitHub Actions | — |
| 4️⃣ | Skin Lesion Segmentation | Medical AI: U-Net pixel segmentation on HAM10000. Temporal tracking with automated >15% growth alerts | TensorFlow · U-Net · OpenCV · Albumentations · HAM10000 | — |
| 5️⃣ | RetinaFace Detection | Face detection & 5-point landmark localisation using the RetinaFace architecture | Python · InsightFace · OpenCV · Jupyter | 🔴 Live |
What makes it unique: Standard fairness tools check one attribute at a time (gender or age). Bias Drift Guardian detects compound discrimination across intersecting subgroups — the kind courts care about.
Standard: "No gender bias detected" ✅ (Male: 70%, Female: 68%)
Ours: "Female employees aged 50+ → only 38% approval rate!" ❌ (Disparity: 0.48)
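The compound-subgroup check above can be sketched in plain Python. This is an illustrative toy, not Bias Drift Guardian's actual code: the function, the attribute names, and the data are made up to show why a single-attribute check can pass while an intersectional one fails.

```python
from itertools import product

def subgroup_rates(records, attrs, outcome="approved"):
    """Approval rate for every compound subgroup (e.g. gender × age_band)."""
    rates = {}
    values = [sorted({r[a] for r in records}) for a in attrs]
    for combo in product(*values):
        group = [r for r in records if all(r[a] == v for a, v in zip(attrs, combo))]
        if group:  # skip empty intersections
            rates[combo] = sum(r[outcome] for r in group) / len(group)
    return rates

# Toy data: each single-attribute check looks fine,
# but the (F, 50+) intersection is heavily penalised.
records = (
    [{"gender": "M", "age_band": "<50", "approved": 1}] * 7
    + [{"gender": "M", "age_band": "<50", "approved": 0}] * 3
    + [{"gender": "F", "age_band": "<50", "approved": 1}] * 9
    + [{"gender": "F", "age_band": "<50", "approved": 0}] * 1
    + [{"gender": "F", "age_band": "50+", "approved": 1}] * 3
    + [{"gender": "F", "age_band": "50+", "approved": 0}] * 7
)

rates = subgroup_rates(records, ["gender", "age_band"])
print(rates)  # ('F', '50+') lags far behind the other subgroups
print(min(rates.values()) / max(rates.values()))  # compound disparate-impact ratio
```

Aggregated by gender alone the female rate is 0.60 against a male rate of 0.70, yet the (F, 50+) subgroup sits at 0.30 — exactly the kind of gap a single-attribute check hides.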
Core capabilities:
| Capability | Details |
|---|---|
| 🎯 Intersectional Fairness | Compound subgroup analysis (gender × age × race) |
| 📊 Drift Detection | PSI · KS Test · Chi-Square with configurable thresholds |
| 🔍 Root Cause Analysis | SHAP feature importance drift attribution |
| 🔮 Counterfactual XAI | DiCE What-If explanations (constraint-aware, EEOC-auditable) |
| 📁 Multi-Dataset | German Credit · Adult Census · COMPAS Recidivism |
| 🚀 Deployment | Docker Compose · FastAPI · Streamlit Cloud · MIT licensed |
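The PSI drift check in the table can be sketched in a few lines of dependency-free Python. This is a minimal illustration of the metric itself, not the project's implementation; the bin count, epsilon, and the 0.25 alert threshold are common conventions, not values taken from Bias Drift Guardian.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        # Small epsilon keeps log() finite for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 1000 for i in range(1000)]          # uniform on [0, 1)
shifted  = [min(x * 1.4, 0.999) for x in baseline]  # mass pushed into the top bin

print(psi(baseline, baseline))  # ≈ 0 → stable
print(psi(baseline, shifted))   # large → drift (PSI > 0.25 is a common alert threshold)
```

KS and Chi-Square checks slot into the same pattern: compare the live window against the training baseline, alert when the statistic crosses a configured threshold.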
- Core Languages
- Machine Learning & Deep Learning
- Responsible AI & Explainability
- Data & Visualisation
- NLP & Computer Vision
- Deployment & MLOps
I focus on a gap most ML engineers ignore: what happens after your model is deployed.
The hidden reality of production ML:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
• 80% of models experience drift within 6 months of deployment
• Compound discrimination (gender × age × race) goes undetected
by standard single-attribute fairness checks
• EEOC / EU AI Act compliance is now a legal requirement,
not a nice-to-have
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
My solution: Real-time monitoring that catches what others miss.
Areas of deep expertise:
- Intersectional Fairness — compound multi-attribute subgroup analysis beyond single-attribute tools
- Data Drift Detection — PSI, KS Test, Chi-Square with root-cause attribution via SHAP
- Counterfactual Explanations — DiCE-based What-If analysis that is constraint-aware and audit-ready
- Regulatory Compliance — EEOC (US), EU AI Act, GDPR-aware system design
- MLOps — Docker, FastAPI, GitHub Actions CI/CD, Redis async queues, Streamlit Cloud deployment
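The constraint-aware counterfactual idea can be shown with a deliberately tiny what-if search. This is a hand-rolled sketch of the concept, not DiCE and not the project's code: the model, feature names, and candidate values are all invented for illustration.

```python
def counterfactual(model, instance, candidates, immutable=()):
    """One-feature what-if search: find a single change that flips the
    model's decision, never touching protected/immutable attributes."""
    if model(instance):
        return None  # already approved, nothing to explain
    for feat, values in candidates.items():
        if feat in immutable:
            continue  # constraint-aware: e.g. age is never a "suggestion"
        for value in values:
            if model({**instance, feat: value}):
                return feat, value
    return None

# Toy credit rule: approve if income ≥ 50 and debt < 30.
model = lambda r: r["income"] >= 50 and r["debt"] < 30

applicant = {"income": 42, "debt": 25, "age": 56}
suggestion = counterfactual(
    model, applicant,
    candidates={"income": [45, 50, 60], "debt": [20, 10]},
    immutable=("age",),
)
print(suggestion)  # ('income', 50) → "would be approved if income were 50"
```

Real counterfactual engines search over many features at once and rank candidates by proximity and plausibility, but the audit-friendly property is the same: the explanation only ever proposes changes to actionable attributes.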
🏢 SparrowAI Research and Development Center | Data Scientist & AI Researcher | Jan 2025–Present
→ Built Bias Drift Guardian: production fairness monitoring system (EEOC / EU AI Act)
→ Intersectional bias detection across compound subgroups (e.g. Female + Age 50+ → 38% approval)
→ Drift detection via PSI, KS Test, Chi-Square across 3 real-world datasets
→ SHAP root-cause analysis + DiCE counterfactual explanations for compliance teams
→ Dockerised full stack; 2,800+ lines of technical documentation
🏢 Sparrow AI Pvt. Ltd. | Data Science Intern | Jan 2025 – Jun 2025
→ Customer churn prediction & sales forecasting (classification + regression)
→ Automated preprocessing pipelines — reduced manual effort ~30%
→ Stakeholder dashboards (Streamlit · Matplotlib · Seaborn)
I am actively looking for Data Scientist and ML Engineer roles — remote or Nagpur-based.
If you work in Responsible AI, MLOps, FinTech, HealthTech, or HR Tech — let's talk.
"Empowering ethical AI through transparent fairness monitoring, real-time bias detection, and responsible machine learning."