I care about what it takes to make AI actually work in production. The model is usually the easy part. The hard part is building something reliable enough that people can trust it when the stakes are real.
Most of my prior work lived in private internal systems, so this GitHub is where I started building in public during my Wharton MBA.
Right now: AI Product Manager at Ikigai Labs (backed by Khosla Ventures), building forecasting systems for enterprise supply chains where predictions feed directly into procurement decisions, and errors can mean millions in misallocated inventory.
Previously: Built a distributed risk analytics platform at Goldman Sachs. Traders would not act on a number they could not interrogate. Latency mattered. Accuracy mattered. Explainability mattered just as much.
Wharton MBA '26 | IIT Roorkee CS
🔨 Side projects — each one taught me something different about building with AI:
- AI Compass — Describe your use case, get model recommendations with cost estimates. Same recommendation engine, but different interfaces required different guardrails: a web app for users, an MCP server for AI clients.
- Chess Coach — Engines calculate positions but cannot explain them. Large language models can explain ideas but hallucinate evaluations. I built a hybrid system where the hard part is deciding which one to trust at each step.
- Startup Research — Turns 10 web sources into one coherent brief in 90 seconds. The real problem is not retrieval. It is getting the model to synthesize with appropriate uncertainty instead of presenting everything as fact.

