Closed
Changes from all commits
30 commits
2f0314e
level-1: Varshit Pratap Singh Bhadauria
temporalzone Apr 15, 2026
923214c
level-2: Varshit Pratap Singh Bhadauria
temporalzone Apr 15, 2026
ce40cf7
fix: added test output and LLM output for level-2
temporalzone Apr 15, 2026
4ba4db1
fix: level 2 proper bot format
temporalzone Apr 15, 2026
8d5a207
final fix level 2
temporalzone Apr 15, 2026
17604ef
final final level 2 fix
temporalzone Apr 15, 2026
78c97b1
FINAL LEVEL 2 FIX
temporalzone Apr 15, 2026
6cf6e33
level-2: Varshit Pratap Singh Bhadauria
temporalzone Apr 15, 2026
0d7d16c
fix: add level2.md submission file
temporalzone Apr 15, 2026
15bc910
fix: add level2.md submission file
temporalzone Apr 15, 2026
5d91214
fix: update level2.md with actual outputs
temporalzone Apr 15, 2026
11fc75a
changed file location
temporalzone Apr 16, 2026
f9c4ed1
level-5: Varshit Pratap Singh Bhadauria
temporalzone May 8, 2026
6ef12c3
Add secured Level 6 dashboard and graph files
temporalzone May 8, 2026
28c13f2
Add requirements.txt for Streamlit Cloud
temporalzone May 8, 2026
67a75b4
Update app.py
temporalzone May 8, 2026
74c6f48
Update app.py
temporalzone May 8, 2026
95a25b8
Use Streamlit secrets for Neo4j credentials
temporalzone May 8, 2026
40ea0f1
Add Streamlit import to app.py
temporalzone May 8, 2026
5ed64d9
Add Neo4j import to app.py
temporalzone May 8, 2026
9dc3ac2
Add pandas import to app.py
temporalzone May 8, 2026
3298259
Update CSV file paths for data loading
temporalzone May 8, 2026
b43e274
Rename work_df to workers_df for clarity
temporalzone May 8, 2026
f0100fd
Refactor Neo4j connection and data loading
temporalzone May 8, 2026
0a95ae6
Complete Level 6 Dashboard and README
temporalzone May 8, 2026
18832da
Add README file for Level 6
temporalzone May 8, 2026
abb741e
Merge branch 'level-5-varshit-pratap-singh-bhadauria' of https://gith…
temporalzone May 8, 2026
7a82c5e
Update Level 5 answers and schema
temporalzone May 8, 2026
40fdb54
Delete submissions/varshit-pratap-singh-bhadauria/level5/schema.md
temporalzone May 8, 2026
c7c4f5c
Delete submissions/varshit-pratap-singh-bhadauria/level5/answers.md
temporalzone May 8, 2026
10 changes: 10 additions & 0 deletions contributors/varshit-pratap-singh-bhadauria.json
@@ -0,0 +1,10 @@
{
"name": "Varshit Pratap Singh Bhadauria",
"github": "temporalzone",
"program": "B.Tech CSE",
"campus": "Amity Noida",
"skills": ["python", "java", "javascript", "react", "django", "flask", "rest-apis", "sqlite", "mysql", "git", "dsa", "jwt-auth"],
"interests": ["agents", "AI", "api-integration", "backend", "automation"],
"track": "A: Agent Builders",
"my_twin": "My digital twin would correlate sleep cycles, screen time, and focus sessions to detect productivity patterns I can't see manually. I want it to predict low-performance windows before they happen — not just track, but intervene with data-backed recommendations. This is exactly the kind of agent I want to build."
}
5 changes: 4 additions & 1 deletion package-lock.json

Some generated files are not rendered by default.

53 changes: 53 additions & 0 deletions submissions/varshit-pratap-singh-bhadauria/HOW_I_DID_IT.md
@@ -0,0 +1,53 @@
# Level 2 Submission — Varshit Pratap Singh Bhadauria

## What I Did

### Step 1: Ran LPI Sandbox
Ran the command: `npm run test-client`

Output:
=== LPI Sandbox Test Client ===
[PASS] smile_overview({})
[PASS] smile_phase_detail({"phase":"reality-emulation"})
[PASS] list_topics({})
[PASS] query_knowledge({"query":"explainable AI"})
[PASS] get_case_studies({})
[PASS] get_case_studies({"query":"smart buildings"})
[PASS] get_insights({"scenario":"personal health digital twin","tier":"free"})
[PASS] get_methodology_step({"phase":"concurrent-engineering"})
=== Results ===
Passed: 8/8
Failed: 0/8
All tools working. Your LPI Sandbox is ready.

### Step 2: Installed Ollama — Model: qwen2.5:1.5b

Command: `ollama run qwen2.5:1.5b "What is the SMILE methodology in digital twins?"`

LLM Output:
The SMILE methodology stands for Simulation, Model, Input, Output,
Execution — used in digital twin development to help organizations
gain insight into their systems. It involves simulation, modeling,
input identification, output generation, and execution of real-world
scenarios.
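
The same question can be reproduced programmatically. Below is a minimal sketch using Ollama's local HTTP API, assuming the default server on localhost:11434 (standard library only):

```python
# Query the local qwen2.5:1.5b model through Ollama's /api/generate endpoint.
import json
import urllib.request

payload = {
    "model": "qwen2.5:1.5b",
    "prompt": "What is the SMILE methodology in digital twins?",
    "stream": False,  # return a single JSON object instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])  # the model's full answer
```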

### Step 3: What Surprised Me About SMILE

1. The local LLM incorrectly expanded SMILE as "Simulation, Model,
Input, Output, Execution" — proving that general-purpose models
hallucinate domain-specific knowledge about digital twins.

2. The actual SMILE methodology (Sustainable Methodology for Impact
Lifecycle Enablement) focuses on 6 structured phases — which the
LPI tools explained far more accurately than the LLM.

3. This proved why grounding AI agents with domain-specific tools
like LPI is critical — LLMs alone cannot be trusted for specialized
digital twin knowledge without retrieval-augmented context.

## Problems I Hit
- Port conflict error when running `ollama serve`; solved by calling
`ollama run` directly, since the server was already running in the background.

## Model Choice
Used qwen2.5:1.5b: a lightweight model that runs locally without a GPU and needs no API key.
53 changes: 53 additions & 0 deletions submissions/varshit-pratap-singh-bhadauria/level 2/HOW_I_DID_IT.md
@@ -0,0 +1,53 @@
# Level 2 Submission — Varshit Pratap Singh Bhadauria

## What I Did

### Step 1: Ran LPI Sandbox
Ran the command: `npm run test-client`

Output:
=== LPI Sandbox Test Client ===
[PASS] smile_overview({})
[PASS] smile_phase_detail({"phase":"reality-emulation"})
[PASS] list_topics({})
[PASS] query_knowledge({"query":"explainable AI"})
[PASS] get_case_studies({})
[PASS] get_case_studies({"query":"smart buildings"})
[PASS] get_insights({"scenario":"personal health digital twin","tier":"free"})
[PASS] get_methodology_step({"phase":"concurrent-engineering"})
=== Results ===
Passed: 8/8
Failed: 0/8
All tools working. Your LPI Sandbox is ready.

### Step 2: Installed Ollama — Model: qwen2.5:1.5b

Command: `ollama run qwen2.5:1.5b "What is the SMILE methodology in digital twins?"`

LLM Output:
The SMILE methodology stands for Simulation, Model, Input, Output,
Execution — used in digital twin development to help organizations
gain insight into their systems. It involves simulation, modeling,
input identification, output generation, and execution of real-world
scenarios.

### Step 3: What Surprised Me About SMILE

1. The local LLM incorrectly expanded SMILE as "Simulation, Model,
Input, Output, Execution" — proving that general-purpose models
hallucinate domain-specific knowledge about digital twins.

2. The actual SMILE methodology (Sustainable Methodology for Impact
Lifecycle Enablement) focuses on 6 structured phases — which the
LPI tools explained far more accurately than the LLM.

3. This proved why grounding AI agents with domain-specific tools
like LPI is critical — LLMs alone cannot be trusted for specialized
digital twin knowledge without retrieval-augmented context.

## Problems I Hit
- Port conflict error when running `ollama serve`; solved by calling
`ollama run` directly, since the server was already running in the background.

## Model Choice
Used qwen2.5:1.5b: a lightweight model that runs locally without a GPU and needs no API key.
36 changes: 36 additions & 0 deletions submissions/varshit-pratap-singh-bhadauria/level 2/level2.md
@@ -0,0 +1,36 @@
# Level 2 Submission — Varshit Pratap Singh Bhadauria

## Test Client Output
=== LPI Sandbox Test Client ===
[PASS] smile_overview({})
[PASS] smile_phase_detail({"phase":"reality-emulation"})
[PASS] list_topics({})
[PASS] query_knowledge({"query":"explainable AI"})
[PASS] get_case_studies({})
[PASS] get_case_studies({"query":"smart buildings"})
[PASS] get_insights({"scenario":"personal health digital twin","tier":"free"})
[PASS] get_methodology_step({"phase":"concurrent-engineering"})
Passed: 8/8 | Failed: 0/8
All tools working. Your LPI Sandbox is ready.

## LLM Output
Model: qwen2.5:1.5b
Command: ollama run qwen2.5:1.5b "What is the SMILE methodology in digital twins?"

The SMILE methodology stands for Simulation, Model, Input, Output,
Execution and it is an approach used in digital twin development to
help organizations gain insight into their systems or processes.

## 3 Things That Surprised Me About SMILE
1. The local LLM completely hallucinated SMILE's definition — calling
it "Simulation, Model, Input, Output, Execution" when it actually
stands for Sustainable Methodology for Impact Lifecycle Enablement,
proving LLMs cannot be trusted for domain-specific knowledge.

2. SMILE has 6 structured phases for digital twin implementation —
far more comprehensive than I expected, covering everything from
Reality Emulation to full lifecycle management.

3. Grounding AI agents with domain-specific tools like LPI is
critical — without retrieval-augmented context, even capable LLMs
produce confident but completely wrong answers about digital twins.
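
To make point 3 concrete, here is a hypothetical sketch of the grounding pattern (not part of the sandbox API): the tool result is injected into the prompt, so the model paraphrases retrieved context instead of recalling the SMILE expansion on its own. `llm` and `query_knowledge` are stand-in callables; the real sandbox exposes `query_knowledge` as a tool, not a Python function.

```python
# Hypothetical sketch of tool-grounded prompting; `llm` and
# `query_knowledge` are stand-ins, not real sandbox APIs.
def grounded_answer(question: str, llm, query_knowledge) -> str:
    context = query_knowledge({"query": question})  # domain facts from LPI
    prompt = (
        "Answer using ONLY the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm(prompt)  # the model now answers from retrieved context
```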
36 changes: 36 additions & 0 deletions submissions/varshit-pratap-singh-bhadauria/level2.md
@@ -0,0 +1,36 @@
# Level 2 Submission — Varshit Pratap Singh Bhadauria

## Test Client Output
=== LPI Sandbox Test Client ===
[PASS] smile_overview({})
[PASS] smile_phase_detail({"phase":"reality-emulation"})
[PASS] list_topics({})
[PASS] query_knowledge({"query":"explainable AI"})
[PASS] get_case_studies({})
[PASS] get_case_studies({"query":"smart buildings"})
[PASS] get_insights({"scenario":"personal health digital twin","tier":"free"})
[PASS] get_methodology_step({"phase":"concurrent-engineering"})
Passed: 8/8 | Failed: 0/8
All tools working. Your LPI Sandbox is ready.

## LLM Output
Model: qwen2.5:1.5b
Command: ollama run qwen2.5:1.5b "What is the SMILE methodology in digital twins?"

The SMILE methodology stands for Simulation, Model, Input, Output,
Execution and it is an approach used in digital twin development to
help organizations gain insight into their systems or processes.

## 3 Things That Surprised Me About SMILE
1. The local LLM completely hallucinated SMILE's definition — calling
it "Simulation, Model, Input, Output, Execution" when it actually
stands for Sustainable Methodology for Impact Lifecycle Enablement,
proving LLMs cannot be trusted for domain-specific knowledge.

2. SMILE has 6 structured phases for digital twin implementation —
far more comprehensive than I expected, covering everything from
Reality Emulation to full lifecycle management.

3. Grounding AI agents with domain-specific tools like LPI is
critical — without retrieval-augmented context, even capable LLMs
produce confident but completely wrong answers about digital twins.
@@ -0,0 +1 @@
.env
@@ -0,0 +1 @@
https://lpi-developer-kit-9kk8bvv5jtprafqzrrpyzt.streamlit.app/
24 changes: 24 additions & 0 deletions submissions/varshit-pratap-singh-bhadauria/level6/README.md
@@ -0,0 +1,24 @@
# Level 6: Factory Knowledge Graph Dashboard

This project is a Streamlit dashboard powered by a Neo4j knowledge graph. It replaces a 46-sheet Excel workbook for a steel fabrication company.

## Files Included:
- `seed_graph.py`: Standalone, idempotent script used to parse CSV data and populate the Neo4j Aura cloud database (seeding pattern sketched below).
- `app.py`: The Streamlit application containing the dashboard UI and Cypher queries.
- `DASHBOARD_URL.txt`: Contains the public link to the deployed Streamlit Cloud dashboard.
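
A minimal sketch of the idempotent seeding pattern, shown for the capacity sheet only; the relationship properties follow the `HAS_CAPACITY` query in `app.py`, while the CSV path and environment-variable names are illustrative. `MERGE` makes re-runs safe because existing nodes and relationships are matched rather than duplicated:

```python
# Sketch: idempotent capacity seeding (MERGE, not CREATE, so re-runs are safe).
import os
import pandas as pd
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    os.environ["NEO4J_URI"],  # same credential names the dashboard reads
    auth=(os.environ["NEO4J_USERNAME"], os.environ["NEO4J_PASSWORD"]),
)
df = pd.read_csv("capacity.csv")  # illustrative path

with driver.session() as session:
    for row in df.itertuples():
        session.run(
            """
            MERGE (wk:Week {id: $week})
            MERGE (c:Capacity {week: $week})
            MERGE (wk)-[hc:HAS_CAPACITY]->(c)
            SET hc.own = $own, hc.hired = $hired,
                hc.overtime = $overtime, hc.deficit = $deficit
            """,
            week=row.week, own=row.own_hours, hired=row.hired_hours,
            overtime=row.overtime_hours, deficit=row.deficit,
        )
driver.close()
```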

## Dashboard Features:
- Project Overview
- Station Load Visualization
- Capacity Tracker
- Worker Coverage Matrix
- Automated Self-Test Page
## Pushing the Submission
With `app.py`, `seed_graph.py`, `DASHBOARD_URL.txt`, and this README in place, the submission was committed and pushed:

git add app.py
git add seed_graph.py
git add DASHBOARD_URL.txt
git add README.md
git commit -m "Complete Level 6 Dashboard and README"
git push origin main

The pull request is titled `level-6: Your Name`, matching the naming convention in the grading rubric.
123 changes: 123 additions & 0 deletions submissions/varshit-pratap-singh-bhadauria/level6/app.py
@@ -0,0 +1,123 @@
import streamlit as st
import pandas as pd
import plotly.express as px
from neo4j import GraphDatabase

# Neo4j credentials are read from Streamlit secrets (.streamlit/secrets.toml
# locally, or the app settings on Streamlit Cloud)
URI = st.secrets["NEO4J_URI"]
USERNAME = st.secrets["NEO4J_USERNAME"]
PASSWORD = st.secrets["NEO4J_PASSWORD"]

# Connect to Neo4j; st.cache_resource creates the driver once and reuses it
@st.cache_resource
def get_db_driver():
    return GraphDatabase.driver(URI, auth=(USERNAME, PASSWORD))

driver = get_db_driver()

def run_query(query):
    """Run a Cypher query and return the result as a pandas DataFrame."""
    with driver.session() as session:
        result = session.run(query)
        # Handle empty results gracefully
        if not result.peek():
            return pd.DataFrame()
        return pd.DataFrame([r.values() for r in result], columns=result.keys())

# --- Sidebar Navigation ---
st.sidebar.title("Factory Dashboard")
page = st.sidebar.radio("Go to", ["Project Overview", "Station Load", "Capacity Tracker", "Worker Coverage", "Self-Test"])

# --- Page 1: Project Overview ---
if page == "Project Overview":
    st.title("Project Overview")
    query = """
        MATCH (p:Project)-[sched:SCHEDULED_AT]->(s:Station)
        OPTIONAL MATCH (p)-[:PRODUCES]->(prod:Product)
        RETURN p.name AS Project,
               sum(sched.planned_hours) AS Total_Planned,
               sum(sched.actual_hours) AS Total_Actual,
               collect(DISTINCT prod.type) AS Products
    """
    df = run_query(query)
    if not df.empty:
        # e.g. planned 100 h, actual 112 h -> variance 12.0%
        df['Variance %'] = ((df['Total_Actual'] - df['Total_Planned']) / df['Total_Planned'] * 100).round(2)
        st.dataframe(df)
    else:
        st.write("No data found.")

# --- Page 2: Station Load ---
elif page == "Station Load":
    st.title("Station Load")
    query = """
        MATCH (p:Project)-[sched:SCHEDULED_AT]->(s:Station)
        RETURN s.name AS Station, sched.week AS Week,
               sum(sched.planned_hours) AS Planned,
               sum(sched.actual_hours) AS Actual
    """
    df = run_query(query)
    if not df.empty:
        # Flag stations where actual hours exceed planned hours
        df['Overloaded'] = df['Actual'] > df['Planned']

        # Interactive Plotly chart
        fig = px.bar(df, x="Station", y=["Planned", "Actual"], barmode="group",
                     color="Overloaded", color_discrete_map={True: 'red', False: 'green'},
                     title="Planned vs Actual Hours per Station")
        st.plotly_chart(fig)

# --- Page 3: Capacity Tracker ---
elif page == "Capacity Tracker":
    st.title("Capacity Tracker")
    query = """
        MATCH (wk:Week)-[hc:HAS_CAPACITY]->(c:Capacity)
        RETURN wk.id AS Week,
               (hc.own + hc.hired + hc.overtime) AS Total_Capacity,
               hc.deficit AS Deficit
        ORDER BY Week
    """
    df = run_query(query)
    if not df.empty:
        # Display deficit weeks in red using Streamlit styling
        def color_deficit(val):
            color = 'red' if val < 0 else 'green'
            return f'color: {color}'
        st.dataframe(df.style.map(color_deficit, subset=['Deficit']))

# --- Page 4: Worker Coverage ---
elif page == "Worker Coverage":
    st.title("Worker Coverage")
    query = """
        MATCH (w:Worker)-[:CAN_COVER]->(s:Station)
        WITH s, count(w) AS Worker_Count, collect(w.name) AS Workers
        RETURN s.name AS Station, Worker_Count, Workers
        ORDER BY Worker_Count ASC
    """
    df = run_query(query)
    if not df.empty:
        # Highlight single points of failure (Worker_Count == 1)
        def highlight_spof(val):
            color = 'red' if val == 1 else ''
            return f'background-color: {color}'
        st.markdown("**Stations in RED have only 1 certified worker (Single Point of Failure)!**")
        st.dataframe(df.style.map(highlight_spof, subset=['Worker_Count']))

# --- Page 5: Self-Test (Mandatory) ---
elif page == "Self-Test":
    st.title("Self-Test")
    st.markdown("Running automated checks...")

    # Check 1: the graph contains nodes
    nodes_df = run_query("MATCH (n) RETURN count(n) AS count")
    if not nodes_df.empty and nodes_df['count'].sum() > 0:
        st.success("✅ Graph is populated with nodes")
    else:
        st.error("❌ Graph is empty")

    # Check 2: the graph contains relationships (expects more than 100)
    rels_df = run_query("MATCH ()-[r]->() RETURN count(r) AS count")
    if not rels_df.empty and rels_df['count'].sum() > 100:
        st.success("✅ Graph has the expected number of relationships")
    else:
        st.error("❌ Missing relationships")

    st.balloons()
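
Before deploying, the Aura connection and graph contents can be verified outside Streamlit. A hypothetical standalone check (not part of the PR), reading the same credential names from environment variables:

```python
# smoke_test.py (hypothetical): verify connectivity and rough graph size.
import os
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    os.environ["NEO4J_URI"],
    auth=(os.environ["NEO4J_USERNAME"], os.environ["NEO4J_PASSWORD"]),
)
with driver.session() as session:
    nodes = session.run("MATCH (n) RETURN count(n) AS c").single()["c"]
    rels = session.run("MATCH ()-[r]->() RETURN count(r) AS c").single()["c"]
driver.close()
print(f"nodes={nodes}, relationships={rels}")  # expect rels > 100 per the self-test
```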
@@ -0,0 +1,9 @@
week,own_staff_count,hired_staff_count,own_hours,hired_hours,overtime_hours,total_capacity,total_planned,deficit
w1,10,2,400,80,0,480,612,-132
w2,10,2,400,80,40,520,645,-125
w3,10,2,400,80,0,480,398,82
w4,10,2,400,80,20,500,550,-50
w5,10,2,400,80,30,510,480,30
w6,9,2,360,80,0,440,520,-80
w7,10,2,400,80,40,520,600,-80
w8,10,2,400,80,20,500,470,30
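
The sheet is internally consistent: total_capacity equals own_hours + hired_hours + overtime_hours, and deficit equals total_capacity - total_planned (for w1: 480 - 612 = -132). A quick pandas check, with an illustrative file name:

```python
# Validate the derived columns of the capacity sheet.
import pandas as pd

df = pd.read_csv("capacity.csv")  # illustrative name for the sheet above
assert ((df.own_hours + df.hired_hours + df.overtime_hours) == df.total_capacity).all()
assert ((df.total_capacity - df.total_planned) == df.deficit).all()
print("capacity sheet is internally consistent")
```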