A multiple-choice exam app built with Flask. Pick an exam, answer questions, and get graded results with full history tracking. Exams are plain JSON files — drop one in and it just works.
It goes without saying that, as this is a vibe-coded experiment, I'd suggest only running it locally.
NOTE: Be mindful of NDAs should you contribute questions and exams. In my case I deliberately only gave AI agents information about the exams that the exam bodies publish themselves (i.e. sample questions and syllabus documentation) before getting the agents to generate exam JSON.
Docker (recommended):
```bash
docker compose up --build   # first run
docker compose up           # subsequent runs
docker compose down         # stop
```

The app will be available at http://localhost:5000.
Local development:
```bash
pip install -r requirements.txt
python app/app.py
```

| Variable | Default | Description |
|---|---|---|
| `SECRET_KEY` | dev value (warns on startup) | Flask session signing key — set this in production |
| `HISTORY_DB` | `app/data/history.db` | Path to the SQLite history database |
| `GITHUB_REPO` | `RedcentricCyber/Control-Alt-Defeat` | Repo slug used by the "report inaccuracy" button |
| `CAD_PRODUCTION` | off | Set to `1` when serving over HTTPS — enables Secure session cookies and HSTS |
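For orientation, the snippet below shows how a Flask app typically consumes settings like these. It is an illustrative sketch only, not necessarily how `app/app.py` actually reads them.

```python
# Illustrative sketch: how a Flask app typically reads these settings.
# Not necessarily the exact code in app/app.py.
import os
from flask import Flask

app = Flask(__name__)

# Session signing key: falls back to a dev value, hence the startup warning.
app.config["SECRET_KEY"] = os.environ.get("SECRET_KEY", "dev-key")

HISTORY_DB = os.environ.get("HISTORY_DB", "app/data/history.db")
GITHUB_REPO = os.environ.get("GITHUB_REPO", "RedcentricCyber/Control-Alt-Defeat")

# Secure cookies are only sent over HTTPS, so CAD_PRODUCTION=1 should be
# reserved for deployments behind TLS.
if os.environ.get("CAD_PRODUCTION") == "1":
    app.config["SESSION_COOKIE_SECURE"] = True
```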
| Exam | Question Pool | Difficulty | Source |
|---|---|---|---|
| CREST CCT Multichoice Preparation | 1821 | Hard | ChatGPT, Claude, Gemini |
| CyberScheme CSTM | 108 | Medium | ChatGPT |
| CREST CPSA Multichoice Preparation | 493 | Medium | Gemini |
Add a JSON file to `app/exams/` following this format:

```json
{
"title": "My Exam",
"description": "A short description shown on the exam card.",
"difficulty": "MEDIUM",
"pass_mark": 70,
"time_limit": 30,
"num_questions": 20,
"reward": "FLAG{optional-reward-for-passing}",
"based_on": "Optional — what the exam is based on",
"created_by": "Optional — author name",
"links": [
{ "text": "Study Guide", "url": "https://example.com" }
],
"questions": [
{
"category": "Topic Area",
"question": "What is the answer to this question?",
"options": {
"A": "Wrong answer",
"B": "Wrong answer",
"C": "Correct answer",
"D": "Wrong answer"
},
"correct": "C",
"explanation": "Optional explanation shown after grading.",
"skill": "Optional skill value to allow for a subcategory under the main one, as used on CREST syllabus documentation"
}
]
}
```

I found Gemini code studio to be best at creating an exam using something similar to the below prompt.
```
Create an XYZ exam .json file, using the below format
{
"title": "My Exam",
"description": "A short description shown on the exam card.",
"difficulty": "MEDIUM",
"pass_mark": 70,
"time_limit": 30,
"num_questions": 20,
"reward": "FLAG{optional-reward-for-passing}",
"based_on": "Optional — what the exam is based on",
"created_by": "Optional — author name",
"links": [
{ "text": "Study Guide", "url": "https://example.com" }
],
"questions": [
{
"category": "Topic Area",
"question": "What is the answer to this question?",
"options": {
"A": "Wrong answer",
"B": "Wrong answer",
"C": "Correct answer",
"D": "Wrong answer"
},
"correct": "C",
"explanation": "Optional explanation shown after grading.",
"skill": "Optional skill value to allow for a subcategory under the main one, as used on CREST syllabus documentation"
}
]
}
The XYZ exam syllabus is below
CATEGORY|SKILL
Here are some sample questions to judge the skill level required
EXAMPLE QUESTIONS
Also note:
* Please make sure explanations are helpful to candidates' revision and help them understand why an answer is correct.
* I want at least 5 questions per skill
* Please ensure all the options (incorrect and correct) are similar length/verbosity
* Minimise repetition and similar questions
```
Key fields:
* `time_limit` — minutes; set to `0` for untimed
* `num_questions` — how many to serve per session (can be less than the total pool)
* `category` — questions are sampled proportionally across categories so every topic is represented (see the sketch after this list)
* `correct` — the key (A/B/C/D) of the right answer
* `reward` — optional text (e.g. voucher code, CTF flag) shown once to users who pass with default exam settings (question count and time limit unchanged)
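To make the `category` and `num_questions` behaviour concrete, proportional sampling could look roughly like this. The sketch illustrates the idea only; it is not the app's actual implementation.

```python
# Sketch of proportional-by-category sampling (illustrative, not the app's actual code).
import json
import random
from collections import defaultdict

def sample_questions(exam_path, num_questions):
    with open(exam_path) as f:
        exam = json.load(f)

    by_category = defaultdict(list)
    for question in exam["questions"]:
        by_category[question["category"]].append(question)

    pool_size = len(exam["questions"])
    picked = []
    for questions in by_category.values():
        # Each category contributes in proportion to its share of the pool,
        # with at least one question so every topic is represented.
        share = max(1, round(num_questions * len(questions) / pool_size))
        picked.extend(random.sample(questions, min(share, len(questions))))

    random.shuffle(picked)
    return picked[:num_questions]
```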
If running with Docker, the exams directory is mounted as a volume — just drop the file in `app/exams/` and refresh the page. No rebuild needed.
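Generated files occasionally come back malformed, so it can be worth a quick structural check before dropping one in. The sketch below is a standalone helper, not part of the app, and it assumes the non-optional fields from the format above are required.

```python
# Quick structural check for a new exam file (standalone helper, not part of the app).
import json
import sys

REQUIRED = {"title", "description", "difficulty", "pass_mark",
            "time_limit", "num_questions", "questions"}

with open(sys.argv[1]) as f:
    exam = json.load(f)

missing = REQUIRED - exam.keys()
assert not missing, f"missing top-level fields: {missing}"

for i, question in enumerate(exam["questions"], start=1):
    assert {"A", "B", "C", "D"} <= question["options"].keys(), f"question {i}: expected options A-D"
    assert question["correct"] in question["options"], f"question {i}: 'correct' must match an option key"

print(f"{exam['title']}: {len(exam['questions'])} questions look structurally sound")
```

Save it as something like `check_exam.py` (the name is arbitrary) and run `python check_exam.py app/exams/my_exam.json`.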
Since the included exams were generated by AI, inaccuracies are possible. Use the Report button on any question during an exam to create a pre-filled GitHub issue, or create an issue directly with details of the problem and the correction.