AI Quiz Maker turns a simple study prompt into a structured quiz with easy, medium, and hard questions. It helps students and educators generate consistent study material fast, with outputs ready for reading or integration. Use this AI quiz generator when you want topic-focused practice questions without manual research.
Created by Bitbash, built to showcase our approach to Scraping and Automation!
If you are looking for ai-quiz-maker, you've just found your team — Let’s Chat. 👆👆
AI Quiz Maker converts free-text study requests into multi-difficulty quizzes by extracting the core topic, gathering trusted reference information, and generating questions in multiple output formats. It solves the problem of spending time searching, summarizing, and drafting questions manually. It’s built for learners, teachers, course creators, and developers who need reusable quiz content.
- Detects the main study topic from natural language prompts.
- Collects relevant reference pages for the topic to ground question content.
- Organizes key facts, events, and entities into a structured knowledge base.
- Generates easy, medium, and hard questions from extracted information.
- Exports the quiz in Markdown, HTML, and JSON for different use cases.
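The steps above can be sketched end to end. The function names below are illustrative stand-ins, not the project's actual API; the real wiring lives in src/runner.py with LLM calls and live page fetches:

```python
# Hypothetical stand-ins for the pipeline stages described above.

def extract_topic(prompt: str) -> str:
    # The real tool uses an LLM; here we just normalize the prompt.
    return prompt.strip().rstrip("?")

def fetch_sources(topic: str) -> list[dict]:
    # Stand-in: the real fetcher retrieves reference pages concurrently.
    return [{"title": topic,
             "url": f"https://example.org/{topic.replace(' ', '_')}",
             "status": "ok"}]

def build_knowledge_base(sources: list[dict]) -> dict:
    # Stand-in: the real parser extracts facts, events, and entities.
    return {"facts": [], "sources": [s["url"] for s in sources]}

def generate_quiz(kb: dict) -> dict:
    # Stand-in: the real generator fills each tier from the knowledge base.
    return {"easy": [], "medium": [], "hard": []}

def run(prompt: str) -> dict:
    topic = extract_topic(prompt)
    sources = fetch_sources(topic)
    kb = build_knowledge_base(sources)
    return {"topic": topic, "sources": sources, "quiz": generate_quiz(kb)}

result = run("causes of world war 2")
```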
| Feature | Description |
|---|---|
| Topic extraction | Converts a free-text study request into a clear core topic to drive quiz generation. |
| Asynchronous page retrieval | Pulls multiple reference pages concurrently to reduce total runtime. |
| Multi-difficulty questions | Produces easy, medium, and hard questions to support progressive learning. |
| Structured knowledge base | Organizes extracted facts (events, figures, definitions) to improve question quality. |
| Multi-format export | Outputs Markdown for readability, HTML for embedding, and JSON for apps and pipelines. |
| Reproducible runs | Saves inputs and results so quizzes can be regenerated or compared over time. |
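Asynchronous retrieval is the main runtime win in the table above. A minimal sketch of the pattern, with `fetch_page` as a simulated stand-in for a real HTTP request (the project's actual fetcher lives in src/scraping/async_client.py):

```python
import asyncio
import time

async def fetch_page(url: str) -> str:
    # Stand-in for an HTTP request; simulates ~0.2 s of network latency.
    await asyncio.sleep(0.2)
    return f"<html>content of {url}</html>"

async def fetch_all(urls: list[str]) -> list[str]:
    # gather() runs every fetch concurrently, so total wall time is
    # roughly one round-trip instead of one round-trip per page.
    return await asyncio.gather(*(fetch_page(u) for u in urls))

urls = [f"https://example.org/page/{i}" for i in range(8)]
start = time.perf_counter()
pages = asyncio.run(fetch_all(urls))
elapsed = time.perf_counter() - start
```

With eight simulated pages at 0.2 s each, the concurrent run finishes in roughly 0.2 s rather than the ~1.6 s a sequential loop would take.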
| Field Name | Field Description |
|---|---|
| inputPrompt | The original study request provided by the user. |
| topic | The extracted core topic used to drive research and quiz creation. |
| sources | A list of reference pages used for grounding (title, URL, retrieval status). |
| knowledgeBase | Structured facts extracted from sources (entities, events, dates, summaries). |
| quiz.easy | Array of easy questions and answer options. |
| quiz.medium | Array of medium questions and answer options. |
| quiz.hard | Array of hard questions and answer options. |
| output.markdown | Markdown-formatted quiz content for direct viewing. |
| output.html | HTML-formatted quiz content for embedding in web pages. |
| output.json | JSON-formatted quiz content for programmatic use. |
| runStats | Timing and run metadata (pages fetched, durations, success counts). |
| errors | Any extraction, retrieval, or generation errors encountered during the run. |
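Programs consuming a run record can check the fields above before use. A minimal structural validator; the field names come from the table, but the check logic itself is an assumption, not the project's schema code:

```python
# Top-level fields from the output table above.
REQUIRED_FIELDS = {"inputPrompt", "topic", "sources", "knowledgeBase",
                   "quiz", "output", "runStats", "errors"}

def validate_run(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks well-formed."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - record.keys())]
    quiz = record.get("quiz", {})
    for tier in ("easy", "medium", "hard"):
        if not isinstance(quiz.get(tier), list):
            problems.append(f"quiz.{tier} is not an array")
    return problems

sample = {
    "inputPrompt": "world war 2", "topic": "World War 2",
    "sources": [], "knowledgeBase": {},
    "quiz": {"easy": [], "medium": [], "hard": []},
    "output": {}, "runStats": {}, "errors": [],
}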
```json
{
  "easy": [
    {
      "question": "What year did World War 2 start?",
      "answers": ["1939", "1940", "1941", "1942"]
    }
  ],
  "medium": [
    {
      "question": "Which countries were part of the Axis powers?",
      "answers": ["Germany", "Italy", "Japan", "Hungary"]
    }
  ],
  "hard": [
    {
      "question": "What was Operation Barbarossa?",
      "answers": ["The German invasion of the Soviet Union"]
    }
  ]
}
```
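A quiz dict in this shape can be walked directly. A small sketch that renders it to Markdown; the heading style here is my choice, not necessarily what the project's exporter emits:

```python
def quiz_to_markdown(quiz: dict) -> str:
    # Walk the tiers in fixed order and emit numbered questions
    # with indented answer options.
    lines = []
    for tier in ("easy", "medium", "hard"):
        lines.append(f"## {tier.capitalize()}")
        for i, q in enumerate(quiz.get(tier, []), start=1):
            lines.append(f"{i}. {q['question']}")
            for opt in q["answers"]:
                lines.append(f"   - {opt}")
    return "\n".join(lines)

sample = {
    "easy": [{"question": "What year did World War 2 start?",
              "answers": ["1939", "1940", "1941", "1942"]}],
    "medium": [], "hard": [],
}
md = quiz_to_markdown(sample)
```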
```
AI Quiz Maker/
├── src/
│   ├── runner.py
│   ├── cli.py
│   ├── agent/
│   │   ├── topic_extractor.py
│   │   ├── planner.py
│   │   └── quiz_generator.py
│   ├── scraping/
│   │   ├── wiki_fetcher.py
│   │   ├── async_client.py
│   │   └── url_builder.py
│   ├── processing/
│   │   ├── content_parser.py
│   │   ├── knowledge_base.py
│   │   └── difficulty_router.py
│   ├── exporters/
│   │   ├── export_markdown.py
│   │   ├── export_html.py
│   │   └── export_json.py
│   ├── schemas/
│   │   └── output.schema.json
│   └── config/
│       ├── settings.example.json
│       └── logging.yaml
├── data/
│   ├── inputs.sample.txt
│   ├── sample_output.json
│   └── sample_output.md
├── tests/
│   ├── test_topic_extractor.py
│   ├── test_wiki_fetcher.py
│   ├── test_quiz_generator.py
│   └── fixtures/
│       ├── wiki_pages.html
│       └── prompts.json
├── scripts/
│   ├── run_local.sh
│   └── format_check.sh
├── requirements.txt
├── pyproject.toml
├── LICENSE
└── README.md
```
- Students use it to generate a study quiz from a topic prompt, so they can practice with structured questions across difficulty levels.
- Teachers use it to quickly produce quiz sets for lesson plans, so they can spend more time teaching and less time writing questions.
- Tutors use it to tailor practice quizzes per student request, so they can personalize sessions without extra prep.
- Course creators use it to draft question banks from core topics, so they can accelerate content production and iteration.
- Developers use the JSON output to feed learning apps, so they can integrate an AI quiz generator into existing workflows.
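For the developer use case, the JSON export drops straight into app code. A hedged sketch that flattens a quiz dict into (difficulty, question, answers) tuples for a practice session; `to_practice_deck` is a hypothetical helper, not part of the project:

```python
import random

def to_practice_deck(quiz: dict, shuffle: bool = True) -> list[tuple[str, str, list[str]]]:
    # Flatten all tiers into one deck, optionally shuffled for drilling.
    deck = [(tier, q["question"], q["answers"])
            for tier in ("easy", "medium", "hard")
            for q in quiz.get(tier, [])]
    if shuffle:
        random.shuffle(deck)
    return deck

quiz = {
    "easy": [{"question": "What year did World War 2 start?",
              "answers": ["1939", "1940", "1941", "1942"]}],
    "medium": [],
    "hard": [{"question": "What was Operation Barbarossa?",
              "answers": ["The German invasion of the Soviet Union"]}],
}
deck = to_practice_deck(quiz, shuffle=False)
```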
How do I run it locally?
Install dependencies, then run the CLI with a prompt. The runner triggers topic extraction, reference retrieval, knowledge-base building, and quiz generation, then writes Markdown/HTML/JSON outputs into the output directory. Use the sample prompt file in data/inputs.sample.txt to validate your setup quickly.
What topics work best?
Clear, specific prompts produce the best results (e.g., “World War 2 causes and key battles” instead of “history”). If you need broad coverage, include scope hints like region, time period, or subtopics so the topic extractor can anchor the quiz generation.
Can I control the number of questions per difficulty?
Yes. Configure counts per tier (easy/medium/hard) in the settings file. The generator uses those limits while balancing coverage across the knowledge base, so you get consistent quiz sizes across runs.
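A settings fragment along these lines controls the per-tier counts; the exact key names below are assumptions, since src/config/settings.example.json is not reproduced here:

```json
{
  "questions_per_difficulty": {
    "easy": 5,
    "medium": 5,
    "hard": 5
  },
  "max_reference_pages": 8
}
```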
What happens if a reference page fails to load or parse?
The run continues using available sources. Failures are recorded in errors, and runStats will reflect partial retrieval. In most cases, the tool still produces a usable quiz, but increasing source limits or retry settings improves completeness.
Primary Metric: Generates a 15-question quiz (5 per difficulty) from a typical single-topic prompt in 18–35 seconds on a standard developer laptop, depending on the number of reference pages retrieved.
Reliability Metric: Achieves a 96–99% successful run completion rate across repeated runs on common topics, with most failures caused by transient network fetch issues.
Efficiency Metric: Sustains 6–12 concurrent page fetches during asynchronous retrieval, keeping CPU usage moderate while shifting the workload to I/O-bound operations.
Quality Metric: Produces 90–97% structurally valid question objects (well-formed question text and answer arrays) with high content coverage when at least 3–5 reference pages are successfully parsed.
