
Prompt Engineering Playground

This project is for experimenting with prompt engineering techniques for Large Language Models (LLMs) using the Gemini API.

Structure

  • src/: Python source code
    • main.py: Main script to run experiments.
    • prompt_tech/: Core modules for the project.
      • api.py: Handles interaction with the Gemini API.
      • runner.py: Manages the execution of experiments and saving results.
  • prompts/: Prompt templates and examples, stored as text files.
  • data/: Input data for your experiments (e.g., CSV files).
  • results/: Output of your experiments (e.g., JSON files with model responses).
  • tests/: Tests for your experiment code.

Setup

  1. Install dependencies:

    uv pip install -r requirements.txt
  2. Set up your environment variables:

    • Create a .env file in the root of the project.
    • Add your Gemini API key to the .env file:
      GEMINI_API_KEY="YOUR_API_KEY"
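
The key can then be loaded at runtime. A minimal stdlib sketch is shown below; the project may instead use a package such as python-dotenv (assumption), and the `load_env` helper here is hypothetical:

```python
import os

def load_env(path=".env"):
    """Parse simple KEY="VALUE" lines from a .env file into os.environ."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Existing environment variables take precedence.
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

load_env()
api_key = os.environ.get("GEMINI_API_KEY")
```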
      

Usage

  1. Add your prompts:

    • You can add new prompts and techniques to the prompts dictionary in src/main.py.
    • For more complex prompts, you can save them as text files in the prompts/ directory and read them in your code.
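
As a rough illustration, the prompts dictionary might map technique names to templates like this (the keys and template text here are hypothetical, not the project's actual contents):

```python
# Hypothetical shape of the prompts dictionary in src/main.py.
prompts = {
    "zero_shot": "Classify the sentiment of this review: {text}",
    "few_shot": (
        "Review: great value -> positive\n"
        "Review: broke after a day -> negative\n"
        "Review: {text} ->"
    ),
    "chain_of_thought": (
        "Classify the sentiment of this review, thinking step by step "
        "before answering: {text}"
    ),
}

# Templates saved under prompts/ can be read the same way:
from pathlib import Path

def load_prompt(name):
    """Load a prompt template from the prompts/ directory."""
    return Path("prompts", f"{name}.txt").read_text()
```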
  2. Run the experiments:

    python src/main.py
  3. Analyze the results:

    • The results of the experiments are saved to results/experiment_results.json.
    • Each entry records the prompt, the model's response, the technique used, and the latency of the call.
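
A quick way to compare techniques is to average latency per technique. The sketch below assumes the results file is a JSON array whose entries carry the fields listed above; the exact schema may differ:

```python
import json

def summarize(path="results/experiment_results.json"):
    """Return the mean latency per technique from a results file."""
    with open(path) as f:
        results = json.load(f)
    by_technique = {}
    for r in results:
        by_technique.setdefault(r["technique"], []).append(r["latency"])
    return {t: sum(v) / len(v) for t, v in by_technique.items()}
```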