dk-singh/sample-rag-application
rag-example

Setup Instructions

1. Set Up Environment Variables

Rename the env.example file to .env and provide the required keys.

2. Install All Dependencies

make install-all

This installs the requirements for all three applications: ingestion, embedding, and the RAG app.

3. Run Ingestion

To run the ingestion process, use the following command:

make run-ingestion

This will scrape Wikipedia for the following pages:

  • Artificial intelligence
  • Machine learning
  • Deep learning
  • Natural language processing
  • Computer vision

The raw files will be stored in the data/raw directory.
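The ingestion step can be sketched roughly as follows. This is a hypothetical illustration, not the repo's actual scraper: it fetches plain-text extracts for the five pages above via the public MediaWiki API and writes them to `data/raw/`. The filename scheme and API parameters are assumptions.

```python
# Hypothetical sketch of the ingestion step: fetch plain-text page
# extracts from the MediaWiki API and save them under data/raw/.
# The repo's actual scraper may work differently.
import json
import urllib.parse
import urllib.request
from pathlib import Path

PAGES = [
    "Artificial intelligence",
    "Machine learning",
    "Deep learning",
    "Natural language processing",
    "Computer vision",
]

def fetch_extract(title: str) -> str:
    """Fetch the plain-text extract of one Wikipedia page."""
    params = urllib.parse.urlencode({
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,
        "format": "json",
        "titles": title,
    })
    url = f"https://en.wikipedia.org/w/api.php?{params}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # The API keys the result by page ID; take the single page returned.
    page = next(iter(data["query"]["pages"].values()))
    return page.get("extract", "")

def run_ingestion(out_dir: str = "data/raw") -> None:
    """Save each page's extract as a .txt file in out_dir."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for title in PAGES:
        fname = title.lower().replace(" ", "_") + ".txt"
        (Path(out_dir) / fname).write_text(fetch_extract(title), encoding="utf-8")
```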

4. Run Embedding

To run the embedding process, use the following command:

make run-embedding

This will process the raw data, clean it, and chunk it for embedding. The cleaned data will be stored in the data/processed directory. The processed files will be read, and the embedded values will be stored in a local vector database at data/vectordb.
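The chunking stage might look something like the sketch below. The chunk size, overlap, and embedding model the repo actually uses are not specified here, so the numbers are illustrative assumptions.

```python
# Hypothetical sketch of the chunking stage. The real pipeline also
# cleans the text and embeds each chunk into the vector DB at
# data/vectordb; only the splitting logic is shown.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split cleaned text into overlapping fixed-size chunks for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks
```

Overlapping chunks help the retriever: a sentence that straddles a chunk boundary still appears whole in at least one chunk.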

5. Run RAG Application

To run the RAG (Retrieval-Augmented Generation) process, use the following command:

make run-rag

This will start the FastAPI RAG application on http://localhost:8000.

6. Test RAG Application

When you send a question to the RAG app, it retrieves context from the local vector database and passes it, along with the question, to the LLM to generate a response. Below is a sample request to test the endpoint:

curl -X POST "http://localhost:8000/rag" \
    -H "Content-Type: application/json" \
    -d '{"query": "What is deep learning?", "max_results": 5}'
