This repository contains a Python tool that combines web scraping, local Large Language Model (LLM) inference via Ollama, and semantic embeddings from Nomic Embed to build a Retrieval-Augmented Generation (RAG) system.
Key Features:
- Website Scraping: Extracts relevant text content from a specified website.
- Local LLM with Ollama: Utilizes Ollama to run LLMs locally, ensuring data privacy and offline functionality.
- Nomic Embed: Generates high-quality embeddings for the scraped data, enabling accurate semantic search and retrieval.
- RAG Implementation: Integrates the LLM and embedding models to provide contextually relevant answers based on the scraped website content.
- Easy to use: Simple command-line interface.
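The retrieval step at the core of the RAG flow can be sketched as follows. The `embed` function below is a hypothetical stand-in (a toy bag-of-letters vector) so the sketch runs without a live Ollama server; the real tool would embed each chunk with the `nomic-embed-text` model instead:

```python
import math

def embed(text: str) -> list[float]:
    # Toy stand-in for the real embedding call. The actual tool would send
    # `text` to Ollama's nomic-embed-text model; this bag-of-letters vector
    # exists only so the sketch runs offline.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 1) -> list[str]:
    # Rank scraped text chunks by embedding similarity to the query and
    # return the top_k most relevant ones for the LLM prompt.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]
```

The retrieved chunks are then prepended to the user's question as context before the prompt is sent to the locally running LLM.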
Use Cases:
- Creating a local knowledge base from website data.
- Building a chatbot that answers questions based on website content.
- Summarizing and analyzing information from web pages.
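On the scraping side, the text-extraction idea can be illustrated with the standard library's `html.parser`; this is a minimal sketch of the concept, not the tool's actual implementation:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""

    SKIP = {"script", "style"}

    def __init__(self) -> None:
        super().__init__()
        self.parts: list[str] = []
        self._skip_depth = 0  # >0 while inside a skipped element

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def extract_text(html: str) -> str:
    # Reduce an HTML page to its visible text, ready for chunking
    # and embedding.
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)
```

In practice the extracted text would be split into overlapping chunks before embedding, so each chunk stays within the embedding model's context window.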
Dependencies:
- Python 3 (the tool itself is a Python script)
- Ollama (serves both the local LLM and the embedding model)
- Nomic Embed (available through Ollama as the `nomic-embed-text` model)
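Assuming Ollama is already installed and running, setup typically involves pulling the required models and installing the Python client. The chat model name below (`llama3`) is an example; substitute whichever model the tool is configured to use:

```shell
# Pull the embedding model and an example chat model into the local Ollama store.
ollama pull nomic-embed-text
ollama pull llama3

# Install the Python client used to talk to the local Ollama server.
pip install ollama
```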