ServeGPT - Simple Chatbot Using Online & Offline Models

Description

ServeGPT is an innovative LLM-based chatbot built using LangChain, Python, Vite, and ReactJS. This project integrates multiple AI models, including online APIs (gpt-4o and gemini-1.5-flash) and offline models (Meta-Llama-3-8B-Instruct.Q4_0.gguf and qwen2-1_5b-instruct-q4_0.gguf), to provide a versatile and responsive conversational experience. Designed to assist with a wide range of topics, from general assistance to detailed explanations, ServeGPT offers a clean and user-friendly interface. Whether you're looking for real-time interactions or offline capabilities, this chatbot is a powerful tool for exploring AI-driven conversations.

Project Structure

  • frontend: Contains the Vite ReactJS codebase for the user interface.
  • backend: Houses the Python code, including LangChain integration and model handling. Includes a models folder where .gguf model files (e.g., Meta-Llama-3-8B-Instruct.Q4_0.gguf and qwen2-1_5b-instruct-q4_0.gguf) are required.
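Based on the description above, the layout likely looks like the sketch below (app.py and requirements.txt are referenced later in this README; the exact file set may differ):

```
ServeGPT/
├── backend/
│   ├── app.py
│   ├── requirements.txt
│   ├── .env
│   └── models/
│       ├── Meta-Llama-3-8B-Instruct.Q4_0.gguf
│       └── qwen2-1_5b-instruct-q4_0.gguf
└── frontend/
    ├── package.json
    └── src/
```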

Features

  • Multi-Model Support: Utilizes gpt-4o and gemini-1.5-flash via online APIs, and Meta-Llama-3-8B-Instruct.Q4_0.gguf and qwen2-1_5b-instruct-q4_0.gguf offline.
  • Responsive UI: Built with Vite and ReactJS for a seamless user experience.
  • Flexible Integration: Leverages LangChain for managing diverse AI models and workflows.
  • Offline Capability: Supports offline operation with pre-loaded models.
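The multi-model routing could be sketched as a simple registry mapping each dropdown name to either an online provider or a local .gguf path. This is a hypothetical illustration, not the project's actual code; the real backend dispatches through LangChain model classes, and the key and field names here are assumptions:

```python
# Hypothetical registry routing dropdown names to online APIs or local files.
# ServeGPT's actual LangChain wiring may differ; names follow the README.
MODELS = {
    "gpt-4o": {"mode": "online", "provider": "openai"},
    "gemini-1.5-flash": {"mode": "online", "provider": "google"},
    "Meta-Llama-3-8B-Instruct": {
        "mode": "offline",
        "path": "models/Meta-Llama-3-8B-Instruct.Q4_0.gguf",
    },
    "qwen2-1_5b-instruct": {
        "mode": "offline",
        "path": "models/qwen2-1_5b-instruct-q4_0.gguf",
    },
}


def resolve(name: str) -> dict:
    """Return routing info for a model, raising for unknown names."""
    try:
        return MODELS[name]
    except KeyError:
        raise ValueError(f"Unknown model {name!r}; choose one of {sorted(MODELS)}")
```

An online entry would then be instantiated via its provider's LangChain chat class, while an offline entry's path would be handed to a local .gguf loader.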

Requirements

  • Python 3.10: required to run the backend.
  • Node.js 22.16.x: required to run the frontend.

Installation

  1. Clone the repository:
    git clone https://github.com/soh-kaz/ServeGPT.git
  2. Navigate to the project directory:
    cd ServeGPT

Backend Setup

  1. Move to the backend directory:
    cd backend
  2. Install Python dependencies:
    pip install -r requirements.txt
  3. Set up environment variables (e.g., API keys for gpt-4o and gemini-1.5-flash) in a .env file.
  4. Download and Add Models:
    • Create a models folder in the backend directory if it doesn’t exist:
      mkdir models
    • Download the .gguf model files (Meta-Llama-3-8B-Instruct.Q4_0.gguf and qwen2-1_5b-instruct-q4_0.gguf) from Hugging Face and place them in the models folder.
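A minimal .env might look like the following. The variable names are an assumption based on the standard LangChain integrations for OpenAI and Gemini (langchain-openai and langchain-google-genai); check the backend code for the exact names it reads:

```
# backend/.env — assumed variable names; verify against the backend code
OPENAI_API_KEY=sk-your-openai-key
GOOGLE_API_KEY=your-gemini-key
```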

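Before starting the backend, it can be handy to confirm both model files are in place. This is a hypothetical helper (not part of the repository); the filenames come from this README and the default directory is an assumption:

```python
from pathlib import Path

# Offline model files named in the README.
REQUIRED = [
    "Meta-Llama-3-8B-Instruct.Q4_0.gguf",
    "qwen2-1_5b-instruct-q4_0.gguf",
]


def missing_models(models_dir: str = "backend/models") -> list[str]:
    """Return the required .gguf files not present in models_dir."""
    d = Path(models_dir)
    return [name for name in REQUIRED if not (d / name).is_file()]


if __name__ == "__main__":
    missing = missing_models()
    if missing:
        print("Missing model files:", ", ".join(missing))
    else:
        print("All offline models found.")
```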
Frontend Setup

  1. Move to the frontend directory:
    cd ../frontend
  2. Install Node.js dependencies:
    npm install

Running the Application

  1. Start the backend server (from backend directory):
    python app.py
  2. Start the frontend server (from frontend directory):
    npm run dev

Usage

  • Launch the app and select a model from the dropdown (e.g., gpt-4o, gemini-1.5-flash, Meta-Llama-3-8B-Instruct, qwen2-1_5b-instruct).
  • Start a new chat and interact with the AI assistant.
  • Explore topics or request assistance as needed.

Screenshots

Example conversations with Gemini, ChatGPT, Qwen, and Meta-Llama.

Contributing

Feel free to fork this repository, submit issues, or create pull requests. Contributions that improve ServeGPT are welcome.