This small project lets you run Ollama with the llama-3-8b model locally. It uses Whisper Timestamped to transcribe audio from a microphone of your choice. When the completion (streamed in the console) is ready, the output is read aloud with the pyttsx3 library (in the future I plan to replace it with an AI voice generator that is also open-source and local :)
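The stream-then-speak flow described above can be sketched in Python. The helper below is a hypothetical illustration, not code from this project: it buffers streamed tokens into whole sentences so a TTS engine such as pyttsx3 could speak each one as soon as it completes, instead of waiting for the full response.

```python
def sentences_from_stream(tokens):
    """Buffer streamed tokens and yield complete sentences for TTS."""
    terminators = ".!?"
    buffer = ""
    for tok in tokens:
        buffer += tok
        while any(t in buffer for t in terminators):
            # Cut at the earliest sentence terminator in the buffer.
            idx = min(i for i in (buffer.find(t) for t in terminators) if i != -1)
            sentence = buffer[: idx + 1].strip()
            if sentence:
                yield sentence
            buffer = buffer[idx + 1 :]
    if buffer.strip():  # flush any trailing partial sentence
        yield buffer.strip()

# Simulated token stream, shaped like chunks from a streamed completion:
result = list(sentences_from_stream(["Hello", " world", ". How", " are", " you?"]))
# result == ["Hello world.", "How are you?"]
```

In the real application each yielded sentence would be handed to pyttsx3 (e.g. `engine.say(sentence)` followed by `engine.runAndWait()`) while the rest of the completion keeps streaming.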
- You must have Python installed to run this project. If you don't have it, you can download it here.
- Install Ollama from its website: Ollama
- Run the installer for your operating system and follow the prompts.
Create a new environment with the following command:

```
py -m venv venv
```

Activate the environment with the following command:

```
source venv/Scripts/activate
```

Install the required packages with the following command:

```
pip install -r requirements.txt
```

Run the application with the following command:

```
python main.py
```