A powerful Spring Boot backend for AI-powered applications that integrates various language models (LLMs), vector databases, and natural language processing capabilities.
## Features

- Basic Chat Interface: Direct communication with AI models like GPT-4 and locally hosted models via Ollama
- RAG (Retrieval Augmented Generation): Enhanced responses with context from your knowledge base
- Function Calling: Execute predefined Java functions from natural language prompts
- Recommendation Engine: Book and author recommendations based on chat history analysis
- Image Generation: AI-powered image creation from text descriptions
- Speech-to-Text: Audio transcription services
## Tech Stack

- Framework: Spring Boot 3.2+
- AI SDKs:
  - Spring AI (for OpenAI, Ollama integration)
  - LangChain4j (for RAG, embeddings, function calling)
- Vector Database: Chroma DB
- LLMs:
  - OpenAI GPT models (3.5/4)
  - Ollama (for local model deployment)
- Database: PostgreSQL (user data, chat history)
- Authentication: JWT-based auth
- Documentation: OpenAPI/Swagger
## Prerequisites

- Java 17+
- Maven 3.8+
- Docker & Docker Compose
- Chroma DB instance
- API keys for OpenAI (if using their services)
- Ollama setup (for local models)
## Getting Started

1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/ai-integration-spring.git
   cd ai-integration-spring
   ```

2. Configure environment variables. Create a `.env` file in the root directory with the following variables:

   ```env
   OPENAI_API_KEY=your_openai_key
   CHROMA_DB_HOST=localhost
   CHROMA_DB_PORT=8000
   OLLAMA_API_URL=http://localhost:11434
   DB_USERNAME=postgres
   DB_PASSWORD=yourpassword
   ```

3. Start the services:

   ```bash
   docker-compose up -d
   ```

4. Build and run the application:

   ```bash
   mvn clean install
   mvn spring-boot:run
   ```

5. Verify the setup: open your browser and navigate to `http://localhost:8080/swagger-ui.html`.
## API Endpoints

- `POST /api/chat/basic` - Basic chat with LLM
- `POST /api/chat/rag` - Enhanced chat with RAG
- `POST /api/function/call` - Execute functions via natural language
- `POST /api/embedding/generate` - Generate embeddings
- `POST /api/embedding/search` - Search similar documents
- `GET /api/recommendations/books` - Get book recommendations
- `GET /api/recommendations/authors` - Get author recommendations
- `POST /api/images/generate` - Generate images from text prompts
- `POST /api/transcript/audio` - Convert audio to text
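Once the application is running, an endpoint such as the basic chat route can be exercised from plain Java with the JDK's `java.net.http` client. This is a minimal sketch: the JSON body shape (a `message` field) and the bearer-token header are illustrative assumptions, not the documented request schema; check the Swagger UI for the actual contract.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ChatRequestExample {
    // Builds a POST request for the basic chat endpoint.
    // NOTE: the {"message": ...} body shape is an assumption for illustration;
    // see /swagger-ui.html for the real request schema.
    public static HttpRequest buildChatRequest(String baseUrl, String message) {
        String body = "{\"message\": \"" + message + "\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/chat/basic"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer <your-jwt>")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildChatRequest("http://localhost:8080", "Hello");
        System.out.println(req.method() + " " + req.uri());
        // Send with HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString())
        // once the server is up.
    }
}
```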
## Project Structure

```
ai-integration-spring/
├── src/
│   ├── main/
│   │   ├── java/com/example/aiintegration/
│   │   │   ├── config/          # Configuration classes
│   │   │   ├── controller/      # REST controllers
│   │   │   ├── model/           # Entity classes
│   │   │   ├── repository/      # Data access
│   │   │   ├── service/         # Business logic
│   │   │   │   ├── ai/          # AI service implementations
│   │   │   │   ├── embedding/   # Embedding services
│   │   │   │   └── llm/         # LLM providers
│   │   │   ├── dto/             # Data transfer objects
│   │   │   └── util/            # Utility classes
│   │   └── resources/
│   │       ├── application.yml  # Application config
│   │       └── static/          # Static resources
│   └── test/                    # Test classes
├── .mvn/wrapper/                # Maven wrapper
├── docker-compose.yml           # Docker services
├── Dockerfile                   # Application container
├── pom.xml                      # Maven dependencies
└── README.md                    # This file
```
## Configuration

The application uses Chroma DB as the vector database. Configure the connection in `application.yml`:

```yaml
chroma:
  url: ${CHROMA_DB_HOST}:${CHROMA_DB_PORT}
  collection: ai-documents
```

Configure multiple LLM providers:

```yaml
spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
      chat:
        options:
          model: gpt-4
          temperature: 0.7
    ollama:
      base-url: ${OLLAMA_API_URL}
      chat:
        options:
          model: llama3
```

To add a new function:

- Create a new service with the function implementation
- Register the function in `FunctionRegistry`
- Add function metadata to the available tools
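A registry like the `FunctionRegistry` referenced above can be sketched as a simple name-to-function map that pairs each callable with the metadata the LLM sees when selecting tools. This is a hypothetical stand-in, not the project's actual class, and the `weather` function is invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class FunctionRegistrySketch {
    // Pairs a callable with the metadata exposed to the LLM as a tool.
    record ToolEntry(String description, Function<String, String> fn) {}

    private final Map<String, ToolEntry> tools = new HashMap<>();

    public void register(String name, String description, Function<String, String> fn) {
        tools.put(name, new ToolEntry(description, fn));
    }

    public String call(String name, String argument) {
        ToolEntry entry = tools.get(name);
        if (entry == null) throw new IllegalArgumentException("Unknown function: " + name);
        return entry.fn().apply(argument);
    }

    public static void main(String[] args) {
        FunctionRegistrySketch registry = new FunctionRegistrySketch();
        // Hypothetical function; in the real app this lives in its own service.
        registry.register("weather", "Returns the weather for a city",
                city -> "Sunny in " + city);
        System.out.println(registry.call("weather", "Oslo")); // Sunny in Oslo
    }
}
```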
To add a new knowledge source:

- Create a document loader in the `repository` package
- Implement the chunking strategy
- Register the new source in the `EmbeddingService`
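The chunking strategy can be as simple as a fixed-size splitter with overlap, a common default in RAG pipelines; this sketch assumes character-based chunks, and the actual sizes and strategy are up to the implementer.

```java
import java.util.ArrayList;
import java.util.List;

public class FixedSizeChunker {
    // Splits text into chunks of at most `size` characters, each overlapping
    // the previous chunk by `overlap` characters to preserve context at
    // chunk boundaries.
    public static List<String> chunk(String text, int size, int overlap) {
        if (size <= overlap) throw new IllegalArgumentException("size must exceed overlap");
        List<String> chunks = new ArrayList<>();
        int start = 0;
        while (start < text.length()) {
            int end = Math.min(start + size, text.length());
            chunks.add(text.substring(start, end));
            if (end == text.length()) break;
            start = end - overlap; // step back by the overlap
        }
        return chunks;
    }

    public static void main(String[] args) {
        System.out.println(chunk("abcdefghij", 4, 1)); // [abcd, defg, ghij]
    }
}
```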
## Performance

- Caching: Implement Redis caching for frequently accessed embeddings
- Async Processing: Use Spring's async capabilities for non-blocking operations
- Connection Pooling: Configure proper database and HTTP client pooling
## Security

- All API endpoints are secured with JWT authentication
- LLM API keys are stored securely using environment variables
- Input validation and sanitization for all user inputs
- Rate limiting on sensitive endpoints
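The rate-limiting item can be prototyped with a per-client token bucket. This is a sketch only; a production deployment would more likely use a library such as Bucket4j or a gateway-level limiter, and the capacity and refill interval here are arbitrary demo values.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TokenBucketLimiter {
    private final int capacity;          // max burst size in tokens
    private final long refillIntervalMs; // time to regain one token
    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    private static final class Bucket {
        double tokens;
        long lastRefill;
    }

    public TokenBucketLimiter(int capacity, long refillIntervalMs) {
        this.capacity = capacity;
        this.refillIntervalMs = refillIntervalMs;
    }

    // Returns true if the client may proceed, false if rate-limited.
    public synchronized boolean tryAcquire(String clientId, long nowMs) {
        Bucket b = buckets.computeIfAbsent(clientId, id -> {
            Bucket fresh = new Bucket();
            fresh.tokens = capacity;
            fresh.lastRefill = nowMs;
            return fresh;
        });
        // Refill proportionally to elapsed time, capped at capacity.
        double refill = (nowMs - b.lastRefill) / (double) refillIntervalMs;
        b.tokens = Math.min(capacity, b.tokens + refill);
        b.lastRefill = nowMs;
        if (b.tokens >= 1) {
            b.tokens -= 1;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TokenBucketLimiter limiter = new TokenBucketLimiter(2, 1000);
        System.out.println(limiter.tryAcquire("alice", 0));    // true
        System.out.println(limiter.tryAcquire("alice", 10));   // true
        System.out.println(limiter.tryAcquire("alice", 20));   // false (bucket empty)
        System.out.println(limiter.tryAcquire("alice", 1100)); // true (one token refilled)
    }
}
```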
## Contributing

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.