You can test the Orchestrator API (ClaimAgent) using Postman or any REST client. Below is an example for the /api/claim-chat endpoint:
- **URL:** `http://localhost:7192/api/claim-chat`
- **Method:** POST
- **Headers:** `Content-Type: application/json`
- **Body:**

```json
{
  "ConversationId": "test-conv-1",
  "UserId": "user-123",
  "UserMessage": "I want to file a claim for my car accident.",
  "ChannelId": "teams",
  "Metadata": {
    "claimType": "auto",
    "locale": "en-US"
  }
}
```
- Open Postman and create a new POST request to `http://localhost:7192/api/claim-chat` (adjust the port if needed).
- Set the request body to raw and select JSON.
- Paste the sample request body above.
- Click Send.
- You should receive a response similar to:

```json
{
  "assistantReply": "...AI-generated response...",
  "messages": [
    { "role": "system", "content": "..." },
    { "role": "user", "content": "I want to file a claim for my car accident." },
    { "role": "assistant", "content": "...AI-generated response..." }
  ]
}
```
Note:
- The API must be running locally (`dotnet run` in `source/ClaimAgent`).
- Adjust the port if you changed it in `launchSettings.json`.
- You can add authentication headers if your deployment requires them.
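If you prefer the command line to Postman, the same request can be sent from a short script. Below is a minimal Python sketch; the port and endpoint path come from the example above, and `post_claim_chat` is a hypothetical helper, not part of the project:

```python
import json
import urllib.request

# Sample request body from the walkthrough above.
payload = {
    "ConversationId": "test-conv-1",
    "UserId": "user-123",
    "UserMessage": "I want to file a claim for my car accident.",
    "ChannelId": "teams",
    "Metadata": {"claimType": "auto", "locale": "en-US"},
}

def post_claim_chat(body, url="http://localhost:7192/api/claim-chat"):
    """POST the body to the orchestrator API and return the parsed JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# With the API running locally (dotnet run in source/ClaimAgent):
#   reply = post_claim_chat(payload)
#   print(reply["assistantReply"])
print(json.dumps(payload, indent=2))
```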
AI Claim Integration is a cloud-native solution for automating and orchestrating insurance claim processing using Azure services, .NET, and AI. The project integrates multiple microservices, bots, and background workers to streamline claim handling, leveraging Azure OpenAI, Redis, Azure Storage, and more.
- ClaimAgent: Azure Functions app for orchestrating chat-based claim workflows using OpenAI, Azure Storage, and Semantic Kernel for advanced AI orchestration.
- ClaimTeamsAgent: Microsoft Teams bot for interacting with users, collecting claim information, and forwarding messages to the orchestrator API for processing.
- ClaimWorkerService: Background worker for processing claim data, generating embeddings, and managing vector storage.
- Infrastructure: Terraform scripts for provisioning Azure resources (resource groups, storage, Redis, Key Vault, Cognitive Services, etc.).
- .NET 8 SDK
- Terraform 1.6+
- Azure CLI
- Access to an Azure subscription
- **Clone the repository**

  ```shell
  git clone <repo-url>
  cd ai-claim-integration
  ```
- **Provision Infrastructure**
  - Update variables in `infra/terraform-vars/dev.terraform.tfvars` as needed.
  - Run Terraform:

    ```shell
    cd infra
    terraform init
    terraform plan -var-file="terraform-vars/dev.terraform.tfvars"
    terraform apply -var-file="terraform-vars/dev.terraform.tfvars"
    ```
- **Configure Environment**
  - Set required environment variables (see `local.settings.json` and `appsettings.json` in each service for details).
  - Ensure Azure resources (Storage, Redis, Key Vault, etc.) are provisioned and accessible.
- **Build and Run Services**
  - Each service (ClaimAgent, ClaimTeamsAgent, ClaimWorkerService) can be built and run independently:

    ```shell
    cd source/ClaimAgent
    dotnet build
    dotnet run
    ```

  - Repeat for the other services as needed.
- **Build:** Use `dotnet build` in each service directory.
- **Run Locally:** Use `dotnet run` or configure launch profiles in Visual Studio.
- **Unit/Integration Tests:** (Add test instructions if available.)
Contributions are welcome! Please open issues or submit pull requests for improvements, bug fixes, or new features.
- Fork the repository
- Create a feature branch
- Commit your changes
- Open a pull request
For major changes, please open an issue first to discuss your proposal.
This project is licensed under the MIT License.
Semantic Kernel is an open-source SDK from Microsoft that enables integration of AI models (like Azure OpenAI) with traditional programming logic. In this project, Semantic Kernel is used within the ClaimAgent to:
- Orchestrate chat-based workflows using large language models (LLMs)
- Manage conversation history and context
- Integrate retrieval-augmented generation (RAG) by injecting claim data into the conversation
- Enable advanced prompt engineering and memory management
How it works here:
- The Orchestrator service uses Semantic Kernel to build a chat history, inject system prompts, and call Azure OpenAI for chat completions.
- Claim data is loaded from Azure Blob Storage and provided as context to the LLM, improving the accuracy and relevance of responses.
- Conversation memory is stored in Redis for continuity across user sessions.
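The orchestration loop described above (chat history, system prompt, RAG context, Redis-backed memory) can be sketched language-agnostically. The real implementation is C# with Semantic Kernel; the Python below only illustrates the flow, with the Blob Storage, Redis, and Azure OpenAI calls replaced by stubs:

```python
# Stand-ins for the real Azure pieces: Blob Storage (claim data),
# Redis (conversation memory), and Azure OpenAI via Semantic Kernel.
memory = {}  # conversation_id -> message list; a stand-in for Redis

def load_claim_context(conversation_id):
    # Would read the claim document from Azure Blob Storage.
    return "Claim CLM-001: auto accident, pending review."

def call_llm(history):
    # Would be a Semantic Kernel chat completion against Azure OpenAI.
    return "echo: " + history[-1]["content"]

def orchestrate(conversation_id, user_message):
    history = memory.get(conversation_id, [])
    if not history:
        # First turn: inject the system prompt plus RAG context.
        history.append({
            "role": "system",
            "content": "You are a claims assistant. Context: "
                       + load_claim_context(conversation_id),
        })
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": call_llm(history)})
    memory[conversation_id] = history  # persist back to Redis
    return history

history = orchestrate("test-conv-1", "I want to file a claim.")
print(history[-1]["content"])  # → echo: I want to file a claim.
```

Swapping the stubs for real clients (BlobClient, Redis, Semantic Kernel's chat completion service) gives the shape of `Orchestrator.cs`.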
The Teams bot enables users to interact with the claim system directly from Microsoft Teams. It:
- Receives user messages and claim details
- Forwards messages to the Orchestrator API for processing
- Returns AI-generated responses and claim status updates to the user
- Logs user and conversation details for traceability
Key features:
- Built using the Microsoft Bot Framework and Teams SDK
- Supports authentication and secure communication with backend APIs
- Can be extended to support adaptive cards, notifications, and more
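The bot's hand-off to the orchestrator is essentially a mapping from an incoming activity to the request schema shown in the testing section. A rough Python sketch (the field names follow the Bot Framework activity shape; the real bot is C# and the mapping function is hypothetical):

```python
def to_orchestrator_request(activity):
    """Map an incoming Teams activity to the orchestrator's request schema."""
    return {
        "ConversationId": activity["conversation"]["id"],
        "UserId": activity["from"]["id"],
        "UserMessage": activity["text"],
        "ChannelId": "teams",
        "Metadata": {},
    }

# A trimmed-down activity as the Bot Framework would deliver it:
request = to_orchestrator_request({
    "conversation": {"id": "19:abc"},
    "from": {"id": "user-123"},
    "text": "What is the status of my claim?",
})
print(request["UserMessage"])  # → What is the status of my claim?
```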
The Orchestrator API is the core logic engine that:
- Receives requests from the Teams bot and other clients
- Uses Semantic Kernel to manage conversation flow and AI completions
- Maintains conversation state and memory in Redis
- Loads claim data from Azure Blob Storage for RAG
- Returns structured responses and conversation history
Typical flow:
- User sends a message in Teams
- Teams bot forwards the message to the Orchestrator API
- Orchestrator builds the chat context, injects claim data, and calls Azure OpenAI via Semantic Kernel
- The response is stored and sent back to the user via the bot
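The Redis-backed conversation state in step 4 amounts to round-tripping the message list as JSON under a per-conversation key. A minimal sketch (the key naming and serialization are assumptions; the in-memory class mimics a Redis string client's get/set surface):

```python
import json

class InMemoryStore:
    """Stand-in with the same get/set surface as a Redis string client."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

store = InMemoryStore()  # swap for a real Redis client in a deployment

def save_history(conversation_id, messages):
    store.set(f"chat:{conversation_id}", json.dumps(messages))

def load_history(conversation_id):
    raw = store.get(f"chat:{conversation_id}")
    return json.loads(raw) if raw else []

save_history("test-conv-1", [{"role": "user", "content": "hi"}])
print(load_history("test-conv-1"))  # → [{'role': 'user', 'content': 'hi'}]
```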
For more details, see the source code in `source/ClaimAgent/Services/Orchestrator.cs`, `source/ClaimTeamsAgent/Bots/TeamsAgentBot.cs`, and related files.