221 changes: 91 additions & 130 deletions README.md
@@ -6,160 +6,121 @@ ApplyAI is a cloud-native, AI-powered assistant that helps streamline the job search.

- [Overview & Goals](#overview--goals)
- [Architecture & Tech Stack](#architecture--tech-stack)
- [Core Features](#core-features)
- [Data Model (Firestore)](#data-model-firestore)
- [Getting Started (Local Development)](#getting-started-local-development)
- [Testing Suite](#testing-suite)
- [Deployment Notes](#deployment-notes)
- [Roadmap](#roadmap)
- [License](#license)

## Overview & Goals

ApplyAI helps job seekers automate and improve their job applications. The MVP aims to:

- **Reduce application time** by automating tailored materials.
- **Increase application throughput** by making it easy to generate high-quality applications.
- **Maximize job-fit relevance** by emphasizing the most relevant skills in tailored resumes using state-of-the-art LLMs.

## Architecture & Tech Stack

This project is a full-stack, decoupled application following a modern serverless-first architecture.

```mermaid
graph TD;
User((User)) -->|Next.js/React| FE[Frontend Client];
FE -->|Auth| FB_Auth[Firebase Auth];
FE -->|JSON/REST| BE[FastAPI Backend];
BE -->|Query/Write| FS[(Cloud Firestore)];
BE -->|Prompt/Text| Gemini[Google Gemini AI];
```

### Stack & Rationale

* **Frontend:** **React (Next.js 15)** + **TypeScript** + **Tailwind CSS**
  * Modern, type-safe UI with efficient client-side routing and global state management via React Context.
* **Auth:** **Firebase Authentication** (Google OAuth)
  * Secure identity management that provides unique UIDs to link data across the stack.
* **Backend:** **Python 3.12** + **FastAPI**
  * High-performance, asynchronous framework optimized for I/O-bound tasks like AI model inference.
* **AI Model:** **Google Gemini 2.5 Flash**
  * Used for both conversational "Career Coach" interactions and the "Resume Tailoring" logic.
* **Database:** **Google Firestore (NoSQL)**
  * A serverless document database used to persist user-specific chat history and resume generation sessions.
* **DevOps:** **GitHub Actions** + **Pytest**
  * Automated CI/CD pipeline ensuring code quality, with a fully mocked backend test suite.

## Core Features

1. **Authentication & Security**
   * Secure Google Sign-In with persistent session management.
   * Environment-driven configuration for API keys and Cloud credentials.
2. **Persistent AI Chat History**
   * `POST /chat` and `GET /chats/{user_id}` endpoints.
   * Career guidance that persists across sessions, allowing users to pick up where they left off.
3. **Resume Tailoring Engine**
   * `POST /resumes` and `GET /resumes/{user_id}` endpoints.
   * Intelligent rewriting of resumes based on job descriptions, saved to the cloud for future reference.
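To make the chat flow concrete, here is a minimal sketch of the logic behind `POST /chat`. The function name `handle_chat` and the injected `model`/`db` parameters are illustrative, not the actual `server/main.py` code: generate a reply with Gemini, persist the exchange to the `chats` collection, and return the reply.

```python
from datetime import datetime, timezone

def handle_chat(message: str, user_id: str, model, db) -> str:
    """Sketch of the POST /chat flow: generate a reply, then persist
    the exchange to the `chats` collection."""
    # `model` stands in for the Gemini client (generate_content),
    # `db` for the Firestore client; both are injected to keep this testable.
    reply = model.generate_content(message).text
    db.collection("chats").add({
        "user_id": user_id,
        "messages": [
            {"role": "user", "content": message},
            {"role": "ai", "content": reply},
        ],
        "timestamp": datetime.now(timezone.utc),  # serverTimestamp in production
    })
    return reply
```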

## Data Model (Firestore)

* **chats** (collection)
  * `doc_id` (auto-generated)
  * `user_id` (string)
  * `messages` (array of `{ role: 'user'|'ai', content: string }`)
  * `timestamp` (serverTimestamp)

* **tailored_resumes** (collection)
  * `doc_id` (auto-generated)
  * `user_id` (string)
  * `jobDescription` (string)
  * `originalResume` (string)
  * `tailoredResume` (string - Markdown)
  * `createdAt` (timestamp)
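As an illustration, a helper that builds a `tailored_resumes` document matching this schema might look like the following. The function name is hypothetical; the real backend may construct this inline and use a Firestore server timestamp instead of a client clock.

```python
from datetime import datetime, timezone

def new_tailored_resume_doc(user_id: str, job_description: str,
                            original_resume: str, tailored_resume: str) -> dict:
    # Shape mirrors the `tailored_resumes` schema; in production `createdAt`
    # would be firestore.SERVER_TIMESTAMP rather than the local clock.
    return {
        "user_id": user_id,
        "jobDescription": job_description,
        "originalResume": original_resume,
        "tailoredResume": tailored_resume,  # Markdown string
        "createdAt": datetime.now(timezone.utc),
    }
```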

## Getting Started (Local Development)

### Prerequisites
* Python 3.12+ / Node.js 18+
* Google Cloud Service Account with Firestore and Gemini API access.

### 1) Backend (`server/`)
1. `cd server && python3 -m venv .venv && source .venv/bin/activate`
2. `pip install -r requirements.txt`
3. Create `.env` with `GEMINI_API_KEY` and `GOOGLE_APPLICATION_CREDENTIALS`.
4. Run: `PYTHONPATH=. uvicorn main:app --reload --port 8000`
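A minimal `.env` for local development might look like this (values are placeholders):

```
GEMINI_API_KEY="your-gemini-api-key"
GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
```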

### 2) Frontend (`client/`)
1. `cd client && npm install`
2. Create `.env.local` with your Firebase config and `NEXT_PUBLIC_API_URL`.
3. Run: `npm run dev`
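A typical `.env.local` might look like this (placeholder values; get the Firebase keys from the Firebase Console):

```bash
# Point to your local Python backend
NEXT_PUBLIC_API_URL=http://127.0.0.1:8000

# Firebase configuration (from the Firebase Console)
NEXT_PUBLIC_FIREBASE_API_KEY=...
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=...
NEXT_PUBLIC_FIREBASE_PROJECT_ID=...
NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET=...
NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID=...
NEXT_PUBLIC_FIREBASE_APP_ID=...
NEXT_PUBLIC_FIREBASE_MEASUREMENT_ID=...
```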

## Testing Suite

The backend includes a unit test suite built on `pytest` with `pytest-mock`. To keep CI fast and free, all external calls to Gemini and Firestore are fully mocked, avoiding network dependencies and unnecessary API costs.

**To run tests:**
```bash
cd server
pytest
```
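For illustration, a minimal test in this style stubs the Gemini client with `unittest.mock` so no network call is made. `chat_reply` is a simplified stand-in for the real handler, not the project's actual test code.

```python
from unittest.mock import MagicMock

def chat_reply(model, message: str) -> str:
    """Minimal stand-in for the backend's chat logic."""
    return model.generate_content(message).text

def test_chat_reply_uses_mocked_gemini():
    # The mock mimics the Gemini client's generate_content(...).text shape.
    fake_model = MagicMock()
    fake_model.generate_content.return_value.text = "Tailor your summary."

    assert chat_reply(fake_model, "How do I improve my resume?") == "Tailor your summary."
    fake_model.generate_content.assert_called_once_with("How do I improve my resume?")
```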

## Deployment Notes

The backend is deployed to **Google Cloud Run**. The frontend is currently local but configured for deployment to Vercel or Firebase Hosting.

* **CORS:** The FastAPI app allows requests from `http://localhost:3000`. For production, update `server/main.py` with the deployed frontend domain.

## Roadmap



- [x] **Phase 1:** Core AI Tailoring Logic & Form UI.
- [x] **Phase 2:** Firebase Authentication & Global Context.
- [x] **Phase 3:** Backend Persistence (Firestore) & Naming Convention Alignment.
- [x] **Phase 4:** Mocked Test Infrastructure & History API Endpoints.
- [ ] **Phase 5:** Frontend Hydration (Displaying historical chats/resumes in the UI).
- [ ] **Phase 6:** Production Deployment (Cloud Run & Vercel).

## License

60 changes: 41 additions & 19 deletions client/components/Chat.tsx
@@ -1,15 +1,17 @@
"use client";
import { useState } from "react";
import ReactMarkdown from "react-markdown";
import { useAuth } from "@/context/AuthContext";

interface Message {
role: "user" | "ai";
content: string;
}

export default function Chat() {
const { user } = useAuth();
const [input, setInput] = useState("");
const [messages, setMessages] = useState<Message[]>([]);
const [isLoading, setIsLoading] = useState(false);

const handleSend = async () => {
@@ -19,49 +21,69 @@ export default function Chat() {
setMessages((prev) => [...prev, userMessage]);
setInput("");
setIsLoading(true);

try {
  // Both branches sent the same request; only userId differed, so they
  // collapse into one call. Anonymous users are recorded as the string "None".
  const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      message: userMessage.content,
      userId: user ? user.uid : "None",
    }),
  });

if (!res.ok) throw new Error("Failed to fetch response");

const data = await res.json();
const aiMessage: Message = {
role: "ai",
content: data.reply || data.response,
};
setMessages((prev) => [...prev, aiMessage]);
} catch (error) {
console.error(error);
setMessages((prev) => [
...prev,
{
role: "ai",
content: "Sorry, I encountered an error. Please try again.",
},
]);
} finally {
setIsLoading(false);
}
};


return (
<div className="flex flex-col h-[600px] w-full max-w-2xl border border-brand-gray/30 bg-surface rounded-xl overflow-hidden shadow-2xl shadow-primary/10">
{/* Messages Area */}
<div className="flex-1 overflow-y-auto p-4 flex flex-col gap-4">
{messages.length === 0 && (
<div className="text-center text-brand-gray mt-20">
<p className="text-xl font-semibold text-white">Hi! I'm ApplyAI.</p>
<p className="text-sm">
Ask me how to improve your resume or prepare for an interview.
</p>
</div>
)}

{messages.map((msg, idx) => (
<div
key={idx}
className={`p-3 rounded-lg max-w-[80%] ${
msg.role === "user"
? "bg-primary text-white self-end ml-auto shadow-md"
: "bg-surface border border-brand-gray/30 text-brand-gray self-start"
}`}
>
@@ -74,7 +96,7 @@ export default function Chat() {
)}
</div>
))}

{isLoading && (
<div className="self-start bg-surface border border-brand-gray/20 p-3 rounded-lg text-brand-gray text-sm animate-pulse">
Thinking...
@@ -102,4 +124,4 @@ export default function Chat() {
</div>
</div>
);
}