
We've all been there… it's midnight, your essay is due, and the word limit is 500. You ask ChatGPT to shorten it, and it spits out 490… or 512. Then you waste half an hour going back and forth trying to hit the exact count.

That's exactly why I built ShortNExact: an agentic AI tool that trims or expands your text to hit the exact word count you need. No more stress, no more guesswork.

Just plug in your OpenAI key, paste your text, and boom: deadline saved.
## Features

- Exact Word Count Matching: Uses an intelligent LLM orchestrator to iteratively adjust text until the precise word count is achieved
- Two Processing Modes:
  - Concisely Present Ideas: Aggressively condense large texts while preserving key concepts
  - Shorten Text: Gently reduce word count with minimal structural changes
- Agentic AI Architecture: Autonomous tool selection and execution using OpenAI function calling
- Scalable Backend: HAProxy load balancing with multiple FastAPI instances
- Rate Limiting: Redis-based rate limiting to prevent abuse
- API Key Management: PostgreSQL-backed authentication system with time-limited keys
- Modern UI: Clean Gradio interface for easy interaction
- Production-Ready: Dockerized microservices with health checks and auto-restart
## Architecture

```
┌─────────────┐
│  Frontend   │   (Gradio UI on port 80/443)
│  (Gradio)   │
└──────┬──────┘
       │
       ▼
┌─────────────┐
│   HAProxy   │   (Load Balancer on port 4000)
│    (LB)     │
└──────┬──────┘
       │
  ┌────┼────┐
  ▼    ▼    ▼
┌─────┐ ┌─────┐ ┌─────┐
│API-1│ │API-2│ │API-3│   (FastAPI instances)
└──┬──┘ └──┬──┘ └──┬──┘
   └───────┼───────┘
           │
      ┌────┴────┐
      ▼         ▼
┌─────────┐ ┌──────────┐
│  Redis  │ │ Postgres │
│ (Cache) │ │   (DB)   │
└─────────┘ └──────────┘
```
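For reference, a round-robin HAProxy backend matching this topology could look like the sketch below. The server names, ports, and frontend/backend labels here are assumptions for illustration; the authoritative version is the repo's haproxy.cfg.

```
frontend api_front
    bind *:4000
    default_backend api_back

backend api_back
    balance roundrobin
    server api-1 api-1:8000 check
    server api-2 api-2:8000 check
    server api-3 api-3:8000 check
```

With `balance roundrobin`, HAProxy cycles requests across the three FastAPI instances, and `check` drops an instance from rotation if its health check fails.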
## Installation

Clone the repository:

```bash
git clone https://github.com/AdiistheGoat/ShortNExact
cd ShortNExact
```

Create a virtual environment and run the build script:

```bash
python3 -m venv venv
source ./venv/bin/activate
bash start.sh
```

The deployment is driven by these configuration files:

- haproxy.cfg
- init.sql
- pgbouncer.ini
- compose.yml
- userlist.txt

Pull the prebuilt images and tag them for Compose:

```bash
docker pull adityagoyal333/short_n_exact:api_img
docker pull adityagoyal333/short_n_exact:lb_img
docker pull adityagoyal333/short_n_exact:frontend_img

docker tag adityagoyal333/short_n_exact:api_img api_img:latest
docker tag adityagoyal333/short_n_exact:lb_img lb_img:latest
docker tag adityagoyal333/short_n_exact:frontend_img frontend_img:latest
```

Restart the stack:

```bash
docker compose down
docker system prune
docker compose up -d
```
## Usage

1. Generate an API Key (first-time users):
   - Navigate to the API Key Generation tab
   - Enter your name, email, and desired validity period (max 31 days)
   - Save your generated key
2. Process Your Text:
   - Enter your OpenAI API key
   - Paste your ShortNExact app key (generated in step 1)
   - Input your text
   - Set your target word count
   - Select a processing mode:
     - Concisely present ideas: for aggressive summarization
     - Shorten text: for gentle reduction
   - Click Submit and get your perfectly sized text!
## API Examples

Shorten or expand text:

```bash
curl -X GET http://localhost:4000/ \
  -H "Content-Type: application/json" \
  -H "ip_address: YOUR_IP" \
  -d '{
    "llm_api_key": "YOUR_OPENAI_KEY",
    "app_key": "YOUR_APP_KEY",
    "option": 1,
    "input_text": "Your text here...",
    "no_of_words": 500
  }'
```

Generate an API key:

```bash
curl -X GET http://localhost:4000/api_key \
  -H "Content-Type: application/json" \
  -H "X-Forwarded-For: YOUR_IP" \
  -d '{
    "name": "Your Name",
    "email": "your.email@example.com",
    "validity": 30
  }'
```

## Tech Stack

| Component | Technology |
|---|---|
| Frontend | Gradio |
| Backend API | FastAPI |
| Load Balancer | HAProxy |
| Database | PostgreSQL |
| Cache | Redis |
| LLM Provider | OpenAI API |
| NLP | NLTK |
| Containerization | Docker & Docker Compose |
## Project Structure

```
ShortNExact/
├── api.py                      # FastAPI backend with rate limiting
├── ml_layer.py                 # LLM orchestrator and agentic processing
├── frontend.py                 # Gradio UI components
├── compose.yml                 # Docker Compose configuration
├── haproxy.cfg                 # Load balancer configuration
├── init.sql                    # Database schema initialization
├── start.sh                    # Build and deployment script
├── Dockerfile.api              # API service container
├── Dockerfile.frontend         # Frontend service container
├── Dockerfile.lb               # Load balancer container
├── api_requirements.txt        # Python dependencies for API
├── frontend_requirements.txt   # Python dependencies for frontend
└── README.md
```
## How It Works

ShortNExact uses an agentic LLM orchestrator that:

1. Analyzes the current word count against the target
2. Selects the appropriate tool:
   - `process_concisely`: aggressive restructuring
   - `process_short`: gentle trimming
   - `increase_words`: expand content
   - `decrease_words`: minor reduction
3. Executes the selected tool via OpenAI function calling
4. Iterates until the exact word count is achieved
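The tool-selection step can be sketched as a pure decision function. The tool names mirror the list above; the control flow (one big first pass, then fine-tuning nudges) is my assumption about how such an orchestrator behaves, not the project's exact logic:

```python
def select_tool(current_words: int, target_words: int,
                first_pass: bool, aggressive: bool) -> str:
    """Decide which tool the orchestrator should invoke next (sketch)."""
    if current_words == target_words:
        return "done"  # exact match reached, stop iterating
    if first_pass and current_words > target_words:
        # the first reduction uses whichever mode the user selected
        return "process_concisely" if aggressive else "process_short"
    # later passes nudge the count toward the target
    return "increase_words" if current_words < target_words else "decrease_words"

print(select_tool(620, 500, first_pass=True, aggressive=True))   # process_concisely
print(select_tool(480, 500, first_pass=False, aggressive=False)) # increase_words
```

In the real system the chosen name would be passed to OpenAI's function-calling API, and the loop repeats until this function returns "done".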
The system uses regex-based word counting with Unicode support for accurate results across languages.
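As a rough illustration, that counting scheme can be reproduced with Python's built-in `re` module, whose `\w` class is Unicode-aware by default (the project's actual pattern may differ):

```python
import re

def count_words(text: str) -> int:
    # \w+ matches runs of Unicode word characters, so accented and
    # non-Latin words count the same way ASCII words do
    return len(re.findall(r"\w+", text))

print(count_words("Hello, world!"))       # 2
print(count_words("naïve café déjà vu"))  # 4
```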
## Security & Reliability

- Redis-based rate limiting: 10,000 API key generations per 24 hours
- IP-based tracking: Prevents abuse from single sources
- Time-limited API keys: Maximum 31-day validity
- Health checks: All services monitored with automatic restart
- Database persistence: PostgreSQL with volume mounting
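The rate-limiting pattern described above is commonly built on Redis `INCR` plus `EXPIRE`. Here is a minimal fixed-window sketch; the class name, key scheme, and exact limits are assumptions, not the project's code:

```python
class RateLimiter:
    """Fixed-window rate limiter over a Redis-like client (sketch)."""

    def __init__(self, client, limit: int = 10_000, window_s: int = 24 * 3600):
        self.client = client      # e.g. redis.Redis() in production
        self.limit = limit        # requests allowed per window
        self.window_s = window_s  # 24-hour window

    def allow(self, key: str) -> bool:
        count = self.client.incr(key)               # atomic per-key counter
        if count == 1:
            self.client.expire(key, self.window_s)  # start the window on first hit
        return count <= self.limit
```

Keying on the caller's IP address (e.g. `"ip:203.0.113.7"`) gives the IP-based tracking the list mentions.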
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- OpenAI for the powerful GPT models
- The open-source community for the amazing tools
- All contributors who help improve this project
If this project helped you, please consider giving it a star! It helps others discover the tool.