- Project Overview
- Project Structure
- Model Overview
- Dataset
- Training & Fine-tuning
- Model Performance
- API Usage
This repository demonstrates a Transformer-encoder-based sequence classification model built from scratch in PyTorch. The model is pre-trained on the tweet_eval dataset for emotion classification, with labels such as anger, joy, optimism, sadness, fear, and love.
The model is then fine-tuned on the Amazon Review Sentiment dataset for binary classification, with the labels negative and positive.
```
├── Architectures/            # Model architectures
│   └── Basic_Sequence_classification.py
├── layers/                   # Custom Transformer layers
│   ├── attention.py
│   ├── embedding.py
│   ├── encoderlayer.py
│   └── feedforward.py
├── best_model.pt             # Saved PyTorch model weights
├── fine_tune.ipynb           # Fine-tuning notebook
├── trainer.ipynb             # Training notebook
├── finetuned-assistant/      # (Optional) Related outputs or helper modules
├── wandb/                    # Weights & Biases logs (if used)
└── README.md                 # Project description
```
The `Transformer_For_Sequence_Classification2` model is a custom implementation resembling the BERT encoder architecture, composed of:
- Token Embedding: Converts token IDs to dense vectors.
- Positional Encoding: Adds sequence order information.
- Transformer Encoder: Custom multi-head self-attention encoder stack.
- Dropout Layer
- Classification Head: Maps pooled embedding to 6 emotion classes.
You can find the individual building blocks in the layers/ directory.
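As a rough illustration only (this is not the repository's actual implementation; the class and parameter names below are hypothetical, and it uses `nn.TransformerEncoder` in place of the custom layers), the composition above can be sketched as:

```python
import math
import torch
import torch.nn as nn

class SketchSequenceClassifier(nn.Module):
    """Illustrative BERT-style encoder classifier (hypothetical names)."""

    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=2,
                 num_classes=6, max_len=512, dropout=0.1):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)  # token IDs -> dense vectors
        # Fixed sinusoidal positional encoding adds sequence-order information
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=4 * d_model,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.dropout = nn.Dropout(dropout)
        self.head = nn.Linear(d_model, num_classes)  # classification head

    def forward(self, token_ids):
        x = self.token_emb(token_ids) + self.pe[: token_ids.size(1)]
        x = self.encoder(x)
        x = self.dropout(x.mean(dim=1))  # mean-pool over the sequence
        return self.head(x)
```

The repository's own versions of these building blocks live in layers/ (attention.py, embedding.py, encoderlayer.py, feedforward.py).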
- Dataset: tweet_eval
- Task: Emotion classification
- Classes: anger, joy, optimism, sadness, fear, love
- Source: Twitter
```python
from datasets import load_dataset

dataset = load_dataset("tweet_eval", "emotion")
```

Use the provided notebooks:
- trainer.ipynb: Contains the training loop, evaluation, and logging. See the notebook for details.
- fine_tune.ipynb: Fine-tunes the model on the Amazon Review Sentiment dataset. See the notebook for details.
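The full training loop lives in trainer.ipynb; as a minimal sketch (hypothetical model and dataloader names, assuming cross-entropy loss), one epoch looks roughly like:

```python
import torch
import torch.nn as nn

def train_one_epoch(model, dataloader, optimizer, device="cpu"):
    """One pass over the data; returns the mean training loss."""
    criterion = nn.CrossEntropyLoss()
    model.train()
    total, steps = 0.0, 0
    for token_ids, labels in dataloader:
        token_ids, labels = token_ids.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(token_ids)          # (batch, num_classes)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        total += loss.item()
        steps += 1
    return total / max(steps, 1)
```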
You can save the model using:

```python
torch.save(model.state_dict(), "best_model.pt")
```

| Dataset | Training Type | Accuracy | F1-Score |
|---|---|---|---|
| Tweet Dataset | From Scratch | 65.5% | 60.3% |
| Amazon Reviews | Fine-tuned | 89.1% | 88.8% |
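To reuse the saved checkpoint for inference, rebuild the same architecture and restore the weights. A minimal sketch, using a stand-in `nn.Linear` in place of the real model class:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model class; substitute your own.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), "best_model.pt")

# Later: construct the same architecture, then restore the weights.
# map_location="cpu" lets a GPU-trained checkpoint load on a CPU-only machine.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("best_model.pt", map_location="cpu"))
restored.eval()  # disable dropout for inference
```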
Overall improvement from fine-tuning: +28.5 F1 points over the from-scratch baseline.
Check if the API is healthy (please don't abuse it):

```shell
curl -X GET https://sentiment-analyzer-hm69.onrender.com
```

Response:

```json
{"message":"Sentiment analysis model is up and running! Have a great Day XD"}
```

Example usage:
```shell
curl -X POST https://sentiment-analyzer-hm69.onrender.com/predict \
  -H "Content-Type: application/json" \
  -d '{"review": "This product is amazing!"}'
```

Response:

```json
{"Negative":0.00019336487457621843,"Positive":0.9998067021369934}
```
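The same request can also be made from Python. A sketch using only the standard library (the helper function names here are illustrative, not part of the API):

```python
import json
import urllib.request

API_URL = "https://sentiment-analyzer-hm69.onrender.com/predict"

def predict_sentiment(review: str) -> dict:
    """POST a review to the API and return the class-probability dict."""
    payload = json.dumps({"review": review}).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def top_label(scores: dict) -> str:
    """Pick the highest-probability label from a response like the one above."""
    return max(scores, key=scores.get)

# Example (requires network access):
# print(top_label(predict_sentiment("This product is amazing!")))
```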