Advanced AI-Powered Content Moderation System
Detect and classify NSFW content with state-of-the-art YOLOv8 technology
Features • Quick Start • Demo • API • Documentation
- Overview
- Features
- Demo Results
- Installation
- Quick Start
- API Usage
- Detection Classes
- Performance
- Model Specifications
- Advanced Usage
- Deployment
- Best Practices
- FAQ
- Support
Limit-X NSFW Detector is a production-ready, high-performance content moderation model built on YOLOv8 Nano architecture. It provides real-time detection and classification of NSFW content with 18 distinct classes covering various body parts and exposure levels.
- ✅ Real-time Detection: Process images in under 200ms
- ✅ High Accuracy: 80-90% confidence scores on most classes
- ✅ Lightweight: Only 6.2MB model size
- ✅ Edge Ready: Deploy on mobile, edge devices, or cloud
- ✅ Multi-class: 18 detailed classes for granular moderation
- ✅ Production Tested: Battle-tested in real-world scenarios
- 18 Detection Classes: Comprehensive body part and exposure classification
- Multi-Object Detection: Detect multiple objects in a single image
- Bounding Box Localization: Precise location of detected content
- Confidence Scoring: Reliability score for each detection (0-1)
- Batch Processing: Process multiple images simultaneously
- GPU Acceleration: CUDA support for high-speed inference
- Export Options: ONNX, TensorRT, CoreML, TorchScript
| Category | Classes | Use Case |
|---|---|---|
| Critical | Exposed genitalia, exposed breasts, exposed buttocks, exposed anus | High-priority blocking |
| Moderate | Covered private parts, exposed belly, exposed armpits | Context-dependent filtering |
| General | Faces (male/female), feet, armpits, belly | Metadata and analytics |
Before (Original Image)
Input: image.jpg (320x320)
After (Predicted with Bounding Boxes)
Output: prediction_result.jpg
Detection Results:
✓ FACE_FEMALE → Confidence: 88.4%
✓ FEMALE_BREAST_COVERED → Confidence: 84.0%
✓ FEMALE_BREAST_COVERED → Confidence: 83.6%
✓ BELLY_EXPOSED → Confidence: 82.5%
Total Detections: 4
Inference Time: ~72ms
Before (Original Image)
Input: images.jpeg (320x224)
After (Predicted with Bounding Boxes)
Output: prediction_result2.jpg
Detection Results:
✓ FEMALE_BREAST_EXPOSED → Confidence: 78%
✓ FEET_EXPOSED → Confidence: 71%
✓ FACE_FEMALE → Confidence: 69%
✓ FEET_EXPOSED → Confidence: 67%
✓ FEMALE_GENITALIA_EXPOSED → Confidence: 64%
✓ BUTTOCKS_EXPOSED → Confidence: 56%
✓ BELLY_EXPOSED → Confidence: 45%
Total Detections: 7
Inference Time: ~66ms
Status: ⚠️ HIGH RISK - Immediate Action Required
Prerequisites:

```
Python >= 3.8
PyTorch >= 1.9.0
CUDA >= 11.0 (optional, for GPU acceleration)
```

Install the dependencies:

```bash
pip install ultralytics opencv-python pillow numpy
```

Place the `best.pt` model file in your project directory.
Basic detection:

```python
from ultralytics import YOLO

# Load the Limit-X model
model = YOLO('best.pt')

# Run detection
results = model('your_image.jpg')

# Process results
for result in results:
    boxes = result.boxes
    for box in boxes:
        cls_id = int(box.cls[0])
        confidence = float(box.conf[0])
        class_name = model.names[cls_id]
        print(f"Detected: {class_name} (Confidence: {confidence:.2%})")
```

Visualize and save annotated results:

```python
from ultralytics import YOLO
import cv2

model = YOLO('best.pt')
results = model('image.jpg')

# Get annotated image
annotated = results[0].plot()

# Save result
cv2.imwrite('output.jpg', annotated)
```

A minimal Flask moderation endpoint:

```python
from flask import Flask, request, jsonify
from ultralytics import YOLO
import cv2
import numpy as np

app = Flask(__name__)
model = YOLO('best.pt')

@app.route('/detect', methods=['POST'])
def detect_nsfw():
    # Get image from request
    file = request.files['image']
    img_bytes = file.read()
    nparr = np.frombuffer(img_bytes, np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)

    # Run detection
    results = model(img, conf=0.4)

    # Parse results
    detections = []
    risk_level = "SAFE"
    for box in results[0].boxes:
        cls_id = int(box.cls[0])
        confidence = float(box.conf[0])
        class_name = model.names[cls_id]

        # Determine risk
        if "GENITALIA_EXPOSED" in class_name:
            risk_level = "CRITICAL"
        elif "EXPOSED" in class_name and confidence > 0.6 and risk_level != "CRITICAL":
            risk_level = "HIGH"

        detections.append({
            "class": class_name,
            "confidence": confidence,
            "bbox": box.xyxy[0].tolist()
        })

    return jsonify({
        "risk_level": risk_level,
        "detections": detections,
        "total_detections": len(detections)
    })

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

Test the endpoint:

```bash
curl -X POST http://localhost:5000/detect \
  -F "image=@test_image.jpg"
```

Example response:

```json
{
  "risk_level": "HIGH",
  "detections": [
    {
      "class": "FEMALE_BREAST_EXPOSED",
      "confidence": 0.78,
      "bbox": [120, 150, 280, 350]
    },
    {
      "class": "FACE_FEMALE",
      "confidence": 0.69,
      "bbox": [180, 50, 260, 140]
    }
  ],
  "total_detections": 2
}
```

The Limit-X model detects 18 distinct classes:
Critical classes:

| ID | Class Name | Description | Risk Level |
|---|---|---|---|
| 3 | FEMALE_BREAST_EXPOSED | Exposed female breast | 🔴 CRITICAL |
| 4 | FEMALE_GENITALIA_EXPOSED | Exposed female genitalia | 🔴 CRITICAL |
| 14 | MALE_GENITALIA_EXPOSED | Exposed male genitalia | 🔴 CRITICAL |
| 2 | BUTTOCKS_EXPOSED | Exposed buttocks | 🔴 CRITICAL |
| 6 | ANUS_EXPOSED | Exposed anus | 🔴 CRITICAL |

Moderate classes:

| ID | Class Name | Description | Risk Level |
|---|---|---|---|
| 0 | FEMALE_GENITALIA_COVERED | Covered female genitalia | 🟡 MODERATE |
| 16 | FEMALE_BREAST_COVERED | Covered female breast | 🟡 MODERATE |
| 17 | BUTTOCKS_COVERED | Covered buttocks | 🟡 MODERATE |
| 15 | ANUS_COVERED | Covered anus | 🟡 MODERATE |
| 13 | BELLY_EXPOSED | Exposed belly/abdomen | 🟡 MODERATE |
| 11 | ARMPITS_EXPOSED | Exposed armpits | 🟡 MODERATE |
| 5 | MALE_BREAST_EXPOSED | Exposed male chest | 🟡 MODERATE |
| 7 | FEET_EXPOSED | Exposed feet | 🟡 MODERATE |

Safe classes:

| ID | Class Name | Description | Risk Level |
|---|---|---|---|
| 1 | FACE_FEMALE | Female face | 🟢 SAFE |
| 12 | FACE_MALE | Male face | 🟢 SAFE |
| 8 | BELLY_COVERED | Covered belly | 🟢 SAFE |
| 9 | FEET_COVERED | Covered feet | 🟢 SAFE |
| 10 | ARMPITS_COVERED | Covered armpits | 🟢 SAFE |
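For programmatic use, the IDs above can be collected into a lookup table. Below is a minimal sketch: the dict is transcribed from the tables in this section, and the names `CLASS_NAMES`, `risk_of`, etc. are illustrative helpers, not part of the model API.

```python
# Class IDs transcribed from the tables above (not queried from the model)
CLASS_NAMES = {
    0: "FEMALE_GENITALIA_COVERED", 1: "FACE_FEMALE", 2: "BUTTOCKS_EXPOSED",
    3: "FEMALE_BREAST_EXPOSED", 4: "FEMALE_GENITALIA_EXPOSED",
    5: "MALE_BREAST_EXPOSED", 6: "ANUS_EXPOSED", 7: "FEET_EXPOSED",
    8: "BELLY_COVERED", 9: "FEET_COVERED", 10: "ARMPITS_COVERED",
    11: "ARMPITS_EXPOSED", 12: "FACE_MALE", 13: "BELLY_EXPOSED",
    14: "MALE_GENITALIA_EXPOSED", 15: "ANUS_COVERED",
    16: "FEMALE_BREAST_COVERED", 17: "BUTTOCKS_COVERED",
}

# Risk tier per class ID, mirroring the three tables
CRITICAL_IDS = {2, 3, 4, 6, 14}
MODERATE_IDS = {0, 5, 7, 11, 13, 15, 16, 17}

def risk_of(cls_id: int) -> str:
    """Return the risk tier for a detected class ID."""
    if cls_id in CRITICAL_IDS:
        return "CRITICAL"
    if cls_id in MODERATE_IDS:
        return "MODERATE"
    return "SAFE"
```

After loading `best.pt`, you can cross-check this table against `model.names` to confirm the ID ordering matches your model version.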
| Hardware | Batch Size | Inference Time | FPS | Throughput |
|---|---|---|---|---|
| CPU (Intel i7) | 1 | ~180ms | 5.5 | 330 images/min |
| GPU (GTX 1660) | 1 | ~15ms | 66 | 4,000 images/min |
| GPU (RTX 3080) | 1 | ~8ms | 125 | 7,500 images/min |
| GPU (RTX 3080) | 16 | ~90ms | 177 | 10,600 images/min |
Average Confidence: 70-85%
Detection Rate: 95%+
False Positive Rate: <5%
Processing Resolution: 320x320 pixels
Model Size: 6.2 MB
RAM Usage: ~500 MB (CPU) / ~1.5 GB (GPU)
VRAM Usage: ~1.2 GB (GPU inference)
Disk Space: <10 MB
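Latency figures like those above can be reproduced with a small timing harness. A sketch, assuming an already-loaded `model` and a local test image; the `benchmark` helper is illustrative, not part of any library:

```python
import time

def benchmark(infer, n_runs=50, warmup=5):
    """Time a zero-argument inference callable; return (avg_ms, fps)."""
    for _ in range(warmup):          # warm-up runs stabilise caches/CUDA kernels
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    avg_s = (time.perf_counter() - start) / n_runs
    return avg_s * 1000, 1.0 / avg_s

# Usage (assumes `model` is a loaded YOLO instance and test.jpg exists):
# avg_ms, fps = benchmark(lambda: model('test.jpg', verbose=False))
# print(f"{avg_ms:.1f} ms/image, {fps:.0f} FPS")
```

Measure on your own hardware and batch sizes; the table values are reference points, not guarantees.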
Base Model: YOLOv8n
Parameters: 3,014,358 (~3M)
Layers: 225 (Conv, C2f, SPPF, Detect)
Input Size: 320x320 RGB
Output: Bounding boxes + 18 class probabilities
FLOPs: ~8.7 GFLOPs
Epochs: 100
Batch Size: 508
Image Size: 320x320
Optimizer: Auto (AdamW)
Learning Rate: 0.01
Data Augmentation:
- HSV: Enabled
- Flip: 50% horizontal
- Scale: 50% jitter
- Translate: 10%
Loss Weights:
- Box: 7.5
- Class: 0.5
- DFL: 1.5

Model Version: 8.0.173
Training Date: 2023-09-08
Framework: Ultralytics YOLOv8
PyTorch Backend: 1.9+
ONNX Compatible: Yes
TensorRT Compatible: Yes
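The hyperparameters listed above map onto Ultralytics' standard training settings. A sketch of the equivalent overrides as a YAML fragment (key names follow the Ultralytics train configuration; values are the ones listed above, and the HSV augmentation magnitudes are left at framework defaults since only "Enabled" is stated):

```yaml
# Training overrides mirroring the configuration above
epochs: 100
batch: 508
imgsz: 320
optimizer: auto      # resolved to AdamW for this run
lr0: 0.01
fliplr: 0.5          # 50% horizontal flip
scale: 0.5           # 50% scale jitter
translate: 0.1       # 10% translation
box: 7.5             # box loss weight
cls: 0.5             # class loss weight
dfl: 1.5             # DFL loss weight
```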
Adjusting the confidence threshold:

```python
# Low threshold - more detections
results = model('image.jpg', conf=0.25)

# Balanced (recommended)
results = model('image.jpg', conf=0.40)

# High precision - fewer false positives
results = model('image.jpg', conf=0.70)
```

Class filtering:

```python
# Only detect critical classes
critical_classes = [2, 3, 4, 6, 14]  # Exposed private parts
results = model('image.jpg', classes=critical_classes)
```

Batch processing:

```python
import glob

# Get all images
images = glob.glob('images/*.jpg')

# Process in batches (stream=True yields results lazily)
results = model(images, stream=True)
for i, result in enumerate(results):
    print(f"Image {i}: {len(result.boxes)} detections")
    result.save(f'output_{i}.jpg')
```

Video processing:

```python
import cv2
from ultralytics import YOLO

model = YOLO('best.pt')
cap = cv2.VideoCapture('video.mp4')

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    results = model(frame, conf=0.5)
    annotated = results[0].plot()
    cv2.imshow('Limit-X Detection', annotated)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

Model export:

```python
# Export to ONNX (cross-platform)
model.export(format='onnx')

# Export to TensorRT (NVIDIA optimization)
model.export(format='engine', half=True)

# Export to CoreML (iOS/macOS)
model.export(format='coreml')

# Export to TorchScript
model.export(format='torchscript')
```

Dockerfile:

```dockerfile
FROM python:3.9-slim

WORKDIR /app

# Install dependencies
RUN pip install ultralytics opencv-python-headless flask

# Copy model and code
COPY best.pt /app/
COPY app.py /app/

EXPOSE 5000
CMD ["python", "app.py"]
```

Build and run:

```bash
# Build image
docker build -t limitx-nsfw-detector .

# Run container
docker run -p 5000:5000 limitx-nsfw-detector
```

AWS Lambda handler (load the model at module scope so warm invocations reuse it):

```python
import json
import base64
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO('best.pt')

def lambda_handler(event, context):
    # Decode image
    image_data = base64.b64decode(event['image'])
    nparr = np.frombuffer(image_data, np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)

    # Run detection
    results = model(img, conf=0.4)

    # Return results
    detections = []
    for box in results[0].boxes:
        detections.append({
            'class': model.names[int(box.cls[0])],
            'confidence': float(box.conf[0])
        })

    return {
        'statusCode': 200,
        'body': json.dumps(detections)
    }
```

Confidence threshold presets:

```python
# Use-case-based thresholds
THRESHOLDS = {
    'strict': 0.70,     # Minimal false positives
    'balanced': 0.40,   # Recommended for most cases
    'sensitive': 0.25   # Catch everything, review manually
}

results = model('image.jpg', conf=THRESHOLDS['balanced'])
```

Risk classification helper:

```python
def classify_risk(detections, model):
    critical_keywords = ['GENITALIA_EXPOSED', 'BREAST_EXPOSED', 'ANUS_EXPOSED']
    moderate_keywords = ['BUTTOCKS_EXPOSED', 'COVERED', 'BELLY_EXPOSED']

    risk = 'SAFE'
    for box in detections:
        class_name = model.names[int(box.cls[0])]
        if any(k in class_name for k in critical_keywords):
            return 'CRITICAL'
        if any(k in class_name for k in moderate_keywords):
            risk = 'MODERATE'
    return risk
```

Human-review flagging:

```python
def should_review_manually(results, threshold=0.6):
    """Flag low-confidence detections for human review."""
    for box in results[0].boxes:
        if box.conf[0] < threshold:
            return True
    return False

if should_review_manually(results):
    # Send to moderation queue (application-specific hook)
    send_to_review_queue(image, results)
```

Inference optimization:

```python
# Enable FP16 for up to 2x speed boost on supported GPUs
results = model('image.jpg', half=True, device='cuda:0')

# Reduce input size for faster processing
results = model('image.jpg', imgsz=256)  # Trade accuracy for speed

# Disable console output and auto-saving for production
results = model('image.jpg', verbose=False, save=False)
```

For detailed technical documentation, see:
- MODEL_REPORT.md - Complete technical specifications
- API Documentation - Integration guides
- Training Guide - Model training details
Q: What hardware do I need?
A: The model runs on any modern CPU (Intel i5+) with 4GB RAM. GPU is recommended for production use.

Q: Can it process video?
A: Yes! The model supports frame-by-frame video processing. See the Video Processing section.

Q: How accurate is it?
A: Average confidence scores are 70-85% with a detection rate above 95%. Performance varies with image quality and lighting.

Q: Can I use it for fully automated moderation?
A: Yes, but implement human review for critical decisions. See Best Practices.

Q: Can I fine-tune it on my own data?
A: Yes, use the Ultralytics YOLOv8 training pipeline with your custom dataset.

Q: How should I handle user privacy?
A: Process images securely, don't store them without consent, and implement data retention policies. See the legal requirements in MODEL_REPORT.md.

Q: How do I reduce false positives?
A: Increase the confidence threshold (0.6-0.7), implement context checks, and use human review for edge cases.
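The advice in the last answer can be sketched as a small decision helper. The function name `moderation_decision` and the thresholds are illustrative; tune them against your own data:

```python
def moderation_decision(detections, block_threshold=0.7, review_threshold=0.4):
    """Decide per image: 'block', 'review', or 'allow'.

    `detections` is a list of (class_name, confidence) pairs.
    High-confidence exposed content is blocked outright; mid-confidence
    hits are routed to human review instead of being auto-blocked.
    """
    decision = "allow"
    for class_name, conf in detections:
        if "EXPOSED" not in class_name:
            continue  # covered/safe classes never trigger action here
        if conf >= block_threshold:
            return "block"
        if conf >= review_threshold:
            decision = "review"
    return decision

# Example: a mid-confidence hit goes to review rather than auto-block
print(moderation_decision([("FEMALE_BREAST_EXPOSED", 0.55)]))  # review
```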
- Privacy First: Always obtain consent before processing personal images
- Human Oversight: Implement human review for content decisions
- Bias Awareness: The model may have demographic biases
- Legal Compliance: Ensure compliance with local laws (GDPR, CCPA, etc.)
- No Guarantees: AI is not 100% accurate - plan for errors
- Secure Deployment: Protect the model and user data
- Transparent Use: Inform users about automated moderation
```python
from flask import Flask, request, jsonify
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address
from ultralytics import YOLO
import cv2
import numpy as np
import logging

app = Flask(__name__)
model = YOLO('best.pt')

# Implement rate limiting
limiter = Limiter(get_remote_address, app=app, default_limits=["100 per hour"])

# Log all detections (for audit)
logging.basicConfig(filename='detections.log', level=logging.INFO)

@app.route('/detect', methods=['POST'])
@limiter.limit("10 per minute")
def detect_with_logging():
    # Decode the uploaded image as in the API example above
    file = request.files['image']
    img = cv2.imdecode(np.frombuffer(file.read(), np.uint8), cv2.IMREAD_COLOR)
    results = model(img)
    logging.info(f"Detection: {len(results[0].boxes)} objects found")
    return jsonify({"total_detections": len(results[0].boxes)})
```

- Technical Issues: Check MODEL_REPORT.md for troubleshooting
- Model Performance: Review Best Practices
- Integration Help: See API Usage examples
- Ultralytics Documentation: https://docs.ultralytics.com/
- YOLOv8 GitHub: https://github.com/ultralytics/ultralytics
- PyTorch Documentation: https://pytorch.org/docs/
This model is proprietary software. Unauthorized distribution or modification is prohibited.
Β© 2025 Limit-X. All rights reserved.
Built with ❤️ using YOLOv8
Empowering safer digital spaces through AI
