Face Authentication Attendance System

A complete face recognition-based attendance system with anti-spoofing capabilities, built with Python, OpenCV, and dlib. Now features a modern web interface with real-time camera access!

🎯 Features

  • Real-time Face Detection: Fast HOG-based face detection for live video streams
  • Face Recognition: High-accuracy face recognition using dlib's ResNet-based embeddings (99.38% on LFW)
  • Liveness Detection: Basic anti-spoofing with blink detection and head movement tracking
  • Attendance Management: Complete punch-in/punch-out system with duplicate prevention
  • Multi-user Support: Register and manage multiple users
  • Robust to Lighting: Adaptive histogram equalization for varying lighting conditions
  • Quality Validation: Automatic face quality scoring before registration
  • 🌐 Web UI: Modern, responsive web interface with real-time camera access
  • 📊 Dashboard: Live statistics and attendance tracking
  • 🚀 Deploy Ready: Configured for easy deployment to Vercel and other platforms


🏗 System Architecture

Components

Face Authentication Attendance System/
├── app.py                   # Flask web application
├── templates/
│   └── index.html          # Web UI
├── static/
│   ├── style.css           # Styles
│   └── script.js           # Frontend logic
├── data/
│   ├── embeddings/          # Face embeddings (128-dim vectors)
│   ├── users/               # User profiles (JSON)
│   └── attendance/          # Attendance logs (CSV)
├── src/
│   ├── face_detector.py     # HOG-based face detection
│   ├── face_recognizer.py   # ResNet-based face recognition
│   ├── liveness_detector.py # Anti-spoofing (blink + movement)
│   ├── attendance_manager.py # Attendance tracking
│   └── utils.py             # Helper functions
├── main.py                  # CLI application
├── vercel.json             # Vercel deployment config
├── start.sh                # Quick start script
├── requirements.txt         # Dependencies
├── DEPLOYMENT.md           # Deployment guide
└── README.md               # This file

Technology Stack

| Component | Technology | Purpose |
|---|---|---|
| Face Detection | dlib HOG | Fast, CPU-friendly frontal face detection |
| Face Recognition | dlib ResNet | 128-dim embeddings, 99.38% accuracy on LFW |
| Liveness Detection | Eye Aspect Ratio (EAR) | Blink detection for anti-spoofing |
| Image Processing | OpenCV | Camera access, preprocessing |
| Data Storage | JSON + CSV | User profiles and attendance logs |

📦 Installation

Prerequisites

  • Python 3.8 or higher
  • Webcam (built-in or USB)
  • macOS, Linux, or Windows

Step 1: Clone/Download the Project

If you haven't already, navigate to your project directory:

cd "/Users/vishalsarmah/Desktop/Face Authentication Attendance System"

Step 2: Create Virtual Environment (Recommended)

# Create virtual environment
python3 -m venv venv

# Activate virtual environment
# macOS/Linux:
source venv/bin/activate

# Windows:
venv\Scripts\activate

Step 3: Install Dependencies

pip install --upgrade pip
pip install -r requirements.txt

Note: Installing dlib can be challenging on some systems. If you encounter issues:

macOS:

brew install cmake
pip install dlib

Ubuntu/Debian:

sudo apt-get install build-essential cmake
sudo apt-get install libopenblas-dev liblapack-dev
pip install dlib

Windows: Download a pre-built dlib wheel or use conda:

conda install -c conda-forge dlib

Step 4: Verify Installation

python -c "import cv2, face_recognition, dlib; print('✓ All dependencies installed!')"

🚀 Quick Start

Web Interface (Recommended)

Option 1: Use Start Script

./start.sh

Option 2: Manual Start

# Install Flask dependencies
pip install Flask Flask-CORS

# Run the web app
python app.py

Then open your browser to: http://localhost:5000

Features:

  • 📝 Register Tab: Register new users with webcam
  • Attendance Tab: Mark punch-in/punch-out
  • 📊 Records Tab: View attendance history and statistics
  • 📱 Real-time camera preview
  • 🎯 Face detection guide overlay
  • 📈 Live dashboard statistics

For detailed web UI usage, see WEB_UI_GUIDE.md


Command Line Interface

python main.py

Demo Flow

1. Register a User

  1. Select option 1 (Register New User)
  2. Enter user details:
    • Full name
    • User ID (or auto-generate)
    • Email (optional)
    • Department (optional)
  3. Follow on-screen instructions:
    • Position face in center
    • Ensure good lighting
    • Blink naturally 2-3 times
    • Press s to capture

2. Punch In

  1. Select option 2 (Punch In)
  2. Look at the camera
  3. Blink naturally
  4. System will authenticate and record punch-in time

3. Punch Out

  1. Select option 3 (Punch Out)
  2. Authenticate with face
  3. System records punch-out and calculates total hours

4. View Attendance

  1. Select option 4 (View Attendance Log)
  2. Choose from:
    • View personal attendance
    • View all attendance for today
    • View summary report

🚀 Deployment to Vercel

Quick Deploy

  1. Install Vercel CLI:

    npm install -g vercel
  2. Initialize Git (if not done):

    git init
    git add .
    git commit -m "Initial commit"
  3. Deploy:

    vercel
  4. Access your app at: https://your-project.vercel.app

Important Notes for Vercel

⚠️ Limitations:

  • Vercel has read-only filesystem (attendance data won't persist)
  • 10-second timeout on free tier
  • Better for demos than production

For Production:

  • Consider Railway, Heroku, or DigitalOcean
  • Use external database (PostgreSQL, MongoDB)
  • Implement cloud storage (AWS S3, Cloudinary)

📖 Full deployment guide: See DEPLOYMENT.md

🔬 Technical Details

Face Detection

Method: Histogram of Oriented Gradients (HOG)

  • Algorithm: Extracts gradient orientation histograms from image patches
  • Speed: ~30 FPS on modern CPUs
  • Accuracy: Good for frontal faces, struggles with extreme poses (>45°)
  • Preprocessing: Histogram equalization for lighting normalization

Why HOG?

  • Fast enough for real-time applications
  • Low computational requirements (CPU-only)
  • Robust to moderate lighting variations
  • Well-suited for controlled environments (office, school)

Face Recognition

Model: dlib ResNet-based Face Embedding

  • Architecture: 29-layer ResNet trained on 3 million faces
  • Output: 128-dimensional face embedding (feature vector)
  • Training Data: 3 million faces from various datasets
  • Benchmark: 99.38% accuracy on LFW (Labeled Faces in the Wild)
  • Comparison Method: Euclidean distance between embeddings
  • Threshold: 0.6 (distances below this = same person)

Recognition Process:

  1. Registration:

    Face Image → Face Detection → Landmark Detection → 
    Face Alignment → ResNet → 128-D Embedding → Save to Disk
    
  2. Authentication:

    Face Image → Generate Embedding → 
    Compare with All Known Embeddings → 
    Find Closest Match → Check Distance < Threshold
    
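The authentication step above (comparing a query embedding against every known embedding by Euclidean distance) can be sketched as follows. This is an illustrative helper, not the repo's actual `FaceRecognizer` API; the function name and the dict layout are assumptions:

```python
import numpy as np

def match_face(query_embedding, known_embeddings, threshold=0.6):
    """Find the closest known embedding to a 128-D query embedding.

    known_embeddings: dict mapping user_id -> np.ndarray of shape (128,).
    Returns (user_id, distance) for the closest match below the threshold,
    or (None, best_distance) if nobody is close enough.
    """
    best_id, best_dist = None, float("inf")
    for user_id, emb in known_embeddings.items():
        dist = np.linalg.norm(query_embedding - emb)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    if best_dist < threshold:
        return best_id, best_dist
    return None, best_dist
```

Note the linear scan: with a handful of users this is negligible next to the ~50ms embedding step, so no index structure is needed.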

Why This Model?

  • Pre-trained (no retraining needed for new users)
  • Fast inference (~50ms per face on CPU)
  • Robust to aging, makeup, glasses, facial hair
  • Good generalization to unseen faces

Liveness Detection (Anti-Spoofing)

1. Eye Blink Detection

Method: Eye Aspect Ratio (EAR)

EAR = (||p2-p6|| + ||p3-p5||) / (2 * ||p1-p4||)

where p1-p6 are eye landmark points.

  • Open Eye: EAR ≈ 0.3
  • Closed Eye: EAR < 0.25
  • Blink Detection: EAR drops below threshold for 2-3 frames, then rises

Parameters:

  • EAR Threshold: 0.25
  • Consecutive Frames: 2
  • Required Blinks: 2-3
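The EAR formula above translates directly into code. A minimal sketch, assuming the standard dlib ordering of the six eye landmarks (p1 and p4 are the horizontal corners, p2/p3 on top, p5/p6 on the bottom):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: sequence of six (x, y) landmark points p1..p6.

    EAR = (||p2-p6|| + ||p3-p5||) / (2 * ||p1-p4||)
    Drops toward 0 when the eye closes; ~0.3 when open.
    """
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance ||p2 - p6||
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance ||p3 - p5||
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance ||p1 - p4||
    return (v1 + v2) / (2.0 * h)
```

A blink is then registered when the EAR stays below 0.25 for at least two consecutive frames and subsequently rises again.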

2. Head Movement Detection

Tracks face centroid position across frames:

  • Calculates displacement from initial position
  • Movement threshold: 30 pixels
  • Helps detect static photos
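The centroid-tracking idea can be sketched as a small class; the name `MovementTracker` and the bounding-box convention are illustrative assumptions, not the repo's actual `liveness_detector.py` API:

```python
import math

class MovementTracker:
    """Track the face centroid across frames and flag liveness once the
    displacement from the first observed position exceeds a pixel
    threshold. A static photo held in front of the camera barely moves."""

    def __init__(self, threshold=30):
        self.threshold = threshold
        self.origin = None

    def update(self, bbox):
        # bbox: (left, top, right, bottom) of the detected face
        cx = (bbox[0] + bbox[2]) / 2.0
        cy = (bbox[1] + bbox[3]) / 2.0
        if self.origin is None:
            self.origin = (cx, cy)  # first frame sets the reference point
            return False
        dx = cx - self.origin[0]
        dy = cy - self.origin[1]
        return math.hypot(dx, dy) > self.threshold
```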

3. Detection Consistency

Monitors face detection stability:

  • Very high consistency (>98%) may indicate static photo
  • Natural micro-movements cause slight variations
  • Helps identify video replays

Limitations:

  • Can be fooled by video replays with blinking
  • Professional attacks (silicone masks) will bypass
  • Deep learning methods (texture analysis, 3D depth) are more robust

Accuracy Expectations

| Scenario | Expected Accuracy |
|---|---|
| Good lighting, frontal face | 95-99% |
| Moderate lighting | 85-95% |
| Low lighting | 60-80% |
| Side profile (>30°) | 40-70% |
| Occlusion (mask, glasses) | 70-90% |

Factors Affecting Accuracy:

  • ✅ Good: Frontal face, even lighting, high-res camera
  • ⚠️ Moderate: Slight rotation, glasses, facial hair changes
  • ❌ Poor: Extreme angles, very low light, heavy occlusion

Known Failure Cases

  1. Extreme Lighting:

    • Very bright backlighting (face in shadow)
    • Very dark environments
    • Solution: Use histogram equalization (implemented)
  2. Extreme Pose Variation:

    • Profile views (>45° rotation)
    • Looking down/up significantly
    • Solution: Guide users to face camera frontally
  3. Occlusion:

    • Medical masks covering nose/mouth
    • Hands covering face
    • Solution: partial only; the eye landmarks can still be detected
  4. Image Quality:

    • Low-resolution webcams (<480p)
    • Motion blur
    • Solution: Quality scoring before registration
  5. Spoofing Attacks:

    • High-quality photos
    • Video replays
    • Solution: Liveness detection (basic), consider depth cameras
  6. Identical Twins:

    • May be recognized as same person
    • Solution: Combine with additional authentication

📚 Usage Guide

Registering Users

Best Practices:

  • Use good lighting (natural light or bright indoor lighting)
  • Position face centered in frame
  • Look directly at camera
  • Blink naturally (don't force rapid blinking)
  • Capture when quality score > 50%
  • Avoid shadows on face

Multiple Samples: The system captures one high-quality sample. For better accuracy, you can modify the code to capture multiple samples with slight pose variations.

Marking Attendance

Punch-In:

  • Must not have an active punch-in
  • Liveness check required (blink detection)
  • Face must match registered user
  • Records timestamp automatically

Punch-Out:

  • Must have an active punch-in
  • Cannot punch-out without punch-in
  • Automatically calculates total hours
  • Updates attendance status to 'completed'

Attendance Reports

CSV Format:

user_id,name,date,punch_in_time,punch_out_time,total_hours,status
john_doe,John Doe,2026-01-29,09:00:00,17:30:00,8.50,completed

Monthly Logs:

  • Separate CSV file per month: attendance_YYYY-MM.csv
  • Located in data/attendance/
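Because the monthly logs follow the CSV format shown above, they are easy to post-process with the standard library alone. A minimal sketch (the `load_month` helper is illustrative, not part of this repo):

```python
import csv

def load_month(path):
    """Read one monthly attendance CSV (attendance_YYYY-MM.csv format)
    and return (rows, totals): the raw rows as dicts, plus total hours
    per user summed over completed days."""
    totals = {}
    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            rows.append(row)
            if row["status"] == "completed":
                totals[row["user_id"]] = (
                    totals.get(row["user_id"], 0.0) + float(row["total_hours"])
                )
    return rows, totals
```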

⚙️ Configuration

Adjusting Recognition Threshold

Edit in main.py:

self.face_recognizer = FaceRecognizer(
    embeddings_dir="data/embeddings",
    threshold=0.6  # Lower = stricter, Higher = lenient
)

Threshold Guide:

  • 0.5: Strict (fewer false positives, more false negatives)
  • 0.6: Recommended (balanced)
  • 0.7: Lenient (more false positives, fewer false negatives)

Adjusting Liveness Parameters

Edit in src/liveness_detector.py:

EAR_THRESHOLD = 0.25        # Eye closure threshold
EAR_CONSEC_FRAMES = 2       # Frames to confirm blink
MOVEMENT_THRESHOLD = 30     # Pixels for head movement

Camera Selection

If you have multiple cameras:

cap = cv2.VideoCapture(0)  # Change 0 to 1, 2, etc.

🔧 Troubleshooting

Camera Not Working

Issue: "Cannot access camera"

Solutions:

  1. Check camera permissions (System Preferences → Security & Privacy → Camera)
  2. Try different camera index: cv2.VideoCapture(1)
  3. Verify camera with: ls /dev/video* (Linux) or check Device Manager (Windows)

Face Not Detected

Issue: "No face detected"

Solutions:

  1. Improve lighting
  2. Move closer to camera
  3. Ensure face is frontal (not in profile)
  4. Check camera focus
  5. Enable preprocessing via enhance_image_quality() in the detector

Poor Recognition Accuracy

Issue: Wrong person recognized or frequent failures

Solutions:

  1. Re-register with better quality images
  2. Lower threshold for stricter matching
  3. Ensure consistent lighting during registration and authentication
  4. Capture multiple registration samples from different angles

Slow Performance

Issue: Low FPS or laggy video

Solutions:

  1. Reduce frame size: frame = cv2.resize(frame, (640, 480))
  2. Process every Nth frame: Skip frames between processing
  3. Use HOG instead of CNN model (already default)
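The frame-skipping idea in tip 2 can be sketched as a small helper that runs the expensive pipeline only on every Nth frame and reuses the last result in between (the `FrameSkipper` name is illustrative, not from this repo's source):

```python
class FrameSkipper:
    """Run an expensive per-frame function only on every n-th frame,
    returning the cached result for the frames in between."""

    def __init__(self, n=3):
        self.n = n
        self.count = 0
        self.last_result = None

    def maybe_process(self, frame, process):
        if self.count % self.n == 0:
            self.last_result = process(frame)  # expensive: detect + recognize
        self.count += 1
        return self.last_result
```

In the capture loop you would call `skipper.maybe_process(frame, pipeline)` each iteration and draw the returned (possibly stale) boxes; at 30 FPS input with n=3, the heavy pipeline runs about 10 times per second, which matches the full-pipeline budget in the benchmarks below.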

Installation Issues

dlib won't install:

  • macOS: brew install cmake, then pip install dlib
  • Use conda: conda install -c conda-forge dlib
  • Download pre-built wheel

OpenCV issues:

pip uninstall opencv-python opencv-contrib-python
pip install opencv-contrib-python==4.8.1.78

📊 Performance Benchmarks

Tested on MacBook Pro M1:

| Operation | Time | FPS |
|---|---|---|
| Face Detection (HOG) | ~30ms | 30 |
| Face Recognition | ~50ms | 20 |
| Liveness Check | ~40ms | 25 |
| Full Pipeline | ~120ms | 8-10 |

🔐 Security Considerations

Current Implementation

  • ✅ Blink detection (basic liveness)
  • ✅ Head movement tracking
  • ✅ Face quality scoring
  • ✅ Detection consistency monitoring
  • ❌ No encryption of embeddings
  • ❌ No secure communication

Production Recommendations

  1. Encrypt Stored Embeddings: Use AES encryption
  2. Add Depth Sensing: Use Intel RealSense or iPhone TrueDepth
  3. Texture Analysis: Detect photo artifacts
  4. Challenge-Response: Ask users to perform actions
  5. Multi-Factor Auth: Combine with PIN/password
  6. Audit Logging: Track all authentication attempts
  7. Regular Updates: Update face embeddings periodically

📝 License

This project is for educational purposes. Free to use and modify.

🤝 Contributing

Suggestions and improvements welcome! Consider adding:

  • Database support (SQLite/PostgreSQL)
  • Web interface (Flask/FastAPI)
  • Advanced anti-spoofing (3D face models)
  • Mobile app integration
  • Cloud deployment

📞 Support

For issues or questions:

  1. Check Troubleshooting section
  2. Review error messages carefully
  3. Ensure all dependencies are correctly installed
  4. Check camera and lighting conditions

🙏 Acknowledgments

  • dlib: Davis King's excellent computer vision library
  • face_recognition: Adam Geitgey's easy-to-use wrapper
  • OpenCV: Open Source Computer Vision Library
  • Research: Eye Aspect Ratio from Soukupová and Čech (2016)

Built with ❤️ for Face Authentication Attendance System

Last Updated: January 29, 2026
