🛴 E-Scooter Safety Detection System


Real-time computer vision system for detecting multi-rider e-scooter violations

Deployed at Chulalongkorn University | 90.74% Detection Accuracy | 87% Violation Reduction in 5 Days

Features • Architecture • Results • Installation • Documentation


⚡ TL;DR

Real-time AI system detecting multi-rider e-scooter violations at Chulalongkorn University using SSD-MobileNet on Jetson Nano. 90.74% accuracy, 87% violation reduction in 5 days. Processes 15-20 FPS with instant audio alerts. Built with PyTorch, TensorFlow, OpenCV.

Quick Start:

git clone https://github.com/Methasit-Pun/BEAM-detection-system.git
cd BEAM-detection-system
pip install -r requirements.txt
python detect_e_scooter.py --config config.yaml


🎯 Problem Statement

Problem: Multiple riders on a single e-scooter increase accident risk

Multiple riders on a single e-scooter create safety hazards on campus. This common violation raises accident risk and calls for automated monitoring.


💡 Solution Overview

Detection system with real-time audio alerts

Real-time computer vision system that:

  • Detects e-scooters and counts riders using SSD-MobileNet
  • Triggers audio alerts when 2+ riders detected
  • Runs on Jetson Nano for edge deployment
  • Processes video at 15-20 FPS

📊 Key Results

🎯 Performance Metrics

| Metric | Value | Impact |
|---|---|---|
| Detection Accuracy | 90.74% | High reliability in real-world conditions |
| Violation Reduction | 31% → 4% | 87% decrease in 5 days |
| Processing Speed | 15-20 FPS | Real-time monitoring capability |
| Response Time | <200ms | Instant alert generation |
| False Positive Rate | <10% | Minimal incorrect alerts |

Field Deployment Statistics:

  • 📈 Alert-to-incident correlation: >95%
  • 👥 User compliance: Variable (0-50%), decreasing trend over time
  • ⏰ Test period: 5 days across multiple campus locations
  • 🕐 Test hours: Morning (08:00-10:00) and Evening (16:00-18:00)

๐Ÿ† Features

  • Real-time Detection: Processes video at 15-20 FPS on edge hardware
  • High Accuracy: 90.74% detection accuracy in real-world deployment
  • Instant Alerts: Audio notifications triggered within 200ms of detection
  • Edge Computing: Runs entirely on Jetson Nano without cloud dependency
  • Custom Dataset: Trained on 100+ campus-specific labeled images
  • Scalable Architecture: Easily deployable across multiple locations

โš™๏ธ Technical Stack

๐Ÿ”ง Hardware

  • Computing: NVIDIA Jetson Nano 4GB
  • Camera: IMX219 CSI Camera (1080p)
  • Audio: USB Speaker/Buzzer
  • Power: 5V 4A DC Adapter
  • Storage: 32GB microSD

💻 Software

  • Training: PyTorch 1.10+
  • Deployment: TensorFlow 2.0+
  • Vision: OpenCV 4.5+
  • Language: Python 3.8+
  • Audio: PyAudio
  • Data: NumPy, Pandas

🧠 Model Architecture

  • Base Model: SSD-MobileNet
  • Input Size: 300×300 RGB
  • Framework: PyTorch → ONNX
  • Inference Time: ~50ms per frame
  • Dataset Format: Pascal VOC

๐Ÿ› ๏ธ Development Tools

  • Version Control: Git/GitHub
  • Container: Docker (optional)
  • IDE: VS Code, Jupyter
  • Annotation: camera-capture tool
  • Monitoring: TensorBoard

๐Ÿ—๏ธ System Architecture

Software Architecture

```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'fontSize':'16px'}}}%%
graph LR
    A["📹 Camera<br/>(CSI/USB)"] --> B["🎬 Video<br/>Capture"]
    B --> C["🔄 Preprocess<br/>(300x300)"]
    C --> D["🧠 Model<br/>(SSD-MobileNet)"]
    D --> E["📦 Detections"]
    E --> F{"🔍 Multi-Rider<br/>Check"}
    F -->|"✓ Yes"| G["🚨 ALERT<br/>(Audio+Visual)"]
    F -->|"✗ No"| H["✅ Continue<br/>Monitoring"]
    G --> I["📝 Log<br/>Violation"]
    H --> B
    I --> B

    style F fill:#ffeb3b,stroke:#f57c00,stroke-width:4px,color:#000
    style G fill:#ff6b6b,stroke:#c62828,stroke-width:3px,color:#fff
    style H fill:#51cf66,stroke:#2e7d32,stroke-width:2px,color:#000
    style D fill:#339af0,stroke:#1565c0,stroke-width:2px,color:#fff
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style B fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
```

Pipeline Flow

  1. Video Acquisition → Camera captures 1080p video at 30 FPS
  2. Frame Processing → Resize to 300×300, normalize pixel values
  3. Inference → SSD-MobileNet processes frame (~50ms)
  4. Post-Processing → Extract bounding boxes with confidence scores
  5. Violation Logic → Check spatial overlap between scooter and person boxes
  6. Alert Generation → Trigger audio if multi-rider detected
  7. Display & Logging → Render annotated frame and log event
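The violation logic in step 5 can be sketched as a bounding-box overlap test. This is a minimal illustration, not the repository's actual implementation; the function names, the IoU threshold, and the box format `(x1, y1, x2, y2)` are all assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def count_riders(scooter_box, person_boxes, thresh=0.3):
    """Count persons whose boxes overlap the scooter box enough to be riders."""
    return sum(1 for p in person_boxes if iou(scooter_box, p) >= thresh)

def is_violation(scooter_box, person_boxes):
    """Multi-rider violation: two or more persons overlapping one scooter."""
    return count_riders(scooter_box, person_boxes) >= 2
```

A threshold around 0.3 tolerates a rider's box extending above the scooter while still rejecting pedestrians walking nearby; the deployed system may tune this differently.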

Key Components

| Component | Technology | Purpose |
|---|---|---|
| Detection Model | SSD-MobileNet (ONNX) | Real-time object detection |
| Inference Engine | TensorRT / TensorFlow Lite | Optimized inference on Jetson |
| Video Handler | OpenCV VideoCapture | Frame acquisition and processing |
| Violation Detector | Custom Python Logic | Spatial overlap analysis |
| Alert System | PyAudio + Wave | Audio alert playback |
| Logger | Python Logging | Event tracking and statistics |
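The Logger component can be as simple as the standard library's `logging` module. The sketch below is illustrative; the log file name, logger name, and record format are assumptions, not the repository's actual configuration.

```python
import logging

# Hypothetical log file name; the repository's actual log path is not shown.
logger = logging.getLogger("beam.violations")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("violations.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

def log_violation(location, rider_count, confidence):
    """Record one multi-rider event for later statistics."""
    logger.info("violation location=%s riders=%d confidence=%.2f",
                location, rider_count, confidence)
```

Writing one structured line per event keeps post-deployment analysis (violation counts per hour, per location) a matter of grepping the log.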

Data Flow

Camera → Frame Buffer → Preprocessing → Neural Network →
Detected Objects → Violation Check → Alert/Log → Display

Processing Stages:

  • Input: 1920×1080 RGB frame
  • Preprocessed: 300×300 RGB tensor
  • Detection: Bounding boxes + class labels + confidence scores
  • Analysis: Rider count per scooter
  • Output: Alert trigger + annotated frame
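The resize-and-normalize stage can be sketched as follows. In the real pipeline `cv2.resize` would do the resampling; this NumPy-only nearest-neighbor version keeps the example self-contained, and the [-1, 1] scaling is a common SSD-MobileNet convention, assumed rather than confirmed from the repository.

```python
import numpy as np

def preprocess(frame, size=300):
    """Nearest-neighbor resize of an HxWx3 uint8 RGB frame to size×size,
    then scale pixel values from [0, 255] to [-1, 1]."""
    h, w = frame.shape[:2]
    ys = np.arange(size) * h // size   # source row index for each output row
    xs = np.arange(size) * w // size   # source column index for each output column
    resized = frame[ys][:, xs]
    return (resized.astype(np.float32) - 127.5) / 127.5
```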

Hardware Setup

Hardware design and component layout

Deployed System

Physical deployment on campus

Experiment Locations

Test locations across campus

💻 Implementation

Dataset Categories

  • Empty scooter
  • Single rider (compliant)
  • Multiple riders (violation)
  • (Future) Speed detection
  • (Future) Direction compliance

Data Collection Workflow

1. Setup Dataset Structure

cd jetson-inference/python/training/detection/ssd/data
mkdir <your-dataset>
cd <your-dataset>
echo -e "empty\nsingle_rider\nmulti_rider" > labels.txt

2. Capture & Label Images

Using jetson-inference camera-capture tool:

camera-capture csi://0              # MIPI CSI camera
camera-capture /dev/video0          # USB camera
  • Set mode to "Detection" in UI
  • Freeze frame, draw bounding boxes
  • Assign class labels
  • Save and repeat

Note: We initially tried Kaggle e-scooter dataset but accuracy was insufficient. Custom campus-specific data performed better.

Sample from custom dataset (labeled training sample)

3. Train Model

cd jetson-inference/python/training/detection/ssd
python3 train_ssd.py --dataset-type=voc --data=data/<your-dataset> --model-dir=models/<your-model>

4. Export to ONNX

python3 onnx_export.py --model-dir=models/<your-model>

5. Deploy Detection

NET=models/<your-model>
detectnet --model=$NET/ssd-mobilenet.onnx --labels=$NET/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          csi://0

6. Audio Alert Integration

import pyaudio
import wave

def trigger_alert():
    """Play alert sound when violation detected"""
    audio_file = "violation_alert.wav"
    wf = wave.open(audio_file, 'rb')
    p = pyaudio.PyAudio()
    stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                    channels=wf.getnchannels(),
                    rate=wf.getframerate(),
                    output=True)
    # Stream the whole file in chunks, not just the first 1024 frames
    data = wf.readframes(1024)
    while data:
        stream.write(data)
        data = wf.readframes(1024)
    stream.stop_stream()
    stream.close()
    p.terminate()
    wf.close()



🧪 Field Testing Results

Observations:

  • Morning sessions: higher compliance
  • Evening (16:00-18:00): lower compliance
  • Detection responsiveness: consistent across sessions
  • User behavior: compliance decreased over time

Insights:

  • High detection reliability (90.74%)
  • Need stronger enforcement beyond audio alerts
  • Consider visual indicators or mobile notifications
  • Behavior change requires sustained intervention

🚀 Installation

Prerequisites

| Component | Specification |
|---|---|
| Hardware | NVIDIA Jetson Nano (4GB) |
| Camera | CSI or USB (1080p recommended) |
| Audio | Speaker/Buzzer for alerts |
| Power | 5V 4A DC adapter |
| Storage | 32GB+ microSD card |

Dependencies

Python 3.8+
PyTorch 1.10+
TensorFlow 2.0+
OpenCV 4.5+
PyAudio
NumPy

Quick Start

# Clone repository
git clone https://github.com/Methasit-Pun/BEAM-detection-system.git
cd BEAM-detection-system

# Install dependencies
pip3 install -r requirements.txt

# Run detection system
python3 detect_e_scooter.py --camera csi://0 --model models/trained-model/ssd-mobilenet.onnx

Configuration

Edit config.yaml to customize:

  • Detection threshold
  • Alert sensitivity
  • Camera input source
  • Model path
  • Audio alert settings
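A minimal config.yaml covering these settings might look like the sketch below. Every key name and value here is illustrative, since the repository's actual schema is not shown.

```yaml
# Illustrative configuration; key names are assumptions, not the actual schema
camera:
  source: csi://0          # or /dev/video0 for a USB camera
model:
  path: models/trained-model/ssd-mobilenet.onnx
  labels: models/trained-model/labels.txt
detection:
  threshold: 0.5           # minimum confidence for a detection
  min_riders_for_alert: 2  # 2+ riders on one scooter triggers an alert
alert:
  sound_file: violation_alert.wav
  cooldown_seconds: 5      # avoid repeated alerts for the same event
```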

🔮 Future Work

Model Improvements:

  • Expand training data: diverse lighting, angles, clothing
  • Reduce false positives in crowded scenes
  • Integrate weight sensors for validation

Enhanced Alerts:

  • LED visual indicators
  • Progressive warning system
  • Mobile push notifications

Detection Capabilities:

  • Speed monitoring
  • Directional compliance
  • Integration with campus enforcement

Deployment Expansion:

  • Campus gates and intersections
  • Parking zone monitoring
  • Multi-campus rollout

๐Ÿค Contributing

Contributions are welcome! Whether you're fixing bugs, improving documentation, or proposing new features.

How to Contribute

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Development Guidelines

  • Follow PEP 8 style guide for Python code
  • Add tests for new features
  • Update documentation as needed
  • Ensure all tests pass before submitting PR

📄 License

MIT License - See LICENSE file for details.


📚 References & Acknowledgments

Special Thanks:

  • Chulalongkorn University for deployment support
  • NVIDIA for Jetson Nano platform
  • Open-source computer vision community


Made with ❤️ for campus safety

Report Bug · Request Feature · Documentation

© 2024-2026 BEAM Detection System Team. All rights reserved.

About

Live deployed system at Chula that actively prevents unsafe double riding on Beam scooters via real-time audio alerts
