LeafLens is a SwiftUI-based iOS application that detects real-world objects through the device camera, classifies them with trained machine learning models, and stores the results for later review.
The project demonstrates a practical integration of the Vision and CoreML frameworks within a clean MVVM architecture.
- 📷 Real-time object detection using the camera
- 🧠 Smart classification powered by CoreML
- 👁 Vision framework integration
- 💾 Save and review previous detection results
- 🎨 Clean and responsive SwiftUI interface
- 📐 Scalable MVVM architecture
The app follows the MVVM (Model-View-ViewModel) pattern:
- Model → data structures & ML result mapping
- View → SwiftUI UI components
- ViewModel → business logic & Vision/ML handling
This ensures clean separation of concerns and testable logic.
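As a rough sketch of how the ViewModel layer can wrap Vision and CoreML (the class name `DetectionViewModel` and the model name `LeafClassifier` are illustrative placeholders, not the project's actual code):

```swift
import Vision
import CoreML

// Hypothetical ViewModel sketch; "LeafClassifier" stands in for
// whatever .mlmodel the project actually bundles.
@MainActor
final class DetectionViewModel: ObservableObject {
    @Published var topLabel: String = ""

    private lazy var request: VNCoreMLRequest? = {
        guard let model = try? VNCoreMLModel(
            for: LeafClassifier(configuration: MLModelConfiguration()).model
        ) else { return nil }
        return VNCoreMLRequest(model: model) { [weak self] request, _ in
            // Keep the highest-confidence classification result.
            guard let best = (request.results as? [VNClassificationObservation])?.first
            else { return }
            Task { @MainActor in
                self?.topLabel = "\(best.identifier) (\(best.confidence))"
            }
        }
    }()

    /// Run the classifier on one camera frame.
    func classify(_ pixelBuffer: CVPixelBuffer) {
        guard let request else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        try? handler.perform([request])
    }
}
```

Because the `@Published` property is updated on the main actor, any SwiftUI view observing the ViewModel refreshes automatically when a new result arrives.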
- Swift
- SwiftUI
- Vision Framework
- CoreML
- AVFoundation (Camera handling)
- MVVM Architecture
Replace the image path with your actual preview image stored in the repository.
LeafLens/
├── Models/
├── Views/
├── ViewModels/
└── Resources/
- Real-time image classification
- Vision + CoreML integration
- Camera pipeline handling
- Clean architecture implementation
- SwiftUI reactive UI updates
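The camera pipeline highlight above can be sketched with AVFoundation roughly as follows (session setup only; names and error handling are illustrative, and the project's actual pipeline may differ):

```swift
import AVFoundation

// Illustrative capture-session setup feeding frames to an ML consumer.
final class CameraController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    var onFrame: ((CVPixelBuffer) -> Void)?

    func configure() {
        session.beginConfiguration()
        session.sessionPreset = .high

        // Back wide-angle camera as the video input.
        if let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                for: .video, position: .back),
           let input = try? AVCaptureDeviceInput(device: device),
           session.canAddInput(input) {
            session.addInput(input)
        }

        // Deliver frames on a background queue for ML processing.
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddOutput(output) { session.addOutput(output) }

        session.commitConfiguration()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        onFrame?(buffer)  // e.g. forwarded to the Vision/CoreML classifier
    }
}
```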
Navin Rai
iOS Developer specializing in Swift, SwiftUI, UIKit, and CoreML.
Passionate about building scalable, production-ready iOS applications.
🔗 LinkedIn: https://www.linkedin.com/in/navinkumarrai
💻 GitHub: https://github.com/Navin-Rai-Developer
