DCU-Net is a dual-channel U-shaped network for detecting and locating image splicing forgeries. It identifies suspicious tampered regions at the pixel level by combining RGB features with residual image features, improving both the accuracy and the robustness of forgery detection.
Image splicing forgery detection involves identifying regions within an image that have been tampered with, often by cutting content from one image and pasting it into another. DCU-Net addresses this challenge by using a dual-channel approach to capture both the original and tampered features of an image.
- Dual-Channel Input: Utilizes both RGB features and residual image features to enhance detection accuracy.
- High-Pass Filters: Generate residual images that expose the edge artifacts of tampered regions.
- Dilated Convolution: Captures tampered features at multiple granularities without losing spatial resolution.
- Robust to Attacks: Maintains performance under Gaussian noise and JPEG compression attacks.
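To illustrate the high-pass filtering step above, here is a minimal sketch using a 3x3 Laplacian-style kernel. The exact filter bank DCU-Net uses is an assumption here; this only demonstrates the idea that high-pass filtering suppresses smooth image content and keeps edge information, which is where splicing artifacts tend to live.

```python
import numpy as np

# A 3x3 Laplacian-style high-pass kernel (illustrative, not necessarily
# the paper's exact filter): its entries sum to zero, so flat regions
# are suppressed and only edges produce a response.
HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=np.float64)

def residual_image(gray):
    """Convolve a grayscale image with the high-pass kernel (valid mode)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i + 3, j:j + 3] * HIGH_PASS)
    return out

# A flat region yields a zero residual; a vertical edge yields a strong response.
flat = np.full((5, 5), 10.0)
edge = np.hstack([np.zeros((5, 3)), np.full((5, 2), 10.0)])
print(np.abs(residual_image(flat)).max())  # 0.0
print(np.abs(residual_image(edge)).max())  # nonzero at the edge
```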
The DCU-Net model is composed of three main parts:
- Encoder: Extracts deep features from both the original and residual images using a dual-channel network.
- Feature Fusion: Combines deep features from both channels at multiple stages to enhance contextual understanding.
- Decoder: Decodes the fused features to predict the tampered regions at the pixel level.
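The three-part flow above can be sketched structurally. Everything below is a stand-in: the shapes, the pooling-based "encoder", and the upsampling "decoder" are illustrative assumptions rather than the paper's learned layers; the point is the data flow of two parallel encoders whose features are fused and then decoded into a pixel-level map.

```python
import numpy as np

def encode(x):
    """Stand-in encoder stage: 2x2 average pooling as crude downsampling."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def fuse(f_rgb, f_res):
    """Fuse the two channels' deep features by channel-wise concatenation."""
    return np.concatenate([f_rgb, f_res], axis=-1)

def decode(f, out_hw):
    """Stand-in decoder: collapse channels, then nearest-neighbour upsample."""
    mask = f.mean(axis=-1)                      # (h, w) single-channel map
    reps = out_hw[0] // mask.shape[0]
    return np.kron(mask, np.ones((reps, reps))) # back to input resolution

rgb = np.random.rand(8, 8, 3)   # original-image channel
res = np.random.rand(8, 8, 3)   # residual-image channel
fused = fuse(encode(rgb), encode(res))          # (4, 4, 6)
pred = decode(fused, (8, 8))                    # pixel-level tamper map
print(pred.shape)  # (8, 8)
```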
DCU-Net was evaluated using two datasets:
- CASIA 2.0: A widely-used dataset for image tampering detection.
- Columbia: A standard benchmark for evaluating splicing detection methods.
DCU-Net outperforms state-of-the-art methods in both accuracy and robustness. Key performance metrics on the CASIA and Columbia datasets include:
| Metric | CASIA | Columbia |
|---|---|---|
| F-measure | 0.7667 | 0.8992 |
| Precision | 0.7772 | 0.8406 |
| Recall | 0.7893 | 0.9665 |
| Accuracy | 0.8793 | 0.9505 |
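For reference, these pixel-level metrics are computed from the predicted tamper mask and the ground-truth mask in the usual way. The masks below are toy values chosen for illustration; the numbers in the table come from the paper's evaluation on the full datasets.

```python
import numpy as np

def pixel_metrics(pred, gt):
    """Precision, recall, F-measure, and accuracy over binary pixel masks."""
    tp = np.sum((pred == 1) & (gt == 1))  # tampered pixels correctly found
    fp = np.sum((pred == 1) & (gt == 0))  # clean pixels flagged as tampered
    fn = np.sum((pred == 0) & (gt == 1))  # tampered pixels missed
    tn = np.sum((pred == 0) & (gt == 0))  # clean pixels correctly ignored
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / pred.size
    return precision, recall, f_measure, accuracy

gt   = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])  # toy ground-truth mask
pred = np.array([[1, 1, 1, 0], [1, 0, 0, 0]])  # toy predicted mask
p, r, f, a = pixel_metrics(pred, gt)
print(p, r, f, a)  # 0.75 0.75 0.75 0.75
```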
The model demonstrates strong resistance to both Gaussian noise and JPEG compression, maintaining high detection performance even under challenging conditions.
Contributions are welcome! Please open an issue or submit a pull request.
This project is licensed under the MIT License.