PyTorch implementation of projected gradient descent (PGD) adversarial noise attack
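The L∞ PGD attack described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the function name `pgd_attack` and the default hyperparameters (eps = 8/255, alpha = 2/255, 10 steps) are common choices assumed here for demonstration.

```python
import torch

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: repeatedly ascend the loss, then project the iterate
    back into the eps-ball around the clean input x (illustrative sketch)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    # random start inside the eps-ball (a common PGD variant)
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto eps-ball
            x_adv = torch.clamp(x_adv, 0, 1)                       # keep valid pixel range
        x_adv = x_adv.detach()
    return x_adv
```

The projection step is what makes this "projected" gradient descent: after each signed-gradient step, the perturbation is clipped back so it never exceeds eps in any coordinate.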
CAP6938-Fall2025: A PyTorch framework for benchmarking ResNet18 and ViT robustness under PGD and targeted Controlled PGD (CPGD) adversarial attacks across six image datasets. UCF Trustworthy ML.
Hands-on AI security workshop by GDSC Asia Pacific University – explore the fundamentals of attacking machine learning systems through white-box and black-box techniques. Learn to evade image classifiers and manipulate LLM behavior using real-world tools and methods.
Interactive gradient descent visualizer for optimization algorithms. Explore GD, SGD, projected gradient descent, and Frank-Wolfe methods step-by-step.
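The core update such visualizers animate is simple: take a gradient step, then project back onto the feasible set. Here is a minimal NumPy sketch (names like `project_box` and the quadratic objective are illustrative assumptions, not taken from the visualizer itself).

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n is just a clip."""
    return np.clip(x, lo, hi)

def projected_gd(grad, x0, step=0.1, iters=100, project=project_box):
    """Projected gradient descent: x <- P_C(x - step * grad(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Example: minimize ||x - c||^2 over the box [-1, 1]^2 with c outside the box;
# the constrained minimizer is the projection of c onto the box, i.e. [1, -1].
c = np.array([2.0, -3.0])
x_star = projected_gd(lambda x: 2 * (x - c), np.zeros(2))
```

For a box constraint the projection is a coordinate-wise clip; for other convex sets (balls, simplices) `project` would be replaced by the corresponding Euclidean projection.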
Project developed during the course 'Optimization for Data Science' at the University of Padua. The project provides an implementation of Frank-Wolfe methods for recommender systems in Python.
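Frank-Wolfe (conditional gradient) avoids projections entirely: each step moves toward the point returned by a linear minimization oracle over the feasible set. A minimal sketch on a box constraint, with illustrative names (`frank_wolfe`, `lmo_box`) that are assumptions rather than this project's API:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Frank-Wolfe: instead of projecting, move toward the vertex s returned
    by the linear minimization oracle (LMO), s = argmin_{s in C} <grad(x), s>."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        s = lmo(grad(x))
        gamma = 2.0 / (k + 2.0)   # standard diminishing step-size schedule
        x = x + gamma * (s - x)   # convex combination stays feasible
    return x

# LMO for the box [-1, 1]^n: pick the vertex opposing the gradient sign.
lmo_box = lambda g: -np.sign(g)

# Example: minimize ||x - c||^2 over [-1, 1]^2 with c outside the box.
c = np.array([2.0, -3.0])
x_star = frank_wolfe(lambda x: 2 * (x - c), lmo_box, np.zeros(2))
```

Because each iterate is a convex combination of feasible points, no projection is ever needed, which is exactly why Frank-Wolfe is attractive for structured sets (e.g. trace-norm balls in recommender systems) where projections are expensive but LMOs are cheap.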