This repository contains the simulation code for the research paper titled "Self-Defending 6G Networks Through AI-Driven Adaptive Decoy Generation at the Edge".
The framework provides a proactive security mechanism for 6G networks using three core AI components:
- Conditional GAN (cGAN): To generate realistic, context-aware network decoys.
- Reinforcement Learning (PPO): To dynamically manage decoy deployment strategies.
- Federated Learning: To synchronize models across distributed edge nodes in a privacy-preserving manner.
Key features of the simulation include:
- Simulation of edge nodes in a 6G network environment.
- Generation of adaptive decoys using conditional GANs.
- Dynamic decision-making with Proximal Policy Optimization (PPO).
- Privacy-preserving model synchronization via Federated Learning.
- Data preprocessing and visualization for the CIC-IoT-2023 dataset.
- Clone the repository:
git clone <https://github.com/PhobosQ-ai/Self-Defending-6G-Networks>
This folder contains the simulation code that implements the Adaptive Decoy Generation framework described in the paper "Self-Defending 6G Networks Through AI-Driven Adaptive Decoy Generation at the Edge".
This README includes the core mathematical expressions used by the implementation so they render in Markdown viewers that support TeX (MathJax/KaTeX).
- Clone the repository and change into the python folder:
git clone <https://github.com/PhobosQ-ai/Self-Defending-6G-Networks>
- Install dependencies:
pip install -r requirements.txt
- Run the simulation:
python main.py

The simulation implements:
- Conditional Generative Adversarial Network (cGAN) for context-aware decoy generation.
- Reinforcement Learning (PPO) agent for dynamic decoy deployment and resource management.
- Federated Learning (FedAvg-like) synchronization across distributed edge nodes.
- Data loading, preprocessing, plotting utilities and a simulation harness.
Below are the key equations and loss functions referenced in the implementation and paper. Put simply, these are the objectives that govern training and model synchronization.
The canonical conditional GAN objective used to train generator G and discriminator D (conditioned on y) is:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x \mid y)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z \mid y) \mid y)\right)\right]$$
In practice we use a stabilized variant (WGAN-GP) for improved training stability.
Using the Wasserstein loss with gradient penalty (coefficient $\lambda$), the critic loss is:

$$L_D = \mathbb{E}_{\tilde{x} \sim p_g}\left[D(\tilde{x} \mid y)\right] - \mathbb{E}_{x \sim p_{\text{data}}}\left[D(x \mid y)\right] + \lambda\, \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\left[\left(\left\lVert \nabla_{\hat{x}} D(\hat{x} \mid y) \right\rVert_2 - 1\right)^2\right]$$

where $\hat{x}$ is sampled uniformly along straight lines between pairs of real and generated samples, and $p_g$ is the generator's output distribution.

Generator loss (Wasserstein style) typically minimizes the negative critic score:

$$L_G = -\,\mathbb{E}_{\tilde{x} \sim p_g}\left[D(\tilde{x} \mid y)\right]$$
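As a minimal illustration of these two losses (not the repository's `gan_model.py` implementation), the sketch below evaluates them for a toy linear critic, where the input gradient needed for the penalty term is available in closed form; the helper names are illustrative only.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy linear critic D(x) = w . x + b. For a linear critic the gradient of D
# with respect to its input is w everywhere, so the gradient-penalty term
# (||grad_x D(x_hat)|| - 1)^2 is the same at every interpolate x_hat.
def critic(x, w, b=0.0):
    return dot(x, w) + b

def critic_loss_wgan_gp(real_batch, fake_batch, w, b=0.0, lam=10.0):
    """L_D = E[D(fake)] - E[D(real)] + lam * (||grad_x D|| - 1)^2."""
    d_real = sum(critic(x, w, b) for x in real_batch) / len(real_batch)
    d_fake = sum(critic(x, w, b) for x in fake_batch) / len(fake_batch)
    grad_norm = dot(w, w) ** 0.5  # ||grad_x D|| for the linear critic
    penalty = (grad_norm - 1.0) ** 2
    return d_fake - d_real + lam * penalty

def generator_loss(fake_batch, w, b=0.0):
    """L_G = -E[D(fake)]: the generator tries to raise the critic score."""
    return -sum(critic(x, w, b) for x in fake_batch) / len(fake_batch)
```

With `w = [1, 0]` the gradient norm is exactly 1, so the penalty vanishes and the critic loss reduces to the Wasserstein estimate `E[D(fake)] - E[D(real)]`.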
When attack interaction data $x_{\text{att}}$ is observed, the adaptation step solves

$$z^{*} = \arg\min_{z} \left\lVert G(z \mid y) - x_{\text{att}} \right\rVert_2^{2}$$

This expresses a GAN inversion (finding the latent code $z$ whose generated sample best matches the observed interaction), which conditions subsequent decoy generation on attacker behavior.
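The inversion can be sketched as a plain latent-space search: gradient descent on the reconstruction error with numerically estimated gradients. This is a toy illustration under assumed names (`invert_generator`, a hypothetical linear `g`), not the repository's method.

```python
def invert_generator(g, x_att, z_dim, steps=200, lr=0.1, h=1e-5):
    """Search for z* = argmin_z ||g(z) - x_att||^2 via gradient descent,
    estimating the gradient with central finite differences."""
    def loss(z):
        gx = g(z)
        return sum((a - b) ** 2 for a, b in zip(gx, x_att))

    z = [0.0] * z_dim
    for _ in range(steps):
        grad = []
        for i in range(z_dim):
            zp = list(z); zp[i] += h
            zm = list(z); zm[i] -= h
            grad.append((loss(zp) - loss(zm)) / (2 * h))
        z = [zi - lr * gi for zi, gi in zip(z, grad)]
    return z

# Toy "generator" G(z) = (2*z0, z0 + z1); observed interaction x_att = (2, 3)
g = lambda z: [2 * z[0], z[0] + z[1]]
z_star = invert_generator(g, [2.0, 3.0], z_dim=2)  # converges near z = (1, 2)
```

In the real system the generator is the trained cGAN and the gradient would come from autodiff rather than finite differences.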
The agent seeks a policy $\pi_\theta$ that maximizes the expected discounted return:

$$J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{\infty} \gamma^{t} r_t\right]$$
For PPO (Proximal Policy Optimization) the clipped surrogate objective is commonly used:

$$L^{\text{CLIP}}(\theta) = \mathbb{E}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\; \operatorname{clip}\left(r_t(\theta),\, 1 - \epsilon,\, 1 + \epsilon\right)\hat{A}_t\right)\right]$$

where $r_t(\theta) = \dfrac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}$ is the probability ratio, $\hat{A}_t$ is the estimated advantage, and $\epsilon$ is the clipping parameter.
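The clipped surrogate is easy to evaluate directly from per-timestep ratios and advantages; the sketch below (illustrative names, not `rl_agent.py`) shows how clipping caps the incentive to move the policy far from the old one.

```python
def clip(x, lo, hi):
    return max(lo, min(hi, x))

def ppo_clip_objective(ratios, advantages, eps=0.2):
    """Mean over timesteps of min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    terms = [min(r * a, clip(r, 1 - eps, 1 + eps) * a)
             for r, a in zip(ratios, advantages)]
    return sum(terms) / len(terms)

# r=1.5 with A=+1 is clipped to 1.2; r=0.5 with A=-1 is clipped to -0.8,
# so the mean objective is (1.2 + (-0.8)) / 2 = 0.2
value = ppo_clip_objective([1.5, 0.5], [1.0, -1.0])
```

Note the asymmetry: for a positive advantage the `min` caps the gain at ratio $1+\epsilon$, while for a negative advantage it keeps the more pessimistic (lower) term.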
The federated aggregation step used by the server in the paper is expressed as (a FedAvg-like weighted update):

$$w_{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\, w_{t+1}^{(k)}$$

where $w_{t+1}^{(k)}$ are the locally updated weights at edge node $k$, $n_k$ is the number of local training samples at node $k$, and $n = \sum_{k=1}^{K} n_k$.
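The aggregation is a sample-size-weighted average of the clients' weight vectors; a minimal sketch (not the `federated_learning.py` code) is:

```python
def fedavg(client_weights, client_sizes):
    """Aggregate per-node weight vectors: w = sum_k (n_k / n) * w_k."""
    n = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w_k, n_k in zip(client_weights, client_sizes):
        for i in range(dim):
            agg[i] += (n_k / n) * w_k[i]
    return agg

# Two edge nodes: node 1 holds 1 sample, node 2 holds 3, so node 2's
# weights contribute 3/4 of the aggregate.
global_w = fedavg([[1.0, 1.0], [3.0, 3.0]], [1, 3])  # [2.5, 2.5]
```

Only model weights (never raw traffic data) cross the node boundary, which is what makes the synchronization privacy-preserving.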
Detection Rate (DR):

$$\text{DR} = \frac{TP}{TP + FN}$$

False Positive Rate (FPR):

$$\text{FPR} = \frac{FP}{FP + TN}$$
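Both metrics follow directly from confusion-matrix counts; the helpers below are illustrative, not the repository's evaluation code.

```python
def detection_rate(tp, fn):
    """DR = TP / (TP + FN): fraction of attacks that were detected."""
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    """FPR = FP / (FP + TN): fraction of benign traffic wrongly flagged."""
    return fp / (fp + tn)

# 90 of 100 attacks caught, 5 of 100 benign flows flagged:
dr = detection_rate(90, 10)        # 0.9
fpr = false_positive_rate(5, 95)   # 0.05
```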
Latency (interaction-to-alert) is measured as the elapsed time between the first packet interacting with a decoy and the generated alert at the edge node.
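A minimal way to instrument that measurement (hypothetical `LatencyTracker`, not code from the repository) is to timestamp the first decoy interaction and subtract it when the alert fires:

```python
import time

class LatencyTracker:
    """Measures interaction-to-alert latency at an edge node:
    time from the first packet touching a decoy to the alert."""

    def __init__(self):
        self.first_interaction = None

    def on_decoy_interaction(self):
        # Only the *first* interaction starts the clock.
        if self.first_interaction is None:
            self.first_interaction = time.perf_counter()

    def on_alert(self):
        # Elapsed seconds between first interaction and the alert.
        return time.perf_counter() - self.first_interaction

tracker = LatencyTracker()
tracker.on_decoy_interaction()
latency = tracker.on_alert()  # small non-negative float (seconds)
```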
- `main.py` — simulation harness and orchestrator.
- `gan_model.py` — cGAN (Generator / Discriminator) implementations.
- `rl_agent.py` — PPO agent and utilities.
- `edge_node.py` — simulated edge node behaviors and decoy deployment.
- `data_loader.py` — dataset loading and preprocessing (CIC-IoT-2023 adapter).
- `federated_learning.py` — simple FedAvg orchestration used in the simulation.
- `plots.py` — helper functions to render figures used in the paper.
- `requirements.txt` — Python dependencies.