Volume 6, Issue 2, PP: 56-66, 2022
Mahmoud A. Zaher 1*, Heba H. Aly 2
Doi: https://doi.org/10.54216/JISIoT.060205
Federated learning (FL) is a recently evolved distributed learning paradigm that has gained increasing research attention. To alleviate privacy concerns, FL fundamentally allows many entities to cooperatively train a machine/deep learning model by exchanging learning parameters instead of raw data. Nevertheless, FL still exhibits inherent privacy problems, because users' data can be exposed through the training gradients. In addition, the imperceptible input perturbations introduced by adversarial attacks pose a critical security threat with damaging consequences for FL. To tackle this problem, this study proposes an innovative Federated Deep Resistance (FDR) framework to provide collaborative resistance against adversarial attacks from various sources in a fog-assisted IIoT environment. FDR enables fog nodes to cooperatively train the federated deep learning (FDL) model in a way that ensures contributors have no access to each other's data, while class probabilities are protected using a private identifier generated for each class. FDR mainly targets convolutional networks for image recognition on the Food-101 and CIFAR-100 datasets. The empirical results reveal that FDR outperforms state-of-the-art approaches for resisting adversarial attacks, with a 5% accuracy improvement.
Adversarial Attacks, Federated Learning, Fog Computing, Industrial Internet of Things (IIoT)
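To make the collaborative training pattern described in the abstract concrete, the short Python sketch below (NumPy only) illustrates fog nodes fitting local models on private data shards, hiding class information behind privately generated class-key vectors, and an aggregator combining the updates by federated averaging. This is a minimal illustrative sketch under stated assumptions, not the authors' FDR implementation: the node count, the linear model, and the helper names make_private_class_keys, local_update, and federated_average are all hypothetical, while the real framework trains convolutional networks on Food-101 and CIFAR-100.

import numpy as np

rng = np.random.default_rng(0)
NUM_NODES, NUM_CLASSES, DIM, KEY_DIM, ROUNDS = 3, 5, 20, 32, 10

def make_private_class_keys():
    # Hypothetical helper: each fog node draws its own unit-norm key vector per
    # class; the keys stay local, so other contributors cannot read its scores.
    keys = rng.normal(size=(NUM_CLASSES, KEY_DIM))
    return keys / np.linalg.norm(keys, axis=1, keepdims=True)

def local_update(weights, features, labels, keys, lr=0.1, epochs=5):
    # One node's local training step: regress features onto its private class
    # keys instead of exchanging raw labels or class probabilities.
    w = weights.copy()
    targets = keys[labels]                      # (n, KEY_DIM) private targets
    for _ in range(epochs):
        grad = features.T @ (features @ w - targets) / len(features)
        w -= lr * grad
    return w

def federated_average(updates, sizes):
    # Aggregator (e.g., a fog gateway) averages weights by local dataset size;
    # only model parameters are exchanged, never the raw data.
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Simulated private shards, one per fog node.
shards = []
for _ in range(NUM_NODES):
    n = int(rng.integers(80, 120))
    x = rng.normal(size=(n, DIM))
    y = rng.integers(0, NUM_CLASSES, size=n)
    shards.append((x, y, make_private_class_keys()))

global_w = np.zeros((DIM, KEY_DIM))
for _ in range(ROUNDS):
    updates, sizes = zip(*[(local_update(global_w, x, y, k), len(x))
                           for x, y, k in shards])
    global_w = federated_average(updates, sizes)

print("finished", ROUNDS, "rounds; global weight norm:",
      round(float(np.linalg.norm(global_w)), 3))

In the actual setting, the local model would be a convolutional network and each communication round would involve many fog nodes, but the size-weighted aggregation follows the same form shown in federated_average.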