A Review on Attacks against Artificial Intelligence (AI) and Their Defence: Image Recognition and Generation

Md. Tarek Hossain, Rumi Afrin, Mohd. Al- Amin Biswas

Abstract


The main objective of this paper is to review adversarial attacks, data poisoning, model inversion attacks, and other methods that can jeopardize the integrity and dependability of AI-based image recognition and generation models. As artificial intelligence (AI) systems are adopted across numerous sectors, their vulnerability to attacks has become a major concern. Our review focuses on attacks that specifically target AI models used in image recognition and generation tasks. We investigate a wide range of attack strategies, covering both traditional and more sophisticated techniques. These attacks exploit flaws in machine learning algorithms, frequently resulting in misclassification, falsified image generation, or unauthorized access to sensitive data. We survey numerous defense strategies developed by scholars and practitioners to counter these threats, including adversarial training, robust feature extraction, input sanitization, and model distillation. We examine the effectiveness and limitations of each defense mechanism, highlighting the importance of a comprehensive approach that integrates multiple techniques to improve the resilience of AI models. Furthermore, we consider the potential impact of these attacks on real-world applications such as driverless vehicles, medical imaging systems, and security monitoring, emphasizing the threats they pose to public safety and privacy. The study also covers the legislative and ethical aspects of AI security, as well as the responsibility of AI developers to establish adequate defense measures. This analysis highlights the critical need for continued research and collaboration to develop more secure AI systems that can withstand sophisticated attacks. As AI evolves and integrates into critical domains, a concerted effort must be made to strengthen these systems' resilience against hostile threats and ensure their responsible deployment for the benefit of society.
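To make the central notion of an adversarial attack concrete, the following minimal sketch (not drawn from the paper itself) applies a fast gradient sign method (FGSM) style perturbation to a toy logistic-regression "image" classifier. All weights and inputs here are invented toy values chosen purely for demonstration.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x to increase the cross-entropy loss of a
    logistic-regression classifier with weights w and bias b."""
    z = w @ x + b                      # linear logit
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid probability of class 1
    grad_x = (p - y) * w               # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)   # FGSM step: move along gradient sign

rng = np.random.default_rng(0)
w = rng.normal(size=16)                # toy weights for a 16-pixel "image"
b = 0.0
x = rng.normal(size=16)                # toy input correctly labeled y = 1
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)

# Each pixel moves by at most eps, yet the logit for the true class drops,
# pushing the classifier toward misclassification.
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12
assert (w @ x + b) > (w @ x_adv + b)
```

Defenses such as adversarial training, surveyed in this review, counter exactly this kind of perturbation by including adversarially perturbed examples in the training set.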

Keywords


Artificial Intelligence; Attacks Against AI; Image Recognition; Machine Learning; Challenges



DOI: https://doi.org/10.59247/csol.v2i1.73



Copyright (c) 2023 Ruma Moriom

 

Control Systems and Optimization Letters
ISSN: 2985-6116
Website: https://ejournal.csol.or.id/index.php/csol
Email: alfian_maarif@ieee.org
Publisher: Peneliti Teknologi Teknik Indonesia
Address: Jl. Empu Sedah No. 12, Pringwulung, Condongcatur, Kec. Depok, Kabupaten Sleman, Daerah Istimewa Yogyakarta 55281, Indonesia