Potential Applications and Limitations of Artificial Intelligence in Remote Sensing Data Interpretation: A Case Study
Abstract
This research comprehensively reviews the applications and limitations of artificial intelligence (AI) in interpreting remote sensing data, highlighting its potential through a detailed case study. Remote sensing platforms such as satellites, drones, and other aerial systems gather massive volumes of environmental data, opening wide opportunities for AI-driven analysis. AI techniques, particularly machine learning and deep learning, have shown remarkable promise in improving the accuracy and efficiency of data interpretation tasks such as anomaly detection, change detection, and land cover classification. Nevertheless, the research also identifies several drawbacks, including challenges related to data quality, the need for large labeled datasets, and the risk of model overfitting. Furthermore, the complexity of AI models can result in a lack of transparency, making their outputs difficult to interpret and trust. By highlighting both effective applications of AI in remote sensing and areas where traditional methods still outperform it, the case study underscores the need for a balanced strategy that draws on the strengths of both AI and conventional techniques. The research concludes that while AI holds significant potential for advancing remote sensing data interpretation, careful consideration of its limitations is crucial for effective application in real-world scenarios.
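To make the land cover classification task mentioned above concrete, the following is a minimal illustrative sketch, not taken from the paper: it assumes scikit-learn is available and uses synthetic reflectance values in place of real satellite imagery, and the train/test accuracy comparison at the end is one simple check on the overfitting risk the abstract raises.

# Illustrative sketch only: pixel-wise land cover classification with a
# random forest on synthetic multispectral data (stand-in for real imagery).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_bands = 5000, 6              # e.g. six spectral bands per pixel
X = rng.random((n_pixels, n_bands))      # synthetic reflectance values
y = rng.integers(0, 4, size=n_pixels)    # four hypothetical land cover classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# A large gap between training and held-out accuracy suggests the model is
# memorizing noise rather than learning transferable spectral patterns.
print("train accuracy:", accuracy_score(y_train, clf.predict(X_train)))
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))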
DOI: https://doi.org/10.59247/csol.v2i3.128
Copyright (c) 2024 Monir Hamim