Volume 17, Issue 2 (5-2025)                   itrc 2025, 17(2): 9-17


Sajjadi S H, Karamizadeh S, Yadegari A. Defending Against Adversarial Attacks in Artificial Intelligence Technologies. itrc 2025; 17(2): 9-17
URL: http://ijict.itrc.ac.ir/article-1-725-en.html
1- s.karamizadeh@itrc.ac.ir
Abstract:
The rapid adoption of artificial intelligence (AI) technologies across diverse sectors has exposed vulnerabilities, particularly to adversarial attacks designed to deceive AI models by manipulating input data. This paper comprehensively reviews adversarial attacks, categorising them into training-phase and testing-phase types, with testing-phase attacks further divided into white-box and black-box categories. We explore defence mechanisms such as data modification, model enhancement, and auxiliary tools, focusing on the critical need for robust AI security in sectors such as healthcare and autonomous systems. Additionally, the paper highlights the role of AI in cybersecurity, offering a taxonomy for AI applications in threat detection, vulnerability assessment, and incident response. By analysing current defence strategies and outlining potential research directions, this paper aims to enhance the resilience of AI systems against adversarial threats, thereby strengthening AI's deployment in sensitive applications.
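The white-box, testing-phase attacks the abstract mentions can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard one-step attack. This example is illustrative only and not taken from the paper; it uses a toy logistic-regression "model" so the input gradient can be written analytically in NumPy.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM perturbation against a logistic-regression model.

    For binary cross-entropy loss on sigmoid(w.x + b), the gradient of
    the loss with respect to the input x is (p - y) * w, where p is the
    model's predicted probability. FGSM moves each input coordinate by
    eps in the direction of that gradient's sign.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's predicted probability
    grad_x = (p - y) * w                    # dLoss/dx for BCE loss
    return x + eps * np.sign(grad_x)        # sign-of-gradient step

# Toy example: a point the model scores confidently as class 1
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0  # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
# The perturbation pushes the score w.x + b toward the decision
# boundary, i.e. toward misclassification, while changing each
# input coordinate by at most eps.
print(x @ w + b, x_adv @ w + b)
```

Defences such as adversarial training, also surveyed in the paper, work by folding perturbed inputs like `x_adv` back into the training set so the model learns to score them correctly.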
Full-Text [PDF 710 kb]
Type of Study: Research | Subject: Network


Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.