2024 47th International Conference on Telecommunications and Signal Processing (TSP)


Robust Vision Transformer Model Against Adversarial Attacks in Medical Image Classification

Recent advances in deep learning have significantly improved the speed and accuracy of medical image classification. Vision Transformers (ViTs) have begun to replace convolutional neural networks (CNNs) in several medical imaging tasks. However, recent research shows that these sophisticated models are susceptible to adversarial attacks generated by various methods. This paper proposes a ViT-based model that strengthens the robustness of Vision Transformers against such attacks. Extensive experiments across diverse datasets show that while the standard ViT model attains higher accuracy on clean images than the proposed model, the proposed model is considerably more resilient to adversarial attacks: across all datasets and attack methods, its accuracy never falls below 66.67%, whereas the standard ViT model drops to 0%. These findings underscore the importance of building robust models for medical image classification.
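The abstract does not specify which attack methods were used. As a hedged illustration only (not the authors' setup), the Fast Gradient Sign Method (FGSM) is one common way such adversarial examples are generated: perturb the input in the direction of the sign of the loss gradient. The sketch below applies FGSM to a toy logistic-regression "classifier" in NumPy; the model, weights, and budget `eps` are all hypothetical stand-ins for a real ViT and medical image.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM on a logistic-regression classifier (illustrative stand-in).

    x: input vector, w/b: model weights, y: true label (0 or 1),
    eps: perturbation budget. Returns the adversarial example x + eps*sign(dL/dx).
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability of class 1
    grad_x = (p - y) * w              # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)  # signed perturbation of budget eps

# Toy demo: a clean input the model classifies correctly, broken by FGSM.
rng = np.random.default_rng(0)
w = rng.normal(size=8)                # hypothetical model weights
x = w / np.linalg.norm(w) * 0.2       # clean input aligned with w -> class 1
b, y = 0.0, 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.3)

pred = lambda v: int(w @ v + b > 0)
print(pred(x), pred(x_adv))           # prediction flips under attack
```

A robust model in the paper's sense is one whose accuracy degrades gracefully under such perturbations, rather than collapsing toward 0% as the standard ViT does.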

Elif Kanca
Karadeniz Technical University
Turkey

Tolgahan Gulsoy
Karadeniz Technical University
Turkey

Selen Ayas
Karadeniz Technical University
Turkey

Elif Baykal Kablan
Karadeniz Technical University
Turkey

