Model Fooling Attacks Against Medical Imaging: A Short Survey

Publication Type:

Journal Article

Source:

Information & Security: An International Journal, Volume 46, Issue 2, pp. 215-224 (2020)

Keywords:

adversarial images, artificial neural networks, deep learning, machine learning, medical imaging

Abstract:

This study surveys methods for fooling artificial neural networks used in medical imaging. We collected a short list of publications on machine learning model fooling and examined whether these methods have been applied in the medical imaging domain, focusing specifically on pathology whole-slide images used to study human tissue. While useful, machine learning models such as deep neural networks can be fooled by relatively simple attacks involving purposefully engineered (adversarial) images. Such attacks pose a threat to many domains, including medical imaging, where several studies have already described such threats.
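To illustrate how simply such an engineered image can be produced, the sketch below applies the fast gradient sign method (FGSM), one of the best-known fooling attacks, to a toy logistic-regression classifier. The weights, input, and perturbation budget are made-up values for illustration only and are not taken from the surveyed work:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy "model": logistic regression with fixed weights.
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.3, -0.2, 0.1])   # clean input, true label y = 1
y = 1.0

p_clean = sigmoid(w @ x)          # model confidence for class 1
grad = (p_clean - y) * w          # gradient of cross-entropy loss w.r.t. x

# FGSM: take one small step in the direction that increases the loss.
eps = 0.25
x_adv = x + eps * np.sign(grad)

p_adv = sigmoid(w @ x_adv)
print(int(p_clean > 0.5), int(p_adv > 0.5))  # prediction flips from 1 to 0
```

The same idea scales to deep networks on whole-slide image patches: the attacker backpropagates the loss to the input pixels and adds an imperceptibly small signed perturbation, which is why visually unchanged images can flip a model's diagnosis.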