
Scientists discover quantum-inspired vulnerabilities in neural networks


(A) Illustrates the final training results of the network, highlighting areas of class prediction. Shaded areas delineate these regions, with individual point colors indicating the true labels of the corresponding test samples, demonstrating an overall alignment between the network’s predictions and the actual classifications. In (B), all test samples were subjected to gradient-based attacks, which caused perturbed sample points to noticeably deviate from their correct categorical regions, leading to misclassifications by the network model. (C) Focuses on the evolving prediction region for the number ‘8’ in epochs 1, 21 and 41. The deeper the shadow of the region, the greater the network’s confidence in its prediction. (D) Similar to (C), but with conflicting predictions for the attacked images, it is observed that as training progresses, the effective propagation radius for the attack points increases. This suggests that as the network’s accuracy in identifying input features increases, its vulnerability to attack also escalates. Credit: Science China Press


In a recent study that merges the fields of quantum physics and computer science, Dr. Jun-Jie Zhang and Prof. Deyu Meng explored the vulnerabilities of neural networks through the lens of the uncertainty principle in physics.

Their work, published in the National Science Review, draws a parallel between the susceptibility of neural networks to targeted attacks and the limitations imposed by the uncertainty principle, a well-established theory in quantum physics that highlights the challenges of simultaneously measuring certain pairs of properties.

The quantum-inspired analysis of neural networks’ vulnerabilities suggests that adversarial attacks exploit the trade-off between the precision of input functions and their computed gradients.

“When we look at the architecture of deep neural networks, which involve a loss function for learning, we can always define a conjugate variable for the inputs by determining the gradient of the loss function with respect to those inputs,” says Dr. Zhang, whose expertise lies in mathematical physics.
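In code, this conjugate variable is simply the input-gradient of the loss. The snippet below is a minimal illustrative sketch, not the authors' code: it assumes a hypothetical differentiable PyTorch classifier `model` and a test image `x` with label `y`, computes the gradient of the loss with respect to the input, and uses it for a one-step gradient-based (FGSM-style) perturbation of the kind shown in the figure above.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    # Track gradients with respect to the input itself
    x = x.clone().detach().requires_grad_(True)
    # Loss function used for learning (cross-entropy for classification)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # x.grad is the "conjugate variable": the gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()
    # Keep pixel values in a valid range
    return x_adv.clamp(0, 1).detach()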

The researchers hope this work will prompt a reevaluation of the presumed robustness of neural networks and encourage a deeper understanding of their limitations. By subjecting a neural network model to adversarial attacks, Dr. Zhang and Prof. Meng identified a trade-off between the accuracy and the resilience of the model.


Subfigures (A), (C), (E), (G), (I) and (K) show the test accuracy and the robust accuracy, the latter being assessed on images distorted by the Projected Gradient Descent (PGD) attack method. Subfigures (B), (D), (F), (H), (J), and (L) reveal the trade-off between accuracy and robustness. Credit: Science China Press
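For context, PGD iterates small gradient steps and projects the result back into a small neighborhood of the original input; robust accuracy is then measured on the perturbed images. A minimal sketch, under the same assumptions as the earlier snippet (a hypothetical PyTorch `model` and inputs `x`, `y`):

import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, epsilon=0.03, alpha=0.01, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-ascent step on the loss, then projection into the epsilon-ball around x
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        # Keep pixel values in a valid range
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

Robust accuracy, as plotted in the subfigures above, is then the model's accuracy on these perturbed inputs rather than on the clean test images.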


Their findings indicate that neural networks, mathematically akin to quantum systems, struggle to accurately resolve both conjugate variables (the gradient of the loss function and the input feature) simultaneously, suggesting an intrinsic vulnerability. This insight is crucial for the development of new protective measures against advanced threats.
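For readers less familiar with the physics being invoked, the textbook Heisenberg uncertainty relation for the conjugate pair of position and momentum reads, in LaTeX notation,

    \Delta x \, \Delta p \ge \frac{\hbar}{2}

where Δx and Δp are the respective uncertainties and ħ is the reduced Planck constant. The analogy described in the article casts the input feature in the role of position and the loss gradient in the role of momentum; the quantitative neural-network counterpart of this bound is derived in the cited paper.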

“The significance of this research is far-reaching,” notes Prof. Meng, an expert in machine learning and the paper’s corresponding author.

“As neural networks play an increasingly critical role in mission-critical systems, it becomes imperative to understand and strengthen their security. This interdisciplinary research offers a new perspective for demystifying these complex ‘black box’ systems, and could potentially serve as a basis for the design of more secure and interpretable AI models.”

More information:
Jun-Jie Zhang et al., Quantum-inspired vulnerability analysis in neural networks: the role of conjugate variables in system attacks, National Science Review (2024). DOI: 10.1093/nsr/nwae141