Schemes of attacks on machine learning models

Dmitry Namiot

Abstract


This article discusses attack schemes against artificial intelligence systems (machine learning models). Classically, an attack on a machine learning system is a special modification of data at one of the stages of the machine learning pipeline, designed to influence the model in the way the attacker needs. An attack may aim to lower the model's overall accuracy or fairness, or, for example, to force a desired classification result under certain conditions. Other forms of attack involve direct manipulation of the machine learning models themselves (their code), with the same goals as above. There is also a special class of attacks aimed at extracting the model's logic (algorithm) or information about its training data set. In the latter case, no data are modified; instead, specially crafted multiple queries to the model are used.
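As a minimal illustration of the first kind of attack (a data modification designed to influence the model), the sketch below applies an FGSM-style perturbation, in the spirit of Goodfellow et al. (reference [7] below), to a toy logistic-regression classifier. The model, weights, and numbers are illustrative assumptions, not taken from the article.

```python
import numpy as np

# A "trained" toy linear model: score = w.x + b, class 1 if sigmoid(score) > 0.5.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return 1 if 1 / (1 + np.exp(-(w @ x + b))) > 0.5 else 0

# A legitimate input, correctly classified as class 1.
x = np.array([1.0, 0.5])

# Gradient of the logistic loss (true label y = 1) w.r.t. the input x
# is (sigmoid(w.x + b) - 1) * w; FGSM steps in the sign of that gradient.
p = 1 / (1 + np.exp(-(w @ x + b)))
grad_x = (p - 1.0) * w
eps = 0.9
x_adv = x + eps * np.sign(grad_x)

# x_adv is still an ordinary, "legitimate" point of the input space,
# yet the model's decision flips from class 1 to class 0.
print(predict(x), predict(x_adv))
```

The point of the sketch is exactly the property discussed in the abstract: the perturbed input is not malformed in any detectable way, only shifted across the model's decision boundary.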

A common problem with attacks on machine learning models is that the modified data are just as legitimate as the unmodified data. Accordingly, there is no explicit way to unambiguously identify such attacks, and their effect, in the form of incorrect functioning of the model, can also manifest itself without any targeted impact. In fact, all discriminative models are subject to such attacks.
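The query-based extraction attacks mentioned above can also be sketched with legitimate traffic only: the attacker sends ordinary queries to a black-box model and fits a surrogate to the responses. The victim model and its interface here are hypothetical assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Victim: a linear classifier whose weights W are secret;
# the attacker can only call victim() as a black box.
W = np.array([1.5, -2.0])

def victim(X):
    return (X @ W > 0).astype(int)

# Attacker: sample query points, record the victim's labels,
# and fit a surrogate by least squares on the (+/-1) labels.
X = rng.normal(size=(500, 2))
y = victim(X)
w_hat, *_ = np.linalg.lstsq(X, 2 * y - 1, rcond=None)

# The surrogate reproduces the victim's decisions on fresh inputs.
X_test = rng.normal(size=(200, 2))
agreement = np.mean((X_test @ w_hat > 0) == victim(X_test))
print(f"surrogate agreement: {agreement:.2f}")
```

Every query in this sketch is indistinguishable from normal use of the model, which is why such extraction is hard to detect by inspecting individual requests.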


Full Text: PDF (Russian)

References


DOI: 10.5281/zenodo.7963386

Ilyushin, Eugene, Dmitry Namiot, and Ivan Chizhov. "Attacks on machine learning systems - common problems and methods." International Journal of Open Information Technologies 10.3 (2022): 17-22. (in Russian)

Kostyumov, Vasily. "A survey and systematization of evasion attacks in computer vision." International Journal of Open Information Technologies 10.10 (2022): 11-20.

Artificial Intelligence in Cybersecurity. https://cs.msu.ru/node/3732 (in Russian) Retrieved: Dec, 2022

Master's program "Software for Computer Networks" http://master.cmc.msu.ru/?q=ru/node/3318 (in Russian) Retrieved: Feb, 2023

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "The rationale for working on robust machine learning." International Journal of Open Information Technologies 9.11 (2021): 68-74. (in Russian)

Namiot, Dmitry, and Eugene Ilyushin. "Data shift monitoring in machine learning models." International Journal of Open Information Technologies 10.12 (2022): 84-93. (in Russian)

Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv, 2014; arXiv:1412.6572.

Namiot, Dmitry, and Eugene Ilyushin. "On the reasons for the failures of machine learning projects." International Journal of Open Information Technologies 11.1 (2023): 60-69. (in Russian)

Namiot, Dmitry, and Eugene Ilyushin. "On the robustness and security of Artificial Intelligence systems." International Journal of Open Information Technologies 10.9 (2022): 126-134. (in Russian)

Facebook wants machines to see the world through our eyes https://www.technologyreview.com/2021/10/14/1037043/facebook-machine-learning-ai-vision-see-world-human-eyes/ Retrieved: Mar, 2022

First white-box testing model finds thousands of errors in self-driving cars https://www.eurekalert.org/news-releases/596974 Retrieved: Mar, 2022

Namiot, Dmitry. "Introduction to Data Poison Attacks on Machine Learning Models." International Journal of Open Information Technologies 11.3 (2023): 58-68. (in Russian)

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "Artificial intelligence and cybersecurity." International Journal of Open Information Technologies 10.9 (2022): 135-147. (in Russian)

Bagdasaryan, Eugene, and Vitaly Shmatikov. "Blind backdoors in deep learning models." Usenix Security. 2021

ONNX https://onnx.ai/ Retrieved: Dec, 2022

Fickling https://github.com/trailofbits/fickling Retrieved: Dec, 2022

WEAPONIZING MACHINE LEARNING MODELS WITH RANSOMWARE https://hiddenlayer.com/research/weaponizing-machine-learning-models-with-ransomware/ Retrieved: Dec, 2022

HuggingFace https://huggingface.co/ Retrieved: Dec, 2022

TensorFlow Hub https://www.tensorflow.org/hub/overview Retrieved: Dec, 2022

Parker, Sandra, Zhe Wu, and Panagiotis D. Christofides. "Cybersecurity in process control, operations, and supply chain." Computers & Chemical Engineering (2023): 108169.

Costales, Robby, et al. "Live trojan attacks on deep neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2020.

Namiot, Dmitry, Eugene Ilyushin, and Oleg Pilipenko. "On Trusted AI Platforms." International Journal of Open Information Technologies 10.7 (2022): 119-127. (in Russian)

Li, Qingru, et al. "A Label Flipping Attack on Machine Learning Model and Its Defense Mechanism." Algorithms and Architectures for Parallel Processing: 22nd International Conference, ICA3PP 2022, Copenhagen, Denmark, October 10–12, 2022, Proceedings. Cham: Springer Nature Switzerland, 2023.

Steinhardt, Jacob, Pang Wei W. Koh, and Percy S. Liang. "Certified defenses for data poisoning attacks." Advances in neural information processing systems 30 (2017).

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "Ongoing academic and industrial projects dedicated to robust machine learning." International Journal of Open Information Technologies 9.10 (2021): 35-46. (in Russian)

Xue, Mingfu, et al. "Intellectual property protection for deep learning models: Taxonomy, methods, attacks, and evaluations." IEEE Transactions on Artificial Intelligence 3.6 (2021): 908-923.

MITRE ATT&CK https://attack.mitre.org/ Retrieved: Dec, 2022

Adversarial ML Threat Matrix https://github.com/mitre/advmlthreatmatrix Retrieved: Dec, 2022

ML | Underfitting and Overfitting https://www.geeksforgeeks.org/underfitting-and-overfitting-in-machine-learning/ Retrieved: Dec, 2022

Szegedy, Christian, et al. "Intriguing properties of neural networks." arXiv preprint arXiv:1312.6199 (2013).

Daniely, Amit, and Hadas Shacham. "Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations." Advances in Neural Information Processing Systems 33 (2020): 6629-6636.

Yang, Yao-Yuan, et al. "A closer look at accuracy vs. robustness." Advances in neural information processing systems 33 (2020): 8588-8601.

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "On the Practical Generation of Counterfactual Examples." https://damdid2022.frccsc.ru/files/article/DAMDID_2022_paper_7030.pdf Retrieved: Dec, 2022

A Practical Guide to Adversarial Robustness https://www.fiddler.ai/blog/a-practical-guide-to-adversarial-robustness Retrieved: Dec, 2022

Kurakin, Alexey, et al. "Adversarial attacks and defences competition." The NIPS'17 Competition: Building Intelligent Systems. Springer International Publishing, 2018.

Carlini, Nicholas, and David Wagner. "Towards evaluating the robustness of neural networks." 2017 ieee symposium on security and privacy (sp). IEEE, 2017.

Papernot, Nicolas, et al. "Practical black-box attacks against machine learning." Proceedings of the 2017 ACM on Asia conference on computer and communications security. 2017.

Namiot, Dmitry, and Eugene Ilyushin. "Generative Models in Machine Learning." International Journal of Open Information Technologies 10.7 (2022): 101-118.

Adi, Erwin, Zubair Baig, and Sherali Zeadally. "Artificial Intelligence for Cybersecurity: Offensive Tactics, Mitigation Techniques and Future Directions." Applied Cybersecurity & Internet Governance 1 (2022).

Bai, Tao, et al. "Ai-gan: Attack-inspired generation of adversarial examples." 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021.

Shumailov, Ilia, et al. "Sponge examples: Energy-latency attacks on neural networks." 2021 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2021.

Qiu, Han, et al. "Adversarial attacks against network intrusion detection in IoT systems." IEEE Internet of Things Journal 8.13 (2020): 10327-10335.

Apruzzese, Giovanni, et al. "Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples." IEEE Transactions on Network and Service Management (2022).

Ilyushin, Eugene, and Dmitry Namiot. "An approach to the automatic enhancement of the robustness of ML models to external influences on the example of the problem of biometric speaker identification by voice." International Journal of Open Information Technologies 9.6 (2021): 11-19. (in Russian)

Fawaz, Hassan Ismail, et al. "Adversarial attacks on deep neural networks for time series classification." 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019.

Zhang, Huangzhao, et al. "Generating fluent adversarial examples for natural languages." arXiv preprint arXiv:2007.06174 (2020).

Du, Andrew, et al. "Physical adversarial attacks on an aerial imagery object detector." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2022.

Kupriyanovsky, Vasily, et al. "On internet of digital railway." International journal of open information technologies 4.12 (2016): 53-68.

Nikolaev, D. E., et al. "Digital railway: innovative standards and their role, on the example of Great Britain." International Journal of Open Information Technologies 4.10 (2016): 55-61. (in Russian)

Nassi, Ben, et al. "Phantom of the adas: Phantom attacks on driver-assistance systems." Cryptology ePrint Archive (2020).

Knitting an anti-surveillance jumper https://kddandco.com/2022/11/02/knitting-an-anti-surveillance-jumper/ Retrieved: Apr 2023

Guetta, Nitzan, et al. "Dodging attack using carefully crafted natural makeup." arXiv preprint arXiv:2109.06467 (2021).

ArcFace https://github.com/chenggongliang/arcface Retrieved: Apr 2023

Gao, Yansong, et al. "Backdoor attacks and countermeasures on deep learning: A comprehensive review." arXiv preprint arXiv:2007.10760 (2020).

Gu, Tianyu, et al. "Badnets: Evaluating backdooring attacks on deep neural networks." IEEE Access 7 (2019): 47230-47244.

Salem, Ahmed, Michael Backes, and Yang Zhang. "Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks." arXiv preprint arXiv:2010.03282 (2020).

Gan, Leilei, et al. "Triggerless backdoor attack for NLP tasks with clean labels." arXiv preprint arXiv:2111.07970 (2021).

Adi, Yossi, et al. "Turning your weakness into a strength: Watermarking deep neural networks by backdooring." 27th {USENIX} Security Symposium ({USENIX} Security 18). 2018.

Gao, Yansong, et al. "Strip: A defence against trojan attacks on deep neural networks." Proceedings of the 35th Annual Computer Security Applications Conference. 2019.

Chen, Xinyun, et al. "Targeted backdoor attacks on deep learning systems using data poisoning." arXiv preprint arXiv:1712.05526 (2017).

TrojAI - Trojans in Artificial Intelligence https://www.nist.gov/itl/ssd/trojai Retrieved: Apr, 2023

White Paper NIST AI 100-2e2023 (Draft) Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations https://csrc.nist.gov/publications/detail/white-paper/2023/03/08/adversarial-machine-learning-taxonomy-and-terminology/draft Retrieved: Apr, 2023

Malekzadeh, Mohammad, and Deniz Gunduz. "Vicious Classifiers: Data Reconstruction Attack at Inference Time." arXiv preprint arXiv:2212.04223 (2022).

Song, Junzhe, and Dmitry Namiot. "A Survey of Model Inversion Attacks and Countermeasures."

Zhang, Jiliang, et al. "Privacy threats and protection in machine learning." Proceedings of the 2020 on Great Lakes Symposium on VLSI. 2020.

Carlini, Nicholas, et al. "The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks." USENIX Security Symposium. Vol. 267. 2019.

Hisamoto, Sorami, Matt Post, and Kevin Duh. "Membership inference attacks on sequence-to-sequence models: Is my data in your machine translation system?." Transactions of the Association for Computational Linguistics 8 (2020): 49-63.

De Cristofaro, Emiliano. "An overview of privacy in machine learning." arXiv preprint arXiv:2005.08679 (2020).





ISSN: 2307-8162