The rationale for working on robust machine learning

Dmitry Namiot, Eugene Ilyushin, Ivan Chizhov

Abstract


With the growing use of machine learning systems, which today are, from a practical point of view, regarded as artificial intelligence systems, attention to the reliability (robustness) of such systems and solutions is growing as well. For so-called critical applications, such as real-time decision-making systems, special-purpose systems, etc., robustness is decisive for practical use. Machine learning systems (artificial intelligence systems, now effectively a synonym) can be deployed in such areas only with a proof of robustness, that is, a determination of guaranteed performance parameters. Robustness problems arise because the characteristics of the data differ between training and testing (practical application). An additional difficulty is that, in addition to natural causes (unbalanced samples, measurement errors, etc.), the data can be deliberately modified. These are the so-called attacks on machine learning systems. Accordingly, it is impossible to speak of the reliability of machine learning systems without protection against such actions, and attacks can target both the data and the models themselves.
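
The last point, that data can be deliberately modified, is the subject of adversarial machine learning. As a purely illustrative sketch (not taken from the paper; the toy model, the input dimension, and the 0.25 perturbation budget are all assumptions), the following NumPy code applies a fast-gradient-sign-style perturbation to the input of a simple logistic-regression classifier and shows how a small, bounded change to the input can flip a confident prediction.

# Illustrative sketch only: a fast-gradient-sign-style perturbation of the
# input of a toy logistic-regression classifier. All names and numbers are
# assumptions chosen to show how a small, bounded modification of the data
# can change a model's answer.
import numpy as np

rng = np.random.default_rng(0)

# A toy "trained" linear model: p = sigmoid(w @ x + b)
w = rng.normal(size=20)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def input_gradient(x, label):
    # Gradient of the binary cross-entropy loss with respect to the input x
    return (predict(x) - label) * w

x = rng.normal(size=20)                    # a "clean" input
label = 1.0 if predict(x) > 0.5 else 0.0   # the model's own confident answer

eps = 0.25                                 # L_infinity perturbation budget
x_adv = x + eps * np.sign(input_gradient(x, label))

print("clean prediction:      ", round(float(predict(x)), 3))
print("perturbed prediction:  ", round(float(predict(x_adv)), 3))
print("max per-feature change:", round(float(np.max(np.abs(x_adv - x))), 3))

In a robustness (stability) analysis, the question is precisely whether such bounded input changes can push the model outside its guaranteed performance parameters.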




