An approach to automatically enhancing the robustness of ML models to external influences, using biometric speaker identification by voice as an example

Eugene Ilyushin, Dmitry Namiot


AI technologies, which have gained momentum in recent years thanks to the availability of large amounts of data and computing resources, have profoundly influenced many areas of human life and have become an integral part of some of them. Of particular note is the emergence of various means of biometric user identification, which today are used both in general-purpose and in critical systems. Such tools include biometric identification by fingerprint, face, voice, iris, hand geometry, etc. We rely on them daily when we unlock smartphones and personal computers or interact with banks, which have begun to use biometric identification widely when working with customers. The reliability of biometric user identification is therefore of undoubted importance and requires due attention from information security specialists. Since these systems are usually based on ML models, the models' robustness to external influences plays a key role. In this paper, we present an approach to automatically increasing the robustness of ML models to external influences, using the speaker identification problem as an example.
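The abstract does not detail the authors' method, but the kind of "external influence" at issue is typically an adversarial perturbation of the input. As illustrative context only, the sketch below shows the fast gradient sign method (FGSM) of Goodfellow et al. applied to a toy logistic-regression "speaker verifier" over a synthetic feature vector; the model, feature dimensions, and epsilon value are assumptions for the example and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch, NOT the paper's method: FGSM (Goodfellow et al.)
# crafts a perturbation that increases the model's loss, and training on
# such examples is one standard way to harden a model.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_input_grad(w, b, x, y):
    """Binary cross-entropy loss and its gradient w.r.t. the input x."""
    p = sigmoid(np.dot(w, x) + b)
    loss = -(y * np.log(p + 1e-12) + (1.0 - y) * np.log(1.0 - p + 1e-12))
    grad_x = (p - y) * w  # dL/dx for logistic regression
    return loss, grad_x

def fgsm(w, b, x, y, eps=0.1):
    """Perturb x by eps along the sign of the input gradient."""
    _, grad_x = loss_and_input_grad(w, b, x, y)
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)          # toy model weights
b = 0.0
x = rng.normal(size=8)          # stand-in for a voice feature vector
y = 1.0                         # "genuine speaker" label

clean_loss, _ = loss_and_input_grad(w, b, x, y)
x_adv = fgsm(w, b, x, y, eps=0.1)
adv_loss, _ = loss_and_input_grad(w, b, x_adv, y)

# For this convex toy loss the FGSM step can never decrease the loss.
print(adv_loss >= clean_loss)  # True
```

In adversarial training, such perturbed inputs would be mixed into the training batches so the model learns to classify them correctly; for real speaker-identification systems the perturbation is applied to audio features or waveforms rather than a random vector.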

Full Text: PDF (Russian)






ISSN: 2307-8162