Artificial intelligence and cybersecurity

Dmitry Namiot, Eugene Ilyushin, Ivan Chizhov


In this article, we consider the relationship between artificial intelligence systems and cybersecurity. In the modern interpretation, artificial intelligence systems are machine learning systems; sometimes the term is narrowed even further to artificial neural networks. Given the ever-widening penetration of machine learning into various areas of information technology, intersections with cybersecurity naturally arise. The problem is that no single model can describe this intersection: combinations of artificial intelligence and cybersecurity have many different applications. What they share is the use of machine learning methods, but the tasks, as well as the results achieved to date, differ completely. For example, while the use of machine learning for attack and intrusion detection shows real gains over previously used approaches, attacks on machine learning systems themselves have so far defeated every proposed defense. This article is devoted to a classification of models for the application of machine learning in cybersecurity.
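The claim that evasion attacks still overpower defenses can be illustrated with the classic fast gradient sign method (FGSM). The sketch below is a hypothetical, minimal example against a toy logistic-regression "detector" (the weights and sample are invented for illustration and do not come from any of the referenced systems):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, eps):
    """Fast Gradient Sign Method for a logistic model.

    For the cross-entropy loss of the positive class, the gradient
    w.r.t. the input x is (sigmoid(w.x + b) - 1) * w; stepping in the
    direction of its sign maximally increases the loss under an
    L-infinity budget eps.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad = (p - 1.0) * w          # dLoss/dx for true label y = 1
    return x + eps * np.sign(grad)

# Toy "malware detector": weights and a sample the model flags as malicious.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.5, 0.2, 1.0])     # score w.x = 0.6 -> classified malicious

x_adv = fgsm(x, w, b, eps=0.2)
print(sigmoid(np.dot(w, x) + b) > 0.5)      # True: original verdict is malicious
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # False: verdict flips after perturbation
```

Each feature moves by at most 0.2, yet the decision flips: for a linear model the score drops by exactly eps times the L1 norm of the weights, which is why small, bounded perturbations are enough whenever the decision margin is thin.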

Full Text: PDF (Russian)


References:

AI Has Redefined Computers

Applications for artificial intelligence in Department of Defense cyber missions

Master's Program "Artificial Intelligence in Cybersecurity"

Information Security Analysts

Cybersecurity Workforce Study

Kouliaridis, Vasileios, and Georgios Kambourakis. "A comprehensive survey on machine learning techniques for android malware detection." Information 12.5 (2021): 185.

AV-Test Institute

ML for malware detection

Yuan, Zhenlong, et al. "Droid-sec: deep learning in android malware detection." Proceedings of the 2014 ACM conference on SIGCOMM. 2014.

Vinayakumar, R., et al. "Robust intelligent malware detection using deep learning." IEEE Access 7 (2019): 46717-46738.

Using fuzzy hashing and deep learning to counter malware detection evasion techniques

Tajaddodianfar, Farid, Jack W. Stokes, and Arun Gururajan. "Texception: a character/word-level deep learning model for phishing URL detection." ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020.

Basnet, Ram, Srinivas Mukkamala, and Andrew H. Sung. "Detection of phishing attacks: A machine learning approach." Soft computing applications in industry. Springer, Berlin, Heidelberg, 2008. 373-383.

Divakaran, Dinil Mon, and Adam Oest. "Phishing Detection Leveraging Machine Learning and Deep Learning: A Review." arXiv preprint arXiv:2205.07411 (2022).

Shenfield, Alex, David Day, and Aladdin Ayesh. "Intelligent intrusion detection systems using artificial neural networks." Ict Express 4.2 (2018): 95-99.

Mishra, Preeti, et al. "A detailed investigation and analysis of using machine learning techniques for intrusion detection." IEEE Communications Surveys & Tutorials 21.1 (2018): 686-728.

Alsaheel, Abdulellah, et al. "ATLAS: A sequence-based learning approach for attack investigation." 30th USENIX Security Symposium (USENIX Security 21). 2021.

Ongun, Talha, et al. "Living-Off-The-Land Command Detection Using Active Learning." 24th International Symposium on Research in Attacks, Intrusions and Defenses. 2021.

Kok, S., et al. "Ransomware, threat and detection techniques: A review." Int. J. Comput. Sci. Netw. Secur 19.2 (2019): 136.

Wu, Yirui, Dabao Wei, and Jun Feng. "Network attacks detection methods based on deep learning techniques: a survey." Security and Communication Networks 2020 (2020).

Xin, Yang, et al. "Machine learning and deep learning methods for cybersecurity." IEEE Access 6 (2018): 35365-35381.

Noor, Umara, et al. "A machine learning framework for investigating data breaches based on semantic analysis of adversary’s attack patterns in threat intelligence repositories." Future Generation Computer Systems 95 (2019): 467-487.

Pitropakis, Nikolaos, et al. "An enhanced cyber attack attribution framework." International Conference on Trust and Privacy in Digital Business. Springer, Cham, 2018.

Enhanced Attribution

Gao, Peng, et al. "Enabling efficient cyber threat hunting with cyber threat intelligence." 2021 IEEE 37th International Conference on Data Engineering (ICDE). IEEE, 2021.

Gao, Peng, et al. "A system for automated open-source threat intelligence gathering and management." Proceedings of the 2021 International Conference on Management of Data. 2021.

Yamin, Muhammad Mudassar, et al. "Weaponized AI for cyber attacks." Journal of Information Security and Applications 57 (2021): 102722.

Mirsky, Yisroel, et al. "The threat of offensive ai to organizations." arXiv preprint arXiv:2106.15764 (2021).

Javed, Abdul Rehman, Mirza Omer Beg, Muhammad Asim, Thar Baker, and Ali Hilal Al-Bayatti. "AlphaLogger: Detecting motion-based side-channel attack using smartphone keystrokes." Journal of Ambient Intelligence and Humanized Computing (2020): 1-14.

Marquardt, Philip, et al. "(sp)iPhone: Decoding vibrations from nearby keyboards using mobile phone accelerometers." Proceedings of the 18th ACM Conference on Computer and Communications Security. 2011. 551-562.

Abid, Y., Abdessamad Imine, and Michaël Rusinowitch. "Sensitive Attribute Prediction for Social Networks Users." EDBT/ICDT Workshops. 2018.

Jiang, Jian, et al. "A Survey of the Software Vulnerability Discovery Using Machine Learning Techniques." International Conference on Artificial Intelligence and Security. Springer, 2019. 308-317.

Lin, Guanjun, et al. "Software Vulnerability Detection Using Deep Neural Networks: A Survey." Proceedings of the IEEE 108.10 (2020): 1825-1848.

Mokhov, Serguei A., Joey Paquet, and Mourad Debbabi. "The Use of NLP Techniques in Static Code Analysis to Detect Weaknesses and Vulnerabilities." Advances in Artificial Intelligence. Springer, Cham, 2014. 326-332.

Mirsky, Yisroel, et al. "CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning." 28th USENIX Security Symposium (USENIX Security 19). USENIX Association, 2019. 461-47.

Garg, Vernit, and Laxmi Ahuja. "Password Guessing Using Deep Learning." 2019 2nd International Conference on Power Energy, Environment and Intelligent Control (PEEIC). IEEE, 2019. 38-40.

Han, Dongqi, et al. "Practical traffic-space adversarial attacks on learning-based NIDSs." arXiv preprint arXiv:2005.07519 (2020).

Mirsky, Yisroel, and Wenke Lee. "The creation and detection of deepfakes: A survey." ACM Computing Surveys (CSUR) 54.1 (2021): 1-41.

Rahman, Tanzila, Mrigank Rochan, and Yang Wang. "Video-based person re-identification using refined attention networks." 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2019.

Zhu, X., et al. "Video-Based Person Re-Identification by Simultaneously Learning Intra-Video and Inter-Video Distance Metrics." IEEE Transactions on Image Processing 27.11 (2018): 5683-5695.

Gavai, Gaurang, et al. "Detecting insider threat from enterprise social and online activity data." Proceedings of the 7th ACM CCS international workshop on managing insider security threats. 2015.

Zhou, Qingyu, et al. "Neural document summarization by jointly learning to score and select sentences." arXiv preprint arXiv:1807.02305 (2018).

Castiglione, Aniello, et al. "A botnet-based command and control approach relying on swarm intelligence." Journal of Network and Computer Applications 38 (2014): 22-33.

Bland, John A., et al. "Machine Learning Cyberattack and Defense Strategies." Computers & Security 92 (2020): 101738.

B. Buchanan, J. Bansemer, D. Cary, et al., Automating Cyber Attacks: Hype and Reality, Center for Security and Emerging Technology, November 2020.

How cyberattacks are changing according to new Microsoft Digital Defense Report

Virtualization/Sandbox Evasion, Technique T1497 – Enterprise | MITRE ATT&CK

Himelein-Wachowiak, McKenzie, et al. "Bots and misinformation spread on social media: Implications for COVID-19." Journal of Medical Internet Research 23.5 (2021): e26933.

Deep Exploit

Biggio, Battista, et al. "Adversarial biometric recognition: A review on biometric system security from the adversarial machine-learning perspective." IEEE Signal Processing Magazine 32.5 (2015): 31-41.

Jain, Anil K., Debayan Deb, and Joshua J. Engelsma. "Biometrics: Trust, but verify." arXiv preprint arXiv:2105.06625 (2021).

AlEroud, Ahmed, and George Karabatis. "Bypassing detection of URL-based phishing attacks using generative adversarial deep neural networks." Proceedings of the Sixth International Workshop on Security and Privacy Analytics. 2020.

J. Seymour and P. Tully, Generative Models for Spear Phishing Posts on Social Media, 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 2017.

Implications of Artificial Intelligence for Cybersecurity: A Workshop, National Academy of Sciences, 2019.

B. Hitaj, P. Gasti, G. Ateniese, F. Perez-Cruz, PassGAN: A Deep Learning Approach for Password Guessing, NeurIPS 2018 Workshop on Security in Machine Learning (SecML’18), December 2018.

Datta, S. "DeepObfusCode: Source Code Obfuscation through Sequence-to-Sequence Networks." In: Arai, K. (ed.) Intelligent Computing. Lecture Notes in Networks and Systems, vol. 284. Springer, Cham.

J. Li, L. Zhou, H. Li, L. Yan and H. Zhu, “Dynamic Traffic Feature Camouflaging via Generative Adversarial Networks,” 2019 IEEE Conference on Communications and Network Security (CNS), 2019, pp. 268-276


National Security Commission on Artificial Intelligence report

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "Ongoing academic and industrial projects dedicated to robust machine learning." International Journal of Open Information Technologies 9.10 (2021): 35-46. (in Russian)

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "The rationale for working on robust machine learning." International Journal of Open Information Technologies 9.11 (2021): 68-74. (in Russian)

Ilyushin, Eugene, Dmitry Namiot, and Ivan Chizhov. "Attacks on machine learning systems-common problems and methods." International Journal of Open Information Technologies 10.3 (2022): 17-22. (in Russian)

TensorFlow: Vulnerability Statistics

Xiao, Qixue, et al. “Security risks in deep learning implementations.” 2018 IEEE Security and privacy workshops (SPW). IEEE, 2018.

Chen, Hongsong, et al. "Security issues and defensive approaches in deep learning frameworks." Tsinghua Science and Technology 26.6 (2021): 894-905.

He, Yingzhe, et al. "Towards security threats of deep learning systems: A survey." IEEE Transactions on Software Engineering (2020).

Gu, Tianyu, Brendan Dolan-Gavitt, and Siddharth Garg. "Badnets: Identifying vulnerabilities in the machine learning model supply chain." arXiv preprint arXiv:1708.06733 (2017).

Major ML datasets have tens of thousands of errors

Northcutt, Curtis G., Anish Athalye, and Jonas Mueller. "Pervasive label errors in test sets destabilize machine learning benchmarks." arXiv preprint arXiv:2103.14749 (2021).

Yu, Honggang, et al. "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples." NDSS. 2020.

Yang, Ziqi, et al. "Neural network inversion in adversarial setting via background knowledge alignment." Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 2019.

Kumar, Ram Shankar Siva, et al. "Adversarial machine learning-industry perspectives." 2020 IEEE Security and Privacy Workshops (SPW). IEEE, 2020.

den Hollander, Richard, et al. "Adversarial patch camouflage against aerial detection." Artificial Intelligence and Machine Learning in Defense Applications II. Vol. 11543. SPIE, 2020.

Namiot, Dmitry, and Eugene Ilyushin. "Generative Models in Machine Learning." International Journal of Open Information Technologies 10.7 (2022): 101-118. (in Russian)

Ribeiro, Mauro, Katarina Grolinger, and Miriam AM Capretz. "Mlaas: Machine learning as a service." 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA). IEEE, 2015.

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "On a formal verification of machine learning systems." International Journal of Open Information Technologies 10.5 (2022): 30-34.

China invests in artificial intelligence to counter US Joint Warfighting Concept: Records

Madry, Aleksander, et al. "Towards deep learning models resistant to adversarial attacks." arXiv preprint arXiv:1706.06083 (2017).

Namiot, Dmitry, Eugene Ilyushin, and Oleg Pilienko. "On Trusted AI Platforms." International Journal of Open Information Technologies 10.7 (2022): 119-127.

Li, Huayu, and Dmitry Namiot. "A Survey of Adversarial Attacks and Defenses for image data on Deep Learning." International Journal of Open Information Technologies 10.5 (2022): 9-16.


Failure Modes in Machine Learning



DOD Adopts Ethical Principles for Artificial Intelligence

AI Risk Management

Fedushko, Solomia. "Artificial Intelligence Technologies Using in Social Engineering Attacks." (2020).

The Liar’s Dividend: The Impact of Deepfakes and Fake News on Politician Support and Trust in Media

Semantic Forensics (SemaFor)


C2PA Releases Specification of World’s First Industry Standard for Content Provenance, Coalition for Content Provenance and Authenticity, January 26, 2022.

A Milestone Reached

Deepfake Task Force Act, S. 2559, 117th Congress.

Project Origin

Aythora, J., et al. "Multi-stakeholder Media Provenance Management to Counter Synthetic Media Risks in News Publishing." International Broadcasting Convention 2020 (IBC 2020), Amsterdam, NL, 2020.

Content Authenticity Initiative

Coalition for Content Provenance and Authenticity (C2PA),

Chan, Christopher Chun Ki, et al. "Combating deepfakes: Multi-LSTM and blockchain as proof of authenticity for digital media." 2020 IEEE/ITU International Conference on Artificial Intelligence for Good (AI4G). IEEE, 2020.

Smith, Hannah, and Katherine Mansted. "Weaponised deep fakes." (2020).

Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "Military applications of machine learning." International Journal of Open Information Technologies 10.1 (2021): 69-76.

Defence Artificial Intelligence Strategy

Government launches Defence Centre for AI Research

Shumailov, Ilia, et al. "Sponge examples: Energy-latency attacks on neural networks." 2021 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2021.




ISSN: 2307-8162