On the robustness and security of Artificial Intelligence systems
References
Artificial Intelligence in Cybersecurity (in Russian). http://master.cmc.msu.ru/?q=ru/node/3496 Retrieved: May, 2022.
Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "Ongoing academic and industrial projects dedicated to robust machine learning." International Journal of Open Information Technologies 9.10 (2021): 35-46.
Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "The rationale for working on robust machine learning." International Journal of Open Information Technologies 9.11 (2021): 68-74.
Why Robustness is not Enough for Safety and Security in Machine Learning. https://towardsdatascience.com/why-robustness-is-not-enough-for-safety-and-security-in-machine-learning-1a35f6706601
Borg, Markus, et al. "Safely entering the deep: A review of verification and validation for machine learning and a challenge elicitation in the automotive industry." arXiv preprint arXiv:1812.05389 (2018).
Namiot, Dmitry, Eugene Ilyushin, and Oleg Pilipenko. "On Trusted AI Platforms." International Journal of Open Information Technologies 10.7 (2022): 119-127.
Ilyushin, Eugene, Dmitry Namiot, and Ivan Chizhov. "Attacks on machine learning systems-common problems and methods." International Journal of Open Information Technologies 10.3 (2022): 17-22.
Namiot, Dmitry, Eugene Ilyushin, and Ivan Chizhov. "On a formal verification of machine learning systems." International Journal of Open Information Technologies 10.5 (2022): 30-34.
Wang, Bolun, et al. "Neural cleanse: Identifying and mitigating backdoor attacks in neural networks." 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 2019.
Zhang, Yuheng, et al. "The secret revealer: Generative model-inversion attacks against deep neural networks." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
Rigaki, Maria, and Sebastian Garcia. "A survey of privacy attacks in machine learning." arXiv preprint arXiv:2007.07646 (2020).
Cool Or Creepy? Facebook Is Building An AI That Sees The World Like Humans Do. https://wechoiceblogger.com/cool-or-creepy-facebook-is-building-an-ai-that-sees-the-world-like-humans-do/
Tian, Yuchi, et al. "DeepTest: Automated testing of deep-neural-network-driven autonomous cars." Proceedings of the 40th International Conference on Software Engineering. 2018.
Allen-Zhu, Zeyuan, and Yuanzhi Li. "Feature purification: How adversarial training performs robust deep learning." 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2022.
Goodfellow, Ian, Patrick McDaniel, and Nicolas Papernot. "Making machine learning robust against adversarial inputs." Communications of the ACM 61.7 (2018): 56-66.
Dong, Guozhu, and Huan Liu, eds. Feature engineering for machine learning and data analytics. CRC Press, 2018.
Ruan, Wenjie, et al. "Global robustness evaluation of deep neural networks with provable guarantees for the Hamming distance." International Joint Conferences on Artificial Intelligence Organization, 2019.
Gopinath, Divya, et al. "DeepSafe: A data-driven approach for assessing robustness of neural networks." International Symposium on Automated Technology for Verification and Analysis. Springer, Cham, 2018.
Plex: Towards Reliability in Deep Learning. https://ai.googleblog.com/2022/07/towards-reliability-in-deep-learning.html
Shafaei, Sina, et al. "Uncertainty in machine learning: A safety perspective on autonomous driving." International Conference on Computer Safety, Reliability, and Security. Springer, Cham, 2018.
Herrera, Francisco. Dataset Shift in Classification: Approaches and Problems. http://iwann.ugr.es/2011/pdf/InvitedTalk-FHerreraIWANN11.pdf Retrieved: Jul, 2022.
Lu, Jie, et al. "Learning under concept drift: A review." IEEE Transactions on Knowledge and Data Engineering 31.12 (2018): 2346-2363.
Fijalkow, Nathanaël, and Mohit Kumar Gupta. "Verification of neural networks: Specifying global robustness using generative models." arXiv preprint arXiv:1910.05018 (2019).
Everything You “Know” About Software and Safety is Probably Wrong https://2020.icse-conferences.org/details/icse-2020-plenary/8/Everything-You-Know-About-Software-and-Safety-is-Probably-Wrong
Identifying and eliminating bugs in learned predictive models. https://www.deepmind.com/blog/identifying-and-eliminating-bugs-in-learned-predictive-models
ISSN: 2307-8162