A survey and systematization of evasion attacks in computer vision
Abstract
References
Intriguing properties of neural networks / Christian Szegedy, Wojciech Zaremba, Ilya Sutskever et al. // arXiv preprint arXiv:1312.6199. — 2013.
Universal adversarial perturbations / Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard // Proceedings of the IEEE conference on computer vision and pattern recognition. — 2017. — P. 1765–1773.
Su Jiawei, Vargas Danilo Vasconcellos, Sakurai Kouichi. One pixel attack for fooling deep neural networks // IEEE Transactions on Evolutionary Computation. — 2019. — Vol. 23, no. 5. — P. 828–841.
ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models / Pin-Yu Chen, Huan Zhang, Yash Sharma et al. // Proceedings of the 10th ACM workshop on artificial intelligence and security. — 2017. — P. 15–26.
Ilyas Andrew, Engstrom Logan, Madry Aleksander. Prior convictions: Black-box adversarial attacks with bandits and priors // arXiv preprint arXiv:1807.07978. — 2018.
Square attack: a query-efficient black-box adversarial attack via random search / Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, Matthias Hein // European Conference on Computer Vision / Springer. — 2020. — P. 484–501.
Brendel Wieland, Rauber Jonas, Bethge Matthias. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models // arXiv preprint arXiv:1712.04248. — 2017.
Chen Jianbo, Jordan Michael I, Wainwright Martin J. HopSkipJumpAttack: A query-efficient decision-based attack // 2020 IEEE Symposium on Security and Privacy (SP) / IEEE. — 2020. — P. 1277–1294.
Robust physical-world attacks on deep learning visual classification / Kevin Eykholt, Ivan Evtimov, Earlence Fernandes et al. // Proceedings of the IEEE conference on computer vision and pattern recognition. — 2018. — P. 1625–1634.
Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition / Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K Reiter // Proceedings of the 2016 ACM SIGSAC conference on computer and communications security. — 2016. — P. 1528–1540.
Physical adversarial examples for object detectors / Dawn Song, Kevin Eykholt, Ivan Evtimov et al. // 12th USENIX workshop on offensive technologies (WOOT 18). — 2018.
Adversarial T-shirt! Evading person detectors in a physical world / Kaidi Xu, Gaoyuan Zhang, Sijia Liu et al. // European conference on computer vision / Springer. — 2020. — P. 665–681.
Mind your weight(s): A large-scale study on insufficient machine learning model protection in mobile apps / Zhichuang Sun, Ruimin Sun, Long Lu, Alan Mislove // 30th USENIX Security Symposium (USENIX Security 21). — 2021. — P. 1955–1972.
Nguyen Anh, Yosinski Jason, Clune Jeff. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images // Proceedings of the IEEE conference on computer vision and pattern recognition. — 2015. — P. 427–436.
Practical black-box attacks against machine learning / Nicolas Papernot, Patrick McDaniel, Ian Goodfellow et al. // Proceedings of the 2017 ACM on Asia conference on computer and communications security. — 2017. — P. 506–519.
Papernot Nicolas, McDaniel Patrick, Goodfellow Ian. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples // arXiv preprint arXiv:1605.07277. — 2016.
The limitations of deep learning in adversarial settings / Nicolas Papernot, Patrick McDaniel, Somesh Jha et al. // 2016 IEEE European symposium on security and privacy (EuroS&P) / IEEE. — 2016. — P. 372–387.
Carlini Nicholas, Wagner David. Towards evaluating the robustness of neural networks // 2017 IEEE Symposium on Security and Privacy (SP) / IEEE. — 2017. — P. 39–57.
Adversarial patch / Tom B Brown, Dandelion Mané, Aurko Roy et al. // arXiv preprint arXiv:1712.09665. — 2017.
Croce Francesco, Hein Matthias. Minimally distorted adversarial examples with a fast adaptive boundary attack // International Conference on Machine Learning / PMLR. — 2020. — P. 2196–2205.
Sharma Yash, Chen Pin-Yu. Breaking the Madry defense model with L1-based adversarial examples // arXiv preprint arXiv:1710.10733. — 2017.
Goodfellow Ian J, Shlens Jonathon, Szegedy Christian. Explaining and harnessing adversarial examples // arXiv preprint arXiv:1412.6572. — 2014.
Kurakin Alexey, Goodfellow Ian, Bengio Samy. Adversarial machine learning at scale // arXiv preprint arXiv:1611.01236. — 2016.
Kurakin Alexey, Goodfellow Ian J, Bengio Samy. Adversarial examples in the physical world // Artificial intelligence safety and security. — Chapman and Hall/CRC, 2018. — P. 99–112.
Towards deep learning models resistant to adversarial attacks / Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt et al. // arXiv preprint arXiv:1706.06083. — 2017.
Boosting adversarial attacks with momentum / Yinpeng Dong, Fangzhou Liao, Tianyu Pang et al. // Proceedings of the IEEE conference on computer vision and pattern recognition. — 2018. — P. 9185–9193.
Improving transferability of adversarial examples with input diversity / Cihang Xie, Zhishuai Zhang, Yuyin Zhou et al. // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. — 2019. — P. 2730–2739.
Evading defenses to transferable adversarial examples by translation-invariant attacks / Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. — 2019. — P. 4312–4321.
Nesterov accelerated gradient and scale invariance for adversarial attacks / Jiadong Lin, Chuanbiao Song, Kun He et al. // arXiv preprint arXiv:1908.06281. — 2019.
Nesterov Yurii E. A method for solving the convex programming problem with convergence rate O(1/k^2) // Dokl. Akad. Nauk SSSR. — Vol. 269. — 1983. — P. 543–547.
Croce Francesco, Hein Matthias. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks // International conference on machine learning / PMLR. — 2020. — P. 2206–2216.
Moosavi-Dezfooli Seyed-Mohsen, Fawzi Alhussein, Frossard Pascal. DeepFool: a simple and accurate method to fool deep neural networks // Proceedings of the IEEE conference on computer vision and pattern recognition. — 2016. — P. 2574–2582.
Black-box adversarial attacks with limited queries and information / Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin // International Conference on Machine Learning / PMLR. — 2018. — P. 2137–2146.
Rastrigin L. A. The convergence of the random search method in the extremal control of a many-parameter system // Automation and Remote Control. — 1963. — Vol. 24. — P. 1337–1342.
Delving into transferable adversarial examples and black-box attacks / Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song // arXiv preprint arXiv:1611.02770. — 2016.
Synthesizing robust adversarial examples / Anish Athalye, Logan Engstrom, Andrew Ilyas, Kevin Kwok // International conference on machine learning / PMLR. — 2018. — P. 284–293.
Bookstein Fred L. Principal warps: Thin-plate splines and the decomposition of deformations // IEEE Transactions on pattern analysis and machine intelligence. — 1989. — Vol. 11, no. 6. — P. 567–585.
Ilyushin Eugene, Namiot Dmitry, Chizhov Ivan. Attacks on machine learning systems - common problems and methods // International Journal of Open Information Technologies. — 2022. — Vol. 10, no. 3. — P. 17–22.
Namiot Dmitry, Ilyushin Eugene, Chizhov Ivan. On a formal verification of machine learning systems // International Journal of Open Information Technologies. — 2022. — Vol. 10, no. 5. — P. 30–34.
Li Huayu, Namiot Dmitry. A survey of adversarial attacks and defenses for image data on deep learning // International Journal of Open Information Technologies. — 2022. — Vol. 10, no. 5. — P. 9–16.
Artificial intelligence in cybersecurity. — https://cs.msu.ru/node/3732. — Retrieved: May, 2022. (in Russian).