High Spatial Image Classification Problem: Review of Approaches

Fedor Krasnov, Alexander Butorin

Abstract


Leading experts around the world analyze geophysical images daily, and with the development of computer vision technologies, attempts should be made to automate this process. Image data can be acquired quickly using consumer digital cameras, or with more advanced systems such as satellite imagery, sonar systems, drones, and aerial vehicles. The authors of this article have developed several approaches to the automatic creation of seismic images. The number of obtained images has become sufficient to apply machine learning algorithms to their processing. In the last five years, computer vision techniques have evolved at a high rate, driven largely by the use of Deep Neural Networks (DNNs). It would be reckless to rely only on the latest developments without understanding how they appeared. Therefore, the authors reviewed computer vision approaches to determine the techniques most appropriate for processing high spatial images, which differ from the most popular computer vision tasks (face recognition, detection of pedestrians on the street, etc.). The main result of the paper is a set of research hypotheses for computer vision in Geoscience.

References


D. Cheng, Y. Gong, S. Zhou, J. Wang, and N. Zheng, “Person re-identification by multi-channel parts-based CNN with improved triplet loss function,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 1335–1344.

T.-Y. Lin, A. RoyChowdhury, and S. Maji, “Bilinear CNN models for fine-grained visual recognition,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1449–1457.

G. Levi and T. Hassner, “Age and gender classification using convolutional neural networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 2015, pp. 34–42.

Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 3730–3738.

P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection,” Yale University New Haven United States, 1997.

N. Wang, X. Gao, D. Tao, and X. Li, “Facial feature point detection: A comprehensive survey,” arXiv preprint arXiv:1410.1037, 2014.

X. Xiong and F. De la Torre, “Supervised descent method and its applications to face alignment,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2013, pp. 532–539.

N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR 2005), 2005, vol. 1, pp. 886–893.

M. Mathias, R. Benenson, M. Pedersoli, and L. Van Gool, “Face detection without bells and whistles,” in European conference on computer vision, 2014, pp. 720–735.

S. Zhang, R. Benenson, M. Omran, J. Hosang, and B. Schiele, “How far are we from solving pedestrian detection?” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 1259–1267.

R. B. Girshick, “Fast R-CNN,” in 2015 IEEE international conference on computer vision (ICCV), 2015, pp. 1440–1448.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in neural information processing systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2012, pp. 1097–1105.

P. Dollar, C. Wojek, B. Schiele, and P. Perona, “Pedestrian detection: An evaluation of the state of the art,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 4, pp. 743–761, Apr. 2012.

A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in Conference on computer vision and pattern recognition (CVPR), 2012.

R. Lienhart and J. Maydt, “An extended set of Haar-like features for rapid object detection,” in Proceedings of the international conference on image processing, 2002, vol. 1, pp. 900–903.

P. Viola and M. J. Jones, “Detecting pedestrians using patterns of motion and appearance,” in Proceedings of the ninth IEEE international conference on computer vision, 2003, pp. 734–741.

P. A. Viola and M. J. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition (CVPR 2001), 2001, vol. 1, pp. 511–518.

P. Dollár, S. J. Belongie, and P. Perona, “The fastest pedestrian detector in the west,” in BMVC, 2010, vol. 2, p. 7.

H. A. Rowley, S. Baluja, and T. Kanade, “Neural network-based face detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23–38, 1998.

J. H. Hosang, M. Omran, R. Benenson, and B. Schiele, “Taking a deeper look at pedestrians,” in 2015 IEEE conference on computer vision and pattern recognition (CVPR), 2015, pp. 4073–4082.

H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua, “A convolutional neural network cascade for face detection,” in 2015 IEEE conference on computer vision and pattern recognition (CVPR), 2015, pp. 5325–5334.

R. B. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the 2014 IEEE conference on computer vision and pattern recognition (CVPR ’14), 2014, pp. 580–587.

K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904–1916, 2015.

S. Ren, K. He, R. B. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017.

Z. Cai, Q. Fan, R. S. Feris, and N. Vasconcelos, “A unified multi-scale deep convolutional neural network for fast object detection,” in European conference on computer vision, 2016, pp. 354–370.

J. Huang et al., “Speed/accuracy trade-offs for modern convolutional object detectors,” in 2017 IEEE conference on computer vision and pattern recognition (CVPR), 2017, pp. 7310–7311.

J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: Object detection via region-based fully convolutional networks,” in Advances in neural information processing systems, 2016, pp. 379–387.

J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in 2016 IEEE conference on computer vision and pattern recognition (CVPR), 2016, pp. 779–788.

W. Liu et al., “SSD: Single shot MultiBox detector,” in European conference on computer vision, 2016, pp. 21–37.

D. Stutz, A. Hermans, and B. Leibe, “Superpixels: An evaluation of the state-of-the-art,” Computer Vision and Image Understanding, vol. 166, pp. 1–27, 2018.

J. Strassburg, R. Grzeszick, L. Rothacker, and G. A. Fink, “On the influence of superpixel methods for image parsing.” in VISAPP (2), 2015, pp. 518–527.

S. Li, “Unsupervised detection of earthquake-triggered roof-holes from UAV images using joint color and shape features,” IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 9, pp. 1823–1827, 2015.

Y.-J. Cha, W. Choi, and O. Büyüköztürk, “Deep learning-based crack damage detection using convolutional neural networks,” Computer-Aided Civil and Infrastructure Engineering, vol. 32, no. 5, pp. 361–378, 2017.

Y.-J. Cha, W. Choi, G. Suh, S. Mahmoudkhani, and O. Büyüköztürk, “Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types,” Computer-Aided Civil and Infrastructure Engineering, vol. 33, no. 9, pp. 731–747, 2018.

Y. Narazaki, V. Hoskere, T. A. Hoang, and B. F. Spencer Jr, “Automated vision-based bridge component extraction using multiscale convolutional neural networks,” arXiv preprint arXiv:1805.06042, 2018.

X. Zhang, H. Xu, J. Fang, and G. Sheng, “Urban vehicle detection in high-resolution aerial images via superpixel segmentation and correlation-based sequential dictionary learning,” Journal of Applied Remote Sensing, vol. 11, no. 2, p. 026028, 2017.

A. Zare, N. Young, D. Suen, T. Nabelek, A. Galusha, and J. Keller, “Possibilistic fuzzy local information c-means for sonar image segmentation,” in 2017 IEEE symposium series on computational intelligence (SSCI), 2017, pp. 1–8.

Z. Long, “A comparative study of texture attributes for characterizing subsurface structures in seismic volumes,” Interpretation, vol. 6, no. 4, pp. 1–70, 2018.

A. Villa, J. A. Benediktsson, J. Chanussot, and C. Jutten, “Hyperspectral image classification with independent component discriminant analysis,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 12, pp. 4865–4876, 2011.

J. M. Haut, M. E. Paoletti, J. Plaza, J. Li, and A. Plaza, “Active learning with convolutional neural networks for hyperspectral image classification using a new bayesian approach,” IEEE Transactions on Geoscience and Remote Sensing, no. 99, pp. 1–22, 2018.

L. Zhu, Y. Chen, P. Ghamisi, and J. A. Benediktsson, “Generative adversarial networks for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, no. 99, pp. 1–18, 2018.



ISSN: 2307-8162