Using Ceph in Modern Cloud Installations
Abstract
Today, systems for managing dynamic clusters of block devices are an integral part of modern cloud installations. Such systems help avoid common problems in disk management and configuration, such as inconsistency, lack of structure, and low availability of data storage. Software solutions to these problems have long been well known, but current realities make it necessary to choose alternatives that are freely distributed and open source. Ceph is such a system: it provides the standard feature set expected of this class of software together with a number of unique and useful capabilities, which makes it a worthy candidate for use in cloud installations. The purpose of this article is therefore to popularize the use of Ceph clusters. The paper describes typical installations on which a Ceph cluster can run and how to deploy it. Testing was carried out on workloads close to those that real users may encounter, and the results were analyzed. The article will be useful both for presenting an alternative block-device cluster management system for existing cloud installations and for selecting a system for newly created ones. It gives readers a view of what Ceph can do without physically deploying the necessary infrastructure, saving potential users time and resources.
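As a minimal, hypothetical illustration of the kind of interaction with a Ceph cluster that the article discusses, the sketch below uses Ceph's librados Python binding to connect to an already deployed cluster and write and read a single object. The pool name "testpool", the configuration file path, and the object name are illustrative assumptions rather than values taken from the paper; the snippet requires the python3-rados package and a working client keyring on the host.

# Minimal sketch (not from the paper): connect to a Ceph cluster via librados
# and store/retrieve one object. Assumes /etc/ceph/ceph.conf and a keyring
# exist and that a pool named "testpool" (hypothetical) has been created.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
print("Cluster ID:", cluster.get_fsid())      # sanity check: the cluster is reachable

ioctx = cluster.open_ioctx('testpool')        # I/O context bound to the pool
try:
    ioctx.write_full('hello-object', b'hello from a Ceph client')  # write an object
    data = ioctx.read('hello-object')         # read it back
    print(data.decode())
finally:
    ioctx.close()
    cluster.shutdown()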