Using Ceph in Modern Cloud Installations

V.G. Ilyin, P.S. Tsyngalev, A.S. Boronnikov


Today, systems for managing dynamic clusters of block devices are an integral part of modern cloud installations. These technologies prevent common problems in disk management and configuration, such as inconsistent, unstructured, and poorly available data storage. Well-known solutions to these problems have long existed, but current realities make it necessary to select alternatives that are freely distributed and open source. Ceph is such a system: it provides the standard feature set of this class of software along with a number of unique capabilities, which makes it a worthy alternative for use in cloud installations. The purpose of this article is therefore to popularize the use of Ceph clusters. The paper describes typical installations for running a Ceph cluster and how to deploy one. Testing was carried out on workloads close to those that real users may encounter, and the results obtained were analyzed. The article will be useful both as a presentation of an alternative system for managing dynamic clusters of block devices in existing cloud installations and as an aid in selecting such a system for newly created installations. It gives readers a view of what Ceph can do without physically deploying the necessary infrastructure, saving potential users time and resources.
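The benchmarking described above can be reproduced with fio, the tool referenced by the authors, using its librbd engine to drive I/O directly against a Ceph RBD image. Below is a minimal sketch of such a job file; the pool name `testpool`, image name `testimg`, and client name `admin` are placeholders for illustration, not values from the paper.

```ini
; Hypothetical fio job: 4 KiB random writes against a Ceph RBD image.
; Pool, image, and client names are placeholders -- adjust to your cluster.
[global]
ioengine=rbd          ; fio's librbd engine (requires fio built with rbd support)
clientname=admin      ; Ceph client identity used to authenticate
pool=testpool         ; RADOS pool containing the test image
rbdname=testimg       ; RBD image to exercise
direct=1              ; bypass the page cache for realistic device numbers
time_based=1
runtime=60

[randwrite-4k]
rw=randwrite
bs=4k
iodepth=32
numjobs=1
```

Run with `fio job.fio`; fio reports IOPS, bandwidth, and latency percentiles that can then be compared across cluster configurations, similar to the tests carried out in the paper.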

Full Text:

PDF (Russian)






ISSN: 2307-8162