The RADOS object store and Ceph filesystem: Part 2

Magical Storage

Article from ADMIN 11/2012
In the second part of this workshop on RADOS and Ceph, we cover the inner workings of RADOS and how to avoid pitfalls.

Two issues ago, ADMIN magazine introduced RADOS and Ceph [1] and explained what these tools are all about. In this second article, I will take a closer look and explain the basic concepts that play a role in their development. How does the cluster take care of internal redundancy of the stored objects, for example, and what possibilities exist besides Ceph for accessing the data in the object store?

The first part of this workshop demonstrated how an additional node can be added to an existing cluster using

ceph osd crush add 4 osd.4 1.0 pool=default host=daisy

assuming that daisy is the hostname of the new server. This command integrates the host daisy into the cluster and gives it the same weight (1.0) as all the other nodes. Removing a node from the cluster configuration is as easy as adding one. The command for that is:

ceph osd crush remove osd.4
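
Whether an addition or a removal actually made it into the CRUSH map can be verified at any time with

ceph osd tree

which lists all OSDs along with their weights and their positions in the CRUSH hierarchy; after the commands above, osd.4 should appear under the host daisy or vanish from the listing accordingly.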

The pool=default parameter for adding the node already refers to an important feature: pools.

Working with Pools

RADOS offers the option of dividing the storage of the entire object store into individual fragments called pools. A pool, however, does not correspond to a contiguous storage area, as a partition does; rather, it is a logical layer consisting of binary data tagged as belonging to the corresponding pool. Pools make it possible, for example, to grant individual users access to specific pools only. The pools metadata, data, and rbd are available in the default configuration. A list of existing pools can be called up with rados lspools (Figure 1).
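
To try this out, a new pool can be created and then restricted to a single user by means of cephx capabilities. The following is only a sketch: the pool name testpool, the placement group count of 128, and the user client.testuser are arbitrary examples, not defaults.

ceph osd pool create testpool 128

ceph auth get-or-create client.testuser mon 'allow r' osd 'allow rw pool=testpool'

The first command creates the pool with 128 placement groups; the second generates a key for client.testuser whose OSD capability permits reading and writing objects in testpool only, leaving all other pools inaccessible to that client.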

...