author     Sondra.Menthers <sondra.menthers@dreamhost.com>   2011-10-27 20:16:51 +0200
committer  Sondra.Menthers <sondra.menthers@dreamhost.com>   2011-10-27 20:16:51 +0200
commit     d9dfd14761e2c6d2a166e472129a0ea85218785f (patch)
tree       76229ccbca894d1ec586303ad81a68bf3199260a /doc/architecture.rst
parent     rgw: handle swift PUT with incorrect etag (diff)
rgw: handle swift PUT with incorrect etag
Diffstat (limited to 'doc/architecture.rst')
-rw-r--r--   doc/architecture.rst   | 34
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/doc/architecture.rst b/doc/architecture.rst
index 0e8395f7b59..77c576e03a4 100644
--- a/doc/architecture.rst
+++ b/doc/architecture.rst
@@ -3,32 +3,42 @@
======================
======================About this Document======================
-This document describes the features and benefits of using the Ceph Unified Distributed Storage System, and why it is superior to other systems. The audience for this document consists of sales and marketing personnel, new customers, and all persons who need to get a basic overview of the features and functionality of the system.
+This document describes the features and benefits of the Ceph Unified Distributed Storage System and the advantages it offers over other storage systems.
+The intended audience includes sales and marketing personnel, new customers, and anyone who needs a basic overview of the system's features and functionality.
======================Introduction to Ceph======================
Ceph is a unified, distributed file system that operates on a large number of hosts connected by a network. Ceph is designed to accommodate multiple petabytes of storage with ease. Since file sizes and network capacities are always increasing, Ceph is well positioned to accommodate these demands with its unique self-healing, self-replicating architecture. Customers who need to move large amounts of metadata, such as media and entertainment companies, can greatly benefit from this product. Ceph is also dynamic; there is no need to cache data as with traditional client-server systems.
Benefits of Using Ceph
-Ceph’s flexible and scalable architecture translates into cost savings for users. Its powerful load balancing technology ensures the highest performance in terms of both speed and reliability. Nodes can be added “on the fly” with no impact to the system. In the case of node failure, the load is re-distributed with no degradation to the system. Failure detection is rapid and immediately remedied by efficiently re-adding nodes that were temporarily cut off from the network.
+Ceph’s flexible and scalable architecture translates into cost savings for users. Its powerful load balancing technology ensures the highest performance in terms of both speed and
+reliability. Nodes can be added “on the fly” with no impact on the system. In the case of node failure, the load is re-distributed with no degradation to the system.
+Failure detection is rapid, and failures are remedied immediately by efficiently re-adding nodes that were temporarily cut off from the network.
Manageability
-Ceph is easy to manage, requiring little or no system administrative intervention. Its powerful placement algorithm and intelligent nodes manage data seamlessly across any node configuration. It also features multiple access methods to its object storage, block storage, and file systems. Figure 1 displays this configuration.
-.
-
-The Reliable Autonomic Distributed Object Store (RADOS) provides a scalable object storage management platform. RADOS allows the Object Storage Devices (OSD) to operate autonomously when recovering from failures or migrating data to expand clusters. RADOS employs existing node device intelligence to maximized scalability.
-The RADOS Block Device (RBD) provides a block device interface to a Linux machine, while striping the data across multiple RADOS objects for improved performance. RDB is supported for Linux kernels 2.6.37 and higher. Each RDB device contains a directory with files and information
+Ceph is easy to manage, requiring little or no system administrative intervention. Its powerful placement algorithm and intelligent nodes manage data seamlessly across any node
+configuration. It also features multiple access methods to its object storage, block storage, and file systems. Figure 1 displays this configuration.
+.. image:: CephConfig.jpg
+======================RADOS======================
+The Reliable Autonomic Distributed Object Store (RADOS) provides a scalable object storage management platform. RADOS allows Object Storage Devices (OSDs) to operate autonomously
+when recovering from failures or migrating data to expand clusters. RADOS employs existing node device intelligence to maximize scalability.
+The RADOS Block Device (RBD) provides a block device interface to a Linux machine, while striping the data across multiple RADOS objects for improved performance.
+RBD is supported for Linux kernels 2.6.37 and higher. Each RBD device contains a directory with files and information.
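As an illustrative aside (not part of this commit), an RBD image can also be created programmatically through the Python ``rados`` and ``rbd`` bindings. The configuration path, pool name, and image name below are assumed placeholders, not values taken from this document::

    import rados
    import rbd

    # Assumed defaults: standard config path and the default 'rbd' pool.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        # Create a 4 GiB image; the kernel RBD driver (2.6.37+) or QEMU/KVM
        # can then present it to a guest as an ordinary block device.
        rbd.RBD().create(ioctx, 'myimage', 4 * 1024 ** 3)
    finally:
        ioctx.close()
        cluster.shutdown()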
======================RADOS GATEWAY======================
``radosgw`` is an S3-compatible RESTful HTTP service for object
storage, using RADOS storage.
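Since ``radosgw`` speaks the S3 API, any standard S3 client library can be pointed at it. A minimal sketch using the Python ``boto`` library follows; the endpoint host and credentials are placeholder assumptions::

    import boto
    import boto.s3.connection

    # Placeholder endpoint and credentials for a radosgw instance.
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='radosgw.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    # Create a bucket and store an object through the gateway.
    bucket = conn.create_bucket('my-bucket')
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('stored via the RADOS Gateway')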
-The RADOS Block Device (RBD) provides a block device interface to a Linux machine. To the user, RDB is transparent, which means that the entire Ceph system looks like a single, limitless hard drive that is always up and has no size limitations. .
+The RADOS Block Device (RBD) provides a block device interface to a Linux machine. To the user, RBD is transparent, which means that the entire Ceph system looks like a single
+hard drive that is always up and has no size limitations.
-Hypervisor Support
-RBD supports the QEMU processor emulator and the Kernel-based Virtual Machine (KVM) virtualization infrastructure for the Linux kernel. Normally, these hypervisors would not be used together in a single configuration.
+======================Hypervisor Support======================
+RBD supports the QEMU processor emulator and the Kernel-based Virtual Machine (KVM) virtualization infrastructure for the Linux kernel. Normally, these hypervisors would not be used
+together in a single configuration.
KVM RBD
-The Linux Kernel-based Virtual Machine (KVM) RBD provides the functionality for striping data across multiple distributed RADOS objects for improved performance. KVM-RDB is supported for Linux kernels 2.6.37 and higher. Each RDB device contains a directory with files and information.
+The Linux Kernel-based Virtual Machine (KVM) RBD provides the functionality for striping data across multiple distributed RADOS objects for improved performance.
+KVM-RBD is supported for Linux kernels 2.6.37 and higher. Each RBD device contains a directory with files and information.
KVM is built into the Linux kernel and manages its virtual machines directly; it does not require a separate hypervisor such as Xen.
QEMU RBD
-QEMU-RBD facilitates striping a VM block device over objects stored in the Ceph distributed object store. This provides shared block storage to facilitate VM migration between hosts. QEMU has its own hypervisor which interfaces with the librdb user-space library to store its virtual machines
+QEMU-RBD facilitates striping a VM block device over objects stored in the Ceph distributed object store. This provides shared block storage, enabling VM migration between hosts.
+QEMU has its own hypervisor, which interfaces with the librbd user-space library to store its virtual machines.
.. _monitor: