Ceph Block Devices: Introduction, Installation, and Configuration

1. Introduction to RBD

A block is a sequence of bytes (for example, a 512-byte block of data). Block-based storage interfaces are the most common way to store data on rotating media such as hard disks, CDs, floppy disks, and even traditional 9-track tape. The ubiquity of the block device interface makes a virtual block device an ideal candidate for interacting with a mass data storage system like Ceph.

Ceph block devices are thin-provisioned, resizable, and store their data striped across multiple OSDs in the Ceph cluster. They take advantage of RADOS capabilities such as snapshots, replication, and consistency. Ceph's RADOS Block Device (RBD) interacts with OSDs through the kernel module or the librbd library.

Ceph block devices deliver high performance with virtually unlimited scalability to kernel clients, to KVMs such as QEMU, and to cloud-based computing systems such as OpenStack and CloudStack. You can use the same cluster to run the Ceph RADOS Gateway, the Ceph File System, and Ceph block devices at the same time.
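As a quick illustration of the QEMU integration mentioned above, the sketch below creates a raw image directly in a Ceph pool through QEMU's rbd driver. It assumes QEMU was built with RBD support, that /etc/ceph/ceph.conf is readable on the host, and it reuses the block pool that is created in the next section; the image name vm-disk is just a placeholder.

# Sketch: create and inspect a 10 GB raw image stored in the "block" pool via QEMU's rbd driver
qemu-img create -f raw rbd:block/vm-disk 10G
qemu-img info rbd:block/vm-disk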

2. Creating and Using Block Devices

Create a pool and a block device image

[root@ceph-node1 ~]# ceph osd pool create block 6
pool 'block' created


Create a user for the client and scp the keyring file to the client

[root@ceph-node1 ~]# ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=block'| tee ./ceph.client.rbd.keyring
[client.rbd]
key = AQA04PpdtJpbGxAAd+lCJFQnDfRlWL5cFUShoQ==
[root@ceph-node1 ~]#scp ceph.client.rbd.keyring root@ceph-client:/etc/ceph
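Before going further, it can help to confirm that the client can actually authenticate with the new keyring. The check below is only a sketch and assumes /etc/ceph/ceph.conf (with the monitor addresses) has also been copied to ceph-client:

# Verify the client.rbd credentials from the client node
ceph -s --name client.rbd --keyring /etc/ceph/ceph.client.rbd.keyring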


On the client, create a 2 GB block device image

[root@ceph-client /]# rbd create block/rbd0 --size 2048 --name client.rbd
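To confirm the image exists and to review its size and enabled features, you can list the pool and query the image (an optional sanity check):

# List images in the pool and show details of the new image
rbd ls block --name client.rbd
rbd info block/rbd0 --name client.rbd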


Map the block device on the client

[root@ceph-client /]# rbd map --image block/rbd0 --name client.rbd
/dev/rbd0
[root@ceph-client /]# rbd showmapped --name client.rbd
id pool image snap device
0 block rbd0 - /dev/rbd0

Note: you may see the following error here

[root@ceph-client /]# rbd map --image block/rbd0 --name client.rbd
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (2) No such file or directory

There are three ways to fix this; see my other post on resolving "rbd: sysfs write failed".
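For reference, this error usually means the kernel rbd client does not support all of the image features enabled by default. One common workaround, which is also what the helper script later in this article does, is to disable those features and retry the map; this is only a sketch of that approach:

# Disable image features the kernel client may not support, then retry the map
rbd feature disable block/rbd0 object-map fast-diff deep-flatten --name client.rbd
rbd map --image block/rbd0 --name client.rbd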


Create a filesystem and mount the block device

[root@ceph-client /]# fdisk -l /dev/rbd0

Disk /dev/rbd0: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

[root@ceph-client /]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=524288, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

[root@ceph-client /]# mount /dev/rbd0 /ceph-rbd0
[root@ceph-client /]# df -Th /ceph-rbd0
Filesystem Type Size Used Avail Use% Mounted on
/dev/rbd0 xfs 2.0G 33M 2.0G 2% /ceph-rbd0


Write test data

[root@ceph-client /]# dd if=/dev/zero of=/ceph-rbd0/file count=100 bs=1M
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0674301 s, 1.6 GB/s
[root@ceph-client /]# ls -lh /ceph-rbd0/file
-rw-r--r-- 1 root root 100M Dec 19 10:50 /ceph-rbd0/file
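With some data written, this is also a convenient point to try the snapshot feature mentioned in the introduction. A minimal sketch (the snapshot name snap1 is arbitrary):

# Create a snapshot of the image and list its snapshots
rbd snap create block/rbd0@snap1 --name client.rbd
rbd snap ls block/rbd0 --name client.rbd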


Turn it into a systemd service

[root@ceph-client /]#cat /usr/local/bin/rbd-mount

#!/bin/bash

# Pool name where block device image is stored
export poolname=block

# Disk image name
export rbdimage0=rbd0

# Mounted Directory
export mountpoint0=/ceph-rbd0

# Mount ("m") or unmount ("u") is passed from the systemd service as an argument
if [ "$1" == "m" ]; then
   modprobe rbd
   # Disable image features the kernel client may not support before mapping
   rbd feature disable $poolname/$rbdimage0 object-map fast-diff deep-flatten --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
   rbd map $poolname/$rbdimage0 --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
   mkdir -p $mountpoint0
   mount /dev/rbd/$poolname/$rbdimage0 $mountpoint0
fi
if [ "$1" == "u" ]; then
   umount $mountpoint0
   rbd unmap /dev/rbd/$poolname/$rbdimage0
fi

[root@ceph-client ~]# cat /etc/systemd/system/rbd-mount.service
[Unit]
Description=RADOS block device mapping for block/rbd0
Conflicts=shutdown.target
Wants=network-online.target
After=NetworkManager-wait-online.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/rbd-mount m
ExecStop=/usr/local/bin/rbd-mount u
[Install]
WantedBy=multi-user.target


Mount automatically at boot

[root@ceph-client ~]#systemctl daemon-reload
[root@ceph-client ~]#systemctl enable rbd-mount.service
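Before rebooting, you can start the unit manually to verify that mapping and mounting work end to end; a quick sketch using the names defined above:

# Start the service, then confirm the image is mapped and the filesystem is mounted
systemctl start rbd-mount.service
rbd showmapped --name client.rbd
df -Th /ceph-rbd0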

