Deploying and Maintaining Ceph on Linux/CentOS 7 in Production

Where Ceph Sits in the Storage Stack

Layer 1: the physical storage media.

LUN: a virtual disk created by hardware is usually called a LUN, e.g. a virtual disk exposed by a RAID controller.

Volume: a virtual disk created in software is usually called a volume, e.g. a logical volume created by LVM.

Disk: the physical disk itself.

Layer 2: the kernel-level file system, which maintains the mapping from files to the underlying disks (users normally do not need to deal with this).

Layer 3: application-level file systems (the user installs the application and starts its processes manually).

Layer 4: network file access systems such as NFS and CIFS (the server side runs the server component, clients install the client component and mount the exported directory for remote access).

How Ceph Works

Reference:

Logical structure of the Ceph storage system

Logical structure of RADOS

Ceph addressing (data placement) flow

Network topology of a Ceph deployment

Note: the Cluster Network is optional, but it is strongly recommended; it carries the back-end traffic when OSDs are added and data is rebalanced.

From hands-on experience: with only a public network, adding OSDs forces Ceph to "move" data over it, and one such expansion took as long as 5 hours. With a dedicated cluster network (10 GbE switch plus fiber), the same expansion finished in a few minutes.
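For reference, a minimal sketch of how the two networks are declared in ceph.conf; the cluster subnet below is illustrative, not taken from this deployment:

[global]

public_network = 192.168.100.0/24 #client and monitor traffic

cluster_network = 10.0.0.0/24 #OSD replication/recovery traffic (example subnet)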

Installing Ceph (ceph-deploy)

Reference:

(a Chinese translation of the official documentation above)

Environment preparation and initialization of each Ceph node

Deployment logical architecture:

Node                        Installed components    Notes
ceph1 (192.168.100.110)     ceph-deploy, mon, osd   OS: CentOS 7.9; the Admin Node is shared with ceph1
ceph2 (192.168.100.111)     mon, osd
ceph3 (192.168.100.112)     mon, osd

All operations in this section are performed as root and must be run on every Ceph node.

Edit /etc/hostname

#vi /etc/hostname #on the other nodes, use that node's name

ceph{number} #e.g. ceph1

#hostname -F /etc/hostname #takes effect immediately; log out of the shell and log back in

Create the deployment user irteam and exempt it from requiretty

#useradd -d /home/irteam -k /etc/skel -m irteam

#sudo passwd irteam

#echo " irteam ALL = (root) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/irteam

#chmod 0440 /etc/sudoers.d/irteam

Edit /etc/sudoers so that the irteam user does not require a tty

#chmod 755 /etc/sudoers

#vi /etc/sudoers #add the line below instead of commenting out the existing Defaults requiretty

Defaults:irteam !requiretty

#chmod 440 /etc/sudoers

Configure the yum repositories and the Ceph repository

#yum clean all

#rm -rf /etc/yum.repos.d/*.repo

#wget -O /etc/yum.repos.d/CentOS-Base.repo

#wget -O /etc/yum.repos.d/epel.repo

#sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo

#sed -i 's/$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo

#vi /etc/yum.repos.d/ceph.repo #add the Ceph repository

[ceph]

name=ceph

baseurl=

gpgcheck=0

[ceph-noarch]

name=cephnoarch

baseurl=

gpgcheck=0

Install Ceph

#yum makecache

#yum install -y ceph

#ceph --version #check the version

ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

Disable SELinux & firewalld

#sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

#setenforce 0

#systemctl stop firewalld

#systemctl disable firewalld

Synchronize time on all nodes (either rdate or ntp works)

Reference:

#timedatectl set-timezone Asia/Shanghai #set the timezone

#yum install -y rdate

#rdate -s tick.greyware.com #pick a reachable, authoritative time server

#echo "00 0 1 * * root rdate -s tick.greyware.com" >> /etc/crontab #add a cron job

Deploy the Ceph Cluster

Note: all of the following operations are performed on the admin node. In this article the admin node is shared with ceph1, so running them on ceph1 is sufficient; run everything as the user irteam.

Edit /etc/hosts

#sudo vi /etc/hosts

192.168.100.110 ceph1

192.168.100.111 ceph2

192.168.100.112 ceph3

Generate a key pair & copy the public key to each node (so the deployment does not prompt for passwords, i.e. passwordless authentication)

#sudo su - irteam

#ssh-keygen

Generating public/private key pair.

Enter file in which to save the key (/irteam/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /irteam/.ssh/id_rsa.

Your public key has been saved in /irteam/.ssh/id_rsa.pub.

#ssh-copy-id irteam@ceph1

#ssh-copy-id irteam@ceph2

#ssh-copy-id irteam@ceph3

Configure the SSH client so the deployment does not prompt for a user name

#sudo su - irteam #skip this step if you are already logged in as irteam

#vi ~/.ssh/config

StrictHostKeyChecking no

Host ceph1

Hostname ceph1

User irteam

Host ceph2

Hostname ceph2

User irteam

Host ceph3

Hostname ceph3

User irteam

#chmod 600 ~/.ssh/config

Install the deployment tool

#sudo yum -y install ceph-deploy

#ceph-deploy --version

1.5.34

Create the cluster

#sudo su - irteam #not needed if you are already logged in as irteam

#mkdir ~/my-cluster && cd ~/my-cluster

#create the cluster: this generates ceph.conf and ceph.mon.keyring in the current directory

#ceph-deploy new ceph1 ceph2 ceph3

#ls ~/my-cluster #list the generated files

ceph.conf ceph-deploy-ceph.log ceph.mon.keyring

Edit the cluster's ceph.conf: add public_network, raise the allowed clock drift between monitors (default 0.05 s, changed to 2 s here), and set the default number of replicas to 2.

#vi ceph.conf

[global]

fsid = 7cec0691-c713-46d0-bce8-5cb1d57f051f

mon_initial_members = ceph1, ceph2, ceph3 #IP addresses also work, but hostnames are preferred

mon_host = 192.168.100.110,192.168.100.111,192.168.100.112

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

public_network = 192.168.100.0/24

mon_clock_drift_allowed = 2

osd_pool_default_size = 2

Deploy the monitors

#ceph-deploy mon create-initial

#ll ~/my-cluster

ceph.bootstrap-mds.keyring

ceph.bootstrap-rgw.keyring

ceph.conf

ceph.mon.keyring

ceph.bootstrap-osd.keyring

ceph.client.admin.keyring

ceph-deploy-ceph.log

#sudo ceph -s #check the cluster status

cluster 7cec0691-c713-46d0-bce8-5cb1d57f051f

health HEALTH_ERR

no osds

monmap e1: 3 mons at {ceph1=192.168.100.110:6789/0,ceph2=192.168.100.111:6789/0,ceph3=192.168.100.112:6789/0}

election epoch 4, quorum 0,1,2 ceph3,ceph1,ceph2

osdmap e1: 0 osds: 0 up, 0 in

flags sortbitwise

pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects

0 kB used, 0 kB / 0 kB avail

64 creating

Deploy the OSDs

Since there are not enough spare disks here (see the maintenance section if you deploy on raw disks), directories are used instead:

#create the directory and set permissions; run this on ceph1, ceph2 and ceph3

#sudo mkdir /var/local/osd1 && sudo chmod 777 -R /var/local/osd1

Prepare and activate the OSDs

#ceph-deploy osd prepare ceph1:/var/local/osd1 ceph2:/var/local/osd1 ceph3:/var/local/osd1

#ceph-deploy osd activate ceph1:/var/local/osd1 ceph2:/var/local/osd1 ceph3:/var/local/osd1

Notes:

a. If you have enough disks, you can also operate on the disks directly

#ceph-deploy osd prepare ceph1:sdb

#ceph-deploy osd activate ceph1:sdb

b. The osd prepare & osd activate steps above can also be done in a single step

#ceph-deploy osd create ceph1:sdb

Check the cluster status

#sudo ceph -s

cluster 7cec0691-c713-46d0-bce8-5cb1d57f051f

health HEALTH_OK

monmap e1: 3 mons at {ceph1=192.168.100.110:6789/0,ceph2=192.168.100.111:6789/0,ceph3=192.168.100.112:6789/0}

election epoch 4, quorum 0,1,2 ceph3,ceph1,ceph2

osdmap e15: 3 osds: 3 up, 3 in

flags sortbitwise

pgmap v26: 64 pgs, 1 pools, 0 bytes data, 0 objects

29590 MB used, 113 GB / 142 GB avail

64 active+clean

Installing Ceph (kolla)

Besides the officially recommended ceph-deploy method, you can also install Ceph as follows:

Use ansible to log in to each node remotely and install, with mon, osd and rgw running as Docker containers.

Note: our reason for using Docker is ultimately to deploy OpenStack; the OpenStack-specific parts are not covered here.

If you are not deploying OpenStack, set all the OpenStack components to no and only enable the Ceph part, e.g.:

vi /git/kolla/etc/kolla/globals.yml

enable_keystone: "no"

enable_horizon: "no"

……

enable_${compName}: "no"

enable_ceph: "yes"

….

Environment preparation and initialization of each Ceph node

Deployment logical architecture:

Node                          Installed components    Notes
kolla node (192.168.100.144)  kolla                   OS: CentOS 7.9
ceph1 (192.168.100.133)       mon, osd
ceph2 (192.168.100.117)       mon, osd
ceph3 (192.168.100.148)       mon, osd

The operations in this section are performed as root and must be run on every Ceph node.

Edit /etc/hostname

#vi /etc/hostname #on the other nodes, use that node's name

ceph{number} #e.g. ceph1

#hostname -F /etc/hostname #takes effect immediately; log out of the shell and log back in

Create the deployment user irteam and exempt it from requiretty

#useradd -d /home/irteam -k /etc/skel -m irteam

#sudo passwd irteam

#echo " irteam ALL = (root) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/irteam

#chmod 0440 /etc/sudoers.d/irteam

Disable SELinux & firewalld

#sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

#setenforce 0

#systemctl stop firewalld

#systemctl disable firewalld

Synchronize time on all nodes (either rdate or ntp works)

Reference:

#timedatectl set-timezone Asia/Shanghai #set the timezone

#yum install -y rdate

#rdate -s tick.greyware.com #pick a reachable, authoritative time server

#echo "00 0 1 * * root rdate -s tick.greyware.com" >> /etc/crontab #add a cron job

Install Docker and docker-py

#curl -sSL | bash

#docker --version

Docker version ${version}, build 20f81dd

#vi /usr/lib/systemd/system/docker.service #add the MountFlags=shared line below

MountFlags=shared

#systemctl daemon-reload

#systemctl restart docker #restart docker

#yum install -y python-pip

#pip install -U docker-py

Deploy the Ceph Cluster

The following operations are performed on the kolla node, all as the user irteam.

Edit /etc/hosts

#sudo vi /etc/hosts

192.168.100.133 ceph1

192.168.100.117 ceph2

192.168.100.148 ceph3

Generate a key pair & copy the public key to each node (ansible connects to the hosts using the public key)

#sudo su - irteam

#ssh-keygen

Generating public/private key pair.

Enter file in which to save the key (/irteam/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /irteam/.ssh/id_rsa.

Your public key has been saved in /irteam/.ssh/id_rsa.pub.

#ssh-copy-id irteam@ceph1

#ssh-copy-id irteam@ceph2

#ssh-copy-id irteam@ceph3

Download kolla & install ansible & configure the ansible inventory file

#sudo mkdir -p /git/ && cd /git/ #adjust the directory to suit your environment

#git clone #can also be downloaded from GitHub

#pip install -U ansible==1.9.4 #if you download kolla from GitHub, mind the mapping between kolla and ansible versions

#sudo vi multinode-inventory

….(omitted)

[ceph-mon]

ceph1

[ceph-osd]

ceph1 ansible_sudo=True

ceph2 ansible_sudo=True

ceph3 ansible_sudo=True

[ceph-rgw]

…(omitted)

Note: irteam must have sudo privileges to root.

Label the disks of the [ceph-osd] nodes above so that kolla can recognize them

#log in to each ceph node, decide which disks will serve as OSD disks, label them, then return to the kolla node

#sudo parted /dev/xvdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1

#sudo parted /dev/xvdb print #inspect the result

Model: Xen Virtual Block Device (xvd)

Disk /dev/xvdb: 53.7GB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

Disk Flags:

Number Start End Size File system Name Flags

1 1049kB 53.7GB 53.7GB KOLLA_CEPH_OSD_BOOTSTRAP

[Optional] Set up an external journal drive

If you skip this it does not matter; a 5 GB journal partition is carved out automatically.
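If you do want an external journal, the label convention (to the best of my recollection; verify against the kolla Ceph guide for your release) pairs a named OSD partition with a matching _J journal partition, e.g.:

#sudo parted /dev/xvdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1

#sudo parted /dev/xvdc -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1 #/dev/xvdc is a hypothetical journal device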

Enable Ceph in the configuration & deploy

#vi /git/kolla/etc/kolla/globals.yml

…(omitted)

enable_ceph: "yes"

enable_ceph_rgw: "no"

…(omitted)

#final kolla deployment

#/git/kolla/tools/kolla-ansible deploy --configdir /git/openstack-deploy/config-test -i \

/git/openstack-deploy/config-test/multinode-inventory

Ceph Maintenance

Cluster-wide maintenance

Checking Ceph status

#sudo ceph -s #view the current status

#sudo ceph -w #watch the status in real time

health HEALTH_OK

monmap e3: 1 mons at {ceph3=192.168.11.112:6789/0}

election epoch 7, quorum 0 ceph3

osdmap e67: 4 osds: 4 up, 4 in

flags sortbitwise

pgmap v60965: 64 pgs, 1 pools, 0 bytes data, 0 objects

57416 MB used, 133 GB / 189 GB avail

64 active+clean

2016-08-19 01:16:01.623581 mon.0 [INF] pgmap v60965: 64 pgs: 64 active+clean; 0 bytes data, 57416 MB used, 133 GB / 189 GB avail

2016-08-19 01:16:05.582505 mon.0 [INF] pgmap v60966: 64 pgs: 64 active+clean; 0 bytes data, 57416 MB used, 133 GB / 189 GB avail

….

#sudo ceph health [detail] #check Ceph health, optionally with details

HEALTH_OK

Checking Ceph storage usage

#sudo ceph df

GLOBAL:

SIZE AVAIL RAW USED %RAW USED

189G 133G 57411M 29.54

POOLS:

NAME ID USED %USED MAX AVAIL OBJECTS

rbd 0 0 0 64203M 0

Viewing | adding | removing OSDs

Note: there should be at least 2 OSDs, located on different physical nodes.

Check OSD status

#sudo ceph osd tree

ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY

-1 0.13889 root default

-2 0.04630 host ceph1

0 0.04630 osd.0 up 1.00000 1.00000

-3 0.04630 host ceph2

1 0.04630 osd.1 up 1.00000 1.00000

-4 0.04630 host ceph3

2 0.04630 osd.2 up 1.00000 1.00000

#ceph osd dump

epoch 22

fsid ee45dfa5-234d-48f3-a8a5-32e9ca781f47

created 2016-09-21 14:05:24.512685

modified 2016-09-22 15:14:54.317395

flags

pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0

max_osd 5

osd.0 up in weight 1 up_from 11 up_thru 21 down_at 0 last_clean_interval [0,0) 192.168.11.112:6800/5903 192.168.11.112:6801/5903 192.168.11.112:6802/5903 192.168.11.112:6803/5903 exists,up 418cbbe0-ea7e-42d8-b43d-e48dd7e53a00

osd.1 up in weight 1 up_from 10 up_thru 21 down_at 0 last_clean_interval [0,0) 192.168.11.134:6800/5639 192.168.11.134:6801/5639 192.168.11.134:6802/5639 192.168.11.134:6803/5639 exists,up f53ec139-9816-46a8-b7d5-41cb5dd57a0f

osd.2 up in weight 1 up_from 11 up_thru 21 down_at 0 last_clean_interval [0,0) 192.168.11.135:6800/5659 192.168.11.135:6801/5659 192.168.11.135:6802/5659 192.168.11.135:6803/5659 exists,up 67ca0418-a495-45a7-914b-197dff732220

osd.3 down out weight 0 up_from 0 up_thru 0 down_at 0 last_clean_interval [0,0) :/0 :/0 :/0 :/0 exists,new

osd.4 down out weight 0 up_from 0 up_thru 0 down_at 0 last_clean_interval [0,0) :/0 :/0 :/0 :/0 exists,new

List disks

#ceph-deploy disk list ceph1

Zap a disk: remove its partitions

#ceph-deploy disk zap ceph1:sdb

Note: because this deletes the partition table, all data on the disk is lost; be extremely careful.

Create an OSD

#ceph-deploy osd prepare ceph1:sdb #prepare the OSD

#ceph-deploy osd activate ceph1:sdb #activate the OSD

#ceph-deploy osd create ceph1:sdb #prepare + activate = create, in one step

Remove an OSD

Reference:

#sudo ceph osd out osd.{number}

#sudo ceph osd down osd.{number} #stop the daemon; you can also log in to the node and stop the process there

#sudo ceph osd crush remove osd.{number}

#sudo ceph auth del osd.{number}

#sudo ceph osd rm {number}

#sudo rm -rf /var/lib/ceph/osd/ceph-{number} #run this on the node that hosts the OSD

Adding | removing | viewing Monitors

Note: the number of monitors should be 2n+1 (n >= 0, integer), i.e. an odd number; a production environment should have at least 3.

Remove a Monitor

#remove the monitor node ceph1 from ~/my-cluster/ceph.conf

#cd ~/my-cluster/ && vi ceph.conf

……(omitted)

mon_initial_members = ceph2, ceph3

mon_host = 192.168.100.111,192.168.100.112

……(omitted)

#push ~/my-cluster/ceph.conf to every node

#ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3

#destroy the monitor (may need to be run twice); verify afterwards with sudo ceph -s

#ceph-deploy mon destroy ceph1

Add a Monitor

#add the monitor node ceph1 back to ~/my-cluster/ceph.conf

#vi ~/my-cluster/ceph.conf

……(omitted)

mon_initial_members = ceph1,ceph2, ceph3

mon_host = 192.168.100.110,192.168.100.111,192.168.100.112

……(omitted)

#push ~/my-cluster/ceph.conf to every node

#ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3

#add the monitor

#ceph-deploy --overwrite-conf mon create ceph1

View Monitors

#sudo ceph -s

cluster 773b310b-5faf-4d98-8761-651ba7daecfb

health HEALTH_OK

monmap e8: 2 mons at {ceph2=192.168.100.115:6789/0,ceph3=192.168.100.126:6789/0}

election epoch 42, quorum 0,1 ceph2,ceph3

osdmap e76: 3 osds: 3 up, 3 in

flags sortbitwise

pgmap v30914: 136 pgs, 10 pools, 38194 kB data, 195 objects

21925 MB used, 120 GB / 142 GB avail

136 active+clean

#after adding or removing a monitor, check the election/quorum status

#ceph quorum_status --format json-pretty

MDS maintenance (not covered here)

Pools & PG maintenance

Reference:

(PG state descriptions)

List pools

#sudo ceph osd lspools #or: sudo rados lspools, or: sudo ceph osd pool ls

rbd

#rados df

pool name KB objects clones degraded unfound rd rd KB wr wr KB

rbd 0 0 0 0 0 0 0 0 0

total used 58986376 0

total avail 139998408

total space 198984784

Create pools & view a pool's replica count & view pool details

#sudo ceph osd pool create images 100 #100 is the pg number

pool 'images' created

Default pg number configuration for pools:
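As a rule of thumb (not from the original article; check the PG calculator for your release), the total PG count is roughly (number of OSDs × 100) / replica count, rounded to the nearest power of two. For example, with 3 OSDs and 2 replicas: 3 × 100 / 2 = 150, so pg_num = 128.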

#sudo ceph osd pool set images size 3 #set the replica count of pool images to 3

set pool 1 size to 3

#sudo ceph osd dump | grep 'replicated size' #view the replica count of every pool

#sudo ceph osd dump | grep '${poolName}' #view the details of a specific pool

#sudo ceph osd pool set-quota images max_objects 10000 #set a quota: maximum number of objects

set-quota max_objects = 10000 for pool images

Delete pools

#sudo ceph osd pool delete images images --yes-i-really-really-mean-it

pool 'images' removed

Object Storage

Create an object

#rados put test-object-1 a.txt --pool=data

List the objects in a pool

#rados -p data ls

test-object-1

#ceph osd map data test-object-1 #locate the object

osdmap e75 pool 'data' (2) object 'test-object-1' -> pg 2.74dc35e2 (2.62) -> up ([4,5], p4) acting ([4,5], p4)

Delete an object

#rados rm test-object-1 --pool=data

Object Gateway (accessed over HTTP)

√ Install the object gateway on any Ceph node

#sudo yum install -y ceph-radosgw radosgw-agent

√ Basic configuration

Create a keyring

#sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring

#sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring

Create a key

#sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring \

-n client.radosgw.gateway --gen-key

Add capabilities to the key

#sudo ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' \

--cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring

Add the key to the Ceph Storage Cluster

#sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway \

-i /etc/ceph/ceph.client.radosgw.keyring

If other nodes need ceph.client.radosgw.keyring, push it to their /etc/ceph directory.
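For example (a sketch; adjust the host and paths to your environment):

#sudo scp /etc/ceph/ceph.client.radosgw.keyring irteam@ceph2:/tmp/

#ssh irteam@ceph2 "sudo mv /tmp/ceph.client.radosgw.keyring /etc/ceph/ && sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring"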

Add gateway configuration to /etc/ceph/ceph.conf

[client.radosgw.gateway]

host = ceph1

keyring = /etc/ceph/ceph.client.radosgw.keyring

rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock

log file = /var/log/radosgw/client.radosgw.gateway.log

√ Start the radosgw service (listens on port 7480 by default)

#/usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway

√ Create a user

#sudo radosgw-admin user create --uid=ningyougang --display-name=ningyougang --email=ningyougang@navercorp.com

√ Test with an S3 client

Download an S3 client from that address and connect.
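For example, with s3cmd (a sketch; the endpoint assumes radosgw is running on ceph1:7480, and the access/secret keys come from the user created above):

#s3cmd --access_key=<ACCESS_KEY> --secret_key=<SECRET_KEY> --host=ceph1:7480 --host-bucket="%(bucket)s.ceph1:7480" --no-ssl mb s3://images

#s3cmd --access_key=<ACCESS_KEY> --secret_key=<SECRET_KEY> --host=ceph1:7480 --host-bucket="%(bucket)s.ceph1:7480" --no-ssl ls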

√ S3 operations with radosgw; the related admin commands follow

Reference:

Create a user

#sudo radosgw-admin user create --uid=newtouchstep --display-name=newtouchstep --email=jingyi.liu@newtouch.cn

Modify a user

#sudo radosgw-admin user modify --uid=newtouchstep --display-name=newtouchstep --email=jingyi.liu@newtouch.cn

View user info

#sudo radosgw-admin user info --uid=newtouchone

Delete a user

#sudo radosgw-admin user rm --uid=newtouchone #only possible when the user has no data

#sudo radosgw-admin user rm --uid=newtouchone --purge-data #delete the user together with their data

Suspend a user

#sudo radosgw-admin user suspend --uid=newtouchone

Re-enable a user

#sudo radosgw-admin user enable --uid=newtouchone

Check a user

#sudo radosgw-admin user check --uid=newtouchone

List buckets

#sudo radosgw-admin bucket list

List the objects in a given bucket

#sudo radosgw-admin bucket list --bucket=images

Bucket statistics

#sudo radosgw-admin bucket stats #stats for all buckets

#sudo radosgw-admin bucket stats --bucket=images #stats for a specific bucket

Delete a bucket

Delete a bucket (without deleting its objects; re-creating the bucket restores it)

#sudo radosgw-admin bucket rm --bucket=images

Delete a bucket and its objects at the same time

#sudo radosgw-admin bucket rm --bucket=images --purge-objects

Check a bucket

#sudo radosgw-admin bucket check

Delete an object

#sudo radosgw-admin object rm --bucket=attach --object=fanbingbing.jpg

Set a quota on a bucket

#sudo radosgw-admin quota set --max-objects=200 --max-size=10000000000 --quota-scope=bucket --bucket=images

#sudo radosgw-admin quota enable --quota-scope=bucket --bucket=images

#sudo radosgw-admin quota disable --quota-scope=bucket --bucket=images

Set a quota on a user account

#sudo radosgw-admin quota set --max-objects=2 --max-size=100000 --quota-scope=user --uid=newtouchstep

#sudo radosgw-admin quota enable --quota-scope=user --uid=newtouchstep

#sudo radosgw-admin quota disable --quota-scope=user --uid=newtouchstep

Block Storage

Reference:

Classic diagram: (image not included)

Create an rbd volume

#sudo rbd create foo --size 1024 --pool swimmingpool --image-feature layering

/dev/rbd0

List rbd volumes & inspect a specific rbd volume

#sudo rbd ls --pool swimmingpool #list block devices; if no pool is given, the default pool rbd is used

#sudo rbd info foo --pool swimmingpool #inspect the specified block device

Map an rbd volume & list mapped rbd volumes

#sudo rbd map foo --pool swimmingpool

#sudo rbd showmapped

id pool image snap device

1 rbd foo - /dev/rbd1

Note: if the rbd create above is run without --image-feature layering, the subsequent sudo rbd map appears to fail:

rbd map will report: rbd: map failed: (6) No such device or address

Adding the --image-feature layering parameter to rbd create resolves this:

#sudo rbd create foo --size 1024 --pool swimmingpool --image-feature layering

/dev/rbd0
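If the image was already created with the newer default features enabled, a common workaround (a general technique, not from the original article) is to disable the features the kernel client does not support and then map again:

#sudo rbd feature disable foo exclusive-lock object-map fast-diff deep-flatten --pool swimmingpool

#sudo rbd map foo --pool swimmingpool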

Format the device

#sudo mkfs.ext4 /dev/rbd0

Mount it

#sudo mount /dev/rbd0 /mnt

#ls /mnt #shows that the mounted directory is now a mounted file system

lost+found

Unmount the directory, unmap the device, delete the rbd volume

#sudo umount /mnt #unmount the directory first

#sudo rbd unmap /dev/rbd0

#sudo rbd rm foo

File Storage (not covered here)

Pushing configuration files && restarting services

Reference:

Push the configuration file to all nodes

#push the configuration from ~/my-cluster/ceph.conf to /etc/ceph/ceph.conf on every node

#ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3

Start | stop | restart all services (mon, osd, mds)

#sudo systemctl start ceph.target

#sudo systemctl stop ceph.target

#sudo systemctl restart ceph.target

Start the osd, mon and mds services (stop and restart omitted)

#sudo systemctl start ceph-osd.target

#sudo systemctl start ceph-mon.target

#sudo systemctl start ceph-mds.target

Start | stop | restart a single ceph-osd service on one node

#systemctl start ceph-osd@{id}

#systemctl stop ceph-osd@{id}

#systemctl restart ceph-osd@{id}

Resetting the environment to a freshly installed Ceph state

Note: if a Ceph deployment fails, there is no need to uninstall Ceph; just run the following commands on the node to restore it to the state it was in right after Ceph was installed.

Cleanup when installed with ceph-deploy

#umount /var/lib/ceph/osd/*

#rm -rf /var/lib/ceph/osd/*

#rm -rf /var/lib/ceph/mon/*

#rm -rf /var/lib/ceph/mds/*

#rm -rf /var/lib/ceph/bootstrap-mds/*

#rm -rf /var/lib/ceph/bootstrap-osd/*

#rm -rf /var/lib/ceph/bootstrap-mon/*

#rm -rf /var/lib/ceph/tmp/*

#rm -rf /etc/ceph/*

#rm -rf /var/run/ceph/*

Or (a shorter version of the commands above)

#umount /var/lib/ceph/osd/*

#rm -rf /var/lib/ceph

#rm -rf /etc/ceph/*

#rm -rf /var/run/ceph/*

Cleanup when installed with kolla

#remove the node's ceph-related containers; note the -f name filter, add conditions as needed

#docker rm -f $(docker ps -f name=ceph -qa)

#remove the configuration files

#sudo rm -rf /home/irteam/kolla/*

#[optional] on monitor nodes, remove the ceph_mon and ceph_mon_config volumes

#docker volume rm ceph_mon ceph_mon_config

#[optional] on OSD nodes, umount and delete the partitions

#sudo umount /var/lib/ceph/osd/*

#sudo rm -rf /var/lib/ceph

#sudo fdisk /dev/xvdb …… see the partition-deletion part of the Disk Operations section for details

#sudo parted /dev/xvdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1

Disk Operations

Only some operations are listed here; for more, refer to 《硬盘操作.docx》.

View disks

#sudo fdisk -l

Disk /dev/xvdb: 53.7 GB, 53687091200 bytes, 104857600 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: gpt

# Start End Size Type Name

1 10487808 104857566 45G unknown KOLLA_CEPH_DATA_1

2 2048 10485760 5G unknown KOLLA_CEPH_DATA_1_J

Disk /dev/xvda: 53.7 GB, 53687091200 bytes, 104857600 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: dos

Disk identifier: 0x000602dc

Device Boot Start End Blocks Id System

/dev/xvda1 * 2048 1026047 512000 83 Linux

/dev/xvda2 1026048 104857599 51915776 8e Linux LVM

Disk operations

Only partition deletion is shown here as an example; see the fdisk help (m) for other operations.

#sudo fdisk /dev/xvdb

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

Command (m for help): m

Command action

d delete a partition

g create a new empty GPT partition table

G create an IRIX (SGI) partition table

l list known partition types

m print this menu

n add a new partition

o create a new empty DOS partition table

q quit without saving changes

s create a new empty Sun disklabel

w write table to disk and exit

Command (m for help): d #d means delete

Partition number (1,2, default 2): 2 #the second partition

Partition 2 is deleted

Command (m for help): d

Selected partition 1

Partition 1 is deleted

Command (m for help): w #remember to write the changes at the end

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

View file systems

#sudo df -h

Enabling any node to run admin commands

#first make sure Ceph is installed on that node

#yum makecache

#yum install -y ceph

#ceph --version #check the version

#push ceph.conf and ceph.client.admin.keyring to /etc/ceph/ on that node

#ceph-deploy admin {node-name}

After this you can run Ceph admin operations on that node as root.

Ceph Caveats / Troubleshooting

Disable write caching on raw drives

#sudo hdparm -W 0 /dev/hda 0
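To check the current write-cache setting first (a quick sketch):

#sudo hdparm -W /dev/hda #prints something like "write-caching = 0 (off)" when disabled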

Choose the file system carefully

btrfs is acceptable for development and test environments;

use xfs in production.

The cephx authentication system

Official docs:

Chinese translation:

Disable authentication before upgrading

Error while running the "Fetching Ceph keyrings" task

Error message:

TASK: [ceph | Fetching Ceph keyrings] *****************************************

…(omitted)

template_from_string

res = jinja2.utils.concat(rf)

File "<template>", line 9, in root

File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads

return _default_decoder.decode(s)

File "/usr/lib64/python2.7/json/decoder.py", line 365, in decode

obj, end = self.raw_decode(s, idx=_w(s, 0).end())

File "/usr/lib64/python2.7/json/decoder.py", line 383, in raw_decode

raise ValueError("No JSON object could be decoded")

ValueError: No JSON object could be decoded

FATAL: all hosts have already failed -- aborting

Solution:

#delete the volumes on the Monitor node

#docker volume rm ceph_mon ceph_mon_config

Monitor clock skew

Error message:

#ceph -s

cluster f5a13a56-c9af-4f7b-9ba9-f55de577bafa

health HEALTH_WARN

clock skew detected on mon.192.168.100.134, mon.192.168.100.135

Monitor clock skew detected

monmap e1: 3 mons at {192.168.100.112=192.168.100.112:6789/0,192.168.100.134=192.168.100.134:6789/0,192.168.100.135=192.168.100.135:6789/0}

election epoch 6, quorum 0,1,2 192.168.100.112,192.168.100.134,192.168.100.135

osdmap e12: 5 osds: 5 up, 5 in

pgmap v17: 64 pgs, 1 pools, 0 bytes data, 0 objects

163 MB used, 224 GB / 224 GB avail

64 active+clean

Solution:

In the [global] section of /etc/ceph/ceph.conf, adjust the allowed clock drift:

[global]

mon_clock_drift_allowed = 2 #allow 2 seconds of drift
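After editing the file, push it to the nodes and restart the monitors so the change takes effect (a short sketch, assuming the ceph-deploy layout used earlier in this article):

#ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3

#sudo systemctl restart ceph-mon.target #run on each monitor node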
