Ceph Swift API Configuration and Usage (Part 3) [Stories with Cloud Native]
2022-05-30
Ever since we decided to move to Kubernetes, persistent storage for containers had been an open question. After some research we settled on Ceph. ceph-deploy can get a cluster up, but deploying both the local and the production environments by hand every time... manual deployment was simply not going to happen!
Environment notes
It is recommended to upgrade the system kernel to 4.x; older kernels tend to cause all sorts of problems, so don't create unnecessary trouble for yourself.
Upgrading the kernel
```shell
# Import the public key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install ELRepo
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Load the elrepo-kernel metadata
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
# List the available kernel packages
yum --disablerepo=\* --enablerepo=elrepo-kernel list kernel*
# Install the latest mainline kernel
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml.x86_64

# Check the default boot order
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (3.10.0-862.3.3.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-514.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-cacfb77f99dc43f5a7d9b51bbedf922d) 7 (Core)
# Boot entries are numbered from 0, and the new kernel is inserted at the top
# (it sits at position 0, the old kernel at 1), so select entry 0.
grub2-set-default 0
reboot
uname -a

# Remove the old kernel and its tools
rpm -qa | grep kernel | grep 3.10
rpm -qa | grep kernel | grep 3.10 | xargs yum remove -y
# Install the tools package matching the new kernel
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml-tools.x86_64
rpm -qa | grep kernel
```
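After the reboot it is easy to forget the version check. The sketch below wraps it in a tiny helper; `check_kernel` is a hypothetical function for this article, not part of any Ceph tooling, and it only compares the major version number.

```shell
# Minimal sketch: flag a pre-4.x kernel before continuing.
check_kernel() {           # argument: a kernel release string, e.g. from `uname -r`
  major="${1%%.*}"         # keep everything before the first dot
  if [ "$major" -ge 4 ] 2>/dev/null; then
    echo "OK"
  else
    echo "upgrade"
  fi
}
check_kernel "$(uname -r)"
```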
I chose the ml (mainline) kernel here; if you want something more conservative, choose the lt (long-term) version.
Installing Ansible
```shell
pip install ansible==2.4.2
```
Follow the official documentation strictly: an Ansible version that is either too new or too old will produce all kinds of errors. See the official ceph-ansible installation docs.
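Given how strict the version pin above is, a quick sanity check before running any playbook can save a confusing failure later. `check_ansible` below is a hypothetical helper written for this article; it simply matches the installed version against the 2.4.x series pinned above.

```shell
# Sketch: confirm the pinned Ansible release before running the playbooks.
check_ansible() {          # argument: an ansible version string, e.g. 2.4.2
  case "$1" in
    2.4.*) echo "ok: ansible $1" ;;
    *)     echo "unsupported: ansible ${1:-not installed}" ;;
  esac
}
check_ansible "$(ansible --version 2>/dev/null | awk 'NR==1{print $2}')"
```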
Passwordless SSH login configuration (omitted)
See the earlier ceph-deploy article.
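For readers without the earlier article at hand, the gist is: generate a key pair with `ssh-keygen -t rsa`, then push the public key to every node with `ssh-copy-id`. The sketch below only prints the per-node commands rather than running them; the IPs and the root user are assumptions matching the inventory used in this post.

```shell
# Sketch: print the ssh-copy-id command to run for each cluster node.
copyid_cmds() {
  for host in 192.168.10.2 192.168.10.3 192.168.10.4; do
    echo "ssh-copy-id -i \$HOME/.ssh/id_rsa.pub root@$host"
  done
}
copyid_cmds
```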
Installation
```shell
wget -c https://github.com/ceph/ceph-ansible/archive/v3.1.7.tar.gz
tar xf v3.1.7.tar.gz
cd ceph-ansible-3.1.7
```
`vim hosts`:

```ini
### ceph
[mons]
192.168.10.2
192.168.10.3
192.168.10.4

[osds]
192.168.10.2
192.168.10.3
192.168.10.4

[mgrs]
192.168.10.2
192.168.10.3
192.168.10.4

[mdss]
192.168.10.2
192.168.10.3
192.168.10.4

[clients]
192.168.10.1
192.168.10.2
192.168.10.3
192.168.10.4
192.168.10.5
192.168.10.6
```
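Before running any playbook it is worth confirming the inventory parses the way you expect. The sketch below writes a trimmed sample copy of the hosts file and lists the members of one group with awk; the real pre-flight check is the commented `ansible ... -m ping` at the end, which verifies SSH connectivity to every node.

```shell
# Write a trimmed sample inventory (mirrors the real one above).
cat > /tmp/hosts.sample <<'EOF'
[mons]
192.168.10.2
192.168.10.3
192.168.10.4
[osds]
192.168.10.2
EOF
# Print every host listed under [mons].
awk -v grp=mons '/^\[/{g=substr($0,2,length($0)-2); next} g==grp && NF {print}' /tmp/hosts.sample
# Against the real inventory: ansible -i hosts all -m ping
```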
```shell
cp group_vars/all.yml.sample group_vars/all.yml
cp group_vars/osds.yml.sample group_vars/osds.yml
cp site.yml.sample site.yml
```

`vim group_vars/all.yml`:

```yaml
ceph_origin: repository
ceph_repository: community
ceph_mirror: http://mirrors.aliyun.com/ceph
ceph_stable_key: http://mirrors.aliyun.com/ceph/keys/release.asc
ceph_stable_release: luminous
ceph_stable_repo: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}"

fsid: 54d55c64-d458-4208-9592-36ce881cbcb7  ## generated with uuidgen
generate_fsid: false

cephx: true

public_network: 192.168.10.0/24
cluster_network: 192.168.10.0/24
monitor_interface: eth0

ceph_conf_overrides:
  global:
    rbd_default_features: 7
    auth cluster required: cephx
    auth service required: cephx
    auth client required: cephx
    osd journal size: 2048
    osd pool default size: 3
    osd pool default min size: 1
    mon_pg_warn_max_per_osd: 1024
    osd pool default pg num: 128
    osd pool default pgp num: 128
    max open files: 131072
    osd_deep_scrub_randomize_ratio: 0.01
  mgr:
    mgr modules: dashboard
  mon:
    mon_allow_pool_delete: true
  client:
    rbd_cache: true
    rbd_cache_size: 335544320
    rbd_cache_max_dirty: 134217728
    rbd_cache_max_dirty_age: 10
  osd:
    osd mkfs type: xfs
    osd mount options xfs: "rw,noexec,nodev,noatime,nodiratime,nobarrier"
    ms_bind_port_max: 7100
    osd_client_message_size_cap: 2147483648
    osd_crush_update_on_start: true
    osd_deep_scrub_stride: 131072
    osd_disk_threads: 4
    osd_map_cache_bl_size: 128
    osd_max_object_name_len: 256
    osd_max_object_namespace_len: 64
    osd_max_write_size: 1024
    osd_op_threads: 8
    osd_recovery_op_priority: 1
    osd_recovery_max_active: 1
    osd_recovery_max_single_start: 1
    osd_recovery_max_chunk: 1048576
    osd_recovery_threads: 1
    osd_max_backfills: 4
    osd_scrub_begin_hour: 23
    osd_scrub_end_hour: 7
#    bluestore block create: true
#    bluestore block db size: 73014444032
#    bluestore block db create: true
#    bluestore block wal size: 107374182400
#    bluestore block wal create: true
```

`vim group_vars/osds.yml`:

```yaml
devices:
  - /dev/vdc
  - /dev/vdd
  - /dev/vde
osd_scenario: collocated
osd_objectstore: bluestore

#osd_scenario: non-collocated
#osd_objectstore: bluestore
#devices:
#  - /dev/sdc
#  - /dev/sdd
#  - /dev/sde
#dedicated_devices:
#  - /dev/sdf
#  - /dev/sdf
#  - /dev/sdf
#bluestore_wal_devices:
#  - /dev/sdg
#  - /dev/sdg
#  - /dev/sdg
#
#monitor_address: 192.168.10.125
```

`vim site.yml`, commenting out the components you don't need:

```yaml
---
# Defines deployment design and assigns role to server groups
- hosts:
  - mons
#  - agents
  - osds
  - mdss
#  - rgws
#  - nfss
#  - restapis
#  - rbdmirrors
  - clients
  - mgrs
#  - iscsigws
#  - iscsi-gws  # for backward compatibility only!
```

Then run the playbook:

```shell
ansible-playbook -i hosts site.yml
```
At this point the Ceph deployment is complete; log in to a Ceph node and check the cluster status.
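The usual status commands are `ceph -s` and `ceph health`. For scripting the post-deploy check, the sketch below classifies the health output; `check_health` is a hypothetical helper written for this article, not a Ceph command.

```shell
# Sketch: turn `ceph health` output into a human-readable verdict.
check_health() {           # argument: the first word of `ceph health` output
  case "$1" in
    HEALTH_OK)    echo "cluster healthy" ;;
    HEALTH_WARN*) echo "cluster degraded: $1" ;;
    *)            echo "cluster problem: $1" ;;
  esac
}
check_health "$(ceph health 2>/dev/null || echo unreachable)"
```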
If you need to wipe the cluster and start over, use the purge playbook:

```shell
# must be copied to the project root directory
cp infrastructure-playbooks/purge-cluster.yml purge-cluster.yml
ansible-playbook -i hosts purge-cluster.yml
```
ceph-ansible greatly improves working efficiency and goes a long way toward eliminating manual-operation mistakes: a genuine labor-saving tool!