2022-05-30
Docker Container Orchestration
Container orchestration sounds like a lofty term, but put plainly it can be understood as "cluster management". Docker has quite a few orchestration tools; the three best known are dubbed the "three musketeers" of Docker (Compose, Machine, Swarm). The first two ship as separate tools, while Swarm is Docker's native orchestration tool and is built into the Docker engine itself.
Swarm consists of three major parts, each mapping onto a family of CLI subcommands (sketched after this list):
swarm: cluster management
node: node management
service: service management
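A rough, non-exhaustive map of those subcommand families, all of which appear later in this article:

docker swarm: init, join, leave, join-token
docker node: ls, promote, demote, rm
docker service: create, ls, ps, scale, logs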
Cluster and Node Management
The docker swarm command creates or joins a cluster. Nodes in a Docker cluster come in two kinds, manager and worker. Both kinds can run Docker containers, but only manager nodes have management capabilities.
A cluster containing only manager nodes also works normally.
Creating and Joining a Cluster
My test environment consists of two machines, with IP addresses 192.168.1.220 and 192.168.1.116. First, create the cluster on 220:
# docker swarm init
Swarm initialized: current node (ppmurem8j7mdbmgpdhssjh0h9) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3e4l8crbt04xlqfxwyw6nuf9gtcwpw72zggtayzy8clyqmvb5h-7o6ww4ftwm38dz7ydbolsz3kd 192.168.1.220:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
After docker swarm init runs, the cluster has been created. The current machine automatically becomes the cluster's manager node, and the output includes the command other machines use to join, namely: docker swarm join --token SWMTKN-1-3e4l8crbt04xlqfxwyw6nuf9gtcwpw72zggtayzy8clyqmvb5h-7o6ww4ftwm38dz7ydbolsz3kd 192.168.1.220:2377. A node that joins with this token becomes a worker node. To add a new manager node instead, run docker swarm join-token manager; it prints a similar command that joins the node as a manager. If you forget the join command, docker swarm join-token worker prints it again.
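If you only need the raw token, for instance in a provisioning script, docker swarm join-token also accepts a -q flag; a small sketch, run on the manager:

# docker swarm join-token -q worker
# docker swarm join-token -q manager

Each prints just the token for that role, which can then be passed to docker swarm join on the new machine.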
Now run the join command on 116:
# docker swarm join --token SWMTKN-1-12dlq70adr3z38mlkltc288rdzevtjn73xse7d0qndnjmx45zs-b1kwenzmrsqb4o5nvni5rafcr 192.168.1.220:2377
This node joined a swarm as a worker.
A small hiccup occurred here: the time zones of my two machines were inconsistent, which caused an error when the worker node tried to join:
Error response from daemon: error while validating Root CA Certificate: x509: certificate has expired or is not yet valid
After updating the time zone on 220, the node still could not join. So I deleted the cluster and recreated it, and then it worked. I did not try whether docker swarm update would also have fixed it.
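For what it's worth, newer Docker releases can also rotate the swarm's root CA certificate in place, which may be a gentler way out of certificate trouble than recreating the cluster; I have not verified this for the error above:

# docker swarm ca --rotate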
After nodes have joined, you can list the cluster's nodes on the manager node:
# docker node ls
ID                            HOSTNAME                  STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b4cmakc4hpc9ra4rruy5x5yo *   localhost.localdomain     Ready    Active         Leader           20.10.3
hz50cnwrbk4vxa7h0g23ccil9     zhangmh-virtual-machine   Ready    Active                          20.10.1
Leaving the Cluster
Run the following command on 116 to leave the cluster:
# docker swarm leave
Node left the swarm.
List the nodes again:
# docker node ls
ID                            HOSTNAME                  STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b4cmakc4hpc9ra4rruy5x5yo *   localhost.localdomain     Ready    Active         Leader           20.10.3
hz50cnwrbk4vxa7h0g23ccil9     zhangmh-virtual-machine   Down     Active                          20.10.1
Notice that the node that just left is still listed; only its status has changed to Down. It has to be removed on the manager node:
# docker node rm hz50cnwrbk4vxa7h0g23ccil9
hz50cnwrbk4vxa7h0g23ccil9
# docker node ls
ID                            HOSTNAME                  STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b4cmakc4hpc9ra4rruy5x5yo *   localhost.localdomain     Ready    Active         Leader           20.10.3
xby86ffkqw3axyfkwd4s7nubz     zhangmh-virtual-machine   Ready    Active                          20.10.1
Only then is the node truly removed.
If the departing node is a manager, it must leave forcibly: docker swarm leave -f.
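If a node disappeared without leaving cleanly, the surviving manager can still evict it. A sketch with a hypothetical node ID (a manager must be demoted before it can be removed, and removing a node that is not down requires force):

# docker node demote <node-id>
# docker node rm -f <node-id>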
Promoting a Node to Manager
A cluster with a single manager is fragile: if the manager node crashes, the whole cluster is left leaderless. Docker recommends at least three manager nodes per cluster, and more than half of the managers must be reachable for the cluster to keep working properly. With only two manager nodes, the failure of either one still leaves the cluster unavailable.
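The arithmetic behind this is the Raft majority rule: a cluster of N managers needs floor(N/2)+1 of them reachable, and therefore tolerates floor((N-1)/2) failures:

Managers   Majority needed   Failures tolerated
1          1                 0
2          2                 0
3          2                 1
5          3                 2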
For our test, of course, that much is unnecessary; two manager nodes are enough to check whether leader failover works. The following command promotes a worker node directly to a manager:
# docker node promote xby86ffkqw3axyfkwd4s7nubz
Node xby86ffkqw3axyfkwd4s7nubz promoted to a manager in the swarm.
# docker node ls
ID                            HOSTNAME                  STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b4cmakc4hpc9ra4rruy5x5yo *   localhost.localdomain     Ready    Active         Leader           20.10.3
xby86ffkqw3axyfkwd4s7nubz     zhangmh-virtual-machine   Ready    Active         Reachable        20.10.1
OK, there are now two manager nodes. 220's status is Leader, meaning it is the current leader; 116's status is Reachable. Now stop the Docker service on node 220:
# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket
Stopping it printed a warning, meaning the docker service has been stopped but can still be woken up by the docker.socket service. Check the node status again:
# docker node ls
ID                            HOSTNAME                  STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b4cmakc4hpc9ra4rruy5x5yo *   localhost.localdomain     Ready    Active         Reachable        20.10.3
xby86ffkqw3axyfkwd4s7nubz     zhangmh-virtual-machine   Ready    Active         Leader           20.10.1
As you can see, 116 has become the Leader, and 220 has already been woken up again and is Reachable. Docker cluster stability seems quite good.
Service Management
Once every node in the cluster is configured, you can create services. A Docker service essentially starts containers, but endows them with replication and load balancing. Using the ws:1.0 image built earlier as an example, create 5 replicas:
# docker service create --replicas 5 --name ws -p 80:8000 ws:1.0
image ws:1.0 could not be accessed on a registry to record its digest. Each node will access ws:1.0 independently, possibly leading to different nodes running different versions of the image.
1nj3o38slbo2zwt5p69l1qi5t
overall progress: 5 out of 5 tasks
1/5: running   [==================================================>]
2/5: running   [==================================================>]
3/5: running   [==================================================>]
4/5: running   [==================================================>]
5/5: running   [==================================================>]
verify: Service converged
The service has been created and is running; it can be reached from a browser on port 80 of both 220 and 116.
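Both addresses answering is the swarm routing mesh at work: every node in the cluster listens on the published port and forwards each request to one of the replicas, wherever it runs. A quick check from a shell, using the addresses of this test setup:

# curl http://192.168.1.220/
# curl http://192.168.1.116/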
Use the docker service ls command to view the ws service:
# docker service ls
ID             NAME   MODE         REPLICAS   IMAGE    PORTS
1nj3o38slbo2   ws     replicated   5/5        ws:1.0   *:80->8000/tcp
Use the docker service ps ws command to view the ws service's tasks:
# docker service ps ws
ID             NAME   IMAGE    NODE                      DESIRED STATE   CURRENT STATE           ERROR   PORTS
jpckj0mn24ae   ws.1   ws:1.0   zhangmh-virtual-machine   Running         Running 6 minutes ago
yrrdn4ntb089   ws.2   ws:1.0   localhost.localdomain     Running         Running 6 minutes ago
mdjxadbmlmhs   ws.3   ws:1.0   zhangmh-virtual-machine   Running         Running 6 minutes ago
kqdwfrddbaxd   ws.4   ws:1.0   localhost.localdomain     Running         Running 6 minutes ago
is2iimz1v4eb   ws.5   ws:1.0   zhangmh-virtual-machine   Running         Running 6 minutes ago
Two of the tasks run on 220 and three on 116. After visiting the page a few times in a browser, I checked the service logs with docker service logs ws:
# docker service logs ws
ws.5.is2iimz1v4eb@zhangmh-virtual-machine    | [I 210219 01:57:23 web:2239] 200 GET / (10.0.0.2) 3.56ms
ws.5.is2iimz1v4eb@zhangmh-virtual-machine    | [W 210219 01:57:23 web:2239] 404 GET /favicon.ico (10.0.0.2) 0.97ms
ws.5.is2iimz1v4eb@zhangmh-virtual-machine    | [I 210219 01:57:28 web:2239] 200 GET / (10.0.0.4) 0.82ms
ws.5.is2iimz1v4eb@zhangmh-virtual-machine    | [W 210219 01:57:28 web:2239] 404 GET /favicon.ico (10.0.0.4) 0.79ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:01:45 web:2239] 304 GET / (10.0.0.2) 1.82ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:01:59 web:2239] 304 GET / (10.0.0.2) 0.49ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:01 web:2239] 304 GET / (10.0.0.2) 2.05ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:02 web:2239] 304 GET / (10.0.0.2) 0.89ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:02 web:2239] 304 GET / (10.0.0.2) 1.13ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:03 web:2239] 304 GET / (10.0.0.2) 0.92ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:03 web:2239] 304 GET / (10.0.0.2) 2.19ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:20 web:2239] 304 GET / (10.0.0.2) 1.00ms
As the logs show, even though I was accessing 220, the requests were actually still served by the tasks on 116.
If 116 is shut down, the tasks running on it will automatically move to the 220 node. But 116 is currently a manager node, and stopping it would leave the cluster unavailable, so it first has to be demoted to a worker:
# docker node demote xby86ffkqw3axyfkwd4s7nubz
Manager xby86ffkqw3axyfkwd4s7nubz demoted in the swarm.
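Incidentally, promote and demote are shorthand for docker node update --role; the equivalent long forms are:

# docker node update --role manager xby86ffkqw3axyfkwd4s7nubz
# docker node update --role worker xby86ffkqw3axyfkwd4s7nubz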
With 116 demoted, shut it down.
# docker service ps ws
ID             NAME   IMAGE    NODE                    DESIRED STATE   CURRENT STATE               ERROR   PORTS
jrj9ben9vr5c   ws.1   ws:1.0   localhost.localdomain   Running         Running 57 minutes ago
yrrdn4ntb089   ws.2   ws:1.0   localhost.localdomain   Running         Running about an hour ago
opig9zrmp261   ws.3   ws:1.0   localhost.localdomain   Running         Running 57 minutes ago
kqdwfrddbaxd   ws.4   ws:1.0   localhost.localdomain   Running         Running about an hour ago
hiz8730pl3je   ws.5   ws:1.0   localhost.localdomain   Running         Running 57 minutes ago
All 5 tasks have migrated to 220.
# docker ps
CONTAINER ID   IMAGE    COMMAND                  CREATED       STATUS       PORTS   NAMES
bc4c457ce769   ws:1.0   "/bin/sh -c 'python …"   3 hours ago   Up 3 hours           ws.5.hiz8730pl3je7qvo2lv6k554b
c846ac1c4d91   ws:1.0   "/bin/sh -c 'python …"   3 hours ago   Up 3 hours           ws.3.opig9zrmp2619t4e1o3ntnj2w
214daa36c138   ws:1.0   "/bin/sh -c 'python …"   3 hours ago   Up 3 hours           ws.1.jrj9ben9vr5c3biuc90xtoffh
17842db9dc47   ws:1.0   "/bin/sh -c 'python …"   3 hours ago   Up 3 hours           ws.4.kqdwfrddbaxd5z78uo3zsy5sd
47185ba9a4fd   ws:1.0   "/bin/sh -c 'python …"   3 hours ago   Up 3 hours           ws.2.yrrdn4ntb089t6i66w8xvq8r9
# docker kill bc4c457ce769
bc4c457ce769
After killing the container backing task 5, wait a few seconds and list the containers again:
# docker ps
CONTAINER ID   IMAGE    COMMAND                  CREATED              STATUS              PORTS   NAMES
416b55e8d174   ws:1.0   "/bin/sh -c 'python …"   About a minute ago   Up About a minute           ws.5.fvpm334t2zqbj5l50tyx5glr6
c846ac1c4d91   ws:1.0   "/bin/sh -c 'python …"   3 hours ago          Up 3 hours                  ws.3.opig9zrmp2619t4e1o3ntnj2w
214daa36c138   ws:1.0   "/bin/sh -c 'python …"   3 hours ago          Up 3 hours                  ws.1.jrj9ben9vr5c3biuc90xtoffh
17842db9dc47   ws:1.0   "/bin/sh -c 'python …"   3 hours ago          Up 3 hours                  ws.4.kqdwfrddbaxd5z78uo3zsy5sd
47185ba9a4fd   ws:1.0   "/bin/sh -c 'python …"   3 hours ago          Up 3 hours                  ws.2.yrrdn4ntb089t6i66w8xvq8r9
Task 5 has been started again.
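This self-healing also leaves a trace in the task history: docker service ps ws should now list the killed task with a desired state of Shutdown alongside its replacement (exact output varies):

# docker service ps ws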
The number of replicas of a Docker service can be adjusted dynamically. For example, when system load is too high and a replica needs to be added, just run:
# docker service scale ws=6
ws scaled to 6
overall progress: 6 out of 6 tasks
1/6: running   [==================================================>]
2/6: running   [==================================================>]
3/6: running   [==================================================>]
4/6: running   [==================================================>]
5/6: running   [==================================================>]
6/6: running   [==================================================>]
verify: Service converged
This adds one more replica.
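docker service scale is itself a shorthand; the same adjustment can be made through docker service update, and scaling back down works exactly the same way:

# docker service update --replicas 6 ws
# docker service scale ws=5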
Once the services are created, they start together with the Docker system service; just run:
systemctl enable docker
The cluster and services created above will then come up at boot, so there is no need to worry about a machine reboot leaving things in a broken state.
Shared Data Volumes
First, create a data volume with the docker volume create command:
# docker volume create ws_volume
ws_volume
After it is created, list the existing volumes with docker volume ls:
# docker volume ls
DRIVER    VOLUME NAME
local     ws_volume
Use docker inspect to view the volume's details:
# docker inspect ws_volume
[
    {
        "CreatedAt": "2021-02-19T14:09:58+08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/ws_volume/_data",
        "Name": "ws_volume",
        "Options": {},
        "Scope": "local"
    }
]
When creating a service, the --mount parameter mounts the volume into the service:
# docker service create --replicas 2 --name ws -p 80:8000 --mount type=volume,src=ws_volume,dst=/volume ws:1.0
image ws:1.0 could not be accessed on a registry to record its digest. Each node will access ws:1.0 independently, possibly leading to different nodes running different versions of the image.
iiiit9slq9qqwcdwwi0w0mcz5
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
--mount takes many sub-parameters, written as key=value pairs separated by commas. In the simplest case, only type, src, and dst need to be set.
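Two variations worth knowing, sketched here with an illustrative service name, port, and host path: type=bind mounts an existing host directory instead of a named volume, and appending readonly makes the mount read-only inside the containers:

# docker service create --replicas 2 --name ws-bind -p 81:8000 --mount type=bind,src=/opt/ws_data,dst=/volume,readonly ws:1.0

Note that with type=bind the source directory must already exist on every node where a task may be scheduled, whereas a named volume is created on demand on each node.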