2022-05-29
Preface
I'm preparing for the RHCA, so I'm writing up my notes on this topic.
This post covers:
a teamd network configuration demo, with a NIC failover demonstration
common commands for managing a network team (demo)
「 I'm not upset that you lied to me; I'm upset that from now on I can't believe you. ------ Nietzsche 」
Keeping an application service highly available starts with keeping the network available. Only when the network is up can you scale the service horizontally and use cluster load balancing to avoid single points of failure. So how do we make the network itself highly available?
At the link layer, fault tolerance is generally achieved through redundancy, avoiding single points of failure:
Between switches, link aggregation (Eth-Trunk) bundles multiple cables into one logical link. This load-balances traffic across the lines for higher bandwidth while providing fault tolerance: if one line fails, the whole network does not go down.
Note that a Huawei Eth-Trunk must be configured on both switches.
# A cable is added between the switches; the interface on both ends is Ethernet 0/0/8
And between the switch and the server, how is link-layer fault tolerance achieved?
That's what this post is about: the server avoids a single point of failure through redundant network interfaces (NICs). Multiple interfaces are linked together into one aggregated logical interface, and traffic is spread across the underlying interfaces, providing both fault tolerance and throughput. It is essentially the same idea as link aggregation between switches.
There are several technologies for NIC link aggregation. In Red Hat's distributions, RHEL 5 and RHEL 6 use bonding, while RHEL 7 and RHEL 8 use teaming; in RHEL 7 and RHEL 8 the two coexist, so you can choose either one.
Teaming binds multiple physical NICs on the same server into one virtual NIC in software (likewise, in a VM, multiple virtual NICs can be bound into one logical NIC).
To the outside network, the server has a single usable network interface: for any application or network peer there is one network link, one reachable IP address.
Inside the server, the team spreads network traffic over multiple interfaces, providing failover or higher throughput.
Configuring a network team
The teamd package is required:
┌──[root@workstation.lab.example.com]-[~/web]
└─$yum list teamd
Last metadata expiration check: 0:41:27 ago on Sun 17 Apr 2022 08:34:23 PM CST.
Installed Packages
teamd.x86_64    1.28-4.el8    @anaconda
Available Packages
teamd.i686      1.28-4.el8    rhel-8.1-for-x86_64-baseos-rpms
┌──[root@workstation.lab.example.com]-[~/web]
└─$rpm -qc teamd
/etc/dbus-1/system.d/teamd.conf
/usr/lib/systemd/system/teamd@.service
┌──[root@workstation.lab.example.com]-[~/web]
└─$nmcli connection add type team
team        team-slave
A network team is implemented by a kernel driver plus a userspace daemon (teamd):
the kernel driver handles network packets efficiently;
the teamd daemon handles the logic and the interfaces.
Since we have redundancy, there has to be a way to distribute the load. A team defines this with a runner; the supported runner types are:
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con add con-name team0 ifname team0 type team team.runner
activebackup   broadcast   lacp   loadbalance   random   roundrobin
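The runner names that nmcli tab-completes above map directly into teamd's JSON configuration. As a minimal sketch (plain Python, no teamd required; the helper name is my own), building and validating such a config string looks like this:

```python
import json

# Runner types supported by teamd, as listed by nmcli tab completion above.
RUNNERS = ["activebackup", "broadcast", "lacp",
           "loadbalance", "random", "roundrobin"]

def team_config(runner):
    """Build the minimal teamd JSON config for a given runner."""
    if runner not in RUNNERS:
        raise ValueError("unknown runner: %s" % runner)
    return json.dumps({"runner": {"name": runner}})

print(team_config("activebackup"))
# → {"runner": {"name": "activebackup"}}
```

The resulting string is exactly what gets stored by NetworkManager when you pass `team.runner activebackup`, as the ifcfg file later in this post shows.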
All network interaction happens through the team interface (the master interface). A team interface contains multiple port interfaces (also called ports or slave interfaces).
When controlling a team interface with NetworkManager, especially while troubleshooting, keep the following in mind:
Starting the team interface does not automatically start its port interfaces.
Stopping the team interface always stops its port interfaces.
Starting a port interface always starts the team interface.
Stopping a port interface does not stop the team interface.
A team interface without ports can start static IP connections.
A team interface without port interfaces waits for ports when starting a DHCP connection.
A team interface with a DHCP connection that is waiting for ports completes activation when a port interface with a carrier is added.
A team interface with a DHCP connection that is waiting for ports keeps waiting when a port without a carrier is added.
Configuring and managing a team interface and its port interfaces with nmcli involves four steps: create the team interface, configure its IP attributes, add the port interfaces, and bring the connections up.
The current environment:
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con show
NAME                UUID                                  TYPE      DEVICE
eth0-static         7c6d44fe-8349-45ea-beb5-226fe674225b  ethernet  eth0
Wired connection 1  4ae4bb9e-8f2d-3774-95f8-868d74edcc3c  ethernet  --
Wired connection 2  c0e6d328-fcb8-3715-8d82-f8c37cb42152  ethernet  --
Wired connection 3  9b5ac87b-572c-3632-b8a2-ca242f22733d  ethernet  --
┌──[root@servera.lab.example.com]-[~]
└─$nmcli dev
DEVICE  TYPE      STATE         CONNECTION
eth0    ethernet  connected     eth0-static
eth1    ethernet  disconnected  --
eth2    ethernet  disconnected  --
lo      loopback  unmanaged     --
We will create a team link-aggregation interface, then make eth1 and eth2 ports of it by setting their master to the new team interface.
Create a team interface team0 with the runner set to activebackup (active/backup mode):
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con add con-name team0 ifname team0 type team team.runner activebackup
Connection 'team0' (2053fe72-6785-4b16-90f0-256c2bf8c4f3) successfully added.
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con show
NAME                UUID                                  TYPE      DEVICE
team0               2053fe72-6785-4b16-90f0-256c2bf8c4f3  team      team0
Wired connection 2  c0e6d328-fcb8-3715-8d82-f8c37cb42152  ethernet  eth1
Wired connection 3  9b5ac87b-572c-3632-b8a2-ca242f22733d  ethernet  eth2
eth0-static         7c6d44fe-8349-45ea-beb5-226fe674225b  ethernet  eth0
Wired connection 1  4ae4bb9e-8f2d-3774-95f8-868d74edcc3c  ethernet  --
We will run ping tests against serverd, so we pick an IP based on serverd's address:
┌──[root@workstation.lab.example.com]-[~/web]
└─$ansible serverd -m shell -a 'ip a'
serverd | CHANGED | rc=0 >>
1: lo:
Modify team0 to add the IPv4 address and mask 192.168.0.100/24 (this becomes the server's NIC IP once the team aggregation is up), then activate the interface:
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con modify team0 ipv4.method manual ipv4.addresses 192.168.0.100/24
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con up team0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/33)
Add a connection team0-port1 for the eth1 interface with its master set to team0, and activate it:
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con add con-name team0-port1 type team-slave ifname eth1 master team0
Connection 'team0-port1' (fd24db64-6f9a-41d3-87a5-79f825731d7f) successfully added.
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con up team0-port1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/29)
Add a connection team0-port2 for the eth2 interface with its master set to team0, and activate it:
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con add con-name team0-port2 type team-slave ifname eth2 master team0
Connection 'team0-port2' (16a95c2a-b581-4b99-ab5a-b9d5ea6b3a87) successfully added.
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con up team0-port2
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/32)
Check the current connection information:
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con show
NAME         UUID                                  TYPE      DEVICE
team0        2053fe72-6785-4b16-90f0-256c2bf8c4f3  team      team0
eth0-static  7c6d44fe-8349-45ea-beb5-226fe674225b  ethernet  eth0
team0-port1  fd24db64-6f9a-41d3-87a5-79f825731d7f  ethernet  eth1
team0-port2  16a95c2a-b581-4b99-ab5a-b9d5ea6b3a87  ethernet  eth2
........
┌──[root@servera.lab.example.com]-[~]
└─$
Test with ping 192.168.0.254 (serverd), specifying the interface; you can see the traffic goes out via the 192.168.0.100 address we configured:
┌──[root@servera.lab.example.com]-[~]
└─$ping -I team0 -c 4 192.168.0.254
PING 192.168.0.254 (192.168.0.254) from 192.168.0.100 team0: 56(84) bytes of data.
64 bytes from 192.168.0.254: icmp_seq=1 ttl=64 time=0.663 ms
64 bytes from 192.168.0.254: icmp_seq=2 ttl=64 time=0.675 ms
64 bytes from 192.168.0.254: icmp_seq=3 ttl=64 time=0.670 ms
64 bytes from 192.168.0.254: icmp_seq=4 ttl=64 time=0.701 ms

--- 192.168.0.254 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 69ms
rtt min/avg/max/mdev = 0.663/0.677/0.701/0.023 ms
┌──[root@servera.lab.example.com]-[~]
└─$ping -c 4 192.168.0.254
PING 192.168.0.254 (192.168.0.254) 56(84) bytes of data.
64 bytes from 192.168.0.254: icmp_seq=1 ttl=64 time=1.32 ms
64 bytes from 192.168.0.254: icmp_seq=2 ttl=64 time=0.891 ms
64 bytes from 192.168.0.254: icmp_seq=3 ttl=64 time=1.10 ms
64 bytes from 192.168.0.254: icmp_seq=4 ttl=64 time=0.677 ms

--- 192.168.0.254 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 8ms
rtt min/avg/max/mdev = 0.677/0.997/1.323/0.240 ms
┌──[root@servera.lab.example.com]-[~]
└─$
Inspect the team interface with the teamdctl command:
┌──[root@servera.lab.example.com]-[~]
└─$teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth1
┌──[root@servera.lab.example.com]-[~]
└─$
As you can see, the current runner is active/backup: eth1 is the active interface and eth2 is the backup, so all traffic currently flows through eth1.
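In a monitoring script you might want to check this state programmatically. A hedged sketch that extracts the active port from `teamdctl <dev> state` output of the shape shown above (the parsing helper is my own, not a teamd API):

```python
def active_port(state_text):
    """Return the active port name from `teamdctl <dev> state` output, or None."""
    for line in state_text.splitlines():
        line = line.strip()
        if line.startswith("active port:"):
            port = line.split(":", 1)[1].strip()
            return port or None  # an empty value means no active port
    return None

sample = """\
setup:
  runner: activebackup
ports:
  eth1
runner:
  active port: eth1
"""
print(active_port(sample))  # → eth1
```

After the failover demonstrated below, the same function would report eth2.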
Keep pinging 192.168.0.254 in the background:
┌──[root@servera.lab.example.com]-[~]
└─$ping 192.168.0.254 > /dev/null &
[1] 3324
Monitoring the interface traffic, you can see that eth1 keeps sending ICMP packets:
┌──[root@servera.lab.example.com]-[~]
└─$tcpdump -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
01:55:45.049388 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 47, length 64
01:55:45.050152 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 47, length 64
01:55:45.833842 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8004, length 35
01:55:46.073251 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 48, length 64
01:55:46.073964 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 48, length 64
01:55:47.097140 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 49, length 64
01:55:47.098096 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 49, length 64
^C
7 packets captured
7 packets received by filter
0 packets dropped by kernel
┌──[root@servera.lab.example.com]-[~]
└─$
eth2 carries no traffic and sits idle:
┌──[root@servera.lab.example.com]-[~]
└─$tcpdump -i eth2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 262144 bytes
01:56:11.817347 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8005, length 35
01:56:13.801263 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8005, length 35
01:56:15.121093 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 52:54:00:02:fa:0d (oui Unknown), length 286
01:56:15.849784 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8005, length 35
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
┌──[root@servera.lab.example.com]-[~]
└─$
Use nmcli con down to take eth1's connection down, simulating a failure of the eth1 interface:
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con down team0-port1
Connection 'team0-port1' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/34)
Monitoring the interface traffic again, eth2 now keeps sending the ICMP packets, while eth1 is idle:
┌──[root@servera.lab.example.com]-[~]
└─$tcpdump -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
01:57:15.817487 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8004, length 35
01:57:17.802205 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8004, length 35
01:57:19.849582 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8004, length 35
01:57:21.834175 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8004, length 35
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
┌──[root@servera.lab.example.com]-[~]
└─$tcpdump -i eth2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 262144 bytes
01:57:25.721245 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 146, length 64
01:57:25.722066 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 146, length 64
01:57:25.801453 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8005, length 35
01:57:26.722599 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 147, length 64
01:57:26.723161 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 147, length 64
01:57:27.737269 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 52:54:00:01:fa:0a (oui Unknown), length 286
01:57:27.769125 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 148, length 64
01:57:27.769796 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 148, length 64
01:57:27.850081 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8005, length 35
^C
9 packets captured
9 packets received by filter
0 packets dropped by kernel
┌──[root@servera.lab.example.com]-[~]
└─$
teamdctl team0 state confirms it: the active port is now eth2, and the eth1 port is no longer part of the team interface:
┌──[root@servera.lab.example.com]-[~]
└─$teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth2
┌──[root@servera.lab.example.com]-[~]
└─$
Managing a network team
The team's connection profiles live in /etc/sysconfig/network-scripts, one for the team interface and one for each port interface:
┌──[root@servera.lab.example.com]-[~]
└─$cd /etc/sysconfig/network-scripts/
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$ls
ifcfg-eth0-static  ifcfg-team0  ifcfg-team0-port1  ifcfg-team0-port2  ifcfg-Wired_connection_1
View the ifcfg-team0 profile:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$cat ifcfg-team0
TEAM_CONFIG="{ \"runner\": { \"name\": \"activebackup\" } }"
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=team0
UUID=2053fe72-6785-4b16-90f0-256c2bf8c4f3
DEVICE=team0
ONBOOT=yes
DEVICETYPE=Team
IPADDR=192.168.0.100
PREFIX=24
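Note that TEAM_CONFIG stores the teamd JSON as a shell string with escaped double quotes. A small sketch (my own helper, not part of NetworkManager) that pulls it out of such a file's contents and decodes it:

```python
import json
import re

def parse_team_config(ifcfg_text):
    """Extract and decode the TEAM_CONFIG JSON from ifcfg file contents."""
    m = re.search(r'^TEAM_CONFIG="(.*)"$', ifcfg_text, re.MULTILINE)
    if not m:
        return None
    # Undo the shell escaping of the embedded double quotes, then parse.
    return json.loads(m.group(1).replace('\\"', '"'))

ifcfg = r'''TEAM_CONFIG="{ \"runner\": { \"name\": \"activebackup\" } }"
NAME=team0
DEVICETYPE=Team
'''
print(parse_team_config(ifcfg))  # → {'runner': {'name': 'activebackup'}}
```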
And the port profile ifcfg-team0-port1:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$cat ifcfg-team0-port1
NAME=team0-port1
UUID=fd24db64-6f9a-41d3-87a5-79f825731d7f
DEVICE=eth1
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$
When a team interface is created, the runner defaults to roundrobin; use team.runner to choose a different one.
「Changing the team configuration on the command line」
nmcli con mod CONN_NAME team.config JSON-configuration-file-or-string
For the format of JSON-configuration-file-or-string, see the EXAMPLES section of man teamd.conf.
The fields that can be modified:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$nmcli con modify team0 team.
team.config                  team.runner                    team.runner-sys-prio
team.link-watchers           team.runner-active             team.runner-tx-balancer
team.mcast-rejoin-count      team.runner-agg-select-policy  team.runner-tx-balancer-interval
team.mcast-rejoin-interval   team.runner-fast-rate          team.runner-tx-hash
team.notify-peers-count      team.runner-hwaddr-policy
team.notify-peers-interval   team.runner-min-ports
Modify it with a JSON string:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$nmcli con modify team0 team.config '{ "runner": { "name": "activebackup" } }'
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$nmcli con show team0 | grep run
team.config:                        { "runner": { "name": "activebackup" } }
team.runner:                        activebackup
team.runner-hwaddr-policy:          --
team.runner-tx-hash:                --
team.runner-tx-balancer:            --
team.runner-tx-balancer-interval:   -1 (unset)
team.runner-active:                 yes
team.runner-fast-rate:              no
team.runner-sys-prio:               -1 (unset)
team.runner-min-ports:              -1 (unset)
team.runner-agg-select-policy:      --
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$
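For anything beyond a one-line runner change, it is less error-prone to assemble the JSON programmatically than to hand-escape it. A sketch (helper name is my own) that builds a config dict in the same shape `teamdctl team0 config dump` produces later in this post, with a link watcher per port:

```python
import json

def build_team_config(device, runner, ports, link_watch="ethtool"):
    """Assemble a teamd config dict with one link watcher per port."""
    return {
        "device": device,
        "runner": {"name": runner},
        # Each port gets a link_watch that monitors its link state.
        "ports": {p: {"link_watch": {"name": link_watch}} for p in ports},
    }

cfg = build_team_config("team0", "activebackup", ["eth1", "eth2"])
print(json.dumps(cfg, indent=4))
```

The resulting string can be passed to `nmcli con modify team0 team.config '...'` exactly as shown above.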
The teamdctl command:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl
No team device specified.
teamdctl [options] teamdevname command [command args]
    -h --help                Show this help
    -v --verbose             Increase output verbosity
    -o --oneline             Force output to one line if possible
    -D --force-dbus          Force to use D-Bus interface
    -Z --force-zmq=ADDRESS   Force to use ZeroMQ interface [-Z[Address]]
    -U --force-usock         Force to use UNIX domain socket interface
Commands:
    config dump
    config dump noports
    config dump actual
    state
    state dump
    state view
    state item get ITEMPATH
    state item set ITEMPATH VALUE
    port add PORTDEV
    port remove PORTDEV
    port present PORTDEV
    port config update PORTDEV PORTCONFIG
    port config dump PORTDEV
「Changing the team configuration from a team.conf file」
Dump the current configuration to a file:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 config dump > team.conf
Edit the configuration file (the link_watch entry of each port selects how that NIC's link state is monitored):
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$cat team.conf
{
    "device": "team0",
    "mcast_rejoin": {
        "count": 1
    },
    "notify_peers": {
        "count": 1
    },
    "ports": {
        "eth1": {
            "link_watch": {
                "name": "ethtool"
            }
        },
        "eth2": {
            "link_watch": {
                "name": "ethtool"
            }
        }
    },
    "runner": {
        "name": "activebackup"
    }
}
Load the configuration:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$nmcli con mod team0 team.config team.conf
Activate the team interface:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$nmcli con up team0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/51)
Use teamdctl to operate on the team interface, adding and removing ports:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth2
Remove port eth2:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 port remove eth2
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 state
setup:
  runner: activebackup
runner:
  active port:
Add port eth2 back:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 port add eth2
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth2
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$
Operating on a team with the teamnl command:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamnl
No team device specified.
teamnl [options] teamdevname command [command args]
    -h --help                Show this help
    -p --port_name           team slave port name
    -a --array_index         team option array index
Commands:
    ports
    options
    getoption OPT_NAME
    setoption OPT_NAME OPT_VALUE
    monitor OPT_STYLE
View the port states and IDs:
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamnl team0 port
 4: eth2: up 4294967295Mbit FD
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamnl team0 getoption activeport
4
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$
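The port lines above pair an ifindex with a device name and state, and `getoption activeport` returns the ifindex, so mapping one to the other is a common scripting need. An illustrative sketch (the line format is taken from the output above; the parser itself is my own):

```python
def parse_teamnl_ports(output):
    """Parse `teamnl <dev> port` lines like ' 4: eth2: up 4294967295Mbit FD'
    into {ifindex: (name, state)}."""
    ports = {}
    for line in output.splitlines():
        parts = [p.strip() for p in line.split(":")]
        if len(parts) < 3:
            continue
        ifindex, name = int(parts[0]), parts[1]
        state = parts[2].split()[0]  # 'up' / 'down', dropping speed and duplex
        ports[ifindex] = (name, state)
    return ports

print(parse_teamnl_ports(" 4: eth2: up 4294967295Mbit FD"))
# → {4: ('eth2', 'up')}
```

Combined with the activeport option value (4 above), this resolves the active port back to its device name, eth2.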