Distributed Zabbix Deployment
1. Topology
All servers share the same hardware specification: CentOS 7 with 4 CPU cores and 8 GB RAM. The environment needs to monitor roughly 180,000-190,000 items.
Component overview:
Web server - the frontend; Zabbix can use either nginx or apache, with apache as the default.
Zabbix server - the main process; it manages all components and handles scheduling.
Zabbix agent - an add-on installed on each monitored server; it collects data and reports it to the server or to a proxy (a minimal agent configuration pointing at the proxy is sketched below).
Zabbix proxy - offloads collection work from the server.
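For reference, a minimal sketch of how an agent on a monitored host could be pointed at the proxy instead of the server; this is not part of the original notes, and the Hostname value is a placeholder that must match the host name configured in the frontend:
vim /etc/zabbix/zabbix_agentd.conf
Server=10.217.37.188        # passive checks are accepted from the proxy
ServerActive=10.217.37.188  # active checks are sent to the proxy
Hostname=web-01             # placeholder; must match the host name in the frontend
systemctl restart zabbix-agent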
2. TimescaleDB database setup (10.217.37.146)
Disable SELinux (edit the SELinux config file; a reboot is needed for it to take effect):
vim /etc/selinux/config
SELINUX=disabled
Manage firewalld (an alternative that keeps firewalld running is sketched below):
firewall-cmd --state         # check status
service firewalld start      # start
service firewalld restart    # restart
service firewalld stop       # stop
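If firewalld is left enabled instead of being stopped, only the ports each role actually needs have to be opened. A sketch for this topology, assuming the default ports (5432 for PostgreSQL on the database host, 10051 for server/proxy, 10050 for agents, 80 for the web frontend):
firewall-cmd --permanent --add-port=5432/tcp
firewall-cmd --permanent --add-port=10050/tcp
firewall-cmd --permanent --add-port=10051/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload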
sudo yum install -y https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
Add the TimescaleDB repository and install TimescaleDB for PostgreSQL 11:
sudo tee /etc/yum.repos.d/timescale_timescaledb.repo <<EOL
[timescale_timescaledb]
name=timescale_timescaledb
baseurl=https://packagecloud.io/timescale/timescaledb/el/7/\$basearch
repo_gpgcheck=1
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/timescale/timescaledb/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
EOL
sudo yum install -y timescaledb-postgresql-11
/usr/pgsql-11/bin/postgresql-11-setup initdb
systemctl enable postgresql-11 && systemctl start postgresql-11
Change the password of the default postgres user:
su postgres
psql
ALTER USER postgres WITH PASSWORD 'xxxx';
Configure the listen address and port:
vim /var/lib/pgsql/11/data/postgresql.conf
listen_addresses = '*'
port = 5432
Allow connections to the database from any IP:
vim /var/lib/pgsql/11/data/pg_hba.conf
host    all    all    0.0.0.0/0    md5
Tune the database. TimescaleDB ships a tool, timescaledb-tune, that adjusts settings to the host automatically; answering yes to everything is fine:
su - postgres
timescaledb-tune --pg-config=/usr/pgsql-11/bin/pg_config
systemctl restart postgresql-11
Create the zabbix database and user, and give the user full and remote privileges:
sudo -u postgres psql
create user zabbix with password 'xxxx';
create database zabbix owner zabbix;
grant all privileges on database zabbix to zabbix;
ALTER ROLE zabbix CREATEROLE SUPERUSER;
Check that the database was created (inside psql):
\l
Add an OS user and password:
sudo adduser zabbix
sudo passwd zabbix
Download the Zabbix source tarball and import the SQL schema it ships (downloads on the server were slow, so the tarball was fetched on a PC and uploaded with lrzsz):
wget https://ayera.dl.sourceforge.net/project/zabbix/ZABBIX%20Latest%20Stable/4.4.4/zabbix-4.4.4.tar.gz
yum install lrzsz -y
tar -xzvf zabbix-4.4.4.tar.gz
cd /usr/local/zabbix-4.4.4/database/postgresql
echo "CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;" | sudo -u postgres psql zabbix
cat schema.sql | sudo -u postgres psql zabbix
cat images.sql | sudo -u postgres psql zabbix
cat data.sql | sudo -u postgres psql zabbix
cat timescaledb.sql | sudo -u postgres psql zabbix
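As a quick sanity check (not part of the original notes), the imported schema can be listed straight from the shell with psql:
sudo -u postgres psql zabbix -c '\dt'
sudo -u postgres psql zabbix -c "SELECT count(*) FROM information_schema.tables WHERE table_schema = 'public';"
The exact table count depends on the Zabbix version, but an empty result here means the import did not run.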
3. Zabbix server / web / agent setup (10.206.230.146)
Disable SELinux and manage firewalld as on the database host:
vim /etc/selinux/config
SELINUX=disabled
firewall-cmd --state
service firewalld start      # start
service firewalld restart    # restart
service firewalld stop       # stop
Install the Zabbix 4.4 repository and packages:
rpm -Uvh https://repo.zabbix.com/zabbix/4.4/rhel/7/x86_64/zabbix-release-4.4-1.el7.noarch.rpm
yum clean all
yum -y install zabbix-server-pgsql zabbix-web-pgsql zabbix-agent
Configure the server:
vim /etc/zabbix/zabbix_server.conf
DBHost=10.217.37.146     # database IP
DBName=zabbix            # database name
DBUser=zabbix            # database user
DBPassword=xxxx          # database password
DBPort=5432              # database port
# Pre-forked poller processes that pull items from agents; more pollers mean higher server throughput but also higher resource usage
StartPollers=500
# Pre-forked pollers that read host hardware status over IPMI
StartIPMIPollers=10
# Number of preprocessing worker processes
StartPreprocessors=3
# Pollers dedicated to unreachable hosts
StartPollersUnreachable=20
# Trappers that receive and store data actively sent in by hosts
StartTrappers=15
# Pingers that check host availability with ICMP ping
StartPingers=20
# Discoverers that actively scan the network for new hosts; discovery is bandwidth-hungry.
# Consider raising this if a single proxy manages more than 500 machines (agent scenario only)
StartDiscoverers=50
# Timer processes, used for time-related trigger and alert processing
StartTimers=10
# Processes that work through escalations (the alert event table)
StartEscalators=10
# Alerter processes; all media types are handled in parallel and each media type defines its own session limit,
# but the total number of alerters is capped by StartAlerters. Alerts from the same trigger are processed serially
StartAlerters=3
# Shared memory for the configuration cache (hosts, items, triggers); size it to the number of hosts and items
CacheSize=4G
# How often, in seconds, Zabbix refreshes the configuration cache
CacheUpdateFrequency=180
# DB syncer threads that flush collected data from the cache into the database;
# higher values write faster but put more I/O load on the database server
StartDBSyncers=10
# Shared memory for raw history data; the larger it is, the lower the database read pressure
HistoryCacheSize=512M
# History index cache
HistoryIndexCacheSize=128M
# Shared memory for computed trend data; also reduces database read pressure to some extent
TrendCacheSize=128M
# Value cache size; 0 disables the cache
ValueCacheSize=128M
# Timeout, in seconds, when talking to agents, SNMP devices and other external checks;
# raise it if collected data is incomplete, the network is busy, or host availability flaps in the frontend
Timeout=60
# How long before an unreachable host is considered unavailable
UnreachablePeriod=180
# How often to recheck a host while it is unavailable
UnavailableDelay=60
# Server-side slow database query threshold, in milliseconds; adjust if slow queries need to be tracked
LogSlowQueries=3000
# Processes that communicate with proxies; raise this if there are many proxies
StartProxyPollers=1
# How often the server pushes configuration data to passive proxies, in seconds
ProxyConfigFrequency=300
# How often the server requests collected data from passive proxies, in seconds
ProxyDataFrequency=1
systemctl start zabbix-server && systemctl enable zabbix-server
systemctl status zabbix-server
Configure PHP for the frontend:
vim /etc/httpd/conf.d/zabbix.conf
php_value date.timezone Asia/Shanghai
php_value memory_limit 2048M
vim /etc/php.ini
memory_limit = 2048M
Start the Zabbix server, agent and httpd and enable them at boot:
systemctl restart zabbix-server zabbix-agent httpd
systemctl enable zabbix-server zabbix-agent httpd
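Once the services are running, it can help to tail the server log and confirm the database connection succeeded before moving on; the path below is the default used by the RPM packages, so adjust it if LogFile was changed:
tail -f /var/log/zabbix/zabbix_server.log
The web installer is then normally reachable at http://10.206.230.146/zabbix and should be pointed at the TimescaleDB host 10.217.37.146 with the zabbix database credentials created earlier.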
4. Zabbix proxy setup (10.217.37.188)
Disable SELinux and manage firewalld as on the other hosts:
vim /etc/selinux/config
SELINUX=disabled
firewall-cmd --state
service firewalld start      # start
service firewalld restart    # restart
service firewalld stop       # stop
Install MySQL 5.7:
wget http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
yum -y install mysql57-community-release-el7-10.noarch.rpm
yum -y install mysql-community-server
systemctl start mysqld.service    # the database must be initialized; the data files are created on first start
systemctl status mysqld.service   # "Active: active (running)" means it is healthy
systemctl restart mysqld.service
systemctl enable mysqld.service
MySQL generates a default root password; find it in the log and log in with it:
grep "password" /var/log/mysqld.log
mysql -u root -p
Change the root password ('xxxx' stands for the new password):
ALTER USER 'root'@'localhost' IDENTIFIED BY 'xxxx';
Create the databases zabbix and zabbix_proxy with utf8 encoding:
create database zabbix character set utf8 collate utf8_bin;
create database zabbix_proxy character set utf8 collate utf8_bin;
Note: a database can be removed with drop database zabbix;
Check the result:
show databases;
Create the zabbix_proxy user with a password and remote access; the host after @ controls which IPs may connect (here the Zabbix server, localhost and the proxy itself):
grant all privileges on zabbix_proxy.* to 'zabbix_proxy'@'10.206.230.146' identified by 'xxxx';
grant all privileges on zabbix_proxy.* to 'zabbix_proxy'@'127.0.0.1' identified by 'xxxx';
grant all privileges on zabbix_proxy.* to 'zabbix_proxy'@'10.217.37.188' identified by 'xxxx';
grant all privileges on zabbix_proxy.* to 'root'@'localhost' identified by 'xxxx';
systemctl restart mysqld
Install zabbix-proxy and import its schema (schema.sql.gz) into the zabbix_proxy database:
rpm -Uvh https://repo.zabbix.com/zabbix/4.4/rhel/7/x86_64/zabbix-release-4.4-1.el7.noarch.rpm
yum clean all
yum -y install zabbix-proxy-mysql
systemctl restart zabbix-proxy
systemctl enable zabbix-proxy
find / -name schema.sql.gz
cd /usr/share/doc/zabbix-proxy-mysql-4.4.4
zcat schema.sql.gz | mysql -h 127.0.0.1 -u zabbix_proxy -p zabbix_proxy
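The original notes never show the proxy daemon configuration itself, so here is a minimal sketch of /etc/zabbix/zabbix_proxy.conf for an active proxy, assuming the addresses and database credentials used above (the Hostname value is a placeholder and must match the proxy name registered in the frontend):
vim /etc/zabbix/zabbix_proxy.conf
Server=10.206.230.146     # Zabbix server this proxy reports to
Hostname=zbx-proxy-01     # placeholder; must match the proxy name configured in the frontend
DBHost=127.0.0.1
DBName=zabbix_proxy
DBUser=zabbix_proxy
DBPassword=xxxx
ConfigFrequency=300       # how often an active proxy pulls configuration from the server
systemctl restart zabbix-proxy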
Some tables grow very large over time and become slow to query, so they are split into partitions. The big Zabbix tables are history, history_log, history_str, history_text, history_uint, trends and trends_uint.
Reference: https://www.zabbix.org/wiki/Docs/howto/mysql_partitioning
There are several partitioning methods; this article uses range partitioning. Partitions can be managed either from inside MySQL or with an external script; the wiki recommends an external script driven by a scheduled job.
As root, check whether the current MySQL version supports partitioning (YES or ACTIVE means supported; recent versions all do):
SELECT plugin_status FROM information_schema.plugins WHERE plugin_name = 'partition';
As root, check in the zabbix database whether the event scheduler is enabled and turn it on:
SHOW GLOBAL VARIABLES LIKE 'event_scheduler';
SET GLOBAL event_scheduler = ON;    # enable it for the running instance
vim /etc/my.cnf
event_scheduler=ON
Look up the smallest clock value in the table to be partitioned, using history_uint as an example:
select from_unixtime(min(clock)) from history_uint;
Create the script with the partitioning stored procedures:
vim partition.sql
DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME varchar(64), TABLENAME varchar(64), PARTITIONNAME varchar(64), CLOCK int)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table with partitions to potentially delete
           PARTITIONNAME = The name of the partition to create
        */
        /* Verify that the partition does not already exist */
        DECLARE RETROWS INT;
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_description >= CLOCK;

        IF RETROWS = 0 THEN
                /*
                   1. Print a message indicating that a partition was created.
                   2. Create the SQL to create the partition.
                   3. Execute the SQL from #2.
                */
                SELECT CONCAT( "partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" ) AS msg;
                SET @sql = CONCAT( 'ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
                PREPARE STMT FROM @sql;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table with partitions to potentially delete
           DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
        */
        DECLARE done INT DEFAULT FALSE;
        DECLARE drop_part_name VARCHAR(16);

        /*
           Get a list of all the partitions that are older than the date
           in DELETE_BELOW_PARTITION_DATE. All partitions are prefixed with
           a "p", so use SUBSTRING TO get rid of that character.
        */
        DECLARE myCursor CURSOR FOR
                SELECT partition_name
                FROM information_schema.partitions
                WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

        /*
           Create the basics for when we need to drop the partition. Also, create
           @drop_partitions to hold a comma-delimited list of all partitions that
           should be deleted.
        */
        SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
        SET @drop_partitions = "";

        /* Start looping through all the partitions that are too old. */
        OPEN myCursor;
        read_loop: LOOP
                FETCH myCursor INTO drop_part_name;
                IF done THEN
                        LEAVE read_loop;
                END IF;
                SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
        END LOOP;
        IF @drop_partitions != "" THEN
                /*
                   1. Build the SQL to drop all the necessary partitions.
                   2. Run the SQL to drop the partitions.
                   3. Print out the table partitions that were deleted.
                */
                SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
                PREPARE STMT FROM @full_sql;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;

                SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
        ELSE
                /*
                   No partitions are being deleted, so print out "N/A" (Not applicable) to indicate
                   that no changes were made.
                */
                SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
        END IF;
END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
        DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE OLD_PARTITION_NAME VARCHAR(16);
        DECLARE LESS_THAN_TIMESTAMP INT;
        DECLARE CUR_TIME INT;

        CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
        SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));

        SET @__interval = 1;
        create_loop: LOOP
                IF @__interval > CREATE_NEXT_INTERVALS THEN
                        LEAVE create_loop;
                END IF;

                SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
                SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
                IF(PARTITION_NAME != OLD_PARTITION_NAME) THEN
                        CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
                END IF;
                SET @__interval=@__interval+1;
                SET OLD_PARTITION_NAME = PARTITION_NAME;
        END LOOP;

        SET OLDER_THAN_PARTITION_DATE=DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
        CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE RETROWS INT(11);
        DECLARE FUTURE_TIMESTAMP TIMESTAMP;

        /*
         * Check if any partitions exist for the given SCHEMANAME.TABLENAME.
         */
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_name IS NULL;

        /*
         * If partitions do not exist, go ahead and partition the table
         */
        IF RETROWS = 1 THEN
                /*
                 * Take the current date at 00:00:00 and add HOURLYINTERVAL to it. This is the timestamp below which we will store values.
                 * We begin partitioning based on the beginning of a day. This is because we don't want to generate a random partition
                 * that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
                 * end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
                 */
                SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
                SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');

                -- Create the partitioning query
                SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
                SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");

                -- Run the partitioning query
                PREPARE STMT FROM @__PARTITION_SQL;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;

In partition_maintenance_all, 7 is the number of days of data to keep, 1 (hour) is the interval covered by each partition, and 48 is how many partitions to create ahead of time in one run:
DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
        CALL partition_maintenance(SCHEMA_NAME, 'history', 7, 1, 48);
        CALL partition_maintenance(SCHEMA_NAME, 'history_log', 7, 1, 48);
        CALL partition_maintenance(SCHEMA_NAME, 'history_str', 7, 1, 48);
        CALL partition_maintenance(SCHEMA_NAME, 'history_text', 7, 1, 48);
        CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 7, 1, 48);
        CALL partition_maintenance(SCHEMA_NAME, 'trends', 7, 1, 48);
        CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 7, 1, 48);
END$$
DELIMITER ;
Import the stored procedures into the database:
mysql -uroot -p zabbix_proxy < partition.sql
Add a cron job so the maintenance runs every night at 01:01. The zabbix-server-mysql package ships a crontab.txt example under /usr/share/doc/zabbix-server-mysql-4.4.3; adjust it to your environment:
cd /usr/share/doc/zabbix-server-mysql-4.4.3
vim crontab.txt
#zabbix partition_maintenance
01 01 * * * mysql -uroot -p'xxxx' zabbix -e "CALL partition_maintenance_all('zabbix');"
Install the new crontab, list it, and make sure crond comes back after a reboot:
crontab crontab.txt
crontab -l
systemctl start crond.service
systemctl enable crond.service
service crond status
Check in the zabbix database whether partitions are being added automatically, using history_uint as an example:
show create table history_uint;
Check how much space each table in the zabbix database uses:
select table_name, (data_length+index_length)/1024/1024 as total_mb, table_rows from information_schema.tables where table_schema='zabbix';
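Before relying on the cron job, it may be worth invoking the maintenance procedure once by hand and re-checking the partition list; note that the original imports partition.sql into zabbix_proxy while the cron entry targets zabbix, so make sure both point at the database that actually holds the history tables:
mysql -uroot -p'xxxx' zabbix -e "CALL partition_maintenance_all('zabbix');"
mysql -uroot -p'xxxx' zabbix -e "show create table history_uint\G"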
Stop MySQL and tune its configuration; add the following parameters under the [mysqld] section:
service mysqld stop
vim /etc/my.cnf
[mysqld]
# Use a separate tablespace file per InnoDB table
innodb_file_per_table=1
# Zabbix performs large operations on the database, especially when deleting history; if the redo log is too small,
# those operations can fail and roll back. To size it, run "show engine innodb status\G" and compute
# (Log sequence number - Last checkpoint at)/1024/1024; that distance should preferably stay below 0.75 * innodb_log_file_size.
innodb_log_file_size=20M
# InnoDB keeps data pages, index pages, buffered changes and internal structures in the buffer pool. Check:
#   show global status like 'Innodb_buffer_pool_pages_data';
#   show global status like 'Innodb_buffer_pool_pages_total';
#   show global status like 'Innodb_page_size';
# Compute Innodb_buffer_pool_pages_data / Innodb_buffer_pool_pages_total * 100%.
# If the result is > 95%, increase innodb_buffer_pool_size, up to about 75% of physical memory;
# if it is < 95%, shrink it towards Innodb_buffer_pool_pages_data * Innodb_page_size * 1.05 / (1024*1024*1024).
innodb_buffer_pool_size=6G
# Disable symbolic links
symbolic-links=0
# Slow query log file
slow_query_log=/var/log/mysql.slow.log
# back_log is how many connection requests can be queued while MySQL briefly stops answering new ones.
# Once the number of connections reaches max_connections, new requests wait in this queue for a connection to free up;
# requests beyond back_log are refused.
back_log=500
# Database connection timeout
connect_timeout=5
# Join buffer; reduces work on the driven table when joining multiple tables
join_buffer_size=2M
# Governs index handling speed, especially index reads. key_reads / key_read_requests should be as low as possible,
# at least 1:100 and ideally 1:1000 (check with: show global status like 'key_read%';)
key_buffer_size=16M
# Queries taking longer than 1 second are logged as slow
long_query_time=1
# "MySQL: ERROR 1040: Too many connections" either means the load is too high for one server (consider read replicas)
# or max_connections is too small. Check with:
#   show variables like '%max_connections%';
#   show global status like 'Max_used_connections';
# Ideally Max_used_connections / max_connections stays above 10%.
max_connections=1000
# Maximum accepted packet size, set here to 40M
max_allowed_packet=40M
# Security parameter against brute-forcing passwords
max_connect_errors=10
# Controls merged inserts (enabled by default): mysqldump combines rows into one INSERT instead of one INSERT per row; maximum is 16M
net_buffer_length=16M
# When MySQL flushes the log to disk:
#   0 - fastest but least safe: a mysqld crash loses the last second of transactions
#   1 - safest but slowest: on a mysqld or host crash, at most one statement or transaction is lost from the binary log
#   2 - fast and safer than 0: data is only lost if the OS crashes or power fails
# innodb_flush_log_at_trx_commit and sync_binlog together control the disk write policy and data safety;
# setting both to 1 gives the worst write performance, so a common recommendation is
# innodb_flush_log_at_trx_commit=2 with sync_binlog=500 or 1000.
innodb_flush_log_at_trx_commit=2
sync_binlog=1000
# Recommended value
max_join_size=4294967295
# Write-priority control: once a table accumulates this many read locks, MySQL temporarily lowers the read priority so writes can proceed.
# Avoid overly clever, complex statements - long-held read locks block writes. A value of 1 alternates reads and writes.
# (check with: show status like "innodb_row_lock%";)
max_write_lock_count=1
# Read buffer for sequential table scans; each scanning request gets its own buffer
read_buffer_size=1M
# Maximum size of user-created MEMORY tables. Changing it does not affect existing tables unless they are recreated
# (CREATE TABLE), altered (ALTER TABLE) or truncated; a service restart also applies the global value.
# Together with tmp_table_size it limits internal in-memory temporary tables. Left at these values for now.
max_heap_table_size=256M
tmp_table_size=1048576
max_tmp_tables=32
# Bytes used when sorting BLOB or TEXT values (only the first max_sort_length bytes of each value are used; the rest are ignored)
max_sort_length=1024
# Store table and database names on disk using the case given in CREATE TABLE / CREATE DATABASE;
# name comparison is case sensitive (default 0)
lower_case_table_names=0
# INSERT DELAYED settings: asynchronous inserts, a MySQL extension to standard SQL introduced in 3.22.15,
# deprecated in 5.6 and ignored (though still parsed) in 5.7
delayed_insert_timeout=300
delayed_insert_limit=100
delayed_queue_size=5000
max_delayed_threads=20
# If non-zero, all open tables are closed every flush_time seconds (default 0)
flush_time=0
# interactive_timeout applies to interactive connections (mysql client), wait_timeout to non-interactive ones (e.g. JDBC);
# the default for both is 28800
wait_timeout=28800
interactive_timeout=28800
# Stack size allocated to each connection thread, used to hold the client's query and the thread's own state;
# the default is recommended (check with: show variables like 'thread%';)
thread_stack=256K
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
systemctl restart mysqld
Finally, install the Zabbix agent on this host as well:
yum -y install zabbix-agent
systemctl restart zabbix-agent.service
systemctl enable zabbix-agent.service
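As a final end-to-end check (not part of the original notes), the agent on the proxy host can be queried from the server, assuming the zabbix-get utility is installed there and port 10050 is reachable:
yum -y install zabbix-get
zabbix_get -s 10.217.37.188 -p 10050 -k agent.ping    # a reply of 1 means the agent answers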