redis.conf Translation and Commentary (Part 6) [Redis 6.0.6]


While learning Redis I kept running into redis.conf, so on a whim I decided to work through a translation of it.

Before the source code, there are no secrets.

Table of Contents

Advanced Config

Original

Translation

Active Defragmentation

Original

Translation

Advanced Config

Original

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- good
# -1: max size: 4 Kb   <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
#    etc.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entries limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbound memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such as huge multi/exec requests or alike.
#
# client-query-buffer-limit 1gb

# In the Redis protocol, bulk requests, that are, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here.
#
# proto-max-bulk-len 512mb

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid too many clients are processed for each background task invocation
# in order to avoid latency spikes.
#
# Since the default HZ value by default is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporary raise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used
# as a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

# When redis saves RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes

# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
#   redis-benchmark -n 1000000 incr foo
#   redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# less <= 10).
#
# The default value for the lfu-decay-time is 1. A special value of 0 means to
# decay the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1


Translation

Hashes are encoded with a memory-efficient data structure when they have a small number of entries and the biggest entry does not exceed a given threshold. These thresholds can be configured with the following directives:

hash-max-ziplist-entries 512
hash-max-ziplist-value 64

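A quick way to see these thresholds at work is to check a key's encoding with OBJECT ENCODING. A minimal redis-cli sketch, assuming a local server with the default settings (key and field names are made up for illustration):

127.0.0.1:6379> HSET myhash f1 "short"
(integer) 1
127.0.0.1:6379> OBJECT ENCODING myhash
"ziplist"
127.0.0.1:6379> HSET myhash f2 "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
(integer) 1
127.0.0.1:6379> OBJECT ENCODING myhash
"hashtable"

The second value is longer than the 64-byte hash-max-ziplist-value threshold, so the whole hash is converted to the plain hashtable encoding.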

Lists are also encoded in a special way to save a lot of space. The number of entries allowed per internal list node can be specified either as a fixed maximum size or as a maximum number of elements.

For a fixed maximum size, use -5 through -1, meaning:

# -5: max size: 64 Kb <-- not recommended for normal workloads
# -4: max size: 32 Kb <-- not recommended
# -3: max size: 16 Kb <-- probably not recommended
# -2: max size: 8 Kb  <-- good
# -1: max size: 4 Kb  <-- good


Positive numbers mean each list node stores up to exactly that number of elements.

The highest performing options are usually -2 (8 Kb) or -1 (4 Kb), but if your use case is unusual, adjust the settings as necessary.

list-max-ziplist-size -2

Lists can also be compressed.

Compress depth is the number of quicklist ziplist nodes, counted from each end of the list, to exclude from compression.

The head and tail of the list are always left uncompressed for fast push/pop operations. The settings are:

# 0: disable all list compression
# 1: don't start compressing until 1 node into the list, from either the head or the tail
#    So: [head]->node->node->...->node->[tail]; [head] and [tail] are always uncompressed, while the inner nodes are compressed.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head, head->next, tail->prev, or tail, but compress everything between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
#    and so on.

list-compress-depth 0

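Both list settings can also be changed on a running instance with CONFIG SET; a sketch against a local instance (the values are just the ones discussed above, and in Redis 6.0 lists always report the quicklist top-level encoding):

127.0.0.1:6379> CONFIG SET list-max-ziplist-size -2
OK
127.0.0.1:6379> CONFIG SET list-compress-depth 1
OK
127.0.0.1:6379> RPUSH mylist a b c
(integer) 3
127.0.0.1:6379> OBJECT ENCODING mylist
"quicklist"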

Sets have a special encoding in just one case: when a set is composed entirely of strings that happen to be base-10 integers within the range of 64-bit signed integers.

The following configuration setting sets the limit on the size of the set for this special memory-saving encoding to be used:

set-max-intset-entries 512


Much like hashes and lists, sorted sets also have a special encoding to save a lot of space.

This encoding is only used when the length and elements of a sorted set are below the following limits:

zset-max-ziplist-entries 128
zset-max-ziplist-value 64

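The same OBJECT ENCODING check makes both of these special encodings visible; a sketch (key names are illustrative):

127.0.0.1:6379> SADD nums 1 2 3
(integer) 3
127.0.0.1:6379> OBJECT ENCODING nums
"intset"
127.0.0.1:6379> SADD nums notanumber
(integer) 1
127.0.0.1:6379> OBJECT ENCODING nums
"hashtable"
127.0.0.1:6379> ZADD scores 1 a 2 b
(integer) 2
127.0.0.1:6379> OBJECT ENCODING scores
"ziplist"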

HyperLogLog sparse representation bytes limit. The limit includes the 16-byte header. When a HyperLogLog using the sparse representation crosses this limit, it is converted to the dense representation.

A value greater than 16000 is completely useless, since at that point the dense representation is more memory efficient.

The suggested value is ~3000, in order to get the benefits of the space-efficient encoding without slowing down PFADD too much, which is O(N) with the sparse encoding. The value can be raised to ~10000 when CPU is not a concern but space is, and the data set consists of many HyperLogLogs with cardinality in the 0 - 15000 range.

hll-sparse-max-bytes 3000
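This directive can also be applied at runtime; a sketch of exercising it with a small HyperLogLog (the key name is illustrative, and a three-element HLL stays well under the sparse limit):

127.0.0.1:6379> CONFIG SET hll-sparse-max-bytes 3000
OK
127.0.0.1:6379> PFADD visitors u1 u2 u3
(integer) 1
127.0.0.1:6379> PFCOUNT visitors
(integer) 3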

Streams macro node max size / items.

The stream data structure is a radix tree of big nodes that encode multiple items inside. This configuration controls how big a single node may be in bytes, and the maximum number of items it may contain before a new node is started when appending new stream entries. If either of the following settings is set to zero, that limit is ignored, so it is possible, for instance, to set just a max-entries limit by setting max-bytes to 0 and max-entries to the desired value.

stream-node-max-bytes 4096
stream-node-max-entries 100

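A sketch of these limits in practice, assuming a local instance (the returned entry ID is time-based, so it will differ on every run):

127.0.0.1:6379> CONFIG SET stream-node-max-entries 100
OK
127.0.0.1:6379> XADD mystream * sensor-id 1234
"1626300000000-0"
127.0.0.1:6379> XLEN mystream
(integer) 1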

Active rehashing uses 1 millisecond out of every 100 milliseconds of CPU time to help rehash the main Redis hash table (the one mapping top-level keys to values). The hash table implementation Redis uses (see dict.c) performs lazy rehashing: the more operations you run against a hash table that is being rehashed, the more rehashing "steps" are performed, so if the server is idle the rehashing never completes and the hash table keeps using extra memory.

The default is to use this millisecond 10 times every second to actively rehash the main dictionaries, freeing memory whenever possible.

If unsure:

Use "activerehashing no" if you have hard latency requirements and it is not acceptable in your environment for Redis to occasionally reply to queries with a 2 millisecond delay.

Use "activerehashing yes" if you don't have such hard requirements but want to free memory as soon as possible.

activerehashing yes

The client output buffer limits can be used to force disconnection of clients that, for some reason, are not reading data from the server fast enough (a common reason is that a Pub/Sub client cannot consume messages as fast as the publisher produces them).

The limit can be set differently for the three different classes of clients:

# normal  -> normal clients, including MONITOR clients
# replica -> replica clients
# pubsub  -> clients subscribed to at least one pubsub channel or pattern


The syntax of every client-output-buffer-limit directive is the following:

# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>


A client is disconnected immediately once the hard limit is reached, or if the soft limit is reached and stays reached (continuously) for the specified number of seconds.

So, for example, with a hard limit of 32 megabytes and a soft limit of 16 megabytes / 10 seconds, the client is disconnected immediately if the output buffers reach 32 megabytes, but is also disconnected if it reaches 16 megabytes and continuously exceeds that limit for 10 seconds.

By default, normal clients are not limited, because they do not receive data unsolicited (in a push fashion) but only in reply to a request, so only asynchronous clients can create a scenario where data is requested faster than it can be read.

There is, however, a default limit for pubsub and replica clients, since subscribers and replicas receive data in a push fashion.

Both the hard and the soft limit can be disabled by setting them to zero.

client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

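This directive can also be adjusted without a restart via CONFIG SET, passing the class and the three values as a single quoted argument; a sketch mirroring the pubsub default above:

127.0.0.1:6379> CONFIG SET client-output-buffer-limit "pubsub 32mb 8mb 60"
OK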

Client query buffers accumulate new commands. By default they are limited to a fixed size, to prevent a protocol desynchronization (for instance due to a bug in a client) from leading to unbounded memory usage in the query buffer. You can, however, configure it here if you have very special needs, such as huge MULTI/EXEC requests or the like.

# client-query-buffer-limit 1gb


In the Redis protocol, bulk requests, that is, elements representing single strings, are normally limited to 512 MB, but you can change that limit here.

# proto-max-bulk-len 512mb


Redis calls an internal function to perform many background tasks, such as closing connections of timed-out clients, purging expired keys that are never requested, and so on.

Not all tasks are performed with the same frequency, but Redis checks for tasks to perform according to the specified "hz" value.

By default "hz" is set to 10. Raising the value uses more CPU while Redis is idle, but at the same time makes Redis more responsive when many keys expire at once, and lets timeouts be handled with more precision.

The range is between 1 and 500, but a value above 100 is usually not a good idea. Most users should stick with the default of 10 and only raise it, up to 100, in environments where very low latency is required.

hz 10

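A sketch of inspecting and raising hz at runtime, per the guidance above (raise it to 100 only in genuinely latency-sensitive environments):

127.0.0.1:6379> CONFIG GET hz
1) "hz"
2) "10"
127.0.0.1:6379> CONFIG SET hz 100
OK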

Normally it is useful to have an HZ value proportional to the number of connected clients. This helps, for instance, to avoid processing too many clients per background task invocation, which in turn avoids latency spikes.

Since the default HZ value is conservatively set to 10, Redis offers, and enables by default, an adaptive HZ value that temporarily rises when there are many connected clients.

When dynamic HZ is enabled, the configured HZ is used as a baseline, but multiples of that value are actually used as needed as more clients connect. In this way an idle instance uses very little CPU time, while a busy instance is more responsive.

dynamic-hz yes


When a child process rewrites the AOF file, if the following option is enabled the file is fsync-ed every 32 MB of data generated. This is useful to commit the file to disk more incrementally and avoid big latency spikes.

aof-rewrite-incremental-fsync yes

Likewise, when Redis saves an RDB file, if the following option is enabled the file is fsync-ed every 32 MB of data generated, for the same reason.

rdb-save-incremental-fsync yes


Redis LFU eviction (see the maxmemory setting) can be tuned. However, it is a good idea to start with the default settings and only change them after investigating how to improve performance and how the keys' LFU values change over time, which can be inspected via the OBJECT FREQ command.

There are two tunable parameters in the Redis LFU implementation: the counter logarithm factor and the counter decay time. It is important to understand what these two parameters mean before changing them.

The LFU counter is just 8 bits per key, so its maximum value is 255, and Redis uses a probabilistic increment with logarithmic behavior. Given the old counter value, when a key is accessed the counter is incremented as follows:

1. A random number R between 0 and 1 is extracted.
2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
3. The counter is incremented only if R < P.

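To make the formula concrete: with the default lfu-log-factor of 10 and a counter at its initial value of 5 (see NOTE 2 below), P = 1/(5*10+1) = 1/51 ≈ 0.02, so roughly one access in fifty bumps the counter. Once the counter reaches 100, P falls to 1/1001, which is why the table below needs millions of hits to approach the 255 ceiling.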

The default lfu-log-factor is 10. This table shows how the frequency counter changes with different numbers of accesses under different logarithmic factors:

# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+


NOTE: The table above was obtained by running the following commands:

# redis-benchmark -n 1000000 incr foo
# redis-cli object freq foo


NOTE 2: The counter's initial value is 5, to give new objects a chance to accumulate hits.

The counter decay time is the time, in minutes, that must elapse for a key's counter to be divided by two (or decremented, if its value is <= 10).

The default value for lfu-decay-time is 1. The special value 0 means the counter is decayed every time it happens to be scanned.
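One caveat when trying the commands from the NOTE above: OBJECT FREQ only answers when an LFU maxmemory policy is active, otherwise it returns an error. A sketch from a shell, with benchmark output elided (per the table, the final number should land near the factor-10 / 1M-hits cell):

$ redis-cli config set maxmemory-policy allkeys-lfu
OK
$ redis-benchmark -n 1000000 incr foo
$ redis-cli object freq foo
(integer) 142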

Active Defragmentation

Original

########################### ACTIVE DEFRAGMENTATION #######################
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing to reclaim back memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
#    to use the copy of Jemalloc we ship with the source code of Redis.
#    This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
#    issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
#    needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.

# Enabled active defragmentation
# activedefrag no

# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10

# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100

# Minimal effort for defrag in CPU percentage, to be used when the lower
# threshold is reached
# active-defrag-cycle-min 1

# Maximal effort for defrag in CPU percentage, to be used when the upper
# threshold is reached
# active-defrag-cycle-max 25

# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# active-defrag-max-scan-fields 1000

# Jemalloc background thread for purging will be enabled by default
jemalloc-bg-thread yes

# It is possible to pin different threads and processes of Redis to specific
# CPUs in your system, in order to maximize the performances of the server.
# This is useful both in order to pin different Redis threads in different
# CPUs, but also in order to make sure that multiple Redis instances running
# in the same host will be pinned to different CPUs.
#
# Normally you can do this using the "taskset" command, however it is also
# possible to this via Redis configuration directly, both in Linux and FreeBSD.
#
# You can pin the server/IO threads, bio threads, aof rewrite child process, and
# the bgsave child process. The syntax to specify the cpu list is the same as
# the taskset command:
#
# Set redis server/io threads to cpu affinity 0,2,4,6:
# server_cpulist 0-7:2
#
# Set bio threads to cpu affinity 1,3:
# bio_cpulist 1,3
#
# Set aof rewrite child process to cpu affinity 8,9,10,11:
# aof_rewrite_cpulist 8-11
#
# Set bgsave child process to cpu affinity 1,10,11
# bgsave_cpulist 1,10-11


Translation

What is active defragmentation?

Active (online) defragmentation allows a Redis server to compact the space left between small allocations and deallocations of data in memory, thus allowing memory to be reclaimed.

Fragmentation is a natural process that happens with every allocator (though fortunately less so with Jemalloc) and with certain workloads. Normally a server restart is needed to lower fragmentation, or at least to flush away all the data and recreate it. Thanks to this feature, implemented by Oran Agra for Redis 4.0, the process can instead happen at runtime, in a "hot" way, while the server is running.

Basically, when fragmentation exceeds a certain level (see the configuration options below), Redis starts creating new copies of the values in contiguous memory regions by exploiting certain specific Jemalloc features (to understand whether an allocation is causing fragmentation and to allocate it in a better place), while at the same time releasing the old copies of the data. This process, repeated incrementally for all keys, causes fragmentation to drop back to normal values.

Important things to understand:

1. This feature is disabled by default, and only works if you compiled Redis to use the copy of Jemalloc shipped with the Redis source code. This is the default for Linux builds.
2. You never need to enable this feature if you don't have fragmentation issues.
3. Once you experience fragmentation, you can enable this feature when needed with the command "CONFIG SET activedefrag yes".


The configuration parameters fine-tune the behavior of the defragmentation process. If you are not sure what they mean, it is a good idea to leave the defaults untouched.

# Enabled active defragmentation
# activedefrag no

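Per point 3 above, enabling it at runtime is a one-liner; note it fails with an error if the Redis binary was not compiled against the bundled Jemalloc. A sketch, followed by a check of the fragmentation metrics in INFO (the ratio shown is illustrative):

$ redis-cli config set activedefrag yes
OK
$ redis-cli info memory | grep fragmentation
mem_fragmentation_ratio:1.03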

It is possible to pin different Redis threads and processes to specific CPUs in your system, in order to maximize server performance.

This is useful both for pinning different Redis threads to different CPUs, and for making sure that multiple Redis instances running on the same host are pinned to different CPUs.

Normally you can do this with the "taskset" command, but it can also be done directly via the Redis configuration, on both Linux and FreeBSD.

You can pin the server/IO threads, the bio threads, the AOF rewrite child process, and the bgsave child process. The syntax for specifying the CPU list is the same as for the taskset command:

# Set redis server/io threads to cpu affinity 0,2,4,6:
# server_cpulist 0-7:2
#
# Set bio threads to cpu affinity 1,3:
# bio_cpulist 1,3
#
# Set aof rewrite child process to cpu affinity 8,9,10,11:
# aof_rewrite_cpulist 8-11
#
# Set bgsave child process to cpu affinity 1,10,11
# bgsave_cpulist 1,10-11

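For comparison, the one-off equivalent with the taskset command mentioned above, pinning the whole server to CPUs 0, 2, 4 and 6 at launch (the config file path is a placeholder):

$ taskset -c 0,2,4,6 redis-server /path/to/redis.conf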

That wraps things up for now.


