{"id":1388,"date":"2017-04-27T16:06:25","date_gmt":"2017-04-27T08:06:25","guid":{"rendered":"https:\/\/www.strongd.net\/?p=1388"},"modified":"2017-04-27T16:06:25","modified_gmt":"2017-04-27T08:06:25","slug":"dba%e4%b8%8d%e5%8f%af%e4%b8%8d%e7%9f%a5%e7%9a%84%e6%93%8d%e4%bd%9c%e7%b3%bb%e7%bb%9f%e5%86%85%e6%a0%b8%e5%8f%82%e6%95%b0","status":"publish","type":"post","link":"https:\/\/www.strongd.net\/?p=1388","title":{"rendered":"DBA\u4e0d\u53ef\u4e0d\u77e5\u7684\u64cd\u4f5c\u7cfb\u7edf\u5185\u6838\u53c2\u6570"},"content":{"rendered":"<h2>\u80cc\u666f<\/h2>\n<p>\u64cd\u4f5c\u7cfb\u7edf\u4e3a\u4e86\u9002\u5e94\u66f4\u591a\u7684\u786c\u4ef6\u73af\u5883\uff0c\u8bb8\u591a\u521d\u59cb\u7684\u8bbe\u7f6e\u503c\uff0c\u5bbd\u5bb9\u5ea6\u90fd\u5f88\u9ad8\u3002<\/p>\n<p>\u5982\u679c\u4e0d\u7ecf\u8c03\u6574\uff0c\u8fd9\u4e9b\u503c\u53ef\u80fd\u65e0\u6cd5\u9002\u5e94HPC\uff0c\u6216\u8005\u786c\u4ef6\u7a0d\u597d\u4e9b\u7684\u73af\u5883\u3002<\/p>\n<p>\u65e0\u6cd5\u53d1\u6325\u66f4\u597d\u7684\u786c\u4ef6\u6027\u80fd\uff0c\u751a\u81f3\u53ef\u80fd\u5f71\u54cd\u67d0\u4e9b\u5e94\u7528\u8f6f\u4ef6\u7684\u4f7f\u7528\uff0c\u7279\u522b\u662f\u6570\u636e\u5e93\u3002<\/p>\n<h2><a id=\"user-content-\u6570\u636e\u5e93\u5173\u5fc3\u7684os\u5185\u6838\u53c2\u6570\" class=\"anchor\" href=\"https:\/\/github.com\/digoal\/blog\/blob\/master\/201608\/20160803_01.md#%E6%95%B0%E6%8D%AE%E5%BA%93%E5%85%B3%E5%BF%83%E7%9A%84os%E5%86%85%E6%A0%B8%E5%8F%82%E6%95%B0\"><\/a>\u6570\u636e\u5e93\u5173\u5fc3\u7684OS\u5185\u6838\u53c2\u6570<\/h2>\n<p>512GB \u5185\u5b58\u4e3a\u4f8b<\/p>\n<p>1.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>fs.aio-max-nr  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7       \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>aio-nr &amp; aio-max-nr:    \r\n.  \r\naio-nr is the running total of the number of events specified on the    \r\nio_setup system call for all currently active aio contexts.    \r\n.  
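\r\n# Compare current usage against the limit (inspection example):  \r\n# cat \/proc\/sys\/fs\/aio-nr  \r\n# cat \/proc\/sys\/fs\/aio-max-nr  \r\n.  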
\r\nIf aio-nr reaches aio-max-nr then io_setup will fail with EAGAIN.    \r\n.  \r\nNote that raising aio-max-nr does not result in the pre-allocation or re-sizing    \r\nof any kernel data structures.    \r\n.  \r\naio-nr &amp; aio-max-nr:    \r\n.  \r\naio-nr shows the current system-wide number of asynchronous io requests.    \r\n.  \r\naio-max-nr allows you to change the maximum value aio-nr can grow to.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>fs.aio-max-nr = 1xxxxxx  \r\n.  \r\nPostgreSQL, Greenplum \u5747\u672a\u4f7f\u7528io_setup\u521b\u5efaaio contexts. \u65e0\u9700\u8bbe\u7f6e\u3002    \r\n\u5982\u679cOracle\u6570\u636e\u5e93\uff0c\u8981\u4f7f\u7528aio\u7684\u8bdd\uff0c\u9700\u8981\u8bbe\u7f6e\u5b83\u3002    \r\n\u8bbe\u7f6e\u5b83\u4e5f\u6ca1\u4ec0\u4e48\u574f\u5904\uff0c\u5982\u679c\u5c06\u6765\u9700\u8981\u9002\u5e94\u5f02\u6b65IO\uff0c\u53ef\u4ee5\u4e0d\u9700\u8981\u91cd\u65b0\u4fee\u6539\u8fd9\u4e2a\u8bbe\u7f6e\u3002   \r\n<\/code><\/pre>\n<p>2.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>fs.file-max  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7       \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>file-max &amp; file-nr:    \r\n.  \r\nThe value in file-max denotes the maximum number of file handles that the Linux kernel will allocate.   \r\n.  \r\nWhen you get lots of error messages about running out of file handles,   \r\nyou might want to increase this limit.    \r\n.  \r\nHistorically, the kernel was able to allocate file handles dynamically,   \r\nbut not to free them again.     \r\n.  \r\nThe three values in file-nr denote :      \r\nthe number of allocated file handles ,     \r\nthe number of allocated but unused file handles ,     \r\nthe maximum number of file handles.     \r\n.  
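\r\n# Inspect the three file-nr values described above (inspection example):  \r\n# cat \/proc\/sys\/fs\/file-nr  \r\n.  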
\r\nLinux 2.6 always reports 0 as the number of free    \r\nfile handles -- this is not an error, it just means that the    \r\nnumber of allocated file handles exactly matches the number of    \r\nused file handles.    \r\n.  \r\nAttempts to allocate more file descriptors than file-max are reported with printk,   \r\nlook for \"VFS: file-max limit &lt;number&gt; reached\".    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>fs.file-max = 7xxxxxxx  \r\n.  \r\nPostgreSQL \u6709\u4e00\u5957\u81ea\u5df1\u7ba1\u7406\u7684VFS\uff0c\u771f\u6b63\u6253\u5f00\u7684FD\u4e0e\u5185\u6838\u7ba1\u7406\u7684\u6587\u4ef6\u6253\u5f00\u5173\u95ed\u6709\u4e00\u5957\u6620\u5c04\u7684\u673a\u5236\uff0c\u6240\u4ee5\u771f\u5b9e\u60c5\u51b5\u4e0d\u9700\u8981\u4f7f\u7528\u90a3\u4e48\u591a\u7684file handlers\u3002     \r\nmax_files_per_process \u53c2\u6570\u3002     \r\n\u5047\u8bbe1GB\u5185\u5b58\u652f\u6491100\u4e2a\u8fde\u63a5\uff0c\u6bcf\u4e2a\u8fde\u63a5\u6253\u5f001000\u4e2a\u6587\u4ef6\uff0c\u90a3\u4e48\u4e00\u4e2aPG\u5b9e\u4f8b\u9700\u8981\u6253\u5f0010\u4e07\u4e2a\u6587\u4ef6\uff0c\u4e00\u53f0\u673a\u5668\u6309512G\u5185\u5b58\u6765\u7b97\u53ef\u4ee5\u8dd1500\u4e2aPG\u5b9e\u4f8b\uff0c\u5219\u9700\u89815000\u4e07\u4e2afile handler\u3002     \r\n\u4ee5\u4e0a\u8bbe\u7f6e\u7ef0\u7ef0\u6709\u4f59\u3002     \r\n<\/code><\/pre>\n<p>3.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>kernel.core_pattern  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7       \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>core_pattern:    \r\n.  \r\ncore_pattern is used to specify a core dumpfile pattern name.    \r\n. max length 128 characters; default value is \"core\"    \r\n. core_pattern is used as a pattern template for the output filename;    \r\n  certain string patterns (beginning with '%') are substituted with    \r\n  their actual values.    \r\n. 
backward compatibility with core_uses_pid:    \r\n        If core_pattern does not include \"%p\" (default does not)    \r\n        and core_uses_pid is set, then .PID will be appended to    \r\n        the filename.    \r\n. corename format specifiers:    \r\n        %&lt;NUL&gt;  '%' is dropped    \r\n        %%      output one '%'    \r\n        %p      pid    \r\n        %P      global pid (init PID namespace)    \r\n        %i      tid    \r\n        %I      global tid (init PID namespace)    \r\n        %u      uid    \r\n        %g      gid    \r\n        %d      dump mode, matches PR_SET_DUMPABLE and    \r\n                \/proc\/sys\/fs\/suid_dumpable    \r\n        %s      signal number    \r\n        %t      UNIX time of dump    \r\n        %h      hostname    \r\n        %e      executable filename (may be shortened)    \r\n        %E      executable path    \r\n        %&lt;OTHER&gt; both are dropped    \r\n. If the first character of the pattern is a '|', the kernel will treat    \r\n  the rest of the pattern as a command to run.  The core dump will be    \r\n  written to the standard input of that program instead of to a file.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>kernel.core_pattern = \/xxx\/core_%e_%u_%t_%s.%p    \r\n.  \r\n\u8fd9\u4e2a\u76ee\u5f55\u8981777\u7684\u6743\u9650\uff0c\u5982\u679c\u5b83\u662f\u4e2a\u8f6f\u94fe\uff0c\u5219\u771f\u5b9e\u76ee\u5f55\u9700\u8981777\u7684\u6743\u9650  \r\nmkdir \/xxx  \r\nchmod 777 \/xxx  \r\n\u7559\u8db3\u591f\u7684\u7a7a\u95f4  \r\n<\/code><\/pre>\n<p>4.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>kernel.sem   \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7       \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>kernel.sem = 4096 2147483647 2147483646 512000    \r\n.  
\r\n4096 \u6bcf\u7ec4\u591a\u5c11\u4fe1\u53f7\u91cf (&gt;=17, PostgreSQL \u6bcf16\u4e2a\u8fdb\u7a0b\u4e00\u7ec4, \u6bcf\u7ec4\u9700\u898117\u4e2a\u4fe1\u53f7\u91cf) ,     \r\n2147483647 \u603b\u5171\u591a\u5c11\u4fe1\u53f7\u91cf (2^31-1 , \u4e14\u5927\u4e8e4096*512000 ) ,     \r\n2147483646 \u6bcf\u4e2asemop()\u8c03\u7528\u652f\u6301\u591a\u5c11\u64cd\u4f5c (2^31-1),     \r\n512000 \u591a\u5c11\u7ec4\u4fe1\u53f7\u91cf (\u5047\u8bbe\u6bcfGB\u652f\u6301100\u4e2a\u8fde\u63a5, 512GB\u652f\u630151200\u4e2a\u8fde\u63a5, \u52a0\u4e0a\u5176\u4ed6\u8fdb\u7a0b, &gt; 51200*2\/16 \u7ef0\u7ef0\u6709\u4f59)     \r\n.  \r\n# sysctl -w kernel.sem=\"4096 2147483647 2147483646 512000\"    \r\n.  \r\n# ipcs -s -l    \r\n  ------ Semaphore Limits --------    \r\nmax number of arrays = 512000    \r\nmax semaphores per array = 4096    \r\nmax semaphores system wide = 2147483647    \r\nmax ops per semop call = 2147483646    \r\nsemaphore max value = 32767    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>kernel.sem = 4096 2147483647 2147483646 512000    \r\n.  \r\n4096\u53ef\u80fd\u80fd\u591f\u9002\u5408\u66f4\u591a\u7684\u573a\u666f, \u6240\u4ee5\u5927\u70b9\u65e0\u59a8\uff0c\u5173\u952e\u662f512000 arrays\u4e5f\u591f\u4e86\u3002    \r\n<\/code><\/pre>\n<p>5.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>kernel.shmall = 107374182    \r\nkernel.shmmax = 274877906944    \r\nkernel.shmmni = 819200    \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7        \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>\u5047\u8bbe\u4e3b\u673a\u5185\u5b58 512GB    \r\n.  
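\r\n# Derivation sketch for the recommended values (512 GiB host, 4 kB pages):  \r\n#   shmmax = 512 GiB \/ 2 = 274877906944 bytes  \r\n#   shmall = 512 GiB * 0.8 \/ 4096 = 107374182 pages  \r\n.  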
\r\nshmmax \u5355\u4e2a\u5171\u4eab\u5185\u5b58\u6bb5\u6700\u5927 256GB (\u4e3b\u673a\u5185\u5b58\u7684\u4e00\u534a\uff0c\u5355\u4f4d\u5b57\u8282)      \r\nshmall \u6240\u6709\u5171\u4eab\u5185\u5b58\u6bb5\u52a0\u8d77\u6765\u6700\u5927 (\u4e3b\u673a\u5185\u5b58\u768480%\uff0c\u5355\u4f4dPAGE)      \r\nshmmni \u4e00\u5171\u5141\u8bb8\u521b\u5efa819200\u4e2a\u5171\u4eab\u5185\u5b58\u6bb5 (\u6bcf\u4e2a\u6570\u636e\u5e93\u542f\u52a8\u9700\u89812\u4e2a\u5171\u4eab\u5185\u5b58\u6bb5\u3002  \u5c06\u6765\u5141\u8bb8\u52a8\u6001\u521b\u5efa\u5171\u4eab\u5185\u5b58\u6bb5\uff0c\u53ef\u80fd\u9700\u6c42\u91cf\u66f4\u5927)     \r\n.  \r\n# getconf PAGE_SIZE    \r\n4096    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>kernel.shmall = 107374182    \r\nkernel.shmmax = 274877906944    \r\nkernel.shmmni = 819200    \r\n.  \r\n9.2\u4ee5\u53ca\u4ee5\u524d\u7684\u7248\u672c\uff0c\u6570\u636e\u5e93\u542f\u52a8\u65f6\uff0c\u5bf9\u5171\u4eab\u5185\u5b58\u6bb5\u7684\u5185\u5b58\u9700\u6c42\u975e\u5e38\u5927\uff0c\u9700\u8981\u8003\u8651\u4ee5\u4e0b\u51e0\u70b9  \r\nConnections:\t(1800 + 270 * max_locks_per_transaction) * max_connections  \r\nAutovacuum workers:\t(1800 + 270 * max_locks_per_transaction) * autovacuum_max_workers  \r\nPrepared transactions:\t(770 + 270 * max_locks_per_transaction) * max_prepared_transactions  \r\nShared disk buffers:\t(block_size + 208) * shared_buffers  \r\nWAL buffers:\t(wal_block_size + 8) * wal_buffers  \r\nFixed space requirements:\t770 kB  \r\n.  
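\r\n# Worked example for the first formula (assumed values: max_connections=1000,  \r\n# max_locks_per_transaction=64): (1800 + 270*64) * 1000 = 19080000 bytes, about 19 MB.  \r\n.  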
\r\n\u4ee5\u4e0a\u5efa\u8bae\u53c2\u6570\u6839\u636e9.2\u4ee5\u524d\u7684\u7248\u672c\u8bbe\u7f6e\uff0c\u540e\u671f\u7684\u7248\u672c\u540c\u6837\u9002\u7528\u3002  \r\n<\/code><\/pre>\n<p>6.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.core.netdev_max_backlog  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7     \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>netdev_max_backlog    \r\n  ------------------    \r\nMaximum number  of  packets,  queued  on  the  INPUT  side,    \r\nwhen the interface receives packets faster than kernel can process them.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.core.netdev_max_backlog=1xxxx    \r\n.  \r\nINPUT\u94fe\u8868\u8d8a\u957f\uff0c\u5904\u7406\u8017\u8d39\u8d8a\u5927\uff0c\u5982\u679c\u7528\u4e86iptables\u7ba1\u7406\u7684\u8bdd\uff0c\u9700\u8981\u52a0\u5927\u8fd9\u4e2a\u503c\u3002    \r\n<\/code><\/pre>\n<p>7.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.core.rmem_default  \r\nnet.core.rmem_max  \r\nnet.core.wmem_default  \r\nnet.core.wmem_max  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7     \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>rmem_default    \r\n  ------------    \r\nThe default setting of the socket receive buffer in bytes.    \r\n.  \r\nrmem_max    \r\n  --------    \r\nThe maximum receive socket buffer size in bytes.    \r\n.  \r\nwmem_default    \r\n  ------------    \r\nThe default setting (in bytes) of the socket send buffer.    \r\n.  \r\nwmem_max    \r\n  --------    \r\nThe maximum send socket buffer size in bytes.    
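\r\n.  \r\n# Inspect the current values (inspection example):  \r\n# sysctl net.core.rmem_max net.core.wmem_max  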
\r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.core.rmem_default = 262144    \r\nnet.core.rmem_max = 4194304    \r\nnet.core.wmem_default = 262144    \r\nnet.core.wmem_max = 4194304    \r\n<\/code><\/pre>\n<p>8.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.core.somaxconn   \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7        \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>somaxconn - INTEGER    \r\n        Limit of socket listen() backlog, known in userspace as SOMAXCONN.    \r\n        Defaults to 128.    \r\n\tSee also tcp_max_syn_backlog for additional tuning for TCP sockets.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.core.somaxconn=4xxx    \r\n<\/code><\/pre>\n<p>9.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.ipv4.tcp_max_syn_backlog  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7         \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>tcp_max_syn_backlog - INTEGER    \r\n        Maximal number of remembered connection requests, which have not    \r\n        received an acknowledgment from connecting client.    \r\n        The minimal value is 128 for low memory machines, and it will    \r\n        increase in proportion to the memory of machine.    \r\n        If server suffers from overload, try increasing this number.    
\r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.ipv4.tcp_max_syn_backlog=4xxx    \r\npgpool-II \u4f7f\u7528\u4e86\u8fd9\u4e2a\u503c\uff0c\u7528\u4e8e\u5c06\u8d85\u8fc7num_init_child\u4ee5\u5916\u7684\u8fde\u63a5queue\u3002     \r\n\u6240\u4ee5\u8fd9\u4e2a\u503c\u51b3\u5b9a\u4e86\u6709\u591a\u5c11\u8fde\u63a5\u53ef\u4ee5\u5728\u961f\u5217\u91cc\u9762\u7b49\u5f85\u3002    \r\n<\/code><\/pre>\n<p>10.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.ipv4.tcp_keepalive_intvl=20    \r\nnet.ipv4.tcp_keepalive_probes=3    \r\nnet.ipv4.tcp_keepalive_time=60     \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7        \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>tcp_keepalive_time - INTEGER    \r\n        How often TCP sends out keepalive messages when keepalive is enabled.    \r\n        Default: 2hours.    \r\n.  \r\ntcp_keepalive_probes - INTEGER    \r\n        How many keepalive probes TCP sends out, until it decides that the    \r\n        connection is broken. Default value: 9.    \r\n.  \r\ntcp_keepalive_intvl - INTEGER    \r\n        How frequently the probes are send out. Multiplied by    \r\n        tcp_keepalive_probes it is time to kill not responding connection,    \r\n        after probes started. Default value: 75sec i.e. connection    \r\n        will be aborted after ~11 minutes of retries.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.ipv4.tcp_keepalive_intvl=20    \r\nnet.ipv4.tcp_keepalive_probes=3    \r\nnet.ipv4.tcp_keepalive_time=60    \r\n.  
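\r\n# Apply at runtime (example; persist in \/etc\/sysctl.conf, then run sysctl -p):  \r\n# sysctl -w net.ipv4.tcp_keepalive_time=60  \r\n# sysctl -w net.ipv4.tcp_keepalive_intvl=20  \r\n# sysctl -w net.ipv4.tcp_keepalive_probes=3  \r\n.  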
\r\n\u8fde\u63a5\u7a7a\u95f260\u79d2\u540e, \u6bcf\u969420\u79d2\u53d1\u5fc3\u8df3\u5305, \u5c1d\u8bd53\u6b21\u5fc3\u8df3\u5305\u6ca1\u6709\u54cd\u5e94\uff0c\u5173\u95ed\u8fde\u63a5\u3002 \u4ece\u5f00\u59cb\u7a7a\u95f2\uff0c\u5230\u5173\u95ed\u8fde\u63a5\u603b\u5171\u5386\u65f6120\u79d2\u3002    \r\n<\/code><\/pre>\n<p>11.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.ipv4.tcp_mem=8388608 12582912 16777216    \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7    \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>tcp_mem - vector of 3 INTEGERs: min, pressure, max    \r\n\u5355\u4f4d page    \r\n        min: below this number of pages TCP is not bothered about its    \r\n        memory appetite.    \r\n.  \r\n        pressure: when amount of memory allocated by TCP exceeds this number    \r\n        of pages, TCP moderates its memory consumption and enters memory    \r\n        pressure mode, which is exited when memory consumption falls    \r\n        under \"min\".    \r\n.  \r\n        max: number of pages allowed for queueing by all TCP sockets.    \r\n.  \r\n        Defaults are calculated at boot time from amount of available    \r\n        memory.    \r\n64GB \u5185\u5b58\uff0c\u81ea\u52a8\u8ba1\u7b97\u7684\u503c\u662f\u8fd9\u6837\u7684    \r\nnet.ipv4.tcp_mem = 1539615      2052821 3079230    \r\n.  \r\n512GB \u5185\u5b58\uff0c\u81ea\u52a8\u8ba1\u7b97\u5f97\u5230\u7684\u503c\u662f\u8fd9\u6837\u7684    \r\nnet.ipv4.tcp_mem = 49621632     66162176        99243264    \r\n.  \r\n\u8fd9\u4e2a\u53c2\u6570\u8ba9\u64cd\u4f5c\u7cfb\u7edf\u542f\u52a8\u65f6\u81ea\u52a8\u8ba1\u7b97\uff0c\u95ee\u9898\u4e5f\u4e0d\u5927  \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.ipv4.tcp_mem=8388608 12582912 16777216    \r\n.  
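\r\n# The unit is pages (4 kB here): 16777216 pages = 64 GiB TCP memory cap (example arithmetic).  \r\n.  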
\r\n\u8fd9\u4e2a\u53c2\u6570\u8ba9\u64cd\u4f5c\u7cfb\u7edf\u542f\u52a8\u65f6\u81ea\u52a8\u8ba1\u7b97\uff0c\u95ee\u9898\u4e5f\u4e0d\u5927  \r\n<\/code><\/pre>\n<p>12.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.ipv4.tcp_fin_timeout  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7        \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>tcp_fin_timeout - INTEGER    \r\n        The length of time an orphaned (no longer referenced by any    \r\n        application) connection will remain in the FIN_WAIT_2 state    \r\n        before it is aborted at the local end.  While a perfectly    \r\n        valid \"receive only\" state for an un-orphaned connection, an    \r\n        orphaned connection in FIN_WAIT_2 state could otherwise wait    \r\n        forever for the remote to close its end of the connection.    \r\n        Cf. tcp_max_orphans    \r\n        Default: 60 seconds    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.ipv4.tcp_fin_timeout=5    \r\n.  \r\n\u52a0\u5feb\u50f5\u5c38\u8fde\u63a5\u56de\u6536\u901f\u5ea6   \r\n<\/code><\/pre>\n<p>13.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.ipv4.tcp_synack_retries  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7         \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>tcp_synack_retries - INTEGER    \r\n        Number of times SYNACKs for a passive TCP connection attempt will    \r\n        be retransmitted. Should not be higher than 255. Default value    \r\n        is 5, which corresponds to 31seconds till the last retransmission    \r\n        with the current initial RTO of 1second. With this the final timeout    \r\n        for a passive TCP connection will happen after 63seconds.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.ipv4.tcp_synack_retries=2    \r\n.  
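\r\n# Rough arithmetic (initial RTO 1s, exponential backoff): retries=2 gives up after  \r\n# 1+2+4 = 7 seconds, versus 63 seconds at the default of 5.  \r\n.  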
\r\n\u7f29\u77edtcp syncack\u8d85\u65f6\u65f6\u95f4  \r\n<\/code><\/pre>\n<p>14.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.ipv4.tcp_syncookies  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7         \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>tcp_syncookies - BOOLEAN    \r\n        Only valid when the kernel was compiled with CONFIG_SYN_COOKIES    \r\n        Send out syncookies when the syn backlog queue of a socket    \r\n        overflows. This is to prevent against the common 'SYN flood attack'    \r\n        Default: 1    \r\n.  \r\n        Note, that syncookies is fallback facility.    \r\n        It MUST NOT be used to help highly loaded servers to stand    \r\n        against legal connection rate. If you see SYN flood warnings    \r\n        in your logs, but investigation shows that they occur    \r\n        because of overload with legal connections, you should tune    \r\n        another parameters until this warning disappear.    \r\n        See: tcp_max_syn_backlog, tcp_synack_retries, tcp_abort_on_overflow.    \r\n.  \r\n        syncookies seriously violate TCP protocol, do not allow    \r\n        to use TCP extensions, can result in serious degradation    \r\n        of some services (f.e. SMTP relaying), visible not by you,    \r\n        but your clients and relays, contacting you. While you see    \r\n        SYN flood warnings in logs not being really flooded, your server    \r\n        is seriously misconfigured.    \r\n.  \r\n        If you want to test which effects syncookies have to your    \r\n        network connections you can set this knob to 2 to enable    \r\n        unconditionally generation of syncookies.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.ipv4.tcp_syncookies=1    \r\n.  
\r\n\u9632\u6b62syn flood\u653b\u51fb   \r\n<\/code><\/pre>\n<p>15.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.ipv4.tcp_timestamps  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7         \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>tcp_timestamps - BOOLEAN    \r\n        Enable timestamps as defined in RFC1323.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.ipv4.tcp_timestamps=1    \r\n.  \r\ntcp_timestamps \u662f tcp \u534f\u8bae\u4e2d\u7684\u4e00\u4e2a\u6269\u5c55\u9879\uff0c\u901a\u8fc7\u65f6\u95f4\u6233\u7684\u65b9\u5f0f\u6765\u68c0\u6d4b\u8fc7\u6765\u7684\u5305\u4ee5\u9632\u6b62 PAWS(Protect Against Wrapped  Sequence numbers)\uff0c\u53ef\u4ee5\u63d0\u9ad8 tcp \u7684\u6027\u80fd\u3002  \r\n<\/code><\/pre>\n<p>16.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.ipv4.tcp_tw_recycle  \r\nnet.ipv4.tcp_tw_reuse  \r\nnet.ipv4.tcp_max_tw_buckets  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7         \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>tcp_tw_recycle - BOOLEAN    \r\n        Enable fast recycling TIME-WAIT sockets. Default value is 0.    \r\n        It should not be changed without advice\/request of technical    \r\n        experts.    \r\n.  \r\ntcp_tw_reuse - BOOLEAN    \r\n        Allow to reuse TIME-WAIT sockets for new connections when it is    \r\n        safe from protocol viewpoint. Default value is 0.    \r\n        It should not be changed without advice\/request of technical    \r\n        experts.    \r\n.  \r\ntcp_max_tw_buckets - INTEGER  \r\n        Maximal number of timewait sockets held by system simultaneously.  \r\n        If this number is exceeded time-wait socket is immediately destroyed  \r\n        and warning is printed.   
\r\n\tThis limit exists only to prevent simple DoS attacks,   \r\n\tyou _must_ not lower the limit artificially,   \r\n        but rather increase it (probably, after increasing installed memory),    \r\n        if network conditions require more than default value.   \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.ipv4.tcp_tw_recycle=0    \r\nnet.ipv4.tcp_tw_reuse=1    \r\nnet.ipv4.tcp_max_tw_buckets = 2xxxxx    \r\n.  \r\nnet.ipv4.tcp_tw_recycle\u548cnet.ipv4.tcp_timestamps\u4e0d\u5efa\u8bae\u540c\u65f6\u5f00\u542f    \r\n<\/code><\/pre>\n<p>17.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.ipv4.tcp_rmem  \r\nnet.ipv4.tcp_wmem  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7         \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>tcp_wmem - vector of 3 INTEGERs: min, default, max    \r\n        min: Amount of memory reserved for send buffers for TCP sockets.    \r\n        Each TCP socket has rights to use it due to fact of its birth.    \r\n        Default: 1 page    \r\n.  \r\n        default: initial size of send buffer used by TCP sockets.  This    \r\n        value overrides net.core.wmem_default used by other protocols.    \r\n        It is usually lower than net.core.wmem_default.    \r\n        Default: 16K    \r\n.  \r\n        max: Maximal amount of memory allowed for automatically tuned    \r\n        send buffers for TCP sockets. This value does not override    \r\n        net.core.wmem_max.  Calling setsockopt() with SO_SNDBUF disables    \r\n        automatic tuning of that socket's send buffer size, in which case    \r\n        this value is ignored.    \r\n        Default: between 64K and 4MB, depending on RAM size.    \r\n.  \r\ntcp_rmem - vector of 3 INTEGERs: min, default, max    \r\n        min: Minimal size of receive buffer used by TCP sockets.    \r\n        It is guaranteed to each TCP socket, even under moderate memory    \r\n        pressure.    
\r\n        Default: 1 page    \r\n.  \r\n        default: initial size of receive buffer used by TCP sockets.    \r\n        This value overrides net.core.rmem_default used by other protocols.    \r\n        Default: 87380 bytes. This value results in window of 65535 with    \r\n        default setting of tcp_adv_win_scale and tcp_app_win:0 and a bit    \r\n        less for default tcp_app_win. See below about these variables.    \r\n.  \r\n        max: maximal size of receive buffer allowed for automatically    \r\n        selected receiver buffers for TCP socket. This value does not override    \r\n        net.core.rmem_max.  Calling setsockopt() with SO_RCVBUF disables    \r\n        automatic tuning of that socket's receive buffer size, in which    \r\n        case this value is ignored.    \r\n        Default: between 87380B and 6MB, depending on RAM size.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.ipv4.tcp_rmem=8192 87380 16777216    \r\nnet.ipv4.tcp_wmem=8192 65536 16777216    \r\n.  \r\n\u8bb8\u591a\u6570\u636e\u5e93\u7684\u63a8\u8350\u8bbe\u7f6e\uff0c\u63d0\u9ad8\u7f51\u7edc\u6027\u80fd  \r\n<\/code><\/pre>\n<p>18.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.nf_conntrack_max  \r\nnet.netfilter.nf_conntrack_max  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6    \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>nf_conntrack_max - INTEGER    \r\n        Size of connection tracking table.    \r\n\tDefault value is nf_conntrack_buckets value * 4.    
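\r\n.  \r\n# Check table usage against the limit (example; needs the nf_conntrack module loaded):  \r\n# cat \/proc\/sys\/net\/netfilter\/nf_conntrack_count  \r\n# cat \/proc\/sys\/net\/netfilter\/nf_conntrack_max  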
\r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.nf_conntrack_max=1xxxxxx    \r\nnet.netfilter.nf_conntrack_max=1xxxxxx    \r\n<\/code><\/pre>\n<p>19.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>vm.dirty_background_bytes   \r\nvm.dirty_expire_centisecs   \r\nvm.dirty_ratio   \r\nvm.dirty_writeback_centisecs   \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7        \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>==============================================================    \r\n.  \r\ndirty_background_bytes    \r\n.  \r\nContains the amount of dirty memory at which the background kernel    \r\nflusher threads will start writeback.    \r\n.  \r\nNote: dirty_background_bytes is the counterpart of dirty_background_ratio. Only    \r\none of them may be specified at a time. When one sysctl is written it is    \r\nimmediately taken into account to evaluate the dirty memory limits and the    \r\nother appears as 0 when read.    \r\n.  \r\n==============================================================    \r\n.  \r\ndirty_background_ratio    \r\n.  \r\nContains, as a percentage of total system memory, the number of pages at which    \r\nthe background kernel flusher threads will start writing out dirty data.    \r\n.  \r\n==============================================================    \r\n.  \r\ndirty_bytes    \r\n.  \r\nContains the amount of dirty memory at which a process generating disk writes    \r\nwill itself start writeback.    \r\n.  \r\nNote: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be    \r\nspecified at a time. When one sysctl is written it is immediately taken into    \r\naccount to evaluate the dirty memory limits and the other appears as 0 when    \r\nread.    \r\n.  \r\nNote: the minimum value allowed for dirty_bytes is two pages (in bytes); any    \r\nvalue lower than this limit will be ignored and the old configuration will be    \r\nretained.    \r\n.  
\r\n==============================================================    \r\n.  \r\ndirty_expire_centisecs    \r\n.  \r\nThis tunable is used to define when dirty data is old enough to be eligible    \r\nfor writeout by the kernel flusher threads.  It is expressed in 100'ths    \r\nof a second.  Data which has been dirty in-memory for longer than this    \r\ninterval will be written out next time a flusher thread wakes up.    \r\n.  \r\n==============================================================    \r\n.  \r\ndirty_ratio    \r\n.  \r\nContains, as a percentage of total system memory, the number of pages at which    \r\na process which is generating disk writes will itself start writing out dirty    \r\ndata.    \r\n.  \r\n==============================================================    \r\n.  \r\ndirty_writeback_centisecs    \r\n.  \r\nThe kernel flusher threads will periodically wake up and write `old' data    \r\nout to disk.  This tunable expresses the interval between those wakeups, in    \r\n100'ths of a second.    \r\n.  \r\nSetting this to zero disables periodic writeback altogether.    \r\n.  \r\n==============================================================    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>vm.dirty_background_bytes = 4096000000    \r\nvm.dirty_expire_centisecs = 6000    \r\nvm.dirty_ratio = 80    \r\nvm.dirty_writeback_centisecs = 50    \r\n.  \r\n\u51cf\u5c11\u6570\u636e\u5e93\u8fdb\u7a0b\u5237\u810f\u9875\u7684\u9891\u7387\uff0cdirty_background_bytes\u6839\u636e\u5b9e\u9645IOPS\u80fd\u529b\u4ee5\u53ca\u5185\u5b58\u5927\u5c0f\u8bbe\u7f6e    \r\n<\/code><\/pre>\n<p>20.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>vm.extra_free_kbytes  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6    \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>extra_free_kbytes    \r\n.  
\r\nThis parameter tells the VM to keep extra free memory   \r\nbetween the threshold where background reclaim (kswapd) kicks in,   \r\nand the threshold where direct reclaim (by allocating processes) kicks in.    \r\n.  \r\nThis is useful for workloads that require low latency memory allocations    \r\nand have a bounded burstiness in memory allocations,   \r\nfor example a realtime application that receives and transmits network traffic    \r\n(causing in-kernel memory allocations) with a maximum total message burst    \r\nsize of 200MB may need 200MB of extra free memory to avoid direct reclaim    \r\nrelated latencies.    \r\n.  \r\n\u76ee\u6807\u662f\u5c3d\u91cf\u8ba9\u540e\u53f0\u8fdb\u7a0b\u56de\u6536\u5185\u5b58\uff0c\u6bd4\u7528\u6237\u8fdb\u7a0b\u63d0\u65e9\u591a\u5c11kbytes\u56de\u6536\uff0c\u56e0\u6b64\u7528\u6237\u8fdb\u7a0b\u53ef\u4ee5\u5feb\u901f\u5206\u914d\u5185\u5b58\u3002    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>vm.extra_free_kbytes=4xxxxxx    \r\n<\/code><\/pre>\n<p>21.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>vm.min_free_kbytes  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7         \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>min_free_kbytes:    \r\n.  \r\nThis is used to force the Linux VM to keep a minimum number    \r\nof kilobytes free.  The VM uses this number to compute a    \r\nwatermark[WMARK_MIN] value for each lowmem zone in the system.    \r\nEach lowmem zone gets a number of reserved free pages based    \r\nproportionally on its size.    \r\n.  \r\nSome minimal amount of memory is needed to satisfy PF_MEMALLOC    \r\nallocations; if you set this to lower than 1024KB, your system will    \r\nbecome subtly broken, and prone to deadlock under high loads.    \r\n.  \r\nSetting this too high will OOM your machine instantly.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>vm.min_free_kbytes = 2xxxxxx    \r\n.  
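\r\n# Sanity check before and after changing it (inspection example):  \r\n# cat \/proc\/sys\/vm\/min_free_kbytes  \r\n# grep MemFree \/proc\/meminfo  \r\n.  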
\r\n\u9632\u6b62\u5728\u9ad8\u8d1f\u8f7d\u65f6\u7cfb\u7edf\u65e0\u54cd\u5e94\uff0c\u51cf\u5c11\u5185\u5b58\u5206\u914d\u6b7b\u9501\u6982\u7387\u3002    \r\n<\/code><\/pre>\n<p>22.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>vm.mmap_min_addr  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7       \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>mmap_min_addr    \r\n.  \r\nThis file indicates the amount of address space  which a user process will    \r\nbe restricted from mmapping.  Since kernel null dereference bugs could    \r\naccidentally operate based on the information in the first couple of pages    \r\nof memory userspace processes should not be allowed to write to them.  By    \r\ndefault this value is set to 0 and no protections will be enforced by the    \r\nsecurity module.  Setting this value to something like 64k will allow the    \r\nvast majority of applications to work correctly and provide defense in depth    \r\nagainst future potential kernel bugs.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>vm.mmap_min_addr=6xxxx    \r\n.  \r\n\u9632\u6b62\u5185\u6838\u9690\u85cf\u7684BUG\u5bfc\u81f4\u7684\u95ee\u9898  \r\n<\/code><\/pre>\n<p>23.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>vm.overcommit_memory   \r\nvm.overcommit_ratio   \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7         \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>==============================================================    \r\n.  \r\novercommit_kbytes:    \r\n.  \r\nWhen overcommit_memory is set to 2, the committed address space is not    \r\npermitted to exceed swap plus this amount of physical RAM. See below.    \r\n.  \r\nNote: overcommit_kbytes is the counterpart of overcommit_ratio. Only one    \r\nof them may be specified at a time. Setting one disables the other (which    \r\nthen appears as 0 when read).    \r\n.  
\r\n==============================================================    \r\n.  \r\novercommit_memory:    \r\n.  \r\nThis value contains a flag that enables memory overcommitment.    \r\n.  \r\nWhen this flag is 0,   \r\nthe kernel attempts to estimate the amount    \r\nof free memory left when userspace requests more memory.    \r\n.  \r\nWhen this flag is 1,   \r\nthe kernel pretends there is always enough memory until it actually runs out.    \r\n.  \r\nWhen this flag is 2,   \r\nthe kernel uses a \"never overcommit\"    \r\npolicy that attempts to prevent any overcommit of memory.    \r\nNote that user_reserve_kbytes affects this policy.    \r\n.  \r\nThis feature can be very useful because there are a lot of    \r\nprograms that malloc() huge amounts of memory \"just-in-case\"    \r\nand don't use much of it.    \r\n.  \r\nThe default value is 0.    \r\n.  \r\nSee Documentation\/vm\/overcommit-accounting and    \r\nsecurity\/commoncap.c::cap_vm_enough_memory() for more information.    \r\n.  \r\n==============================================================    \r\n.  \r\novercommit_ratio:    \r\n.  \r\nWhen overcommit_memory is set to 2,   \r\nthe committed address space is not permitted to exceed   \r\n      swap + this percentage of physical RAM.    \r\nSee above.    \r\n.  \r\n==============================================================    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>vm.overcommit_memory = 0    \r\nvm.overcommit_ratio = 90    \r\n.  \r\nvm.overcommit_memory = 0 \u65f6 vm.overcommit_ratio\u53ef\u4ee5\u4e0d\u8bbe\u7f6e   \r\n<\/code><\/pre>\n<p>24.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>vm.swappiness   \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7         \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>swappiness    \r\n.  \r\nThis control is used to define how aggressive the kernel will swap    \r\nmemory pages.    
\r\nHigher values will increase aggressiveness, lower values    \r\ndecrease the amount of swap.    \r\n.  \r\nThe default value is 60.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>vm.swappiness = 0    \r\n<\/code><\/pre>\n<p>25.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>vm.zone_reclaim_mode   \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7         \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>zone_reclaim_mode:    \r\n.  \r\nZone_reclaim_mode allows someone to set more or less aggressive approaches to    \r\nreclaim memory when a zone runs out of memory. If it is set to zero then no    \r\nzone reclaim occurs. Allocations will be satisfied from other zones \/ nodes    \r\nin the system.    \r\n.  \r\nThis value is ORed together from:    \r\n.  \r\n1       = Zone reclaim on    \r\n2       = Zone reclaim writes dirty pages out    \r\n4       = Zone reclaim swaps pages    \r\n.  \r\nzone_reclaim_mode is disabled by default.  For file servers or workloads    \r\nthat benefit from having their data cached, zone_reclaim_mode should be    \r\nleft disabled as the caching effect is likely to be more important than    \r\ndata locality.    \r\n.  \r\nzone_reclaim may be enabled if it's known that the workload is partitioned    \r\nsuch that each partition fits within a NUMA node and that accessing remote    \r\nmemory would cause a measurable performance reduction.  The page allocator    \r\nwill then reclaim easily reusable pages (those page cache pages that are    \r\ncurrently not used) before allocating off node pages.    \r\n.  \r\nAllowing zone reclaim to write out pages stops processes that are    \r\nwriting large amounts of data from dirtying pages on other nodes. Zone    \r\nreclaim will write out dirty pages if a zone fills up and so effectively    \r\nthrottle the process. 
This may decrease the performance of a single process    \r\nsince it cannot use all of system memory to buffer the outgoing writes    \r\nanymore but it preserves the memory on other nodes so that the performance    \r\nof other processes running on other nodes will not be affected.    \r\n.  \r\nAllowing regular swap effectively restricts allocations to the local    \r\nnode unless explicitly overridden by memory policies or cpuset    \r\nconfigurations.    \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>vm.zone_reclaim_mode=0    \r\n.  \r\n\u4e0d\u4f7f\u7528NUMA  \r\n<\/code><\/pre>\n<p>26.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>net.ipv4.ip_local_port_range  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7         \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>ip_local_port_range - 2 INTEGERS  \r\n        Defines the local port range that is used by TCP and UDP to  \r\n        choose the local port. The first number is the first, the  \r\n        second the last local port number. The default values are  \r\n        32768 and 61000 respectively.  \r\n.  \r\nip_local_reserved_ports - list of comma separated ranges  \r\n        Specify the ports which are reserved for known third-party  \r\n        applications. These ports will not be used by automatic port  \r\n        assignments (e.g. when calling connect() or bind() with port  \r\n        number 0). Explicit port allocation behavior is unchanged.  \r\n.  \r\n        The format used for both input and output is a comma separated  \r\n        list of ranges (e.g. \"1,2-4,10-10\" for ports 1, 2, 3, 4 and  \r\n        10). Writing to the file will clear all previously reserved  \r\n        ports and update the current list with the one given in the  \r\n        input.  \r\n.  
\r\n        Note that ip_local_port_range and ip_local_reserved_ports  \r\n        settings are independent and both are considered by the kernel  \r\n        when determining which ports are available for automatic port  \r\n        assignments.  \r\n.  \r\n        You can reserve ports which are not in the current  \r\n        ip_local_port_range, e.g.:  \r\n.  \r\n        $ cat \/proc\/sys\/net\/ipv4\/ip_local_port_range  \r\n        32000   61000  \r\n        $ cat \/proc\/sys\/net\/ipv4\/ip_local_reserved_ports  \r\n        8080,9148  \r\n.  \r\n        although this is redundant. However such a setting is useful  \r\n        if later the port range is changed to a value that will  \r\n        include the reserved ports.  \r\n.  \r\n        Default: Empty  \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>net.ipv4.ip_local_port_range=40000 65535    \r\n.  \r\n\u9650\u5236\u672c\u5730\u52a8\u6001\u7aef\u53e3\u5206\u914d\u8303\u56f4\uff0c\u9632\u6b62\u5360\u7528\u76d1\u542c\u7aef\u53e3\u3002  \r\n<\/code><\/pre>\n<p>27.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>  vm.nr_hugepages  \r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7  \r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>==============================================================  \r\nnr_hugepages  \r\nChange the minimum size of the hugepage pool.  \r\nSee Documentation\/vm\/hugetlbpage.txt  \r\n==============================================================  \r\nnr_overcommit_hugepages  \r\nChange the maximum size of the hugepage pool. The maximum is  \r\nnr_hugepages + nr_overcommit_hugepages.  \r\nSee Documentation\/vm\/hugetlbpage.txt  \r\n.  \r\nThe output of \"cat \/proc\/meminfo\" will include lines like:  \r\n......  \r\nHugePages_Total: vvv  \r\nHugePages_Free:  www  \r\nHugePages_Rsvd:  xxx  \r\nHugePages_Surp:  yyy  \r\nHugepagesize:    zzz kB  \r\n.  \r\nwhere:  \r\nHugePages_Total is the size of the pool of huge pages.  
\r\nHugePages_Free  is the number of huge pages in the pool that are not yet  \r\n                allocated.  \r\nHugePages_Rsvd  is short for \"reserved,\" and is the number of huge pages for  \r\n                which a commitment to allocate from the pool has been made,  \r\n                but no allocation has yet been made.  Reserved huge pages  \r\n                guarantee that an application will be able to allocate a  \r\n                huge page from the pool of huge pages at fault time.  \r\nHugePages_Surp  is short for \"surplus,\" and is the number of huge pages in  \r\n                the pool above the value in \/proc\/sys\/vm\/nr_hugepages. The  \r\n                maximum number of surplus huge pages is controlled by  \r\n                \/proc\/sys\/vm\/nr_overcommit_hugepages.  \r\n.  \r\n\/proc\/filesystems should also show a filesystem of type \"hugetlbfs\" configured  \r\nin the kernel.  \r\n.  \r\n\/proc\/sys\/vm\/nr_hugepages indicates the current number of \"persistent\" huge  \r\npages in the kernel's huge page pool.  \"Persistent\" huge pages will be  \r\nreturned to the huge page pool when freed by a task.  A user with root  \r\nprivileges can dynamically allocate more or free some persistent huge pages  \r\nby increasing or decreasing the value of 'nr_hugepages'.  \r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>\u5982\u679c\u8981\u4f7f\u7528PostgreSQL\u7684huge page\uff0c\u5efa\u8bae\u8bbe\u7f6e\u5b83\u3002    \r\n\u5927\u4e8e\u6570\u636e\u5e93\u9700\u8981\u7684\u5171\u4eab\u5185\u5b58\u5373\u53ef\u3002    \r\n<\/code><\/pre>\n<p>28.<\/p>\n<p>\u53c2\u6570<\/p>\n<pre><code>  fs.nr_open\r\n<\/code><\/pre>\n<p>\u652f\u6301\u7cfb\u7edf<\/p>\n<pre><code>CentOS 6, 7\r\n<\/code><\/pre>\n<p>\u53c2\u6570\u89e3\u91ca<\/p>\n<pre><code>nr_open:\r\n\r\nThis denotes the maximum number of file-handles a process can\r\nallocate. Default value is 1024*1024 (1048576) which should be\r\nenough for most machines. 
Actual limit depends on RLIMIT_NOFILE\r\nresource limit.\r\n\r\n\u5b83\u8fd8\u5f71\u54cdsecurity\/limits.conf \u7684\u6587\u4ef6\u53e5\u67c4\u9650\u5236\uff0c\u5355\u4e2a\u8fdb\u7a0b\u7684\u6253\u5f00\u53e5\u67c4\u4e0d\u80fd\u5927\u4e8efs.nr_open\uff0c\u6240\u4ee5\u8981\u52a0\u5927\u6587\u4ef6\u53e5\u67c4\u9650\u5236\uff0c\u9996\u5148\u8981\u52a0\u5927nr_open\r\n<\/code><\/pre>\n<p>\u63a8\u8350\u8bbe\u7f6e<\/p>\n<pre><code>\u5bf9\u4e8e\u6709\u5f88\u591a\u5bf9\u8c61\uff08\u8868\u3001\u89c6\u56fe\u3001\u7d22\u5f15\u3001\u5e8f\u5217\u3001\u7269\u5316\u89c6\u56fe\u7b49\uff09\u7684PostgreSQL\u6570\u636e\u5e93\uff0c\u5efa\u8bae\u8bbe\u7f6e\u4e3a2000\u4e07\uff0c\r\n\u4f8b\u5982fs.nr_open=20480000\r\n<\/code><\/pre>\n<h2><a id=\"user-content-\u6570\u636e\u5e93\u5173\u5fc3\u7684\u8d44\u6e90\u9650\u5236\" class=\"anchor\" href=\"https:\/\/github.com\/digoal\/blog\/blob\/master\/201608\/20160803_01.md#%E6%95%B0%E6%8D%AE%E5%BA%93%E5%85%B3%E5%BF%83%E7%9A%84%E8%B5%84%E6%BA%90%E9%99%90%E5%88%B6\"><\/a>\u6570\u636e\u5e93\u5173\u5fc3\u7684\u8d44\u6e90\u9650\u5236<\/h2>\n<p>1. \u901a\u8fc7\/etc\/security\/limits.conf\u8bbe\u7f6e\uff0c\u6216\u8005ulimit\u8bbe\u7f6e<\/p>\n<p>2. \u901a\u8fc7\/proc\/$pid\/limits\u67e5\u770b\u5f53\u524d\u8fdb\u7a0b\u7684\u8bbe\u7f6e<\/p>\n<pre><code>#        - core - limits the core file size (KB)  \r\n#        - memlock - max locked-in-memory address space (KB)  \r\n#        - nofile - max number of open files  \u5efa\u8bae\u8bbe\u7f6e\u4e3a1000\u4e07 , \u4f46\u662f\u5fc5\u987b\u8bbe\u7f6esysctl, fs.nr_open\u5927\u4e8e\u5b83\uff0c\u5426\u5219\u4f1a\u5bfc\u81f4\u7cfb\u7edf\u65e0\u6cd5\u767b\u9646\u3002\r\n#        - nproc - max number of processes  \r\n\u4ee5\u4e0a\u56db\u4e2a\u662f\u975e\u5e38\u5173\u5fc3\u7684\u914d\u7f6e  \r\n....  
\r\n#        - data - max data size (KB)  \r\n#        - fsize - maximum filesize (KB)  \r\n#        - rss - max resident set size (KB)  \r\n#        - stack - max stack size (KB)  \r\n#        - cpu - max CPU time (MIN)  \r\n#        - as - address space limit (KB)  \r\n#        - maxlogins - max number of logins for this user  \r\n#        - maxsyslogins - max number of logins on the system  \r\n#        - priority - the priority to run user process with  \r\n#        - locks - max number of file locks the user can hold  \r\n#        - sigpending - max number of pending signals  \r\n#        - msgqueue - max memory used by POSIX message queues (bytes)  \r\n#        - nice - max nice priority allowed to raise to values: [-20, 19]  \r\n#        - rtprio - max realtime priority  \r\n<\/code><\/pre>\n<h2><a id=\"user-content-\u6570\u636e\u5e93\u5173\u5fc3\u7684io\u8c03\u5ea6\u89c4\u5219\" class=\"anchor\" href=\"https:\/\/github.com\/digoal\/blog\/blob\/master\/201608\/20160803_01.md#%E6%95%B0%E6%8D%AE%E5%BA%93%E5%85%B3%E5%BF%83%E7%9A%84io%E8%B0%83%E5%BA%A6%E8%A7%84%E5%88%99\"><\/a>\u6570\u636e\u5e93\u5173\u5fc3\u7684IO\u8c03\u5ea6\u89c4\u5219<\/h2>\n<p>1. 
\u76ee\u524d\u64cd\u4f5c\u7cfb\u7edf\u652f\u6301\u7684IO\u8c03\u5ea6\u7b56\u7565\u5305\u62eccfq, deadline, noop \u7b49\u3002<\/p>\n<pre><code>\/kernel-doc-xxx\/Documentation\/block  \r\n-r--r--r-- 1 root root   674 Apr  8 16:33 00-INDEX  \r\n-r--r--r-- 1 root root 55006 Apr  8 16:33 biodoc.txt  \r\n-r--r--r-- 1 root root   618 Apr  8 16:33 capability.txt  \r\n-r--r--r-- 1 root root 12791 Apr  8 16:33 cfq-iosched.txt  \r\n-r--r--r-- 1 root root 13815 Apr  8 16:33 data-integrity.txt  \r\n-r--r--r-- 1 root root  2841 Apr  8 16:33 deadline-iosched.txt  \r\n-r--r--r-- 1 root root  4713 Apr  8 16:33 ioprio.txt  \r\n-r--r--r-- 1 root root  2535 Apr  8 16:33 null_blk.txt  \r\n-r--r--r-- 1 root root  4896 Apr  8 16:33 queue-sysfs.txt  \r\n-r--r--r-- 1 root root  2075 Apr  8 16:33 request.txt  \r\n-r--r--r-- 1 root root  3272 Apr  8 16:33 stat.txt  \r\n-r--r--r-- 1 root root  1414 Apr  8 16:33 switching-sched.txt  \r\n-r--r--r-- 1 root root  3916 Apr  8 16:33 writeback_cache_control.txt  \r\n<\/code><\/pre>\n<p>\u5982\u679c\u4f60\u8981\u8be6\u7ec6\u4e86\u89e3\u8fd9\u4e9b\u8c03\u5ea6\u7b56\u7565\u7684\u89c4\u5219\uff0c\u53ef\u4ee5\u67e5\u770bWIKI\u6216\u8005\u770b\u5185\u6838\u6587\u6863\u3002<\/p>\n<p>\u4ece\u8fd9\u91cc\u53ef\u4ee5\u770b\u5230\u5b83\u7684\u8c03\u5ea6\u7b56\u7565<\/p>\n<pre><code>cat \/sys\/block\/vdb\/queue\/scheduler   \r\nnoop [deadline] cfq   \r\n<\/code><\/pre>\n<p>\u4fee\u6539<\/p>\n<pre><code>echo deadline &gt; \/sys\/block\/hda\/queue\/scheduler  \r\n<\/code><\/pre>\n<p>\u6216\u8005\u4fee\u6539\u542f\u52a8\u53c2\u6570<\/p>\n<pre><code>grub.conf  \r\nelevator=deadline  \r\n<\/code><\/pre>\n<p>\u4ece\u5f88\u591a\u6d4b\u8bd5\u7ed3\u679c\u6765\u770b\uff0c\u6570\u636e\u5e93\u4f7f\u7528deadline\u8c03\u5ea6\uff0c\u6027\u80fd\u4f1a\u66f4\u7a33\u5b9a\u4e00\u4e9b\u3002<\/p>\n<h2><a id=\"user-content-\u5176\u4ed6\" class=\"anchor\" href=\"https:\/\/github.com\/digoal\/blog\/blob\/master\/201608\/20160803_01.md#%E5%85%B6%E4%BB%96\"><\/a>\u5176\u4ed6<\/h2>\n<p>1. 
\u5173\u95ed\u900f\u660e\u5927\u9875<\/p>\n<p>2. \u7981\u7528NUMA<\/p>\n<p>3. SSD\u7684\u5bf9\u9f50<\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u80cc\u666f \u64cd\u4f5c\u7cfb\u7edf\u4e3a\u4e86\u9002\u5e94\u66f4\u591a\u7684\u786c\u4ef6\u73af\u5883\uff0c\u8bb8\u591a\u521d\u59cb\u7684\u8bbe\u7f6e\u503c\uff0c\u5bbd\u5bb9\u5ea6\u90fd\u5f88\u9ad8\u3002 \u5982\u679c\u4e0d\u7ecf\u8c03\u6574\uff0c\u8fd9\u4e9b\u503c\u53ef\u80fd\u65e0\u6cd5\u9002\u5e94HPC\uff0c\u6216\u8005\u786c\u4ef6\u7a0d\u597d\u4e9b\u7684\u73af\u5883\u3002 \u65e0\u6cd5\u53d1\u6325\u66f4\u597d\u7684\u786c\u4ef6\u6027\u80fd\uff0c\u751a\u81f3\u53ef\u80fd\u5f71\u54cd\u67d0\u4e9b\u5e94\u7528\u8f6f\u4ef6\u7684\u4f7f\u7528\uff0c\u7279\u522b\u662f\u6570\u636e\u5e93\u3002 \u6570\u636e\u5e93\u5173\u5fc3\u7684OS\u5185\u6838\u53c2\u6570 512GB \u5185\u5b58\u4e3a\u4f8b 1. \u53c2\u6570 fs.aio-max-nr \u652f\u6301\u7cfb\u7edf CentOS 6, 7 \u53c2\u6570\u89e3\u91ca aio-nr &amp; aio-max-nr: . aio-nr is the running total of the number of events specified on the io_setup system call for all currently active aio contexts. . If aio-nr reaches aio-max-nr then io_setup will fail with EAGAIN. . 
Note that &hellip; <a href=\"https:\/\/www.strongd.net\/?p=1388\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">DBA\u4e0d\u53ef\u4e0d\u77e5\u7684\u64cd\u4f5c\u7cfb\u7edf\u5185\u6838\u53c2\u6570<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16,20],"tags":[213],"class_list":["post-1388","post","type-post","status-publish","format-standard","hentry","category-nginx-2","category-20","tag-dba"],"_links":{"self":[{"href":"https:\/\/www.strongd.net\/index.php?rest_route=\/wp\/v2\/posts\/1388","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.strongd.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.strongd.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.strongd.net\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.strongd.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1388"}],"version-history":[{"count":1,"href":"https:\/\/www.strongd.net\/index.php?rest_route=\/wp\/v2\/posts\/1388\/revisions"}],"predecessor-version":[{"id":1389,"href":"https:\/\/www.strongd.net\/index.php?rest_route=\/wp\/v2\/posts\/1388\/revisions\/1389"}],"wp:attachment":[{"href":"https:\/\/www.strongd.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1388"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.strongd.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1388"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.strongd.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1388"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}