Tuning NGINX for Performance

Original article: http://nginx.com/blog/tuning-nginx/

NGINX is well known as a high performance load balancer, cache, and web server, powering over 40% of the busiest websites in the world. Most of the default NGINX and Linux settings work well for most use cases, but some tuning can be necessary to achieve optimal performance. This post discusses some of the NGINX and Linux settings to consider when tuning a system. There are many settings available, but we will cover only the few that most users should consider adjusting. The settings not covered here should only be changed by those with a deep understanding of NGINX and Linux, or on the recommendation of the NGINX support or professional services teams. NGINX professional services has worked with some of the world's busiest websites to tune NGINX for maximum performance and is available to work with any customer who needs to get the most out of their system.

Introduction

A basic understanding of the NGINX architecture and configuration concepts is assumed.  This post will not attempt to duplicate the NGINX documentation, but will provide an overview of the various options with links to the relevant documentation.

A good rule to follow when doing tuning is to change one setting at a time and if it does not result in a positive change in performance, then to set it back to the default value.

We will start with a discussion of Linux tuning, since some of these settings can affect the values you will use in your NGINX configuration.

Linux Configuration

Modern Linux kernels (2.6+) do a good job of sizing most settings, but there are some you may want to change. If the operating system settings are too low, you will see errors in the kernel log indicating that you should adjust them. There are many possible Linux settings, but we will cover those most likely to need tuning for normal workloads. Please refer to the Linux documentation for details on adjusting these settings.

The Backlog Queue

The following settings relate directly to connections and how they are queued. If you have a high rate of incoming connections and you are seeing uneven levels of performance, for example some connections appear to be stalling, then changing these settings may help.

net.core.somaxconn: This sets the size of the queue of connections waiting for NGINX to accept them. Since NGINX accepts connections very quickly, this value does not usually need to be very large, but the default can be very low, so increasing it can be a good idea if you have a high-traffic website. If the setting is too low you will see error messages in the kernel log; increase this value until the errors stop. Note: if you set this to a value greater than 512, you should change your NGINX configuration to match, using the backlog parameter of the listen directive. (Translator's note: the backlog parameter of the listen directive sets the maximum length of the queue of pending connections; it defaults to -1 on FreeBSD and Mac OS X and to 511 on other platforms.)

net.core.netdev_max_backlog: This sets the maximum number of packets the network card can queue before they are handed off to the CPU. For machines with a high amount of bandwidth this value may need to be increased. Check the documentation for your network card for advice on this setting, or check the kernel log for errors relating to it.
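
For illustration, the corresponding sysctl entry might look like the following; the value itself is an assumption for a high-bandwidth machine, not a recommendation from the article:

# /etc/sysctl.conf
net.core.netdev_max_backlog = 30000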

File Descriptors

File descriptors are operating system resources used to handle things such as connections and open files. NGINX can use up to two file descriptors per connection: for example, when it is proxying, it uses one for the client connection and another for the connection to the proxied server, although if HTTP keepalives are used this ratio will be much lower. For a system that will see a large number of connections, these settings may need to be adjusted:

fs.file-max: This is the system-wide limit for file descriptors.

nofile: This is the user file descriptor limit and is set in the /etc/security/limits.conf file.

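A sketch of where these two limits live, assuming a target of roughly 200000 descriptors and that the worker processes run as the "nginx" user (both are illustrative assumptions):

# /etc/sysctl.conf -- system-wide file descriptor limit
fs.file-max = 200000

# /etc/security/limits.conf -- per-user limit for the account NGINX runs as
nginx  soft  nofile  200000
nginx  hard  nofile  200000

NGINX can also raise the limit for its own worker processes with the worker_rlimit_nofile directive, which avoids relying on the user limit alone.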

Ephemeral ports

When NGINX is acting as a proxy, each connection to an upstream server uses a temporary, or ephemeral port.

net.ipv4.ip_local_port_range: This specifies the starting and ending port values of the range to use. If you see that you are running out of ports, you can increase this range. A common setting is to use ports 1024 to 65000.

net.ipv4.tcp_fin_timeout: This specifies how long after a port is no longer being used it can be reused for another connection. This usually defaults to 60 seconds but can usually be safely reduced to 30 or even 15 seconds.
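
Putting the two together, a sketch of the corresponding sysctl entries, using the port range mentioned above and a reduced FIN timeout of 30 seconds:

# /etc/sysctl.conf
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 30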

NGINX Configuration

The following are some NGINX directives that can impact performance. As stated above, we will only discuss those directives that we recommend most users consider adjusting. Any directive not mentioned here should not be changed without direction from the NGINX team.

Worker Processes

NGINX can run multiple worker processes, each capable of processing a large number of connections. You can control how many worker processes are run and how connections are handled with the following directives:

worker_processes: This controls the number of worker processes that NGINX will run. In most cases, running one worker process per CPU core works well, which can be achieved by setting this directive to "auto". There are times when you may want to increase this number, such as when the worker processes have to do a lot of disk I/O. The default is 1.

worker_connections: This is the maximum number of connections that can be processed at one time by each worker process.  The default is 512, but most systems can handle a larger number.   What this number should be set to will depend on the size of the server and the nature of the traffic and can be discovered through testing.

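A sketch of how these two directives typically appear in nginx.conf; the connection count of 1024 is an illustrative value, not a recommendation from the article:

worker_processes auto;

events {
    worker_connections 1024;
}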

Keepalives

Keepalive connections can have a major impact on performance by reducing the CPU and network overhead needed for opening and closing connections.  NGINX terminates all client connections and has separate and independent connections to the upstream servers.  NGINX supports keepalives for the client and upstream servers.  The following directives deal with client keepalives:

keepalive_requests: This is the number of requests a client can make over a single keepalive connection. The default is 100, but it can be set to a much higher value, which can be especially useful for testing when the load-generating tool sends many requests from a single client.

keepalive_timeout: How long a keepalive connection will remain open once it becomes idle.

The following directives deal with upstream keepalives:

keepalive: This specifies the number of idle keepalive connections to an upstream server that remain open for each worker process. There is no default value for this directive.

To enable keepalive connections to the upstream you must also add the following directives:

proxy_http_version 1.1;
proxy_set_header Connection "";
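
Pulling the client-side and upstream directives together, a sketch of a proxy configuration with keepalives enabled in both directions (the upstream name, server addresses, and numeric values are illustrative assumptions):

http {
    keepalive_timeout  65;
    keepalive_requests 1000;

    upstream backend {
        server 10.0.0.10:8080;
        server 10.0.0.11:8080;
        keepalive 32;   # idle keepalive connections kept open per worker process
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}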

Access Logging

Logging each request consumes both CPU and I/O cycles, and one way to reduce this impact is to enable access log buffering. This causes NGINX to buffer a series of log entries and write them to the file together rather than performing a separate write operation for each one. Access log buffering is enabled by specifying the "buffer=size" option of the access_log directive, which sets the size of the buffer to be used. You can also use the "flush=time" option to tell NGINX to write out the entries in the buffer after this amount of time. With these two options defined, NGINX will write entries to the log file when the next log entry will not fit into the buffer or if the entries in the buffer are older than the time specified by the flush parameter. Log entries are also written when a worker process is re-opening log files or shutting down. It is also possible to disable access logging completely.
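
A sketch of a buffered access log, assuming a 32 KB buffer flushed at least once a minute and a log path of /var/log/nginx/access.log (both are illustrative values):

access_log /var/log/nginx/access.log combined buffer=32k flush=1m;

# Or turn access logging off entirely:
access_log off;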

Sendfile

Sendfile is an operating system feature that can be enabled in NGINX. It can provide faster TCP data transfers by doing in-kernel copying of data from one file descriptor to another, often achieving zero-copy. NGINX can use it to write cached or on-disk content down a socket, without any context switching to user space, making it extremely fast and using less CPU overhead. Because the data never touches user space, it is not possible to insert filters that need to access the data into the processing chain, so you cannot use any of the NGINX filters that change the content, e.g. the gzip filter. It is disabled by default.
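
Enabling it is a one-line change in the http or server context; tcp_nopush is often enabled alongside it so that response headers and the start of the file leave in a single packet (pairing the two is common practice, not a requirement stated in the article):

sendfile   on;
tcp_nopush on;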

Limits

NGINX and NGINX Plus allow you to set various limits that can be used to help control the resources consumed by clients and therefore impact the performance of your system and also affect user experience and security.  The following are some of these directives:

limit_conn/limit_conn_zone:  These directives can be used to limit the number of connections NGINX will allow, for example from a single client IP address.  This can help prevent individual clients from opening too many connections and consuming too many resources.

limit_rate: This will limit the amount of bandwidth allowed for a client on a single connection. This can prevent the system from being overloaded by certain clients and can help to ensure that all clients receive good quality of service.

limit_req/limit_req_zone: These directives can be used to limit the rate of requests being processed by NGINX. As with limit_rate, this can help prevent the system from being overloaded by certain clients and can help to ensure that all clients receive good quality of service. They can also be used to improve security, especially for login pages, by limiting the request rate to one that is adequate for a human user but will slow down programs trying to access your application.
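
A sketch combining these three client-side limits; the zone names, sizes, and rates are illustrative assumptions, not values from the article:

http {
    # At most 10 concurrent connections per client IP
    limit_conn_zone $binary_remote_addr zone=per_ip_conn:10m;

    # At most 5 requests per second per client IP
    limit_req_zone $binary_remote_addr zone=per_ip_req:10m rate=5r/s;

    server {
        listen 80;
        location / {
            limit_conn per_ip_conn 10;
            limit_req  zone=per_ip_req burst=10;
            limit_rate 500k;   # cap bandwidth at 500 KB/s per connection
        }
    }
}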

max_conns: This is set for a server in an upstream group and is the maximum number of simultaneous connections allowed to that server.  This can help prevent the upstream servers from being overloaded.  The default is zero, meaning that there is no limit.

queue: If max_conns is set for any upstream servers, then the queue directive governs what happens when a request cannot be processed because there are no available servers in the upstream group and some of those servers have reached the max_conns limit.  This directive can be set to the number of requests to queue and for how long.  If this directive is not set, then no queueing will occur.

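A sketch of an upstream group using both directives; queue is an NGINX Plus feature, and the server addresses and numbers here are illustrative assumptions:

upstream backend {
    server 10.0.0.10:8080 max_conns=250;
    server 10.0.0.11:8080 max_conns=250;
    queue 100 timeout=70;   # hold up to 100 requests for up to 70 seconds
}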

Additional considerations

There are additional features of NGINX that can be used to increase the performance of a web application that don’t really fall under the heading of tuning but are worth mentioning because their impact can be considerable.  We will discuss two of these features.

Caching

By enabling caching on an NGINX instance that is load balancing a set of web or application servers, you can dramatically improve response times to clients while at the same time dramatically reducing the load on the backend servers. Caching is a subject of its own and will not be covered here. For more information on configuring NGINX for caching please see: NGINX Admin Guide – Caching.

Compression

Compressing the responses to clients can greatly reduce their size, requiring less bandwidth; however, compression does require CPU resources, so it is best used when reducing bandwidth is worth that cost. It is important to note that you should not enable compression for objects that are already compressed, such as JPEGs. For more information on configuring NGINX for compression please see: NGINX Admin Guide – Compression and Decompression.
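
A sketch of a typical gzip configuration that compresses text-based responses while leaving already-compressed formats such as JPEGs untouched (the MIME type list is an illustrative assumption):

gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml;
# image/jpeg is deliberately absent, so JPEGs pass through uncompressed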

Via: http://blog.csdn.net/agangdi/article/details/40838499
