Have you ever tried to raise the maximum number of open files for NGINX on CentOS 7 or Debian 8.3 Jessie?

I’m referring to the worker_rlimit_nofile directive, which NGINX docs explain as follows:
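As I recall it, the description in the NGINX core module documentation reads:

> Changes the limit on the maximum number of open files (RLIMIT_NOFILE) for worker processes. Used to increase the limit without restarting the main process.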

If so, and if you’ve tried a substantially high value, you’ve probably seen this error log:

2015/11/25 04:47:46 [alert] 8843#8843: setrlimit(RLIMIT_NOFILE, 16384) failed (1: Operation not permitted)

First, we need to take note of the maximum number of files available to the entire system.  Whichever maximum you set for NGINX, you should consider whether the NGINX worker processes are the only file handle-hungry processes running on the system.  Either way, you probably want to make sure that the open files limit for NGINX is well below the system maximum.

Also worth noting, this setting isn’t just a maximum for the number of file handles for disk access.  In Linux, file handles are also used to open network connections.  So we want to set worker_rlimit_nofile to a value that will allow for the combined maximum.

Here’s the global value that’s set on an AWS EC2 t2.medium (2 vCPU and 4GB RAM) running CentOS 7:
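On Linux, the system-wide ceiling lives in procfs, so it can be read like this:

```shell
# System-wide maximum number of file handles (all processes combined).
cat /proc/sys/fs/file-max
```

The same value is also available via `sysctl fs.file-max`.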

…  about 373K, which is great because it’s way above my goal of setting the worker_rlimit_nofile directive to 16K.

Finally, keep in mind that the worker_rlimit_nofile directive applies to each NGINX worker process.  The number of worker processes is controlled by the worker_processes directive, which I usually like to set to auto so that one worker process is started per CPU core (2 in the case of my t2.medium).  So by setting my worker_rlimit_nofile directive to 16K, I’m opening the door for NGINX to consume up to 32K file handles (network connections + files).
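In nginx.conf, that combination looks like this (a sketch using my 16K target; adjust to your own ceiling):

```nginx
worker_processes auto;        # one worker per CPU core
worker_rlimit_nofile 16384;   # per-worker open files limit
```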

Normally on CentOS 7 and Debian 8.3 Jessie, the NGINX installation creates an NGINX service managed by systemd.  systemd is the init system and service manager that controls services, including their starting and stopping, and it can also set per-service resource limits.  So we’re going to put systemd to work for us to increase the number of open files for NGINX.

Before making the changes, let’s take a look at the current maximum that the system allows for NGINX. In the command below, the /run/nginx.pid file might instead be located at /var/run/nginx.pid. You can check your NGINX configuration, or try running nginx -V and look for the --pid-path setting:
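The check itself reads the limits of the running master process from procfs (assuming the PID file path above):

```shell
# Show the open-files limits for the running NGINX master process.
grep "Max open files" /proc/$(cat /run/nginx.pid)/limits
```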

Notice the Max open files line indicating a soft limit of 1024 and a hard limit of 4096; this is the problem.

Create the /etc/systemd/system/nginx.service.d directory. Then create a new file named worker_files_limit.conf in that directory, and enter the following content, with the maximum value you plan to use for worker_rlimit_nofile (or perhaps the maximum value you’d ever plan to use):

Contents of /etc/systemd/system/nginx.service.d/worker_files_limit.conf:
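A minimal drop-in for a 16K ceiling (LimitNOFILE is the systemd directive that sets RLIMIT_NOFILE for the service):

```ini
[Service]
LimitNOFILE=16384
```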

Next, make sure you’ve set the worker_rlimit_nofile value to your desired value in your NGINX configuration file. Then you can reload systemd and restart nginx:
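Assuming the service is named nginx, that amounts to:

```shell
# Re-read unit files (including the new drop-in), then restart the service.
sudo systemctl daemon-reload
sudo systemctl restart nginx
```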

The nginx error should now be absent. Finally, check the new limits that the system has set for NGINX:
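One way to check, assuming the /run/nginx.pid path:

```shell
# Confirm the raised limits on the restarted NGINX master process.
grep "Max open files" /proc/$(cat /run/nginx.pid)/limits
```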

The Max open files line now shows the 16K value, which in my case theoretically allows that t2.medium to open up to 32K files/connections!  Realistically, I don’t think a t2.medium can handle that kind of traffic; perhaps more like 8K, and not in a sustained way, and not for large data transfers, since the T2 EC2 types are capped in terms of CPU burst performance.  However, this approach should also work on more capable EC2 instances.

