All notes

Walk through

Command line

# -t: Test.
# -c: Config.
nginx -t -c /path/to/nginx.conf

# Check the config first:
/etc/init.d/nginx configtest
# Then restart:
/etc/init.d/nginx restart

# Start (the nginx binary itself has no "restart" subcommand):
nginx

nginx -s reload
# -s signal      Send signal to the master process.
# 	stop    SIGTERM - fast shutdown
# 	quit    SIGQUIT - graceful shutdown
# 	reopen  SIGUSR1 - reopening the log files
# 	reload  SIGHUP - reloading the configuration file

# 1628 is the main process ID.
kill -s QUIT 1628

# Run in the foreground.
nginx -g 'daemon off;'

Master and worker processes

nginx has one master process and several worker processes. The main purpose of the master process is to read and evaluate configuration, and maintain worker processes.

The process ID of the nginx master process is written, by default, to the nginx.pid file in the directory /usr/local/nginx/logs or /var/run.

Once the master process receives the signal to reload configuration, it checks the syntax validity of the new configuration file and tries to apply the configuration provided in it. If this fails, it rolls back the changes and continues to work with the old configuration.

Old worker processes, receiving a command to shut down, stop accepting new connections and continue to service current requests until all such requests are serviced. After that, the old worker processes exit.


nginx consists of modules which are controlled by directives specified in the configuration file.

Directives are divided into simple directives (a name and parameters ended by a semicolon) and block directives (a name and parameters followed by additional instructions surrounded by braces). A block directive that can have other directives inside its braces is called a context.

Directives placed in the configuration file outside of any contexts are considered to be in the main context.

The events and http directives reside in the main context, server in http, and location in server.

# Main context

events {
    # ...
}

http {
    server {
        location / {
            # ...
        }
    }
}
The rest of a line after the # sign is considered a comment.

Example configuration

location /some/path/ {
    # ...
}

location / {
    root /var/www/html;
    index index.php;
}

location ~ \.php {
    # ...
}

location /share/ {
    alias html/share/;
    autoindex on;
}

PHP-FPM FastCGI support


php-fpm is a FastCGI process-management service. It listens on a port — 9000 by default — and only on the local machine, i.e. it accepts requests only from localhost. To check, run: netstat -nlpt | grep php-fpm

Example (usually you can find one in the nginx.conf):

server {
    listen       8011;
    location ~ \.php?.*$ {
        # fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        fastcgi_pass   127.0.0.1:9000;  # php-fpm's default port (see note above)

        # For example, for the “/info/” request with the following directives
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /home/www/scripts/php$fastcgi_script_name;
        # the SCRIPT_FILENAME parameter will be equal to “/home/www/scripts/php/info/index.php”.

        include        fastcgi_params;
    }
}


#---------- Requests related

  $content_length   “Content-Length” request header field
  $content_type     “Content-Type” request header field

  $request          full original request line
  $request_method   request method, usually “GET” or “POST”
  $query_string     same as $args
  $https            “on” if connection operates in SSL mode, or an empty string otherwise

  $server_name      name of the server which accepted a request
  $server_port      port of the server which accepted a request
  $server_protocol  request protocol, usually “HTTP/1.0”, “HTTP/1.1”, or “HTTP/2.0”

  $cookie_name      the "name" cookie

  $status           response status (1.3.2, 1.2.2)

#---------- Settings

  $document_root    root or alias directive’s value for the current request
  $document_uri     same as $uri

#---------- Info

  $host             in this order of precedence: host name from the request line, or host name from the “Host” request header field, or the server name matching a request

  $nginx_version    nginx version
  $pid              PID of the worker process
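A quick way to see a few of these variables in action; the /whoami location, port, and response text below are made up for illustration:

```nginx
server {
    listen 8080;
    location = /whoami {
        default_type text/plain;
        # Echo request metadata back using the variables listed above.
        return 200 "$request_method $uri via $server_protocol on $server_name:$server_port\n";
    }
}
```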

Module HTTP


# Wildcard.
server {
    server_name *.example.org;  # hypothetical wildcard name
}

#---------- Regexp.

server {
    server_name ~^(www\.)?(.+)$;

    location / {
        root /sites/$2; # Reuse the capture.
    }
}

server {
    server_name ~^(www\.)?(?<domain>.+)$;

    location / {
        root /sites/$domain; # Named capture.
    }
}

empty server_name

It is also possible to specify an empty server name (0.7.11).

It allows this server to process requests without the “Host” header field — instead of the default server — for the given address:port pair. This is the default setting.

server {
    server_name "";
}

How nginx processes a request


Name-based virtual servers

# "server_name" can list multiple names.
server {
    listen      80;
    server_name example.org www.example.org;  # hypothetical names
}

# This is the default server.
server {
    listen      80 default_server;
}

server {
    listen      80;
}

  1. nginx first tests the IP address and port of the request against the listen directives of the server blocks.
  2. It then tests only the request's "Host" header field to determine which server the request should be routed to.
  3. If its value does not match any server name, or the request does not contain this header field at all, nginx routes the request to the default server for this port, set by "default_server".
  4. If no "default_server" is set, the default server is the first one listed.
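The routing rules above in one sketch (hypothetical names):

```nginx
# Requests with "Host: app.example.org" match the first block;
# anything else arriving on port 80 falls through to the default server.
server {
    listen      80;
    server_name app.example.org;
}

server {
    listen      80 default_server;
    return      444;  # drop requests for unknown names
}
```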

A default server is a property of the listen port, and different default servers may be defined for different ports:

server {
    listen 192.168.1.1:80 default_server;  # hypothetical addresses
}

server {
    listen 192.168.1.1:8080 default_server;
}

Prevent processing requests with undefined server names

server {
    listen      80;

    # Since version 0.8.48, this is the default setting, so it can be omitted.
    # In earlier versions, the machine's hostname was used as the default server name.
    server_name "";

    # 444 is nginx's special non-standard code that closes the connection.
    return      444;
}

server {
    listen      80;

    # Path to local file system.
    root        /data/www;

    location / {
        index   index.html index.php;
    }

    # case-insensitive regexp
    location ~* \.(gif|jpg|png)$ {
        expires 30d;
    }

    # case-sensitive regexp
    location ~ \.php$ {
        fastcgi_pass  localhost:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;  # a common choice
        include       fastcgi_params;
    }
}

Server names

Using nginx as HTTP load balancer

Configuring HTTPS servers


Auto start on CentOS 7

Add /usr/local/nginx/sbin/nginx to /etc/rc.local.

Disable cache

nginx.conf configuration

When you press Ctrl+Shift+R in Firefox but still receive old JS or CSS files, it may be because nginx is caching them.

The SO answer "clear the cache of nginx" suggests turning off the "sendfile" option to clear the cache.
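A minimal sketch of that suggestion, for the http or server context:

```nginx
# Disable sendfile while debugging stale JS/CSS; re-enable it for production.
sendfile off;
```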

413 Request Entity Too Large Error

# Add the following line to http or server or location context:
client_max_body_size 2M;

What are the different usages of sites-available vs the conf.d directory for nginx?


The sites-* folders are managed by nginx_ensite and nginx_dissite. For httpd users who find this with a search, the equivalents are a2ensite/a2dissite.

The sites-available folder is for storing all of your vhost configurations, whether or not they're currently enabled.
The sites-enabled folder contains symlinks to files in the sites-available folder. This allows you to selectively disable vhosts by removing the symlink.

conf.d does the job, but you have to move something out of the folder, delete it, or make changes to it when you need to disable something.
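The enable/disable cycle can be sketched with plain symlinks; the directories below are throwaway stand-ins for /etc/nginx/sites-available and /etc/nginx/sites-enabled, and myapp.conf is a hypothetical vhost file:

```shell
available=$(mktemp -d)   # stand-in for sites-available
enabled=$(mktemp -d)     # stand-in for sites-enabled

# The vhost config lives permanently in sites-available...
touch "$available/myapp.conf"

# ...and is enabled by symlinking it into sites-enabled:
ln -s "$available/myapp.conf" "$enabled/myapp.conf"

# Disabling removes only the symlink; the config file itself is untouched:
rm "$enabled/myapp.conf"
ls "$available"
```

After adding or removing a symlink, reload nginx (nginx -s reload) for the change to take effect.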


Reverse proxy


# Set up nginx to listen on 80, and proxy all queries on "/" to port 5010.
server {
	listen       80;
	# Set server_name only for a virtual host.
	# If "I don't want to put a name on this", use "server_name _;" or just comment it out.
	server_name  YOUR_HOSTNAME;
	location / {
		proxy_pass         http://127.0.0.1:5010;  # the target named in the comment above
		proxy_redirect     off;
		proxy_set_header   Host             $host;
		proxy_set_header   X-Real-IP        $remote_addr;
		proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
		client_max_body_size       10m;
		client_body_buffer_size    128k;
		proxy_connect_timeout      90;
		proxy_send_timeout         90;
		proxy_read_timeout         90;
		proxy_buffer_size          4k;
		proxy_buffers              4 32k;
		proxy_busy_buffers_size    64k;
		proxy_temp_file_write_size 64k;
	}

	location /certs {
		alias /etc/wcf/certs/;
	}

	# Suppress error log caused by missing favicon.ico.
	location = /favicon.ico {
		log_not_found off;
	}
}


Nginx equivalent of Apache's ProxyPassReverse.

# Establish simple proxy between myhost:80 and myapp:8080.

<VirtualHost myhost:80>
  ServerName myhost
  DocumentRoot /path/to/myapp/public
  ProxyPass / http://myapp:8080/
  ProxyPassReverse / http://myapp:8080/
</VirtualHost>


server {
    listen myhost:80;
    server_name  myhost;
    location / {
        root /path/to/myapp/public;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://myapp:8080;
        proxy_redirect default; # May be omitted.
    }
}


# The default number of buffers is increased and the size of the buffer for the first portion of the response is made smaller than the default.
location /some/path/ {
	proxy_buffers 16 4k;
	proxy_buffer_size 2k;
	proxy_pass http://localhost:8000;
}

# For fast interactive clients, disable buffering:
location /some/path/ {
	proxy_buffering off;
	proxy_pass http://localhost:8000;
}


nginx http proxying, load balancing, buffering and caching.


NginxDoc: "log_format" must be in the "http" context. "$http_authorization" will write the Authorization request header to access.log. You can log any request header with "$http_<header>" and any sent response header with "$sent_http_<header>"; see this stackoverflow.

monitoring: logging.

#---------- error_log

# The default setting of the error log works globally.
# The error_log directive can be also specified at the http, stream, server and location levels.
# log from error to warn:
error_log /var/log/nginx/error.log warn;
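Levels can be tightened or loosened per context; a sketch with hypothetical log paths:

```nginx
http {
    error_log /var/log/nginx/http_error.log warn;

    server {
        # This virtual server logs more verbosely than the http-level default.
        error_log /var/log/nginx/server_error.log info;
    }
}
```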

#----- Logging to Syslog

error_log syslog:server=unix:/var/log/nginx.sock debug;
access_log syslog:server=[2001:db8::1]:1234,facility=local7,tag=nginx,severity=info;

#---------- log_format

http {
	include		  /etc/nginx/mime.types;

	log_format	main  '$http_authorization $remote_addr - $remote_user [$time_local] "$request" '
					  '$status $body_bytes_sent "$http_referer" '
					  '"$http_user_agent" "$http_x_forwarded_for"';
}

#---------- Conditional logging

# Excludes requests with HTTP status codes 2xx (Success) and 3xx (Redirection):
map $status $loggable {
    ~^[23]  0;
    default 1;
}

access_log /path/to/access.log combined if=$loggable;

Configure HTTPS


The "ssl" parameter must be enabled on listening sockets, and certificates must be provided.

# Enable multi-processing:
worker_processes auto;

# For optimization. See below: h3 - optimization.
http {
	# The default 1 MB cache holds about 4000 sessions. Here use a 10 MB shared cache:
	ssl_session_cache	shared:SSL:10m;
	# The default cache timeout is 5 minutes:
	ssl_session_timeout 10m;
}

server {
	# HTTP/HTTPS at the same time.
	listen				80;
	listen				443 ssl;
	# This is also important for optimization:
	keepalive_timeout	70;

	ssl_protocols		TLSv1 TLSv1.1 TLSv1.2;
	ssl_ciphers			HIGH:!aNULL:!MD5;
}

cp ca.crt /etc/pki/tls/certs
cp ca.key /etc/pki/tls/private/ca.key
cp ca.csr /etc/pki/tls/private/ca.csr

# /etc/nginx/conf.d/ssl.conf
server {
    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;

    server_name server_IP_address;

    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;

Create self-signed SSL

digitalOcean: how to create a self signed SSL.

sudo mkdir /etc/ssl/private
sudo chmod 700 /etc/ssl/private

# Now, we can create a self-signed key and certificate pair with OpenSSL in a single command by typing:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt

The prompts will look something like this:

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.
Organizational Unit Name (eg, section) []:Ministry of Water Slides
Common Name (e.g. server FQDN or YOUR name) []:server_IP_address
Email Address []:[email protected]
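To double-check what was generated (paths as produced by the command above):

```shell
# -subject shows the fields entered at the prompts; -dates shows the validity window.
openssl x509 -in /etc/ssl/certs/nginx-selfsigned.crt -noout -subject -dates
```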

Security enhancement


# Disable SSLv2 and SSLv3
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# If your version of OpenSSL is old, unavailable ciphers will be discarded automatically. Always use the full ciphersuite above and let OpenSSL pick the ones it supports.
# The ordering of a ciphersuite is very important.
# The recommended cipher suite:

# The recommended cipher suite for backwards compatibility (IE6/WinXP):

Good examples of strong SSL security.

server {
    listen 443 ssl spdy;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    add_header Strict-Transport-Security max-age=63072000;
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
}


The most CPU-intensive operation is the SSL handshake. Two ways to minimize its effect: enable keepalive connections to send several requests via one connection, and reuse SSL session parameters to avoid handshakes for parallel and subsequent connections.

The authority provides a bundle of chained certificates which should be concatenated to the signed server certificate. The server certificate must appear before the chained certificates in the combined file:

# Append the bundle to the signed server certificate (hypothetical file name),
# and use the resulting file in the ssl_certificate directive:
cat bundle.crt >> www.example.com.crt

To ensure the server sends the complete certificate chain:

openssl s_client -connect www.example.com:443  # substitute your server name
# Certificate chain
# 0 s:/C=US/ST=Arizona/L=Scottsdale/
#     /, Inc
#     /OU=MIS Department/
#     /serialNumber=0796928-7/, Clause 5.(b)
#   i:/C=US/ST=Arizona/L=Scottsdale/, Inc.
#     /OU=
#     /CN=Go Daddy Secure Certification Authority
#     /serialNumber=07969287
# 1 s:/C=US/ST=Arizona/L=Scottsdale/, Inc.
#     /OU=
#     /CN=Go Daddy Secure Certification Authority
#     /serialNumber=07969287
#   i:/C=US/O=The Go Daddy Group, Inc.
#     /OU=Go Daddy Class 2 Certification Authority
# 2 s:/C=US/O=The Go Daddy Group, Inc.
#     /OU=Go Daddy Class 2 Certification Authority
#   i:/L=ValiCert Validation Network/O=ValiCert, Inc.
#     /OU=ValiCert Class 2 Policy Validation Authority
#     /CN=[email protected]

# Chains:
# In this example the subject (s) of the server certificate #0 is signed by an issuer (i).
# That issuer is itself the subject of certificate #1, which is signed by an issuer that is itself the subject of certificate #2, which is signed by the well-known issuer ValiCert, Inc., whose certificate is stored in the browsers' built-in certificate base.

An SSL certificate with several names

The SSL connection is established before the browser sends an HTTP request and nginx does not know the name of the requested server. Therefore, it may only offer the default server’s certificate.

Solution 1:
A certificate containing exact and wildcard names in the SubjectAltName field, for example, example.org and *.example.org.
It is better to place a certificate file with several names and its private key file at the http level of the configuration:

ssl_certificate     common.crt;
ssl_certificate_key common.key;

server {
	listen			443 ssl;
	server_name		www.example.org;  # hypothetical name
}

server {
	listen			443 ssl;
	server_name		www.example.net;  # hypothetical name
}

Solution 2:
A more generic solution for running several HTTPS servers on a single IP address is TLS Server Name Indication extension (SNI, RFC 6066), which allows a browser to pass a requested server name during the SSL handshake and, therefore, the server will know which certificate it should use for the connection.
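With SNI, each server block can carry its own certificate; the names and paths below are hypothetical:

```nginx
server {
    listen 443 ssl;
    server_name one.example.org;
    ssl_certificate     /etc/ssl/one.example.org.crt;
    ssl_certificate_key /etc/ssl/one.example.org.key;
}

server {
    listen 443 ssl;
    server_name two.example.org;
    ssl_certificate     /etc/ssl/two.example.org.crt;
    ssl_certificate_key /etc/ssl/two.example.org.key;
}
```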



location / {
	include uwsgi_params;
	uwsgi_pass 127.0.0.1:9090;  # matches the "uwsgi -s :9090" invocation below
}



To let multiple sites share one uwsgi service, uwsgi must be run in virtual-site mode: drop "-w myapp" and add "--vhost":

uwsgi -s :9090 -M -p 4 -t 30 --limit-as 128 -R 10000 -d uwsgi.log --vhost


apt-get install python-setuptools
easy_install virtualenv


virtualenv /var/www/myenv


source /var/www/myenv/bin/activate  
pip install django  
pip install mako  


server {
	listen       80;
	location / {
		include uwsgi_params;
		uwsgi_param UWSGI_PYHOME /var/www/myenv;
		uwsgi_param UWSGI_SCRIPT myapp1;
		uwsgi_param UWSGI_CHDIR /var/www/myappdir1;
	}
}

server {
	listen       80;
	location / {
		include uwsgi_params;
		uwsgi_param UWSGI_PYHOME /var/www/myenv;
		uwsgi_param UWSGI_SCRIPT myapp2;
		uwsgi_param UWSGI_CHDIR /var/www/myappdir2;
	}
}



server {
	listen 80;
	server_name uwsgiadmin.django.obmem.info;  # the monitoring vhost mentioned below
	root   /var/www/django1.23;
	index  index.html index.htm;
	access_log	/var/log/nginx/django.access.log;
	location /media/ {
		root /var/www/django1.23/adminmedia;
		rewrite ^/media/(.*)$ /$1 break;
	}
	location / {
		include uwsgi_params;
		uwsgi_param UWSGI_PYHOME /var/www/django1.23/vtenv;
		uwsgi_param UWSGI_CHDIR /var/www/django1.23/uwsgiadmin;
		uwsgi_param UWSGI_SCRIPT uwsgiadmin_wsgi;
	}
}

The uwsgi monitoring information can then be viewed at http://uwsgiadmin.django.obmem.info; the username and password are both admin.




wcfNote: the best way is to set the prefix in the application itself instead of in Nginx. Take Jenkins for example, see SO: jenkins website root path.

SO: rewrite URLs in a proxy response.

location /admin/ {
    proxy_pass http://localhost:8080/;
    sub_filter "http://your_server/" "http://your_server/admin/";
    sub_filter_once off;
}

location / {
    sub_filter '<a href="'  '<a href="https://$host/';
    sub_filter '<img src="' '<img src="https://$host/';
    sub_filter_once on;
}

See the http sub module.


SO: nginx alias/location.

Both the alias and root directives are best used with absolute paths. You can use relative paths, but they are relative to the prefix config option used to compile nginx. You can see this by executing nginx -V and finding the value following --prefix=.

alias

Syntax: 	alias path;
Default: 	—
Context: 	location

The path value can contain variables, except $document_root and $realpath_root.

If alias is used inside a location defined with a regular expression, the regular expression should contain captures and alias should refer to those captures:

location ~ ^/users/(.+\.(?:gif|jpe?g|png))$ {
    alias /data/w3/images/$1;
}

However, when the location matches the last part of the alias value, it is better to use the root directive instead:

location /images/ {
    root /data/w3;
}
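The difference in one sketch: with root, the full request URI is appended to the directive's value; with alias, the location prefix is replaced by it.

```nginx
# For the request /images/top.gif (alternatives — not to be used together):
location /images/ {
    root  /data/w3;          # serves /data/w3/images/top.gif
}
location /images/ {
    alias /data/w3/images/;  # also serves /data/w3/images/top.gif
}
```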


nginxOrg: proxy_pass.

How request URI is passed to the server

wcf Question: how to make nginx remove the prefix when doing a reverse proxy? The answer is: add a trailing "/"! See the explanation below. http proxy module - proxy_pass. A request URI is passed to the server as follows:

# Specified with a URI:
location /name/ {

# Even adding one trailing '/' makes it a URI:
location /name2/ {
### In previous cases, "/name/" and "/name2/" will both be replaced from request.

# Specified without a URI:
location /some/path/ {
### In this case, "/some/path/" will not be removed from request.

wcfNote: nginx treats the proxy_pass value as "scheme://host/uri". So "http://127.0.0.1" is only a "host", while in "http://127.0.0.1/" the trailing "/" alone is the "uri" part. You can simply skip the following subsection, which helps little in understanding the previous material.


A generic URI is of the form:

  scheme:[//[user:password@]host[:port]][/]path[?query][#fragment]

For example:

  abc://username:password@example.com:123/path/data?key=value&key2=value2#fragid1
  └┬┘   └───────┬───────┘ └────┬────┘ └┬┘           └─────────┬─────────┘ └──┬──┘
scheme  user information     host     port                  query         fragment

  urn:example:mammal:monotreme:echidna
  └┬┘ └──────────────┬───────────────┘
scheme              path


# Syntax:	proxy_redirect default;
#         proxy_redirect off;
#         proxy_redirect redirect replacement;
# Default:	
#         proxy_redirect default;
# Context:	http, server, location

# Suppose a proxied server returned the header field “Location: http://localhost:8000/two/some/uri/”. The directive
proxy_redirect http://localhost:8000/two/ http://frontend/one/;
# will rewrite this string to “Location: http://frontend/one/some/uri/”.

# A server name may be omitted in the replacement string:
proxy_redirect http://localhost:8000/two/ /;

# The default parameter uses the parameters of the "location" and "proxy_pass" directives. 
# The two configurations below are equivalent:
location /one/ {
    proxy_pass     http://upstream:port/two/;
    proxy_redirect default;
}

location /one/ {
    proxy_pass     http://upstream:port/two/;
    proxy_redirect http://upstream:port/two/ /one/;
}

# NOTE: The default parameter is not permitted if proxy_pass is specified using variables.



Syntax: 	gzip on | off;
Default: 	gzip off;
Context: 	http, server, location, if in location

gzip on;
# Sets the minimum length of a response that will be gzipped:
gzip_min_length 1k;
# gzip_buffers number size:
gzip_buffers 16 64k;
# Sets the minimum HTTP version of a request required to compress a response:
gzip_http_version 1.1;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
# Enables or disables inserting the “Vary: Accept-Encoding” response header field:
gzip_vary on;


The default root for nginx on Linux is: /usr/share/nginx/html/.

location / {
	index index.php;
}

# Turn on directory indexing:
location / {
	autoindex on;
}


# In response to the http://localhost/images/example.png request nginx will send the /data/images/example.png file. Requests with URIs not starting with /images/ will be mapped onto the /data/www directory.
server {
	location / {
		root /data/www;
	}

	location /images/ {
		root /data;
	}
}


# Disallow the display of the original jpg files while allowing access to the rest:
location /pictures {
	location ~ \.jpg$ {
		deny all;
	}
}


StackOverflow: nginx location priority.


location optional_modifier location_match {
	. . .
}
  1. Directives with the "=" prefix that match the query exactly. If found, searching stops.
  2. All remaining directives with conventional strings. If this match used the "^~" prefix, searching stops.
  3. Regular expressions, in the order they are defined in the configuration file.
  4. If #3 yielded a match, that result is used. Otherwise, the match from #2 is used.

location  = / {
  # matches the query / only.
  [ configuration A ]
}

location  / {
  # matches any query, since all queries begin with /, but regular
  # expressions and any longer conventional blocks will be
  # matched first.
  [ configuration B ]
}

location /documents/ {
  # matches any query beginning with /documents/ and continues searching,
  # so regular expressions will be checked. This will be matched only if
  # regular expressions don't find a match.
  [ configuration C ]
}

location ^~ /images/ {
  # matches any query beginning with /images/ and halts searching,
  # so regular expressions will not be checked.
  [ configuration D ]
}

location ~* \.(gif|jpg|jpeg)$ {
  # matches any request ending in gif, jpg, or jpeg. However, all
  # requests to the /images/ directory will be handled by
  # Configuration D.
  [ configuration E ]
}

HTTP proxy setting

upstream app_servers {
    server 127.0.0.1:8080;  # hypothetical backend
}

# Configuration for the server
server {

    # Running port
    listen 80;

    # Proxy the connections.
    location / {

        proxy_pass         http://app_servers;
        proxy_redirect     off;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
    }
}



Syntax: user user [group];
Default: user nobody nobody;
Context: 	main

Defines user and group credentials used by worker processes. If group is omitted, a group whose name equals that of user is used.



# Choose one below:
charset utf-8;
charset off;


NginxDocs. Defines a group of servers. Servers can listen on different ports. In addition, servers listening on TCP and UNIX-domain sockets can be mixed.

upstream backend {
	server weight=5;
	server		max_fails=3 fail_timeout=30s;
	server unix:/tmp/backend3;
	server	backup;

By default, requests are distributed between the servers using a weighted round-robin balancing method.
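For instance, with the weights below, app1 would receive 5 of every 6 requests on average (hypothetical hosts):

```nginx
upstream weighted {
    server app1.example.org weight=5;
    server app2.example.org;  # default weight is 1
}
```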