Speed up website response times with nginx
The why, when, and how of using nginx to cache a CMS's output.
Prior to worrying about nginx...
Nginx can't do much to help make a slow design and inefficient front-end code feel fast.
A lot of what makes a web page fast or slow is down to design considerations and front-end techniques. To that end I've so far implemented the following on my new website:
- Used a performance conscious design.
- Kept the core CSS relatively lean (32KB before gzip).
- Minified the JS and CSS.
- Used Gzip to compress all appropriate files over the wire.
- Set appropriate cache headers for all content types.
- Used SPDY/3 instead of HTTP/1.1.
- Created image assets optimised via ImageAlpha and ImageOptim.
- Ensured that JS, CSS, and fonts are loaded asynchronously.
The goal is to minimise the amount of 'stuff' needed on a page, and to stop any of that stuff from blocking page render. I do have a couple of things that are not quite optimal:
- I load three font files, which is a bit excessive - but I'm willing to pay that price for the design.
- I load jQuery because I'm not good enough with pure JS yet to ditch it.
I'm not worrying about the number of HTTP requests because I'm using SPDY and will soon switch to HTTP2. I've written about why that makes a difference in another article: HTTP2 for front end developers.
That left only one problem area...
Time To First Byte
Having to run any request through a CMS is inevitably slower than serving a static file.
TTFB is a measure of how long it takes for the server to begin responding to a request by sending data to the client. For static files on this website that number is typically in the 20 to 40 millisecond range, which is essentially imperceptible.
However, when requesting a page that's routed through the CMS, such as the homepage, the Time To First Byte is much larger - and becomes noticeable.
This is because the CMS must generate the HTML of the page being requested. For my homepage it's gathering all the articles I've written, sorting and filtering them into groups, extracting certain fields, creating pagination, and then spitting it all out as HTML. All of that can take a second or so, depending on the amount of content being manipulated and the complexity of the relations between the content types.
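If you want to measure TTFB yourself, curl can report it from the command line. A quick sketch - the URL is a placeholder:

curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://example.com/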
Using Craft's cache feature
Craft is a great CMS, and it has techniques to help mitigate complex queries impacting performance - specifically, cache tags. These can be wrapped around expensive bits of template code, and Craft will then store the result of that code in the database; the next time the page is requested, Craft uses the stored result instead of doing all the work again. Using these tags I was able to get the homepage TTFB down to about 0.3 seconds - just that tag lopped about a second off the TTFB.
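For reference, this is roughly what that looks like in a Craft (Twig) template - a minimal sketch, where the 'articles' section and the query are hypothetical:

{% cache %}
    {% for entry in craft.entries.section('articles').find() %}
        <h2>{{ entry.title }}</h2>
    {% endfor %}
{% endcache %}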
That's great, but still not on the same order as fetching a static file. Despite the cache tag removing a lot of computation, it still has to execute a bunch of database calls in order to fetch the content of the tag.
To be clear: I'm being very fussy by bothering about a 0.3s TTFB, but I want to see how far I can push things on my site...
Using nginx's fastcgi_cache
With this technique, we can essentially skip the CMS entirely for front-end page requests.
Nginx has a built-in way to store the results of a PHP call, so the next time it's needed nginx can pull the stored result from memory, rather than have PHP do the work again. This is a bit like Craft's cache tag, only even more efficient.
Things to be aware of
Nginx doesn't provide a way to clear its cache when something in the CMS changes.
That ability is reserved for the commercial Nginx Plus product. However, there are two options available to those of us not wanting to pay $1,350 per year for this feature.
Option one is to manually delete the cache when we change something. As the cache is stored as files in a location you specify, you can use SSH or SFTP to delete those files when you make a change in the CMS. That works but is a bit clunky, so you could write a little script that listens on a particular URL and executes a bash script to do that for you.
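As a sketch, the deletion itself is a one-liner - this assumes the /etc/nginx-cache location we'll set up below, and the script name is made up:

#!/usr/bin/env bash
# purge-nginx-cache.sh: wipe all cached pages; nginx rebuilds them on demand
rm -rf /etc/nginx-cache/*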
Option two is to not worry about it. Instead set the cache period to something small but useful, like half an hour. That means when you make a change in your CMS it might take up to half an hour to be reflected on the front end of your site. No big deal for my use case, and likely not for most people's blogs either.
Secret option number three is to use a third-party nginx module to manage cache invalidation. I've chosen not to do this: I'm wary of third-party modules, especially ones with little documentation, and given my lack of knowledge in this area I'd rather not go down that route yet.
I'll be going with option two - letting my cache age out over a short period of time.
Setting up fastcgi_cache
The first thing we need to do is decide where in the filesystem we're going to store the cache. That folder needs to be owned by whichever user is running nginx - typically that's www-data. Create a folder wherever you want it, for example:

mkdir /etc/nginx-cache
chown www-data /etc/nginx-cache
Now we need to define a cache key-zone in nginx. This is done inside the http { ... } block, because it will then be accessible to any of the servers defined later inside server { ... } blocks.

Open your /etc/nginx/nginx.conf file and inside the http { ... } block add the following:
# Set up a cache key-zone called 'phpcache'
fastcgi_cache_path /etc/nginx-cache levels=1:2 keys_zone=phpcache:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
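In short: levels=1:2 sets the directory structure the cached files are stored under, keys_zone names the zone ('phpcache') and gives nginx 100MB of shared memory for the cache keys, and inactive=60m discards anything that hasn't been requested for an hour. The cache key combines scheme, request method, host, and URI, so each URL is cached separately.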
Now all we need to do is configure the domain we're interested in to use it. You should have an entry in your /etc/nginx/sites-available/ folder which defines your website, such as mysite.conf. Open that, and inside the server { ... } block add:

set $no_cache 0;
# Don't cache the CMS admin area
location /admin {
set $no_cache 1;
}
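If your CMS sets a session cookie for logged-in users, you may want to bypass the cache for them too. A sketch - the cookie name here is an assumption, so check what your install actually sets:

# Hypothetical: skip the cache whenever a session cookie is present
if ($http_cookie ~* "CraftSessionId") {
    set $no_cache 1;
}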
Next, modify the block you have for handling PHP files so it looks like this:
location ~ [^/]\.php(/|$) {
fastcgi_cache phpcache; # The name of the cache key-zone to use
fastcgi_cache_valid 200 30m; # What to cache: 'code 200' responses, for half an hour
fastcgi_cache_methods GET HEAD; # What to cache: only GET and HEAD requests (not POST)
add_header X-Fastcgi-Cache $upstream_cache_status; # Allow us to see if the cache was HIT, MISS, or BYPASSED inside a browser's Inspector panel
fastcgi_cache_bypass $no_cache; # Don't pull from the cache if true
fastcgi_no_cache $no_cache; # Don't save to the cache if true
# the rest of your existing stuff to handle PHP files here
}
That's it, done. You just need to reload the configuration in nginx (on Debian that's a case of running /etc/init.d/nginx reload).
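Once reloaded, you can also check the X-Fastcgi-Cache header we added from the command line (example.com is a placeholder); it should flip from MISS to HIT on the second request:

curl -sI https://example.com/ | grep -i x-fastcgi-cache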
My TTFB is now down in the 0.04 second range on any page which has been cached. That's pretty much instant.
You can learn a lot more about what the various options and parts do at the official documentation. This should be enough to get things working for you though.
Nginx as a reverse proxy
This was the first thing I tried, before realising it wasn't actually right for what I needed - my site just doesn't have enough traffic to warrant a reverse proxy approach.
A reverse proxy can do a number of things, but I was interested in using one just for caching. This is where you put a proxy cache server in front of the web server: the proxy stores cached versions of the whole web server's output, so most requests to your website never reach your web server at all - they get served by the proxy instead, which doesn't have to do any CMS processing. This setup is a lot like fastcgi_cache, except it's for entire sites and all their files, not just the PHP pages.
I realised this wasn't what I needed, but if you're running a larger site with a lot of traffic, an nginx reverse proxy could be perfect for you - and the setup is almost the same as for fastcgi_cache.
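For the curious, a minimal sketch of that approach - the backend port and cache path here are assumptions, not taken from a real config:

http {
    # Same idea as fastcgi_cache, but caching entire proxied responses
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=proxycache:100m inactive=60m;

    server {
        listen 80;
        location / {
            proxy_cache proxycache;
            proxy_cache_valid 200 30m;
            proxy_pass http://127.0.0.1:8080; # the real web server
        }
    }
}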
Certificate Installation: NGINX
Prerequisites: Concatenate the CA bundle and the certificate file which we sent you, using the following command.
> cat domain_com.crt domain_com.ca-bundle > ssl-bundle.crt

If you are using a GUI text editor (e.g. Notepad):
(i) To concatenate the certificate files into a single bundle file, first open domainname.crt and domainname.ca-bundle using any text editor.
(ii) Now copy all the content of domainname.crt and paste it at the top of the domainname.ca-bundle file.
(iii) Now save the file as 'ssl-bundle.crt'.
Note: If you have not received the 'ca-bundle' file in the ZIP that we sent you, you can download it from this article's attachments.
For our CCM (Comodo Certificate Manager) customers: since you receive multiple download links, make sure that you download the X.509 base64-encoded "Certificate Only" file as well as the Root/Intermediate "Certificate only" files. You will be presented with .cer formatted files; you can change the file extension to .crt to complete the process above.
Installation:
1. Store the bundle in the appropriate nginx ssl folder, e.g.:

> mkdir -p /etc/nginx/ssl/example_com/
> mv ssl-bundle.crt /etc/nginx/ssl/example_com/

2. Store your private key in the appropriate nginx ssl folder, e.g.:

> mv example_com.key /etc/nginx/ssl/example_com/
3. Make sure your nginx config points to the right cert file and to the private key you generated earlier:
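Using the paths from steps 1 and 2, the relevant directives would look something like this:

server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/example_com/ssl-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/example_com/example_com.key;
    ...
}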
Note: If you are using a multi-domain or wildcard certificate, you need to modify the configuration for each domain/subdomain included in the certificate: specify the domain/subdomain to secure in its own server block and refer to the same certificate files as described above.
4. OCSP Stapling Support:
Although optional, it is highly recommended to enable OCSP stapling, which will improve the SSL handshake speed of your website. Nginx has supported OCSP stapling since version 1.3.7.
In order to use OCSP stapling in nginx, you must set the following in your configuration:
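ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate <file>;
resolver <IP DNS resolver>;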
Where <file> is the location and filename (path) of the CA certificate bundle.
Note 1: For ssl_stapling and ssl_stapling_verify to work, you must ensure that all necessary intermediate and root certificates are installed.
Note 2: The resolver name may change based on your environment.
5. After making changes to your config file, check it for syntax errors before attempting to use it. The following command will check for errors:
> sudo nginx -t -c /etc/nginx/nginx.conf
6. Restart your server. Run the following command to do it:
> sudo /etc/init.d/nginx restart
7. To verify if your certificate is installed correctly, use our SSL Analyzer.
Example Virtual Host Configuration:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /path/to/signed_cert_plus_root_plus_intermediates;
    ssl_certificate_key /path/to/private_key;

    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /path/to/dhparam.pem;

    # intermediate configuration
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling: fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;

    # Verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /path/to/root_CA_cert_plus_root_plus_intermediates;

    resolver <IP DNS resolver>;

    ....
}
How to properly configure your nginx for TLS
It’s quite easy to get nginx configured to use TLS. It’s a little bit more difficult to configure it to do so properly. In this article I will try to explain what the different configuration options are and give you an example configuration that you should be able to adjust to your needs.
Nginx does a great job as a “TLS termination” server. TLS termination means that nginx is the “other” end of your TLS connection - the one your browser talks to. Establishing a TLS connection requires a handshake, which can be quite lengthy, and that is one really good reason to make your nginx server as performant as possible: your users' initial page load is directly impacted by it. This is the most critical point in time for your users - the time when a user forms their impression of your company (and not just your product). You want the first impression to be as good as possible.
0. Get TLS certificates
The rest of this walkthrough assumes that you already have your TLS certificates (or know where/how to get them).
1. Enable TLS and HTTP2
To get your nginx server to use TLS we first need to tell it to do so. To do that, add the ssl and http2 parameters to the listen directive.
server {
listen 443 ssl http2;
...
}
2. Disable SSL
Before we forget, let's disable SSL: it is very old and has some serious issues. Listing only TLS versions in ssl_protocols means SSLv3 and earlier are not accepted.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
3. Optimise cipher suites
Cipher suites are the core of TLS. This is where encryption happens.
First we need to configure nginx to tell clients that we have a preferred list of ciphers that we want to use.
ssl_prefer_server_ciphers on;
Cipher suites can have profound implications for both the performance and the security of the connection. Choosing which ones to enable or disable is a whole new game. The following is a list of good cipher suites you can start with:
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
4. DH Params
You should also specify your own Diffie-Hellman (DH) key exchange parameters. I won't go into too much detail about what DH key exchange is. What you should know is that it is a protocol which allows two parties to negotiate a secret without ever putting that secret on the wire. It is a pretty impressive piece of “artwork”.
Tell nginx to use DH params:
ssl_dhparam /etc/nginx/certs/dhparam.pem;
You can use openssl dhparam to generate the parameters (note that the key size comes last):

openssl dhparam -out /etc/nginx/certs/dhparam.pem 2048
Generate DH parameters with at least 2048 bits. If you use 4096 bits for your TLS certificate you should match it in DH parameters too.
5. Enable OCSP stapling
To have a secure connection to a server, the client needs to verify the certificate which the server presented. In order to verify that the certificate is not revoked, the client (browser) will contact the issuer of the certificate. This adds a bit more overhead to connection initialisation (and thus to our page load time).
We can tell our nginx server to get a signed message from the OCSP server and then, when initialising a connection with a client, staple it to the initial handshake. This way the client can be confident that the certificate is not revoked and does not need to explicitly ask the OCSP server.
It is also necessary to verify that the OCSP response has not been tampered with. For OCSP verification to work, the certificate of the certificate issuer, the root certificate, and all intermediate certificates should be configured as trusted using the ssl_trusted_certificate directive. For example, if you're using Let's Encrypt certificates you should download their certificate in PEM format from https://letsencrypt.org/certificates/. You can use the openssl x509 command to check who the issuer of the certificate is:

openssl x509 -in /etc/nginx/certs/example.crt -text -noout
If the issuer is Let's Encrypt's X3 intermediate, you can use the following command to download the correct certificate:
wget -O /etc/nginx/certs/lets-encrypt-x3-cross-signed.pem \
"https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem"
Now you have all the pieces needed to enable OCSP stapling on nginx. All you need to do is add the following to your server configuration:
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/certs/lets-encrypt-x3-cross-signed.pem;
6. Enable HSTS
In order to achieve the best performance and reap the benefits of HTTP2, using TLS is mandatory (browsers only support HTTP2 over an encrypted connection). HSTS is a feature which allows a server to tell clients that they should only use the secure protocol (HTTPS) to communicate with it. When a (complying) browser receives the HSTS header, it will not try to contact the server over plain HTTP for the specified period of time.
To enable HSTS add the following headers to your nginx configuration file:
add_header Strict-Transport-Security "max-age=31536000" always;
If you want to include all subdomains as well add the following line too:
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
7. Optimise SSL session cache
Creating a cache of TLS connection parameters reduces the number of handshakes and thus can improve the performance of your application. Caching is configured using the ssl_session_cache directive. The default “built-in” session cache is not optimal, as it can be used by only one worker process and can cause memory fragmentation. It is much better to use a shared cache.
Another parameter that affects the number of handshakes happening throughout the lifetime of a server is ssl_session_timeout. By default it is set to 5 minutes. You should set it to something like 4 hours; doing so will require you to increase the size of the cache (as more information will need to be stored in it).
As a reference, a 1-MB shared cache can hold approximately 4,000 sessions.
Add the following to your nginx server config to set the TLS session timeout to 4 hours and increase the size of the TLS session cache to 40MB:
server {
ssl_session_cache shared:SSL:40m;
ssl_session_timeout 4h;
}
8. Enable session tickets
Session tickets are an alternative to the session cache. With the session cache, information about the session is stored on the server. With session tickets, information about the session is given to the client. If a client has a session ticket, it can present it to the server and re-negotiation is not necessary. Set the ssl_session_tickets directive to on:

server {
    ...
    ssl_session_tickets on;
}
9. Conclusion
If you followed the steps above you should end up with a configuration like the following one:
server {
    # Enable TLS and HTTP2
    listen 443 ssl http2;

    # Use only TLS
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    # Tell clients which ciphers are available
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;

    # Use our own DH params
    ssl_dhparam /etc/nginx/certs/dhparam.pem;

    # Enable OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/certs/lets-encrypt-x3-cross-signed.pem;

    # Enable HSTS
    add_header Strict-Transport-Security "max-age=31536000" always;

    # Optimise session cache
    ssl_session_cache shared:SSL:40m;
    ssl_session_timeout 4h;

    # Enable session tickets
    ssl_session_tickets on;

    ...
}