
TTFB analysis and optimization

In the broadest sense, TTFB (Time To First Byte) is a metric that indicates how long it takes for the first byte (network packet) of a web page to arrive after the client sends a request.

The measurement includes the DNS lookup, the connection to the server, and the time spent waiting while the request is handled (processing, packaging, and sending the page). The term is often confused with server response time, which estimates how quickly the server answers an HTTP request in the absence of network latency. Almost everything affects TTFB: network problems and delays, the volume of incoming traffic, the web server configuration, and the volume and optimization of content (image quality; size of CSS/JS/HTML).

Obviously, you can't change all of the points above. You will hardly get a chance to improve network quality, and high traffic to a website is a bad thing only in the case of a DDoS attack. The only thing you can really influence is the backend, so let's tune Nginx.

Analysis of TTFB

The World Wide Web hosts plenty of resources devoted to checking web page loading speed. One of the most popular and respected is WebPageTest.org.

It provides comprehensive information about connection time, TTFB, TLS/SSL initialization time (if applicable), and the loading of individual page elements.

To check advanced settings, download speeds, and the entire stack of parameters, you can use the browser itself: Chrome, Firefox, and even Safari ship with suitable developer tools.

Nginx optimization

An optimal Nginx configuration has already been presented in a separate article. Here we will briefly go over the already familiar parameters and add some new ones that directly affect TTFB.


First, we need to define the number of worker processes in Nginx.

Each worker process can handle many connections at once and is bound to the physical CPU cores. If you know the exact number of cores in your server, you can specify it yourself, or trust Nginx:

worker_processes auto;

# Determine the number of worker processes automatically

In addition, you must specify the number of connections:

worker_connections 1024;

# Number of connections per worker process; values usually range from 1024 to 4096


To let the web server handle the maximum number of requests, enable the multi_accept directive, which is off by default:

multi_accept on;

# A worker process will accept all new connections at once

Note that this is useful only when large numbers of simultaneous connections arrive. If that is not the case, it makes more sense to optimize the worker processes so that they do not work in vain:

accept_mutex on;

# Worker processes will accept new connections in turn
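For reference, worker_processes belongs to the main (top-level) context of nginx.conf, while the connection-related directives above go inside the events block. A minimal sketch of how they fit together:

```nginx
# Top-level (main) context
worker_processes auto;

events {
    # Connection handling lives in the events context
    worker_connections 1024;
    multi_accept on;
    accept_mutex on;
}
```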

Both TTFB and server response time benefit from the tcp_nodelay and tcp_nopush directives:

tcp_nodelay on;
tcp_nopush on;

# Activate the tcp_nodelay and tcp_nopush directives

These two directives disable certain TCP features that were relevant in the 90s, when the Internet was just beginning to spread, but make no sense in the modern world. The first directive sends data as soon as it is available (bypassing Nagle's algorithm). The second makes Nginx send the response headers and the beginning of the file in full packets only (i.e., it enables TCP_CORK), so the browser can start rendering the web page sooner.

At first glance the two directives look contradictory, so tcp_nopush should be used together with sendfile. Packets are then filled before being sent, and sendfile itself is much faster and more efficient than the classic read + write approach. Once the packet is full, Nginx automatically disables tcp_nopush, and tcp_nodelay forces the socket to send the data. To enable sendfile:

sendfile on;

# Enable sendfile, a more efficient file transfer method than read + write

Together, these three directives reduce network load and speed up file delivery.
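Taken together, the three directives from this section sit at the http level of the configuration; a sketch:

```nginx
http {
    sendfile    on;   # kernel-level file transfer instead of read + write
    tcp_nopush  on;   # send headers and file start in full packets (TCP_CORK)
    tcp_nodelay on;   # flush the final partial packet without delay

    # ... other http-level settings
}
```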


Another important optimization concerns buffer sizes. If buffers are too small, Nginx will hit the disk frequently; if they are too big, RAM fills up quickly.

So you need to tune four directives. client_body_buffer_size and client_header_buffer_size set the buffer sizes for reading the client request body and header, respectively. client_max_body_size sets the maximum allowed size of the client request body, and large_client_header_buffers sets the maximum number and size of buffers for reading large request headers.

The optimal buffer settings will look like this:

client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;

# 10 KB for the request body, 1 KB for the header, 8 MB maximum body size, and two 1 KB buffers for large headers

Timeouts and keepalive

Proper configuration of wait times and keepalive can also significantly improve server responsiveness.

The client_body_timeout and client_header_timeout directives set the timeouts for reading the request body and header:

client_body_timeout 10;
client_header_timeout 10;

# Specifying the waiting time in seconds

Nginx can drop connections from unresponsive clients with the reset_timedout_connection directive:

reset_timedout_connection on;

# Disable timed-out connections

The keepalive_timeout directive sets how long an idle keepalive connection stays open, and keepalive_requests limits the number of requests served over one keepalive connection:

keepalive_timeout 30;
keepalive_requests 100;

# 30-second keepalive timeout and a limit of 100 requests per connection

The send_timeout directive sets the time to wait between two successive write operations when transmitting the response:

send_timeout 2;

# Nginx will wait up to 2 seconds between two write operations


Enabling caching can significantly improve server response time.

Caching methods are described in more detail in the material about caching with Nginx; here Cache-Control matters most. Nginx can instruct the client to cache rarely modified data that is used frequently on the client side. To do this, add a line to the server section of the configuration file:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ { expires 365d; }

# Target file extensions and cache lifetime

It also does not hurt to cache information about frequently used files:

open_file_cache max=10000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

# Cache metadata for up to 10,000 files, revalidated every 30 seconds

open_file_cache sets the maximum number of files whose metadata is cached and the caching time. open_file_cache_valid sets how often the cached information is revalidated, open_file_cache_min_uses sets the minimum number of accesses before a file is cached, and open_file_cache_errors enables caching of file lookup errors.


Logging

Logging affects the performance of the entire web server and, accordingly, the response time and TTFB. So the best solution is to disable the main access log and store information about critical errors only:
access_log off;
error_log /var/log/nginx/error.log crit;

# Turn off access logging; keep only critical errors
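If you still need access logs, a middle-ground option (our suggestion, not part of the original tuning) is buffered logging, which reduces disk writes without discarding data:

```nginx
# Flush log entries in 32 KB batches, or at least every 5 seconds
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;
```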

Gzip compression

The usefulness of Gzip (http://thehighload.com/post/Gzip+compression+of+js%2Fcss%2Fhtml) is difficult to overstate. Compression can significantly reduce traffic and relieve the channel, but it has a downside: the time needed to compress. So you may need to turn it off to improve TTFB and server response time.

At this stage we cannot recommend turning Gzip off, since compression improves the Time To Last Byte, i.e., the time needed for a full page load, and that is the more important parameter for the client. The adoption of HTTP/2 will greatly improve TTFB and server response time, since it has built-in header compression and multiplexing, so in the future disabling Gzip may not be as relevant as it is now.
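If you keep compression on, a conservative Gzip configuration might look like this (the values below are common choices of ours, not recommendations from the measurements above):

```nginx
gzip on;
gzip_comp_level 2;       # low compression level: less CPU time per response
gzip_min_length 1024;    # skip responses too small to benefit
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_vary on;            # add "Vary: Accept-Encoding" for intermediate caches
```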

PHP Optimization: FastCGI in Nginx

Most sites use server-side technology such as PHP, which is also important to optimize. Typically, PHP opens a file, verifies and compiles the code, then executes it. With the OPcache module, PHP can cache the compiled result for rarely modified files. And Nginx, connected to PHP via the FastCGI module, can cache the result of a PHP script and send it to the user instantly.
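The article does not show the FastCGI caching configuration itself; a minimal sketch might look like this (the cache path, zone name, and PHP-FPM socket below are placeholders to adapt):

```nginx
# http context: define an on-disk cache zone and a cache key
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:10m
                   max_size=256m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;  # adjust to your PHP-FPM socket
        fastcgi_cache phpcache;                   # use the zone defined above
        fastcgi_cache_valid 200 10m;              # cache successful responses for 10 minutes
    }
}
```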

The bottom line

Resource optimization and correct web server settings are the main factors influencing TTFB and server response time. Also, do not forget to regularly update your software to stable releases, which bring optimizations and performance improvements.

