What Is Network Latency?
Measured in milliseconds, network latency is the time it takes for a site visitor to connect to your webserver, for their request to be processed and for the server to begin sending data. Several factors impact latency, including:
- Server performance – There is a correlation between server performance metrics—including server speed, hardware used (e.g., HDD/SSD drives) and available RAM—and your site's latency.
- Round-trips – A round-trip is the journey taken by an object request (e.g., HTML files, stylesheets and script files) to your webserver and back to the user. Round-trip time (RTT) is primarily affected by the distance between webserver and user, as well as the number of intermediate points through which a connection travels.
A slight change in latency can have a perceivable effect on page load time and user experience (UX). This is especially true for commercial websites (e.g., e-commerce sites), where high latency can significantly degrade overall performance and therefore UX.
Measuring latency is typically done using one of the following methods:
Round-trip time (RTT) – Calculated using a ping, a command-line tool that bounces a user request off a server and measures how long it takes to return to the user's device.
In most cases, the ping rate gives a relatively accurate assessment of latency. Sometimes, however, throttling and congestion create a difference between the ping rate and the effective latency of a web server.
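ICMP pings usually require elevated privileges, but the same round-trip idea can be approximated from ordinary user space by timing a TCP handshake. The sketch below (hostname and port are placeholders, not from the original text) illustrates the measurement:

```python
import socket
import time

def estimate_rtt(host, port=443, samples=3):
    """Approximate RTT by timing the TCP three-way handshake.

    A real ping uses ICMP echo packets; timing an unprivileged TCP
    connect is a close proxy for the same network round trip.
    """
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established: one round trip completed
        times.append((time.perf_counter() - start) * 1000)  # milliseconds
    return min(times)  # the minimum filters out transient congestion spikes

# estimate_rtt("example.com")  -> RTT estimate in milliseconds
```

Taking the minimum of several samples mirrors what ping reports as its best-case time, which is the figure least distorted by momentary throttling or congestion.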
Actual/perceived time to first byte (TTFB) – TTFB is the time it takes for the first byte of a response to reach a user's browser after it sends a request to your server. There are two measures of TTFB:
- Actual TTFB – The time it takes for a user’s browser to receive the first byte of data from a server. Actual TTFB is mostly impacted by network speed and connectivity.
- Perceived TTFB – The time it takes for a user to notice that the page has started to load. This is an important SEO and UX metric and is mostly impacted by the time it takes for an HTML file to be parsed.
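Actual TTFB can be measured directly with the Python standard library. The sketch below times the gap between sending a request and receiving the first response byte; the hostname, port and TLS flag are illustrative parameters, not part of the original text:

```python
import http.client
import time

def measure_ttfb(host, path="/", port=443, use_tls=True):
    """Time from sending an HTTP request until the first response byte."""
    cls = http.client.HTTPSConnection if use_tls else http.client.HTTPConnection
    conn = cls(host, port, timeout=10)
    try:
        conn.connect()                # handshake done up front, so the timer
        start = time.perf_counter()   # below isolates request -> first byte
        conn.request("GET", path)
        resp = conn.getresponse()     # returns once the status line arrives
        resp.read(1)                  # consume the first body byte
        return (time.perf_counter() - start) * 1000  # milliseconds
    finally:
        conn.close()
```

Because the TCP/TLS handshake is completed before the timer starts, the returned figure reflects server processing plus one network round trip; including the handshake in the timed span would give a number closer to what browsers report.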
A number of techniques can improve actual and perceived TTFB, including:
- TCP connection pre-pooling – Reduces connection times by preemptively opening multiple stand-by connections to handle subsequent requests.
- Progressive image rendering – The loading of pixelated versions of an image, which are gradually replaced by higher resolution variants. This gives your user the impression that a page is loading more quickly than it actually is.
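TCP pre-pooling can be sketched as a pool of stand-by connections that are dialed ahead of time, so a later request skips the handshake delay entirely. This is a minimal illustration, not any particular server's implementation:

```python
import socket
from queue import Queue, Empty

class ConnectionPool:
    """Minimal sketch of TCP connection pre-pooling: open stand-by
    connections ahead of time so subsequent requests avoid handshake latency."""

    def __init__(self, host, port, size=4):
        self.host, self.port = host, port
        self.pool = Queue()
        for _ in range(size):  # pre-open stand-by connections
            self.pool.put(socket.create_connection((host, port)))

    def acquire(self):
        """Hand out a warm connection, or dial a new one if the pool is dry."""
        try:
            return self.pool.get_nowait()
        except Empty:
            return socket.create_connection((self.host, self.port))

    def release(self, conn):
        self.pool.put(conn)  # return the connection for reuse
```

Production pools also handle stale connections, timeouts and thread safety beyond the `Queue` used here; the point is only that the handshake cost is paid before the request arrives.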
Using CDNs to Reduce Latency
Imperva and other CDNs can be used to reduce your website’s latency, improving overall site performance and UX. Among other methods, this is done through:
Content caching – CDNs cache and compress mirror versions of your web pages, which are then stored in strategically placed data centers. Content is then delivered to users based on their geolocation, thereby reducing round trip times and latency.
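The core of geolocation-based delivery is simply serving each user from the closest edge location. The sketch below uses hypothetical data center names and coordinates (none appear in the original text) and the haversine great-circle formula to pick the nearest one:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical edge data centers: name -> (latitude, longitude)
EDGE_NODES = {
    "frankfurt": (50.11, 8.68),
    "virginia": (38.95, -77.45),
    "singapore": (1.35, 103.82),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_edge(user_location):
    """Route the user to the geographically closest cached copy."""
    return min(EDGE_NODES,
               key=lambda name: haversine_km(user_location, EDGE_NODES[name]))
```

Real CDNs route on network topology (anycast, BGP paths) rather than raw geography, but geographic distance is a reasonable first approximation of round-trip time.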
Connection optimization – CDNs optimize connections between users and origin servers through session reuse, TCP pre-pooling and network peering. Premium CDNs speed up communication further by routing traffic through a tier 1 network backbone with a minimal number of hops.
In addition to reducing latency, CDNs improve your site’s page load times through front-end optimization (FEO) techniques such as minification, file compression and image optimization.
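Minification and file compression can be illustrated in a few lines. The CSS rules and the regex-based minifier below are a deliberately naive sketch (a production minifier is far more careful), paired with gzip as the file-compression step:

```python
import gzip
import re

def minify_css(css):
    """Naive CSS minification sketch: strip comments, collapse whitespace."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # remove comments
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)     # tighten around punctuation
    return re.sub(r"\s+", " ", css).strip()

css = """
/* header styles */
h1 {
    color: #333;
    margin: 0;
}
"""
small = minify_css(css)               # fewer bytes before the wire
packed = gzip.compress(small.encode())  # file compression on top; pays off
                                        # on real-sized assets
```

Minification removes bytes the browser never needed, while compression shrinks what remains in transit; the two techniques stack, which is why CDNs apply both.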