Full Stack Web Performance is written for anyone grappling with the challenges of performance in a DevOps environment. Whether you’re a web developer, a DevOps engineer, an engineering manager or an architect, we think you’ll glean something useful from this practical how-to by Tom Barker.
We’re in the midst of a giant leap forward in software engineering and IT. Cross-functional DevOps teams are the order of the day, and Full Stack Web Performance addresses how web performance fits into this ever-changing environment. Topics in our book are organized into three high-level areas of focus in a product development group:
- Client-side – the user-facing piece of the application that generally runs on the user’s hardware
- Infrastructure – the supporting pieces of our application, commonly the CDN and cloud services
- Operations – the practices we put in place to monitor our applications’ health and alert on problems
Full Stack Web Performance also presents ways to leverage existing tools and libraries for huge payoffs. The recommendations and solutions outlined in our book can be implemented in days and weeks rather than months and years.
In Chapter 1 we discuss client-side issues. Browser makers are implementing their own performance improvements, and these new and incremental changes create fresh complexity for developers and operations teams. One way to keep up with the changes is to run synthetic performance testing, such as speed tests.
These testing tools load a site and run a battery of tests against it, using a dictionary of performance best practices as the criteria. There are many quality performance-testing tools on the market. We look at WebPageTest as an example and give you a step-by-step tutorial on how it works.
Performance-testing tools are necessary, but running them ad hoc isn’t a sustainable solution. We recommend working them into your existing continuous integration environment, and we show you a variety of ways it can be done.
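One common way to wire synthetic testing into continuous integration is a performance budget: the build fails when a metric from the test run exceeds its limit. The sketch below is illustrative only; the metric names, thresholds, and `check_performance_budget` helper are our own assumptions, not taken from the book or from any specific tool.

```python
# Hypothetical sketch: gate a CI build on synthetic test results.
# Metric names and thresholds are illustrative, not from the book.

BUDGET = {
    "first_byte_ms": 500,      # time to first byte
    "load_time_ms": 3000,      # full page load
    "total_bytes": 1_500_000,  # total page weight
}

def check_performance_budget(results, budget=BUDGET):
    """Return a list of budget violations; an empty list means the build may pass."""
    violations = []
    for metric, limit in budget.items():
        value = results.get(metric)
        if value is not None and value > limit:
            violations.append(f"{metric}: {value} exceeds budget {limit}")
    return violations

# Example: numbers as they might be parsed from a synthetic test run
results = {"first_byte_ms": 420, "load_time_ms": 3400, "total_bytes": 900_000}
for v in check_performance_budget(results):
    print(v)
```

In a CI pipeline, a nonzero number of violations would typically fail the job, making regressions visible on every commit rather than after a release.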
Accomplish Web Performance Wins via Infrastructure
In Chapter 2, we look at infrastructure performance optimizations worth implementing. We firmly believe there are significant wins you can achieve by simply leveraging your existing architecture. A content delivery network (CDN) in particular can show immediate and significant performance improvements.
A CDN is a globally distributed network used for hosting and serving data. Our book specifically discusses two features commonly available from commercial CDNs: edge caching and global traffic management.
Latency is a big concern for all websites. To avoid delays, many companies deploy multiple data centers across the country to keep things running smoothly. Proximity of your end users to the machines serving your application is key: with edge caching you can serve content from the same state or even the same city.
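At the heart of edge caching is a freshness check: the edge node serves its cached copy only while the copy is still within the lifetime the origin granted it. A minimal sketch of that check, assuming the standard HTTP `Cache-Control: max-age` directive (real CDNs honor many more directives and their own configuration):

```python
# Simplified sketch of an edge cache's freshness decision.
# Real CDNs also handle s-maxage, no-cache, ETags, and vendor settings.
def is_fresh(age_seconds, cache_control):
    """Parse max-age from a Cache-Control header and compare it to the object's age."""
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return age_seconds < int(directive.split("=", 1)[1])
    return False  # no max-age: treat the copy as stale

print(is_fresh(120, "public, max-age=3600"))   # fresh: serve from the edge
print(is_fresh(4000, "public, max-age=3600"))  # stale: revalidate at the origin
```

The practical upshot: by setting sensible `Cache-Control` headers at the origin, you decide how long the CDN may answer for you from a node near the user.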
Global traffic management (GTM) is another CDN feature that balances traffic between data centers. It routes requests automatically according to a set of criteria: availability, proximity, and performance. In this way each request is served from the data center best positioned to handle it.
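The routing criteria above can be sketched as a simple decision function: drop unavailable data centers, then prefer the nearest, then the fastest. This is a toy illustration of the idea, not how any particular GTM product works; the data center names and fields are invented.

```python
# Toy sketch of GTM routing: availability first, then proximity, then performance.
# Data center names and measurements are invented for illustration.
def route(datacenters):
    """Pick a data center: skip unavailable ones, prefer nearby, then low latency."""
    candidates = [dc for dc in datacenters if dc["available"]]
    if not candidates:
        raise RuntimeError("no available data center")
    return min(candidates, key=lambda dc: (dc["distance_km"], dc["latency_ms"]))

datacenters = [
    {"name": "us-east", "available": True,  "distance_km": 300,  "latency_ms": 20},
    {"name": "us-west", "available": True,  "distance_km": 4000, "latency_ms": 80},
    {"name": "eu-west", "available": False, "distance_km": 6000, "latency_ms": 120},
]
print(route(datacenters)["name"])  # us-east
```

Production GTM systems make this decision continuously, usually at the DNS layer, using live health checks and latency measurements rather than static numbers.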
In addition, using a cloud service provider to create an infrastructure that scales to accommodate heavy traffic helps avoid performance-killing bottlenecks. The basic architecture of a website running on a cloud platform looks very similar to traditional architecture: application nodes, the servers running your code, spread across availability zones, with a load balancer out front routing incoming traffic.
Even cloud providers go offline occasionally. We look at options to keep downtime to a minimum and not rely solely on one tool.
In this section we look at the operations needed to maintain your website’s full stack performance, specifically, how to quantify the actual user experience. How are your machines performing in the wild? How are personal devices interacting with your network? And most importantly, how do you identify, triage, and debug a production issue that is impacting your customers?
Using an application performance management (APM) tool is critical for troubleshooting performance issues. An APM agent captures metrics on the machines it is installed on and sends them to a hosted platform, which processes the data and makes it available via dashboards.
Some of the key metrics used in our New Relic dashboard example include:
- Throughput – requests served over time, measured per second or per minute
- Errors – tracking application errors is important because they surface as bad HTTP responses and skew your error rate
- Most expensive transactions – identifying the transactions that are suddenly taking much longer to respond
- Node health – monitoring CPU and memory usage on each node
- Third-party SLAs – if everything else is running smoothly, a slowdown may originate with a third-party partner; in many cases parts of your site depend on external APIs to process user input
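Two of the metrics above, throughput and error rate, fall out of a simple aggregation over request data. The sketch below shows that calculation on a minimal in-memory log; the field names and the 5xx-status error definition are our assumptions for illustration, since an APM platform computes these for you from agent data.

```python
# Sketch: computing throughput and error rate from a minimal request log.
# Field names and the 5xx error definition are illustrative assumptions.
def summarize(requests, window_seconds=60):
    """Return (requests per minute, error rate) over the given window."""
    total = len(requests)
    errors = sum(1 for r in requests if r["status"] >= 500)
    throughput_rpm = total / (window_seconds / 60)
    error_rate = errors / total if total else 0.0
    return throughput_rpm, error_rate

# 95 successes and 5 server errors observed in one minute
requests = [{"status": 200}] * 95 + [{"status": 503}] * 5
rpm, rate = summarize(requests)
print(f"{rpm:.0f} rpm, {rate:.1%} error rate")  # 100 rpm, 5.0% error rate
```

Watching these two numbers together matters: a falling error count during an outage can simply mean throughput has collapsed, which is why dashboards show the rate rather than raw error totals.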
To find out how to stay up to date on these developing DevOps issues, get your copy of Full Stack Web Performance.
Full Stack Web Performance author Tom Barker is director of Software Engineering and Development at Comcast and an adjunct professor at Philadelphia University.