by Oliver
17. November 2015 23:11
We've recently been doing some optimization work on Camping.info to improve user experience through faster web site load times. This post goes into the details of the optimization steps and their effect on the site's performance.

Measure, Improve, Measure again

To be confident that the changes we introduce to Camping.info actually improve the performance or perceived performance of the site, we set up an automated test harness on TeamCity using the webpagetest API wrapper node module and a custom PowerShell wrapper script around it that collects the test results and reports them to TeamCity. The following paragraphs go into some detail on the concrete steps we took to improve our users' experience.

Include external script in existing bundle

As described in Avoid Blocking Requests on External Domains, we chose to include the Bugsnag JavaScript library in our already existing script bundle. This saves one request and one DNS lookup. Comparing the start page performance before and after, the savings are humble but noticeable – the Time To Start Render drops from over 1200 ms to 1100-1200 ms, which in practice correlates with a slightly faster page appearance.

Host jQuery on your own server – or don't

Based on the previous improvement, I assumed that saving a DNS lookup alone could already help improve perceived performance. So for loading jQuery we switched from cdnjs.cloudflare.com to our own domain. It turns out, though, that this didn't have any impact on rendering or load times. This is actually a tricky optimization – it depends a lot on who your audience is and what sites they visit. Loading e.g. jQuery from an external host will either save one request, because the client's browser might have that resource cached after visiting a totally unrelated site that includes the same library, or cost your user an extra DNS lookup compared to just loading the library from your own server. The decision is up to you.

Load external javascript after window.load

A large block of potential optimization on Camping.info concerns the loading, parsing, and execution of JavaScript. Due to the organic growth of the site over the last 8 years, deferring JavaScript execution to a point in time after the page has actually rendered turns out to be a complex issue. We still have plenty of pre-loaded or inline JavaScript blocks, which is mostly due to the way ASP.NET WebForms and its UpdatePanels work. The only easy solution to dependency management for all of those code blocks was to simply load all dependencies before the HTML that refers to them. This pattern, unfortunately, has led to one large script bundle that we load in the <head> section of the page, because loading it later would mean breaking all inline script block execution. Fixing this will require significant refactoring and thorough testing.

But there still is room for improvement! All external JavaScript can safely be loaded after the page has rendered. This includes Facebook buttons, the AddThis widget, and advertisement scripts. We already had most of these scripts loading after the window onload event, but additionally deferring the loading of connect.facebook.net/en_US/fbds.js showed a further improvement on the start page: while the render start time did not decrease, the page load time dropped from around 1.8 s to 1.5 s. This is definitely a decent improvement, but please don't overrate it – most of the page's content had probably already been loaded even in the old version.
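As an illustration, deferring an external script until after onload takes only a few lines of plain JavaScript. The following is a minimal sketch of the pattern rather than our exact production code – the helper name is made up; only the fbds.js URL is the script discussed above:

// Inject a script tag only after the window load event has fired,
// so the external script never competes with the page's own assets.
function loadScriptDeferred(src) {
  var script = document.createElement('script');
  script.src = src;
  script.async = true;
  document.body.appendChild(script);
}

window.addEventListener('load', function () {
  loadScriptDeferred('//connect.facebook.net/en_US/fbds.js');
});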
But now we can at least be sure that all Facebook assets will definitely be loaded only after all of the page's own assets have been loaded. And that's good. It turns out that on a different page, the improvement after this single change is even more significant: the deferred loading of the Facebook script improves not only the page load time, but also the start render and DOM content ready times.

One script is still being loaded before the onload event – Google Analytics. I couldn't quite convince myself to defer its loading until after onload, because we use it to track some user metrics and timings, and I felt that GA might not report the same quality of results if loaded too late. Please leave your opinions on this topic in the comment section.

Specify image dimensions inline to speed up rendering

The worst grade in our PageSpeed score was for not specifying image dimensions, either in HTML or in CSS. So we went ahead and did that for the start page, which improved our score accordingly. I honestly cannot tell any difference in performance with image dimensions provided, though. There are several possible causes for this:

- maybe the images in the above-the-fold content are loaded fast enough to not delay page rendering
- maybe the page's CSS allows the browser to start rendering even without knowing the exact image dimensions
- something that I have no clue about at the moment

Loading CSS file from same domain

To speed up rendering, it also seemed a good idea to deliver our site's CSS file from the same domain as the HTML, thus saving a DNS lookup during the early stage of page rendering. The start render time indeed dropped a bit, but unfortunately the page load time increased somewhat indeterministically. It's safe to assume that the additional load time was caused by the fact that all image resources referenced in our CSS were now also being retrieved from the main domain instead of the cookieless one, which in turn delayed the loading of other image resources. For now we have reverted this change, but we know that we can further optimize the render process by serving our CSS even faster. It would probably also help a lot if we split our large CSS file into smaller ones that could be loaded per page.

Changes without performance impact

- Wrapping inline javascript blocks in $().ready()

Todos for the next performance sprint

- defer loading of as many javascript files as possible to after the onload event
- combine and minify ASP.NET AJAX's ScriptResource.axd and WebResource.axd files
- load CSS from page domain but referenced images from cookieless domain (try css-url-rewrite)
- load less CSS per page – ideally inline the CSS needed for the above-the-fold content
- use HTML and CSS instead of images for our Google map buttons – this will save a ton of requests on the search page

Where are we at now?

Happy performance tuning!
by Oliver
14. November 2015 21:40
Get your own WebPageTest server and test agent up and running in minutes, not hours!

Motivation

The original documentation can be found here and here. Unfortunately, it's a bit vague in some parts, especially if you don't set up infrastructural pieces and cloud server instances on a daily basis. So here's a how-to guide to get you up and running as fast as possible.

Infrastructure overview

To run web page tests against your own private instance, we need:

- a WebPageTest server instance (the master)
- one or more WebPageTest test agents (the clients)

The master receives test jobs and delegates them to one of the clients. You can run tests from the web interface or through the API using the webpagetest node module. You might want to think about where in the world you want to spin up those virtual machines. The WPT server (master) can really be hosted anywhere you want, but the test agents' (clients') location should be chosen consciously, because their distance to the tested site's server will play a role in the results you will see later during testing.

How to set up the master (WPT server)

You need an Amazon AWS account to set this up quickly. If you haven't got one, you either quit here and set up your own server with the WebPageTest stack, or you go and create one.

Now, go to your AWS dashboard, to Instances –> Instances, and click "Launch Instance". On the next screen, go to Community AMIs, enter one of the ids that can be found here – I chose ami-22cefd3f (eu-central-1) – and hit "Select". In step 2, you can choose a t2.micro instance. The WPT server does not need to be high performance – it only delegates test execution and gathers the results. It's when setting up the client (test agent) that we'll have to pay attention to the performance of the instance.

Now keep clicking Next until you reach "Step 6. Configure Security Group". Here we need to add a firewall rule that allows us to access our WPT server (master) through HTTP, otherwise no testing will be possible. Giving the security group a more descriptive name and description is optional but nice. In step 7, review your settings if you want, then hit "Launch". AWS will now want to assign an (ssh) key pair to this instance. In case you have an existing key pair, you can re-use it. If you're doing this for the first time, you won't have any existing key pairs to choose from and will have to create a new one. The "Launch Instances" button will activate only after you've downloaded your private key. From there you get to the Instances overview that was empty at the beginning, where you'll find the public IP address and DNS entry of your instance.

Congratulations, you've successfully completed the setup of the WPT server (master)! If you now open http://your.instance.ip you should see the WebPageTest UI.

To log into your instance via SSH, follow one of the guides here. In short: either use ssh from the command line, available on all linuxes and even on Windows if you have Git installed (e.g. in C:\Program Files\Git\usr\bin):

ssh -i wpt-server.pem ubuntu@[public-ip|public-dns]

Or, on Windows, use PuTTY. In that case you'll first have to generate a PuTTY-compatible private key file from your *.pem file, and then you can connect through PuTTY. Here's how to do that.

How to set up the client (WPT test agent)

Now we need to set up at least one test agent to actually execute some tests.
There's a long list of pre-configured, regularly updated Windows AMIs in the documentation, with all the software installed that's needed to execute tests. To get started quickly, pick one that contains all major browsers and is located in your favorite region. In this guide, we're going to use ami-54291f49 (IE11/Chrome/Firefox/Safari) in region "eu-central (Frankfurt)".

Basically, we repeat the steps from the master setup, but now using the test agent AMI. In step 2, when choosing an instance type, we now have to ensure that our agent will deliver consistent results. This performance review recommends the following choices (prices will vary by region; the ones displayed here were for US East N. Virginia), quoting:

- If you're running just a couple tests per hour, on small HTTP sites, a t2.micro will be okay ($13/month)
- If you're running just a couple tests per hour, on large or secure sites, you'll need to use a t2.medium ($52/month)
- If you're running lots of tests per hour, you can't use t2's – the most efficient agent will be a c3.large ($135/month)

In step 3, we have to configure our test agent with the following information:

- where to find the WPT server (master): use the public IP address or DNS name
- what the location (name) of this agent is: a string used in the locations.ini of the master

To be honest, I haven't quite wrapped my head around the auto-scaling feature of WPT. That's why we set up a single location ("first") manually that this client will be identified with. In the user data field under Advanced Details we enter:

wpt_server=52.29.your.ip
wpt_location=first

Now, either click your way through the remaining steps or jump to "Review and Launch" and launch your test agent instance. The key pair dialog will pop up again, and now you can choose your existing key "wpt-server" to assign to that instance. You won't use it to connect to the instance anyway, because the default connection type to a Windows instance is RDP, for which a firewall rule was automatically added in step 6. After launching, a link will be available with instructions on how to connect to that Windows instance, but you shouldn't need to do that.

Connecting master and client

One step is left: we have to configure the master to know which test agents it can use. This part was actually one of the most tedious bits of the setup, because juggling several configuration files with lots of options and entries to make them do what you want is never easy. For the manual management of test agents, we need to do the following:

Log into the master, e.g. ssh -i wpt-server.pem ubuntu@pu.bl.ic.ip
Go to the folder /var/www/webpagetest/www/settings/
Edit locations.ini to contain these blocks (sudo nano locations.ini):

[locations]
1=first
default=first
[first]
1=first_wptdriver
2=first_ie
label="My first agent"
default=first_wptdriver
[first_wptdriver]
browser=Chrome,Firefox,Safari
[first_ie]
browser=IE 11
In settings.ini, at the very end, set ec2_locations=0 to hide the predefined EC2 locations from the location dropdown in the browser.
Restart NGINX: sudo service nginx restart
Now, if you go to http://your.public.ip again, you'll see "My first agent" in the location dropdown and "Chrome", "Firefox", and "Safari (Windows)" in the browser dropdown. I didn't try to find out how to show "IE 11" as well, but at this moment I didn't care. (You might have to wait a few moments before the location lists update after the NGINX restart.)
You can now run your first test!
After some 10-15 seconds you should see the test running, and a few moments later the first results should show. Congratulations!
Automating tests via the WebPageTest API
If you've tried to run WebPageTests in an automated way, you'll without a doubt have found the webpagetest node module. With your private server and test agent set up, you'll now need to dispatch your tests like so:
webpagetest test http://my.example.com \
  --server http://<master_ip> \
  --key <api_key> \
  --location first_wptdriver:Chrome
The location argument refers to the definitions in the locations.ini file. The API key can be found in the keys.ini file on the master.
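If you'd rather dispatch tests programmatically than through the CLI, the webpagetest module also exposes a JavaScript API. Here's a minimal sketch based on its documented runTest options – the server address, API key, and tested URL are the same placeholders as above:

var WebPageTest = require('webpagetest');

// Point the client at the private master instead of the public instance.
var wpt = new WebPageTest('http://<master_ip>', '<api_key>');

wpt.runTest('http://my.example.com', {
  location: 'first_wptdriver:Chrome', // same location syntax as the CLI
  pollResults: 5                      // poll every 5 seconds until the test completes
}, function (err, result) {
  if (err) return console.error(err);
  console.log('First view load time:', result.data.median.firstView.loadTime, 'ms');
});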
We run our test from within TeamCity using a custom script, but that's a topic for another post!
Happy WebPageTesting!