Improving Web Site Performance On Camping.Info

by Oliver 17. November 2015 23:11

We've recently been doing some optimization work on Camping.info to improve user experience through faster web site load times. This post goes into the details of the optimization steps and their effect on the site's performance.

Measure, Improve, Measure again

To be confident that the changes we introduce to Camping.info actually improve the performance or perceived performance of the site, we set up an automated test harness on TeamCity using the webpagetest API wrapper node module and a custom powershell wrapper script around it which collects the test results and reports them to TeamCity. The following paragraphs go into some detail on the concrete steps we took to improve our users' experience.

Include external script in existing bundle

As described in Avoid Blocking Requests on External Domains, we chose to include the Bugsnag javascript library in our already existing script bundle. This saves one request and one DNS lookup. Here's a look at the start page performance before and after: the savings are humble but noticeable – the Time To Start Render drops from >1200 ms to 1100-1200 ms, which in practice will correlate with a slightly faster page appearance.

Host jQuery on your own server – or don't

Based on the previous improvement I assumed that saving a DNS lookup alone could already help in improving perceived performance. So for loading jQuery we switched from cdnjs.cloudflare.com to our own domain. It turns out, though, that this didn't have any impact on rendering or load times. This is actually a tricky optimization – it depends a lot on who your audience is and what sites they visit. Loading e.g. jQuery from an external host will either save one request, because the client's browser might have that resource cached after visiting a totally unrelated site that includes the same library, or your user will pay for an extra DNS lookup as compared to just loading the library from your own server. The decision is up to you.

Load external javascript after window.load

A large block of potential optimization on Camping.info actually concerns the loading, parsing, and execution of javascript. Due to the organic growth of the site over the last 8 years, deferring javascript execution to a point in time after the page has actually rendered turns out to be a complex issue. We still have plenty of pre-loaded or inline javascript blocks, which is mostly due to the way ASP.NET WebForms and its UpdatePanels work. The only easy solution to dependency management for all of those code blocks was to simply load all dependencies before the HTML that refers to them. This pattern, unfortunately, has led to one large script bundle that we load in the <head> section of the page, because loading it later would mean breaking all inline script block execution. Fixing this will require significant refactoring and thorough testing.

But there still is room for improvement! All external javascript can safely be loaded after the page has rendered. This includes Facebook buttons, the AddThis widget, and advertisement scripts. We already had most of these scripts loading after the window onload event, but additionally deferring the loading of connect.facebook.net/en_US/fbds.js showed the following improvement on the start page: while the render start time did not decrease, the page load time decreased from around 1.8s to 1.5s. This is definitely a decent improvement, but please don't overrate it – most of the page's content had probably already been loaded even in the old version.
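Deferring such a script boils down to injecting its <script> tag only once window.load has fired. Here's a minimal sketch of that pattern (the attachEvent branch covers IE8 and below) – an illustration, not our exact production code:

    function loadScriptDeferred(src) {
        // create the script tag only after the page's own assets are done
        var s = document.createElement('script');
        s.src = src;
        s.async = true;
        document.getElementsByTagName('head')[0].appendChild(s);
    }

    function onWindowLoad(fn) {
        if (window.addEventListener) window.addEventListener('load', fn, false);
        else window.attachEvent('onload', fn); // IE8 and below
    }

    onWindowLoad(function () {
        loadScriptDeferred('//connect.facebook.net/en_US/fbds.js');
    });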
But now we can at least be sure that all Facebook assets will definitely be loaded only after all of the page's own assets have been loaded. And that's good. It turns out that on a different page, the improvement after this single change is actually even more significant: here we can see that the deferred loading of the Facebook script improves not only the page load time, but also the start render and DOM content ready times.

One script is still being loaded before the onload event – Google Analytics. I couldn't quite convince myself to defer its loading until after onload, because we use it to track some user metrics and timings, and I felt that GA might not report the same quality of results if loaded too late. Please leave your opinions on this topic in the comment section.

Specify image dimensions inline to speed up rendering

The worst grade in our PageSpeed score was actually for not specifying image dimensions, neither in HTML nor in CSS. So we went ahead and did that for the start page. Here's how that improved our score: I honestly cannot tell any difference in performance with image dimensions provided (a small markup sketch follows at the end of this post). There are several possible causes for this:

- maybe the images in the above-the-fold content are loaded fast enough not to delay page rendering
- maybe the page's CSS allows the browser to start rendering even without knowing the exact image dimensions
- something that I have no clue about at the moment

Loading CSS file from same domain

To speed up rendering it also seemed to be a good idea to deliver our site's CSS file from the same domain as the HTML, thus saving a DNS lookup during the early stage of page rendering. Actually, the start render time dropped a bit by doing that, but unfortunately the page load time increased a bit indeterministically: it's safe to assume that the additional load time was caused by the fact that all image resources referenced in our CSS were now also being retrieved from the main domain instead of the cookieless one, which in turn delayed the loading of other image resources. For now we reverted this change, but we know that we can further optimize the render process by serving our CSS even faster. It would probably also help a lot if we split our large CSS file into smaller ones that could be loaded per page.

Changes without performance impact

- Wrapping inline javascript blocks in $().ready()

Todos for the next performance sprint

- defer loading of as many javascript files as possible to after the onload event
- combine and minify ASP.NET AJAX's ScriptResource.axd and WebResource.axd files
- load CSS from the page domain but referenced images from the cookieless domain (try css-url-rewrite)
- load less CSS per page – ideally inline the CSS needed for the above-the-fold content
- use HTML and CSS instead of images for our Google map buttons – this will save a ton of requests on the search page

Where are we at now?

Happy performance tuning!
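For reference, here's what the image dimensions fix from above looks like in markup – a minimal sketch with made-up file names and sizes:

    <!-- inline attributes let the browser reserve the layout box before the image arrives -->
    <img src="/images/teaser.jpg" width="120" height="90" alt="Campsite teaser" />

    <!-- or the same effect via CSS -->
    <style>
        .teaser-img { width: 120px; height: 90px; }
    </style>
    <img src="/images/teaser.jpg" class="teaser-img" alt="Campsite teaser" />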

Page Load Performance – Avoid Blocking Requests on External Domains

by Oliver 6. November 2015 21:32

This week we started to look into the page load performance at Camping.Info as well as our discoverize portals. After some initial testing and measuring, we came up with a list of actions that should all speed up the user-perceived page load times.

The Problem

Today we'll take a look at this one request: http://d2wy8f7a9ursnm.cloudfront.net/bugsnag-2.min.js. For your info, Bugsnag is an exception tracing and management solution that I can seriously recommend to have a look at. Anyway, in their docs the Bugsnag team suggests this: "Include bugsnag.js from our CDN in the <head> tag of your website, before any other <script> tags." That's what we initially did. It turns out, though, that the request for the bugsnag javascript library is quite costly, especially looking at the DNS lookup time of 265ms. Here's a screenshot from a waterfall chart by GTmetrix: that's over half a second for a script of less than 3kB in size! If you have a look at the request for WebResource.axd?d= three lines below, you'll see that that resource was loaded faster than the DNS lookup for bugsnag took.

Improve, Improve

So let's just load the bugsnag library from our own server and save that longish DNS lookup. But wait, we can do even better than this! We already load a bunch of javascript files as a bundle inside master_CD8711… (using the great SquishIt library, by the way), so we'll just prepend a copy of bugsnag to that bundle and save a whole request altogether (a sketch of such a bundle definition follows at the end of this post)! Now, that's great. And that's exactly what the crew at Bugsnag recommends for advanced usages: "If you'd like to avoid an extra blocking request, you can include the javascript in your asset compilation process so that it is inlined into your existing script files. The only thing to be sure of is that Bugsnag is included before your onload handlers run. This is so that we can report stacktraces reliably."

Disclaimer

There's one drawback to this solution: hosting your own version, you might not get the latest and greatest bits from Bugsnag. I've quickly brainstormed how to fix this issue, and one way to guarantee a fresh (enough) version would be to check for the current version during the deployment process on your continuous integration server and throw an error if it's newer than the one that currently resides in your project. Also, this is just one of several fixes to noticeably improve page load performance – we have bigger fish to catch looking at the bars up there! Now, let's get back to performance tuning! Oliver
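For illustration, prepending the script to a SquishIt bundle could look roughly like this – the file names and the output path are made up, not our actual bundle definition:

    // using SquishIt.Framework;
    // renders a <script> tag pointing to the combined, minified bundle
    string scriptTag = Bundle.JavaScript()
        .Add("~/Scripts/bugsnag-2.min.js")  // local copy – saves the CDN request and its DNS lookup
        .Add("~/Scripts/jquery.plugins.js")
        .Add("~/Scripts/site.js")
        .Render("~/Scripts/combined_#.js"); // '#' is replaced by a content hash for cache busting

The resulting tag is then written into the <head> of the master page, so Bugsnag is parsed before any onload handlers run.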

Setting up NGINX as Reverse Proxy for Camping.Info

by Oliver 24. June 2013 23:08

worker_processes 1; # http://nginx.org/en/docs/ngx_core_module.html#worker_processes

Thanks to http://www.theunixtips.com/how-to-find-number-of-cpus-on-unix-system for helping me find out how many processor cores our VPS is using.

Fixing the painful "24: Too many open files" error in NGINX

When this error appears in your /var/etc/nginx/error.log file, you really want to do something about it. It basically tells you that some of your users are not being served content but instead receive HTTP 500 errors generated by NGINX. Not a good thing! To investigate the open file descriptors on your linux box (we're running NGINX on Ubuntu 12.04), I followed advice from this post: Linux: Find Out How Many File Descriptors Are Being Used (a couple of handy one-liners follow at the end of this post). Then, to fix the low limits I found – 1024 and 4096 for the soft and hard limits, respectively – the post Set ulimit parameters on ubuntu provided a good walkthrough to changing those limits persistently, i.e. even after a reboot. But I somehow had the feeling that in the case of NGINX there had to be a simpler solution.

Turns out, there is. Let me introduce you to: worker_rlimit_nofile. This thread in the NGINX forums contained the hint I needed: Re: Handling nginx's too many open files even I have the correct ulimit. I had actually posted to that thread and received the helpful answer over a month ago, but I somehow hadn't got around to implementing it. Until today. So here's what you need to do: set worker_rlimit_nofile (in the main context) to something significant (I used 65536 = 2^16) instead of the default 1024 soft and 4096 hard limit. Also set worker_connections to the same number to allow for a lot of parallel connections, both from clients and to upstream servers. Your nginx.conf file should now contain these lines:

    user www-data;
    worker_processes 1;
    worker_rlimit_nofile 65536;
    pid /var/run/nginx.pid;

    events {
        worker_connections 65536; ## default: 1024
        # multi_accept on;
    }

Reload NGINX after configuration change

In a shell, tell NGINX to reload the configuration. You don't need to restart the whole process to get a configuration change live. Just run:

    /usr/sbin/nginx -s reload

or

    service nginx reload

Now enjoy thousands and thousands of parallel connections to your proxy server :-)
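For a quick before-and-after check, these one-liners (standard Linux tooling, nothing NGINX-specific) show the core count and the limits currently in effect:

    # number of processor cores – what worker_processes should match
    grep -c ^processor /proc/cpuinfo

    # current soft and hard limits for open file descriptors
    ulimit -Sn
    ulimit -Hn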

Optimize Images for Your Website

by Oliver 17. June 2013 13:59

This is just a short post to draw your attention to a sweet tool I've just discovered: PNGGauntlet. It runs on Windows using the .NET 4 framework and is as easy to use as you could possibly wish. Also: it's completely free to use.

Convert Your Existing PNGs

For starters, we'll just convert some existing PNGs – can't really do any harm with that. In the Open File dialog, there's an option to filter for only .png files. You can choose many of them at once: if you provide an Output directory, the optimized files will be written to that destination. But: the tool also has the option to overwrite the original files, which is awesome if you use some kind of source control (and thus have a backup) and just want to get the job done. During my first run, using the 8 processing threads my CPU has to offer, I got savings from 3% to 27%. PNGGauntlet also tells me that in total I saved 4.52 KB. If those were all the images on your web site, that would be a not so bad improvement, especially when you get it by investing about 2 min of your time and no extra expenses!

Real Savings

Running PNGGauntlet on the sprites that we use for Camping.Info, we were really surprised: out of 172 KB it saved us over 31%, a whole 54 KB! Now that's an improvement that will already be noticeable on a slightly slower connection. We'll definitely check the rest of our images for more savings.

Convert Other Image Formats

You can also choose to change your images' format to PNG if you're in the mood. I tried converting all GIFs in the Orchard CMS Admin theme to PNGs and went from a total of 24 KB for 20 files to less than 17 KB with no loss of quality – an over 30% saving! Just beware that you'll need to change the file references in your project to pick up the new PNGs.

Roundup

Easy, fast and cheap (as in free) image optimization doesn't have to be magic anymore – today anyone can do it. Check out PNGGauntlet to try for yourself. There's really no excuse not to!

Asynchronously posting back a form by hitting the Enter key – in ASP.NET WebForms using jQuery

by Oliver 2. August 2012 15:19

On Camping.Info, we lately had a user report that hitting the enter key while in the login form would redirect them to the search page instead of logging them in. This happened when the focus was still on the password text box, because the “Search” button was the first input element of type submit on the page (inside the single <form> element that WebForms allows for) and would therefore handle the enter key press event by default. To improve the user experience, we set out to find a way to post back the correct UpdatePanel – the one which currently has the focus (or one of its child controls).

Using an ASP.NET Panel and the DefaultButton property

There is a built-in solution to this problem if you’re willing to slightly change your markup and add a hidden <asp:Button> element to the part of the page you want to asynchronously update. You need to wrap your existing html into an <asp:Panel> element and set its DefaultButton property to the ID of that hidden button. This would look something like this:

    <div class="form21">
        <asp:Panel runat="server" DefaultButton="btnHiddenRegister">
            <asp:TextBox ID="txtEmail" runat="server" />
            <asp:TextBox ID="txtPassword" runat="server" TextMode="Password" />
            <asp:LinkButton ID="btnRegister" runat="server">Register</asp:LinkButton>
            <asp:Button ID="btnHiddenRegister" runat="server" style="display:none" />
        </asp:Panel>
    </div>

Of course, you need to hook up the code-behind Click handler for the btnRegister LinkButton to the hidden Button element as well to make this work. The proposed solution will instead have html like the following:

    <div class="form21 jq-form">
        <asp:TextBox ID="txtEmail" runat="server" />
        <asp:TextBox ID="txtPassword" runat="server" TextMode="Password" />
        <asp:LinkButton ID="btnRegister" runat="server" CssClass="jq-form-submit">Register</asp:LinkButton>
    </div>

Let’s see how we get this to work.

Using jQuery

On stackoverflow.com, I quickly found this post about a jQuery replacement for DefaultButton or Link, where this answer points to the original solution from March 2009 proposed by this post on fixing the enter key in ASP.NET with jQuery. I’ve improved on the proposed answer in a few ways, so this is what this post is about.

The original solution

    $(document).ready(function () {
        var $btn = $('.form_submit');
        var $form = $btn.parents('.form');

        $form.keypress(function (e) {
            if (e.which == 13 && e.target.type != 'textarea') {
                if ($btn[0].type == 'submit')
                    $btn[0].click();
                else
                    eval($btn[0].href);
                return false;
            }
        });
    });

Remarks: this works only if the content is present at the time the script executes, and it works only for a single pair of form and form-submit elements.

Our improved solution

    $('body')
        .off('keypress.enter')
        .on('keypress.enter', function (e) {
            if (e.which == 13 && e.target.type != 'textarea') {
                var $btn = $(e.target).closest('.jq-form').find('.jq-form-submit');
                if ($btn.length > 0) {
                    if ($btn[0].type == 'submit') {
                        $btn[0].click();
                    }
                    else {
                        eval($btn[0].href);
                    }
                    return false;
                }
            }
        });

Improvements

The first thing to improve is the registration of the keypress event handler. Since Camping.Info is quite a dynamic site with lots of asynchronous postbacks to the server (mostly using WebForms’ UpdatePanels), a lot of the javascript code we need on our page won’t execute properly as is, because it expects the elements it works with to be present at the time of the script’s execution.
That’s also the case with the original solution: both the submit button element with class .form_submit and the pseudo-form element .form (I’ve called it a pseudo-form element since it’s just a parent element of any type marked with the class .form) must be present when the script executes, otherwise nothing will happen. That’s why – using jQuery 1.7 syntax – we register the keypress event handler NOT on the pseudo-form element but directly on the body using on(). We also namespace the event by appending .enter to its name keypress, so that we can safely remove it (using off()) in case we call the script a second time during a postback, to prevent it from being attached twice (see the sketch at the end of this post). Please note that at this point neither the pseudo-form nor the form submit element needs to be present on the page for this to work.

Once the enter key has been pressed anywhere on the page and we’re inside the event handler, we check for a pseudo-form surrounding the element that received the enter key press, and inside it we look for a submit element. [I’ve also changed the class names of those elements following our internal convention of prefixing everything jQuery-related with jq- to set it apart from the CSS class names our designer works with.] Only if we’ve found both of those elements do we proceed with either clicking the element, if it really is a button, or executing the javascript code that ASP.NET rendered into our LinkButton’s href attribute, which asynchronously posts back to the server for some update to our page. A subtle but important detail here is to return false (and thus prevent the event from being processed any further) only in case we actually found the pseudo-form and submit elements, so that in the opposite case the default behavior of the page is preserved.

Conclusion

The solution presented here works equally well with static and dynamically loaded page content, and for any number of forms on your page. It replaces the solution ASP.NET WebForms provides with a Panel and its DefaultButton property, and it also results in cleaner markup. Happy Coding!
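As a footnote to the re-registration point above: with ASP.NET AJAX, the attachment code can be re-run after every partial postback via the PageRequestManager. A minimal sketch – registerEnterKeyHandler is a hypothetical wrapper around the $('body').off().on() code from above:

    function registerEnterKeyHandler() {
        // the $('body').off('keypress.enter').on('keypress.enter', ...) code from above
    }

    $(document).ready(registerEnterKeyHandler);

    // re-attach after every asynchronous (UpdatePanel) postback;
    // the off() call inside the handler registration keeps this idempotent
    Sys.WebForms.PageRequestManager.getInstance().add_endRequest(registerEnterKeyHandler);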

Rendering problem with flot 0.7 and excanvas in IE8

by Oliver 8. September 2011 02:41

On Camping.info, we use flot 0.7 with jQuery and asynchronous updates for internal click statistics visualization. Unfortunately, we’ve been having some trouble with our flot graph – but only in IE8 and only sometimes! This has been really annoying, especially since we’re expecting usage of this graph to grow and IE8 has a user share of 25%. The problem is that after fetching the graph data from the server, the graph sometimes won’t appear on the canvas, although it seems to be there, as the yellow tooltip suggests when hovering over the graph line. [screenshots: the broken vs. the intact graph]

We were (are?) having a hard time reproducing this bug on our staging system, but it would appear from time to time. On our live system we would get it quite often, although a page refresh often helped. Searching the web for flot+IE8 you’ll get a few results, most of which point to the “flot-graphs” Google Group. A search on stackoverflow.com led me to an interesting post called “IE8 is sometimes displaying my page correctly, sometimes not”, which seemed interesting enough to check out. There, they talk about a problem with the display:inline-block CSS rule and that it is error-prone in IE<=8. So I checked the source of excanvas.min.js, which is responsible for providing a canvas element to flot in IE browsers below version 9 (see the include sketch at the end of this post), and sure enough I found this:

    a.cssText="canvas{display:inline-block;overflow:hidden;text-align:left;width:300px;height:150px}"

The fix proposed in this answer was to use zoom:1 to force elements with display:inline-block to “have a layout” in IE (if you really want to know what that means, here’s more background info). Well, I tried that – to no avail. The next hint came from a post in the flot-graphs Google Group and proposed using flashcanvas instead of excanvas as the canvas provider for flot. Oh well, we haven’t got anything to lose, right? I replaced the script include and saw – exactly the same thing as before: nothing! (Check out the IE developer toolbar on the right – there you can see the Flash object inside the canvas element that was generated by flashcanvas.)

What was left? Oh, I hadn’t tried to check for an updated version of excanvas yet – so I went and did that. The latest download version dates back to March 2009, but there has been some activity since then, so I decided to go for the latest source, which is also already a year and a half old but still a whole year younger than the last official download. I replaced the file and haven’t had any problem so far! We will be keeping an eye on this, but for now my hopes are high that this update might have finally helped. Happy coding!
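For context, excanvas is typically wired up via an IE conditional comment, so only browsers below IE9 pay for the extra script – a sketch with a hypothetical path:

    <!--[if lt IE 9]>
        <script src="/Scripts/excanvas.min.js"></script>
    <![endif]-->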

Missing Index: CREATE NONCLUSTERED INDEX

by Oliver 17. August 2011 18:28

Lately, we encountered a problem with the speed of our search on www.camping.info for a certain set of search criteria. It sometimes took more than a few seconds before the updated results were shown. This looked very much like a database problem, so I went to investigate the offending queries using the wonderful NHibernate Profiler and found a very slow query. So I copied the long-running query into a new query window in SSMS (SQL Server Management Studio). Since we use MS SQL Express on our production servers, we don’t get the advanced database tuning advisor features of the full edition. Still, when you right click on any query window you’ll see the option “Include Actual Execution Plan” [screenshots: the context menu in the full and the Express edition], which already offers a lot of detail, as you can see in the following screenshot. When you right click on the execution plan you’ll see the option “Missing Index Details…” – there you get a CREATE INDEX statement that is ready to use once you give a name to the new index (see the sketch at the end of this post). I did this for three indexes, and now we have this for the same query: 115 ms instead of 1993 ms – that’s an improvement of 94%! Even if DB queries lasting longer than 50-60 ms don’t really count as fast, we’ve still got quite an improvement here.

Well, that’s it. Using a well-priced tool like NHibernate Profiler to identify slow queries and the Execution Plan feature of SSMS, we were able to get quite a performance improvement in a short time. Happy Coding and Optimizing, Oliver
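The statement SSMS hands you has roughly this shape – the table, column, and index names below are made up for illustration, not our actual schema:

    -- SSMS generates the ON/INCLUDE part; you only supply the index name
    CREATE NONCLUSTERED INDEX [IX_CampSite_CountryId_StateId]
    ON [dbo].[CampSite] ([CountryId], [StateId])
    INCLUDE ([Name], [AverageRating]);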

Running a multi-lingual web application: System.IO.PathTooLongException

by Oliver 4. March 2011 15:18

We definitely have long paths on our client’s platform www.camping.info, for example for a concrete rating on the detail page of a campsite with a long name in a state with a long name – http://www.camping.info/deutschland/schleswig-holstein-und-hamburg/campingplatz-ferien-und-campinganlage-schuldt-19573/bewertung/r23989 – but the path (everything after the .info including the ‘/’) is still only 112 characters long, which is still a long way from the 260 character barrier that’s the default in ASP.NET (see the MSDN).

The problem

Well, the same page in Greek, for example, has the following URL: http://el.camping.info/γερμανία/σλέσβιχ-χολστάιν-αμβούργο/campingplatz-ferien-und-campinganlage-schuldt-19573/αξιολόγηση/r23989 – at least that is what we see in the browser bar. Essentially, this URL will be encoded when requesting the page from the server, so it becomes a gigantic http://el.camping.info/%CE%B3%CE%B5%CF%81%CE%BC%CE%B1%CE%BD%CE%AF%CE%B1/%CF%83%CE%BB%CE%AD%CF%83%CE%B2%CE%B9%CF%87-%CF%87%CE%BF%CE%BB%CF%83%CF%84%CE%AC%CE%B9%CE%BD-%CE%B1%CE%BC%CE%B2%CE%BF%CF%8D%CF%81%CE%B3%CE%BF/campingplatz-ferien-und-campinganlage-schuldt-19573/%CE%B1%CE%BE%CE%B9%CE%BF%CE%BB%CF%8C%CE%B3%CE%B7%CF%83%CE%B7/r23989! Now the path of the URL is a whopping 310 characters long! That’s quite a bit over the limit – and even for shorter URLs, the cyrillic or greek equivalents surpass the limit more often than one would think once they are URL encoded (see the sketch at the end of this post). The exception you get when exceeding the default of 260 chars is:

System.IO.PathTooLongException: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.

And this is what the error looks like in Elmah.

The solution

You don’t have to search long to find a solution to this problem on the web: http://forums.asp.net/t/1553460.aspx/2/10. Just make sure the httpRuntime node contains the properties maxUrlLength AND relaxedUrlToFileSystemMapping like so:

    <httpRuntime maxUrlLength="500" relaxedUrlToFileSystemMapping="true" />

You might wonder what the relaxedUrlToFileSystemMapping property really does: you can read more on MSDN. In short, if set to true “the URL does not have to comply with Windows path rules” (MSDN). Happy coding, Oliver
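To see how quickly percent-encoding inflates a path, here is a tiny sketch (the sample segment is made up): every Greek letter takes two UTF-8 bytes, i.e. six characters once encoded.

    using System;

    class UrlLengthDemo
    {
        static void Main()
        {
            string path = "/γερμανία/αξιολόγηση";        // 20 characters in the browser bar
            string encoded = Uri.EscapeUriString(path);  // "/%CE%B3%CE%B5..."
            Console.WriteLine("{0} -> {1} characters", path.Length, encoded.Length);
            // the encoded path is more than five times as long
        }
    }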

GEN0 and GEN2 Heap Moving in Opposite Directions?

by robert 3. June 2010 19:48

Gen2 and Gen0 are clearly moving in opposite directions – I don't understand that; does anyone have an explanation for it? Does the garbage collector start cleaning up Gen2 because a lot of new memory has to be allocated? Image: memory usage of the Camping.Info server (8 GB RAM, 2x quad-core, approx. 1000 active sessions, requests per second: avg. 50, peak 120). The fact that 10x more memory sits in Gen2 of course means that we still haven't tracked down our memory leak. Shouldn't Gen 2 normally be smaller than Gen 0? Lots of questions :-) (Robert)
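For anyone wanting to watch this from inside the application (not a substitute for the perf counters behind the screenshot, but handy for spotting trends), a small sketch:

    using System;

    class GcSnapshot
    {
        static void Main()
        {
            // collection counts per generation since process start
            Console.WriteLine("Gen0 collections: {0}", GC.CollectionCount(0));
            Console.WriteLine("Gen2 collections: {0}", GC.CollectionCount(2));

            // total managed heap size, without forcing a collection first
            Console.WriteLine("Managed bytes:    {0}", GC.GetTotalMemory(false));
        }
    }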

Project Versioning and Build Management

by robert 1. August 2009 13:54

For our projects we use a versioning strategy that ties the version to the revision of our version control system (in our case SVN). Example www.camping.info: only the back half is interesting for us: 2013.214.

- 2013: the SVN revision of www.camping.info
- 214: the SVN revision of Speak-Lib, a helper library

Before each deployment the version is written into the AssemblyInfo.cs – still manually (see the attribute sketch at the end of this post) – and read back out like this:

    Assembly assembly = Assembly.GetExecutingAssembly();
    ltVersion.Text = assembly.GetName().Version.ToString();

For build management we use Target-Process, where the build name also contains the revision: here you can see that the current Camping.Info version (x.x.2013.214) was deployed on July 30 and belongs to iteration 4.7. Even more information can be found in the build details: they list all changes that went into the version. This information can be passed on directly to the customer. With Erwin Oberascher of www.camping.info, however, we have the ideal case of a customer who has access to – and interest in – such project artifacts.
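For illustration, the corresponding AssemblyInfo.cs entry would look something like this – the two leading numbers are placeholders here, since only the last two parts are determined by the scheme above:

    using System.Reflection;

    // the last two parts carry the SVN revisions:
    // application revision (2013) and Speak-Lib revision (214)
    [assembly: AssemblyVersion("1.0.2013.214")]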

About Oliver

I build web applications using ASP.NET and have a passion for javascript. I enjoy MVC 4 and Orchard CMS, and I do TDD whenever I can. I like clean code. I love to spend time with my wife and our children. My profile on Stack Exchange, a network of free, community-driven Q&A sites.

About Anton

I'm a software developer at teamaton. I code in C# and work with MVC, Orchard, SpecFlow, Coypu and NHibernate. I enjoy beach volleyball, board games and Coke.