Improving Web Site Performance On Camping.Info

by Oliver 17. November 2015 23:11

We've recently been doing some optimization work on Camping.info to improve user experience through faster web site load times. This post goes into the details of the optimization steps and their effect on the site's performance.

Measure, Improve, Measure again

To be confident that the changes we introduce to Camping.info actually improve the performance or perceived performance of the site, we set up an automated test harness on TeamCity using the webpagetest API wrapper node module and a custom PowerShell wrapper script around it which collects the test results and reports them to TeamCity. The following paragraphs go into some detail on the concrete steps we took to improve our users' experience.

Include external script in existing bundle

As described in Avoid Blocking Requests on External Domains, we chose to include the Bugsnag javascript library in our already existing script bundle. This saves one request and one DNS lookup. Comparing the start page performance before and after, the savings are humble but noticeable: the Time To Start Render drops from >1200 ms to 1100-1200 ms, which in practice correlates with a slightly faster page appearance.

Host jQuery on your own server – or don't

Based on the previous improvement, I assumed that saving a DNS lookup alone could already help improve perceived performance. So for loading jQuery we switched from cdnjs.cloudflare.com to our own domain. It turns out, though, that this didn't have any impact on rendering or load times. This is actually a tricky optimization – it depends a lot on who your audience is and what sites they visit. Loading e.g. jQuery from an external host will either save one request, because the client's browser might have that resource cached after visiting a totally unrelated site that includes the same library, or your user will pay for an extra DNS lookup compared to just loading the library from your own server. The decision is up to you.

Load external javascript after window.load

A large block of potential optimization on Camping.info concerns the loading, parsing, and execution of javascript. Due to the organic growth of the site over the last 8 years, deferring javascript execution to a point in time after the page has actually rendered turns out to be a complex issue. We still have plenty of pre-loaded or inline javascript blocks, which is mostly due to the way ASP.NET WebForms and its UpdatePanels work. The only easy solution to dependency management for all of those code blocks was to simply load all dependencies before the HTML that refers to them. This pattern, unfortunately, has led to one large script bundle that we load in the <head> section of the page, because loading it later would break all inline script block execution. Fixing this will require significant refactoring and thorough testing.

But there still is room for improvement! All external javascript can safely be loaded after the page has rendered. This includes Facebook buttons, the AddThis widget, and advertisement scripts. We already had most of these scripts loading after the window onload event, but additionally deferring the loading of connect.facebook.net/en_US/fbds.js showed a further improvement on the start page: while the render start time did not decrease, the page load time dropped from around 1.8 s to 1.5 s. This is definitely a decent improvement, but please don't overrate it – most of the page's content had probably already been loaded even in the old version.
But now we can at least be sure that all Facebook assets will definitely be loaded only after all of the page's own assets have been loaded. And that's good. It turns out that on a different page, the improvement after this single change is even more significant: there, the deferred loading of the Facebook script improves not only the page load time, but also the start render and DOM content ready times.

One script is still being loaded before the onload event – Google Analytics. I couldn't quite convince myself to defer its loading until after onload, because we use it to track some user metrics and timings, and I felt that GA might not report the same quality of results if loaded too late. Please leave your opinions on this topic in the comment section.

Specify image dimensions inline to speed up rendering

The worst grade in our PageSpeed score was for not specifying image dimensions, neither in HTML nor in CSS. So we went ahead and did that for the start page, which improved our score. Still, I honestly cannot tell any difference in performance with image dimensions provided. There are several possible causes for this:

- maybe the images in the above-the-fold content are loaded fast enough not to delay page rendering
- maybe the page's CSS allows the browser to start rendering even without knowing the exact image dimensions
- something that I have no clue about at the moment

Loading CSS file from same domain

To speed up rendering, it also seemed to be a good idea to deliver our site's CSS file from the same domain as the HTML, thus saving a DNS lookup during the early stage of page rendering. The start render time did drop a bit, but unfortunately the page load time increased somewhat non-deterministically. It's safe to assume that the additional load time was caused by the fact that all image resources referenced in our CSS were now also being retrieved from the main domain instead of the cookieless one, which in turn delayed the loading of other image resources. For now we reverted this change, but we know that we can further optimize the render process by serving our CSS even faster. It would probably also help a lot if we split our large CSS file into smaller ones that could be loaded per page.

Changes without performance impact

- Wrapping inline javascript blocks in $().ready()

Todos for the next performance sprint

- defer loading of as many javascript files as possible to after the onload event
- combine and minify ASP.NET AJAX's ScriptResource.axd and WebResource.axd files
- load CSS from the page domain but referenced images from the cookieless domain (try css-url-rewrite)
- load less CSS per page – ideally inline the CSS needed for the above-the-fold content
- use HTML and CSS instead of images for our Google map buttons – this will save a ton of requests on the search page

Where are we at now?

Happy performance tuning!

What's Wrong With Our Specification By Example Tests

by Oliver 25. July 2013 11:44

We've been working on our customizable portal software discoverize for about two years now, using Orchard CMS. From the beginning, we were convinced to use Specification By Example to build up a living documentation of the functionality of our software. This has been very important to us since we plan to drive tens if not hundreds of portals using the same code base. Last year, I already wrote about how we do our integration testing. I've also written about why we do browser based testing as opposed to some lower level testing that is in place e.g. in the Orchard.Specs project inside the Orchard source solution. But here we are, a year has passed, and we're still not happy with our approach.

Problems we're facing

The biggest problem is really that writing an acceptance test for a new feature takes nearly as much time as implementing the feature itself. This might be tolerable for mission critical software used in banks or space shuttles, but it's just over the top for a consumer website. On the other hand, we want assurance that the software we ship contains as few bugs as possible.

Development speed down by 50%

The websites we generate using our software are quite complex and interactive. This has repeatedly posed challenges for writing robust browser based tests. We chose Coypu over Selenium because it has a cleaner API and handles asynchronous postbacks really well, but its API has still been limiting to us, so we regularly find ourselves hacking around those limitations. All of which has to be tested, of course, which takes a noticeable amount of time. Another problem we face is that we need to change our HTML to accommodate testing. For example, we keep adding id attributes to elements just so we have an easy and reliable way of accessing those elements in our specification tests. This doesn't seem right, but that's how we get stuff to work (a small example follows below).

Test Execution time too high for continuous feedback

The spec test execution time is too high. We're talking about 50-90 seconds per test case if they pass, plus another 20-30 seconds if they fail (because of the browser automation timeouts). That's a real bummer because it's so easy to lose focus during that time. Additionally, before executing a test suite (or a single test, if that's what you want) we compile and publish our code to a separate destination which the specs run on. This process takes another 45-50 seconds, which is OK if you run all specs at once but adds significant overhead when working on a single acceptance test. Related to this, we keep having trouble quickly finding the cause of a broken test, because not all of our commits are pushed through the acceptance test pipeline – a single run sometimes takes longer than the time between commits.

Test Fragility keeps us busy

Another recurring problem are tests breaking due to UI changes. The trigger might be a simple change of CSS, HTML or a JavaScript snippet, but it happens all the time. Also, there are usually at least a couple of tests that break simultaneously because they reuse certain steps, which is not only annoying but often misleading as to where the error really comes from.

Demoralization

All of the above leads to decreased morale, both in writing new tests and in fixing broken ones. Which in turn adds even more overhead to the development process.
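Here is that id-attribute example, promised above – hypothetical markup and a Coypu call, with all names invented purely for illustration (Browser is a Coypu BrowserSession, the assertion is NUnit-style):

    // The markup gets an id attribute purely for the benefit of the tests:
    //   <span id="search-result-count">42 results</span>
    // so the step definition can locate the element without a brittle CSS path:
    var resultCount = Browser.FindCss("#search-result-count").Text;
    Assert.AreEqual("42 results", resultCount);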
Looking for success stories

This post came into existence because we believe in Specification by Example, and we also believe that other teams are successfully running integration tests, even by the use of an automated browser. If you're part of such a team, or have any other valuable feedback to share, please do so in the comments. Happy testing!

SpecFlow Step Definition with Optional Parameter

by Oliver 13. May 2013 11:41

Today, this question came up on the SpecFlow Google Group:

Assuming I would like to define in Gherkin the following:

    1. When I send some argument xxx with parameter aaa and another parameter bbb
    2. When I send some argument xxx with parameter aaa

And I would like to have only one reusable function, something like this:

    [When(@"I send some argument (.*) with parameter (.*) and another parameter (.*)")]
    public void foo(string arg, string paramA, string paramB)
    {
        // check if paramB is null and do something
    }

I am aware of the table feature (pipe separated values) but I would like to stick with this text-alike syntax.

We've encountered this use case several times in the past (also avoiding the table syntax) and used to solve it by delegating the shorter version to the longer one, but I decided to go see if I could find a more elegant solution.

Matching the steps

The first task was to match both step variants correctly. Since version 1.9, SpecFlow has wonderful syntax highlighting in .feature files which helps identify unbound steps. It showed us that our first pattern is too greedy: it also matches the second step, but not the way we need it to. Changing the regular expression for the first parameter to something more restrictive allows us to restrict the match to only the first step – in the .feature file, the second step is then colored purple to notify us that there is no matching step definition yet. The regex ([^ ]*) we use here means that we match all characters that are not spaces 0 to n times, thus denying the match of the second step because of the space character following the argument aaa.

Sometimes, though, you also need to match spaces in arguments, and that's when we use a slightly modified version like this: "([^"]*)" – which means: match a quote, then match everything but a quote 0 to n times and save this match as a reference, then match another quote. In a verbatim string (prefixed by the @ sign) each quote character has to be doubled. Note that you'll now have to enclose your spaced string value in quotes, but you can still use the same method to put the step attribute on. The sketch below shows both variants.
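The original post showed these step definitions as screenshots which are no longer available; the following is a reconstruction of what they plausibly looked like – the method names are invented:

    // restrictive pattern: the argument may not contain spaces
    [When(@"I send some argument ([^ ]*) with parameter ([^ ]*)")]
    public void WhenISendSomeArgumentWithParameter(string arg, string paramA)
    {
        // ...
    }

    // quoted pattern: the argument may contain spaces but must be quoted,
    // e.g.: When I send some argument "xxx yyy" with parameter "aaa"
    [When(@"I send some argument ""([^""]*)"" with parameter ""([^""]*)""")]
    public void WhenISendSomeQuotedArgumentWithParameter(string arg, string paramA)
    {
        // ...
    }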
Now, let's go for the second argument.

Using a .NET Optional Parameter

My first try was to add an optional parameter to the method we already have and provide it with a default argument (essentially adding string paramB = null to the method signature). Unfortunately, SpecFlow complains that for the first step, which provides only one argument, the matching method needs to have only one parameter. I had thought that the compiler would generate two methods here, one with and one without the optional parameter, so that at runtime it could pick the right one depending on which parameters were provided with a value. It turns out that this is not so. The IL code for a method with optional parameters contains only one method as well, as per this article. Intermediate language for the optional parameter method:

    .method private hidebysig static void Method([opt] int32 'value', [opt] string name) cil managed
    {
        .param [1] = int32(1)
        .param [2] = string('Perl')
        .maxstack 8
        L_0000: ldstr "value = {0}, name = {1}"
        L_0005: ldarg.0
        L_0006: box int32
        L_000b: ldarg.1
        L_000c: call void [mscorlib]System.Console::WriteLine(string, object, object)
        L_0011: ret
    }

That's why SpecFlow complains.

Solution: Use Two Methods

It looks like there is no direct solution to the problem that would require only a single method. The solution we employ seems to be all you can do about it, at least right now with SpecFlow 1.9: use (at least) two separate methods, one of which delegates its execution to the other, more general one.
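The original code screenshot is gone as well; here is a minimal sketch of that delegation pattern, again with invented method names:

    [When(@"I send some argument ([^ ]*) with parameter ([^ ]*) and another parameter ([^ ]*)")]
    public void WhenISendSomeArgumentWithTwoParameters(string arg, string paramA, string paramB)
    {
        // the general case – do the actual work here
    }

    [When(@"I send some argument ([^ ]*) with parameter ([^ ]*)")]
    public void WhenISendSomeArgumentWithOneParameter(string arg, string paramA)
    {
        // the short form simply delegates to the more general method
        WhenISendSomeArgumentWithTwoParameters(arg, paramA, null);
    }

Happy spec'ing!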

Revoke Access to Applications using Google OpenID

by Oliver 9. May 2013 12:20

During automated frontend testing, some of our tests recently broke; these tests try to connect a Google account to our new TeamReview application using OpenID. They used to make sure that, on Google's confirmation page, the checkbox to remember my choice was unchecked. I'd like to show a screenshot of what that page used to look like, but unfortunately it seems that the old page has died. On the new confirmation page, no such checkbox is available. This means that during a test run I cannot temporarily accept access to our application and later revoke that access by simply deleting my Google cookies. I now have to go to Google's Authorizing Applications & Sites page and revoke access manually. Just for the record.

Writing Acceptance Tests for an Orchard / ASP.NET MVC Application – using SpecFlow, Coypu (Selenium) and the MvcIntegrationTestFramework

by Oliver 22. August 2012 13:48

When we started development on Marinas.info, we decided to write acceptance tests for all important features of our application. This decision was even more justified by the fact that a bunch of similar platforms are to follow using the same codebase. We wanted an application with fewer bugs and easier maintenance. Writing good, automated acceptance tests is not easy and it's not fast, either. For some time now, we've been trying to get the first set of our tests to run green, which proved especially tricky on our TeamCity continuous integration server. This post investigates a working solution.

The ingredients: SpecFlow, Coypu (Selenium), Browser, Web Server, and MvcIntegrationTestFramework

SpecFlow

In the .NET world, using SpecFlow to write acceptance tests is nothing new and has recently become, yet again, more appealing after its update to version 1.9. One of our scenarios, verifying image upload functionality, was simple to write, easy to read, and makes for great living documentation (the original post showed it as a screenshot that is no longer available).

For Browser based tests you need: Coypu (Selenium)

Everyone who has written tests for Selenium for even a mildly ajax-y site knows how painful it can be to create reliably working tests. Coypu alleviates the pain and makes test creation as straightforward as it should be in the first place. Coypu is:

- A robust wrapper for browser automation tools on .Net, such as Selenium WebDriver that eases automating ajax-heavy websites and reduces coupling to the HTML, CSS & JS
- A more intuitive DSL for interacting with the browser in the way a human being would

A sketch of Coypu's clean API, as it appears in one of our step definitions for the above scenario, follows below (Browser is an instance of the BrowserSession class from Coypu).

A web browser

To run browser based tests you, of course, need … a browser! Coypu offers support for quite a bunch of them, including the usual suspects Internet Explorer, Chrome, and Firefox.

A web server

You need to host your application in some web server or other to process requests. Well, this statement turns out to be only partially true, as you will see with the MvcIntegrationTestFramework. But at least for browser based tests you need a web server, and you basically have the choice between IIS and IIS Express (if you don't want to write your own or use someone else's implementation). We chose IIS Express as it is manageable through a non-administrator account, but it needs to be installed on all machines that will execute the tests.

For non-browser based tests: MvcIntegrationTestFramework

Introduced by Steven Sanderson in 2009, this small framework allows you to write integration tests for ASP.NET MVC applications and execute them without a browser! It empowers you to make assertions on your controllers' actions' results rather than on the rendered HTML output, by injecting some clever hooks into your MVC application under test. An example of how a test would look can be found in the above mentioned post. The "magic" of this framework lies in the use of ApplicationHost.CreateApplicationHost(), which creates an application domain for hosting your ASP.NET application – the second sketch below outlines the idea.
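The original step-definition screenshot is gone, so here is a small sketch of typical Coypu calls instead – the page path, field names and the assertion framework are all invented for illustration:

    // Browser is a Coypu BrowserSession
    Browser.Visit("/marinas/edit");
    Browser.FillIn("Title").With("Marina del Test");
    Browser.ClickButton("Save");
    Assert.IsTrue(Browser.HasContent("Your changes have been saved"));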
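And here is a rough outline of the hosting "magic" just mentioned. This is simplified from memory, not the framework's verbatim source, and it glosses over the delegate-serialization details the real MvcIntegrationTestFramework has to handle:

    using System;
    using System.Web.Hosting;

    public class AppHost : MarshalByRefObject
    {
        // Creates a second, ASP.NET-enabled app domain hosting the web app
        // and returns a proxy to an AppHost instance living inside it.
        public static AppHost Create(string physicalDirectory)
        {
            return (AppHost)ApplicationHost.CreateApplicationHost(
                typeof(AppHost), // type to instantiate in the new app domain
                "/",             // virtual directory
                physicalDirectory);
        }

        // Runs code inside the hosted app domain, where HttpRuntime and
        // friends behave as they would in a real web application.
        public void Run(Action action)
        {
            action();
        }
    }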
How to put the pieces together

After a quite radical evolution of our test code (which you can read up on in my follow-up post The Long Road to Browser Based Acceptance Testing), we finally settled for the following. Before the first test starts, we set up an instance of the AUT (application under test). This includes:

- deploying the AUT as we do for our staging environment, but to a temp folder
- initializing an AppHost instance à la MvcIntegrationTestFramework, i.e. an ASP.NET enabled application domain that hosts the AUT
- executing the Orchard setup command via the AppHost instance (instead of running the setup through a browser, which we used to do but was a lot slower)

Before each test run (SpecFlow scenario) we then execute various commands to set up the environment for the concrete test, e.g.:

- clean the database, simply by overwriting it with a copy we saved after the initial setup
- create Marina entries that will be displayed and searchable on the site, again using the AppHost instance

Once we want to execute steps in the browser, we do the following:

- start an instance of IIS Express pointing to the deployed application (we used the wrapper code from Spinning up IISExpress for integration testing)
- initiate a Coypu BrowserSession, which under the hood creates an instance of the browser you choose – after battling with Internet Explorer, Chrome, and Firefox Portable, we now use Firefox 10.0.6 ESR (Extended Support Release), because version 10 is as of now the highest version supported by Selenium (2.1.25) and the ESR doesn't ask to be updated all the time

After each test run (SpecFlow scenario) we do this:

- close the browser
- shut down the IIS Express instance (we slightly modified the above mentioned wrapper code, calling Kill() on the process instance after the call to CloseMainWindow() so that it reliably terminates even on TeamCity – a sketch of this tweak follows below)
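Here is, roughly, what that shutdown tweak looks like – a sketch around .NET's Process API, with the variable name invented:

    // politely ask IIS Express to quit, then make sure it actually dies
    if (!iisExpressProcess.HasExited)
    {
        iisExpressProcess.CloseMainWindow(); // the wrapper code's original call
        if (!iisExpressProcess.WaitForExit(5000))
        {
            iisExpressProcess.Kill();        // our addition: reliable on TeamCity
        }
    }

Conclusion

Setting up a reliable environment for automatically executing acceptance tests has not been a walk in the park, but we finally have a solution that basically "just works". Hopefully, our experience will help you save a couple of hours and also some headache along the way. Happy coding!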

Testing: trying to get it right

by Oliver 9. February 2011 10:54

I read a great post on Steve Sanderson’s blog with the promising title Writing Great Unit Tests – and it is definitely worth reading. He mentions another post with the title Integration Testing Your ASP.NET MVC Application which I also recommend. One of the eye openers for me was this quote in his post: “TDD is a design process, not a testing process”. Let me elaborate: “TDD is a robust way of designing software components (“units”) interactively so that their behaviour is specified through unit tests.”

I must admit that I haven’t read much yet about TDD – but we’ve been writing tests for quite some time now. Unfortunately, most of them probably fall into the Dirty Hybrids category that Sanderson sees between two good ends, one being True Unit Tests, the other being Integration Tests (his post contains a nice illustration of this spectrum). So it looks like our goal should be to write tests in either of the outer categories and slowly but surely get rid of the time consuming, easy-to-break hybrid tests.

One problem that a lot of people writing web applications are confronted with at some point is testing the whole application stack, from browser action over server reaction to browser result. We’ve put some effort into abstracting away the HttpContext class to use the abstraction in both our frontend and in tests, but it falls short of being a worthy replacement for the real HttpRequest and HttpResponse classes. With all that work, we’re still missing a way to use our abstraction in a third party URL rewriting engine, so the requests we are testing never get processed by it. We have tests for the rules that are applied by the engine, but for more sophisticated setups this simply is not enough. Thanks to a link in Steve Sanderson’s blog post on integration testing ASP.NET MVC applications, I stumbled upon Phil Haack’s HttpSimulator – and it looks just like the piece of the puzzle we’ve been missing all that time. (I have no idea how we didn’t find it earlier.) A short usage sketch appears at the end of this post.

Another thing I’m new to is kata. Kata is a Japanese word describing detailed choreographed patterns of movements practiced either solo or in pairs. A code kata is an exercise in programming which helps hone your skills through practice and repetition. At first, it might sound weird to practice and repeat the same exercise over and over again. But then again, if we think of musicians or sportsmen, it’s not hard to see that they become great at what they do only by practicing. A lot. And they practice the same move (let’s just call it that) over and over again. The idea behind the code kata is that programmers should do the same. This is what Dave Thomas, one of the authors of The Pragmatic Programmer, also promotes on his blog.

I stumbled upon a very interesting kata in a blog post by Robert Martin about test driven development, which deals with the evolution of tests and poses the assumption that there might be a kind of priority for test code transformations that will lead to good code: The Transformation Priority Premise. The kata he mentions further down is the word wrap kata. The post is rather long, so allow at least 15 minutes to read it. For me it was well worth it. The solution to a very simple problem – at least a problem that’s easy to explain – can be quite challenging, and the post shows hands-on how writing good tests can help you find a good solution faster. It showed me (again) that writing good tests lets you write good code. Just like the quote above states: TDD is a design process.
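As promised, a rough sketch of HttpSimulator in action. This is a hedged example: the constructor arguments and the Http.TestLibrary namespace are what I recall from the published package, so double-check them against the actual source:

    using System;
    using System.Web;
    using Http.TestLibrary; // Phil Haack's HttpSimulator

    public class UrlRewritingTests
    {
        public void SimulatedRequestPopulatesHttpContext()
        {
            // simulate hosting the app at "/" with a made-up physical path
            using (var simulator = new HttpSimulator("/", @"c:\inetpub\wwwroot\"))
            {
                simulator.SimulateRequest(new Uri("http://localhost/products/list.aspx"));

                // HttpContext.Current now points to a fully populated fake context
                Console.WriteLine(HttpContext.Current.Request.Path);
            }
        }
    }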
Happy coding – through testing ;-) Oliver

Synergies and Test Plans

by robert 11. June 2010 09:56

The quality of our processes keeps getting better and better :-) The following image shows how test runs are documented after each deployment of www.camping.Info. (We have similar processes in place in other projects.) We still have many ambitious goals for improving our deployment processes, for example integrating Selenium tests, or fully automated deployment with automatic rollback in case of failure, so that a release requires no manual steps at all and working time is only spent on a quick check. What one full-time developer (Oliver) and one tester/admin (Mark), who supports him for a few hours, accomplish and have accomplished here is remarkable! Hats off to you both! I am also very pleased about the synergies for the development of the E-Commerce-Connector, and for our consulting services as well. Teamwork helps!


Acceptance Testing of Web Forms

by robert 24. November 2009 19:03

We're looking for lightweight criteria for testing and accepting web forms, independent of their actual functionality (saving ratings, sending emails).

Upload forms (file, image, ...) should be tested with:

- unusual file names: with special characters, with dots, without an extension
- files that are too large
- unsupported file types

Text input fields should be tested with:

- no input at all
- very large input
- odd content: ideally, test with Chinese and Turkish text
- script injection

Dropdown lists/radio buttons/checkboxes should be tested for:

- their behavior when no selection has been made

General criteria:

- meaningfulness of user feedback (error messages, prompts, etc.)
- presence of user feedback
- is the feedback visible at small screen resolutions?
- feedback should be tested in all browsers supported by the application
- interactive/dynamic form elements should be tested

A sketch of sample inputs follows below.
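As a small illustration of the text-field criteria, here is a hypothetical set of test inputs one could keep in a shared fixture – the concrete values are invented examples:

    // sample "awkward" inputs for text fields, following the checklist above
    var textFieldSamples = new[]
    {
        "",                               // no input at all
        new string('x', 100000),          // very large input
        "中文测试内容",                      // Chinese characters
        "Türkçe girdi: ğüşıöç",           // Turkish characters
        "<script>alert('xss')</script>",  // script injection attempt
    };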


About Oliver

I build web applications using ASP.NET and have a passion for javascript. I enjoy MVC 4 and Orchard CMS, and I do TDD whenever I can. I like clean code. I love to spend time with my wife and our children. You can find my profile on Stack Exchange, a network of free, community-driven Q&A sites.

About Anton

I'm a software developer at teamaton. I code in C# and work with MVC, Orchard, SpecFlow, Coypu and NHibernate. I enjoy beach volleyball, board games and Coke.