by Oliver
28. January 2014 21:24
This is embarrassing. For the n-th time in the past couple of years I've felt uneasy waiting for our projects (read: solutions) to compile. I kept seeing this: MSBuild using 1 (!) – yes, one! – of the 8 CPU cores I have sitting in my machine to get my work done. What about the other 7? Why don't you use them, MSBuild? With that single core, a simple local build of our project discoverize currently takes around 36 seconds.

Tell MSBuild to use all CPU cores
Well, it's as easy as adding /m or /maxcpucount to your msbuild command line to boost your build times. Down to 8 seconds with 3 additional characters: [space]/m. That's easily a 4.5x improvement!

Your mileage may vary
Of course, every project is different, so your speed increase might be higher or a lot lower than what I've seen. But it's an easy measure to get at least some improvement in build times with very little effort. Don't trust Visual Studio on that one, though – there, the solution still builds slowly. For reference, the /maxcpucount switch can also take a parameter value, like so: /maxcpucount:4. So if you have lots of other stuff going on in the background, or for whatever other reason, you can limit the number of CPUs MSBuild uses.

Props to the Orchard team for a highly parallelizable build
One of the specifics of the Orchard source code that forms the base of discoverize is the very loose coupling between the 70+ projects in the solution. This allows MSBuild to distribute the compilation work across a high number of threads, because there are almost no dependencies between the projects that MSBuild would have to respect. Great work!
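For reference, the full invocations might look like this – the solution file name is a placeholder, of course:

msbuild MySolution.sln /m
msbuild MySolution.sln /maxcpucount:4

Happy building!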
by Oliver
25. July 2013 11:44
We've been working on our customizable portal software discoverize for about two years now, using Orchard CMS. From the beginning we were convinced to use Specification By Example to build up a living documentation of the functionality of our software. This has been very important to us since we plan to drive tens if not hundreds of portals using the same code base. Last year I already wrote about how we do our integration testing. I've also written about why we do browser based testing, as opposed to some lower level testing as is in place e.g. in the Orchard.Specs project inside the Orchard source solution. But here we are, a year has passed, and we're still not happy with our approach.

Problems we're facing
The biggest problem is really that writing an acceptance test for a new feature takes nearly as much time as implementing the feature itself. This might be tolerable for mission critical software used in banks or space shuttles, but it's just over the top for a consumer website. On the other hand, we want insurance that the software we ship contains as few bugs as possible.

Development speed down by 50%
The websites we generate using our software are quite complex and interactive. This has repeatedly posed challenges in writing robust browser based tests. We chose Coypu over Selenium because it has a cleaner API and handles asynchronous postbacks really well, but its API has still been limiting to us, so we regularly find ourselves hacking around those limitations. All of which has to be tested, of course, which takes a noticeable amount of time. Another problem we face is that we need to change our HTML to accommodate testing. For example, we keep adding id attributes to elements just so we have an easy and reliable way of accessing those elements in our specification tests, as in the sketch below. This doesn't seem right, but that's how we get things to work.
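A minimal sketch of that pattern – the id and the markup are made up for illustration, Browser is a Coypu BrowserSession, and NUnit's Assert is assumed:

// the view gets an id attribute purely for the sake of the tests:
// <ul id="search-result-list"> ... </ul>

// the step definition can then locate the element reliably by id
var results = Browser.FindId("search-result-list");
Assert.That(results.Exists());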
Test execution time too high for continuous feedback
The spec test execution time is too high. We're talking about 50-90 seconds per test case if they pass, plus another 20-30 seconds if they fail (because of the browser automation timeouts). That's a real bummer, because it's so easy to lose focus during that time. Additionally, before executing a test suite (or a single test, if that's what you want) we compile and publish our code to a separate destination which the specs run on. This process takes another 45-50 seconds, which is OK if you run all specs at once but adds significant overhead when working on a single acceptance test. Related to this, we keep having trouble quickly finding the cause of a broken test, because not all of our commits are pushed through the acceptance test pipeline – a single run sometimes takes longer than the time between commits.

Test fragility keeps us busy
Another recurring problem are breaking tests due to UI changes. These might be simple changes to CSS, HTML, or a JavaScript snippet, but it happens all the time. Also, there are usually at least a couple of tests that break simultaneously because they reuse certain steps, which is not only annoying but often misleading as to where the error really comes from.

Demoralization
All of the above leads to decreased morale, both in writing new tests and in fixing broken ones. Which in turn adds even more overhead to the development process.

Looking for success stories
This post came into existence because we believe in Specification by Example, and we also believe that other teams are successfully running integration tests, even by the use of an automated browser. If you're part of such a team, or have any other valuable feedback to share, please do so in the comments. Happy testing!
by Oliver
19. June 2013 11:39
The scenario I face quite regularly during development is that I want to switch to a different feature branch that someone else is really working on, to do some maintenance or the like. I know that I can just fast-forward my local branch to the current HEAD of the corresponding remote branch, but using e.g. a GUI such as the wonderful GitExtensions, I have to first check out my local branch and then merge the origin's head into it. This not only takes longer than needed but sometimes leads to problems when the solution is currently open in Visual Studio (depending on the differences between the two branches I'm switching between). Of course, I wasn't the first to want to get around this unnecessary overhead, and this stackoverflow answer helped me find a solution to my "problem". For a branch named "design-2", simply do the following on a command line inside your repository: git branch -f design-2 origin/design-2 Git will answer with something like this: Branch design-2 set up to track remote branch design-2 from origin. And that's that. Beware: as stated in a comment of the above mentioned answer, … Just be very careful not to do that unless you've made absolutely sure the merge would be a fast-forward!
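If you want to make that check part of the routine, Git can verify the fast-forward for you – a sketch, assuming a Git version that knows the --is-ancestor flag (1.8+):

git fetch origin
git merge-base --is-ancestor design-2 origin/design-2 && git branch -f design-2 origin/design-2

The second command only moves the branch if the local design-2 is an ancestor of origin/design-2, i.e. if the update really is a fast-forward.

Happy gitting!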
by Oliver
13. May 2013 11:41
Today, this question came up in the SpecFlow Google Group:

Assuming I would like to define in Gherkin the following:
1. When I send some argument xxx with parameter aaa and another parameter bbb
2. When I send some argument xxx with parameter aaa
And I would like to have only one reusable function, something like this:
[When(@"I send some argument (.*) with parameter (.*) and another parameter (.*)")]
public void foo(string arg, string paramA, string paramB)
{
    // check if paramB is null and do something
}
I am aware of the table feature (pipe separated values) but I would like to stick with this text-alike syntax.

We've encountered this use case several times in the past (also avoiding the table syntax) and used to solve it by delegating the shorter version to the longer one, but I decided to go see if I could find a more elegant solution.

Matching the steps
The first task was simply to match both steps. Since version 1.9, SpecFlow has this wonderful syntax highlighting in .feature files which helps identify unbound steps. It shows that our first pattern is too greedy and matches the second step as well, but not in the way we need. Changing the regular expression for the first parameter to something more restrictive restricts the match to only the first step; the second step is then colored purple to notify us that it has no matching step definition yet. The regex ([^ ]*) we use here matches any character that is not a space, 0 to n times, thus denying the match of the second step because of the space character following the argument aaa. Sometimes, though, you also need to match spaces in arguments, and that's when we use a slightly modified version like this: "([^"]*)" – which means: match a quote, then match everything but a quote 0 to n times and save that as a captured reference, then match another quote. In a verbatim string (prefixed by the @ sign) the quotes have to be doubled, i.e. @"…""([^""]*)""…". Note that you'll now have to enclose your spaced string values in quotes, but you can still bind the same method to that step. Now, let's go for the second argument.

Using a .NET Optional Parameter
My first try was to add an optional parameter to the method we already have and give it a default argument, i.e. public void foo(string arg, string paramA, string paramB = null). Unfortunately, SpecFlow complains that for the shorter step, which captures only two arguments, the matching method must have exactly two parameters. I had thought that the compiler would generate two methods here, one with and one without the optional parameter, so that at runtime the right one could be picked depending on which parameters were provided. It turns out that this is not so: the IL code for a method with optional parameters contains only that one method, with the defaults stored as parameter metadata, as per this article:

Intermediate language for a method with optional parameters:
.method private hidebysig static void Method([opt] int32 'value', [opt] string name) cil managed
{
    .param [1] = int32(1)
    .param [2] = string('Perl')
    .maxstack 8
    L_0000: ldstr "value = {0}, name = {1}"
    L_0005: ldarg.0
    L_0006: box int32
    L_000b: ldarg.1
    L_000c: call void [mscorlib]System.Console::WriteLine(string, object, object)
    L_0011: ret
}
That's why SpecFlow complains.
Solution: Use Two Methods
It looks like there is no direct solution to the problem that requires only a single method. The approach we already employ seems to be all you can do about it, at least right now with SpecFlow 1.9: use (at least) two separate methods, one of which delegates its execution to the other, more general one:
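A sketch of what that can look like – the step texts are taken from the question above, the method names are made up:

using TechTalk.SpecFlow;

[Binding]
public class ArgumentSteps
{
    [When(@"I send some argument ([^ ]*) with parameter ([^ ]*) and another parameter ([^ ]*)")]
    public void WhenISendArgumentWithTwoParameters(string arg, string paramA, string paramB)
    {
        // the actual logic lives here; paramB is null when the shorter step delegates to us
    }

    [When(@"I send some argument ([^ ]*) with parameter ([^ ]*)")]
    public void WhenISendArgumentWithOneParameter(string arg, string paramA)
    {
        // delegate to the more general binding
        WhenISendArgumentWithTwoParameters(arg, paramA, null);
    }
}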
Happy spec'ing!
by Oliver
20. March 2013 15:24
While setting up a specification tests project for our new TeamReview tool, I was facing an HTTP 500.19 error when hosting our site in IIS Express. There are lots of questions on Stack Overflow concerning this error, and Microsoft has a whole page on it, but there is a whole bunch of sub-errors that this error covers.

Error 0x8007007b: Cannot read configuration file
Unfortunately, none of the above mentioned links contained or solved the specific error code I was seeing:

Error Code: 0x8007007b
Config Error: Cannot read configuration file
Config File: \\?\C:\Projects\_teamaton\teamreview\TeamReview.Specs\bin\Debug\..\..\..\TeamReview.Web\web.config

After some reading, trying, and fiddling, it appeared to me that maybe the path to the config file somehow messed up IIS Express. I admit that using the parent directory dots was at least a bit unusual. But the path came from my test harness code, where I wanted to use relative paths and used Path.Combine() to do that:

var webPath = Path.Combine(Environment.CurrentDirectory, "..", "..", "..", "TeamReview.Web");
Pitfall: .. in path
Well, it turns out IIS Express didn't like it. Once I called it with a cleaned up path string, everything just worked:
"C:\Program Files (x86)\IIS Express\iisexpress.exe" /path:"C:\Projects\_teamaton\teamreview\TeamReview.Web" /port:12345 /systray:false
So, watch out for your physical path values when using IIS Express!
Use DirectoryInfo to navigate up your directory tree
To get the correct path without resorting to absolute paths, but also avoiding the .., I used the DirectoryInfo class:
var webPath = Path.Combine(
new DirectoryInfo(Environment.CurrentDirectory).Parent.Parent.Parent.FullName, "TeamReview.Web");
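Alternatively, Path.GetFullPath() will resolve the .. segments and return a clean absolute path – a variant we didn't use at the time, but it should do the trick just as well:

var webPath = Path.GetFullPath(
    Path.Combine(Environment.CurrentDirectory, "..", "..", "..", "TeamReview.Web"));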
by Oliver
11. March 2013 15:02
This is a technical post about how we automatically deploy new portals that will be run by our own portal software discoverize.
After filling in and submitting our portal creation form we do the following:
Save the entered information into /App_Data/<PortalName>/portal.xml
Trigger the TeamCity build through a web request to a specific URL (see the sketch after this list)
Send an email with the request status or any errors that occurred
Display a Thank You page.
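The trigger in step 2 is nothing more than an authenticated GET request – a sketch of the idea, assuming TeamCity's add2Queue trigger URL of that era; the server name, build configuration id, and credentials are made up:

using System.Net;

// trigger the TeamCity build configuration from the portal creation form handler
using (var client = new WebClient())
{
    client.Credentials = new NetworkCredential("deploy-user", "secret");
    client.DownloadString("http://teamcity.example.com/httpAuth/action.html?add2Queue=bt42");
}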
Now, the TeamCity build configuration that was triggered in step 2 does the following:
Execute the PowerShell script create-portal.ps1 in c:\projects\discoverize, which will:
    create the AppPool Discoverize-Portals if it doesn't exist yet (all our portals run within it)
    iterate over all folders inside wwwroot\discoverize.com\App_Data:
        if a folder with the same name exists under wwwroot\discoverize-portals, skip it
        else move the current folder from wwwroot\discoverize.com\App_Data to wwwroot\discoverize-portals
    add a line with the full path to the new portal folder to newportals.txt (e.g. C:\inetpub\wwwroot\discoverize-portals\<PortalName>)
    add a new site to IIS and set its AppPool to Discoverize-Portals
    start the new site in IIS (this needs admin privileges, which is why we use a parameterized Task Scheduler task for it)
Build the source code from the repository
Execute the Deploy-NewPortals target from deploy.proj using MSBuild in c:\projects\discoverize
Iterate over the entries in newportals.txt; for each entry:
    copy the build output from step 2 to wwwroot\discoverize-portals\<PortalName>
    run the site setup with our own recipe (this uses Orchard's setup method, via Orchard.exe)
    call an action (SetupComplete) on the new site to mark the deployment as done
    create a default entry so there's something to look at (again using Orchard.exe)
The SetupComplete action then finishes the process:
save the information from portal.xml into an application specific configuration file
create new user "portal-owner" for content management
send an email about the successful installation of the new portal
Quite complex, now that I look at it, but it works!
by Oliver
28. January 2013 14:26
Today, I faced the exception mentioned in the post title: SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices and 5000ms for desktops. The default lock timeout can be increased in the connection string using the ssce: default lock timeout property. (Plus some session details.)

Circumstances
The exception was thrown in my local dev environment while working on our Orchard CMS based portal software discoverize, on calling any page of the portal. Obviously, something was wrong not with a single page but rather with a piece of infrastructure. Interestingly enough, only a few moments before trying to open the web site I had done some database manipulation using SqlCeCmd, deleting some unneeded columns from one of our tables. It seems that after that the site broke.

Solutions tried
I tried to get hold of the DB like this:
stop and start the web site in IIS (using appcmd stop site "discoverize" and appcmd start site "discoverize") – no change
take the DB offline by renaming the file, wait a few moments, rename it back – no change! Here I started wondering where the lock is saved – is it inside the DB?
take the whole application pool offline and restart it – bang! That helped.
I now have my site back up and running and can continue development.

Conclusion
If you encounter the SQL CE timeout error during development inside a web application, restarting the app's app pool will probably get you back to work.
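By the way, if restarting is not an option, the error message itself points to a workaround: raising the lock timeout in the connection string. A sketch – the property name is spelled as in the error message, the data source is illustrative, and the timeout value picked arbitrarily:

// increase the lock timeout (in milliseconds) via the connection string
var connectionString =
    @"Data Source=|DataDirectory|\Orchard.sdf;ssce:default lock timeout=10000";

Happy coding!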
by Oliver
28. August 2012 23:41
Working on Marinas.info, we want to create SEO friendly links to specific searches that will hopefully rank high in the search engines. They should look something like www.marinas.info/germany for marinas situated in Germany, or www.marinas.info/wifi for marinas that provide wifi internet access. We also plan on supporting most European languages, some 30 or more, even if not from the start. Anyway, the number of URLs we will generate is likely to explode, and we need a solution to consistently generate and handle those thousands of different URLs. This post investigates a few solutions that have popped up.

External URL rewriting solutions – UrlRewritingNet and the IIS Url Rewrite Module
The two solutions listed here are called external because they integrate with the ASP.NET request pipeline in a way that is invisible to the actual web application. URLs would be rewritten in an early step, and the application would only get to see the rewritten URL that it knows how to handle.

UrlRewritingNet
UrlRewritingNet has been around since 2006, and I assume it's a widespread solution. We use it in Camping.info, and probably the best part is the easy configuration. Still, I remember that we had a few problems with it that needed fixing, so we would most likely have to use that slightly customized version. I'm not 100% sure it would work with MVC out of the box, since we've only used it with ASP.NET WebForms.

IIS Url Rewrite Module
This is a solution by Microsoft that would certainly get the job done, too. I haven't really used it in any larger project yet, though, so I'd have to dive into it first. If you feel like giving it a go, check out http://learn.iis.net/page.aspx/734/url-rewrite-module/.

Internal solutions – Orchard Rewrite Rules and ASP.NET MVC Routing
These approaches work inside the application.

Orchard Rewrite Rules
Orchard Rewrite Rules is an Orchard module that brings (a subset of) mod_rewrite style URL rewriting to any Orchard installation. We've successfully used it e.g. to redirect traffic from http://marinas.info to http://www.marinas.info. The greatest benefit is that it's just another module that you can simply enable and disable at runtime from within the application, without the need to recycle the app pool – as would be the case for the two external rewriting solutions, since their configuration is coupled to the web.config file. The drawback is that we're new to its syntax, and it's not clear (yet) how we would efficiently map the thousands of routes with dynamic values that we're aiming at.

ASP.NET MVC Routing
It seems that ASP.NET MVC's built-in routing engine can provide us with what we are looking for. At first glance, looking at the standard examples of route definitions, it didn't appear that promising. Remember, we want to support URLs such as www.marinas.info/germany and www.marinas.info/wifi for a lot of search categories. The routes to match those requests would either have to be very specific, e.g. one per category, or very general, e.g. one for all categories. The first approach would lead to the thousands of routes I mentioned earlier; the latter would also catch requests for other parts of the application that have nothing to do with search, since it's so general. It turns out there are more advanced features in ASP.NET MVC Routing that allow us to overcome these restrictions.
Custom Route Constraint with IRouteConstraint, and Custom Route Handler with MvcRouteHandler
This answer on Stack Overflow, to a question concerning exactly the same problem, points to a blog post on writing your own route constraint – a great way to deal with the second problem mentioned above, that a too general route would match too many requests. A custom route constraint could e.g. restrict the application of a certain route to only valid values. Another approach, presented in this post on SEO friendly URLs in MVC, would be to derive from the default MvcRouteHandler and implement some custom logic to handle our custom URLs.

Conclusion: ASP.NET MVC Routing has it all
I'm glad I set out to examine those different solutions, because otherwise I would have totally underestimated the power of ASP.NET MVC Routing. Another great post on Optimizing ASP.NET MVC3 Routing talks about even more of its aspects and how Stack Overflow handles their large number of routes with high performance, using (mostly) built-in functionality. Looks very much like this is the way to go!
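To make the route constraint idea concrete, here is a minimal sketch of what such a constraint could look like – the class and route names are made up, and a real list of valid categories would of course come from our data store:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Routing;

public class KnownCategoryConstraint : IRouteConstraint
{
    private readonly HashSet<string> _categories;

    public KnownCategoryConstraint(IEnumerable<string> categories)
    {
        _categories = new HashSet<string>(categories, StringComparer.OrdinalIgnoreCase);
    }

    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        // only let the route match if the URL segment is a known search category
        object value;
        return values.TryGetValue(parameterName, out value)
               && _categories.Contains(value as string ?? string.Empty);
    }
}

// registration ("routes" being the RouteCollection, e.g. in global.asax):
// one general route that only fires for valid categories
routes.MapRoute(
    "CategorySearch",
    "{category}",
    new { controller = "Search", action = "Index" },
    new { category = new KnownCategoryConstraint(new[] { "germany", "wifi" }) });

Happy coding!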
by Oliver
22. August 2012 13:48
When we started development on Marinas.info, we decided to write acceptance tests for all important features of our application. This decision was further justified by the fact that a bunch of similar platforms are to follow, built on the same codebase. We wanted an application with fewer bugs and easier maintenance. Writing good, automated acceptance tests is not easy, and it's not fast, either. For some time now, we've been trying to get the first set of our tests to run green, which proved especially tricky on our TeamCity continuous integration server. This post presents a working solution.

The ingredients: SpecFlow, Coypu (Selenium), Browser, Web Server, and MvcIntegrationTestFramework

SpecFlow
In the .NET world, using SpecFlow to write acceptance tests is nothing new, and it has recently become, yet again, more appealing after its update to version 1.9. Our scenarios, e.g. the one verifying image upload functionality, are simple to write, easy to read, and great living documentation.

For browser based tests you need:

Coypu (Selenium)
Everyone who has written tests with Selenium for even a mildly ajax-y site knows how painful it can be to create reliably working tests. Coypu alleviates the pain and makes test creation as straightforward as it should be in the first place. Coypu is:
a robust wrapper for browser automation tools on .Net, such as Selenium WebDriver, that eases automating ajax-heavy websites and reduces coupling to the HTML, CSS & JS
a more intuitive DSL for interacting with the browser in the way a human being would
A few examples of Coypu's clean API can be seen in one of the step definitions for the above scenario (Browser is an instance of the BrowserSession class from Coypu):
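A sketch of what such step definitions look like – the step texts and field names are made up for illustration, and NUnit's Assert is assumed:

[When(@"I fill in the marina name ""(.*)"" and save")]
public void WhenIFillInTheMarinaNameAndSave(string name)
{
    Browser.FillIn("Name").With(name);   // finds the field by label text, id, or name
    Browser.ClickButton("Save");         // retried until found, so ajax re-renders don't hurt
}

[Then(@"I should see the message ""(.*)""")]
public void ThenIShouldSeeTheMessage(string message)
{
    Assert.That(Browser.HasContent(message));   // waits for the content to appear
}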
A web browser
To run browser based tests you, of course, need … a browser! Coypu offers support for quite a bunch of them, including the usual suspects Internet Explorer, Chrome, and Firefox.

A web server
You need to host your application in some web server or other to process requests. Well, this statement turns out to be only partially true, as you will see with the MvcIntegrationTestFramework. But at least for browser based tests you need a web server, and you basically have the choice between IIS and IIS Express (if you don't want to write your own or use someone else's implementation). We chose IIS Express as it is manageable through a non-administrator account, but it needs to be installed on all machines that will execute the tests.

For non-browser based tests: MvcIntegrationTestFramework
Introduced by Steven Sanderson in 2009, this small framework allows you to write integration tests for ASP.NET MVC applications and to execute them without a browser! It empowers you to make assertions on your controllers' actions' results rather than on the rendered HTML output, by injecting some clever hooks into your MVC application under test. An example of how a test would look can be found in the above mentioned post. The "magic" of this framework lies in the use of ApplicationHost.CreateApplicationHost(), which creates an application domain for hosting your ASP.NET application.

How to put the pieces together
After a quite radical evolution of our test code (which you can read up on in my follow-up post, The Long Road to Browser Based Acceptance Testing), we finally settled on the following.

Before the first test starts, we set up an instance of the AUT (application under test). This includes:
    deploying the AUT as we do for our staging environment, but to a temp folder
    initializing an AppHost instance à la MvcIntegrationTestFramework, i.e. an ASP.NET enabled application domain that hosts the AUT
    executing the Orchard setup command via the AppHost instance (instead of running the setup through a browser, which we used to do but was a lot slower)

Before each test run (SpecFlow scenario) we then execute various commands to set up the environment for the concrete test, e.g.:
    clean the database, simply by overwriting it with a copy we saved right after the initial setup
    create Marina entries that will be displayed and searchable on the site, again using the AppHost instance

Once we want to execute steps in the browser, we do the following:
    start an instance of IIS Express pointing to the deployed application (we used the wrapper code from Spinning up IISExpress for integration testing)
    initiate a Coypu BrowserSession, which under the hood creates an instance of the browser you choose – after battling with Internet Explorer, Chrome, and Firefox Portable, we now use Firefox 10.0.6 ESR (Extended Support Release), because version 10 is as of now the highest version supported by Selenium (2.25) and the ESR doesn't ask to be updated all the time

After each test run (SpecFlow scenario) we do this:
    close the browser
    shut down the IIS Express instance (we slightly modified the above mentioned wrapper code, calling Kill() on the process instance after the call to CloseMainWindow(), so that it reliably terminates even on TeamCity – see the sketch below)

Conclusion
Setting up a reliable environment for automatically executing acceptance tests has not been a walk in the park, but we finally have a solution that basically "just works". Hopefully, our experience will help you save a couple of hours and also some headache along the way.
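That shutdown tweak is small but crucial on a build agent. A sketch of the idea, with _iisProcess being the Process instance that the wrapper holds:

// ask IIS Express to shut down gracefully, then force it if it doesn't comply
if (!_iisProcess.CloseMainWindow() || !_iisProcess.WaitForExit(5000))
{
    _iisProcess.Kill();
    _iisProcess.WaitForExit();   // make sure the port is free before the next test run
}

Happy coding!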