by Anton
2. October 2013 15:16
Usually we spec out features using SpecFlow. Then we write the step definitions and code the feature (or vice versa). When we programmed the “export entries” feature for the portal management area of discoverize, we did so using TDD (test-driven development) with unit tests. Since it is an MVC project, we could mock the controller (and the services it needs). It all went well, and in the end the feature was coded. Yet the SpecFlow scenario had no step definitions to fulfill it:

@unittest
Scenario: Export every property from every entry
    Given I have 2 entries
    When I export all properties
    Then I get a file with 3 lines
    And the first line contains the column names, that is, the property names
    And each other line represents the data of one entry
Usually we write steps that follow links and push buttons in the web interface – as the user would do. This time – since we already had good coverage of the controller action – we decided to hook up the unit tests as the step definitions.
This is quite easy if you know how. We used the @unittest tag to suppress starting IIS Express and the browser for this scenario. Since our unit tests live in a different project than the SpecFlow tests, we did everything according to this documentation. After a little refactoring in the unit tests to extract appropriate methods for the steps, and after adding the step attributes, the SpecFlow scenario went green:
[Then(@"I get a file with (\d+) lines"), Scope(Tag = "unittest")]
public void FileHasLines(int numberOfLines)
{
    var lines = _exportText.Split('\n');
    Assert.AreEqual(numberOfLines, lines.Count());
}
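For reference, the hook that skips starting the web host for @unittest scenarios could look roughly like this. This is a minimal sketch; the class name and startup details are illustrative, not our actual code:

using System.Linq;
using TechTalk.SpecFlow;

[Binding]
public class WebHostHooks
{
    [BeforeScenario]
    public static void StartWebHostUnlessUnitTest()
    {
        // Scenarios tagged @unittest run against the mocked controller,
        // so they need neither IIS Express nor a browser.
        if (ScenarioContext.Current.ScenarioInfo.Tags.Contains("unittest"))
            return;

        // ... start IIS Express and create the browser session here ...
    }
}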
by Oliver
25. July 2013 11:44
We've been working on our customizable portal software discoverize for about two years now, using Orchard CMS. From the beginning we were convinced to use Specification by Example to build up living documentation of the functionality of our software. This has been very important to us since we plan to drive tens if not hundreds of portals from the same code base. Last year, I wrote about how we do our integration testing. I've also written about why we do browser based testing as opposed to the lower level testing that is in place e.g. in the Orchard.Specs project inside the Orchard source solution. But here we are, a year has passed, and we're still not happy with our approach.

Problems we're facing

The biggest problem is that writing an acceptance test for a new feature takes nearly as much time as implementing the feature itself. This might be tolerable for mission critical software used in banks or space shuttles, but it's just over the top for a consumer website. On the other hand, we want some assurance that the software we ship contains as few bugs as possible.

Development speed down by 50%

The websites we generate using our software are quite complex and interactive. This has repeatedly posed challenges for writing robust browser based tests. We chose Coypu over Selenium because it has a cleaner API and handles asynchronous postbacks really well, but its API has still been limiting to us, so we regularly find ourselves hacking around those limitations. All of which has to be tested, of course, which takes a noticeable amount of time. Another problem we face is that we need to change our HTML to accommodate testing. For example, we keep adding id attributes to elements just so we have an easy and reliable way of accessing those elements in our specification tests. This doesn't seem right, but that's how we get stuff to work.

Test execution time too high for continuous feedback

The spec test execution time is too high. We're talking about 50-90 seconds per test case if they pass; add another 20-30 seconds if they fail (because of the browser automation timeouts). That's a real bummer because it's so easy to lose focus during that time. Additionally, before executing a test suite (or a single test, if that's what you want) we compile and publish our code to a separate destination which the specs run against. This process takes another 45-50 seconds, which is okay if you run all specs at once but adds significant overhead when working on a single acceptance test. Related to this, we keep having trouble quickly finding the cause of a broken test, because not all of our commits are pushed through the acceptance tests pipeline: a single run sometimes takes longer than the time between commits. This makes finding the cause of a breaking test harder.

Test fragility keeps us busy

Another recurring problem is tests breaking due to UI changes. It might be a simple change of CSS, HTML, or a JavaScript snippet, but it happens all the time. Also, there are usually at least a couple of tests that break simultaneously because they reuse certain steps, which is not only annoying but often misleading as to where the error really comes from.

Demoralization

All of the above leads to decreased morale, both in writing new tests and in fixing broken ones. Which in turn adds even more overhead to the development process.
Looking for success stories

This post came into existence because we believe in Specification by Example, and we also believe that other teams are successfully running integration tests, even with an automated browser. If you're part of such a team, or have any other valuable feedback to share, please do so in the comments. Happy testing!
by Oliver
13. May 2013 11:41
Today, this question came up on the SpecFlow Google Group:

Assuming I would like to define in Gherkin the following:
1. When I send some argument xxx with parameter aaa and another parameter bbb
2. When I send some argument xxx with parameter aaa
And I would like to have only one reusable function, something like this:

[When(@"I send some argument (.*) with parameter (.*) and another parameter (.*)")]
public void foo(string arg, string paramA, string paramB)
{
    // check if paramB is null and do something
}

I am aware of the table feature (pipe separated values) but I would like to stick with this text-alike syntax.

We've encountered this use case several times in the past (also avoiding the table syntax) and used to solve it by delegating the shorter version to the longer one, but I decided to go see if I could find a more elegant solution.

Matching the steps

The first task was simply to match both steps. Since version 1.9, SpecFlow has wonderful syntax highlighting in .feature files which helps identify unbound steps. It shows that our first pattern is too greedy: it matches the second step as well, just not the way we need. Changing the regular expression for the first parameter to something more restrictive allows us to restrict the match to only the first step (the second step then gets colored purple, notifying us that there is no matching step definition for it yet). The regex ([^ ]*) we use here matches all characters that are not spaces 0 to n times, thus denying the match of the second step because of the space character following the argument aaa. Sometimes, though, you also need to match spaces in arguments, and that's when we use a slightly modified version: "([^"]*)". It means: match a quote, then match everything but a quote 0 to n times and save this match as a reference, then match another quote. In a verbatim string (prefixed by the @ sign) the quotes are escaped by doubling them; see the sketch below. Note that you'll now have to enclose your spaced string value in quotes in the scenario text, but you can still put the step attribute on the same method.
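Put together, the restricted step definitions might look like this. A sketch under the assumptions above; the method names and bodies are mine:

// ([^ ]*) cannot match across spaces, so this pattern no longer swallows
// the longer step text "... with parameter aaa and another parameter bbb".
[When(@"I send some argument ([^ ]*) with parameter ([^ ]*)")]
public void WhenISendSomeArgumentWithParameter(string arg, string paramA)
{
    // handle the one-parameter case
}

// For arguments that may contain spaces, quote them in the scenario:
//   When I send some argument "a value with spaces" with parameter aaa
[When(@"I send some argument ""([^""]*)"" with parameter ([^ ]*)")]
public void WhenISendSomeQuotedArgumentWithParameter(string arg, string paramA)
{
    // handle the quoted-argument case
}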
Now, let's go for the second argument.

Using a .NET Optional Parameter

My first try was to add an optional parameter to the method we already have and provide it with a default value. Unfortunately, SpecFlow complains that for the first step, with only one argument, the matching method needs to have only one parameter. I had thought that the compiler would generate two methods here, one with and one without the optional parameter, so that at runtime the right one could be picked depending on which parameters were provided with a value. It turns out that this is not so. The IL code for a method with optional parameters contains only a single method as well, as per this article:

Intermediate language for a method with optional parameters (IL):

.method private hidebysig static void Method([opt] int32 'value', [opt] string name) cil managed
{
.param [1] = int32(1)
.param [2] = string('Perl')
.maxstack 8
L_0000: ldstr "value = {0}, name = {1}"
L_0005: ldarg.0
L_0006: box int32
L_000b: ldarg.1
L_000c: call void [mscorlib]System.Console::WriteLine(string, object, object)
L_0011: ret
}
That's why SpecFlow complains.
Solution: Use Two Methods
It looks like there is no direct solution to the problem that would require only a single method. The solution we had employed all along seems to be all you can do about it, at least right now with SpecFlow 1.9: use (at least) two separate methods, one of which delegates its execution to the other, more general one.
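With the step texts from the question above, the delegation looks roughly like this (method names and bodies are illustrative):

[When(@"I send some argument ([^ ]*) with parameter ([^ ]*)")]
public void WhenISendSomeArgumentWithParameter(string arg, string paramA)
{
    // Delegate to the more general step, passing a default for the missing parameter.
    WhenISendSomeArgumentWithParameterAndAnotherParameter(arg, paramA, null);
}

[When(@"I send some argument ([^ ]*) with parameter ([^ ]*) and another parameter ([^ ]*)")]
public void WhenISendSomeArgumentWithParameterAndAnotherParameter(string arg, string paramA, string paramB)
{
    // Treat paramB == null as "not provided".
}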
Happy spec'ing!
by Oliver
20. March 2013 15:24
While setting up a specification tests project for our new TeamReview tool, I was facing an HTTP 500.19 error when hosting our site in IIS Express. There are lots of questions on Stack Overflow concerning this error, and Microsoft has a whole page on it, but there is a whole bunch of sub-errors that this error code covers.

Error 0x8007007b: Cannot read configuration file

Unfortunately, none of the above mentioned links contained or solved the specific error code I was seeing:

Error Code: 0x8007007b
Config Error: Cannot read configuration file
Config File: \\?\C:\Projects\_teamaton\teamreview\TeamReview.Specs\bin\Debug\..\..\..\TeamReview.Web\web.config

After some reading, trying, and fiddling, it dawned on me that maybe the path to the config file somehow messed up IIS Express. I admit that using the parent directory dots was at least a bit unusual. But it came from my test harness code, where I wanted to use relative paths and used Path.Combine() to do that:

var webPath = Path.Combine(Environment.CurrentDirectory, "..", "..", "..", "TeamReview.Web");
Pitfall: .. in path
Well, it turns out IIS Express didn't like it. Once I called it with a cleaned up path string, everything just worked:
"C:\Program Files (x86)\IIS Express\iisexpress.exe" /path:"C:\Projects\_teamaton\teamreview\TeamReview.Web" /port:12345 /systray:false
So, watch out for your physical path values when using IIS Express!
Use DirectoryInfo to navigate up your directory tree
To get the correct path without using absolute paths, but also avoiding the .., I used the DirectoryInfo class:
var webPath = Path.Combine(
new DirectoryInfo(Environment.CurrentDirectory).Parent.Parent.Parent.FullName, "TeamReview.Web");
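Path.GetFullPath would have been an alternative here: it collapses the .. segments before the path ever reaches IIS Express. A minimal sketch, assuming the same directory layout as above:

// Normalizes "...\TeamReview.Specs\bin\Debug\..\..\..\TeamReview.Web"
// to "...\teamreview\TeamReview.Web" before anyone else sees the path.
var webPath = Path.GetFullPath(
    Path.Combine(Environment.CurrentDirectory, "..", "..", "..", "TeamReview.Web"));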
by Oliver
22. August 2012 13:48
When we started development on Marinas.info, we decided to write acceptance tests for all important features of our application. This decision was all the more justified by the fact that a bunch of similar platforms are to follow, built on the same codebase. We wanted an application with fewer bugs and easier maintenance. Writing good, automated acceptance tests is not easy, and it's not fast, either. For some time now, we've been trying to get the first set of our tests to run green, which proved especially tricky on our TeamCity continuous integration server. This post presents a working solution.

The ingredients: SpecFlow, Coypu (Selenium), a browser, a web server, and the MvcIntegrationTestFramework

SpecFlow

In the .NET world, using SpecFlow to write acceptance tests is nothing new, and it has recently become, yet again, more appealing after its update to version 1.9. One of our scenarios verifies the image upload functionality: it's simple to write, easy to read, and makes great living documentation.

For browser based tests you need:

Coypu (Selenium)

Everyone who has written Selenium tests for even a mildly ajax-y site knows how painful it can be to create reliably working tests. Coypu alleviates the pain and makes test creation as straightforward as it should be in the first place. In its authors' words, Coypu is:

- A robust wrapper for browser automation tools on .Net, such as Selenium WebDriver that eases automating ajax-heavy websites and reduces coupling to the HTML, CSS & JS
- A more intuitive DSL for interacting with the browser in the way a human being would
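To give a taste of Coypu's clean API, here is a sketch of what one of our step definitions might look like. The step text, element names, and success message are invented for this example; Browser is an instance of the BrowserSession class from Coypu:

[When(@"I upload the image ""(.*)""")]
public void WhenIUploadTheImage(string fileName)
{
    Browser.ClickLink("Add image");              // finds the link and retries the click until it succeeds
    Browser.FillIn("ImageFile").With(fileName);  // also works for file inputs
    Browser.ClickButton("Upload");               // Coypu waits out the asynchronous postback
    Assert.IsTrue(Browser.HasContent("Upload successful"));  // retried until Coypu's timeout
}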
A web browser

To run browser based tests you, of course, need... a browser! Coypu supports quite a bunch of them, including the usual suspects Internet Explorer, Chrome, and Firefox.

A web server

You need to host your application in some web server or another to process requests. Well, this statement turns out to be only partially true, as you will see with the MvcIntegrationTestFramework. But at least for browser based tests you need a web server, and you basically have the choice between IIS and IIS Express (if you don't want to write your own or use someone else's implementation). We chose IIS Express as it is manageable through a non-administrator account, but it needs to be installed on all machines that will execute the tests.

For non-browser based tests: MvcIntegrationTestFramework

Introduced by Steven Sanderson in 2009, this small framework allows you to write integration tests for ASP.NET MVC applications and execute them without a browser! It empowers you to make assertions on your controllers' actions' results, rather than on the rendered HTML output, by injecting some clever hooks into your MVC application under test. An example of what a test looks like can be found in the above mentioned post. The "magic" of this framework lies in its use of ApplicationHost.CreateApplicationHost(), which creates an application domain for hosting your ASP.NET application (see the sketch at the end of this post).

How to put the pieces together

After a quite radical evolution of our test code (which you can read up on in my follow-up post The Long Road to Browser Based Acceptance Testing), we finally settled on the following.

Before the first test starts, we set up an instance of the AUT (application under test). This includes:
- deploying the AUT as we do for our staging environment, but to a temp folder
- initializing an AppHost instance à la MvcIntegrationTestFramework, i.e. an ASP.NET enabled application domain that hosts the AUT
- executing the Orchard setup command via the AppHost instance (instead of running the setup through a browser, which we used to do but which was a lot slower)

Before each test run (SpecFlow scenario) we then execute various commands to set up the environment for the concrete test, e.g.:
- clean the database, simply by overwriting it with a copy we saved right after the initial setup
- create Marina entries that will be displayed and searchable on the site, again using the AppHost instance

Once we want to execute steps in the browser, we do the following:
- start an instance of IIS Express pointing to the deployed application (we used the wrapper code from Spinning up IISExpress for integration testing)
- initiate a Coypu BrowserSession, which under the hood creates an instance of the browser you choose; after battling with Internet Explorer, Chrome, and Firefox Portable, we now use Firefox 10.0.6 ESR (Extended Support Release) because version 10 is as of now the highest version supported by Selenium (2.1.25) and the ESR doesn't ask to be updated all the time

After each test run (SpecFlow scenario) we do this:
- close the browser
- shut down the IIS Express instance (we slightly modified the above mentioned wrapper code, calling Kill() on the process instance after the call to CloseMainWindow(), so that it reliably terminates even on TeamCity)

Conclusion

Setting up a reliable environment for automatically executing acceptance tests has not been a walk in the park, but we finally have a solution that basically "just works". Hopefully, our experience will help you save a couple of hours and also some headache along the way. Happy coding!
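PS: the promised sketch of the ApplicationHost "magic". This is a hypothetical, heavily simplified illustration of the idea, with names of my own choosing, not the framework's actual source:

using System;
using System.Web.Hosting;

// Deriving from MarshalByRefObject lets us call into the hosted AppDomain
// through a proxy. Note: CreateApplicationHost can only instantiate the type
// if its assembly is resolvable from the target application's bin folder.
public class AspNetTestHost : MarshalByRefObject
{
    public static AspNetTestHost Create(string physicalPath)
    {
        // Spins up a new AppDomain that is configured like an ASP.NET worker
        // process for the application rooted at physicalPath, and returns a proxy.
        return (AspNetTestHost)ApplicationHost.CreateApplicationHost(
            typeof(AspNetTestHost), "/", physicalPath);
    }

    // Any method called on the proxy executes inside the hosted AppDomain,
    // where HttpRuntime and the application's code are available.
    public string GetHostedDomainName()
    {
        return AppDomain.CurrentDomain.FriendlyName;
    }
}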
by robert
6. June 2010 18:21
BDD offers a chance to improve our development process. Where we are, where we want to go, and how we can get there: that's what this blog post is about.

Taking stock

Sometimes we write BDD-inspired code, but we don't reach the essential goal of BDD: "QA and non-technical project participants should be able to work together better." Occasionally a BDD-style specification is presented to a customer, but our user stories are more like collections of tasks that stand completely detached from the implementation. To exaggerate a little: BDD-style, for us, is really just TDD with a different naming convention.

Where we want to go

I would suggest that we adopt the central idea of BDD for ourselves as well: "We want to use BDD to improve the communication between customer, UI team, developers, and QA." What this means for the individual roles:

Product owner / customer
- Works with user stories and scenarios in a given format
- Gets a bird's-eye view of implementation progress

Tester / QA
- Automated tests become more meaningful; the source code becomes more accessible because it is easier to understand
- Can formulate test scenarios in a developer-friendly language and (passively) follow the implementation progress
- Can define test scenarios directly in the source code
- Acts like a "product owner" for the QA perspective

UI team
- Has a better view of what is happening in development
- Can more easily support the specification work and deliver the right format for project managers, customers, and developers right away

Developers
- Implementation steps and progress can be followed by non-technical people
- Must work in a way that documents the actual implementation of the specifications for the non-technical part of the team

Concrete steps

1. Requirements capture: The first step we have to take is to capture requirements, user stories etc. in a BDD-friendly format. (See the template below.)
2. Implementation feedback: So that everyone can see where we stand, we need a feedback mechanism that is part of the build process, ideally generated by the continuous integration server. (The screenshot shows a StoryQ HTML report.)

Template for BDD specifications

The template for the BDD specification is Denglish (a German/English mix); we should consider whether to use German here, although we actually implement methods and classes in English (where the concepts exist in English, too).

Title (description of the user story, may span multiple lines)
The user story goes:
As a [role]
I want [feature]
So that [benefit]
Acceptance criteria (as scenarios):
Scenario 1: Title
Given [context]
And [more context]...
When [event]
Then [outcome]
And [another outcome]...
Scenario 2: ...
Scenario 3: ...
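To make the template concrete, here is a small sketch of how such a story could be expressed with StoryQ. The story, scenario, and step names are invented for illustration:

using System.Reflection;
using NUnit.Framework;
using StoryQ;

[TestFixture]
public class AccountSpecs
{
    [Test]
    public void UserCanResetPassword()
    {
        new Story("password reset")
            .InOrderTo("regain access to my account")
            .AsA("registered user")
            .IWant("to reset my password via email")

            .WithScenario("request a reset link")
                .Given(IHaveARegisteredAccount)
                .When(IRequestAPasswordReset)
                .Then(IReceiveAResetLinkByEmail)

            // produces the report that the CI server can publish
            .ExecuteWithReport(MethodBase.GetCurrentMethod());
    }

    private void IHaveARegisteredAccount() { /* arrange */ }
    private void IRequestAPasswordReset() { /* act */ }
    private void IReceiveAResetLinkByEmail() { /* assert */ }
}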
Framework or convention

There are frameworks like grains of sand on the beach. Which one is the right one? What does our process look like; how does a user story become code? How can we make sure that the implementation and its current state don't stay with the implementer alone, but are available to everyone? We could use the following criteria:
- All team members, especially non-technical ones, should get easy access to the implementation status.
- The framework should produce good reports, or its reports should be easy to extend.
- An integration with Target-Process would be ideal (here, at least, we could perhaps include the corresponding user story ids and thus generate links). An integration via the test cases would be conceivable.

Specification != test-driven development

It seems important to me that specifications are not confused with TDD, and that a BDD process encompassing many roles stands alongside unit and integration tests, complementing them. Our customers or the UI team are not interested in implementation details that may have been developed test-driven with unit tests. BDD, as I understand it, should be relevant to the business logic. To set BDD tests apart from other tests, they should perhaps live in their own folder structure. A file and class naming convention should clearly distinguish BDD tests. The feedback for non-developers (e.g. HTML reports) on implementation status and specification should therefore cover only BDD tests, or at least present them separately.

Links:
http://blog.dannorth.net/whats-in-a-story/
http://blog.thomasbandt.de/39/2326/de/blog/tdd-bdd-status-quo.html
http://www.jamesthigpen.com/blog/2009/05/07/simple-bdd-reporting-with-nunit/
by robert
13. June 2009 17:05
Here, once more, the requirement driving the development (CTRL F12). The tests are green. After the implementation, the following impressions remain:

Positives
- Writing the specification first focuses you on the goal
- Development flows easily
- The class and method names speak for themselves

Negatives
- The helper methods don't quite want to fit
- The code doesn't seem truly aesthetic (but maybe that's just how real-world code sometimes is?)

Here is the test class after the implementation:

namespace Tests.Domain.Campsites
{
    public class CampsiteImageSearchBehaviour : BaseTest
    {
        // Id of one of the images created during arrangement (index 8 of the 10 created)
        private int _createdImageId { get { return _imageSetup.Created[8].Id; } }

        public void Arrange_n_images_in_storage(int amountOfImages)
        {
            _nHibernateHelper.TruncateTableCampsiteImages();
            _imageSetup.Add(amountOfImages).Persist();
        }

        private CampsiteImageSearchDescription Get_search_desc_for_id(int imageId)
        {
            var searchDesc = new CampsiteImageSearchDescription();
            searchDesc.Filter.CampsiteImageIds.Add(imageId);
            return searchDesc;
        }

        [Test]
        public void Should_retrieve_pager_from_search()
        {
            Arrange_n_images_in_storage(10);
            var searchDesc = Get_search_desc_for_id(_createdImageId);

            _campsiteImageService.GetBy(searchDesc);

            Assert.That(searchDesc.PageCount, Is.EqualTo(1));
            Assert.That(searchDesc.TotalItems, Is.EqualTo(1));
        }

        [Test]
        public void Should_return_image_by_id_using_search_description()
        {
            Arrange_n_images_in_storage(10);
            var searchDesc = Get_search_desc_for_id(_createdImageId);

            var campsiteImages = _campsiteImageService.GetBy(searchDesc);

            Assert.That(campsiteImages.Count, Is.EqualTo(1));
            Assert.That(campsiteImages[0].Id, Is.EqualTo(_createdImageId));
        }
    }
}