What's Wrong With Our Specification By Example Tests

by Oliver 25. July 2013 11:44

We've been working on our customizable portal software discoverize for about two years now, using Orchard CMS. From the beginning, we were convinced that Specification by Example was the right way to build up living documentation of our software's functionality. This has been very important to us since we plan to drive tens, if not hundreds, of portals off the same code base.

Last year, I wrote about how we do our integration testing. I've also written about why we do browser-based testing as opposed to the lower-level testing that is in place, e.g., in the Orchard.Specs project inside the Orchard source solution. But here we are, a year later, and we're still not happy with our approach.

Problems we're facing

The biggest problem is that writing an acceptance test for a new feature takes nearly as much time as implementing the feature itself. This might be tolerable for mission-critical software used in banks or space shuttles, but it's just over the top for a consumer website. On the other hand, we want assurance that the software we ship contains as few bugs as possible.


Development speed down by 50%

The websites we generate using our software are quite complex and interactive. This has repeatedly made it challenging to write robust browser-based tests. We chose Coypu over Selenium because it has a cleaner API and handles asynchronous postbacks really well, but its API still limits us in places, so we regularly find ourselves hacking around those limitations. All of those hacks have to be tested, too, which takes a noticeable amount of time.
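
To give an idea of what our specs boil down to, here's a stripped-down Coypu sketch (URL, port and field names are made up for illustration, not taken from our actual code):

using System;
using Coypu;
using NUnit.Framework;

[TestFixture]
public class SearchSpecs
{
    [Test]
    public void Searching_shows_results()
    {
        var configuration = new SessionConfiguration
        {
            AppHost = "localhost",                    // the published test instance
            Port = 8080,
            Timeout = TimeSpan.FromSeconds(10),       // Coypu retries finders until this elapses
            RetryInterval = TimeSpan.FromSeconds(0.5)
        };

        using (var browser = new BrowserSession(configuration))
        {
            browser.Visit("/search");
            browser.FillIn("SearchTerm").With("hotel");
            browser.ClickButton("Search");
            // No hand-rolled waiting: HasContent polls until the timeout,
            // which is what makes Coypu handle asynchronous postbacks so well.
            Assert.IsTrue(browser.HasContent("results"));
        }
    }
}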

Another problem we face is that we need to change our HTML to accommodate testing. For example, we keep adding id attributes to elements just so we have an easy and reliable way of accessing them in our specification tests. This doesn't feel right, but it's how we get things to work.
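
A made-up example of the kind of markup change we mean:

<!-- before: nothing stable for the test to hold on to -->
<span class="price">42,00 €</span>

<!-- after: an id added purely for the specs -->
<span id="entry-price" class="price">42,00 €</span>

The step definition then reads the element via Coypu:

var priceText = browser.FindCss("#entry-price").Text;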

Test execution time too high for continuous feedback

The spec test execution time is too high. We're talking about 50-90 seconds per test case if it passes, plus another 20-30 seconds if it fails (because of the browser automation timeouts). That's a real bummer because it's so easy to lose focus during that time. Additionally, before executing a test suite (or a single test, if that's all you need), we compile and publish our code to a separate destination which the specs then run against. This process takes another 45-50 seconds, which is okay when you run all specs at once but adds significant overhead when working on a single acceptance test.
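
In SpecFlow, such a once-per-run publish step can be wired up via the BeforeTestRun hook. A sketch of what that looks like; PublishPortal is a hypothetical stand-in for the actual compile-and-publish code, and the path is made up:

using System;
using System.Diagnostics;
using TechTalk.SpecFlow;

[Binding]
public static class DeploymentHooks
{
    // Runs once before the whole test run, not before every scenario.
    [BeforeTestRun]
    public static void PublishSite()
    {
        var watch = Stopwatch.StartNew();
        PublishPortal(@"C:\deploy\portal");  // hypothetical helper: compile + publish
        Console.WriteLine("Publish took {0:F0}s", watch.Elapsed.TotalSeconds);
    }

    private static void PublishPortal(string destination)
    {
        // e.g. shell out to msbuild and copy the output to 'destination' (omitted)
    }
}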

Related to this, we keep having trouble quickly finding the cause of a broken test: since a single run sometimes takes longer than the time between commits, not every commit gets pushed through the acceptance test pipeline, so a failure can point to any one of several commits.

Test Fragility keeps us busy

Another recurring problem is tests breaking due to UI changes. It might be a simple change to CSS, HTML or a JavaScript snippet, but it happens all the time. Also, usually at least a couple of tests break simultaneously because they reuse certain steps, which is not only annoying but often misleading as to where the error really comes from.
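
To illustrate the step reuse problem with a made-up binding: dozens of scenarios share one step, so a single change to the element it targets breaks all of them at once, and the failing step hides the UI change that actually caused it.

using Coypu;
using TechTalk.SpecFlow;

[Binding]
public class NavigationSteps
{
    private readonly BrowserSession _browser;

    // SpecFlow injects the shared browser session (registered elsewhere).
    public NavigationSteps(BrowserSession browser)
    {
        _browser = browser;
    }

    // Many scenarios reuse this single step. Rename the link in the markup
    // and all of them fail simultaneously, even though only one line of
    // HTML changed.
    [When(@"I open the entry details")]
    public void WhenIOpenTheEntryDetails()
    {
        _browser.ClickLink("Details");  // made-up locator
    }
}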

Demoralization

All of the above leads to decreased morale, both in writing new tests and in fixing broken ones, which in turn adds even more overhead to the development process.

Looking for success stories

This post came into existence because we believe in Specification by Example and we also believe that other teams are successfully running integration tests, even through an automated browser. If you're part of such a team, or have any other valuable feedback to share, please do so in the comments.

Happy testing!

Comments (3)

Joe (United Kingdom), 9/13/2013 12:04:42 PM

Hi,
I work in a company that has successfully implemented SpecFlow BDD. We had problems at first with tests taking as long to write as implementing the feature, but we've now got it to a level where it's trivial.

For example, the snippet below is actually THREE acceptance tests, even though only ONE scenario has been written:

@headless
Scenario Outline: Find services page
  Given I am on the Find services page
  When I click on a '<link>'  
  Then I should go to '<page>'

Examples:
  | link                          | page                       |
  | service finder                | Urgent Care Service Search |
  | Services A-Z                  | Services A-Z               |
  | LINks and Healthwatch members | Accountability             |

The focus should be on creating some very generic scenarios that can be reused; testing new features then becomes as trivial as adding more test data.
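
The bindings behind such an outline are equally generic. Roughly like this (a sketch with made-up routes and locators, not our production code):

using Coypu;
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class FindServicesSteps
{
    private readonly BrowserSession _browser;

    public FindServicesSteps(BrowserSession browser)
    {
        _browser = browser;
    }

    [Given(@"I am on the Find services page")]
    public void GivenIAmOnTheFindServicesPage()
    {
        _browser.Visit("/find-services");  // made-up route
    }

    [When(@"I click on a '(.*)'")]
    public void WhenIClickOnA(string link)
    {
        _browser.ClickLink(link);
    }

    // Each row in the Examples table runs this assertion once.
    [Then(@"I should go to '(.*)'")]
    public void ThenIShouldGoTo(string page)
    {
        Assert.IsTrue(_browser.HasContent(page));
    }
}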

Hope this helps, but drop me an email if you need any further tips. We have over a thousand scenarios on one of our applications, and they run in a couple of minutes, so they don't really affect our continuous integration anymore. One of the main things that can slow tests down is the insertion and clean-up of test data. We stopped doing that and just use existing data. It may occasionally add a bit of maintenance, but the speed gained in running the tests makes up for that a hundred times over.

Give me a shout if you want any further tips.


Lance Kind (United States), 10/7/2013 10:54:26 AM

It's difficult (though maybe still possible) to succeed with a browser-ONLY test strategy. I don't know if you've heard of the Test Pyramid, but take it into account, as this simple model reflects what happens in reality: 100% UI testing is costly. I've also tried this and found it high-maintenance:
* creating the tests in the first place (not bad)
* keeping them working correctly (harder)
* keeping the feedback loop fast (impossible)

blogs.agilefaqs.com/.../

Nice writeup; it added structure to my own past experiences doing System-Test-Only and UI-Test-Heavy projects. Now that I've learned better, I just don't do that and *always* advise my clients to avoid it like the plague.

Lance Kind
http://ConfessionsOfAnAgileCoach.blogspot.com


Chris (Australia), 11/18/2013 6:36:18 AM

Hi Anton,

Just wondering if you've found a way to improve on the situation you outlined in this post?
For acceptance tests we use SpecFlow for web-service/back-end integration testing. We have minimal UI tests through Canopy (http://lefthandedgoat.github.io/canopy/) and have had some success.

We're now trying to refactor and expand all our testing scenarios (UI/REST/SOAP). I've had a look at a few frameworks and am interested to hear whether your situation has improved since this post?
We'd appreciate any help you can give us before we embark down the same road!

Thanks
Chris


About Oliver

I build web applications using ASP.NET and have a passion for JavaScript. I enjoy MVC 4 and Orchard CMS, and I do TDD whenever I can. I like clean code. I love to spend time with my wife and our children. You can find my profile on Stack Exchange.

About Anton

I'm a software developer at teamaton. I code in C# and work with MVC, Orchard, SpecFlow, Coypu and NHibernate. I enjoy beach volleyball, board games and Coke.