Acceptance tests – the simpler the better
I have always preferred writing software to writing about it; however, while working for various companies I noticed that all the teams were facing similar kinds of problems. I thought it would be a good idea to share my observations, as well as some information about the tools I have written for myself and for others who would like to use them.
My first post is about a topic that becomes relevant to every company at some point – writing acceptance tests. I would like to share my remarks on dealing with those tests, and describe why, in the end, I decided to create my own framework.
First steps with ATDD and FitNesse
A few years ago I was working in a team developing a new, modern version of an application for our customer. We tried a new approach to testing our software. Until then, our projects had been tested in such a way that the development team wrote unit tests, while a dedicated test team later executed automated UI tests as well as a long list of manual tests. In our project, we wanted to follow an ATDD model and introduce automated acceptance tests. That is why we started using the FitNesse framework for .NET. It provided a web application with an editor, where our BA/SME could enter acceptance criteria in a tabular format. It also allowed us to execute those tests against the services that we were developing. Below is an example of TableFixture usage, taken from the FitNesse User Guide:
!|TableFixtureTest |
|Item |Product code |Price |
|Pragmatic Programmer |B978-0201616224| 34.03|
|Sony RDR-GX330 |ERDR-GX330 | 94.80|
|Test Driven Development By Example|B978-0321146533| 32.39|
|Net Total | |161.22|
|Tax (10% on applicable items) | | 9.48|
|Total | |170.70|
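Every such table has to be backed by a fixture class in the test code. Below is a rough sketch of what that fixture could look like; it assumes fitSharp's TableFixture base class and helper methods like DoStaticTable, GetString, Right and Wrong, so treat the exact names as an approximation rather than the actual API:

// A rough sketch of the fixture backing the table above.
// It assumes fitSharp's TableFixture base class; the exact method
// names (DoStaticTable, GetString, Right, Wrong) may differ.
public class TableFixtureTest : fitnesse.fixtures.TableFixture
{
    protected override void DoStaticTable(int rows)
    {
        decimal netTotal = 0;

        // Row 0 is the header; the last three rows hold the expected totals.
        for (int row = 1; row < rows - 3; row++)
        {
            // In a real fixture the price would come from the tested service;
            // here it is read back from the table just to keep the sketch short.
            netTotal += decimal.Parse(GetString(row, 2),
                System.Globalization.CultureInfo.InvariantCulture);
        }

        decimal expectedNetTotal = decimal.Parse(GetString(rows - 3, 2),
            System.Globalization.CultureInfo.InvariantCulture);

        if (expectedNetTotal == netTotal)
            Right(rows - 3, 2);
        else
            Wrong(rows - 3, 2, netTotal.ToString());
    }
}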
We thought that the idea behind this approach was very good and definitely much better than what we had had before. First of all, the BA was able to write tests and check immediately whether a given scenario was supported by the application. Secondly, we could finally get some clear requirements and scenarios for our project and verify them against the written code much more quickly.
After a few months, we noticed that reality was a bit different. All scenarios entered in the editor had to be mapped to the underlying code in order to execute the tests. Unfortunately, this editor offered neither hints for the test syntax nor any list of the implemented methods that could be used to write test scenarios. It meant that it was not possible to write scenarios without constantly checking how other tests were written, or looking directly into the code to see what was possible, which became quite problematic when our project grew a bit. Maybe because of that, or maybe because our BA/SME were too busy doing their regular work, they never entered a single test in this editor. It meant that we had been left alone with this new tool…
So, from a developer’s perspective, we had to learn a new syntax for writing tests. It sounds easy to learn how to use a table fixture, a row fixture and a few other constructs, but we had to spend quite a long time learning how to cope with all the special cases, like dealing with expected/actual value comparison, properly escaping all special characters, etc. It also took us a while to organize our tests and mappings properly.
Talking about reorganization… There was basically no such thing as refactoring, which meant that we had to repeat all of the changes twice – first in the code, then in the web editor, typing everything manually. The bigger the changes, the more painful they were.
In the end, we managed to finish this project successfully, and the testing process we applied was remarkably better than the previous one; however, writing acceptance tests with this tool was not a pleasure. After a break from the project, it was difficult to jump back into those tests, re-learn the way they worked and recall all the syntax.
A story about BDD and SpecFlow
A few years later, in a different country, company and project, we started using SpecFlow to express business requirements as testable scenarios, written in a BDD style. Below is an example taken from Wikipedia:
Story: Returns go to stock

In order to keep track of stock
As a store owner
I want to add items back to stock when they're returned

Scenario 1: Refunded items should be returned to stock
Given a customer previously bought a black sweater from me
And I currently have three black sweaters left in stock
When he returns the sweater for a refund
Then I should have four black sweaters in stock
This time, our PO and QA were actively working with us on writing and validating requirements, so it made sense to use this tool, because it allowed them to write scenarios without any knowledge of programming languages. After a few months of work, however, we started having the same maintainability issues with those tests as before.
SpecFlow is much better integrated with Visual Studio than FitNesse. It is possible to write and execute tests directly from the IDE, and the SpecFlow plugin offers some help while writing them. What it has in common with FitNesse is that it is also based on the concept of writing scenarios in plain text, which are later mapped to the underlying code. Like FitNesse, it also has custom conventions and mechanisms for those mappings. When our project was small, we did not notice any problems, but when it grew a bit (together with the test code base), our tests became difficult to maintain in this form.
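To give an idea of what those mappings look like, here is a minimal sketch of step bindings for the sweater scenario above. The [Binding], [Given], [When] and [Then] attributes come from SpecFlow; the Stock class is made up purely for this example:

using System.Collections.Generic;
using NUnit.Framework;
using TechTalk.SpecFlow;

// Minimal stand-in for the real system under test, made up for this example.
public class Stock
{
    private readonly Dictionary<string, int> _items = new Dictionary<string, int>();
    public void Add(string item, int count) { _items[item] = count; }
    public void Return(string item) { _items[item]++; }
    public int Count(string item) { return _items[item]; }
}

[Binding]
public class ReturnsGoToStockSteps
{
    private Stock _stock = new Stock();

    [Given(@"a customer previously bought a black sweater from me")]
    public void GivenACustomerPreviouslyBoughtABlackSweater()
    {
        // Nothing to prepare in this simplified example.
    }

    [Given(@"I currently have three black sweaters left in stock")]
    public void GivenThreeBlackSweatersLeftInStock()
    {
        _stock.Add("black sweater", 3);
    }

    [When(@"he returns the sweater for a refund")]
    public void WhenHeReturnsTheSweaterForARefund()
    {
        _stock.Return("black sweater");
    }

    [Then(@"I should have four black sweaters in stock")]
    public void ThenIShouldHaveFourBlackSweatersInStock()
    {
        Assert.AreEqual(4, _stock.Count("black sweater"));
    }
}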
As before, any refactoring applied in the code had to be manually applied in the feature text files as well. The IDE was also constantly reporting that none of the underlying scenario methods were used (because of the reflection-based mapping), so it was difficult to determine what could really be cleaned up and what was actually used during test execution.
While the overall look & feel of the framework is very similar to standard testing frameworks like NUnit, SpecFlow follows different rules for executing tests, which we were not aware of at the beginning. A good example is that all methods with the [BeforeScenario] attribute are called for each scenario, no matter whether they belong to the same class as the executed scenario steps or not. What we expected was the same behavior as with the NUnit [SetUp] attribute. It was a big surprise when we discovered this, and we had to spend a lot of time figuring out how to write test initialization code properly.
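A small illustration of what surprised us (the class and step names below are made up): with the two binding classes shown here, both initialization methods run before every scenario in the project, even though each of them looks like it belongs only to "its own" feature:

using TechTalk.SpecFlow;

[Binding]
public class OrderSteps
{
    [BeforeScenario] // runs before EVERY scenario in the project, not only those using OrderSteps
    public void PrepareOrderDatabase() { /* ... */ }

    [Given(@"a pending order")]
    public void GivenAPendingOrder() { /* ... */ }
}

[Binding]
public class InvoiceSteps
{
    [BeforeScenario] // also runs before every scenario, including ones that never touch invoices
    public void PrepareInvoiceService() { /* ... */ }

    [Given(@"an issued invoice")]
    public void GivenAnIssuedInvoice() { /* ... */ }
}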
The second significant difference was that the binding rules allowed the steps belonging to one scenario to be mapped to methods spread across multiple class instances.
We encountered cases where the Given methods (supposed to set up the scenario data) were executed on a different class instance than the When methods (supposed to act on the previously prepared data). This problem, as well as the issues related to the [BeforeScenario] behavior, forced us to start using ScenarioContext to share data between the various steps. ScenarioContext is basically a global dictionary that allows putting and getting data objects identified by string literals, so its usage made our tests even less readable.
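In practice, sharing state between steps looked roughly like the sketch below (the step texts and the Customer/Order classes are made up; ScenarioContext.Current is the dictionary-like accessor available in the SpecFlow versions we used back then):

using NUnit.Framework;
using TechTalk.SpecFlow;

// Made-up domain classes, used only to illustrate the data sharing.
public class Order { }
public class Customer
{
    public Order PlaceOrder() { return new Order(); }
}

[Binding]
public class CustomerSteps
{
    [Given(@"a registered customer")]
    public void GivenARegisteredCustomer()
    {
        // State has to be pushed into a global, string-keyed dictionary...
        ScenarioContext.Current["customer"] = new Customer();
    }
}

[Binding]
public class OrderingSteps
{
    [When(@"the customer places an order")]
    public void WhenTheCustomerPlacesAnOrder()
    {
        // ...and pulled out (and cast back) wherever it is needed,
        // because this step may execute on a different class instance.
        var customer = (Customer)ScenarioContext.Current["customer"];
        ScenarioContext.Current["order"] = customer.PlaceOrder();
    }

    [Then(@"the order should be created")]
    public void ThenTheOrderShouldBeCreated()
    {
        Assert.IsNotNull(ScenarioContext.Current["order"]);
    }
}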
The beginnings of LightBDD
So finally, after spending yet another day trying to understand how our tests worked and how the test context was being shared between steps, I started working on a simple wrapper on top of NUnit tests that would allow us to:
- write tests with testing tools that everybody knows how to use and knows what to expect from, and
- use all of the standard refactoring methods and IDE help to maintain those tests, but
- keep all of our test definitions as clear as possible, so they would be readable and editable by people who do not know C#.
This was the beginning of LightBDD…
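To show the direction we were heading in, here is a sketch of the kind of test we wanted to be able to write (this is only an illustration of the idea, not necessarily the exact API that LightBDD ended up with): the scenario is a regular NUnit test, the steps are plain C# methods, and the readable text comes from their underscored names:

using NUnit.Framework;

[TestFixture]
public class Returns_go_to_stock_feature
{
    private int _blackSweatersInStock;

    [Test]
    public void Refunded_items_should_be_returned_to_stock()
    {
        // A plain NUnit test: the steps are ordinary methods, so renaming,
        // navigation and "find usages" work exactly as for any other C# code,
        // while the underscored names stay readable for non-programmers.
        Given_three_black_sweaters_left_in_stock();
        When_a_customer_returns_a_black_sweater_for_a_refund();
        Then_I_should_have_four_black_sweaters_in_stock();
    }

    private void Given_three_black_sweaters_left_in_stock() { _blackSweatersInStock = 3; }
    private void When_a_customer_returns_a_black_sweater_for_a_refund() { _blackSweatersInStock++; }
    private void Then_I_should_have_four_black_sweaters_in_stock() { Assert.AreEqual(4, _blackSweatersInStock); }
}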