2018
10.21

LightBDD.Tutorials
Hi,

It has been a while since I wrote anything here, but today I would like to announce the LightBDD.Tutorials repository and the first LightBDD tutorial I have made: Web Api Service Tests!

LightBDD has existed since 2013 and has evolved a lot since then. Throughout that time I have tried to maintain the documentation wiki as well as a set of example projects, but both were focused more on presenting what LightBDD is capable of. What has been lacking is a full example of how it can be used with technologies like WebApi, NServiceBus, Selenium and others.

With the introduction of LightBDD.Tutorials this is finally changing 🙂

At this moment there is one tutorial showing how to use LightBDD to service test an AspNetCore WebApi project but, over time, the repo will grow with more fully working examples.

Happy reading and testing!

2017
07.30

This summer is very busy at work. The Summer of Craft has started to strengthen our development culture. Each day there is something interesting going on, where people can learn new things, share knowledge or meet with peers. One of the events is called Techfast, a 30 minute chat on a given topic over breakfast, and I had an opportunity to drive one about asynchronous programming and execution. After the discussion, I received feedback that it helped people understand what async is about, so I thought it would be worth sharing here as well.

Asynchronous programming is not a new concept. Asynchronous programming patterns have existed in .NET in various forms since version 2.0; however, with the language support for the async / await keywords in .NET 4.5, asynchronous programming became very popular and is now used almost everywhere.

While I now use async features at work as well as in my own projects (LightBDD 2.0 is based on async), I still remember how confusing it all was when I started exploring it. I come from a C++ / C# background where, in the past, if I wanted to make some functionality more responsive or process things faster, I used threads. So when I started working with async methods I had questions like:

  • How does an async method work?
  • Does async execution involve multi-threading?
  • How is async different from multi-threading?

After spending more time digging into the implementation details, reading the documentation and simply using it for a while (which involved finding a few surprising behaviors), I got a better understanding of how it works. I realized, however, that it would have been much simpler to grasp if I had found a simple real-life example describing it well – that is why I initiated that discussion at Techfast.

Let's make some burgers

Let’s forget about programming languages and syntax for a while and think about something much nicer – food – or, to be more precise, preparing food.

We will be preparing a burger from: buns, beef patties, onions, lettuce, tomato, cheese slices and ketchup.

Preparing a burger:

  1. First, wash all the vegetables, peel the onions, slice the onions and tomatoes and finally put them on the grill.
  2. Then put a beef patty on the grill, remembering to flip it a few times in order to make it well done (I don’t like blood in my food), and when it is almost ready, put a slice of cheese on top of it to melt a little bit.
  3. In the meantime, put the buns on the grill for a moment to toast them.
  4. When everything is ready, take it off the grill, put the meat in the bun followed by onion, lettuce and tomato, add some ketchup on top and cover with the other half of the bun.
  5. The burger is done!

So what does this have to do with async? Let’s think for a while. There were a few distinct tasks there:

  1. Wash and slice the vegetables and grill them;
  2. Grill the beef patty, flipping it a few times and melting the cheese at the end;
  3. Toast the buns;
  4. Put everything together to finish the burger.

Asynchronous vs synchronous operations

Now, did I do those tasks one after another? Did I wait until the vegetables roasted before I put the patty on the grill? No, not really. As soon as I put the onions and tomatoes on the grill I started grilling the patty as well as the buns. I did it asynchronously! As soon as I realized that I would have to wait to finish my current task (grilling veggies), I switched to another one (grilling the patty), then another (toasting the buns). I could do that because I was not involved in the process of grilling – it did not matter whether I stood next to the grill or not, all the ingredients would keep roasting.

So what is the difference between a synchronous and an asynchronous operation? A synchronous operation is one in which I am fully involved. The example here is washing and slicing the vegetables. I am actively doing it. I cannot walk away from the sink or table hoping that the vegetables will wash and slice themselves. Also, as I am actively doing that work, there is no waiting element, so there is no reason to start another task. I do it from beginning to end – synchronously.

So the first observation is: async is about utilizing the time we would otherwise spend waiting (do the Thread.Sleep() or Semaphore.Wait() methods ring a bell?) to do other tasks.
The second observation: async is about dividing an operation into a set of smaller tasks that can then be executed asynchronously.

Does async mean multi-threading?

So how about multi-threading? Do async operations involve multiple threads?

Let’s modify our example a bit:
This time we will be making the same burger(s), but there will be two people preparing them.
The first cook takes the first available task (washing and slicing vegetables).
At the same time the other cook goes to the BBQ and starts grilling the patty and buns.
When the vegetables are sliced, the first cook puts them on the grill as well…

We could continue this story by saying that the cooks will move on to prepare other burgers, or we could add more tasks for the cooks, but I think the example illustrates the crux of the story.
The cook represents a thread. In the first scenario we managed to prepare a burger with one thread / cook, whereas in the second scenario the tasks were distributed between multiple cooks.

As we managed to make a burger in both scenarios, the conclusion is: async execution is independent of multi-threading. Multiple threads can support / speed up asynchronous operations, but a single thread is enough to perform async operations as well.

Conclusion

When we cook, we have to perform various small tasks to finish with our favorite burger in hand. Some of the tasks require our full involvement and attention, such as slicing or washing vegetables. Other tasks however (such as grilling patties or toasting buns) do not require our full attention, allowing us to start them, move on to something else in the meantime and come back to finish them when ready.

In the programming world it is very similar. Some operations, such as typical collection sorting algorithms, can be executed immediately, while others, such as network or disk I/O operations, require the executing thread to wait for them to finish.
Async programming with the async / await model allows building methods composed of tasks, where the executing thread can move on to execute a different task if the current one cannot proceed without waiting.
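
To put the analogy into code, here is a minimal C# sketch (the method names are illustrative, not taken from any real API): the synchronous step keeps the thread busy, while awaiting the grill tasks lets a single thread start all of them and come back when they finish.

```csharp
using System;
using System.Threading.Tasks;

class BurgerKitchen
{
    // Synchronous work – we are fully involved, nothing to await.
    static void WashAndSliceVegetables() => Console.WriteLine("Washing and slicing veggies...");

    // Asynchronous work – the grill does the job; we only start it and come back later.
    static Task GrillVegetablesAsync() => Task.Delay(3000);
    static Task GrillPattyAsync() => Task.Delay(5000);
    static Task ToastBunsAsync() => Task.Delay(1000);

    static async Task MakeBurgerAsync()
    {
        WashAndSliceVegetables();                 // done from beginning to end, synchronously

        var veggies = GrillVegetablesAsync();     // start grilling and, instead of standing
        var patty = GrillPattyAsync();            // next to the grill, start the remaining
        var buns = ToastBunsAsync();              // tasks as well

        await Task.WhenAll(veggies, patty, buns); // come back when everything is roasted
        Console.WriteLine("The burger is done!");
    }

    static Task Main() => MakeBurgerAsync();
}
```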

As one cook is enough to make a burger, one thread is also sufficient for asynchronous execution. As with more cooks, more threads may make things faster, but they are not necessary – for example:

  • JavaScript is single-threaded and supports asynchronous processing,
  • .NET console applications are asynchronous and multi-threaded by default,
  • .NET WinForms applications use one thread by default to process async methods called from the UI thread.

2017
02.28

LightBDD2
I’m happy to announce that LightBDD 2 is released and ready to be used.

New platforms and frameworks support

LightBDD has been reworked to support various platforms and frameworks.
With version 2, the LightBDD packages target both .NET Framework (>= 4.5) and .NET Standard (>= 1.3), which allows LightBDD to be used on platforms like the regular .NET Framework, .NET Core or even the Universal Windows Platform.

New testing framework integrations

The testing framework integration projects have been reworked as well, to leverage the cross-platform framework capability and to remove the LightBDD 1.x integration drawbacks.

The following integrations are available with LightBDD 2:

  • LightBDD.NUnit3 – integration with the NUnit framework 3.x series,
  • LightBDD.NUnit2 – integration with the NUnit framework 2.x series (to simplify migration from LightBDD 1.x),
  • LightBDD.XUnit2 – integration with the xUnit framework 2.x series,
  • LightBDD.MsTest2 – integration with MsTest.TestFramework, the successor of MsTest.

Asynchronous scenario support

The LightBDD 2 runners fully support async scenario execution.

The example below shows scenario execution for steps returning Task:
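
A minimal sketch of such a scenario (class, step names and namespaces are illustrative – see the project wiki for the exact syntax):

```csharp
public class Login_feature : FeatureFixture
{
    [Scenario]
    public async Task Successful_login()
    {
        await Runner.RunScenarioAsync(
            Given_the_login_page_is_open,
            When_the_user_enters_valid_credentials,
            Then_the_user_should_be_logged_in);
    }

    // Each step returns Task, so it can await asynchronous operations.
    private Task Given_the_login_page_is_open() => Task.CompletedTask;
    private Task When_the_user_enters_valid_credentials() => Task.CompletedTask;
    private Task Then_the_user_should_be_logged_in() => Task.CompletedTask;
}
```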

It is also possible to mix synchronous steps with asynchronous ones, using the RunScenarioActionsAsync method:
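
A sketch of the mixed form, assuming RunScenarioActionsAsync accepts the steps as a plain list where the asynchronous ones are async void methods:

```csharp
[Scenario]
public async Task Mixed_scenario()
{
    await Runner.RunScenarioActionsAsync(
        Given_synchronous_setup,       // regular void method
        When_asynchronous_action,      // async void method
        Then_synchronous_assertion);
}

private void Given_synchronous_setup() { /* ... */ }
private async void When_asynchronous_action() { await Task.Delay(100); }
private void Then_synchronous_assertion() { /* ... */ }
```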

New configuration mechanism

The LightBDD configuration mechanism has been changed too. In version 2, all the configuration is done in code, and the framework has been changed to allow more customizations than version 1.x.
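
As a sketch of the idea (the exact configuration methods may differ – see the project wiki), the LightBDD scope is declared once per test assembly and the runner is configured by overriding OnConfigure:

```csharp
// Namespaces and method names below may differ between LightBDD versions.
[assembly: ConfiguredLightBddScope]

internal class ConfiguredLightBddScope : LightBddScopeAttribute
{
    // Called once, before any scenario is executed.
    protected override void OnConfigure(LightBddConfiguration configuration)
    {
        configuration
            .ReportWritersConfiguration()
            .AddFileWriter<XmlReportFormatter>("~\\Reports\\FeaturesReport.xml");
    }
}
```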

More details

For more details, feel free to visit the project home page.
To jump quickly into the code, the quick start wiki page may be helpful.
Finally, there is also a wiki page describing how to migrate tests between LightBDD major versions.

Happy testing!

2017
01.22

It has been almost three and a half years since the first version of LightBDD was released on NuGet (1.1.0) and almost half a year since the last update (1.7.2).
Since the beginning of the project, new C# language features like async/await have become popular, and new platforms such as .NET Core and standards like .NET Standard have emerged.

Because LightBDD 1.X targets .NET Framework 4.0, and because of a few of its implementation details (like the usage of ThreadLocal<>, StackTrace or CriticalFinalizerObject), it has been difficult to adapt it to these new trends.
Also, with the project’s evolution, some of its features have become obsolete.

For these reasons, it is time to make bigger changes in the framework and take it to version 2.

What will change?

Full async support in core engine

The version 2 engine will be designed to run scenarios and steps in an asynchronous manner. Async scenario and step methods will be supported, but ultimately it will depend on the step syntax implementation. The plan is to support async execution in the extended step syntax (parameterized steps) but not in the simplified one.

Support for other platforms and frameworks

The first release of version 2 will target .NET Framework 4.6 as well as .NET Standard 1.6, making LightBDD available for new frameworks and platforms.
LightBDD will officially support the .NET Core and .NET Framework platforms, and possibly more in the future.

After the release, additional investigation and tests will be made in order to check the possibility of extending support to .NET Framework 4.5 (the lack of the AsyncLocal<> class makes it problematic).
Currently, the plan is to drop .NET Framework 4.0 support.

Testing framework support

LightBDD version 2 will support the following testing frameworks:

  • NUnit3 (.NET Framework and .NET Core),
  • XUnit2 (.NET Framework and .NET Core),
  • MsTest (.NET Framework and .NET Core).

MbUnit support will be dropped as the project is dead – it may however be added later if there is a need for it.

Framework modularization

The LightBDD projects have been reworked in order to separate LightBDD features from the core engine.
Features such as step execution syntax, step commenting or summary generation will be separated into dedicated packages, enabling:

  • the ability to version and evolve features independently,
  • users to pick the features they really need.

In code configuration

Because app.config is not available across all platforms, the LightBDD engine will now be configured in code.
For each testing framework, LightBDD scope initialization code will have to be present (the scope initialization may look different depending on the testing framework) and it will allow configuring the runner, including:

  • customizing summary report generation,
  • enabling additional features and extensions,
  • customizing framework core mechanics like culture info, step name formatting method and more.

The code below may change before release, but it visualizes how the configuration will be done:
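
A rough sketch of the direction (purely illustrative – the final API was still being shaped at the time of writing):

```csharp
// The attribute declares the LightBDD scope for the test assembly
// and is the place where the runner gets customized.
[assembly: ConfiguredLightBddScope]

internal class ConfiguredLightBddScope : LightBddScopeAttribute
{
    protected override void OnConfigure(LightBddConfiguration configuration)
    {
        // e.g. enable additional features and customize report generation
        configuration
            .ExecutionExtensionsConfiguration()
            .EnableStepCommenting();
    }
}
```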

Less workarounds

The new version will eliminate some of the caveats of the LightBDD 1.X implementation.

It will no longer be necessary to apply the [assembly: Debuggable(true, true)] workaround to properly format scenario names in release mode – instead, it will be required to use the LightBDD specific [Scenario] attribute to mark scenario methods, in place of test framework specific attributes like [Test], [Fact] or [TestMethod].

Also, the explicit LightBDD scope means that summary files are always generated, whereas in version 1.X summary files were not generated if their creation took more than 3 seconds (a limitation of CriticalFinalizerObject).

Migrating LightBDD 1.X to 2.0

The upgrade to version 2 will require test code updates; however, the number of changes is reduced to a minimum and will most likely cover:

  • namespaces update,
  • framework specific test method attribute update to [Scenario] attribute,
  • LightBDD configuration change from app.config to in-code configuration,
  • updates in context based scenarios.

LightBDD 2 will not be binary compatible with LightBDD 1.X.

When will LightBDD 2 be available?

The current state of the project is that all the implementation changes are finished; however, other tasks have to be done before the release, including finalizing the layout of the projects, updating the CI pipeline, the documentation and the wiki pages.

The new release should be available in the next few weeks.

2016
07.17

Octopus Project Builder

In my last post I wrote about my plans to create the Octopus Project Builder, a tool allowing Octopus Deploy projects to be configured from YAML files, like Jenkins Job Builder does for Jenkins.

Since last month, I managed to progress with this work and I would like to share the outcome.

The Octopus Project Builder allows configuring:

  • Project Groups,
  • Projects,
  • Lifecycles,
  • Library Variable Sets (including Script Modules).

As I mentioned previously, the Project definitions can be very verbose, especially in the deployment actions section, which is why OPB also allows defining templates for Projects, Deployment Steps as well as Deployment Step Actions. The templates can be parameterized, and it is possible to override template values when a template is used in a resource definition.

So what does it look like?
Below are example YAML files with sample configuration.

The Project Group, Lifecycle and Library Variable Set definitions are self-explanatory.

The Project definition YAML, however, is very simple. That is because it uses a parameterized template to install the NuGet package on the target boxes.

So how does the template look?

The project template specifies the most common properties (so they do not need to be defined in each project). It also defines two template parameters: one to specify the name of the package to be installed, and the other to specify the machines on which the package will be installed. Further in the template definition, the template parameters are used with the ${param_name} syntax. The template itself also uses another template to define the deployment step action. This example shows that template parameters can be passed further down to inner templates.
Finally, the deployment action definition shows escape sequences.
Normally, any occurrence of ${param_name} is treated as a template parameter usage. If this behavior is not desired, the $ symbol has to be escaped with \. However, in this example we want to compose Octopus.Action.Package.CustomInstallationDirectory from the installation directory and the package name, which is why there is a \\ that represents the directory separator.
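
As a purely illustrative sketch of the parameter and escaping mechanics (the keys and names below are hypothetical – the configuration manual describes the exact OPB schema):

```yaml
Templates:
  DeploymentActions:
  - Name: install-nuget-package
    TemplateParameters:
    - Name: packageName
    Properties:
      Octopus.Action.Package.NuGetPackageId: ${packageName}
      # '\' escapes the next character, so '\\' yields a literal '\'
      # directory separator in front of the substituted package name:
      Octopus.Action.Package.CustomInstallationDirectory: C:\Services\\${packageName}
```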

Yaml configuration description

The YAML configuration offers many more options than the ones presented in the example. The OctopusProjectBuilder project home page contains a configuration manual describing the full configuration model.

Finally, a NuGet package is available as well: OctopusProjectBuilder.Console.

Feel free to take a look at it and give it a go.
Also, any feedback is welcome.

Have fun!

2016
06.13

Over the last few months we have been redefining our CI/CD pipelines to use Octopus Deploy for deployments. Octopus is a great tool for defining and managing deployment environments and deployments. It allows the environment details (like the number of boxes and box names), the environment-related settings (like URLs and connection strings) and the deployment process (the steps that have to be performed) to be nicely separated from the project executables. Moreover, Octopus offers out of the box all the tools needed to propagate packages to all target boxes and install them as Windows services or IIS applications – a great benefit, because previously we had to develop and maintain quite complicated scripts for doing the same.

Over these few months of work we have found, however, one deficiency in this tool. All of the project, process, variable and environment configuration has to be done through the UI and, as in every UI, some operations are not easy to perform. Scenarios like moving variables from a project to a variable set, duplicating steps within a process or applying the same changes to multiple processes are time consuming and, over time, a bit irritating.

Jenkins Job Builder

We had a similar issue with another tool in the past: Jenkins, but we found a great solution for it: Jenkins Job Builder. JJB allows defining Jenkins jobs in a human friendly YAML format, and the beauty of it is that:

  • it is text, so all the operations like moving variables to a different scope, changing definitions, renaming etc. are as simple as text copy-paste/replace operations,
  • it can be put into a source control system, which allows seeing the change history and gives an easy way of restoring previous versions,
  • it can be easily applied to other Jenkins instances (which is very handy in case of migrations and box rebuilding).

Octopus Project Builder

Inspired by Jenkins Job Builder, I decided to spend some time on creating a similar tool for Octopus, the Octopus Project Builder, hosted on GitHub: https://github.com/Suremaker/OctopusProjectBuilder.

The project is at a very early stage, but I have managed to explore the Octopus API a bit with Octopus.Client and YAML serialization with YamlSerializer.

So, what does it look like?

I have a Project Group with a test project:

Project Group

After running the OPB download command:

OctopusProjectBuilder.exe -a download -d c:\temp\octo -u http://localhost:9020/api -k API-XXXXX

I got the file ProjectGroup_My group.yml with content:
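
An illustrative sketch of such a file (the exact generated schema is described in the OPB configuration manual):

```yaml
ProjectGroups:
- Name: My group
```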

…so the OPB managed to generate YAML for my project group.

Then, I edited the file with this content:
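
…renaming the existing group and adding a new one, along these lines (again, an illustrative sketch):

```yaml
ProjectGroups:
- Name: My renamed group
  RenamedFrom: My group
- Name: My new group
```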

and ran the upload command:

OctopusProjectBuilder.exe -a upload -d c:\temp\octo -u http://localhost:9020/api -k API-XXXXX

Finally, I got my project group renamed and a new group created:

Updated Project Groups

Managing more data

Now it is time for more complicated stuff: the projects themselves. This is the current work in progress. So far, I have noticed that Octopus stores the step action definitions in a slightly different, key-value format. Here is a sample of how a project may look:
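
An illustrative fragment (hypothetical keys) showing the key-value action properties:

```yaml
Projects:
- Name: My project
  DeploymentProcess:
    Steps:
    - Name: Install service
      Actions:
      - Name: Install service
        ActionType: Octopus.TentaclePackage
        Properties:
          Octopus.Action.Package.NuGetPackageId: MyService
          Octopus.Action.Package.AutomaticallyUpdateAppSettingsAndConnectionStrings: 'True'
```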

Future plans

Playing with Octopus and YAML is an interesting experience and I would like to explore it a bit more.
So far I have a few thoughts on what I would like to implement here.

First of all, none of the samples have any IDs in the YAML. I want to build all the correlations based on the human friendly names. Above, I presented a scenario where a Project Group was renamed. Basically, it will be possible to specify that the current name is Y, while the previous one was X. When OPB uploads definitions to Octopus, it will first look for the name Y and then for X, in order to rename it to Y if it has not been renamed yet.

The Actions section looks a bit complicated here, with long key names that are not really user friendly, like Octopus.Action.Package.AutomaticallyUpdateAppSettingsAndConnectionStrings. I worry that they are also not that well documented, so it may be a bit difficult to find them. To overcome this problem I would like to implement some macro/templating mechanism, a bit similar to JJB macros, that would allow defining a template of an action and then easily applying it in various projects.

The next thing is that OPB will support multiple input files, so it will be possible to split the definitions of projects, variable sets etc. On download, it will also store all the definitions separately.

Another thing is the sensitive data representation. I would like to implement a feature allowing sensitive data to be kept encrypted in the YAML, where OPB will decrypt it before uploading to Octopus.

Finally, I plan to support the following configuration in OPB:

  • Project Groups,
  • Projects (with process and variables),
  • Variable Set Libraries.

More updates will be posted soon…

2015
08.10

Acceptance testing service depending on Web API

Today, my new blog post has been published on tech.wonga.com.

I am describing there how we acceptance test services that depend on Web Api.

2015
07.02

LightBDD 1.7.0

It has been a while since I wrote my last post.
During this time many things have happened, including the fact that I have been implementing new requirements for LightBDD.

Finally, since the 21st of June, the new LightBDD version 1.7.0 is available for download on NuGet.

So what’s new:

xUnit support

Yes, since this version there is a LightBDD.XUnit package that supports writing and executing test scenarios with xUnit 2.0 (.NET 4.5.1). It also means that scenario tests can be executed in parallel, with full support from Visual Studio and Resharper!

The project wiki page contains information about the integration with xUnit, an adequate example project is present in the project repository, and the Templates folder and the Visual Studio Gallery extension package have been updated to support it.

The integration with xUnit was interesting in a few aspects.

First of all, xUnit 2.0 supports parallel test execution. While LightBDD has supported concurrent execution since its very early versions (it was always possible to run MbUnit tests in parallel), there was no decent support from the newer Visual Studio / Resharper versions for running them. With xUnit it is possible.

Because of this feature, the xUnit designers decided not to capture System.Console output during test execution. Instead, they offer an ITestOutputHelper interface that has to be used to capture test output in order to display it in the Visual Studio test windows. Later, however, I found that the xUnit console runner does not use this interface, but prints System.Console output. I also noticed that the standard progress printed by ConsoleProgressNotifier was unreadable, because multiple tests reported their progress at the same time.
The ability to print test execution progress is one of the important LightBDD features, which is why I had to solve both of those issues. Finally, I implemented a few more versions of the IProgressNotifier interface and configured LightBDD.XUnit to work properly with both Visual Studio and the console runner – more details are provided on the wiki page.

The second problem with xUnit was that it does not support Assert.Ignore() calls to stop test execution and make the test ignored at run time. This feature is crucial for LightBDD, because it allows executing all the already implemented steps in a given scenario, even if the entire scenario is not fully implemented yet. It gives better traceability of the scenario implementation progress.
To make it work, I had to extend xUnit a little bit – I added a [Scenario] attribute, which should be used instead of the [Fact] or [Theory] attributes, and a ScenarioAssert.Ignore() method allowing to ignore test execution at run time. Fortunately, xUnit has amazing extensibility points, all of which are supported natively by the test runners and Resharper. Only because of that was I able to implement ScenarioAssert.Ignore() for the LightBDD.XUnit integration.

So finally, a test written with LightBDD.XUnit looks like this:
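
A sketch of its shape (feature and step names are illustrative):

```csharp
// LightBDD 1.x style; namespaces may differ between versions.
public partial class Invoice_feature : FeatureFixture
{
    [Scenario] // used instead of xUnit's [Fact]
    public void Customer_should_receive_invoice_by_email()
    {
        Runner.RunScenario(
            Given_a_customer_with_a_finished_order,
            When_the_order_is_confirmed,
            Then_an_invoice_should_be_sent_to_the_customer);
    }
}
```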

Steps auto-grouping

Steps auto-grouping was a requirement posted on the LightBDD project page.

With this version of LightBDD, if consecutive steps start with the same type, e.g. GIVEN, WHEN, THEN, SETUP, all except the first step will be renamed to AND:
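
For example (illustrative step names), a scenario defined as:

```csharp
Runner.RunScenario(
    Given_the_user_is_about_to_login,
    Given_the_user_entered_valid_login,
    Given_the_user_entered_valid_password,
    When_the_user_clicks_the_login_button,
    Then_the_login_operation_should_be_successful);
```

would be reported along these lines:

```
GIVEN the user is about to login
AND the user entered valid login
AND the user entered valid password
WHEN the user clicks the login button
THEN the login operation should be successful
```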

More information about auto-grouping (and step syntax mixing) is available on wiki page.

Runtime comments for steps

This was another requirement posted on the project page.

Since this version, it is possible to use the StepExecution.Comment() and StepExecution.CommentFormat() methods to comment on the currently executed step, and those comments will be included in the execution reports – more on the wiki page.
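
For example (illustrative step body):

```csharp
private void When_the_user_clicks_the_login_button()
{
    StepExecution.Comment("clicking the login button");
    StepExecution.CommentFormat("login attempt no {0}", 1);
    // ...
}
```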

Visual Studio Gallery extension

Since version 1.6.1 it has been possible to install the LightBDD for Visual Studio extension from the Visual Studio Gallery. With this version, it has been extended with Project and Item templates allowing test scenarios to be written with xUnit.

And more improvements

To read about all the changes made in version 1.7.0, please take a look at the Changelog.txt file.

Finally, I am happy to announce that there is a LightBDD framework blog post on the Wonga company blog for you to read!

Happy testing!

2015
03.19

Basic expectations for acceptance test framework

In the previous post I described the story of working with acceptance tests and how I encountered the problems that motivated me to create the LightBDD framework. In this post I wrote about different types of tests, especially describing the nature of acceptance and end-to-end tests. Now, I would like to focus on my observations regarding the requirements for a framework that allows developers to work on behavioral tests effectively.

Basic requirements

While working in different companies I have realized that the expectations for acceptance tests and testing frameworks depend on the company's size and culture. The first team in which we started looking at improving our testing tools was part of a small company with a very informal culture. The Product Owner and Quality Assurance were dedicated to our team and paired with us in order to formulate scenarios that fulfilled their expectations but also fit the system architecture. They were interested in what our acceptance and end-to-end tests looked like. Both kinds of tests had only one purpose at that time – to ensure that our software works fine.

That was the time when we realized that tests written in SpecFlow were too difficult to maintain (I described the reasons previously). We started asking ourselves what we really need from a testing framework.

Clear tests

The first set of questions was related to the fact that we were receiving requirements from the PO/QA in the form of business scenarios. We wanted to be able to quickly answer PO/QA questions like:

Is this scenario already covered by tests?

What is this test checking exactly?

We thought that the best option would be to model our tests in a way that preserves the nice given-when-then form that the PO/QA were preparing for us. If our tests reflected the provided scenarios as closely as possible, they would be easy to present to the PO/QA, but also easy for developers to read and understand.

Maintainability

With the knowledge of the maintenance problems related to tests written in frameworks like SpecFlow / Fitnesse, we realized that maintainability was a crucial requirement for a testing framework. At that point we knew that it is a tricky problem, because maintainability issues reveal themselves after a longer period, when the project grows a bit. It is safe to say that a project consisting of 1 scenario written in any testing framework looks easy to maintain, but would it be the same with 30 different scenarios? What if there are even more? All projects evolve (unless they are dead), and so do the tests. Some scenarios become no longer applicable and are removed, some are added, while others are extended or shrunk by a few steps. Finally, some scenarios may become more precise or more general, so their steps would just be altered.

All of those changes brought the following questions that we started considering in our design decisions:

How easy would it be to add a new scenario?

How easy would it be to add steps to, or remove steps from, any given scenario?

How easy would it be to rename scenarios or steps?

If scenarios are removed, how easy would it be to clean up methods that are no longer used by any scenario?

How easy would it be to restructure and reorganize the test suite?

If a project has 5, 30 or 100 scenarios, how long would it take to apply those changes to all of them?

By how easy we mean:

  • how many manual steps have to be taken by a developer / PO / QA in order to apply the change?
  • do all of those steps have to be applied in one place / project / location / repository, or do they have to be made in different places?
  • how long would it take to apply such a change?

Clean code

Maintainability does not refer only to changing code. It is also about:

  • understanding existing tests by new people in a team,
  • investigating why they are failing,
  • checking which scenarios are still valid after requirements change.

It brings the following questions to be answered:

How easy would it be to understand how a given scenario works?

Is it possible to analyze the scenario flow without debugging it?

How easy would it be to debug a given scenario?

We wanted to have a framework that:

  • does not require using literals with regular expressions everywhere,
  • does not generate a bunch of files with unreadable code,
  • does not use loose bindings between scenarios and the underlying methods,
  • does not require the usage of static contexts or any complex constructs to pass state between scenario methods,
  • has intuitive behavior,
  • is easy to navigate in Visual Studio.

Traceability

Previously, I mentioned that acceptance tests cover a much wider scope than unit tests. During the investigation of failed acceptance or end-to-end tests we often asked questions like:

At which test stage did the scenario fail?

Which operation performed on the GUI failed the scenario?

Which component on the end-to-end journey behaved incorrectly?

We wanted a framework that would allow those questions to be easily answered at first glance, without spending minutes on analyzing logs and stack traces.

Execution progress monitoring

Acceptance tests are slow. End-to-end tests are even slower. All of us have spent so much time staring at TeamCity, waiting for tests to finish in order to close a ticket, to release a project to production or to finally go home leaving the board green. So many times it occurred that some of those tests were broken, causing the whole build to fail. Those failing builds took much more time to execute than the ‘normal’ builds, making the waiting even worse (I have described the reasons for this behavior in the Test characteristics section of this post)… If only we had known what was happening with those tests, we could have immediately detected the issue, stopped the tests, fixed it, rerun them and gone home… Of course, while fixing, we would add more Console.WriteLine() or _log.Debug() statements to the test methods to detect those problems much faster next time, but there were always some places where such logging was missing. Also, the practice itself was not good, because it made the whole test code less clear to read and required additional typing.

So, what we really wanted was a framework which would allow the following questions to be answered without any additional developer intervention:

What is the progress of tests that are currently being executed on CI?

Why does the current execution take 2 minutes longer than normal?

What are currently executed tests doing now?

Are those tests just slower but still passing, or is something horrible happening to them?

A simple solution is the best one

All of the requirements that I have just described could give the impression that we wanted a very complex, sophisticated framework that would take at least a year to build – it was exactly the opposite! The first version of the testing framework that fulfilled all of those requirements consisted of a class with 1 public method in total. It was quite difficult to even call it a framework…

Within a week, after a few design meetings, we came up with the idea of using the standard NUnit framework with a few conventions to write our acceptance tests:

  • reflect the Given-When-Then scenario name in the test method name,
  • represent each scenario step as a method call in the test,
  • name each step method the same as the step in the scenario (replacing spaces with underscores),
  • wrap all steps with a RunScenario method, so that step methods can be passed as delegates, which allows omitting brackets and displaying the execution progress,
  • separate all test implementation details from the test by using partial classes.

An example scenario taken from the Wikipedia BDD page (the widely cited returns-and-refunds story):
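
  1. Given that a customer previously bought a black sweater from me,
  2. and I currently have three black sweaters in stock,
  3. when he returns the black sweater for a refund,
  4. then I should have four black sweaters in stock.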

would look as follows (a sketch following the conventions listed above):
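
```csharp
[Test]
public void Refunded_items_should_be_returned_to_stock()
{
    RunScenario(
        Given_a_customer_previously_bought_a_black_sweater_from_me,
        And_I_currently_have_three_black_sweaters_in_stock,
        When_he_returns_the_black_sweater_for_a_refund,
        Then_I_should_have_four_black_sweaters_in_stock);
}
```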

with an example implementation as follows (again, a sketch):
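
```csharp
public partial class Returns_feature
{
    private int _blackSweatersInStock;

    private void Given_a_customer_previously_bought_a_black_sweater_from_me() { /* ... */ }
    private void And_I_currently_have_three_black_sweaters_in_stock() { _blackSweatersInStock = 3; }
    private void When_he_returns_the_black_sweater_for_a_refund() { _blackSweatersInStock++; }
    private void Then_I_should_have_four_black_sweaters_in_stock() { Assert.AreEqual(4, _blackSweatersInStock); }
}
```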

The BDDRunner.RunScenario() method was responsible for doing two things only:

  • executing the step delegates in the provided order,
  • printing the step name before its execution.
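
In essence, it could be as small as this sketch:

```csharp
public static class BDDRunner
{
    public static void RunScenario(params Action[] steps)
    {
        foreach (var step in steps)
        {
            // print the step name before executing it
            Console.WriteLine("STEP: {0}", step.Method.Name.Replace('_', ' '));
            step.Invoke();
        }
    }
}
```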

That’s it!
So, how were all the requirements fulfilled? Let’s see:

Clear tests: The conventions we used allowed the PO/QA to easily understand the tests, even though they were written purely in code. We were still able to pair and work together on them. We were also able to quickly browse our existing tests to check whether a given scenario was already in place or not.

Maintainability: We decided to place all our tests directly in code, representing all the feature elements (features, scenarios, steps) with corresponding code constructs like classes or methods. This allowed us to use all the standard developer tools (IDE, Resharper) and methods (refactoring, static analysis, running tests from the IDE) to maintain our test code effectively.

Clean code: Instead of reinventing the wheel, we decided to use existing tools for the things that they do well. Everybody knew the NUnit framework, how to write tests with it and what behaviors could be expected from it. We went with this well-known test structure. The convention that we used for structuring our tests gave us better clarity about what a given test is doing. Explicit step execution allowed us to analyze them quickly and effectively (after all, it is only a matter of navigating to the step method implementation).

Traceability: Representing each step as a method with a self-describing name and printing the step name before its execution allowed us to locate and understand scenario failures more quickly, by analyzing the exception/assertion stack trace or checking the execution console output in both CI and Visual Studio.

Execution progress monitoring: Again, because each step name was printed before its execution, we got execution progress monitoring for free. It finally allowed us to track on CI the current stage of the executed tests and to quickly determine that some of the steps were executing longer or failing. Also, because TeamCity used timestamps when printing console logs, we could analyze which steps executed longer and focus on their optimization.

LightBDD

I noticed that the small BDDRunner class became very helpful for our team in developing both acceptance and end-to-end tests, so I decided to create an open-source project and share it with others. The class that I described above became the first version of LightBDD – there is a first commit showing how it looked back then.

Thank you.

PS. In an upcoming post, I will describe how the requirements changed when I joined a larger company with a corporation-like environment, and how LightBDD evolved into its current form.

2015
03.04

Beyond unit tests

All the projects I have worked on were covered by various test types to ensure that the developed code functions as expected. It is interesting, however, that almost every project had a slightly different combination of test types. Also, I have noticed that each company named and structured those tests differently. Because all of those definitions are a bit blurred, I thought it would be a good idea to take a closer look and describe how the tests were constructed, what their purpose was, and what the working experience with them was from the developer's perspective.

Different test types

Below, I have enumerated the most memorable types I have seen:

unit tests: Definitely the most common and well-known test type. Well defined by Martin Fowler in his UnitTest bliki article. We used them to test pure business logic in isolation from external dependencies, like file system access, database, network etc. They are the fastest tests, as their scope is very small and all external dependencies are mocked.

integration tests: I should probably say: application-internal integration tests, as we used them to test all the classes responsible for communication with external dependencies like the database, file system etc. within a developed service or application. They have the same scope as unit tests, but they are much slower.

automated GUI acceptance tests: We used those to test desktop or web application GUIs using automation tools like Selenium or QTP. In one project, they were used to verify business scenarios of a desktop application deployed and configured in a testing environment, so the tests were heavy and slow, as their scope was the whole application. In another project, the tests covered only a thin presentation layer of a web application, where other parts such as back-end services were isolated.

service acceptance tests: We used those tests in various projects and companies to verify the behavior of services (HTTP or message based). They were user scenario specific, usually defined by the Product Owner and/or Quality Assurance. Their scope was a single service or a few services making up a logically autonomous component.

end-to-end tests: Those tests were used in projects focusing on bigger systems, especially with a SOA architecture, and we used them to ensure that all services or components worked together properly. Like the acceptance tests, they were scenario based, but their scope was basically the whole system.

manual user acceptance tests: Those tests were executed manually by QA to ensure that the application works as expected. Depending on the software's nature, they were similar to one of the previous three test types.

In fact there was a much longer list of those (like regression, smoke, load and performance tests, etc.), but I have decided to omit them, as their structure, scope and the way they work are similar to the tests mentioned above; the only difference is the reason why they are created, their function, or just a different name.

Beyond unit tests

Apparently, unit tests are the most obvious and best-known test type that can be spotted in various projects. Throughout my work history, those tests were always present (except maybe in the very first projects I worked on). The presence of other test types was not as obvious. I would say that in projects started before the Agile Methodology era, most of the tests were manual. The tests were defined very loosely (like: check that the application starts and it is possible to do X), or they were structured as a list of steps to execute and expectations for the execution results. I will skip the manual tests in further parts of this post, as they were usually done by a separate team of testers and there was nothing related to programming itself. I would just like to mention two things about them:

  • manual user acceptance tests were a base for writing their automated versions (later referred to as acceptance tests),
  • the formalized version of manual tests was written in the form of steps and expectations, so in reality it was a precursor of BDD-like tests.

The integration tests were definitely not unit tests, because they tested integration with external dependencies such as databases. If present, we used them to test all the classes interacting directly with externals, like the ones following the repository pattern. In order to run them, we had to have a real database to connect to (if possible, we tried to use an in-memory / file-based database like SQLite to make the tests easier to run) or sample input files to play with. Besides that, those tests were not much different from unit tests. Because of this strong similarity, I will omit them from now on.

Now, if we take a look at GUI tests, it is easy to spot that they are really similar to service tests. The only difference is that GUI tests use the GUI as an interface, while service tests use HTTP or messages as the interface to communicate with the tested service/application. We used those tests to check the application behavior. Usually we followed an approach where the tested application was installed and run in the same way as it would be run after the final release, so successfully executed tests gave us proof that the application would behave the same way when installed in production. The assumption that the tested component has to be the same as in production means that during testing we did not alter any program code with mocks. We also tried to use only public and official APIs to run the tests (i.e. GUI, HTTP interfaces, messages, input files, etc.) and to avoid direct alterations of the internal component state, like manually altering data in the database. Of course there were cases where we decided to violate this rule, but usually it was dictated by poor interface definitions (when the tests were being created for existing software) or by a significant test speedup. It is worth mentioning that tests written in this form are more high-level and much slower than unit tests. Also, they are more behavior specific, focusing on the result of an action, not on the way it is achieved.

The last kind, end-to-end tests, were used in projects consisting of multiple autonomous components. Similarly to acceptance tests, we deployed all the components in the form in which they would be deployed to production. Obviously, those tests were the slowest ones, because all the tested components had to perform a specific action in order for the whole test to succeed – nothing was mocked there.

I have found interesting the way Martin Fowler identified those tests by their function:

  • Acceptance tests, covering a list of scenarios that define behavior of a specific feature (like login, shop basket, etc.),
  • User journey tests, covering all actions that have to be taken from the user perspective in order to achieve a specific goal,

and their scope:

Test characteristics

In comparison to unit tests, which are low-level, focused on a small part of the code and fast, those tests:

  • are high-level, business scenario / behavior based,
  • refer to a wide part of the code, covering one or multiple components, and hence
  • are usually much slower to execute.

There are a few interesting consequences of these characteristics. First of all, those are high-level tests, focusing on the behaviors of the tested component or the whole system. They implement scenarios, often written in BDD form:

  1. Given an opened login window,
  2. when user enters valid credentials,
  3. and user clicks the login button,
  4. then the login window should close,
  5. and user should successfully log in to the application,
  6. and user account details should be displayed on the screen.

or

  1. Given a sample wav file present in input folder,
  2. when an EncodeFileMessage is sent to Encoding Service with sample file path and MP3 output format specified,
  3. then the Encoding Service should publish a FileEncodedEvent,
  4. and that published FileEncodedEvent should have a path to encoded file in MP3 format.

Those scenarios focus on what is happening in the system, not on how it is done, so they usually use the public API of the application to trigger an action and later to query / validate its outcome. The scenarios refer to a business feature or a whole user journey, which means that the scope of those tests is much wider than in unit tests, covering a part of a component, a whole component, a few components or even a whole system.

The test scope has a big influence on how those components are tested. If a test refers to only one component, the component:

  • may be started directly from the process that performs the test, or
  • may be deployed into a dedicated testing environment and accessed remotely by the test.

If the scope corresponds to multiple components, it usually means that all of them have to be deployed into a testing environment and configured to communicate with each other. If a component has to be deployed before testing, a dedicated test environment has to be present in order to run such tests. It also implies a time overhead related to component installation, configuration and start-up.

With the most common testing approach, the tested component is executed in a separate process from the test, so the test code communicates with it in an asynchronous manner.

The test scope and asynchronous communication have a big impact on the test execution time. Those tests are slow; of course, the execution speed depends on the project, the type of the tests and their structure. It may vary from less than a second for a service test to more than a few minutes for an end-to-end test.

A huge factor in the execution speed is the way assertions are defined in such tests. They are based on the component's public API, which means that they usually check things like:

  • a requested piece of information has been displayed on the screen,
  • a message X has been received,
  • a resource Y became available over HTTP, or
  • a file appeared on an FTP server.

Those assertions are time based. They have to repeatedly check the specified condition up to a defined timeout, because the tests are asynchronous and the components require time to process requests in order to fulfill those criteria. In case something goes wrong, this type of assertion consumes the full timeout before it fails. It can lead to situations where the successful tests take a few minutes to execute, while tests executed on a faulty system could take a few hours until they all fail (it is a real example). It is worth mentioning that the biggest time killers are the assertions checking that a specific condition did not happen, as they always use the whole timeout to succeed. As that kind of test is never good enough (it is always possible that the tested condition would happen just after the assertion finishes), we always tried to eliminate or limit them where possible – usually the same scenarios could easily be covered with unit tests.
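
A minimal sketch of such a time-based assertion (names are illustrative): poll the condition until it passes or the timeout elapses – on a faulty system every such assertion burns its whole timeout before failing.

```csharp
using System;
using System.Threading;

public static class Wait
{
    public static void Until(Func<bool> condition, TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (!condition())
        {
            if (DateTime.UtcNow > deadline)
                throw new TimeoutException("Condition not met within " + timeout);
            Thread.Sleep(250); // poll interval
        }
    }
}

// usage (hypothetical file path): Wait.Until(() => System.IO.File.Exists(encodedFilePath), TimeSpan.FromSeconds(30));
```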

To summarize, the nature of acceptance and end-to-end tests makes them significantly distinct from unit tests. In the next post, I will describe how we came up with the expectations for a testing framework allowing us to write acceptance tests in an easy manner.
