2019
01.27

LightBDD 3

Hello,

I am happy to announce LightBDD 3.0!

Before I go to the post details, here are some quick links for those who just want to start looking into it:

In the LightBDD project, I try to follow semantic versioning guidelines. I was able to develop the LightBDD 2.x series for almost two years without introducing any major breaking changes.
It means, however, that more and more methods and classes became obsolete as the project evolved, making the code base grow bigger and more difficult to maintain. Over that time, I also observed that some of the design decisions introduced unnecessary complexity or confusion. Finally, with the recent request to provide a signed version of the packages (which is a big breaking change), I decided that it was time to make LightBDD 3.x.

So what’s new?


As I mentioned above, the main driver for going to version 3.0 is the introduction of a few breaking changes, but from the usage perspective there are actually not as many changes as people might expect. In this sense, the LightBDD project is taking the evolution path, not the rewrite one 🙂

What’s new then?

No more confusing scenario namespaces

I guess the biggest usability improvement is that all the different scenario styles are now located in one namespace: LightBDD.Framework.Scenarios.
No matter which scenario style is your favourite, dear reader, you can just specify using LightBDD.Framework.Scenarios; in your code to get access to all the extension methods on the Runner property.
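
As an illustration, a minimal scenario using the unified namespace might look like this (a sketch; the feature class and step names are mine, not from the original post):

```csharp
using LightBDD.Framework;
using LightBDD.Framework.Scenarios; // single using for all scenario styles
using LightBDD.XUnit2;

public partial class Login_feature : FeatureFixture
{
    [Scenario]
    public void Successful_login()
    {
        // Basic style shown here; the extended and fluent styles
        // are reachable through the very same namespace.
        Runner.RunScenario(
            Given_the_login_page,
            When_the_user_enters_valid_credentials,
            Then_the_user_is_logged_in);
    }

    private void Given_the_login_page() { }
    private void When_the_user_enters_valid_credentials() { }
    private void Then_the_user_is_logged_in() { }
}
```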

Similar simplifications have been made in other areas of LightBDD as well.

Fluent scenarios are first-class citizens now

In LightBDD 2.x, scenarios could be built in a fluent way by calling the Runner.NewScenario() method (and importing the proper namespace). While it was not difficult to do, such boilerplate code had to be repeated in every scenario using the fluent syntax.

In LightBDD 3.0, the NewScenario() method has been removed, and it is now possible to start composing a scenario in a fluent manner by using the Runner directly:
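
A sketch of how that composition may look (based on my reading of the 3.x fluent API; the step names are illustrative):

```csharp
[Scenario]
public async Task Fluent_scenario()
{
    // No NewScenario() call needed - the fluent chain starts on Runner:
    await Runner
        .AddSteps(
            Given_the_login_page,
            When_the_user_enters_valid_credentials)
        .AddAsyncSteps(
            Then_the_user_is_logged_in_async)
        .RunAsync();
}
```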

Removed obsolete code and simplified internals

With the LightBDD 3.0 release, over 5000 lines of code have been deleted!

Besides the obsolete code removal and many simplifications in the internal code, there are two visible areas of LightBDD that have been removed:

  • LightBDD.NUnit2 integration has been removed in favor of LightBDD.NUnit3,
  • Runner.RunScenarioActionsAsync() methods have been removed in favor of easier fluent scenario integration.

Signed assemblies

The last change, a bit less visible but significant, is that all the packages except LightBDD.Fixie2 are now signed.
It means that LightBDD dlls can now be put into the GAC, but more importantly, LightBDD can now be used to test signed test projects.

Why is that change a breaking one then?
Well, signed assemblies are not binary compatible with non-signed ones. Moreover, binding redirection will not work for them either, which means that all projects using LightBDD have to be updated to use LightBDD 3.x and recompiled.

Lessons learned


Working on LightBDD gives me a lot of satisfaction. First of all, it is amazing to see people using the project and finding it useful. The second bit I love is that I learn how to efficiently maintain library code.

When I worked on LightBDD 2.0, I designed it to be modular, with a clear separation of the engine (LightBDD.Core), the framework (LightBDD.Framework) and all the integrations (LightBDD.NUnit3 etc).
Having this separation in place helped me create a solution that is easily extensible, thanks to a proper separation of concerns. As an example, I could easily introduce the LightBDD.Fixie2 integration or add the compact scenario syntax.
However, such a separation of projects uncovered poor design decisions as well…

Using interfaces vs abstract classes

LightBDD 2.x relies heavily on interfaces to expose only the crucial information between LightBDD.Core and the Framework. Going with the interface approach allowed easy extension of the framework without exposing too much of the internals, so they could be refactored easily as needed.

Such an approach led, however, to an interesting observation: depending on who provides the implementation of an interface, the architecture is or is not extensible.

TL;DR: Use an interface when providing the API together with its implementation to the user. Use an abstract class when expecting the user to provide the implementation.

Let’s take a look at two examples.

Example 1

The LightBDD.Core exposes the following interfaces:

The IStepDecorator allows defining a decorator and configuring LightBDD to call it for step methods.
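
For reference, the decorator contract can be sketched as follows (reproduced from memory, so treat the exact signature as an assumption):

```csharp
public interface IStepDecorator
{
    // Wraps the actual step invocation; the decorator can run
    // code before and after calling stepInvocation().
    Task ExecuteAsync(IStep step, Func<Task> stepInvocation);
}
```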

The implementer of the decorator can use the IStep interface to access step details:

The code presented above is the original LightBDD 2.x version of that interface.

During the LightBDD 2.x evolution, there was a need to expose more information about the step. As the implementation of that interface was provided by LightBDD.Core itself, not by the users, it was safe to just add more members to the interface, ending up with the latest version:

This change was easy to ship to the users without worries about backward compatibility of LightBDD packages.

Example 2

The IIntegrationContext is an interface defined in LightBDD.Core that has to be implemented by every integration project such as LightBDD.NUnit3.

Over time, it turned out that a new feature required adding an IDependencyContainer DependencyContainer { get; } property.
Unfortunately, adding this property to the interface would be a breaking change, as all of the existing implementers would no longer provide a full implementation of the interface.

Of course, all LightBDD implementations would get updated at the same time, but the backward incompatibility could still pop up in the user's world if users upgraded only the LightBDD.Core package but not the others.

So, how did it get implemented?
Actually, the interface was left unchanged; instead, the abstract IntegrationContext class was updated with the new property:

It is worth noting that the new property is virtual and provides a default value in case it is not overridden yet. This way, it is safe to update the LightBDD.Core package and still successfully use it with older versions of LightBDD.Framework and the integrations.
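
The shape of that change can be sketched like this (simplified; the actual class has more members, and the default value shown here is illustrative):

```csharp
public abstract class IntegrationContext
{
    // ...pre-existing abstract members...

    // Added later as a virtual member with a default, so integrations
    // compiled against an older LightBDD.Core keep working:
    public virtual IDependencyContainer DependencyContainer
        => throw new NotImplementedException(
            "DependencyContainer is not provided by this integration.");
}
```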

The final note here is that in LightBDD 3.0 the IIntegrationContext interface no longer exists, and IntegrationContext is now the base type for all contexts.

Extension methods

Extension methods are a widely used concept in LightBDD.
They are used for extending IBddRunner with new scenario writing styles, as well as for working with the LightBDD configuration.

While this mechanism worked very well in LightBDD 2.x, there was one flaw in the design.

There is nothing more annoying than seeing code like this working very well in one source file:

…but failing in another file with errors like:

Such problems often happened when the wrong usings were specified in the source file. While using LightBDD.Framework.Scenarios.Basic; would make the code compile,
using LightBDD.Framework.Scenarios.Extended; would fail in this case.
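
To illustrate the 2.x problem (a reconstruction; the step names are mine):

```csharp
// Compiles with: using LightBDD.Framework.Scenarios.Basic;
Runner.RunScenario(
    Given_the_login_page,
    When_the_user_enters_valid_credentials);

// Requires: using LightBDD.Framework.Scenarios.Extended;
// With only the Basic using in scope, this call fails to compile,
// as the lambda-based RunScenario overload cannot be found.
Runner.RunScenario(
    _ => Given_the_login_page(),
    _ => When_the_user_enters_valid_credentials());
```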

In LightBDD 3.0, all the namespaces have been simplified, and all the flavors of the scenario styles are now located in one namespace: using LightBDD.Framework.Scenarios;.

No more confusion!

2018
10.21

LightBDD2
Hi,

It has been a while since I wrote anything here, but today I would like to announce the LightBDD.Tutorials repository and the first LightBDD tutorial I have made: Web Api Service Tests!

LightBDD has existed since 2013 and has evolved a lot since then. For all of that time I have tried to maintain the documentation wiki as well as a set of example projects, but both were focused more on presenting what LightBDD is capable of. What was lacking was a full example of how it can be used with technologies like WebApi, NServiceBus, Selenium and others.

With the introduction of LightBDD.Tutorials this is finally changing 🙂

At this moment there is one tutorial on how to use LightBDD to service-test an AspNetCore WebApi project, but over time the repo will grow with other fully working examples.

Happy reading and testing!

2017
07.30

This summer is very busy at work. The Summer of Craft has started to strengthen our development culture. Each day there is something interesting going on, where people can learn new things, share knowledge or meet with peers. One of the events is called Techfast, a 30-minute chat on a given topic over breakfast, and I had an opportunity to drive one about asynchronous programming and execution. After the discussion, I received feedback that it helped people understand what async is about, so I thought it would be worth sharing here as well.

Asynchronous programming is not a new concept. Asynchronous programming patterns have existed in .NET in various forms since version 2.0; however, with the language support for the async / await keywords in .NET 4.5, asynchronous programming became very popular and is now used almost everywhere.

While I now use async features at work as well as in my own projects (LightBDD 2.0 is based on async), I still remember how confusing it was when I started exploring them. I come from a C++ / C# background where, in the past, if I wanted to make some functionality more responsive or to process things faster, I used threads. So when I started working with async methods I had questions like:

  • How does an async method work?
  • Does async execution involve multi-threading?
  • How is async different from it?

After spending more time digging into the implementation details, reading the documentation and simply using it for a while (which involved finding a few surprising behaviors), I got a better understanding of how it works. I realized, however, that it would have been much simpler to grasp if I had found a simple real-life example describing it well – that is why I initiated that discussion at Techfast.

Let's make some burgers


Let’s forget about all the programming languages and syntax and think for a while about something much nicer – food – or, to be more precise, preparing food.

We will be preparing a burger from: buns, beef patties, onions, lettuce, tomato, cheese slices and ketchup.

Preparing a burger:

  1. First, wash all the vegetables, peel the onions, slice the onions and tomatoes, and finally put them on the grill.
  2. Then put a beef patty on the grill, remember to flip it a few times in order to make it well done (I don’t like blood in my food), and when it is almost ready, put a slice of cheese on top of it to melt a little bit.
  3. In the meantime, put the buns on the grill for a moment to toast them.
  4. When everything is ready, take it off the grill, put the meat in the bun followed by onion, lettuce and tomato, add some ketchup on top and cover with the other half of the bun.
  5. The burger is done!

So what does this have to do with async? Let’s think for a while. There were a few distinct tasks here:

  1. Wash and slice the vegetables and grill them;
  2. Grill the beef patty, flipping it a few times and melting the cheese at the end;
  3. Toast the buns;
  4. Put everything together to finish the burger.

Asynchronous vs synchronous operations

Now, did I do those tasks one after another? Did I wait until the vegetables roasted before I put the patty on the grill? No, not really. As soon as I put the onions and tomatoes on the grill I started grilling the patty as well as the buns. I did it asynchronously! As soon as I realized that I would have to wait to finish my current task (grilling veggies), I switched to another one (grilling the patty), then another (toasting the buns). I could do that because I was not involved in the process of grilling – it did not matter whether I stood next to the grill or not, all the ingredients would keep roasting.

So what is the difference between a synchronous and an asynchronous operation? A synchronous operation is one in which I am fully involved. The example here is washing and slicing vegetables. I am actively doing it. I cannot walk away from the sink or table hoping that the vegetables will wash and slice themselves. Also, as I am actively doing that work, there is no waiting element here, so there is no reason to start another task. I do it from beginning to end – synchronously.

So the first observation is: async is about utilizing the time we would otherwise spend waiting to do another task (do the Thread.Sleep() or Semaphore.Wait() methods ring a bell?).
The second observation: async is about dividing an operation into a set of smaller tasks that can then be executed asynchronously.

Does async mean multi-threading?

So how about multi-threading? Do async operations involve multiple threads?

Let’s modify our example a bit:
This time we will be making the same burger(s), but there will be two people preparing them.
The first cook takes the first available task (washing and slicing vegetables).
At the same time the other cook goes to the BBQ and starts grilling the patty and buns.
When the vegetables are sliced, the first cook puts them on the grill as well…

We could continue this story by saying that the cooks will move on to prepare other burgers, or we could add more tasks for the cooks, but I think the example conveys the crux of the story.
The cook represents a thread. In the first scenario we managed to prepare a burger with one thread / cook, whereas in the second scenario the tasks were distributed between multiple cooks.

As we managed to make a burger in both scenarios, the conclusion is: async execution is independent of multi-threading. Multiple threads can support / speed up asynchronous operations, but a single thread is enough to perform async operations as well.

Conclusion

When we cook, we have to perform various small tasks to end up with our favorite burger in hand. Some of the tasks require our full involvement and attention, such as slicing or washing vegetables. Other tasks, however (such as grilling patties or toasting buns), do not require our full attention, allowing us to start them, move on to something else in the meantime and come back to finish them when ready.

In the programming world it is very similar. Some operations, such as typical collection sorting algorithms, can be executed immediately, while others, such as network or disk I/O operations, require the executing thread to wait in order to finish.
Async programming with the async / await model allows building methods composed of tasks, where the executing thread can move on to execute a different task if the current one cannot proceed without waiting.
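
The burger analogy can be sketched in plain C# (a toy example of mine; the delays stand in for grilling times):

```csharp
using System;
using System.Threading.Tasks;

public static class Kitchen
{
    // Grilling does not need our attention - awaiting it frees
    // the "cook" (thread) to work on something else.
    public static async Task GrillAsync(string item, int ms)
    {
        await Task.Delay(ms);
        Console.WriteLine($"{item} ready");
    }

    public static async Task MakeBurgerAsync()
    {
        // Washing and slicing is synchronous - we are fully involved.
        Console.WriteLine("Vegetables washed and sliced");

        // Start all grilling tasks first, then wait for all of them,
        // instead of grilling one ingredient after another.
        var veggies = GrillAsync("veggies", 300);
        var patty = GrillAsync("patty", 500);
        var buns = GrillAsync("buns", 100);
        await Task.WhenAll(veggies, patty, buns);

        Console.WriteLine("Burger done!");
    }

    public static Task Main() => MakeBurgerAsync();
}
```

A single thread can serve all three awaits here; extra threads would only matter if the steps did CPU-bound work.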

As one cook is enough to make a burger, one thread is also sufficient for asynchronous execution. Just as with more cooks, more threads may make things faster, but they are not necessary – for example:

  • Javascript is single-threaded and supports asynchronous processing,
  • .NET console applications are asynchronous and multi-threaded by default,
  • .NET WinForms applications use one thread by default to process async methods called from the UI thread.

2017
02.28

LightBDD2
I’m happy to announce that LightBDD 2 is released and ready to be used.

New platforms and frameworks support

LightBDD has been reworked to support various platforms and frameworks.
With version 2, the LightBDD packages target both .NET Framework (>= 4.5) and .NET Standard (>= 1.3), which allows LightBDD to be used on platforms like the regular .NET Framework, .NET Core or even the Universal Windows Platform.

New testing framework integrations

The testing framework integration projects have been reworked as well, to leverage the cross-platform framework capability and to remove the LightBDD 1.x integration drawbacks.

The following list of integrations is available with LightBDD 2:

  • LightBDD.NUnit3 – integration with the NUnit framework 3.x series,
  • LightBDD.NUnit2 – integration with the NUnit framework 2.x series (to simplify migration from LightBDD 1.x),
  • LightBDD.XUnit2 – integration with the xUnit framework 2.x series,
  • LightBDD.MsTest2 – integration with MsTest.TestFramework, the successor of MsTest.

Asynchronous scenario support

The LightBDD 2 runners fully support async scenario execution.

The example below shows a scenario execution with steps returning Task:
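
Something along these lines (a sketch of the 2.x extended syntax; the step and method names are illustrative):

```csharp
[Scenario]
public async Task Placing_an_order()
{
    await Runner.RunScenarioAsync(
        _ => Given_an_empty_basket(),
        _ => When_the_product_is_added_to_the_basket(),
        _ => Then_the_order_can_be_placed());
}

private async Task Given_an_empty_basket() { await Task.Yield(); /* ... */ }
private async Task When_the_product_is_added_to_the_basket() { await Task.Yield(); /* ... */ }
private async Task Then_the_order_can_be_placed() { await Task.Yield(); /* ... */ }
```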

It is also possible to mix synchronous steps with asynchronous ones using the RunScenarioActionsAsync() method:

New configuration mechanism

The LightBDD configuration mechanism has changed too. In version 2, all the configuration is done in code, and the framework has been changed to allow more customizations than version 1.x.

More details

For more details, feel free to visit the project home page.
In order to jump quickly into the code, the quick start wiki page may be helpful.
Finally, there is also a wiki page describing how to migrate tests between LightBDD major versions.

Happy testing!

2017
01.22

It has been almost three and a half years since the first version of LightBDD (1.1.0) was released on NuGet, and almost half a year since the last update (1.7.2).
Since the beginning of the project, new C# language features like async/await have become more popular, and new platforms such as .NET Core and standards like .NET Standard have emerged.

Because LightBDD 1.x targets .NET Framework 4.0, and due to a few of its implementation details (like the usage of ThreadLocal<>, StackTrace or CriticalFinalizerObject), it was difficult to adapt it to the new trends.
Also, as the project evolved, some of its features became obsolete.

Because of these reasons, it is time to make bigger changes in the framework and take it to version 2.

What will change?

Full async support in core engine

The version 2 engine will be designed to run scenarios and steps in an asynchronous manner. Async scenario and step methods will be supported, but ultimately this will depend on the step syntax implementation. It is planned to support async execution in the extended step syntax (parameterized steps) but not in the simplified one.

Support for other platforms and frameworks

The first release of version 2 will target .NET Framework 4.6 as well as .NET Standard 1.6, making LightBDD available for new frameworks and platforms.
LightBDD will officially support the .NET Core and .NET Framework platforms, and possibly more in the future.

After the release, additional investigation and tests will be performed in order to check the possibility of extending support to .NET Framework 4.5 (the lack of the AsyncLocal<> class makes it problematic).
Currently, the plan is to drop .NET Framework 4.0 support.

Testing framework support

LightBDD version 2 will support the following testing frameworks:

  • NUnit3 (.NET Framework and .NET Core),
  • XUnit2 (.NET Framework and .NET Core),
  • MsTest (.NET Framework and .NET Core).

MbUnit support will be dropped as the project is dead – it may, however, be added later if there is a need for it.

Framework modularization

The LightBDD projects have been reworked in order to separate the LightBDD features from the core engine.
Features such as step execution syntax, step commenting or summary generation will be separated into dedicated packages, enabling:

  • an ability to version and evolve features independently,
  • users to pick features they really need.

In code configuration

Because app.config is not available across all platforms, the LightBDD engine will now be configured in code.
For each testing framework, LightBDD scope initialization code will have to be present (the scope initialization may look different depending on the testing framework), and it will allow configuring the runner, including:

  • customizing summary report generation,
  • enabling additional features and extensions,
  • customizing framework core mechanics like culture info, step name formatting method and more.

The code below may change before the release, but it visualizes how the configuration will be done:
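
As a rough illustration of the direction (the class and method names below reflect the released 2.x API as I remember it, not necessarily the pre-release shape):

```csharp
// Assembly-level scope initialization, e.g. for the NUnit3 integration:
[assembly: ConfiguredLightBddScope]

class ConfiguredLightBddScope : LightBddScopeAttribute
{
    protected override void OnConfigure(LightBddConfiguration configuration)
    {
        // This is the place to customize summary report generation,
        // culture info, step name formatting, extensions, etc.
    }
}
```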

Less workarounds

The new version will eliminate some of the caveats of LightBDD 1.X implementation.

It will no longer be necessary to apply the [assembly: Debuggable(true, true)] workaround to properly format scenario names in release mode – instead, it will be required to use the LightBDD-specific [Scenario] attribute to mark scenario methods, rather than a test-framework-specific attribute like [Test], [Fact] or [TestMethod].

Also, the explicit LightBDD scope ensures that summary files are always generated, whereas in version 1.x the summary files were not generated if their creation took more than 3 seconds (a limitation of CriticalFinalizerObject).

Migrating LightBDD 1.X to 2.0

The upgrade to version 2 will require test code updates; however, the number of changes is reduced to a minimum and will most likely cover:

  • namespaces update,
  • framework specific test method attribute update to [Scenario] attribute,
  • LightBDD configuration change from app.config to in-code configuration,
  • updates in context based scenarios.

LightBDD 2 will not be binary compatible with LightBDD 1.x.

When will LightBDD 2 be available?

The current state of the project is that all the implementation changes are finished; however, other tasks have to be done before the release, including finalizing the project layout, updating the CI pipeline, the documentation and the wiki page.

The new release should be available in the next few weeks.

2016
07.17

Octopus Project Builder

In my last post I wrote about my plans to create the Octopus Project Builder, a tool allowing Octopus Deploy projects to be configured from yaml files, like Jenkins Job Builder does for Jenkins.

Since last month, I managed to progress with this work and I would like to share the outcome.

The Octopus Project Builder allows configuring:

  • Project Groups,
  • Projects,
  • Lifecycles,
  • Library Variable Sets (including Script Modules).

As I mentioned previously, the Project definitions can be very verbose, especially in the deployment actions section, which is why OPB also allows defining templates for Projects, Deployment Steps and Deployment Step Actions. The templates can be parameterized, and it is possible to override template values when a template is used in a resource definition.

So what does it look like?
Below are example yaml files with sample configuration.

The Project Group, Lifecycle and Library Variable Set definitions are self-explanatory.

The Project definition yaml, however, is very simple. That is because it uses a parameterized template to install the nuget package on the target boxes.

So what does the template look like?

The project template specifies the most common properties (so they do not need to be defined in each project). It also defines two template parameters: one to specify the name of the package to be installed, and the other to specify on which machines the package will be installed. Further in the template definition, the template parameters are used with the ${param_name} syntax. The template itself also uses another template to define the deployment step action. This example shows that template parameters can be passed down to inner templates.
Finally, the deployment action definition shows escaping sequences.
Normally, any occurrence of ${param_name} is treated as a template parameter usage. If this behavior is not desired, the $ symbol has to be escaped with \. However, in this example we want to compose Octopus.Action.Package.CustomInstallationDirectory from the installation directory and the package name, which is why there is a \\ representing the directory separator.

Yaml configuration description

The yaml configuration offers many more options than the ones presented in the example. The OctopusProjectBuilder project home page contains a configuration manual describing the full configuration model.

Finally, a nuget package is available as well: OctopusProjectBuilder.Console

Feel free to take a look at it and give it a go.
Also, any feedback is welcome.

Have fun!

2016
06.13

Over the last few months we have been redefining our CI/CD pipelines to use Octopus Deploy for deployment. Octopus is a great tool for defining and managing deployment environments and deployments. It allows nicely separating the environment details (like the number of boxes or box names), the environment-related settings (like URLs or connection strings) and the deployment process (the steps that have to be performed) from the project executables. Moreover, Octopus offers out of the box all the tools needed to propagate packages to all target boxes and install them as Windows services or IIS applications – a great benefit, because previously we had to develop and maintain quite complicated scripts to do the same.

Over these few months of work, however, we have found one deficiency in this tool. All of the project, process, variable and environment configuration has to be done through the UI, and as with every UI, some operations are not easy to perform. Scenarios like moving variables from a project to a variable set, duplicating steps within a process or applying the same change to multiple processes are time consuming and, over time, a bit irritating.

Jenkins Job Builder

We had a similar issue with another tool in the past – Jenkins – but we found a great solution for it: Jenkins Job Builder. JJB allows defining Jenkins jobs in a human-friendly YAML format, and the beauty of it is that:

  • it is text, so operations like moving variables to a different scope, changing definitions, renaming etc. are as simple as text copy-paste/replace operations,
  • it can be put into a source control system, which allows the change history to be seen and gives an easy way of restoring previous versions,
  • it can be easily applied to other Jenkins instances (which is very handy in case of migrations and box rebuilds).

Octopus Project Builder

Inspired by Jenkins Job Builder, I decided to spend some time creating a similar tool for Octopus, the Octopus Project Builder, hosted on GitHub: https://github.com/Suremaker/OctopusProjectBuilder.

The project is at a very early stage, but I have managed to explore the Octopus API a bit with Octopus.Client, and Yaml serialization with YamlSerializer.

So, what does it look like?

I have a Project Group with a test project:

Project Group

After I run the OPB download command:

OctopusProjectBuilder.exe -a download -d c:\temp\octo -u http://localhost:9020/api -k API-XXXXX

I got the file ProjectGroup_My group.yml with content:

…so the OPB managed to generate YAML for my project group.

Then, I edited the file with this content:

and ran the upload command:

OctopusProjectBuilder.exe -a upload -d c:\temp\octo -u http://localhost:9020/api -k API-XXXXX

Finally I got my project group renamed and a new group created:

Updated Project Groups

Managing more data

Now it is time for more complicated stuff: the projects themselves. This is the current work in progress. So far, I have noticed that Octopus stores the step action definitions in a slightly different key-value format. Here is a sample of how a project may look:

Future plans

Playing with Octopus and YAML is an interesting experience and I would like to explore it a bit more.
So far I have a few thoughts on what I would like to implement here.

First of all, none of the samples have any IDs in the YAML. I want to build all the correlations based on the human-friendly names. Above I presented a scenario where a Project Group was renamed. Basically, it would be possible to specify that the current name is Y while the previous one was X. When OPB uploads the definitions to Octopus, it will first look for name Y and then for X, in order to rename it to Y if it has not been renamed yet.

The Actions section looks a bit complicated here, with long key names that are not really user friendly, like Octopus.Action.Package.AutomaticallyUpdateAppSettingsAndConnectionStrings. I worry that they are also not that well documented, so it may be a bit difficult to find them. To overcome this problem I would like to implement a macro/templating mechanism, a bit similar to JJB macros, that would allow defining an action template and then easily applying it in various projects.

The next thing is that OPB will support multiple input files, so it will be possible to split the definitions of projects, variable sets etc. On download, it will also write all the definitions to separate files.

Another thing would be sensitive data representation. I would like to implement a feature allowing sensitive data to be kept encrypted in the YAML, with OPB decrypting it before uploading to Octopus.

Finally, I plan to support the following configuration in OPB:

  • Project Groups,
  • Projects (with process and variables),
  • Variable Set Libraries.

More updates will be posted soon…

2015
08.10

Acceptance testing service depending on Web API

Today, my new blog post has been published on tech.wonga.com.

I am describing there how we acceptance-test services that depend on Web Api.

2015
07.02

LightBDD 1.7.0

It has been a while since I wrote my last post.
During this time many things have happened, including the fact that I was implementing new requirements for LightBDD.

Finally, since the 21st of June, the new LightBDD version 1.7.0 has been available for download on NuGet.

So what’s new:

xUnit support

Yes, since this version there is a LightBDD.XUnit package that supports writing and executing test scenarios with xUnit 2.0 (.NET 4.5.1). It also means that scenario tests can be executed in parallel, with full support from Visual Studio and Resharper!

The project wiki page contains information about the xUnit integration, an adequate example project is present in the project repository, and the Templates folder and the Visual Studio Gallery extension package have been updated to support it.

The integration with xUnit was interesting in a few aspects.

First of all, xUnit 2.0 supports parallel test execution. While LightBDD has supported concurrent execution since its very early versions (it was always possible to run MbUnit tests in parallel), there was no decent support in newer Visual Studio / Resharper versions for running such tests. With xUnit it is possible.

Because of this feature, the xUnit designers decided not to capture System.Console output during test execution. Instead, they offer an ITestOutputHelper interface that has to be used to capture test output in order to display it in the Visual Studio test windows. Later, however, I found that the xUnit console runner does not use this interface, but prints System.Console output. I also noticed that the standard progress printed by ConsoleProgressNotifier is unreadable because multiple tests report their progress at the same time.
The ability to print test execution progress is one of the important LightBDD features, which is why I had to solve both of those issues. Finally, I implemented a few more versions of the IProgressNotifier interface and configured LightBDD.XUnit to work properly with both Visual Studio and the console runner – more details are provided on the wiki page.

The second problem with xUnit was that it does not support Assert.Ignore() calls to stop test execution and mark the test as ignored at run time. This feature is crucial for LightBDD, because it allows executing all the already implemented steps in a given scenario, even if the entire scenario is not fully implemented yet. It gives better traceability of the scenario implementation progress.
To make it work, I had to extend xUnit a little bit – I added a [Scenario] attribute, which should be used instead of the [Fact] or [Theory] attributes, and a ScenarioAssert.Ignore() method allowing test execution to be ignored at run time. Fortunately, xUnit has amazing extensibility points, all of which are supported natively by the test runners and Resharper. Only because of that was I able to implement ScenarioAssert.Ignore() for the LightBDD.XUnit integration.

So finally, a test written with LightBDD.XUnit looks like this:
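
A representative shape of such a test (the feature and step names are mine; the step bodies usually live in a companion partial class file):

```csharp
[FeatureDescription(
@"In order to buy products
As a customer
I want to add products to my basket")]
public partial class Basket_feature : FeatureFixture
{
    [Scenario] // LightBDD attribute used instead of [Fact] / [Theory]
    public void Adding_product_to_basket()
    {
        Runner.RunScenario(
            Given_an_empty_basket,
            When_the_product_is_added,
            Then_the_basket_contains_the_product);
    }
}
```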

Steps auto-grouping

Steps auto-grouping was a requirement posted on the LightBDD project page.

With this version of LightBDD, if consecutive steps start with the same type, e.g. GIVEN, WHEN, THEN or SETUP, all except the first step will be renamed to AND:
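For illustration (the step names below are invented), a scenario with repeated step types would be reported with the repeats grouped as AND:

```csharp
// Illustrative sketch of steps auto-grouping; step names are invented.
Runner.RunScenario(
    GIVEN_the_customer_is_logged_in,
    GIVEN_the_basket_contains_one_product,     // reported as: AND the basket contains one product
    WHEN_the_customer_pays_for_the_basket,
    THEN_an_order_should_be_created,
    THEN_a_confirmation_email_should_be_sent); // reported as: AND a confirmation email should be sent
```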

More information about auto-grouping (and step syntax mixing) is available on the wiki page.

Runtime comments for steps

This was another requirement posted on the project page.

Since this version, it is possible to use the StepExecution.Comment() and StepExecution.CommentFormat() methods to comment on the currently executed step; those comments are included in the execution reports – more on the wiki page.
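A step using these methods might look roughly like this. The step name, field and payment call below are invented for illustration; only the StepExecution.Comment() / StepExecution.CommentFormat() methods come from the release notes:

```csharp
// Illustrative sketch of run-time step comments; names other than
// StepExecution.Comment / StepExecution.CommentFormat are invented.
private void When_the_customer_pays_for_the_basket()
{
    StepExecution.Comment("Using the sandbox payment gateway");
    StepExecution.CommentFormat("Basket total: {0} GBP", _basketTotal);
    _paymentService.Pay(_basket); // hypothetical step body
}
```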

Visual Studio Gallery extension

Since version 1.6.1 it has been possible to install the LightBDD for Visual Studio extension from the Visual Studio Gallery. With this version, it has been extended with Project and Item templates allowing test scenarios to be written with xUnit.

And more improvements

To read about all the changes made in version 1.7.0, please take a look at the Changelog.txt file.

Finally, I am happy to announce that there is a LightBDD framework blog post on the Wonga company blog for you to read!

Happy testing!

2015
03.19

Basic expectations for acceptance test framework

In a previous post I described the story of working with acceptance tests and how I encountered the problems that motivated me to create the LightBDD framework. In this post I wrote about different types of tests, especially the nature of acceptance and end-to-end tests. Now, I would like to focus on my observations regarding the requirements for a framework that allows developers to work on behavioral tests effectively.

Basic requirements

While working in different companies, I realized that the expectations for acceptance tests and testing frameworks depend on the company size and its culture. The first team that started looking into improving our testing tools was part of a small company with a very informal culture. The Product Owner and Quality Assurance were dedicated to our team and paired with us in order to formulate scenarios that fulfilled their expectations but also fit the system architecture. They were interested in what our acceptance and end-to-end tests looked like. At that time, both kinds of tests had only one purpose – to ensure that our software worked fine.

That was the time when we realized that tests written in SpecFlow were too difficult to maintain (I described the reasons previously). We started asking ourselves what we really needed from a testing framework.

Clear tests

The first set of questions was related to the fact that we were receiving requirements from the PO/QA in the form of business scenarios. We wanted to be able to quickly answer PO/QA questions like:

Is this scenario already covered by tests?

What is this test checking exactly?

We thought that the best option would be to model our tests so that they preserve the nice given-when-then form that the PO/QA were preparing for us. If our tests reflected the provided scenarios as closely as possible, they would be easy to present to the PO/QA, but also easy for developers to read and understand.

Maintainability

With the knowledge of maintenance problems related to tests written in frameworks like SpecFlow / Fitnesse, we realized that maintainability was a crucial requirement for a testing framework. At that point we knew it was a tricky problem, because maintainability issues reveal themselves only after a longer period, when the project grows a bit. It is safe to say that a project consisting of 1 scenario written in any testing framework looks easy to maintain, but would it be the same with 30 different scenarios? What if there are even more? All projects evolve (unless they are dead), and so do the tests. Some scenarios become no longer applicable and are removed, some are added, while others are extended or shrunk by a few steps. Finally, some scenarios may become more precise or more general, so their steps are just altered.

All of those changes brought the following questions, which we started considering in our design decisions:

How easy would it be to add a new scenario?

How easy would it be to add or remove steps in any given scenario?

How easy would it be to rename scenarios or steps?

If scenarios are removed, how easy would it be to clean up methods that are no longer used by any scenario?

How easy would it be to restructure and reorganize the test suite?

If a project has 5, 30 or 100 scenarios, how long would it take to apply those changes to all of them?

By how easy we mean:

  • how many manual steps have to be taken by a developer / PO / QA in order to apply the change?
  • do all of those steps have to be applied in one place / project / location / repository, or do they have to be made in different places?
  • how long would it take to apply such a change?

Clean code

Maintainability does not refer only to changing code. It is also about:

  • helping new people in a team understand the existing tests,
  • investigating why tests are failing,
  • checking which scenarios are still valid after requirements change.

This brings the following questions to be answered:

How easy would it be to understand how a given scenario works?

Is it possible to analyze the scenario flow without debugging it?

How easy would it be to debug a given scenario?

We wanted to have a framework that:

  • does not require literals with regular expressions everywhere,
  • does not generate a bunch of files with unreadable code,
  • does not use loose binding between scenarios and the underlying methods,
  • does not require static contexts or other complex constructs to pass state between scenario methods,
  • behaves intuitively,
  • is easy to navigate in Visual Studio.

Traceability

Previously, I mentioned that acceptance tests cover a much wider scope than unit tests. While investigating failed acceptance or end-to-end tests, we often asked questions like:

At which test stage did the scenario fail?

Which operation performed on the GUI failed the scenario?

Which component in the end-to-end journey behaved incorrectly?

We wanted a framework that would allow those questions to be answered easily, at first glance, without spending minutes analyzing logs and stack traces.

Execution progress monitoring

Acceptance tests are slow. End-to-end tests are even slower. All of us have spent so much time staring at TeamCity, waiting for tests to finish in order to close a ticket, release the project to production or finally go home leaving the board green. So many times it turned out that some of those tests were broken, causing the whole build to fail. Those failing builds took much more time to execute than the ‘normal’ builds, making the waiting even worse (I described the reasons for this behavior in the Test characteristics section of this post)… If only we knew what was happening with those tests, we could immediately detect the issue, stop the tests, fix it, rerun them and go home… Of course, while fixing, we were adding more Console.WriteLine() or _log.Debug() statements to the test methods to detect those problems faster next time, but there were always some places where such logging was missing. The practice itself was not good either, because it made the whole test code less clear to read and required additional typing.

So, what we really wanted was a framework that would allow the following questions to be answered without any additional developer intervention:

What is the progress of the tests currently being executed on CI?

Why does the current execution take 2 minutes longer than normal?

What are the currently executed tests doing right now?

Are those tests just slower but still passing, or is something horrible happening to them?

A simple solution is the best one

All of the requirements I have just described could give the impression that we wanted a very complex, sophisticated framework that would take at least a year to build – it was exactly the opposite! The first version of a testing framework that fulfilled all of those requirements consisted of one class with a single public method. It was quite difficult to even call it a framework…

Within a week, after a few design meetings, we came up with the idea of using the standard NUnit framework with a few conventions to write our acceptance tests:

  • reflect the given-when-then scenario name as a test method name,
  • represent each scenario step as a method call in the test,
  • name each step method the same as the step in the scenario (replacing spaces with underscores),
  • wrap all steps with a RunScenario method, so step methods can be passed as delegates, which allows omitting brackets and displaying execution progress,
  • separate all test implementation details from the test by using partial classes.

An example scenario, taken from the Wikipedia page:

would look as follows:

with the example implementation as follows:
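The original snippets did not survive in this copy of the post, so to give a flavor of those conventions, the well-known “Returns go to stock” scenario from the Wikipedia BDD article could have been written roughly like this. The RunScenario helper below is a minimal sketch of the behavior described later in the post, not the original code, and the NUnit attributes are omitted to keep it self-contained:

```csharp
using System;

// Scenario part (in practice decorated with NUnit's [TestFixture]/[Test] attributes):
public partial class Customer_returns_feature
{
    public void Refunded_items_should_be_returned_to_stock()
    {
        RunScenario(
            Given_a_customer_previously_bought_a_black_sweater_from_me,
            And_I_currently_have_three_black_sweaters_left_in_stock,
            When_he_returns_the_sweater_for_a_refund,
            Then_I_should_have_four_black_sweaters_in_stock);
    }
}

// Implementation part, separated from the scenario with a partial class:
public partial class Customer_returns_feature
{
    private int _blackSweatersInStock;

    private void Given_a_customer_previously_bought_a_black_sweater_from_me() { }
    private void And_I_currently_have_three_black_sweaters_left_in_stock() { _blackSweatersInStock = 3; }
    private void When_he_returns_the_sweater_for_a_refund() { _blackSweatersInStock++; }
    private void Then_I_should_have_four_black_sweaters_in_stock()
    {
        if (_blackSweatersInStock != 4)
            throw new Exception("Expected 4 black sweaters in stock");
    }

    // Minimal sketch of the runner: execute the delegates in order,
    // printing each step name (with underscores replaced by spaces) before execution.
    private static void RunScenario(params Action[] steps)
    {
        foreach (var step in steps)
        {
            Console.WriteLine(step.Method.Name.Replace('_', ' '));
            step();
        }
    }
}
```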

The BDDRunner.RunScenario() method was responsible for doing only two things:

  • executing the step delegates in the provided order,
  • printing each step name before its execution.

That’s it!
So, how were all the requirements fulfilled? Let’s see:

Clear tests – The conventions we used allowed the PO/QA to understand the tests easily, even though they were written purely in code. We were still able to pair and work on them together. We were also able to quickly browse our existing tests to check whether a given scenario was already in place.

Maintainability – We decided to place all our tests directly in code, representing all feature elements (features, scenarios, steps) with corresponding code constructs like classes and methods. This allowed us to use all the standard developer tools (IDE, Resharper) and methods (refactoring, static analysis, running tests from the IDE) to maintain our test code effectively.

Clean code – Instead of reinventing the wheel, we decided to use existing tools to do the things they do well. Everybody knew the NUnit framework, how to write tests with it and what behavior to expect from it, so we went with this well-known test structure. The convention we used for structuring our tests gave us better clarity about what a given test is doing. Explicit step execution allowed us to analyze tests quickly and effectively (after all, it is only a matter of navigating to the step method implementation).

Traceability – Representing each step as a method with a self-describing name and printing the step name before its execution allowed us to localize and understand scenario failures quicker, by analyzing the exception/assertion stack trace or checking the execution console output in both CI and Visual Studio.

Execution progress monitoring – Again, because each step name was printed before its execution, we got execution progress monitoring for free. It finally allowed us to track on CI the current stage of the executed tests and quickly determine that some steps were executing longer or failing. Also, because TeamCity uses time stamps when printing console logs, we could analyze which steps were executing longer and focus on their optimization.

LightBDD

I noticed that the small BDDRunner class became very helpful for our team in developing both acceptance and end-to-end tests, so I decided to create an open-source project and share it with others. The class I described above became the first version of LightBDD – there is a first commit showing how it looked back then.

Thank you.

PS. In an upcoming post, I will describe how the requirements changed when I joined a larger company with a corporation-like environment, and how LightBDD evolved into its current form.
