I have not written here for a while, but today I wanted to share something new.
My last post was purely about games – Elite Dangerous, to be precise. I still love this game and play it a lot.

Recently I have not had much professional time to code, but I used the opportunity to write a tool for myself and the game’s community, to help explore the vast galaxy of Elite Dangerous.

For that, I have created the Suremaker/edsm_scanner GitHub project, with a background story described on its wiki page.


Elite Dangerous


Normally I write about development-related stuff here, but today I decided to write about games, without which I would probably never have developed an interest in computers and IT.
I played a variety of games on the Commodore 64, the Amiga and then PCs. This time I would like to write about Elite Dangerous, a space simulation game.

I grew up playing Frontier: Elite II on the Amiga and later Frontier: First Encounters on the PC. They were amazing games, allowing me to fly through countless star systems in the galaxy, trade, explore and fight enemies.
When I saw Elite Dangerous for the first time, I hesitated to buy it. It looked amazing and interesting to play, but it is an online game and I was worried about being griefed by other players (which is often mentioned in the game’s comments). I finally bought it, and I love it!

First of all, the game offers a SOLO mode, so anyone afraid of encountering other players can pick it. I chose it at the beginning and could enjoy the game the way I played it in the past 🙂 Later on, I switched to OPEN mode and found out that the galaxy is so huge that it is possible to enjoy the game without being nagged by other, not-so-friendly players. Also, no matter the mode, I can rebuy my destroyed ship with all its equipment for 5% of its original price, which is cool as it allows me to make mistakes and learn the game without too much pain 😉
I wrote this quick paragraph for those who, like me, were worried that playing online would be a pain…

So, what do I like about the game?

Elite Dangerous follows its predecessors and, similarly to them, offers a huge world with many different ways to play it as a spaceship commander. Unlike the previous titles, it is an online game, allowing interactions between players. Moreover, it means that any action taken by a player influences the world that other players play in: its economy, commodities, politics and game events.



I started my adventure with trading, the way I remembered the game from the past. It is a pretty simple activity: buy cargo at one station and sell it at another. It’s a safe start, as no fighting is involved, and it allowed me to learn the flight mechanics. Once the mechanics are mastered, simple trading may become a bit boring; however, the game offers a few things to make it more interesting.
The first of these are missions, available at every starport: transporting goods to a specific location, obtaining them elsewhere and bringing them back, or just delivering data to some place. The last one is perfect for small ships as it does not require any cargo capacity. These missions pay better than just selling goods ourselves. They also become more challenging as the game progresses, due to time limits or pirates being sent to interrupt them.
If that becomes too trivial, it is possible to take on jobs smuggling illegal cargo or other, riskier trades…



Before moving on to the different ways of playing, it is worth mentioning what kinds of settlements players can visit, as there are a few types to see.



spaceport inside

I guess the most popular are the large spaceports (which were present in the previous games as well). They come in different shapes, but all feature a “mail-slot” entry that a ship has to pass through in order to reach the docks. The inside of the station is always round, and the station rotates to produce gravity. When I was learning the game, it was actually pretty difficult to learn how to match the station’s rotation, approach the dock and land on it. Moreover, with bigger ships it may also be difficult to fit through the slot.

The good thing, though, is that as of now all ships are equipped with a docking computer that can do it automatically. Like in the old Frontier, the auto-docking process is accompanied by The Blue Danube by Johann Strauss II 🙂

planet base

planet city

Similarly to the old games, it is also possible to land at planetary outposts (for now, only on airless worlds). The platforms are located on the surface and are rather easy to land on. What is more complicated is that in order to reach the landing bay, the ship first has to enter the planet’s orbit, then glide towards the settlement until it gets close, and only then land.


Unlike its predecessors, Elite Dangerous also has outposts, which are smaller versions of the spaceports that have no mail-slots but expose landing platforms directly to space. They do not rotate either, and they are much easier to land on, especially for beginners. Unfortunately, they do not have large landing pads, which means they are not suitable for big ships.


Finally, there are also mega-ships, which are a kind of moving outpost but allow all kinds of ships to land on them.

Passengers and tourists


Yet another activity similar to trading is transporting passengers between systems, including tourists, who require the player to take them to a few different places along the route and often make additional demands during the journey. Tourism itself is a new concept (not present in the previous games) and has a few interesting aspects. First of all, the ship has to be equipped with passenger cabins, which come in different types: economy, business, first and luxury class. The luxury class can only be fitted on special passenger ships like the Beluga Liner. The tourists themselves ask to be brought to various sightseeing locations such as Earth-like planets, gas giants with beautiful rings, geysers or historical sites. Other variations of passenger transport missions include rescuing people from damaged stations, helping refugees reach better worlds, transporting business people, helping criminals escape, or transporting prisoners or even slaves.


sightseeing beacon



While most commodities are produced in colonies or space stations, minerals and raw materials can be mined from asteroid fields or planet surfaces. Mining is actually pretty interesting, especially with the expansion added to the game last year. The simplest way to do it is to buy a mining laser and a refinery, fly to an asteroid field or planet ring, and fire at an asteroid. After a while, small rocks containing minerals will start breaking off the asteroid’s surface. When collected, they get refined into pure minerals. With more advanced tools it is possible to extract sub-surface minerals as well, or even detonate the whole asteroid to get access to its profitable core minerals.




Very quickly, however, I realized that Elite allows much more than trading or mining. While all the star systems’ locations are generally known, it does not mean that all the planets belonging to a system are visible immediately. I noticed this when I accepted a mission to deliver cargo to a station that I could not locate in the target system! The game offers a set of tools and mechanics for exploring the galaxy: jumping between systems, scanning stars, discovering planets and mapping their surfaces from orbit. It was an amazing experience to see the different star types, which are beautiful in the game. I still remember the thrill when I saw a white dwarf with its jets in front of my ship after leaving hyper-cruise – I was definitely not prepared for it!
Of course, like in the old games, it is also possible to refuel the ship directly from a star 🙂
If that is not enough, it is also possible to get down to a planet’s surface and discover strange structures or settlements, as well as get out of the ship and drive a Surface Recon Vehicle directly on the planet.
I also have to mention the asteroid fields and planet rings, which look lovely and can also be visited for mining or hunting pirates.
After enjoying your discoveries, you can sell the exploration data at a station and earn quite a lot of credits.




Finally, Elite cannot be described without mentioning combat. It is possible to play without fighting, but space is huge, and systems have various government types (including anarchy) and security levels. Sooner or later our journey will be interrupted by a pirate trying to steal our cargo. Of course, there is also a set of activities in the game that are riskier and explicitly involve fighting.

When I started playing, I tried to avoid battles until I had learned my ship well. It was generally an easy task – just a matter of flying between secure systems and doing low-risk jobs. Elite also has a ranking system with separate ranks for combat, exploration and trading skills (and one unrelated to the standard gameplay). Starting with the Harmless rank makes the game spawn similarly weak opponents, which allowed me to learn combat skills slowly.

However, at some point I got interested in a bit of action, so I started checking what could be done.

The first thing I did was buy an FSD Interdictor and start hunting pirates. The device is pretty cool, as it allows us to intercept ships travelling within a star system. So I started scanning the ships cruising around me, looking for ones with a WANTED status, and intercepting them. Why is checking the wanted status important? First of all, after destroying a wanted ship, it is possible to claim the bounty at a starport. The second, more important reason is that shooting at a ship that has not been scanned first (and which is not hostile) is a criminal offence, meaning that your own ship becomes wanted. Here I wanted to be a good guy, so I played as a bounty hunter.

I quickly learned that bounty hunting is cool, especially when combined with pirate-hunting missions – it basically means credits for the bounty rewards, credits for accomplished missions and a lot of reputation with the factions that put the bounties on the pirates. A good place for such tasks are the extraction sites within planetary rings, as they usually lure a lot of pirates.

bounty award

Once, I decided to try the opposite and become a pirate for a while, ARR!!!
As I did not want to face the full consequences of getting the System Authority Vessels on my back, I chose a system with an anarchy government, where everybody is lawless and any action is allowed. After a few successful interdictions, I could just go to a nearby starport and sell the “borrowed” cargo on the black market.

The game itself offers many more options to test and improve battle skills. There are various missions available for fighting pirates, doing piracy, assassinations, or just helping one side of a local conflict by entering the conflict zone and crushing enemies for combat bond vouchers and fame 😉

Finally, I would like to mention one more thing that shocked me during one of the fights. The enemy ship broke through and shattered my front canopy! It would not have been a big deal, but my ship switched to the emergency oxygen supply, sufficient for only 5 minutes; I stopped hearing any voice messages from my ship’s computer; and, most importantly, I stopped seeing any details of where I was flying, because this information is displayed on the canopy that was gone!

After flying blind for a while, I managed to get to the nearest outpost and replenish the oxygen, but the fun did not end there. I realized that this station did not offer repair services, so I could not fix my canopy! Luckily, I could buy a slightly better life support system for my ship, giving me an additional 25 minutes to find the next station and fix my ship! Here is a screenshot from that time:

broken canopy

My little expedition


Recently I decided to take a long journey and play as an explorer. Usually my journeys were about jumping to neighbouring systems, taking only a few jumps. This time, I took a passenger who wanted to fly to a system located around 10,000 light years away, which required over 270 jumps with my current ship. I got excited, as most of this journey meant flying through totally empty, uninhabited space.


I was also aware that any mistake might cost me my ship, all the discoveries made on the way and the lives of my passengers, so to minimize the risk I decided to fly only between stars where I could refuel my ship, and to avoid any neutron stars or white dwarfs, which could make my journey quicker but also destroy my ship.


So, to make my journey interesting but not too long, I removed any equipment that would make my ship too heavy (and shorten my jump range), while adding a surface scanner and a Surface Recon Vehicle hangar to make the most of this journey of discovery. I also left a few weapons and good shields on my ship in case I met pirates on the way.

So the journey began!…

geological signs

After an entire evening of flying, I managed to get through a third of the journey. I updated my route to also omit M-class stars (red dwarfs), to increase the chance of spotting systems with more interesting planets. I discovered quite a few water worlds, as well as one ammonia world and one Earth-like planet. All of them had already been discovered by someone, so my name won’t be visible on the charts, but I also found systems that nobody had visited yet. The nice thing is that one of those planets had geological activity, so I decided to land on it and use my SRV for the first time in my game. It’s a lot of fun to drive a rover on the surface of a distant planet!

driving srv


After two more evenings, I got to the destination. The discoveries paid off with a very nice injection into my credit balance. On the way home, though, I decided to take a few shortcuts and updated my route plan to include neutron stars and boost my engines. It was an interesting experience after all!


In a few days I will try something else again, and attempt to discover the ancient ruins of an extinct alien race!

PS. I took all the attached screenshots during my game – I just love it!


LightBDD 3


I am happy to announce the LightBDD 3.0!

Before I get to the details, here are some quick links for those who just want to start looking into it:

In the LightBDD project, I try to follow the semantic versioning guidelines. I was able to develop the LightBDD 2.x series for almost 2 years without introducing any major breaking changes.
It means, however, that more and more methods and classes became obsolete as the project evolved, making the code base grow bigger and more difficult to maintain. Over that time, I also observed that some of the design decisions introduced unnecessary complexity or confusion. Finally, with the recent request to provide a signed version of the packages (which is a big breaking change), I decided that it was time to make LightBDD 3.x.

So what’s new?

As I mentioned above, the main driver for going to version 3.0 is the introduction of a few breaking changes, but from the usage perspective there are actually not as many changes as people might expect. In this sense, the LightBDD project is taking the evolution path, not the rewrite one 🙂


No more confusing scenario namespaces

I guess the biggest usability improvement is that all the different scenario styles are now located in one namespace: LightBDD.Framework.Scenarios.
No matter which scenario style is your favourite, dear reader, you can just specify using LightBDD.Framework.Scenarios; in your code to get access to all the extension methods on the Runner property.
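For illustration, a feature class could then look like this (a minimal sketch; the step methods are placeholders defined elsewhere in the partial class, and I am quoting the attribute and base class names from memory, so check the wiki for the exact shape):

```csharp
using LightBDD.Framework;
using LightBDD.Framework.Scenarios; // the single namespace for all scenario styles
using LightBDD.NUnit3;

public partial class Login_feature : FeatureFixture
{
    [Scenario]
    public void Successful_login()
    {
        // Basic, extended and fluent styles are all reachable from Runner
        // with just the one using above.
        Runner.RunScenario(
            Given_the_login_page,
            When_the_user_enters_valid_credentials,
            Then_the_user_should_be_logged_in);
    }
}
```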

Similar simplifications have been made in other areas of LightBDD as well.

Fluent scenarios are first-class citizens now

In LightBDD 2.x, scenarios could be built in a fluent way by calling the Runner.NewScenario() method (and using the proper namespace). While it was not difficult to do, such boilerplate code had to be repeated in every scenario using the fluent syntax.

In LightBDD 3.0, the NewScenario() method has been removed, and it is now possible to start composing a scenario in a fluent manner by using Runner directly:
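A sketch of how a fluent scenario might now look (step names are placeholders and I am writing the calls from memory, so consult the project wiki for the exact method shapes):

```csharp
[Scenario]
public async Task Browsing_the_catalogue()
{
    // No NewScenario() boilerplate – composition starts straight from Runner.
    await Runner
        .AddSteps(
            Given_the_catalogue_page,
            When_the_user_filters_by_category)
        .AddSteps(Then_the_matching_products_should_be_displayed)
        .RunAsync();
}
```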

Removed obsolete code and simplified internals

With the LightBDD 3.0 release, over 5000 lines of code have been deleted!

Besides the obsolete code removal and many simplifications in the internal code, two visible areas of LightBDD have been removed:

  • LightBDD.NUnit2 integration has been removed in favor of LightBDD.NUnit3,
  • Runner.RunScenarioActionsAsync() methods have been removed in favor of easier fluent scenario integration.

Signed assemblies

The last, slightly less visible but significant change is that all the packages except LightBDD.Fixie2 are now signed.
It means that LightBDD DLLs can now be put into the GAC but, more importantly, LightBDD can now be used to test signed projects.

Why is that change a breaking one, then?
Well, signed assemblies are not binary compatible with unsigned ones. Moreover, binding redirection will not work for them either, which means that all projects using LightBDD have to be updated to LightBDD 3.x and recompiled.

Lessons learned

Working on LightBDD gives me a lot of satisfaction. First of all, it is amazing to see people using the project and finding it useful. The second bit I love is that I keep learning how to maintain library code efficiently.

When working on LightBDD 2.0, I designed it to be modular, with a clear separation between the engine (LightBDD.Core), the framework (LightBDD.Framework) and all the integrations (LightBDD.NUnit3 etc.).
Having this separation in place helped me create a solution that is easily extensible, thanks to a proper separation of concerns. As an example, I could easily introduce the LightBDD.Fixie2 integration or add the compact scenario syntax.
However, such a separation of projects uncovered some poor design decisions as well…

Using interfaces vs abstract classes

LightBDD 2.x relies heavily on interfaces to expose only the crucial information between LightBDD.Core and the Framework. Going with the interface approach allowed easy extension of the framework without exposing too much of the internals, allowing them to be refactored as needed.

This approach led, however, to an interesting observation: depending on who provides the implementation of an interface, the architecture is or is not extensible.

TL;DR: Use an interface when providing the API together with its implementation to the user. Use an abstract class when expecting the user to provide the implementation.

Let’s take a look at two examples.

Example 1

The LightBDD.Core exposes following interfaces:

The IStepDecorator interface allows defining a decorator and configuring LightBDD to call it around step methods.
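For illustration, a decorator measuring step execution time might look like this (a sketch; I am quoting the interface shape from memory, so treat the member names as assumptions):

```csharp
public class TimingStepDecorator : IStepDecorator
{
    public async Task ExecuteAsync(IStep step, Func<Task> stepInvocation)
    {
        var watch = System.Diagnostics.Stopwatch.StartNew();
        try
        {
            await stepInvocation(); // execute the actual step
        }
        finally
        {
            Console.WriteLine($"{step.Info.Name} took {watch.Elapsed}");
        }
    }
}
```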

The implementer of the decorator can use the IStep interface to access the step details:

The code presented above is the original LightBDD 2.x version of that interface.

During the LightBDD 2.x evolution, there was a need to expose more information about the step. As the implementation of that interface was provided by LightBDD.Core itself, not by users, it was safe to just add more members to the interface, ending up with the latest version:

This change was easy to ship to users without worrying about the backward compatibility of the LightBDD packages.

Example 2

The IIntegrationContext is an interface defined in LightBDD.Core that has to be implemented by every integration project such as LightBDD.NUnit3.

Over time, it turned out that a new feature required the interface to return an IDependencyContainer DependencyContainer { get; } property.
Unfortunately, adding this property to the interface would be a breaking change, as all existing implementers would no longer provide a full implementation of the interface.

Of course, all LightBDD-provided implementations would get updated at the same time, but the backward incompatibility could still pop up for users if they upgraded only the LightBDD.Core package and not the others.

So, how did it get implemented?
The interface was actually left unchanged; instead, the abstract IntegrationContext class was updated with the new property:
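Conceptually, the pattern looks like this (a hypothetical reconstruction to illustrate the idea, not the actual LightBDD source; the container interface and default implementation shown here are made up):

```csharp
using System;

// Placeholder for LightBDD's container abstraction (simplified).
public interface IDependencyContainer
{
    object Resolve(Type type);
}

// Hypothetical fallback used when an integration does not override the property.
public class DefaultContainer : IDependencyContainer
{
    public object Resolve(Type type) => Activator.CreateInstance(type);
}

public abstract class IntegrationContext
{
    // ...existing abstract members stay as they were...

    // The new member is virtual with a default value, so integrations
    // compiled against the older LightBDD.Core keep working after an upgrade.
    public virtual IDependencyContainer DependencyContainer { get; }
        = new DefaultContainer();
}
```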

It is worth noting that the new property is virtual and provides a default value in case it is not overridden. This way, it is safe to update the LightBDD.Core package and still successfully use it with older versions of LightBDD.Framework and the integrations.

The final note here is that in LightBDD 3.0 there is no IIntegrationContext interface any more, and IntegrationContext is now the base type for all contexts.

Extension methods

Extension methods are a widely used concept in LightBDD.
They are used for extending IBddRunner with new scenario writing styles, as well as for working with the LightBDD configuration.

While this mechanism worked very well in LightBDD 2.x, there was one flaw in its design.

There is nothing more annoying than seeing code work perfectly well in one source file:

… but fail in another file with errors like:

Such problems often happened when the wrong usings were specified in the source file. While using LightBDD.Framework.Scenarios.Basic; would make the code compile,
using LightBDD.Framework.Scenarios.Extended; would fail in this case.
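As an illustration of the 2.x layout (a sketch with simplified step shapes; the namespace names are the real 2.x ones, the rest is made up):

```csharp
// LightBDD 2.x: each scenario style lived in its own namespace.
using LightBDD.Framework.Scenarios.Basic;      // method-reference style
//using LightBDD.Framework.Scenarios.Extended; // lambda style

[Scenario]
public void Placing_an_order()
{
    // Compiles with the Basic namespace imported...
    Runner.RunScenario(
        Given_an_empty_basket,
        When_a_product_is_added);

    // ...while a lambda-based call like this one would need the Extended
    // namespace instead, and fails to compile otherwise:
    //Runner.RunScenario(_ => _.Then_the_basket_contains_the_product());
}
```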

In LightBDD 3.0, all the namespaces have been simplified, and now all the flavors of the scenario styles are located in one namespace: using LightBDD.Framework.Scenarios;.

No more confusion!



It has been a while since I wrote anything here, but today I would like to announce the LightBDD.Tutorials repository and the first LightBDD tutorial I have made: Web Api Service Tests!

LightBDD has existed since 2013 and has evolved a lot since then. For all that time I have tried to maintain the documentation wiki as well as a set of example projects, but both were focused more on presenting what LightBDD is capable of. What was lacking was a full example of how it can be used with technologies like WebApi, NServiceBus, Selenium and others.

With the introduction of LightBDD.Tutorials this is finally changing πŸ™‚

At this moment there is one tutorial, on how to use LightBDD to service-test an AspNetCore WebApi project, but over time the repo will grow with other fully working examples.

Happy reading and testing!


This summer is very busy at work. The Summer of Craft has started, to strengthen our development culture. Each day there is something interesting going on where people can learn new things, share knowledge or meet with peers. One of the events is called Techfast: a 30-minute chat on a given topic over breakfast, and I had an opportunity to drive one about asynchronous programming and execution. After the discussion, I received feedback that it helped people understand what async is about, so I thought it would be worth sharing here as well.

Asynchronous programming is not a new concept. Asynchronous programming patterns have existed in .NET in various forms since version 2.0; however, with the language support of the async / await keywords in .NET 4.5, asynchronous programming became very popular and is now used almost everywhere.

Although I now use async features at work as well as in my own projects (LightBDD 2.0 is based on async), I still remember how confusing it was when I started exploring it. I come from a C++ / C# background where, in the past, if I wanted to make some functionality more responsive or process things faster, I used threads. So when I started working with async methods I had questions like:

  • How does async method work?
  • Does async execution involve multi-threading?
  • How is async different from multi-threading?

After spending more time digging into the implementation details, reading documentation, and simply using it for a while (which involved finding a few surprising behaviours), I got a better understanding of how it works. I realized, however, that it would have been much simpler to grasp with a simple real-life example describing it well – that is why I initiated that discussion at Techfast.

Let's make some burgers


Let’s forget about all the programming languages and syntax and think for a while about something much nicer – food; to be more precise, preparing food.

We will be preparing a burger from buns, beef patties, onions, lettuce, tomato, cheese slices and ketchup.

Preparing a burger:

  1. First, wash all the vegetables, peel and slice the onions, slice the tomatoes and finally put them on the grill.
  2. Then put a beef patty on the grill, remembering to flip it a few times to make it well done (I don’t like blood in my food), and when it is almost ready, put a slice of cheese on top of it to melt a little.
  3. In the meantime, put the buns on the grill for a moment to toast them.
  4. When everything is ready, take it off the grill, put the meat in a bun followed by the onion, lettuce and tomato, add some ketchup on top and cover with the other half of the bun.
  5. The burger is done!

So what does this have to do with async? Let’s think for a while. I had a few distinct tasks there:

  1. Wash and slice the vegetables, then grill them;
  2. Grill the beef patty, flipping it a few times and melting the cheese at the end;
  3. Toast the buns;
  4. Put everything together to finish the burger.

Asynchronous vs synchronous operations

Now, did I do those tasks one after another? Did I wait until the vegetables roasted before putting the patty on the grill? No, not really. As soon as I put the onions and tomatoes on the grill, I started grilling the patty as well as the buns. I did it asynchronously! As soon as I realized that I would have to wait to finish my current task (grilling the veggies), I switched to another one (grilling the patty), then another (toasting the buns). I could do that because I was not involved in the process of grilling – whether I stood next to the grill or not, all the ingredients would keep roasting.

So what is the difference between a synchronous and an asynchronous operation? A synchronous operation is one in which I am fully involved. The example here is washing and slicing the vegetables. I am actively doing it. I cannot walk away from the sink or the table hoping that the vegetables will wash and slice themselves. Also, as I am actively doing that work, there is no waiting element, so there is no reason to start another task. I do it from beginning to end – synchronously.

So the first observation is: async is about utilizing the time to do another task where otherwise we would have to wait (do the Thread.Sleep() or Semaphore.Wait() methods ring a bell?).
The second observation: async is about dividing an operation into a set of smaller tasks that can then be executed asynchronously.
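In C# terms, the difference between waiting synchronously and asynchronously looks like this (a minimal self-contained sketch):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class WaitingDemo
{
    // Synchronous: the calling thread is blocked for the whole second,
    // like a cook standing at the grill doing nothing.
    public static void GrillBlocking()
    {
        Thread.Sleep(1000);
        Console.WriteLine("patty ready (thread was blocked)");
    }

    // Asynchronous: the thread is released at the await and can do other
    // work; execution resumes when the delay completes.
    public static async Task GrillAsync()
    {
        await Task.Delay(1000);
        Console.WriteLine("patty ready (thread was free meanwhile)");
    }
}
```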

Does async mean multi-threading?

So how about multi-threading? Do async operations involve multiple threads?

Let’s modify our example a bit:
This time we will be making the same burger(s), but there will be two people preparing them.
The first cook takes the first available task (washing and slicing the vegetables).
At the same time, the other cook goes to the BBQ and starts grilling the patty and the buns.
When the vegetables are sliced, the first cook puts them on the grill as well…

We could continue this story by saying that the cooks would move on to prepare more burgers, or we could add more tasks for them, but I think the example captures the crux of the matter.
The cook represents a thread. In the first scenario we managed to prepare a burger with one thread / cook, while in the second scenario the tasks were distributed between multiple cooks.

As we managed to make a burger in both scenarios, the conclusion is: async execution is independent of multi-threading. Multiple threads can support / speed up asynchronous operations, but a single thread is enough to perform async operations as well.


When we cook, we have to perform various small tasks to end up with our favourite burger in hand. Some of the tasks require our full involvement and attention, such as slicing or washing vegetables. Other tasks, however (such as grilling patties or toasting buns), do not require our full attention, allowing us to start them, move on to something else in the meantime and come back to finish them when ready.

In the programming world it is very similar. Some operations, such as typical collection sorting algorithms, can be executed immediately, while others, such as network or disk I/O operations, require the executing thread to wait for them to finish.
Asynchronous programming with the async / await model allows building methods composed of tasks, where the executing thread can move on to execute a different task if the current one does not allow further processing without waiting.
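The burger workflow above maps naturally onto async / await (a sketch; the delays stand in for grilling times):

```csharp
using System;
using System.Threading.Tasks;

public static class BurgerKitchen
{
    // Unattended work: once started, nobody has to stand next to the grill.
    static async Task GrillAsync(string item, int ms)
    {
        await Task.Delay(ms);
        Console.WriteLine($"{item} done");
    }

    // Attended work: requires the cook's (thread's) full involvement.
    static void WashAndSliceVegetables() => Console.WriteLine("vegetables sliced");

    public static async Task MakeBurgerAsync()
    {
        WashAndSliceVegetables();                    // synchronous step

        var veggies = GrillAsync("vegetables", 300); // start grilling...
        var patty   = GrillAsync("patty", 500);      // ...and move on without waiting
        var buns    = GrillAsync("buns", 100);

        await Task.WhenAll(veggies, patty, buns);    // come back when all are ready
        Console.WriteLine("burger assembled");
    }
}
```

Note that all three grilling tasks run concurrently even if only a single thread processes them, which is exactly the single-cook scenario from the story.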

As one cook is enough to make a burger, one thread is also sufficient for asynchronous execution. As with more cooks, more threads may make things faster, but they are not necessary – for example:

  • JavaScript is single-threaded and supports asynchronous processing,
  • .NET console applications are asynchronous and multi-threaded by default,
  • .NET WinForms applications use one thread by default to process async methods called from the UI thread.

I’m happy to announce that LightBDD 2 has been released and is ready to use.

New platforms and frameworks support

LightBDD has been reworked to support various platforms and frameworks.
With version 2, the LightBDD packages target both .NET Framework (>= 4.5) and .NET Standard (>= 1.3), which allows them to be used on platforms like the regular .NET Framework, .NET Core or even the Universal Windows Platform.

New testing framework integrations

The testing framework integration projects have been reworked as well, to leverage the cross-platform framework capability and to remove the LightBDD 1.x integration drawbacks.

The following integrations are available with LightBDD 2:

  • LightBDD.NUnit3 – integration with the NUnit framework 3.x series,
  • LightBDD.NUnit2 – integration with the NUnit framework 2.x series (to simplify migration from LightBDD 1.x),
  • LightBDD.XUnit2 – integration with the xUnit framework 2.x series,
  • LightBDD.MsTest2 – integration with MsTest.TestFramework, the successor of MsTest.

Asynchronous scenario support

The LightBDD 2 runners fully support async scenario execution.

The example below shows scenario execution for steps returning Task:
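Something along these lines (a sketch from memory rather than the exact shipped sample; the feature class shape and step bodies are placeholders):

```csharp
public partial class Basket_feature : FeatureFixture
{
    [Scenario]
    public async Task Adding_products_to_basket()
    {
        await Runner.RunScenarioAsync(
            Given_an_empty_basket,
            When_a_product_is_added,
            Then_the_basket_should_contain_the_product);
    }

    // Steps return Task and can await I/O, e.g. calls to the tested service.
    private async Task Given_an_empty_basket() => await Task.Yield();
    private async Task When_a_product_is_added() => await Task.Yield();
    private async Task Then_the_basket_should_contain_the_product() => await Task.Yield();
}
```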

It is also possible to mix synchronous steps with asynchronous ones using the RunScenarioActionsAsync method:

New configuration mechanism

The LightBDD configuration mechanism has changed too. In version 2, all the configuration is done in code, and the framework has been changed to allow more customization than version 1.x.

More details

For more details, feel free to visit the project home page.
In order to jump quickly into the code, the quick start wiki page may be helpful.
Finally, there is also a wiki page describing how to migrate tests between LightBDD major versions.

Happy testing!


It has been almost three and a half years since the first version of LightBDD was released on NuGet (1.1.0) and almost half a year since the last update (1.7.2).
Since the beginning of the project, new C# language features like async/await have become more popular, and new platforms such as .NET Core and standards like .NET Standard have emerged.

Because LightBDD 1.x targets .NET Framework 4.0, and due to a few of its implementation details (like the usage of ThreadLocal<>, StackTrace or CriticalFinalizerObject), it has been difficult to adapt it to the new trends.
Also, with the project’s evolution, some of its features became obsolete.

For these reasons, it is time to make bigger changes in the framework and take it to version 2.

What will change?

Full async support in core engine

The version 2 engine will be designed to run scenarios and steps in an asynchronous manner. Async scenario and step methods will be supported, but ultimately this will depend on the step syntax implementation: async execution is planned for the extended step syntax (parameterized steps) but not for the simplified one.

Support for other platforms and frameworks

The first release of version 2 will target .NET Framework 4.6 as well as .NET Standard 1.6, making LightBDD available on new frameworks and platforms.
LightBDD will officially support the .NET Core and .NET Framework platforms, and possibly more in the future.

After the release, additional investigation and testing will be done to check the possibility of extending support to .NET Framework 4.5 (the lack of the AsyncLocal<> class makes it problematic).
Currently, the plan is to drop .NET Framework 4.0 support.

Testing framework support

LightBDD version 2 will support the following testing frameworks:

  • NUnit3 (.NET Framework and .NET Core),
  • XUnit2 (.NET Framework and .NET Core),
  • MsTest (.NET Framework and .NET Core).

MbUnit support will be dropped as the project is no longer maintained – it may, however, be added later if there is a need for it.

Framework modularization

The LightBDD projects have been reworked to separate LightBDD features from the core engine.
Features such as step execution syntax, step commenting or summary generation will be separated into dedicated packages, enabling:

  • an ability to version and evolve features independently,
  • users to pick features they really need.

In-code configuration

Because app.config is not available on all platforms, the LightBDD engine will now be configured in code.
For each testing framework, LightBDD scope initialization code will have to be present (the scope initialization may look different depending on the testing framework), and it will allow configuring the runner, including:

  • customizing summary report generation,
  • enabling additional features and extensions,
  • customizing framework core mechanics like culture info, step name formatting method and more.

The code below may change on release, but it visualizes how configuration will be done:
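The listing itself is not preserved in this copy; a sketch along those lines (the exact API may change on release, and the feature and formatter names here are examples):

```csharp
[assembly: ConfiguredLightBddScope]

internal class ConfiguredLightBddScope : LightBddScopeAttribute
{
    protected override void OnConfigure(LightBddConfiguration configuration)
    {
        // Enable an additional feature, such as step commenting...
        configuration
            .ExecutionExtensionsConfiguration()
            .EnableStepCommenting();

        // ...and customize core mechanics, e.g. summary report generation.
        configuration
            .ReportWritersConfiguration()
            .AddFileWriter<XmlReportFormatter>("~\\Reports\\FeaturesReport.xml");
    }
}
```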

Fewer workarounds

The new version will eliminate some of the caveats of LightBDD 1.X implementation.

It will no longer be necessary to apply the [assembly: Debuggable(true, true)] workaround to properly format scenario names in release mode. Instead, scenario methods will be marked with the LightBDD-specific [Scenario] attribute rather than a test-framework-specific attribute like [Test], [Fact] or [TestMethod].

Also, the explicit LightBDD scope ensures that summary files are always generated, whereas in version 1.X summary files were not generated if their creation took more than 3 seconds (a limitation of CriticalFinalizerObject).

Migrating LightBDD 1.X to 2.0

The upgrade to version 2 will require test code updates, however the number of changes is reduced to a minimum and will most likely cover:

  • namespaces update,
  • framework specific test method attribute update to [Scenario] attribute,
  • LightBDD configuration change from app.config to in-code configuration,
  • updates in context based scenarios.

LightBDD 2 will not be binary compatible with LightBDD 1.X.

When will LightBDD 2 be available?

The current state of the project is that all the implementation changes are finished; however, other tasks have to be done before the release, including finalizing the project layout, updating the CI pipeline, the documentation and the wiki pages.

The new release should be available in the next few weeks.


Octopus Project Builder

In my last post I wrote about my plans to create the Octopus Project Builder, a tool for configuring Octopus Deploy projects from YAML files, like Jenkins Job Builder does for Jenkins.

Since last month I have managed to make progress with this work, and I would like to share the outcome.

The Octopus Project Builder allows configuring:

  • Project Groups,
  • Projects,
  • Lifecycles,
  • Library Variable Sets (including Script Modules).

As I mentioned previously, Project definitions can be very verbose, especially in the deployment actions section. That is why OPB also allows defining templates for Projects, Deployment Steps and Deployment Step Actions. Templates can be parameterized, and it is possible to override template values when a template is used in a resource definition.

So how does it look?
Below are example YAML files with sample configuration.
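The YAML listings did not survive in this copy of the post. To give a flavour of the format (the field names below are illustrative only – the configuration manual describes the exact schema), a Lifecycle and a Library Variable Set definition could look roughly like this:

```yaml
Lifecycles:
  - Name: Default Lifecycle
    Description: Dev -> Test -> Prod promotion path

LibraryVariableSets:
  - Name: shared-settings
    Description: Variables shared across projects
    Variables:
      - Name: InstallRoot
        Value: C:\Apps
```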

The Project Group, Lifecycle and Library Variable Set definitions are self-explanatory.

The Project definition YAML is very simple, however. That is because it uses the parameterized template to install the NuGet package on target boxes.

So how does the template look?
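The original template YAML is missing here, so the sketch below only illustrates the idea with made-up field names (consult the configuration manual for the real schema): a project template with two parameters, an inner action template, and the escape sequences discussed below.

```yaml
Templates:
  DeploymentStepActions:
    - Name: install-package-action
      TemplateParameters: [pkgName, targetRoles]
      ActionType: Octopus.TentaclePackage
      Properties:
        Octopus.Action.Package.PackageId: ${pkgName}
        # A literal $ would be written as \$; \\ stands for the directory separator:
        Octopus.Action.Package.CustomInstallationDirectory: C:\\Apps\\${pkgName}

  Projects:
    - Name: install-package-project
      TemplateParameters: [pkgName, targetRoles]
      Description: Installs the ${pkgName} package
      DeploymentProcess:
        Steps:
          - Name: Install ${pkgName}
            Actions:
              # The project template passes its own parameters on to the inner template.
              - UseTemplate:
                  Name: install-package-action
                  Parameters:
                    pkgName: ${pkgName}
                    targetRoles: ${targetRoles}
```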

The project template specifies the most common properties (so they do not need to be defined in each project). It also defines two template parameters: one specifying the name of the package to install, and the other specifying on which machines the package will be installed. Further down in the template definition, the template parameters are used with the ${param_name} syntax. The template itself also uses another template to define the deployment step action; this example shows that template parameters can be passed on to inner templates.
Finally, the deployment action definitions show escape sequences.
Normally, any occurrence of ${param_name} is treated as a template parameter usage. If this behavior is not desired, the $ symbol has to be escaped with \. In this example, however, we want to compose Octopus.Action.Package.CustomInstallationDirectory from the installation directory and the package name, which is why there is a \\ representing the directory separator.

Yaml configuration description

The YAML configuration offers many more options than the ones presented in the example. The OctopusProjectBuilder project home page contains a configuration manual describing the full configuration model.

Finally, a NuGet package is available as well: OctopusProjectBuilder.Console

Feel free to take a look at it and give it a go.
Also, any feedback is welcome.

Have fun!


Over the last few months we have been redefining our CI/CD pipelines to use Octopus Deploy for deployments. Octopus is a great tool for defining and managing deployment environments and deployments. It allows a clean separation of the environment details (like the number of boxes and box names), the environment-related settings (like URLs and connection strings) and the deployment process (the steps that have to be performed) from the project executables. Moreover, Octopus offers out of the box all the tools needed to propagate packages to all target boxes and install them as Windows services or IIS applications – a great benefit, because previously we had to develop and maintain quite complicated scripts to do the same.

Over these few months of work, however, we have found one deficiency in this tool. All of the project, process, variable and environment configuration has to be done through the UI, and as in every UI, some operations are not easy to perform. Scenarios like moving variables from a project to a variable set, duplicating steps within a process or applying the same changes to multiple processes are time consuming and, over time, a bit irritating.

Jenkins Job Builder

We had a similar issue with another tool in the past, Jenkins, and we found a great solution for it: Jenkins Job Builder. JJB allows defining Jenkins jobs in a human-friendly YAML format, and the beauty of it is that:

  • it is text, so all operations like moving variables to a different scope, changing definitions, renaming etc. are as simple as text copy-paste/replace operations,
  • it can be put into a source control system, which allows to see the change history and gives an easy way of restoring previous versions,
  • it can be easily applied to other Jenkins instances (which is very handy in case of migrations and box rebuilding).

Octopus Project Builder

Inspired by Jenkins Job Builder, I decided to spend some time creating a similar tool for Octopus, the Octopus Project Builder, hosted on GitHub: https://github.com/Suremaker/OctopusProjectBuilder.

The project is at a very early stage, but I have managed to explore the Octopus API a bit with Octopus.Client and YAML serialization with YamlSerializer.

So, how does it look?

I have a Project Group with a test project:

Project Group

After I run the OPB download command:

OctopusProjectBuilder.exe -a download -d c:\temp\octo -u http://localhost:9020/api -k API-XXXXX

I got the file ProjectGroup_My group.yml with the following content:
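The listing is not preserved in this archived copy; the file contained just the group's definition, roughly along these lines (field names are illustrative):

```yaml
ProjectGroups:
  - Name: My group
    Description: Test project group
```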

…so the OPB managed to generate YAML for my project group.

Then, I edited the file with this content:
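The edited content is also missing here; in essence, it renamed the existing group and added a new one, along these lines (names and field names are illustrative):

```yaml
ProjectGroups:
  - Name: My renamed group
    Description: Test project group
  - Name: My second group
    Description: Another group
```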

and ran the upload command:

OctopusProjectBuilder.exe -a upload -d c:\temp\octo -u http://localhost:9020/api -k API-XXXXX

Finally I got my project group renamed and a new group created:

Updated Project Groups

Managing more data

Now it is time for more complicated stuff: the projects themselves. This is the current work in progress. So far, I have noticed that Octopus stores the step action definitions in a slightly different key-value format. Here is a sample of how a project may look:
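The sample listing did not survive in this copy. To illustrate the key-value format of the action properties (the surrounding field names are illustrative; the property keys match Octopus's own action property names), a project could look roughly like this:

```yaml
Projects:
  - Name: My project
    ProjectGroupName: My group
    DeploymentProcess:
      Steps:
        - Name: Deploy package
          Actions:
            - Name: Deploy package
              ActionType: Octopus.TentaclePackage
              Properties:
                Octopus.Action.Package.PackageId: MyApp.Service
                Octopus.Action.Package.AutomaticallyUpdateAppSettingsAndConnectionStrings: 'True'
```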

Future plans

Playing with Octopus and YAML is an interesting experience and I would like to explore it a bit more.
So far, I have a few thoughts on what I would like to implement here.

First of all, none of the samples have any IDs in the YAML. I want to build all the correlations based on human-friendly names. Above, I presented a scenario where a Project Group was renamed. Basically, it will be possible to specify that the current name is Y while the previous one was X. When OPB uploads definitions to Octopus, it will first look for name Y and then for X, renaming it to Y if it has not been renamed yet.

The Actions section looks a bit complicated here, with long key names that are not really user friendly, like Octopus.Action.Package.AutomaticallyUpdateAppSettingsAndConnectionStrings. I worry that they are also not that well documented, so it may be a bit difficult to find them. To overcome this problem, I would like to implement a macro/templating mechanism, a bit similar to JJB macros, that would allow defining a template of an action and then easily applying it in various projects.

Next, OPB will support multiple input files, so it will be possible to split the definitions of projects, variable sets etc. On download, it will also write each definition to a separate file.

Another thing is the representation of sensitive data. I would like to implement a feature allowing sensitive data to be kept encrypted in the YAML, with OPB decrypting it before uploading to Octopus.

Finally, I plan to support the following configuration in OPB:

  • Project Groups,
  • Projects (with process and variables),
  • Library Variable Sets.

More updates will be posted soon…


Acceptance testing service depending on Web API

Today, my new blog post has been published on tech.wonga.com.

There I describe how we acceptance test services that depend on Web API.
