The Bug Cap

I’ve just watched the Agile at Microsoft video, which shows how the Visual Studio Team Services (VSTS) team at Microsoft adopted an Agile mindset and culture. The whole video is interesting, but one item in particular caught my eye: the Bug Cap.

The Bug Cap is a simple strategy to keep the bug count low in order to preserve software quality. In other words, if your bug count exceeds your bug cap, you stop working on new features until you’re back under the cap.

This is the formula suggested by the VSTS team to calculate the bug cap:

bug cap = number of engineers × 5

For example, for a team of 5 engineers the bug cap would be:

5 × 5 = 25

Let’s suppose that the bug count at some stage is 30 – this means that the team should stop working on new features until the bug count is back under 25.

This is a very simplistic rule, but I guess it’s a good starting point to limit the number of bugs per sprint and avoid chaos. You can also try different formulas, such as multiplying by 3 or 4 – whatever works for you 🙂
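To make the rule concrete, here is a minimal C# sketch of the check (the names and the configurable multiplier are my own – the VSTS team only suggests the value 5):

// Minimal sketch of the bug cap rule. The multiplier defaults to 5,
// as suggested by the VSTS team, but can be changed to experiment.
public static class BugCap
{
    public static int Calculate(int engineers, int multiplier = 5)
    {
        return engineers * multiplier;
    }

    // True when the team should stop feature work and fix bugs instead.
    public static bool ShouldStopFeatureWork(int bugCount, int engineers, int multiplier = 5)
    {
        return bugCount > Calculate(engineers, multiplier);
    }
}

// Example: 5 engineers and 30 open bugs -> the cap is 25, so feature work stops.
// BugCap.ShouldStopFeatureWork(30, 5) returns true.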

You can watch the whole video here (the bug cap topic starts around 00:29:01):

 


On smoke tests

So you have configured a new build for your ASP.NET application: the source code compiles without errors, no unit tests are failing, and the deployment package is generated and published as an artifact – great! Now it’s time to deploy it. Everything seems to be fine (no errors logged), but when you try to run your application you get a YSOD (Yellow Screen of Death) like this one:

[Screenshot: ASP.NET Yellow Screen of Death]

There are many things that can go wrong with a deployment, so it is important to configure your deployment pipeline to verify that the deployment itself was successful. You can do it by using smoke tests.
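A minimal example of such a smoke test, written with NUnit (the base URL and page list are placeholders – adjust them for your application; the tests in the Bamboo log further down follow the same pattern):

using System.Net.Http;
using NUnit.Framework;

[TestFixture]
public class SmokeTests
{
    // Placeholder: point this at the environment you have just deployed to.
    private const string BaseUrl = "http://myapp-uat.example.com";

    [TestCase("/Home.aspx")]
    [TestCase("/Services/Activate.aspx")]
    public void GivenAnUrl_WhenGettingPage_ShouldReturnSuccessStatusCode(string relativeUrl)
    {
        using (var client = new HttpClient())
        {
            // A YSOD comes back as a 500, so IsSuccessStatusCode will be false.
            var response = client.GetAsync(BaseUrl + relativeUrl).Result;

            Assert.IsTrue(response.IsSuccessStatusCode,
                "GET {0} returned {1}", relativeUrl, (int)response.StatusCode);
        }
    }
}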


Running tests in Bamboo after a deployment

I’ve been using Bamboo CI Server for the last few months to automate builds and deployments. I like the tool because it integrates well with Jira (both tools are from Atlassian), and it’s easy enough to configure new builds and deployments, triggers, notifications, etc.

But I realised that something important was missing: Bamboo allows you to add a test runner task to a build project, but not to a deployment project! This means that you can’t run tests after a successful deployment (smoke tests, integration tests, …), at least not without a workaround.

The trick is to configure your test runner as an executable in Bamboo. These are the steps to configure NUnit and run tests in a deployment project (the same approach should work for any other test runner):

 

1. Add a new executable for NUnit

Go to Bamboo Administration and click on “Executables” on the left panel.

[Screenshot: Bamboo Administration page]

Click on “add an executable as a server capability”.

[Screenshot: “add an executable as a server capability” link]

Add the path to NUnit Console and a label for the new executable. It is important to set the type to “Command” in order to use it in a Deployment project:

[Screenshot: add executable form]
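For reference, these are values along the lines of what I used (the path matches the log output further down; the label is just an example):

Type:  Command
Label: NUnit Console
Path:  C:\Program Files (x86)\NUnit 2.6.4\bin\nunit-console.exe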

Click on the “Add” button to save the new command.

 

2. Add a new deployment task to run the tests

You can either add a new task for the tests to an existing deployment or add a new deployment project that will only run the tests.

I decided to add a new deployment project, triggered after a successful deployment, because it makes it easier to tell whether there is a problem with the deployment itself or whether the integration tests are failing. Also, this way I am able to run the tests at any time without having to deploy the application.

Whichever you choose, add a new “Command” task to the deployment project:

[Screenshot: adding a new task to the deployment project]

In the “Executable” dropdown you should be able to find the command you configured for NUnit. Add arguments and environment variables if necessary:

[Screenshot: configuring the NUnit command task]
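In this case the arguments simply point at an NUnit project file and select a configuration, matching the command you can see in the log below (you could pass a test assembly directly instead):

integration-tests-uat.nunit --config="release"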

Save the task and run the deployment. This is an excerpt of the generated log that contains the test results:


NUnit-Console version 2.6.4.14350
Copyright (C) 2002-2012 Charlie Poole.
Copyright (C) 2002-2004 James W. Newkirk, Michael C. Two, Alexei A. Vorontsov.
Copyright (C) 2000-2002 Philip Craig.
All Rights Reserved.

Runtime Environment - 
   OS Version: Microsoft Windows NT 6.2.9200.0
  CLR Version: 2.0.50727.8009 ( Net 3.5 )

ProcessModel: Default    DomainUsage: Default
Execution Runtime: net-4.0
..F.F.F.F
Tests run: 5, Errors: 0, Failures: 4, Inconclusive: 0, Time: 6.8491962 seconds
  Not run: 0, Invalid: 0, Ignored: 0, Skipped: 0

Errors and Failures:
1) Test Failure : GivenAnUrl_WhenGettingPage_ShouldreturnSuccessStatusCode("/Home.aspx")
     Expected: True
  But was:  False

2) Test Failure : GivenAnUrl_WhenGettingPage_ShouldreturnSuccessStatusCode("/Services/Activate.aspx")
     Expected: True
  But was:  False

3) Test Failure : GivenAnUrl_WhenGettingPage_ShouldreturnSuccessStatusCode("/Administration/LostPassword.aspx")
     Expected: True
  But was:  False

4) Test Failure : GivenAnUrl_WhenGettingPage_ShouldreturnSuccessStatusCode("/Shop/Product/List.aspx")
     Expected: True
  But was:  False

Failing task since return code of [C:\Program Files (x86)\NUnit 2.6.4\bin\nunit-console.exe integration-tests-uat.nunit --config="release"] was 4 while expected 0
Finished task 'Run integration tests' with result: Failed
Finalising the build...
Stopping timer.
Build 12484609-16973828-16613398 completed.
Finished processing deployment result Deployment of 'release-16' on 'UAT - Integration Tests'

That’s it! The output is not as nicely formatted as in the build tasks, but it does the job – you can see how many tests were run and how many failed (if any).