Cucumber is NOT a testing framework!

I have been repeating this statement over and over on the Cucumber forum, but apparently with no good result. To me it is a simple statement, and I fail to see what is so difficult to grasp about it: Cucumber (JVM, Ruby, C, the flavour doesn’t matter) is not a testing framework.

I’ll try to further explain the statement, not because I believe it needs to be explained, but in the hope an explanation will clear up doubts. To do that I’ll try to state the facts in as practical a way as I can.

JUnit, TestNG and DBUnit are examples of testing frameworks: they provide facilities to put a piece of software into a certain condition, exercise some of its parts and assert that the outcome matches the expected parameters.

The goal of those frameworks is to provide the tools to achieve the result (testing the software) with the least possible effort and the maximum outcome. As such, they have programming structures for asserting that conditions are met and for setting up and tearing down the test rig, ensuring the system is always in a clean state before a new test is run.

Cucumber doesn’t have any of those facilities, simply because it is not a testing framework.

Cucumber is a specification tool, something completely different. It uses Gherkin (the language) to describe software specifications in natural language.

Does any of you remember the Use Case Template?

Well, a Cucumber feature is much more like one of those than a bunch of tests, wouldn’t you agree?
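To make the analogy concrete, here is a minimal sketch of what such a feature file might look like (the domain and the wording are mine, purely illustrative):

```gherkin
Feature: Order checkout
  As a customer
  I want to pay for the items in my cart
  So that my order is confirmed and shipped

  Scenario: Successful payment
    Given a cart containing 2 items
    When I confirm the order
    Then the payment service is invoked
    And I receive an order confirmation
```

Note how the preamble mirrors the actors-and-goal sections of the old template, while each scenario plays the role of a flow of events.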

There was once a time when we were describing software systems by collecting a few of those use case templates in a Word document, right?

Now, describing a system like that wasn’t much trouble, at least at first, but after a couple of maintenance cycles those documents were not describing the system any more: I haven’t found one single situation where such documents were maintained and kept in sync with the implementation. So the system documentation was simply obsolete after a few months of operation.

Why was that? Because the cost of maintaining such documentation was not justified by the short term benefit of doing so: why should I spend 30 minutes finding the correct place in the system documentation to record a patch if the fix itself costs me 15 minutes?

So, Cucumber steps in and tries to transform the above specifications into something valuable and alive. How?

The specification format (or structure, if you prefer) is almost free, leaving the writer room to express himself. It is even less structured than the template above, but there are good practices telling us how to maximize the result, like providing the list of actors and the general goal at the very beginning.

The file is plain text, so there is no requirement on the tool used to open and modify the document, and it carries no formatting, so readers aren’t distracted and writers aren’t dragged into endless make-it-prettier sessions.

A file represents a use case (a feature in the Gherkin language), so you end up having multiple files, each one representing a piece of the software. This greatly simplifies version management, collaboration and merging, enabling multiple writers and reviewers to work on a single system. It’s not uncommon to store the documentation in the same version control system used for the system itself.

Files can be structured in a hierarchical fashion via folder structure, so big systems with lots of features can organize their specifications.

Files can be annotated via tags to create parallel organizational structures, with the folder structure still being the prominent one: this enables additional categorizations, useful to track other associations between use cases (features), much as the introductory Use Case Diagram did in the Word document.

What causes the biggest confusion, though, is that Gherkin files can be executed.

That is what Cucumber provides: the supporting software structures and an execution environment for Gherkin files.

Why? To create a connection between the system documentation and the documented system, ensuring the documentation stays aligned with the system it describes.

How? By mapping each statement in each scenario to some lines of code in the language you like (Ruby? C++? Java? Scala?). If the code somehow verifies that the system does what it is expected to do, then the documentation is in sync with the implementation.
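As a rough illustration of that mapping, here is a sketch using plain JDK regular expressions standing in for Cucumber’s actual matching machinery (the class and method names are my own invention):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StepMatchDemo {
    // The kind of pattern Cucumber would use to route a scenario step
    private static final Pattern STEPDEF =
            Pattern.compile("I click on \"(.*)\" button");

    // Returns the captured button label, or null when the step doesn't match
    static String matchButtonStep(String step) {
        Matcher m = STEPDEF.matcher(step);
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(matchButtonStep("I click on \"OK\" button"));
    }
}
```

Cucumber does essentially this for every line of every scenario: the natural-language sentence selects a method, and any capturing group becomes a method parameter.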

Only this last part resembles a test. Only at this stage is there some code involved. And usually testing frameworks and automation libraries, like JUnit and Selenium, are used to do such verification.

So, if you need to test your system, please use a testing framework! If, instead, you want to document your system, you are welcome to use Gherkin and Cucumber!

The “Maven and Github on Windows” hell!

I’m sure this is not the first time I’ve prepared a post like this, but I may have dropped the previous ones at the last minute: this time it’s not going to happen.

Sadly, I have to use Windows at work. When I have some spare time I contribute to Open Source Software, and I have once again come face to face with the hell caused by trying to run the Maven Release Plugin on a Github hosted project. In this particular case it was a project I started myself called SmartUnit, but you can bet the same applies to any other Github hosted project using Maven and willing to perform releases on the Maven Central Repository.

Where do my problems start? Well, when I run the infamous mvn release:prepare obviously.

First, I started encountering failures due to missing commands:

  • gpg.exe must be installed and available on the PATH to be able to sign the packages for the Maven Central Repository
  • git.exe must be installed and available on the PATH to be able to commit the tag and updated pom to Github

Installing the above, though, solves only a fraction of the problem, as you also need the corresponding keys:

  • your GPG key should go into %APPDATA%\gnupg and it’s easier if it is the first key in your GPG key store
  • your SSH key should go into %USERPROFILE%\.ssh and, to avoid further complications, just name the file id_rsa

As if that wasn’t enough hassle, you have to manually start an SSH agent to provide the SSH key, so don’t forget to (commands should already be on the PATH at this stage):

  1. run Git’s bash shell with bash
  2. run the SSH agent with eval $(ssh-agent)
  3. add your SSH key with ssh-add ~/.ssh/id_rsa and input the key passphrase

If your key wasn’t already listed among your Github SSH Keys, don’t forget to add it.

Now the Maven release:prepare goal should complete successfully, but problems might arise during the release:perform one as:

  • the PGP key must be publicly verifiable, so it must be published on a public key server (you can use gpg --armor --export to obtain the key signature to submit)
  • your Sonatype nexus credentials must be in your Maven settings.xml, under a server directive using an id matching the distribution repository (it should be sonatype-nexus-staging, but it might change over time)
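As a sketch, the settings.xml fragment looks like the following (replace the credentials with your own, and double check the id against your pom’s distribution repository):

```xml
<settings>
  <servers>
    <server>
      <id>sonatype-nexus-staging</id>
      <username>your-sonatype-username</username>
      <password>your-sonatype-password</password>
    </server>
  </servers>
</settings>
```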

Do you believe that’s all? Well, it’s not! Now you need to:

  1. log into your Sonatype Nexus account
  2. search for your newly created Staging Repository
  3. select the repository and close it
  4. wait and refresh the view until it is reported as closed
  5. select the repository and release it!

Only at this point your artifacts will be available on the Maven Central Repository…

This is what I’ve now learnt twice about the release process, and what I hope to be able to retrieve here the next time I have to set this whole thing up again.

May the force be with you!

Resistor color decoder

Moving along from my previous Ohm’s law calculator, I decided to add another little feature, a resistor color decoder. I know, there are many out there already, but you know… this is mine!

This was more an exercise in SVG manipulation than anything else, but I still believe it’s something I will use in the future for my own Arduino and Spark projects.

Enjoy my Resistor color decoder!


Generalization pitfalls

Experienced developers, including me, tend to prefer generalized code over highly specialized code, but they usually love very simple and highly readable code even more, and the two don’t always pair nicely.


I’ll use Java and Cucumber to make my point clear (I hope), but what I’m going to assert is not strictly related to either of those; actually, it is not even strictly related to programming!

I smashed my face into this problem when I started using Cucumber and my team decided a single generalized method mapped to a UI button click should be sufficient: I believe many of you would agree the code below doesn’t look like a bad idea.

@When("I click on \"(.*)\" button")
public void buttonClick(String buttonLabel) {
  // UI specific code to click the button
}

I now tend to advise against such generalizations because it’s very rare that a single method can suffice: such methods are too generic to be implementable. Let me explain.

The very first step in implementing such a method is to resolve the label into some sort of reference to the UI button to click, and then click that component. Now, unless you establish some sort of very strict convention, you’ll end up with something like the following:

@When("I click on \"(.*)\" button")
public void buttonClick(String buttonLabel) {
  UIButton button = null;
  if ("OK".equals(buttonLabel))
    button = this.getOkButton();
  else if ("Cancel".equals(buttonLabel))
    button = this.getCancelButton();
  else if ("Save".equals(buttonLabel))
    button = this.getSaveButton();
  else if ("Edit".equals(buttonLabel))
    button = this.getEditButton();
  else if ("Delete".equals(buttonLabel))
    button = this.getDeleteButton();
  // else: button unknown!
}

I’m obviously simplifying the code above for the sake of readability: the very first implementation of that method was 200 lines long once we got to 20 buttons!

Please note the issue is related neither to the Cucumber capturing group nor to the string parameter, but to the use we make of it: we are de-generalizing a generalized method!

Now, wouldn’t it have been simpler and more readable to have something like the following?

@When("I click on \"OK\" button")
public void buttonOkClick() {
  UIButton button = this.getOkButton();
}

@When("I click on \"Cancel\" button")
public void buttonCancelClick() {
  UIButton button = this.getCancelButton();
}

@When("I click on \"Save\" button")
public void buttonSaveClick() {
  UIButton button = this.getSaveButton();
}

@When("I click on \"Edit\" button")
public void buttonEditClick() {
  UIButton button = this.getEditButton();
}

@When("I click on \"Delete\" button")
public void buttonDeleteClick() {
  UIButton button = this.getDeleteButton();
}

The code is longer, I agree, but it’s much clearer, a lot easier to debug and, above all, if a button mapping is missing a clear error is returned!

You might argue this case doesn’t apply to you because a strict button naming convention is well established in your project and a button is always identifiable from its label (let’s suppose you are using an HTML based UI and each button has an id in the form label-btn), so that the above can be reduced to the following:

@When("I click on \"(.*)\" button")
public void buttonClick(String buttonLabel) {
  // enforce naming convention here and prepare the buttonLabel
  UIButton button = this.getButton(buttonLabel + "-btn");
}

Lovely, isn’t it? Now a paginated data set comes in, something like the one in the picture below, and you suddenly need to assign an id of «PreviousPage-btn (note the initial angle quote) to an HTML tag!

If you modify the above code into the following one you are falling into the same pitfall, no excuse granted!

@When("I click on \"(.*)\" button")
public void buttonClick(String buttonLabel) {
  if (buttonLabel.startsWith("«") || buttonLabel.startsWith("»"))
    buttonLabel = buttonLabel.substring(1);
  UIButton button = this.getButton(buttonLabel + "-btn");
}

The code above might still look clean and neat, but it is indeed a source of problems: it’s your first shovel of dirt while digging your own grave. You should have gone with the following instead:

@When("I click on previous page button")
public void buttonPrevPageClick() {
  UIButton button = this.getButton("prevPage-btn");
}

@When("I click on next page button")
public void buttonNextPageClick() {
  UIButton button = this.getButton("nextPage-btn");
}

@When("I click on \"(.*)\" button")
public void buttonClick(String buttonLabel) {
  UIButton button = this.getButton(buttonLabel + "-btn");
}

Again, a little more code, but a lot cleaner and more readable and, above all, it doesn’t force you to type EXACTLY the same button label characters into your feature files or do some killer loops to describe a test for a stupid button!

Now, if you have read this post up to this point, you have probably fallen into this pitfall multiple times, and you probably still have doubts about what I’m saying, likely because you really love the generalization concept/method. So here is the generalized reason why method generalization is not always a good practice:

While performing step to stepdef matching Cucumber is performing a de-generalization, going from a generic string to a specific method.
By using stepdef parameters you are performing a generalization, allowing multiple strings to match the same method.

But when within the stepdef you add control logic on the parameters you are performing an additional de-generalization, trying to identify a specific match for a specific parameter value: something Cucumber already tried to do during stepdef matching!

It’s like trying to exchange gold for gold by transforming gold into silver and then back into gold, all with little or no gain and a lot of unnecessary confusion.

Let Cucumber do its work and don’t try to generalize everywhere; instead, let the natural language unleash its expressive power and use it to your own benefit.

Now, while I used Cucumber steps and stepdefs as a demonstration, this concept is applicable in many other contexts, so here comes my statement in a more generalized version:

Question yourself twice whenever you are trying to generalize something right after a de-generalization has just happened: you’ll discover you are doing the wrong thing 99% of the time.

Whenever you are certain your case falls within that remaining 1%, then you are surely doing the wrong thing: doubting yourself is your only salvation!

If you allow me the hyperbole, wouldn’t it be messy to develop by always using Java Reflection? Well, Java Reflection is the generalization above the whole Java framework: one API to rule them all!

Multi environment artifacts

Too many times I’ve seen this anti-pattern applied. So many that I’m here writing about it with the hope some of those applying it will read this post and stop doing it.

The anti-pattern I’m referring to is the one I christened Environment Aware Artifact, also known as The Production Build. If you don’t understand what I mean, it is very possible you are either working on a single environment project (very unlikely) or actually applying this anti-pattern. In both cases, please keep reading!

The principle is simple: ensure your deployables/builds are environment agnostic. What that means truly depends on your specific application, but usually it can be condensed into this simple statement: externalize your environment configuration so that it is NOT bundled within your deployable but deployed separately.

Let me try to explain the previous sentence by using a Java example, but please consider this anti-pattern does not apply to Java environments only.

Say you are developing a web application (what about an eCommerce web site?) which needs to invoke some sort of external service (let’s say it is the ePayment system). You probably have to use different URIs for the external system depending on whether you are in a test environment (you don’t want to use the real ePayment service for your tests, do you?) or in a production environment (you want to collect real money from your customers, right?).

If you are a supporter of this anti-pattern, you will create a configuration file (whether it’s a Java properties file or an XML doesn’t matter) which will end up within your WAR: by doing this you tie your deployable (the WAR file) to a specific environment (the one the configuration file refers to).

If you are asking yourself “What’s wrong with that?” then you are one hundred percent contributing to this anti-pattern: you are the reason for this article! The problem is you will have to rebuild the deployable every time you move from one environment to another, which introduces errors and issues you would never expect (you can either trust me or experience them on your own skin, your choice). On top of that, you are exposing environment related, potentially security sensitive, information (what if instead of a URI we were talking about a password?) to everybody involved in the build chain!

If you are asking yourself “How can I achieve that?” you are starting to understand it is indeed an anti-pattern, and I’m glad you are! As you can imagine, different tools, systems, platforms and languages have different ways of achieving this goal, some simpler, others more complex. In other words, you have to investigate your own stack, but in case you are using one of the platforms I use, here is some advice.

Please consider it is generally more important to have an environment agnostic deployable than a container agnostic one: more often than not, the type of environment surrounding your final artifact is either pre-determined (a corporate or project decision) or highly definable (installation instructions). If you have to choose between adding a step to your installation procedure and building multiple artifacts, the former is 99.99% preferable!

JBoss AS

Starting from version 4 of this wonderful application server you can provide system properties through the properties service: just put your environment specific configuration in a properties file and reference that file from deploy/properties-service.xml to have the entries exposed to your deployables. Starting with version 7 the configuration moved into standalone.xml or domain.xml, depending on the application server startup mode. Properties set this way are readable as system properties.
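Since the entries surface as plain system properties, the application code itself can stay environment agnostic; a minimal sketch (the property name epayment.uri and the fallback value are my own invention):

```java
public class PaymentConfig {
    // "epayment.uri" is a hypothetical key; the properties service exposes
    // whatever keys your environment specific properties file defines
    static String paymentUri() {
        return System.getProperty("epayment.uri",
                "https://test-epayment.example.com");
    }

    public static void main(String[] args) {
        System.out.println(paymentUri());
    }
}
```

The same WAR then reads a test URI in the test environment and the production URI in production, with no rebuild in between.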


Apache Tomcat

I believe this feature has always been there, for sure since version 5.5 of the web container. It is part of the context definition and documented as, guess what, environment entries. Please note this configuration is readable through JNDI.
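For instance, a Tomcat context definition could expose an environment entry like the following (the entry name and value are my own invention, adapt them to your application):

```xml
<Context>
  <!-- hypothetical entry: keeps the environment specific URI out of the WAR -->
  <Environment name="payment/serviceUri"
               value="https://test-epayment.example.com"
               type="java.lang.String"/>
</Context>
```

The application would then read it through a JNDI lookup of java:comp/env/payment/serviceUri, keeping the deployable itself environment agnostic.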

Other JavaEE containers

Most of the containers out there provide the ability to specify environment properties using their management interfaces, whether JNDI bound or available as system properties. Please refer to your container documentation.

Any JVM language in general

This is applicable to every language based on the JVM and any environment: deploy into the system JRE a library containing a configuration class which provides your configuration parameters. While this solution might be suboptimal in certain environments, it’s definitely applicable to a broader set of cases, including desktop applications. With the proper adjustments it is applicable to many other platforms/languages as well, including C#, .NET and so on.
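A minimal sketch of such a configuration class, assuming the environment specific file environment.properties is deployed alongside it (all names here are illustrative, not a prescribed API):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical configuration holder, shipped as a separate library so the
// application artifact itself stays environment agnostic
public final class EnvironmentConfig {
    private static final Properties PROPS = new Properties();

    static {
        // Load the environment specific file if present; an absent file
        // simply means every lookup falls back to its default
        try (InputStream in = EnvironmentConfig.class
                .getResourceAsStream("/environment.properties")) {
            if (in != null) PROPS.load(in);
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private EnvironmentConfig() { }

    public static String get(String key, String fallback) {
        return PROPS.getProperty(key, fallback);
    }
}
```

Each environment gets its own copy of environment.properties, while the application JARs and WARs are byte-for-byte identical everywhere.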

Smarter Eclipse quality friendly config

If you haven’t realized I have some sort of addiction to software quality, then this must be the first time you read my blog: it doesn’t matter, because you are reading it now!

Here is another of my famous (!!!) tips for a better Eclipse IDE configuration. This time I’m trying to help all those software developers out there who, willing or not, smash their faces against tests, be they unit or automated ones. If you don’t write such tests, don’t despair: this tip might still help you!

With the introduction of static imports, more and more libraries have converted or integrated their APIs with commodity static methods, with testing libraries being one of the most populated categories.

If you use JUnit, Mockito and/or any other library using static methods for building complex object structures, you certainly know the code you write should NOT look like the following:

  public void someTestMethod() {
    Mockito.when(mockedObject.someMethod(Matchers.any(String.class))).thenReturn(new Object());
    // some code here
    Assert.assertEquals(expectedValue, mockedObject.someMethod());
  }

Good modern code should look like the following instead:

  public void someTestMethod() {
    when(mockedObject.someMethod(any(String.class))).thenReturn(new Object());
    // some code here
    assertEquals(expectedValue, mockedObject.someMethod());
  }

Our favourite Eclipse IDE can be instructed to help you write those neat lines of code, actually recognizing you are using static methods and automatically adding the corresponding static imports.

This not widely known preference comes in the form of a configurable list of packages and types that are candidates for static method and attribute scanning, reachable at Window > Preferences > Java > Editor > Content Assist > Favorites (yes, not the easiest to find, I agree).

After configuring your favourite libraries in that list you can start forgetting about the Assert, Mockito and Matchers classes: I just start typing the method name and the IDE does all the boring stuff for me!

This is what I’ve configured at the moment in my STS:
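The screenshot seems to have gone missing here, but for the libraries mentioned above the Favorites entries would look along these lines (an assumption on my part, adapt to your own stack):

```
org.junit.Assert.*
org.mockito.Mockito.*
org.mockito.Matchers.*
```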


Selenium and the holy search for the lost element

If you are using Selenium WebDriver you know it is a great tool for automated testing: no doubt.
After a while, though, you’ll end up running your tests in debug mode and digging into code to find out the reason for the very common NoSuchElementException which, surprisingly, tries to communicate that the element you were looking for is missing.

Why do you end up debugging your test code? Because that exception doesn’t provide any information regarding the element you were looking for! You don’t believe me? Here is the message you get when searching for an element with the myElement CSS class:

org.openqa.selenium.NoSuchElementException: no such element
(Session info: chrome=31.0.1650.57)
(Driver info: chromedriver=2.3,platform=Windows NT 6.1 SP1 x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 29 milliseconds
For documentation on this error, please visit:
Build info: version: '2.35.0', revision: '8df0c6bedf70ff9f22c647788f9fe9c8d22210e2', time: '2013-08-17 12:46:41'
System info: 'Windows 7', os.arch: 'amd64', os.version: '6.1', java.version: '1.7.0_25'
Session ID: a31de0b94c4e6a9f2a072ec2190b9255
Driver info:
Capabilities [{platform=XP, acceptSslCerts=true, javascriptEnabled=true, browserName=chrome, chrome={chromedriverVersion=2.3}, rotatable=false, locationContextEnabled=true, version=31.0.1650.57, cssSelectorsEnabled=true, databaseEnabled=true, handlesAlerts=true, browserConnectionEnabled=false, webStorageEnabled=true, nativeEvents=true, applicationCacheEnabled=false, takesScreenshot=true}]

A lot of information, but nothing reporting which element the library didn’t find!

OK, I finally got tired and fixed this! If you are interested, I’ve implemented the fix in my open source (APL & LGPL) library SmartUnit: the feature is available starting from version 0.9.0, coming soon to the Maven Central Repo.

By using the SmartUnit wrapping driver, SharedWebDriver, you’ll get the following instead:

org.openqa.selenium.NoSuchElementException: no such element
(Session info: chrome=31.0.1650.57)
(Driver info: chromedriver=2.3,platform=Windows NT 6.1 SP1 x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 30 milliseconds
For documentation on this error, please visit:
Build info: version: '2.35.0', revision: '8df0c6bedf70ff9f22c647788f9fe9c8d22210e2', time: '2013-08-17 12:46:41'
System info: 'Windows 7', os.arch: 'amd64', os.version: '6.1', java.version: '1.7.0_25'
Session ID: b64b3a592241a4ba173235f1133e3cba
Driver info:
Find clause: By.selector: .myElement
Capabilities [{platform=XP, acceptSslCerts=true, javascriptEnabled=true, browserName=chrome, chrome={chromedriverVersion=2.3}, rotatable=false, locationContextEnabled=true, version=31.0.1650.57, cssSelectorsEnabled=true, databaseEnabled=true, handlesAlerts=true, browserConnectionEnabled=false, webStorageEnabled=true, nativeEvents=true, applicationCacheEnabled=false, takesScreenshot=true}]

If you find it useful, don’t be shy: share your appreciation and interest with a mentioning tweet, by starring the SmartUnit GitHub project and/or any other means you find appropriate!

Eclipse: annoying JSP errors

A few weeks ago I posted something about annoying Eclipse validation errors regarding minified JavaScript files. Today I’m here to solve the same issue with regard to JSP validations, specifically when those errors are due to files unrelated to the project.

I had just set up the Maven Cargo Plugin for my project and suddenly got tons of errors due to JSP files within the Tomcat 7 distribution not satisfying JSP validation rules: I hate those distracting errors/warnings, so I decided to apply the same solution I used for JavaScript files.


It’s quite easy, but it has to be done on a per project basis: right click on your project, select Properties, and in the pop up window select Validation, then click the ellipsis (…) button for the JSP Content Validator (you’ll have to repeat the same for the JSP Syntax Validator).

In the new pop up window you will have to add an Exclude Group and then add a rule for the target folder.


Run a clean build to free your project from unnecessary validation errors!

Java and self signed HTTPS certificates

I encountered this problem while trying to use the Maven Cargo Plugin to deploy a WAR onto a JBoss server whose access is restricted to HTTPS, but with a self signed certificate.

This setup creates a set of issues you need to solve for it to work, but I believe the same steps are needed in contexts other than the Cargo Plugin and JBoss.

The first part of the problem was server name verification, as the certificate used for the SSL encryption was generated for a different hostname than the one I’m using to access the server. As a consequence, I got a hostname verification error during the SSL handshake.

To fix this I decided to go for a Java agent, a library you attach to your runtime environment without changing your code. The library is available on Github courtesy of Peter Liljenberg.

Once you have the JAR you just attach it to your JRE using the option -javaagent:<PATH_TO_JAR>: for simplicity I’ve added the option to my MAVEN_OPTS as I need it while running Maven.

set MAVEN_OPTS=-javaagent:%MAVEN_REPO%\com\sparetimecoders\hostnameverifier-disabler-agent\1.0-SNAPSHOT\hostnameverifier-disabler-agent-1.0-SNAPSHOT.jar

Once this part is fixed, you’ll start getting a PKIX error, signalling that the certificate you are receiving cannot be verified against the trusted certificates list: since the certificate is self signed, this is not a surprise.

To add the certificate to the trusted certificates list a simple command is what you need:

keytool -import -alias server-name -file file.cer -keystore $JAVA_HOME/jre/lib/security/cacerts

When prompted, use changeit as the keystore password, unless you or the system admin have customized it.

After these changes I’m now able to remotely deploy my WAR artifact on my HTTPS secured JBoss AS.