Excuses for not writing automated tests

Originally posted on LinkedIn

At some point in your career you will have to deal with someone who’s totally reluctant to write or prioritise automated tests.
It may be a colleague of yours, a contributor to an open source project, a product manager; it may even be yourself!

Even if the reasons behind this resistance vary, I noticed that most of the time they stem from a common root: ignorance.

Let me clarify: I’m not stating that not writing or prioritising automated tests makes someone an ignorant;
what I am saying is that the benefits of having a solid set of automated tests make it very difficult to decide not to have them in place, therefore only someone unaware of such benefits would decide not to write tests.

So, what are these benefits?

The Benefits

Having automated tests in place doesn’t affect engineers only, it affects what engineers make and how they make it: it affects the quality of the final product, and how the whole project is organised.
Therefore, let’s list what life would look like with a solid suite of automated tests in place from the product, the project, and the engineering standpoint.

NOTE: Keep in mind this list is not intended to be (and is really far from being) exhaustive; if you have a sound example please post it in the comments, I’ll gladly incorporate it here.

Product benefits

  • A final user would notice fewer new bugs (potentially none) affecting the existing product features
  • A product owner\manager would enjoy much faster and more consistent quality assurance feedback compared to manual QA (feedback measured in days usually shrinks down to hours)

Project benefits

A project\engineering manager

  • would be able to deliver faster by unleashing the skills of multiple engineers working on the same project without blocking each other with unexpected behaviors
  • would find that handing a project/task off to another team/engineer requires less documentation and support

Engineering benefits

An engineer

  • would experience the joy of fearlessly refactoring and perfecting the existing solution
  • would easily detect stable code, making its modularisation a mere mechanical consequence

Usually most people agree with all the points above, but there’s always a “but”, a reason… an excuse I’d say… that prevents them from fully investing in automated tests.

Reasons… excuses not to write tests

“Is it tested?”.

Over the last 3 years I found myself asking this very question many times and on multiple occasions (in code reviews, pair programming sessions, during stand-ups\retrospectives, etc.) to many different people. Nonetheless, the answers I got were surprisingly similar, so I grouped the most common ones by theme (or symptom, if you prefer).

Time constraints

  • I’ll do it later on
  • We’re already late, there’s no time to write tests
  • We should ship first
  • This is just a “quick and dirty™” solution

Everybody is time-crunched, but taking shortcuts by not writing tests doesn’t save you time, quite the contrary.
Let’s start from this: unless we have reckless souls, we need to test a feature going to production.
So, if we measure the time a team takes to develop, test, debug and deploy a feature, the total time will be much lower if we reduce the time spent testing and debugging to the minimum.
Therefore, if we seed the code with automated tests as we go (not “later on”):

  • we create a safe environment that prevents bugs from sneaking into our code
  • we don’t waste human time by checking things that can be automated

In other words: the “quick and dirty™” solution is a myth. There’s just the “dirty” solution, or better, the “dirty and time consuming” one.

Unclear roles and responsibilities

  • This class/block of code didn’t have tests already
  • QA will find the issues after we’re done
  • Writing tests is the responsibility of another team (usually QA)
  • I will be the only person touching this code

If you are a one man band and you’re never going to work in a team in your life, the latter may be a good one for you, but the majority of us don’t work alone, therefore I’ll move on to the other points.

It’s quite common to find people “passing the buck” to others for something labelled as unpleasant; getting rid of the responsibility of testing to focus only on “the real code” is just one example (supporting the live infrastructure – a.k.a. on-call policies – may be another).
But what happens when someone else is responsible for our actions? We stop caring about the consequences of what we do, and this reflects on the quality (and the delivery schedule) of the final product.

Quality products are created by people who care.
Engineers should be responsible for producing quality code.
Quality code is covered by automated tests wherever possible.
The QA team should help test what cannot be tested automatically (localisations, graphics…).
QA should find nothing™.

Technical issues

  • This class\feature is impossible\too difficult to test
  • I don’t know what this class\function is supposed to do
  • Tests make the build too slow

If, like me, you’re managing or leading an engineering team, this is something we should sink our teeth into.

We should establish some practices to allow our team to spot and resolve these issues as a team. Code reviews, pair programming, brown bag sessions are just a few examples.
If no one understands what a function is supposed to do, we should expect this feedback to be given to the author of that function during a code review. Fail fast.

We should build our team’s engineering culture one practice at a time.
We should talk about practices, not policies. We should never mandate a practice unless we have no other choice, and if we have no other choice it’s probably a failure we should learn from.

Our team should have access to training material. Buy some books, some videos, send people to conferences, host some meetups.
If a class is too difficult to test we should expect an engineering team to find out why, come up with a solution, and share the knowledge within the team so as not to fall into the same issue again.

We should provide our team with a healthy CI infrastructure and development machines that allow tests to be executed easily and frequently.
If some tests make the build too slow we should expect the author to notice while running them and optimise them for a good build time. Once more, fail fast and iterate until it works.
Automated tests are code, and engineers should be responsible for producing quality code.

Wrapping up

Most of the time, deciding not to invest in test automation is simply not convenient for the business. Whether you’re an engineer, a product manager or even the business owner, you should consider establishing this practice in your life\team\company, as it pays back with a huge product quality improvement, happier customers, and happier development\product\QA teams.

I appreciate that we’ve just covered the tip of this huge iceberg-shaped topic, so I want to end with a question for you… what’s your excuse not to write tests?

Piergiorgio Niero
Director of Engineering @GSN London

Community, Javascript

What ES6 features can be used in Node with no transpiler [Part 2]

Let’s continue our investigation into which ES6 features are available today (v4.1.0) in Node.
In part 1 we covered block scoping variables and Classes; in this post we’ll cover collections.


A good way to start understanding collections in ES6 is reading this great post from Jason Orendorff.


Map

Here’s a true game changer. Map makes decoupled object-to-object association finally possible in javascript.
As stated in the reference docs, “Objects have been used as Maps historically”, but it’s really important to be clear on the difference between Map and Object.
While an Object can only use strings as keys, a Map can associate any value with any value.


Let me play the role of Captain Obvious and say it again: we can now associate objects to objects, with no string conversion involved.
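A minimal sketch of what that means in practice (the object names are mine, not from the original post):

```javascript
// With Map, the key is the object itself, not its string representation.
const alice = { name: 'Alice' };
const prefs = { theme: 'dark' };

const settings = new Map();
settings.set(alice, prefs);
console.log(settings.get(alice).theme); // 'dark'

// A plain Object coerces every key to a string, so any object key
// collapses to '[object Object]' and distinct object keys collide.
const obj = {};
obj[alice] = prefs;
console.log(Object.keys(obj)); // [ '[object Object]' ]
```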

Other builtin perks are:

  • size and clear come for free, so we don’t have to keep our own counter as a property (and then exclude it from the count itself…)
  • NaN uniqueness: even though NaN === NaN is false, we can use NaN as a unique key
  • iterators (entries, values, keys) and a handy forEach method
  • has(key), replacing the quirky hasOwnProperty check to see whether a key is present
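A quick tour of those perks (a sketch, with throwaway values):

```javascript
const m = new Map();
m.set(NaN, 'not a number');   // NaN works as a key...
console.log(m.get(NaN));      // ...even though NaN === NaN is false

m.set('a', 1).set('b', 2);    // set() is chainable
console.log(m.size);          // 3: no hand-rolled counter property needed
console.log(m.has('a'));      // true: no hasOwnProperty quirks

// entries(), keys() and values() are iterators; forEach also works
m.forEach(function (value, key) {
  console.log(key, '->', value);
});

m.clear();
console.log(m.size);          // 0
```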


Set

A Set is a collection of non-duplicate values.


The main benefit provided by Set is exactly the main purpose it was designed for: filtering out duplicate values.
I would bet (too) many times we have found ourselves writing code like the following in order not to add the same value twice to an array…

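Something along these lines, with a linear scan before every insert (a sketch with hypothetical names):

```javascript
// Avoid duplicates in an array with a linear scan before every push.
const seen = [];
function addUnique(value) {
  if (seen.indexOf(value) === -1) { // O(n) lookup on every insert
    seen.push(value);
  }
}

addUnique('a');
addUnique('b');
addUnique('a'); // ignored, already present
console.log(seen); // [ 'a', 'b' ]
```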

… or even to an object…

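For instance, abusing an object’s keys as a makeshift set (again a sketch; note it only works for values that serialise to unique strings):

```javascript
// Abusing a plain object's keys as a makeshift set.
const seen = {};
function addUnique(value) {
  if (!seen.hasOwnProperty(value)) {
    seen[value] = true;
  }
}

addUnique('a');
addUnique('a'); // ignored, key already present
console.log(Object.keys(seen)); // [ 'a' ]
```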

And we found out that, unless we put a hashing algorithm in place, or our collection allowed us to use a binary search, we ended up with a linear lookup time that may not be good enough as our collection grows.

Set solves this providing constant lookup time out of the box, and a simple API.

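With Set, the same dedupe becomes trivial (a sketch):

```javascript
const ids = new Set();
ids.add('a');
ids.add('b');
ids.add('a');              // silently ignored: 'a' is already in the set

console.log(ids.size);     // 2
console.log(ids.has('a')); // true, constant-time lookup
ids.delete('a');
console.log(ids.size);     // 1
```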

Try this little benchmark yourself; you will notice the difference as the collection grows. (Run it with: “node <filename> <elements amount> <iterations amount>”).
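A comparable benchmark sketch (hypothetical code, but following the same command-line shape described above):

```javascript
// Compare linear Array#indexOf lookups with constant-time Set#has lookups.
// Usage: node <filename> <elements amount> <iterations amount>
const n = parseInt(process.argv[2], 10) || 100000;
const iterations = parseInt(process.argv[3], 10) || 1000;

const arr = [];
const set = new Set();
for (let i = 0; i < n; i++) {
  arr.push(i);
  set.add(i);
}

console.time('array indexOf');
for (let i = 0; i < iterations; i++) arr.indexOf(n - 1); // worst case: last element
console.timeEnd('array indexOf');

console.time('set has');
for (let i = 0; i < iterations; i++) set.has(n - 1);
console.timeEnd('set has');
```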

On top of its main purpose, Set comes with several handy utility methods (similar to Map, btw…) such as the entries, keys and values iterators (keys and values are the SAME function!!!), size\clear, has to check whether the set contains a given value, and the usual forEach.

WeakMap and WeakSet

I’m grouping these up as we already covered Map and Set individually, and these two variants of those data structures are very similar to each other.

WeakMap and WeakSet don’t hold a strong reference to the objects they store, hence if the only existing reference to an object is stored in one of these collections, the GC is free to kill it and reclaim that memory as it passes.

To get a better understanding of how garbage collection interacts with them, take a look at the examples in the reference docs.


When it comes to the benefits these weak collections bring, the typical use case we stumble upon is keeping a reference to a DOM object without impeding the GC from collecting it. Even if that gives us an idea of the functionality, it doesn’t fit Node well.

An example that sits better in this context could be filtering existing collections without keeping a strong reference to each element of the original collection. Consider the following code:

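A sketch of that idea (the session objects and predicate are hypothetical):

```javascript
// Flag a subset of objects without keeping strong references to them.
const sessions = [
  { user: 'alice' },
  { user: 'bob' },
  { user: 'carol' }
];

const flagged = new WeakSet();
sessions.forEach(function (session) {
  if (session.user !== 'bob') flagged.add(session);
});

console.log(flagged.has(sessions[0])); // true
console.log(flagged.has(sessions[1])); // false

// If a session is removed from `sessions` and no other strong reference
// exists, the GC can reclaim it even though it was added to `flagged`.
```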

What next?

Part 3 will come soon! Watch this space 😉

Community, Javascript

What ES6 features can be used in Node with no transpiler [Part 1]

As soon as I started attending the great London JS community meetups I immediately noticed mixed feelings around ES6.
It seems people are definitely looking forward to it becoming mainstream and are hungry for information about it, but at the same time they tend to steer clear of it in production.
Skepticism? Lack of success stories? Maybe…
At the time of writing ES6 is not fully supported everywhere, and our code needs to be transpiled to older versions of Javascript in order to run properly on all targets.
Hence, where ES6 is not supported, the code we wrote is NOT what gets executed, and all the cool new features provided by ES6 become mere syntactical sugar that comes at a potentially high cost:

  • bugs\incompatibilities that are difficult to debug (as, again, the code we wrote is not the code that is executed)
  • performance issues that are difficult to overcome (as a new performance enhancement can be transformed into a not-so-performant piece of “old” javascript)

I can imagine some pioneers’ smirk right now. Yes, we can run our node app using the ES6 staged features, or even the features currently under development, but is it really a choice? Are we confident enough to run our production app on a beta (or even alpha, depending on the maturity of the single feature we plan to toggle)?

So, let’s consider the bright side for a minute…
Last time I checked the Node docs I read that some ES6 features are ALREADY supported, with NO RUNTIME FLAG REQUIRED.
So, I won’t go for yet another post on how to fake… ehm… “transpile” ES6 in your Node project, nor will I suggest enabling all the WIP features.
I’d rather check what is possible to use AS OF TODAY in Node (version 4.1.0), and identify the benefits it comes with.

Our trip is going to be long, so let’s start from listing what is available from the menu:

  • Block scoping
    • let
    • const
  • Classes
  • Collections
    • Map
    • WeakMap
    • Set
    • WeakSet
  • Typed arrays
  • Generators
  • Binary and Octal literals
  • Object Literal extensions
  • Promises
  • New String methods
  • Symbols
  • Template strings
  • Arrow functions

Block scoping

let & const

These new features are real game changers (even if you are a “var” aficionado).
While the memory allocated by var lives within the function scope, the memory allocated by let and const lives within the code block.


Syntax sugar aside, much more granular memory management is VERY welcome.
Before ES6, each time a var was declared, a memory location was held hostage for the entire function scope.
Check the examples in the reference docs, but also consider this example using files, and repeat with me: “yes, my app won’t be memory hungry anymore”.


Classes

As the docs read, Classes “are syntactical sugar over JavaScript’s existing prototype-based inheritance. The class syntax is not introducing a new object-oriented inheritance model to JavaScript.”


Syntax sugar, no memory management nor performance benefits.
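A minimal example of the syntax, with a peek at the prototypes underneath (the class names are mine):

```javascript
class Rectangle {
  constructor(w, h) {
    this.w = w;
    this.h = h;
  }
  area() {
    return this.w * this.h;
  }
}

class Square extends Rectangle {
  constructor(side) {
    super(side, side);
  }
}

const sq = new Square(3);
console.log(sq.area()); // 9

// Still prototype-based under the hood:
console.log(typeof Rectangle);                            // 'function'
console.log(Object.getPrototypeOf(Square) === Rectangle); // true
```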

Where are the others?!

Read Part 2 to get some insight about ES6 collections!

Community, Haxe

WWX 2014 follow up – thoughts on haxe

I just came back from the World Wide haXe conference held in Paris this weekend, and I want to share my thoughts.

I’ve evaluated haxe as a viable option for my company’s daily development several times in the past, as I really think the language is great and the technology has huge potential.
What always made me choose a different solution was the lack of tools, the focus on the gaming industry, and the lack of documentation.
So I left for Paris with a defined set of questions and got a range of answers, from very disappointing to mind blowing.

Let’s start with the good news, and let’s talk about

Case histories

What to say… I’ve been mind blown: two huge success stories, and both OUT of the gaming industry.


Documentation

The haxe foundation (finally) decided to focus on documentation and rolled out


Tooling

I was VERY insistent throughout the whole conference about tooling: I asked almost all the speakers about the tools they use in their daily development.
Coming from 5 years across IntelliJ, Visual Studio, Xcode and Eclipse, I feel the need for major IDE support and a working automated build pipeline for my technology of choice.

I was quite disappointed by the choice of the haxe foundation to focus its effort on sponsoring yet another haxe-only IDE, namely HIDE, instead of focusing on support for a major IDE.
I was even more disappointed when more and more speakers and attendees trivialised what I felt was a major issue, pointing out that Sublime Text is simply “good enough”.
Luckily, business stories are choosing a different approach: both TiVo and Prezi showed they’re using IntelliJ for development, and the guys from TiVo said there’s still quite a lot of work to be done on that plugin; for instance, there’s no unit test runner.
Tools are far from the level of other languages, but luckily all of them are open source and can be improved by the community, and at the time of writing it is possible to do what a mature language environment should allow:

Even if the road is still uphill we can spot the top getting nearer.

Other news

More news were presented, briefly:

  • there’s a brand new python target
  • it’s now possible to script Unity from haxe, thanx
  • there’s plenty of new libs and macros
  • an interesting “twisted” point of view of “haxe as compiler target” bubbled up from the community in the person of
  • new version of OpenFL (even though it wasn’t presented at the conference it’s worth mentioning) 
  • NME is still alive! I thought NME had become OpenFL, but they’re just two different projects, and now the haxe foundation is trying to merge the efforts of the two teams
  • Date and String encoding issues are the next things to fix on Haxe’s TODO list for the upcoming year
  • short lambdas!!! (no, just kidding)

here’s the link to the slides of the keynote


I had a good time in Paris, got answers to all of my questions, and came back thinking Haxe is a viable option for a business: even if the tools are still not quite there, the power of the technology is worth giving it a shot.
If I had to set a TODO list for haxe it would look like this:

  • fix language problems on String encoding and Date object
  • focus on getting BIG PLAYERS involved, hire a CEO, move to the US. Exit the startupper garage and think big.
  • focus on tools:
    • intellij plugin experience should be near to java\as3
    • maven mojo is still behind
    • focus on conversion tools from * to Haxe (TiVo moved to haxe also because most of the “dirty job” was automated)
  • focus on community: encourage the creation of user groups in key locations

Last but not least, once again I was very impressed by the vibrant community that drives this technology. Thank you all for your effort to make this technology grow better and better, and a special thanx to the whole crew of SilexLabs for organising the event.


Writing and Debugging a Maven Mojo

I set up my environment to develop a mojo and be able to debug it in intellij (version 13 at the time of writing).

Here’s a very lean list of the things you need to do the job:

  1. create the mojo project from the org.apache.maven.archetypes:maven-archetype-mojo archetype
  2. create your mojo entry point: it’s just a java class extending AbstractMojo and overriding the execute() method
  3. comment your mojo class with a javadoc “@goal whatever” and “@phase whateverphase”. Those apparently useless comments are actually used to determine which goal and phase are associated with your mojo. You can omit @phase and determine it later, but @goal (AFAIK) is mandatory
  4. create a test project that uses your brand new mojo as a plugin, specifying an execution and a goal to be invoked (the same goal you wrote in the comment at step 3). At this point you can specify the phase you want your mojo to be executed in, if you didn’t specify it in your comment.
  5. mvn install your mojo
  6. mvnDebug install (or specify the phase you need) your test project. mvnDebug will listen on port 8000 of localhost for incoming debug connections.
  7. in intellij, create a “Remote” run\debug configuration and change the settings as follows:
    1. transport: socket
    2. debugger mode: attach
    3. host: localhost
    4. port: 8000
  8. set a breakpoint on execute() and start debugging your mojo!


Good to know:

  1. you can map a (private) variable of your mojo to a configuration parameter in your pom: just decorate your private variable with a javadoc comment such as @parameter alias="whateverParam" (this will map <configuration><whateverParam> to the private String(?) decorated with that comment)
  2. you can use whatever class to manage the configuration parameters of your mojo: just create a class and make it implement The properties of the class are resolved and populated by reflection (very useful if you need nested objects as configuration!!!)
  3. more to come… 🙂

Services, Tools, Tutorials

Twitter streaming APIs returning unauthorized message to cURL

Twitter recently published the new 1.1 Streaming APIs, discontinuing the previous ones. With these new APIs the authentication method changed and the user:pass pair is no longer enough; now you need to:

  1. Create your app at
  2. Create the access token from your app’s details page (refresh after a while: it takes a few seconds and the page doesn’t autorefresh)
  3. Go to the OAuth tool tab and insert the URI you want to query, the query parameters and the request type, then click the button to generate the OAuth signature for your request.
  4. The page now shows the cURL command; copy and paste it into your terminal and you’re done