Regression Dandelions Be Gone…

Do you like dandelions? I hate them. You mow or pluck a few, yet many more crop up. That happens mainly because, chances are, you left some behind, and they matured and spread their seeds. What you need to do is remove them by the roots using a weed plucker. That way you reduce the occurrence of weeds and, in turn, dandelions. Of course, this is hard 🙂

Regression issues stemming from browser-based user interface code are just like dandelions. Our code, however well written it may be, following good style, conventions, readability best practices and so on, still has the potential for such weeds to creep in. You go and fix one thing and it breaks something else in another place. This is especially true where development teams are fast paced and the work is JavaScript intensive, but strategies to check regressions at development time are either weak or missing completely. Teams are agile and there is pressure to get features in quickly. Refactors are applauded. Code is constantly shifting. And while all of this is happening, the risk of regression keeps on increasing.

As I grew into JavaScript over the years (I now work with AngularJS), I found myself dealing with this problem more often. But the more I relied on unit tests, and the more those tests focused on the end user instead of just the code, the fewer regressions I witnessed.

See, it is a very simple strategy and many have vouched for it. Say you want to release 1.1. Changes and improvements abound. You touch JavaScript code in many places. You update the unit tests in place (God be with you if you do not) to account for the new code, and they pass. Then QA starts manual, final regression testing, mainly running their awesome test suites full of use cases, and voilà! They find and log stupid UI issues. Buttons not clicking, DIV tags not populating, DOM updating wrongly, browser console errors and so on; all things that were working before you made the changes. This happens more often because, like a good engineer, you did not repeat yourself and wrote componentized code, but missed out on good refactoring, and those components are now bloated with options that work in different ways.

So what did you miss? Well, leaving the argument about simple and stupid components aside, you missed covering those new or updated use cases. Now you have broken dependencies and chain reactions therein! Bloated UI code breaking is but a symptom. You were not only supposed to update the tests, but also to make sure the tests were representative of the use cases and workflows, i.e. written mainly from the point of view of user flows. Then complete the tests and rerun the suites to seek out regressions. This is more or less Behaviour Driven Development, or BDD. This cycle has to be followed ruthlessly every time you touch the code. That way you can easily catch behaviors that broke due to code changes. Silly unit test failures should be expected, but the idea is to catch them well before integration tests or the manual QA testing cycle come into play. On top of this, you maintain good coverage numbers, making sure your unit tests hit most of the code branches. This builds confidence within the team.
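To make "representative of user flows" concrete, here is a sketch of what a behavior-focused spec can look like. The cart service below is hypothetical, and the tiny describe/it/expect stand-ins exist only so the snippet runs on its own; in a real project Jasmine (or any BDD runner) provides those globals. The point is that the spec reads like a user task, not like a checklist of internal methods.

```javascript
// Minimal stand-ins for Jasmine's globals so this sketch is self-contained.
// In a real suite, delete these and let the test runner provide them.
function describe(name, fn) { console.log(name); fn(); }
function it(name, fn) { fn(); console.log('  ✓ ' + name); }
function expect(actual) {
  return {
    toBe: function (expected) {
      if (actual !== expected) {
        throw new Error('Expected ' + expected + ' but got ' + actual);
      }
    }
  };
}

// Hypothetical unit under test: a tiny cart service.
function createCart() {
  var items = [];
  return {
    add: function (name, price) { items.push({ name: name, price: price }); },
    remove: function (name) {
      items = items.filter(function (i) { return i.name !== name; });
    },
    total: function () {
      return items.reduce(function (sum, i) { return sum + i.price; }, 0);
    }
  };
}

// The spec is phrased as a user workflow, so a regression in it maps
// directly to a broken behavior QA (or a user) would notice.
describe('Cart: user removes an item from the order', function () {
  it('updates the total the user sees', function () {
    var cart = createCart();
    cart.add('book', 10);
    cart.add('pen', 2);
    cart.remove('pen');
    expect(cart.total()).toBe(10);
  });
});
```

If a refactor of `createCart` changes how removal works, this spec fails in terms of a user-visible behavior ("the total is wrong"), not an implementation detail.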

I have paid a lot of attention to unit testing over the years, but BDD was not on my radar for the longest time. We just look at coverage numbers and tests on new code, but often get lazy about the bigger picture. The crux is that when you add features to your product, you usually add to and/or affect user behaviors. Someone has to keep an eye on that from an engineering point of view and make sure the tests reflect it, with every release. Someone has to play the role of what I term the “Night Watchman!”

So here are some practical mini-strategies that have worked well in my experience:

  1. If cost and time permits, use every development iteration as an opportunity to refactor UI code into smaller and smaller manageable chunks.
  2. Assuming you are Agile, make it a point that in every sprint a team member steps up to become the night watchman. Rotating this role helps team members, especially junior ones, become empathetic towards end users, the QA team and their own time!
  3. Bring the developers on the same page so that they write unit tests to be as representative of a user task as possible.
  4. The night watchman works with the Product Owner and/or simply references well-written QA test suites to code-review the unit tests.
  5. Enable JavaScript code coverage and maintain high but achievable and acceptable metrics on classes, functions and lines.
  6. Track regression metrics over time and determine which code is most susceptible to breaking. That code is the prime candidate for your next refactor.
  7. Tag-team and review each other’s test suites just like you would review code. Over time these tests become living artifacts.

Now, there are other theories and tools that help you follow these strategies, but I have found BDD to be the one that gets you closest to how the user thinks and performs his/her tasks. If our tests are written to complement that, we have a better chance of clamping down on regressions with successive development iterations. It also adds speed when the engineering effort in an iteration is milder. I have been using Jasmine to write BDD-style tests for my AngularJS code. Give it a try if you have not already!
