Wednesday, January 16, 2013

My Thoughts on Automated Testing

I recently attended an Automated Test Driven Development (ATDD) course led by Jeff "Cheezy" Morgan.

I started using his Ruby-based testing code to build an automated test suite for the Scala-based application I was developing.  The results have been amazing.

I've been part of many software development efforts and seen the good, the bad and the ugly.

This is the good.  Actually, it's great stuff.  It rocks!

So, in this article, I'll present my current thoughts on Automated Testing and hopefully get others thinking about it.

Benefits of Automated Testing

(1) AT Saves Time
(2) AT Improves Accuracy
(3) AT Makes Continuous Integration Feasible
(4) AT Improves Application Code Quality

(1) Automated Testing (AT) saves time, compared to manual testing.  Once the test code is written and running, the time it takes to re-run it is minuscule compared to repeating the manual test cycle.

(2) AT is "automated".  Assuming the tests are properly written, you don't need to worry about accuracy or reliability.  The same cannot be said for manual testing.

(3) Continuous integration (CI) is the software engineering practice of merging all developer workspaces into a shared mainline several times a day; its main aim is to prevent integration problems.  Build servers automatically run the tests on a schedule.  A software shop can configure its CI infrastructure as needed, but typically unit tests run after every commit, integration tests run periodically, combining several developers' code commits, and regression tests run less frequently because of the time they take.

When you are able to run your AT framework and verify that the code that was just deployed to your development environment runs properly, you can then more confidently move it to the QA environment and then on to production.

(4) In addition to automated unit tests, organisations using CI typically use a build server to implement continuous processes of applying quality control in general — small pieces of effort, applied frequently. In addition to running the unit and integration tests, such processes run additional static and dynamic tests, measure and profile performance, extract and format documentation from the source code and facilitate manual QA processes. This continuous application of quality control aims to improve the quality of software, and to reduce the time taken to deliver it, by replacing the traditional practice of applying quality control after completing all development. This is very similar to the original idea of integrating more frequently to make integration easier, only applied to QA processes.

Choose the Right Test Framework / Technologies

(1) Built upon a dynamic language
(2) Built upon a language that makes code re-use easy
(3) Abstracts low level communication and browser/dom manipulation

(1) Ruby is an example of a dynamic language: one that does not require an extra compilation step to see the results of code changes, which provides rapid feedback.

(2) Ruby is object oriented and has features, e.g., mixins, that make writing re-usable code "easy".
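As a minimal sketch of what a mixin buys you (the class and method names here are invented for illustration), two unrelated page classes can share behavior without inheritance:

```ruby
# A mixin: shared behavior packaged as a module.
module Loggable
  def log(message)
    "[#{self.class.name}] #{message}"
  end
end

# Two unrelated classes re-use the same code by including the module.
class LoginPage
  include Loggable
end

class CheckoutPage
  include Loggable
end

puts LoginPage.new.log("opened")     # => [LoginPage] opened
puts CheckoutPage.new.log("opened")  # => [CheckoutPage] opened
```

In test suites this is handy for sharing helpers (waiting, logging, navigation) across many page objects.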

(3) This is a tricky one.  You should choose a test framework depending on what you want to accomplish.  For example, if you want to create Acceptance Tests, where the Business Owner needs to communicate requirements and verify that the code delivered at the end of the Sprint meets all of them, then Cucumber is an excellent choice.  However, if you need to write Regression Tests, e.g., for code that has already been delivered, then RSpec is the more appropriate choice.
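What makes Cucumber work for acceptance testing is that a scenario reads like plain English, so a Business Owner can review it directly.  A hypothetical feature might look like:

```gherkin
Feature: Customer login
  Scenario: Successful login
    Given a registered customer "pat@example.com"
    When the customer logs in with a valid password
    Then the account dashboard is displayed
```

Each Given/When/Then line is then backed by a Ruby step definition, so the readable spec and the executable test stay in sync.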

Get the Biz Owner, Developer and Tester on Same Page

(1) Use the Agile Development Process when possible
(2) Get the Biz Owner, Developer and Tester working together

(1)  Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, and a time-boxed iterative approach, and encourages rapid and flexible response to change. It is a conceptual framework that promotes frequent interaction throughout the development cycle.

(2) The main reason for project slippage is poor communication.  When you get the Business Owner, Developer and Tester together, speaking the same language, using the same terms and truly understanding each other, you greatly reduce the risk of failing to deliver the expected results.

Use the Right Development Process

(1) Use the Acceptance Test Driven Development Process
(2) Developer writes comprehensive Unit Tests 

(1) Acceptance Test Driven Development is the way to go.  

Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: first the developer writes an (initially failing) automated test case that defines a desired improvement or new function, then produces the minimum amount of code to pass that test and finally refactors the new code to acceptable standards. 

In TDD, each new feature begins with writing a test. This test must inevitably fail because it is written before the feature has been implemented. (If it does not fail, then either the proposed “new” feature already exists or the test is defective.) To write a test, the developer must clearly understand the feature's specification and requirements. The developer can accomplish this through user stories that cover the requirements and exception conditions. This could also imply a variant, or modification of an existing test. This is a differentiating feature of test-driven development versus writing unit tests after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but important difference.
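The cycle can be sketched with Ruby's built-in Minitest (the `Discount` class and its 10%-off rule are made up for illustration).  The test is written first and fails; then just enough code is added to make it pass:

```ruby
require "minitest/autorun"

# Step 2: the minimum code needed to make the test pass.
class Discount
  def apply(price)
    price * 0.9  # flat 10% off -- just enough to satisfy the test
  end
end

# Step 1: the test, written first; it fails until Discount#apply exists.
class DiscountTest < Minitest::Test
  def test_ten_percent_off
    assert_in_delta 90.0, Discount.new.apply(100.0)
  end
end
```

Once the test is green, the refactoring step can clean up the implementation with the test acting as a safety net.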

TDD can lead to more modularized, flexible, and extensible code. This effect often comes about because the methodology requires that the developers think of the software in terms of small units that can be written and tested independently and integrated together later. This leads to smaller, more focused classes, looser coupling, and cleaner interfaces. The use of the mock object design pattern also contributes to the overall modularization of the code because this pattern requires that the code be written so that modules can be switched easily between mock versions for unit testing and "real" versions for deployment.

(2) It is in the best interest of the developer to write comprehensive unit tests.  Not only do they help focus the developer on exactly what to code, but once the code is complete and delivered, those unit tests prevent other developers, who may not be fully aware of the functionality, from inadvertently breaking it.

Write the Right Tests the Right Way

(1) Validate Interfaces
(2) Validate Core Business Functionality
(3) Write the Proper Amount of Test Code

(1) Every system I've worked on has inputs and outputs.  Most have interfaces to external data sources.  Those are important interfaces on which to focus testing efforts.  If a system is not getting good data, it cannot be expected to behave properly.
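A guard at the boundary can be as simple as validating each record from an external feed before the application consumes it.  A sketch, assuming a hypothetical JSON price feed (the field names are invented):

```ruby
require "json"

# Validate one record from a hypothetical external price feed.
# Returns the parsed record, or raises with a descriptive message.
def validate_quote(raw_json)
  quote = JSON.parse(raw_json)
  raise ArgumentError, "missing symbol" unless quote["symbol"].is_a?(String)
  raise ArgumentError, "bad price" unless quote["price"].is_a?(Numeric) && quote["price"] > 0
  quote
end

good = validate_quote('{"symbol": "ACME", "price": 12.5}')
puts good["symbol"]  # => ACME

begin
  validate_quote('{"symbol": "ACME", "price": -1}')
rescue ArgumentError => e
  puts e.message     # => bad price
end
```

Tests against such a validator exercise exactly the interface the system depends on: they document what "good data" means and fail loudly when the feed changes.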

(2) Test code is just that:  Code.  Writing and maintaining code comes with its intrinsic costs and should be architected, designed and implemented with care.   

(3) The more test code you write, the more code there is to maintain and the more work you'll have to do when your real application code needs to change.  So, it's best to focus on testing core business functionality.  When you start writing a lot of regression test code you should ask yourself, "How much value does this code provide?"  Always ask yourself, "Will this test/code help me sell more product/services?"

Costs of Not Having Proper Automated Testing

(1) Time
(2) Quality

(1) I've been part of software development teams that performed manual testing.  That worked okay for delivering the code the first time, but the testing was often not repeated when the code was refactored or requirements changed.  When it was repeated, it took about the same amount of time the second, third and fourth time as it did the first.

I've been part of software development teams that had sub-teams of developers that worked independently and did not attempt to integrate with other sub-teams' code that interfaced with their own until very late in the development process.  When that effort began, work on the core application code had to stop, because it was impossible to integrate different code modules while they were in flux.

Moving integration to the time of code delivery shifts that integration effort much earlier in the development process.

A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually, and that more than a third of this cost could be avoided if better software testing were performed.

Typically, the earlier a defect is found, the cheaper it is to fix it.

(2) The better architected an application is, the more modular its design will be, and the better thought out its interfaces will be.  All of this helps tremendously when working in a team development environment.

AT helps with communicating application requirements and validating that the software delivered meets those acceptance requirements.

Show me a software development organization that has quality Automated Testing and I'll show you quality software product.

The following few sections are included for clarity.  I discussed topics from each above, but I include them here in more detail:

Software Testing

Software testing can be stated as the process of validating and verifying that a computer program/application/product:

  • meets the requirements that guided its design and development,
  • works as expected,
  • can be implemented with the same characteristics,
  • and satisfies the needs of stakeholders.

Testing Levels

(1) Unit Testing
(2) Integration Testing
(3) System Testing
(4) Acceptance Testing

(1) Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.

These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to assure that the building blocks the software uses work independently of each other.
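One function with several tests covering its corner cases might look like this in Minitest (the `word_count` helper is invented for illustration):

```ruby
require "minitest/autorun"

# A small function under test: counts whitespace-separated words.
def word_count(text)
  text.split.length
end

class WordCountTest < Minitest::Test
  def test_typical_sentence
    assert_equal 3, word_count("one two three")
  end

  def test_corner_case_empty_string
    assert_equal 0, word_count("")
  end

  def test_corner_case_extra_whitespace
    assert_equal 2, word_count("  hello   world  ")
  end
end
```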

(2) Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be localised more quickly and fixed.

Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.

(3) System testing exercises a completely integrated system to verify that it meets its requirements, including the application's interfaces.

(4) Finally, the system is delivered to the user for Acceptance Testing.

Functional vs non-functional testing

Functional testing refers to activities that verify a specific action or function of the code.  These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."

Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance characteristics, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.


The more software engineering projects and teams I have been exposed to, the more I have come to appreciate Automated Testing.

The good news is that it usually does not matter what software stack was used to develop the web application; once you have a solid AT platform, you can re-use it for many application development efforts.

As with anything in life, you get what you pay for.  If you invest properly in your AT framework, you will save time and money and have a higher quality product.




  1. Well said, Lex! I commend you for accurately drawing up this list on how projects will benefit from automated testing. I wouldn’t reiterate, as I couldn’t have said it any better, but I do hope that many companies would consider how test automation could improve their business’ functionality and productivity. Thanks for sharing your thoughts!

    Matt Wynan @ Innovative Defense Technologies

  2. Thank you for the kind comments, Matt :-)