Strengthen your Test Baseline process to counter high-density code change

Everything about Change & Digital
Dec 31, 2021

To support the demand for code changes even during the late stages of testing

UAT Testing with instructions on the whiteboard

The essential tenets for an effective DevOps pipeline are:

  1. Changes between the various testing environments are immutable — no alterations are introduced as testing progresses through Unit Testing, SIT, and UAT before production.
  2. Changes made at the development stage must be traceable between environments so that different stakeholders and testers see the same features and functions. Baselining features and functions includes applying security updates and fixes consistently. Technical integrity must be maintained and protected through every round of testing, with updates and changes tracked in an artifact repository such as Nexus Repository or JFrog Artifactory. Binaries may only be uploaded or changed via the pipeline, for auditability.
  3. Changes to artifacts in a pipeline should only be made via the source code repository. A developer must check code into a repository such as GitHub or Atlassian’s Bitbucket.

DevOps’ underpinning principle should be complete transparency and traceability, with a controlled change entry point for every artifact!
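
As a rough illustration of these tenets, the sketch below compares an artifact’s SHA-256 digest recorded by the build pipeline against the binary being promoted into the next environment, so any change made outside the pipeline is caught. The manifest file, artifact name, and function names are hypothetical and not tied to any specific tool.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest written by the build pipeline, e.g.
# {"artifact": "orders-service-1.4.2.jar", "sha256": "ab12..."}
MANIFEST_PATH = Path("build-manifest.json")


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a binary artifact."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_promotion(artifact: Path) -> None:
    """Fail the promotion step if the binary differs from the one
    the pipeline originally built and recorded."""
    manifest = json.loads(MANIFEST_PATH.read_text())
    actual = sha256_of(artifact)
    if actual != manifest["sha256"]:
        raise SystemExit(
            f"Artifact {artifact.name} was altered outside the pipeline: "
            f"expected {manifest['sha256']}, got {actual}"
        )
    print(f"{artifact.name} verified: identical binary promoted to the next environment")


if __name__ == "__main__":
    verify_promotion(Path("orders-service-1.4.2.jar"))
```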

Scenario 1: Building a Brand New Application

The task at hand is for the team to develop an application from the ground up. Pure ground-up development is rare today; most of the time, the team will leverage various open-source or paid packages to lay the foundations of the system. The base code will also include reusing code from existing systems within the enterprise to maintain compliance.

Development of technical and business functions by the development team

The characteristics of such a project are:

  1. Rapid changes create a high-technical-debt environment throughout the project, as changes are constantly made to both technical and business requirements.
  2. The proportion of custom code versus framework code is higher; it takes time for the codebase to mature and be refactored back into the framework.
  3. There is massive demand for integration work, where adopting many new packages drives changes across all artifacts.

Scenario 2: Working with external or community testers

There are times when a system is used by many external partners. In a business world where ecosystem collaboration is essential, an organization’s functionality is no longer used exclusively in-house. Instead, it is shared to generate new revenue streams through the network effect, which many digital natives such as Uber have relied on for growth.

The typical behaviors observed in such collaborations are:

  1. Integration is carried out via APIs, so partners are not concerned with the underlying code changes. At the same time, partners are building their own code to consume the APIs (see the contract-test sketch after this list).
  2. Partners consume various services from one another via a user interface, with test scripts tailored and built by each partner.
  3. Aligning the testing window between partners is difficult because each partner delivers the relevant features on a different schedule.
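
One lightweight way to let partners keep testing while the code base is still moving is to baseline the API contract rather than the implementation. The sketch below is a minimal pytest example against a hypothetical endpoint; the URL and response fields are placeholders, not a real partner API.

```python
import requests

# Hypothetical partner-facing endpoint; replace with your own API.
BASE_URL = "https://api.example.com/v1"

# The baselined contract: fields partners rely on, regardless of how
# the underlying implementation changes between code drops.
ORDER_CONTRACT = {"id": str, "status": str, "total_amount": (int, float)}


def test_order_response_matches_baseline_contract():
    response = requests.get(f"{BASE_URL}/orders/12345", timeout=10)
    assert response.status_code == 200

    body = response.json()
    for field, expected_type in ORDER_CONTRACT.items():
        # The contract only asserts presence and type, so internal
        # refactoring can continue without breaking partner tests.
        assert field in body, f"missing contract field: {field}"
        assert isinstance(body[field], expected_type), f"wrong type for {field}"
```

Because the test asserts only field presence and types, internal refactoring and further code drops can continue without invalidating the partner-facing baseline.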

Scenario 3: Refactoring code changes with Framework development and enhancement

As the codebase matures, the source code needs to be reviewed and refactored. One of the key activities is to identify common code and functionality and rebuild them into the framework. The goal is to improve code maintainability and reusability and to increase the velocity of feature releases. Over time, code quality and consistency improve as senior developers continuously tighten the code and framework, allowing less experienced developers to build functionality on top (a small refactoring sketch follows the list below).

Typically this is done after:

  1. Focusing on speed of delivery for an initial MVP or pilot launch. It is safer to refactor early, when there is less code and a framework change has a smaller footprint than changes to business functionality.
  2. Updating the technical architecture or re-platforming the underlying infrastructure, which forces changes because previously used libraries and functionality may be deprecated.
  3. Enhancing performance and addressing security vulnerabilities, scoped within the framework changes where possible.
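
As a toy illustration of pulling common functionality back into the framework (all names here are invented for the example), duplicated validation logic can be extracted into a shared helper so each feature shrinks to a thin layer on top:

```python
# Before: each feature module repeats the same input checks.
# After: the common logic lives in one framework helper that every
# feature imports, so fixes and security hardening happen in one place.

import re

EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def validate_customer(payload: dict) -> list[str]:
    """Framework-level validation reused by onboarding, billing, etc."""
    errors = []
    if not payload.get("name"):
        errors.append("name is required")
    if not EMAIL_PATTERN.match(payload.get("email", "")):
        errors.append("email is invalid")
    return errors


# Feature code now becomes a thin layer on top of the framework helper.
def register_customer(payload: dict) -> str:
    errors = validate_customer(payload)
    if errors:
        raise ValueError("; ".join(errors))
    return f"registered {payload['name']}"
```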

Key challenges:

  1. Each code change “touches” and impacts many source files and functions.
  2. A code freeze is not possible until the end of UAT.
  3. There is no way to synchronize partner testing while developers are still making changes to the code base.
  4. Enforcing purely incremental code changes, where previously tested and validated code must not be touched, is too rigid.
  5. Interface design within the system is still changing, as code drops are delivered as binaries or single source files rather than as a code bundle.

Test Baseline + Code Coverage = Detecting Functionality and NOT Code

Robust Test Baseline

Test-driven development (TDD) depends on converting software requirements into high-quality test cases before the software is fully developed. All software development is then tracked by repeatedly testing the software against these test cases. The test suite should be robust and fast enough to be executed whenever needed, by anyone.

Software projects need to pass functional and non-functional testing to meet requirements. A test baseline records the functional metrics of a software application as it undergoes testing. Whenever the code base is updated, it is run through a set of predefined use cases, the results are compared with the previous test metrics, and the metrics from every run are documented for future comparison. The overall objective of baseline testing is to maintain a software application’s consistent quality and functionality.
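
A minimal sketch of this record-and-compare loop, assuming the baseline metrics are kept in a simple JSON file, might look like the following; the file name, metric names, and tolerance are illustrative only.

```python
import json
from pathlib import Path

BASELINE_FILE = Path("test_baseline.json")  # hypothetical location


def compare_to_baseline(current: dict, tolerance: float = 0.05) -> list[str]:
    """Return a list of regressions versus the recorded baseline metrics.
    Assumes higher is better for every metric."""
    if not BASELINE_FILE.exists():
        # First run: record the baseline for future comparisons.
        BASELINE_FILE.write_text(json.dumps(current, indent=2))
        return []

    baseline = json.loads(BASELINE_FILE.read_text())
    regressions = []
    for metric, baseline_value in baseline.items():
        current_value = current.get(metric)
        if current_value is None:
            regressions.append(f"{metric}: missing from current run")
        elif current_value < baseline_value * (1 - tolerance):
            regressions.append(
                f"{metric}: {current_value} fell below baseline {baseline_value}"
            )
    return regressions


if __name__ == "__main__":
    # Example metrics produced by the latest test run (illustrative values).
    latest = {"tests_passed": 182, "pass_rate": 0.97, "requests_per_second": 450}
    problems = compare_to_baseline(latest)
    if problems:
        raise SystemExit("Baseline regression detected:\n" + "\n".join(problems))
    print("Run matches or improves on the recorded baseline.")
```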

A functional testing baseline should cover the key functionalities across the various testing types:

  1. Functional testing — black box, unit, integration, system, regression, and smoke testing
  2. Non-functional testing — stability, scalability, reliability, load, and stress testing
  3. Performance testing — meeting quality standards and SLAs

Code Coverage

Code coverage measures how much of the code is exercised by executing your baseline tests. Its results show which of the source code’s statements were executed by the test run and which were not. A typical code coverage tool collects information about the running program and generates a report that maps the test suite’s coverage back to the source code.

Code coverage is key to forming a feedback loop between the development and testing processes. Running the tests while capturing code coverage highlights the parts of the code that may not be adequately tested and require additional tests. This loop continues until coverage meets a specified target.
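
For example, with coverage.py and pytest this feedback loop can be wired into the pipeline as a hard gate; the 80% threshold below is an assumed target for illustration, to be tuned to your own strategy.

```python
import subprocess
import sys

COVERAGE_TARGET = 80  # assumed percentage target; tune to your own strategy


def run_tests_with_coverage() -> int:
    """Run the baseline test suite under coverage and fail the pipeline
    step if coverage drops below the agreed target."""
    # Run pytest under coverage measurement...
    subprocess.run(["coverage", "run", "-m", "pytest"], check=True)
    # ...then report, returning a non-zero exit code below the target.
    result = subprocess.run(
        ["coverage", "report", f"--fail-under={COVERAGE_TARGET}"]
    )
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_tests_with_coverage())
```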

In Conclusion

Much depends on how you manage your deployment and environments. A single artifact will contain all of the changes, but you also collect the output from your testing and match it against the baseline. The key is not the code but the test baseline.

In other words, do not focus on the code; it is the test baseline that matters, and much depends on how that baseline is defined. When the change velocity is high, one option is to use UAT as the baselining environment for integrated and user testing, although that decision and design will be governed largely by how your test strategy is implemented.

Many of these discussions can go either way, and unless everyone follows the same guideline, it is hard to say who is right or wrong. What matters is that the test strategy is treated as critical and covers both the functional and security aspects, so that no bad work is released. That is what is important.

