There are many types of changes that a business or a project can go through. This article focuses on a narrow area: the “small” change request that a product owner requests during development.

Business analysts often find themselves in this situation: the scope of a feature is clearly defined and agreed upon with the product owner, but during the development phase, the product owner wants a “small thing” changed.

From their perspective, such a small change should not affect the feature much, nor should it need much analysis effort. But we, the BAs, know that the impact of a change request is usually not as light as the POs expect.

I have been in this situation many times, and I sometimes missed important aspects that surfaced only late in the process. To avoid repeating the same mistakes, I searched for lists of steps to consider when analyzing a change request, but none that I found covered the steps I had been missing. So I decided to take the information I found and enrich it with my own experience. That is how this article came to be.

My main inspiration was Joe the IT guy, who described some steps to consider when analyzing the risk of a change and applied a scoring system to evaluate that risk. I adjusted those steps and the scoring system to fit our needs; the result is described below.

Aspects of a Change Request

There are some aspects that drive the analysis paths:

  1. How many teams within the project are involved in implementing the change?
  2. Has this type of change been applied before, and with what degree of success?
  3. How much development and testing effort would be needed?
  4. Delivery impact:
    1. When should this change be applied: before having the feature live or after?
    2. If we are delivering the change after the feature goes live, what would be the impact on the existing data?
  5. If the change fails, what are the possibilities and risks of a revert? 

We are going to take a deeper look at each of these questions to explain them and to identify possible answers. Along the way, I’ll apply a points system to arrive at an overall risk score.

The Scoring System

The scoring system starts at 2 for the best possible scenario and answer. It progresses in increments of 2 up to 10 for the worst scenario. The scores for each question are totaled to arrive at a final ‘total risk factor’ score.
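
The scheme above can be sketched in code. This is a minimal illustration; the names are my own invention, not part of the original article:

```python
# Valid per-question scores: 2 (best scenario) up to 10 (worst), in steps of 2.
VALID_SCORES = {2, 4, 6, 8, 10}

def total_risk_factor(scores):
    """Sum the per-question scores into the final 'total risk factor'."""
    for s in scores:
        if s not in VALID_SCORES:
            raise ValueError(f"score {s} is not on the 2-10 scale")
    return sum(scores)
```

For example, scores of 2, 4, 2, 6 and 2 across the five questions give a total risk factor of 16.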

Q1. How many teams would be involved?

The number of teams involved in a change can indicate its complexity and difficulty. Involving more teams means maintaining communication and coordinating the collaborative effort, so the more teams involved, the greater the risk that one of these facets fails.

In terms of possible scoring ranges, one team involved scores 2 points, two teams score 4 points, and so on, until five or more teams equates to a score of 10 points. The score thus increases as the number of teams involved, and the associated risk, increases.

The scoring:

  1 team involved: 2 points
  2 teams: 4 points
  3 teams: 6 points
  4 teams: 8 points
  5 or more teams: 10 points
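
The mapping above can be expressed as a small function (a sketch; the function name is my own):

```python
def teams_involved_score(num_teams: int) -> int:
    """Score Q1: 1 team -> 2 points, 2 teams -> 4, ..., 5 or more teams -> 10."""
    if num_teams < 1:
        raise ValueError("at least one team must be involved")
    return min(2 * num_teams, 10)
```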

Q2. Has this type of change been applied before, and with what degree of success?

When the team has already used a certain approach, the chances are higher that it can be used successfully again in the future, especially if the people who applied it in the past are still on the team and available to help.

The scoring: [Prior experience table]

Q3. How much development and testing effort would be needed?

A change might be small from a PO’s perspective but could imply many code changes. This must be considered when deciding whether to proceed with the change. Besides the development, the testing process could also influence the decision.

Even if the development requires little effort, testing might bring regression and integration testing costs that weigh against implementing the change.

The scoring: [Total effort needed table]

Q4. Delivery impact

A change to a feature that is not yet live is far less risky than a change to a live system. Changes to a live feature might affect real users’ data, which is far more costly.

Also, even if the change is planned to be delivered at the same time as the feature, implementing the change request could tie up resources and impact the initial delivery calendar.

Project managers have means to mitigate the risk to the delivery calendar, but the mitigation effort should also be considered.

The scoring: [Delivery impact table]

Q5. If the change fails, what are the possibilities and risks of a revert?

If we’ve decided to deliver the change after the feature is already live and user data is impacted, then we must have a backup plan in case of failure.

If user data is updated or altered in any way, there is a risk of corrupting it, and this risk has to be assessed properly.

The scoring: [Failure assessment table]

Conclusion

All the above criteria must be considered when a change request is analyzed. A score of 10 on any criterion is a good indicator that the change involves a high risk of cost or failure.

The total risk score should be evaluated by each team depending on its appetite for risk.
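
As a closing sketch, this rule of thumb could look like the following in code. The criterion names and example scores are my own, not taken from the article:

```python
def evaluate_change(scores: dict) -> dict:
    """Total the per-criterion scores and flag any criterion scoring 10."""
    return {
        "total_risk_factor": sum(scores.values()),
        "red_flags": [name for name, score in scores.items() if score == 10],
    }

# Example: a change involving four teams, low effort, but delivered after go-live.
result = evaluate_change({
    "teams_involved": 8,
    "prior_experience": 2,
    "total_effort": 4,
    "delivery_impact": 10,
    "revert_risk": 6,
})
# result["red_flags"] contains "delivery_impact", signalling a high-risk change.
```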