Recently I responded to a question about UAT metrics.

"What user acceptance testing metrics are most crucial to a business?"

Here is an expanded version of my answer, with some caveats.

The leading caveat is that you have to be very careful with metrics because they can drive the wrong behavior and decisions. It's like the unemployment rate. The government actually publishes several rates, each with different meanings and assumptions. The one we see on TV is usually the lowest one, which doesn't factor in the people who have given up looking for work. So, the impression might be that the unemployment situation is getting better, while the reality is that a lot of people have left the workforce or may be under-employed.

Anyway, back to testing...

If we see metrics as items on a dashboard to help us drive the car (of testing and of projects), that's fine as long as we understand that WE have to drive the car and things happen that are not shown on the dashboard.

Since UAT is often an end-of-project activity, all eyes are on the numbers to know if the project can be deployed on time. So there may be an effort by some stakeholders to make the numbers look as good as possible, as opposed to reflecting reality.

With that said...

One metric I find very telling is how many defects are being found per day or week. You might think of this as the defect discovery velocity. These counts must be analyzed in terms of severity: 10 new minor defects may be more acceptable than 1 critical defect. As the deadline nears, the number of new critical defects gains even more importance.
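To make the idea concrete, here is a minimal sketch of computing discovery velocity from a defect log, broken down by severity. The log entries, dates, and severity labels are all hypothetical, and the weekly bucketing (ISO week number) is just one reasonable choice.

```python
from collections import Counter
from datetime import date

# Hypothetical defect log: (date reported, severity).
defects = [
    (date(2024, 5, 6), "critical"),
    (date(2024, 5, 6), "minor"),
    (date(2024, 5, 7), "minor"),
    (date(2024, 5, 8), "critical"),
    (date(2024, 5, 8), "minor"),
]

# Count new defects per ISO week, split by severity, so a spike in
# critical discoveries stands out from a spike in minor ones.
velocity = Counter()
for reported, severity in defects:
    week = reported.isocalendar()[1]
    velocity[(week, severity)] += 1

for (week, severity), count in sorted(velocity.items()):
    print(f"week {week}: {count} new {severity}")
```

Trending these per-severity counts over successive weeks is what makes the metric useful: a flat or rising critical line near the deadline is a warning the dashboard number alone won't give you.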

Another important metric is the number of resolved/unresolved defects. These must also be balanced by severity and should be reflected in the acceptance criteria. Be aware, though, that it is common (and not good) practice to reclassify critical defects as "moderate" to release the system on time. Also, keep in mind that you can "die the death of a thousand paper cuts." In other words, it's possible to have no critical issues, but many small issues that render the application useless.
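A simple tally like the one below keeps the resolved/unresolved balance visible per severity, which makes quiet reclassification harder to miss. The records and field names are hypothetical.

```python
from collections import defaultdict

# Hypothetical defect records: severity plus whether the fix was verified.
defects = [
    {"id": 1, "severity": "critical", "resolved": True},
    {"id": 2, "severity": "critical", "resolved": False},
    {"id": 3, "severity": "moderate", "resolved": True},
    {"id": 4, "severity": "minor", "resolved": False},
    {"id": 5, "severity": "minor", "resolved": False},
]

# Tally open vs. resolved counts per severity level.
tally = defaultdict(lambda: {"resolved": 0, "open": 0})
for d in defects:
    key = "resolved" if d["resolved"] else "open"
    tally[d["severity"]][key] += 1

for severity, counts in tally.items():
    print(f"{severity}: {counts['resolved']} resolved, {counts['open']} open")
```

Note that the "thousand paper cuts" problem shows up here too: a long "minor: open" list deserves scrutiny even when the critical row reads zero.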

Acceptance criteria coverage is another key metric, identifying which criteria have and have not been tested. Of course, proceed with great care on this metric as well. Just because a criterion has been tested doesn't mean it was tested well, or even that it passed. In my Structured User Acceptance Testing course, we place a lot of focus on testing the business processes, not just a list of acceptance criteria. That gives a much better idea of validation and whether or not the system will meet user needs in the real world.
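The coverage calculation itself is simple, as this sketch shows. The criterion IDs and test results are hypothetical; the point is that coverage, untested criteria, and failures are three different answers, and quoting only the first one hides the other two.

```python
# Hypothetical acceptance criteria and the results recorded against them.
criteria = {"AC-1", "AC-2", "AC-3", "AC-4"}
executed = {
    "AC-1": "passed",
    "AC-2": "failed",
    "AC-3": "passed",
}

tested = set(executed)
untested = criteria - tested
coverage = len(tested) / len(criteria) * 100

print(f"coverage: {coverage:.0f}%")    # how many criteria were exercised at all
print(f"untested: {sorted(untested)}")  # AC-4 was never tested
print(f"failed: {[c for c, r in executed.items() if r == 'failed']}")
```

A "75% covered" headline here still leaves one criterion untested and one failing, which is exactly the kind of nuance a single dashboard number flattens out.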

Finally, stakeholder acceptance is the ultimate metric: how many of the original acceptance criteria have been formally accepted versus not accepted? It may be the case that just one key issue holds up the entire project.
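This one can be tracked with nothing more than a sign-off record per criterion, as in the hypothetical sketch below, where a single rejection is enough to flag the release.

```python
# Hypothetical sign-off record: each criterion's formal acceptance status.
signoff = {
    "AC-1": "accepted",
    "AC-2": "accepted",
    "AC-3": "rejected",
    "AC-4": "accepted",
}

accepted = sum(1 for status in signoff.values() if status == "accepted")
blocking = [c for c, status in signoff.items() if status != "accepted"]

print(f"{accepted}/{len(signoff)} criteria formally accepted")
print(f"blocking items: {blocking}")  # one rejection can hold up release
```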

As far as business value is concerned, a business must see the value in UAT and in the system to be released. Here is an article I wrote that addresses the value of software quality: The Cost of Software Quality - A Powerful Tool to Show the Value of Software Quality.

I hope this helps and I would love to hear about any metrics for UAT you have found helpful.


