SPC Introduction for Medical Physicists: What Should I Be Measuring?

Matt Whitaker  Image Owl, Inc.
Copyright © November, 2015. Image Owl, Inc.

In this series of articles I want to give the practicing medical physicist a basis for understanding the issues Statistical Process Control (SPC) can address in their work, how to get started, and some further resources for gaining a deeper understanding of this powerful group of techniques.

This is the second article in the series following up on ‘Why Does Variability Matter?’

What Should I Be Measuring?

In this article we discuss how to select key performance indicators for your system and why you want to be selective in the first place.

Why not measure everything?

In this age of easy access to data it is tempting to just track and trend everything possible. It is even possible to do an automated first order evaluation on the data being gathered and generate alerts and alarms for users.

 So why not?

The main drawbacks to this firehose approach are:

  • SPC methodology recommends that when our system produces a statistically significant signal that we investigate it for an assignable cause. In most organizations this is the limiting bandwidth. You and your colleagues probably don’t have enough time to do this for more than a handful of key indicators.
  • This indiscriminate approach ignores the fact that just because we can measure a variable it does not necessarily follow that the measurement has any relevance to us.

Trying to measure everything without discrimination is a recipe for almost certain failure in your program. You will spend valuable time chasing down irrelevant signals and the value of SPC techniques will be lost on you and your colleagues.

General Principles for Choosing What to Measure

So now that we have decided to limit ourselves to a manageable number of relevant measures, how do we decide which key measures make the cut?

First we need to decide which key outcomes we need to control. In a medical physics environment this generally involves reviewing and mapping our treatment and imaging processes and conducting a risk assessment to determine the elements posing the greatest risk, based on a combination of the frequency and the severity of the consequences of variation or failure. One of the tools suggested in the soon (hopefully!) to be released AAPM TG100 report is failure mode and effects analysis (FMEA).
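As an illustrative sketch of the risk-assessment step, FMEA typically ranks each failure mode by a Risk Priority Number (RPN), the product of occurrence, severity, and detectability ratings. The failure modes and ratings below are hypothetical, not taken from a real process map:

```python
# Hypothetical FMEA ranking sketch: RPN = occurrence x severity x detectability.
# All descriptions and 1-10 ratings below are illustrative only.

failure_modes = [
    # (description, occurrence, severity, detectability)
    # higher detectability rating = harder to detect
    ("Linac output drift",             4, 7, 3),
    ("MLC leaf calibration error",     3, 8, 5),
    ("Wrong patient imaging protocol", 2, 6, 6),
]

def rpn(occurrence, severity, detectability):
    """Risk Priority Number: product of the three ratings."""
    return occurrence * severity * detectability

# Failure modes with the highest RPN are the first candidates for monitoring.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for desc, o, s, d in ranked:
    print(f"{desc}: RPN = {rpn(o, s, d)}")
```

The ranked list then points you toward the outcomes most in need of a key performance indicator.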

Following that we need to establish actual measurements that serve as proxies for the outcome. This could be a list of existing measurements or proposed measurements. You will probably still come up with a list that is too long to implement initially.

In order to prioritize which key metrics should be implemented first, we need an organized system for evaluating the value and effort of each measurement. We recently completed this exercise with a diagnostic imaging clinical partner and a wide-ranging group of stakeholders. In this environment the desired outcomes encompassed patient and staff dose, image quality and regulatory compliance. Performance indicator values were rated on factors including:

  • Relationship to dose risk in patients.
  • Relationship to dose risk in staff.
  • Prediction of physician-perceived image quality.
  • Regulatory requirements.

Performance indicator effort was rated on factors including:

  • How well defined the measurement is.
  • How much effort the measurement takes (e.g. is it automated?).
  • How much processing the data requires before presentation.
  • Whether we have a track record on this measurement to review.

Your value and effort metrics will of course vary with the exact process you are evaluating.

The outcome of this exercise greatly increased clarity around:

  • High value/low effort measurements: These are clear candidates to fast-track for your SPC program.
  • Areas of significant disagreement: Where ratings vary widely on measurement value and effort, this is an opportunity to gather varying perspectives and align on a common understanding of the proposed metric.
  • High value/high effort measurements: These are potentially valuable measurements that require excessive effort to gather, or whose data-gathering method is simply unclear or undefined. Here we will want to review what steps could be taken to better understand, simplify or automate the measurement.
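The exercise above can be sketched as a simple value/effort quadrant classification. The measurement names, ratings and the threshold below are hypothetical; in practice each rating would be averaged across stakeholder questionnaires:

```python
# Hypothetical value/effort quadrant sketch. Ratings are on a 1-5 scale
# and are illustrative only, not from a real clinical exercise.

measurements = {
    # name: (mean value rating, mean effort rating)
    "CTDIvol per protocol": (4.6, 1.8),  # e.g. automated from DICOM headers
    "Phantom CNR":          (4.2, 4.4),  # e.g. manual phantom scans
    "Reject/repeat rate":   (2.1, 1.5),
}

def quadrant(value, effort, threshold=3.0):
    """Classify a measurement into a value/effort quadrant."""
    v = "high value" if value >= threshold else "low value"
    e = "high effort" if effort >= threshold else "low effort"
    return f"{v}/{e}"

for name, (v, e) in measurements.items():
    print(f"{name}: {quadrant(v, e)}")
```

High value/low effort items surface immediately as fast-track candidates, while high value/high effort items flag where simplification or automation is worth investigating.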

Other considerations as you evaluate potential measurements:

  • Adequacy of Measurement Units: If the measurement units are too coarse your SPC control charts will generate excessive false signals. Wheeler (see references) indicates that this problem appears when the measurement increment exceeds the actual process standard deviation. To correct this you will need either to see if the data can be gathered with finer measurement units, or to gather the data over a longer period of time so that the observed variability increases to a point where the charts are useful.
  • Time Scale of Measurement vs Time Scale of Failure Mode: Where possible you want to measure parameters at a rate that allows you to be signaled and take action before clinical consequences occur. Evaluate the identified failure modes. In cases where process degradation occurs over time (e.g. a slow output drift on a linear accelerator) it should be easy to determine a reasonable measurement interval. SPC control charts are generally not suited for detecting very sudden failure modes. It’s not that the charts will not detect the event, but rather that they may not do so in adequate time to prevent serious clinical consequences. Here other alarm mechanisms will have to be considered.
  • Try to convert count data to rates: Although specialized SPC charts do exist for evaluating count-type data (e.g. the number of artifacts in images), it is usually preferable to convert this discrete count data to a numerical rate where possible (e.g. the mean number of artifacts per image). This will simplify the eventual SPC charts. We will explore count data in depth later in the series.
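Wheeler's measurement-unit rule above can be checked directly against your data. A minimal sketch, using made-up output readings recorded to the nearest 0.1:

```python
# Quick measurement-unit adequacy check in the spirit of Wheeler's rule:
# charting becomes unreliable once the measurement increment exceeds the
# process standard deviation. The readings below are illustrative only.

import statistics

readings = [1.0, 1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 0.9, 1.0, 1.1]
increment = 0.1  # smallest step the instrument/recording allows

sigma_est = statistics.stdev(readings)  # sample standard deviation
if increment > sigma_est:
    print("Measurement units too coarse: refine the units or "
          "aggregate data over a longer period.")
else:
    print("Measurement units adequate for charting.")
```

With these readings the estimated standard deviation (about 0.07) is smaller than the 0.1 recording increment, so the check flags the units as too coarse.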

Finally, once you have established your key performance metrics, make sure that you schedule a regular review to determine:

  • Does the measurement continue to provide the value you anticipated?
  • Are there better (easier, more accurate) ways to provide measurements that link to your desired outcomes?
  • Do the people gathering the measurements in your organization understand the significance of the measurement and the correct method for performing the measurement?

Irrelevant ‘zombie’ data gathering will quickly sap the effectiveness of your program.

Tools to Help Determine What to Measure

Good overviews and examples of the FMEA methodology applied in a medical physics environment can be found in Ford et al. and Broggi et al. (see Further Reading).

There are many software tools for process mapping, fault trees and FMEA analysis online. Dr. Alf Siochi has created a very nice basic fault tree creator software tool; an overview of its use is also available.

Google Forms is an excellent tool for creating quite sophisticated questionnaires during the measurement evaluation phase.

If you are looking for a more formal tool to relate outcomes to measures you may want to consider a methodology like Quality Function Deployment (QFD). It was originally designed to link user demands for products to internal manufacturing decisions, but could be adapted to the medical physics environment.

Coming Up…

In the next installment we will construct a basic control chart and explore what ‘Control’ really means in SPC.

Further Reading

A streamlined failure mode and effects analysis
Ford et al, Medical Physics, Volume 41, No.6, 2014

Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy
Broggi et al, Journal of Applied Clinical Medical Physics, Vol. 14, No. 5, 2013

Quality Function Deployment: Linking a Company with Its Customers
R. Day, ASQC Quality Press, 1993

Statistical process control for radiotherapy quality assurance
T. Pawlicki, M. Whitaker and A. L. Boyer, Medical Physics, Vol. 32, No. 9, 2777 (2005)

Understanding Statistical Process Control
D. J. Wheeler and D. S. Chambers, SPC Press, 1992.

Advanced Topics in Statistical Process Control
D. J. Wheeler, SPC Press, 1995.