SPC Introduction for Medical Physicists: Why Does Variability Matter?

Matt Whitaker, Image Owl, Inc.
Copyright © August 2015, Image Owl, Inc.

In this series of articles I want to give the practicing medical physicist a basis for understanding what problems Statistical Process Control (SPC) can address in their work, how to get started, and where to find further resources for a deeper understanding of this powerful family of techniques.

I intend to spend as much time on the ‘why’ of SPC as on the mechanics, as I am convinced that the mechanics come quite easily once you understand the reasoning behind the tools. Having a firm grasp of the reasoning also allows you to make effective decisions when setting up an SPC system and to avoid some of the traps.

With the guidelines in AAPM’s TG-100 report due to be released soon, there will be an increased need to apply effective industrial QA tools to the practice of medical physics. SPC is one such family of tools, designed to measure process variation and detect statistically significant changes.

This first article examines why we care about process variability at all:

Why Does Variability Matter?

Traditional approaches to quality control define quality as adherence to a set of standards and tolerances. Everything within tolerance is good. If a measurement falls even slightly out of tolerance, our process is bad and we must do something about it. We often measure retrospectively to determine fitness for use, treating each measurement point in isolation without reference to past behavior. This is essentially a ‘compliance’ mindset: as long as the ball wobbles between the goalposts we are fine. If we miss, we need to make an adjustment or, as is often the case, assign blame.

This binary view of quality is fundamentally flawed. It fails to take into account the costs of variation, the nature of random process variation and our desire to proactively control our process.

Traditional Approach to Variability Costs

When we put a process in place we select a set of characteristics that we are going to measure to determine whether our process is meeting its goals. These could be a set of dimensions in the case of a process producing a mechanical assembly or perhaps the x-ray output of a linear accelerator for radiation therapy.

We set nominal targets for these measurements because we have an ideal picture of what we want the process’s product to be. For the mechanical assembly, the goal may be for the assembly to move smoothly without excessive slop or binding. For the linear accelerator, the nominal output is a key input to our treatment planning process, and we want the treatment to match the plan.

Traditionally we then set acceptable tolerances around the measurements. What are we doing when we set these tolerances? In a traditional mindset we are saying that this is the point at which we have to deal with the risk or cost of deviation. Note that costs need not be monetary; in a medical physics setting they can include a decreased probability of tumor control or an increased risk to surrounding organs.

In the diagrams below (Figures 1 & 2) we see how this model plays out, with a probability density function reflecting the mean and standard deviation of an ongoing process (the red line) overlaid on a set of goalpost specifications (blue line). The cost curve reflects the assumption that variability within specifications costs us nothing.

Figure 1: Traditional Cost of Off-Target Process

Figure 2: Traditional Cost of Process Variability

Once the process produces a measurement that exceeds a specification, we incur a cost (retake the measurement, adjust the equipment, etc.). The total expected cost is the integral of the product of the process’s probability density and the cost curve, shown as the green area on the plot. As we can see, until the process aim or variability reaches a point where measurements are likely to exceed the specifications, the total cost is zero (i.e., we are denying that it exists).
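
To make this concrete, here is a minimal sketch in Python of the goalpost cost calculation. The numbers are entirely hypothetical (tolerance limits of ±2% on linac output deviation and a unit cost per out-of-spec measurement); for a normally distributed process the expected cost reduces to the cost times the out-of-spec probability:

    from scipy import stats

    # Hypothetical goalpost model: linac output measured as % deviation
    # from nominal, with assumed tolerance limits of +/-2% and an assumed
    # fixed cost of 1.0 (retake, re-tune, investigate) per out-of-spec point.
    LOWER, UPPER = -2.0, 2.0
    COST_OUT = 1.0

    def expected_step_cost(mean, sigma):
        """Expected cost under the traditional model: zero inside the
        specifications, COST_OUT outside, so the expected cost is just
        COST_OUT times the probability of falling out of spec."""
        p_out = stats.norm.cdf(LOWER, mean, sigma) + stats.norm.sf(UPPER, mean, sigma)
        return COST_OUT * p_out

    # A centered process appears to cost nothing, and the model stays
    # quiet even as the mean drifts most of the way to the limit:
    print(expected_step_cost(0.0, 0.5))  # ~0.00006
    print(expected_step_cost(1.0, 0.5))  # ~0.023
    print(expected_step_cost(2.0, 0.5))  # 0.5 -- cost appears only at the ditch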

Behaviors that the traditional model reinforces:

  • Since no cost is acknowledged until the process has a significant likelihood of exceeding a specification limit, our actions are likely to be purely reactive rather than proactive. After all, we see no cost until we go into the ditch!
  • Once the variability of our process sits within the specifications, our incentive for further improvement is greatly diminished, as we see no return on our efforts to center the process and reduce its variability.
  • In a traditional mindset, quality improvement activities will almost certainly be viewed as a net cost to the organization.

A More Realistic Cost of Variability Model

Is a constant cost for all deviations from target within specifications a realistic model? For most processes we would expect performance to degrade monotonically as we deviate from the target. For example, as our linear accelerator’s output deviates from nominal, the delivered dose deviates increasingly from the planned dose, resulting in a small but definable change in tumor control and organ-sparing characteristics. In the diagrams below we see such a cost curve (blue), where costs increase as we deviate from the nominal measurement.

This is the classic insight of Genichi Taguchi, a Japanese industrial statistician. Without going deeply into the math, examining the interaction between the probability curve and the Taguchi loss function reveals that to minimize the total cost we must both minimize variability and center the process on the target.
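
In fact, the math here is short. In the standard quadratic form of the loss (with x the measurement, T the target, and k an application-specific cost constant that must be estimated for each process), the expected loss over a process with mean μ and standard deviation σ decomposes cleanly:

    L(x) = k\,(x - T)^2
    \qquad\Longrightarrow\qquad
    \mathbb{E}\left[L(X)\right] = k\left[(\mu - T)^2 + \sigma^2\right]

The first term is driven purely by off-target aim and the second purely by variability, which is why minimizing the expected loss requires both centering the process (μ = T) and reducing σ. Only the shape of the curve matters for the argument here.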

In the diagrams below (Figures 3 & 4) we have a probability density function reflecting the mean and standard deviation of an ongoing process (the red line) overlaid on a Taguchi-type loss function (blue line). The cost curve reflects increasing losses (costs) as we deviate from the nominal measurement.

Figure 3: Taguchi Loss from Off-Target Process

Figure 4: Taguchi Loss from Process Variability

In the Taguchi model, any deviation of the process from its nominal value, or any increase in its variability, results in higher cost. The total expected cost is again the integral of the product of the process’s probability density and the cost curve, shown as the green area on the plot. By acknowledging that the cost of deviation is realistically a continuous rather than a step function, we get a far more complete picture of the pernicious effects of process variance.
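
As a sketch of how that green area is computed in the Taguchi case, the following Python continues the earlier hypothetical example (target T = 0% deviation and an assumed cost constant k = 0.25 per %²), checking the closed-form expected loss against a direct numerical integration of pdf × loss:

    import numpy as np
    from scipy import stats

    T, K = 0.0, 0.25  # hypothetical target (% deviation) and cost constant

    def expected_taguchi_loss(mean, sigma):
        """Closed form of E[K*(X - T)^2] for a normal process."""
        return K * ((mean - T) ** 2 + sigma ** 2)

    def expected_taguchi_loss_numeric(mean, sigma):
        """The same quantity as the green area: the integral of
        pdf(x) * loss(x) over the process distribution."""
        x = np.linspace(mean - 8 * sigma, mean + 8 * sigma, 20001)
        return np.trapz(stats.norm.pdf(x, mean, sigma) * K * (x - T) ** 2, x)

    # Unlike the goalpost model, cost rises smoothly with both drift and spread:
    for mean, sigma in [(0.0, 0.5), (1.0, 0.5), (0.0, 1.0)]:
        print(mean, sigma,
              expected_taguchi_loss(mean, sigma),
              round(expected_taguchi_loss_numeric(mean, sigma), 4))
    # -> 0.0625, 0.3125, and 0.25 respectively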

Behaviors that the Taguchi model reinforces:

  • Since costs rise continuously as our process variability increases or our process drifts from its nominal desired state, we are far more likely to take action before truly serious consequences arise.
  • Since costs decrease continuously as we center our process and reduce its variability, we have an incentive to engage in continuous quality improvement. While there may realistically come a point where the cost of improvement outweighs the gains, we at least have a more realistic view of the tradeoffs involved.
  • Improvements in quality performance are more likely to be viewed as a net benefit to the organization.

In Conclusion…

Understanding that any deviation from nominal target values in a process carries a cost is a key insight that will help you intuitively understand many of the mechanics of SPC.

As we go forward in our exploration of SPC tools it is important to keep this view of the effects of variation in mind. Doing so will help keep you out of the many mental traps that the compliance mindset leads to.

The bottom line is that to achieve world-class quality we must have as our process goal:

On-Target with Minimum Variance

Coming Up…

In the next installment we will examine how one chooses what to measure for an SPC program.

Further Reading

Statistical process control for radiotherapy quality assurance
Todd Pawlicki, Matthew Whitaker and Arthur L. Boyer, Med. Phys. 32, 2777 (2005).

Understanding Statistical Process Control
D. J. Wheeler and D. S. Chambers, SPC Press, 1992.

Advanced Topics in Statistical Process Control
D. J. Wheeler, SPC Press, 1995.

Taguchi Techniques for Quality Engineering
Phillip J. Ross, McGraw-Hill, 1988.

Variation and control of process behavior
Todd Pawlicki and Matthew Whitaker, International Journal of Radiation Oncology • Biology • Physics, Vol. 71, Issue 1, S210–S214, 2008.
