Image Owl held its very first users' meeting at Vanderbilt-Ingram Cancer Center on August 1st! What a great way to finish off a busy week at AAPM! A big thank you to our speakers, and for all the support and help our users provide us!
If you are ready to turn your incoherent mix of machine QA tools into a powerful, informative system, come see our Total QA® system at table 44.
Image Owl's Total QA® provides an extensive array of tools to customize your radiation therapy and diagnostic imaging QA interface to produce the system you need. The Total QA® API provides the ultimate in flexibility for customization and extension. That powerful tool has been made even easier to use with the TQA Connector package.
The Total QA Application Programmer Interface (API) is a REST-style architecture providing a simple, scalable interface to the core functionality of the Total QA platform. It allows:
- Remote access to data in the Total QA database for custom analyses.
- Ability to push data remotely into the Total QA database.
- Ability to control many aspects of account management.
The REST architecture can be used from nearly any modern programming language with built-in or add-on libraries for making HTTP calls. Several basic examples of connecting to the service through the API in a variety of languages can be found here.
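As an illustration of how such a REST interface might be called, the sketch below builds authenticated JSON requests in Python using only the standard library. The base URL, endpoint paths, and bearer-token scheme are hypothetical placeholders, not the actual Total QA API; consult the API documentation for real endpoints and authentication.

```python
import json
import urllib.request

# Hypothetical placeholders -- substitute real values from the API docs.
BASE_URL = "https://api.example.com/v1"
TOKEN = "your-access-token"

def build_request(path, payload=None):
    """Build an authenticated JSON request: POST if a payload is given, else GET."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        BASE_URL + path,
        data=data,
        headers={
            "Authorization": "Bearer " + TOKEN,
            "Content-Type": "application/json",
        },
        method="POST" if payload is not None else "GET",
    )

# Pull data for a custom analysis (GET) and push a result back (POST);
# urllib.request.urlopen(req) would then send either request.
get_req = build_request("/machines")
post_req = build_request("/results", {"test": "output", "value": 1.002})
```

The same pattern carries over directly to MATLAB, C#, or any language with an HTTP client.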
To further simplify the connection for MATLAB users, we created the TQA Connector wrapper package. The wrapper simplifies MATLAB calls to the Total QA API and handles some HTTP tasks that can be challenging even for experienced MATLAB users. In the interest of fostering innovation, we are happy to provide the source code and invite interested parties to build upon this work.
The Image Owl Team looks forward to seeing what new extensions and uses for Total QA our users will dream up. If you would like to see for yourself how Total QA can transform your QA system into a coherent whole, just request a demo and trial below.
The Department of Radiation Oncology at UT Southwestern (Dallas, TX) has adopted the Total QA (Image Owl Inc.) service to consolidate its therapy machine QA into a single coherent system.
The department began its search for such a system in late 2016. Dr. Yang Park, an assistant professor at UT Southwestern, led the search process. He and his colleagues did an exhaustive review of all the QA management products in the radiation oncology space.
Their selection criteria included having the system accessible from anywhere including mobile devices, the inclusion of automated EPID based image analysis, and the ability to customize and extend the system to meet their unique QA needs.
Dr. Park commented, "Not only did Total QA meet all of our criteria, it did so at a reasonable price compared to the alternatives".
Having recently moved the majority of its radiation oncology operations to a new facility, UT Southwestern began by transferring monthly QA activities to Total QA, including the automated analysis of phantom images, EPID based tests, and dosimetry. Dr. Park noted, "We have been pleased with the user interface, workflow, and the quick responses from Image Owl's technical support. We had some unique situations that Total QA had not dealt with before but these were resolved quickly."
After setting up monthly QA schedules, Dr. Park and his colleagues expanded the use of Total QA for consolidating data from the department's daily QA devices and machine warmup QA.
UT Southwestern plans to use Total QA for all its machine QA needs including its non-conventional treatment machines such as CyberKnife, Gamma Knife, and imaging equipment.
Dr. Park and some of his colleagues plan to use the consolidated data in Total QA in data mining research for predictive analytics and other improvement activities. The complete API included with Total QA also affords UT Southwestern physicists the ability to upload numerical and graphical results from custom analyses.
Matt Whitaker, Image Owl Marketing Director, commented "Not only are we delighted at having a prominent institution like UT Southwestern as a client, their comprehensive array of equipment and interest in data analytics make them a valuable collaborator as we strive to make Total QA the most flexible and comprehensive QA data integration platform in the market".
For more information on Total QA please contact Image Owl (firstname.lastname@example.org) or visit us at ASTRO 2017 in San Diego in booth 2417 (The Phantom Laboratory)
Total QA version 2.0 was released last week with a new user interface and streamlined workflow for data entry and QA template creation. This tops off an exciting year of development for Image Owl's cloud based Total QA service.
Total QA is the convenient and complete service for transforming your complex QA system into a seamless whole.
2016 saw many additions to Total QA's capabilities based on feedback from our users and partners. Some of the highlights:
- A completely revised report format with the ability to create baselines from report data, directly link to longitudinal graphs and hide data points.
- A complete API for the Total QA system that allows advanced users and developers to control all aspects of the service.
- Many additional imaging tests have been added to the service including starshots, MLC transmission, dosimetric leaf gap, and asymmetric leaf tests.
- Added the capability to manage and test HDR sources and output.
- New abilities to compare test performance across multiple machines and sites within your organization.
- Enhanced support for pulling data from multiple rooms within one QA schedule for Standard Imaging's QA BeamChecker and the Sun Nuclear Daily QA3.
- Added preset templates that can be copied to make developing your QA program even faster.
- Enhanced custom test types including energy dependent tests have been added.
We have many new enhancements in the works for 2017 as we continue to make Total QA the most convenient, cost-effective and powerful tool available to integrate your facility's physics and biomedical QA system into a seamless whole.
The medical physics and bioengineering community is full of great ideas for novel QA devices and powerful analyses. The Total QA® Application Programmer's Interface (API) provides your application with access to a complete and convenient cloud based QA platform that frees you from the task of creating extensive infrastructure for gathering, tracking and distributing your ideas to your colleagues and peers.
Total QA provides a customizable cloud-based platform for creating, managing and implementing a QA system that integrates manual input, image processing results and device input into a coherent whole. The Total QA API allows you to extend the core system with new analyses, file imports and even whole new front end applications.
The Total QA Application Programmer Interface (API) provides a secure REST-style architecture offering a simple, scalable interface to the core functionality of the Total QA platform. It gives outside developers remote access to data in the Total QA database for custom analyses and the ability to push data and images into the Total QA database and image processing engine. Administrative functions can also be controlled through the API, allowing complete applications to be built on top of the core capabilities.
The REST architecture of the API can be accessed by nearly any modern programming language. The documentation has examples illustrating connections to the service from MATLAB, Python and C#. To simplify the connection even further, Image Owl has recently made available a complete MATLAB wrapper. Toolkits for Python, C# and Java are being prepared.
We would love to introduce you to Total QA and discuss how to bring your ideas to life through the API.
For further information please contact us at email@example.com
This newly published paper in the Journal of Applied Clinical Medical Physics by Dr. David Goodenough et al. details the physics behind the Wave module contained in The Phantom Laboratory's Catphan® 700 phantom and automatically analyzed by the Image Owl Catphan QA service.
The Wave phantom can be used to sample the 3D resolution properties of a CT image, including in-plane (x,y) and z-axis information. The key development in the Wave phantom is the incorporation of a z-axis aspect into the more traditional step (bar) resolution gauge design.
Image Owl announced today that it has released a number of new analyses to enhance its suite of convenient, automated radiotherapy QA image analyses for the Total QA service.
The Total QA service pulls the complex clutter of radiotherapy QA into one complete system that allows you to effectively manage your QA workflow and analysis, from daily checks to annual inspections.
New analyses include:
MLC Leaf Transmission and Dosimetric Leaf Gap: The MLC transmission and dosimetric leaf gap tests use the standard Varian transmission and leaf gap plan. The static leaf transmissions are calculated from comparisons between open and closed fields for both the A and B leaf banks. For the dosimetric leaf gap calculation, a series of increasing leaf gaps is swept across a 10x10 cm field. The EPID readings for each gap are then corrected and regressed to calculate the leaf gap.
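As a rough illustration of the regression step, the Python sketch below fits corrected readings against gap width and extrapolates the fit to zero reading. The reading values and slope are made-up illustrative numbers, not Total QA's actual correction or algorithm.

```python
import numpy as np

# Illustrative sweep: gap widths (mm) and corrected readings following a
# made-up linear response reading = 0.05 * (gap + 1.6); not real Total QA data.
gaps = np.array([2.0, 4.0, 6.0, 10.0, 14.0, 16.0, 20.0])
readings = 0.05 * (gaps + 1.6)

# Linear fit of corrected reading vs. gap width.
slope, intercept = np.polyfit(gaps, readings, 1)

# The dosimetric leaf gap is the positive gap offset at which the fitted
# line extrapolates to zero reading (the negative of the x-intercept).
dlg = intercept / slope
```

With these synthetic readings the fit recovers a dosimetric leaf gap of 1.6 mm, the offset built into the data.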
Starshot Analysis: Total QA contains routines for measuring gantry, collimator and table starshot images to quantify the accuracy of rotation for each of these critical subsystems. For gantry starshots simply upload TIFF image scans of the starshot exposure. For collimator and table starshots EPID DICOM images for each beam are composited to create the starshot image.
Asymmetric Field Measurements: The asymmetric field measurement routine measures the accuracy of delivery of two asymmetric fields using the Standard Imaging FC-2 phantom.
These new routines complement Total QA's extensive existing image processing capabilities. To see a complete overview of Total QA's analysis routines please register for our upcoming webinar: Total QA: Convenient Analysis Solutions for your Machine QA (2:30 PM EST, June 2nd, 2016).
We are getting very excited about exhibiting at the ESTRO 2016 meeting in Turin, Italy in booth 1700. We would love to be able to show you our latest solutions to support your radiation therapy QA efforts.
Do you ever ask:
- How can I get my radiation therapy QA systems to talk to each other?
- Why do maintenance and installation on local PCs take forever?
- It is 2016, shouldn’t the analysis process be automated?
- Where can I get a large enough file cabinet to house the binders and binders of QA data we collect?
- How am I going to be able to analyze and track all the imaging, MLC and machine tests required by professional and regulatory requirements?
Do you spend an excessive amount of time manually analyzing your Catphan® CBCT images or struggle with software that still makes you find slices manually and fails on basic analysis tasks?
Catphan® QA is fully automated and requires no installation or maintenance. Simply upload a CT series of any Catphan® model and receive a complete report of your CT system's performance on any web enabled device.
Are you thinking about adding MR planning capabilities to your facility or investing in one of the emerging MR guided radiotherapy systems? If so, then MR image distortion becomes a critical concern for accurate treatment delivery.
Image Owl's MR Distortion Analysis service has a 10-year track record in the distortion analysis and reduction of 2D and 3D MR image acquisitions. During that period, Image Owl software has been instrumental in analyzing MR distortion in thousands of image sets using The Phantom Laboratory's Magphan® Quantitative Imaging Phantom.
Come see how you can achieve true 3D analysis of your MR scanner's distortion performance.
Book your dedicated demonstration appointment today
Simply fill out the form below or contact us at firstname.lastname@example.org and we will reserve a time for you to see how we can help you bring your therapy QA into the 21st century.
Image Owl announces the release of the HDR QA module for its Total QA service. The HDR module includes the ability to manage Ir-192 or Co-60 source exchanges and adds templates to perform daily and monthly HDR QA in conformance with NRC regulation 10 CFR 35, parts 35.633, 35.643, and 35.647.
The source exchange functionality includes the ability to:
- Record source calibration activity, calibration time, and other characteristics.
- Attach and store manufacturer's calibration documentation.
- Track and display total current activity on site.
- Record shipping dates.
- Display source strength.
- Display complete history of source exchanges for any HDR device.
- Default units may be set as curies or gigabecquerels.
Total QA's convenient cloud based architecture means no software installation or maintenance and your data is accessible on any web enabled device.
The daily and monthly QA HDR templates can be used as is or customized with Total QA's unique test customization features to match your facility's workflow. The daily HDR QA includes a console check against the current source's calculated decay. The monthly QA includes a temperature corrected source strength check. Both daily and monthly QA templates include standard safety checklist items based on NRC regulations.
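As an illustration of the decay calculation behind such a console check, the sketch below computes the expected activity of an Ir-192 source from its calibration activity and the published 73.83-day half-life. The 10 Ci starting activity is an illustrative number, and this is a generic decay formula, not Total QA's implementation.

```python
IR192_HALF_LIFE_DAYS = 73.83  # published half-life of Ir-192

def decayed_activity(a0, days_elapsed, half_life=IR192_HALF_LIFE_DAYS):
    """Expected activity after days_elapsed: A = A0 * 2**(-t / T_half)."""
    return a0 * 2.0 ** (-days_elapsed / half_life)

# An illustrative 10 Ci source measured one half-life after calibration
# should read about 5 Ci; a console value far from the calculated decay
# would flag a problem during daily QA.
expected = decayed_activity(10.0, IR192_HALF_LIFE_DAYS)
```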
What Should I Be Measuring?
In this article we discuss how to select key performance indicators for your system and why you want to be selective.
Richard Mallozzi, Ph.D.
The Phantom Laboratory, Inc.
Image Owl, Inc.
Copyright © August, 2015. The Phantom Laboratory, Inc.
A number of MRI applications, such as MRI-guided stereotactic radiation therapy planning, are much more sensitive to geometric distortion than most diagnostic radiology applications. It is widely advised that to reduce geometric distortion, 3D sequences rather than 2D multislice sequences should be used whenever possible. In both cases we are referring to full 3D volume acquisitions, but distinguishing between MRI sequences of the type ‘3D,’ which use a phase-encoding gradient along the slice direction, and those referred to as ‘2D,’ which use a slice-selective gradient and RF combination.
It is frequently not practical to use a 3D pulse sequence, however, and so it is worthwhile to understand the limitations of 2D pulse sequences and how they differ from 3D sequences. Understanding the difference sheds light on why 2D sequences should receive particularly high scrutiny in a quality control program aimed at controlling geometric distortion, as well as issues associated with correcting distorted images.
As has been described in earlier white papers [2,3], 3D and 2D sequences differ in an important way in how distortion correction is applied. For both 3D and 2D sequences, in-plane correction is always applied by default. In the slice direction, however, the correction is generally performed by default only for 3D sequences, and not for 2D sequences. To understand more deeply why this is the case, we delve into how the data from 2D and 3D sequences differ, and why that affects geometric distortion in the slice direction.
3D and 2D MRI pulse sequences
The distinction between three-dimensional and two-dimensional pulse sequences lies in how the information in the slice direction is encoded. In two-dimensional sequences, each slice is excited in isolation by a combined gradient and RF pulse. The data from that slice is acquired, and then a different slice is excited. During the time that one slice is being excited and acquired, the other slices are undergoing the T1 relaxation that re-establishes the longitudinal spin equilibrium.
In three-dimensional sequences, the entire imaging volume is excited with every data acquisition. The spins along the slice direction are spatially encoded with a phase-encoding gradient pulse, just as they are for one of the in-plane directions. Phase-encoding gradients apply a particular spatial frequency modulation to the spin population.
The major benefit to the 3D approach is increased signal-to-noise, as all the spins contribute to the signal during every acquisition. The tradeoff is that one must wait for a long time (characterized by the T1 of the tissue) for the spin population to relax, or else use small flip angles, both of which reduce the signal-to-noise. The quantitative details of the trade-off depend therefore upon the T1 of the tissue and the desired repetition time (TR).
Three-dimensional sequences are more often used when T1-weighting is desired, as the time between excitations (TR) is much shorter anyway. If one attempted to use a 3D sequence when T2 weighting was sought, the excitation flip angle would have to be so small, or the wait between excitations so large, that the scan would be extremely long and have very low signal-to-noise.
Geometric Distortion in 3D vs 2D sequences
In both 2D and 3D sequences, the shape of the slice before any correction is applied is a warped plane as in the figure below. The main source of the distortion is nonlinearity of the gradient field along the slice direction. Although both types of sequences would exhibit similar distortion without any correction, there is an important difference that renders 3D sequences easier to correct than 2D sequences.
In 3D sequences, the entire imaging volume is excited with a single RF and gradient pulse. The spatial information along the slice direction is determined by using a phase-encoding gradient, which is an inherently bandwidth-limited method. The only spatial frequencies that are acquired are those that were deliberately generated by a phase-encoding gradient pulse. The data, therefore, has the proper bandwidth limit needed so that a discrete sampling at the slice spacing interval can completely characterize the underlying continuous function. A simpler way to understand this is to say that because of the limited bandwidth of the acquired data, enough information exists to correctly interpolate between slices and describe what the data would have looked like if the slices had been slightly shifted by an amount less than the slice thickness. It is important to note that this does not mean a higher resolution can be attained, only that one can interpolate exactly to view what would happen if the slices had been placed at different locations.
This differs from the 2D situation, in which each slice is acquired separately. Inherent in the signal acquisition is an averaging along the slice direction of the signal from excited spins. The nature of this averaging is such that the acquired signal does not have the same clean bandwidth-limited property as in the 3D case. The result is that interpolation is not exact: if one attempted to reconstruct what a slice would have looked like had it been acquired at a shifted location relative to the original set, that calculation cannot be performed with complete accuracy as it can in the case of a 3D acquisition. The complete information to do an exact interpolation is not contained in the original set of slice data as it is in the case of a 3D pulse sequence.
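The band-limited property can be illustrated numerically: the Python sketch below reconstructs a band-limited signal at between-sample locations using Whittaker-Shannon (sinc) interpolation, the same reason slice-direction interpolation is exact for 3D acquisitions. The signal and sampling values are illustrative, and the small residual error comes only from truncating the infinite interpolation sum.

```python
import numpy as np

def sinc_interp(samples, t_samples, t_query):
    """Whittaker-Shannon reconstruction of a band-limited signal from
    uniformly spaced samples (np.sinc is sin(pi*x) / (pi*x))."""
    dt = t_samples[1] - t_samples[0]
    weights = np.sinc((t_query[:, None] - t_samples[None, :]) / dt)
    return weights @ samples

dt = 0.5                          # sampling interval -> Nyquist frequency 1.0
t = np.arange(-2000, 2000) * dt   # long record keeps truncation error small
f = 0.3                           # signal frequency below Nyquist: band-limited
samples = np.sin(2 * np.pi * f * t)

# Reconstruct at locations between the original samples and compare to truth.
t_query = np.array([0.13, 1.07, 2.71])
recon = sinc_interp(samples, t, t_query)
truth = np.sin(2 * np.pi * f * t_query)
err = np.max(np.abs(recon - truth))   # small; limited only by truncation
```

Repeating the experiment with a signal above the Nyquist frequency (or with the slice-averaged data of a 2D acquisition) would break this agreement, which is exactly the 2D limitation described above.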
Implications for Distortion Correction
Because 3D sequences can be exactly interpolated in the slice direction and 2D sequences cannot, MRI equipment manufacturers first implemented the full 3D correction on 3D sequences, even though many 2D sequences are acquired over a full volume with contiguous slices. More recently, full 3D correction of volume acquisitions using 2D sequences has also been made available on many MRI scanners. However, because the interpolation in the slice direction is not exact, there is greater reluctance to have such a correction performed by default. Because the vast majority of diagnostic radiology applications are not sensitive to MRI distortions up to a few mm, the rational choice for MRI manufacturers may be to leave the through-slice direction uncorrected in volume acquisitions performed with 2D sequences by default, and provide a setting for users to enable the correction if desired. Therefore those who are performing MRI studies sensitive to distortion, such as radiation therapy planning, need to be aware of the capabilities and the settings of their scanner regarding distortion correction for 2D pulse sequences.
There is a strong need for this through-plane distortion to be monitored as part of a quality control program for distortion-sensitive applications. Such a program should be designed both to assess the performance of the 3D correction for all sequences and to detect cases where the slice-direction correction may not be turned on for a 2D pulse sequence. This information is available from phantom data accompanied by the appropriate analysis.
Matt Whitaker Image Owl, Inc.
Copyright © August, 2015. Image Owl, Inc.
In this series of articles I want to give the practicing medical physicist a basis for understanding what problems Statistical Process Control (SPC) can address in their work, how to get started, and where to find further resources for a deeper understanding of this powerful group of techniques.
I intend to spend as much time on the ‘why’ of SPC as the mechanics, as I am convinced that the mechanics come quite easily once you understand the reasoning behind the tools. Having a firm grasp of the reasoning also allows you to make effective decisions when setting up an SPC system and avoid some of the traps.
With the guidelines in AAPM's TG-100 report due to be released soon, there will be an increased need to apply effective industrial QA tools to the practice of medical physics. SPC is one such family of tools, designed to measure process variation and detect statistically significant changes.
This first article examines why we care about process variability at all:
Why Does Variability Matter?
Traditional approaches to quality control define quality as adherence to a set of standards and tolerances. Everything within tolerances is good. If our measurement falls slightly out of tolerance, our process is bad and we must do something about it. We often measure retrospectively to determine fitness for use and treat each measurement point in isolation, without reference to past behavior. This view is basically a ‘compliance’ mindset. As long as the ball wobbles between the goalposts we are OK. If we miss, we need to make an adjustment (or, as is often the case, assign blame).
This binary view of quality is fundamentally flawed. It fails to take into account the costs of variation, the nature of random process variation and our desire to proactively control our process.
Traditional Approach to Variability Costs
When we put a process in place we select a set of characteristics that we are going to measure to determine whether our process is meeting its goals. These could be a set of dimensions in the case of a process producing a mechanical assembly or perhaps the x-ray output of a linear accelerator for radiation therapy.
We set nominal targets for these measurements because we have an ideal picture of what we want the process’s product to be. For the mechanical assembly the goal may be for the assembly to move smoothly without excessive slop or binding. For the linear accelerator we use the accelerator’s nominal output as a key input for our treatment planning process and we want the treatment to match the plan.
Traditionally we then set acceptable tolerances around the measurements. What are we doing when we set these tolerances? In a traditional mindset we are saying that this is the point where we have to deal with the risk or cost of deviation. Note that costs do not have to be monetary. In a medical physics setting these can include costs such as decreased probability of tumor control or increased risk to surrounding organs.
In the diagrams below (figures 1 & 2) we see how this model plays out with a probability density function reflecting the mean and standard deviation of an ongoing process (the red line) overlaid on a set of goalpost specifications (blue line). The cost curve reflects that we ignore variability costs within specifications.
Once the process produces a measurement that exceeds a specification we incur a cost (retake measurement, adjust equipment, etc.). The total expected cost is the product of the process's probability distribution and the cost curve. This is shown in the green area on the plot. As we can see, until the process aim or variability reaches a point where it is likely to produce measurements exceeding specifications, total costs are zero (i.e. we are denying they exist).
Behaviors that the traditional model reinforces:
- Since no cost is acknowledged until the process has a significant likelihood of exceeding a specification limit, our actions are likely to be purely reactive instead of proactive. After all, we see no cost until we go into the ditch!
- Once the variability of our process is sitting within the specifications our incentive for further improvement is greatly diminished as we receive no return on our efforts to center the process and reduce its variability.
- In a traditional mindset, quality improvement activities will almost certainly be viewed as a net cost to your organization.
A More Realistic Cost of Variability Model
Is a constant cost for all deviations from target within specifications a realistic model? For most processes we would expect performance to degrade in a monotonically increasing fashion as we deviate from the target. For example, as our linear accelerator's output deviates from its nominal value, the delivered dose will deviate increasingly from the planned dose, resulting in a small but definable deviation in tumor control and organ-sparing characteristics. In the diagrams below we see such a cost curve (blue) where costs increase as we deviate from our nominal measurement.
This is the classic insight made by Genichi Taguchi, a Japanese industrial statistician. Without going deeply into the math, examination of the interaction between the probability curve and the Taguchi loss function reveals that to minimize the total cost we need to minimize variability and center the process on the target.
In the diagrams below (figures 3 & 4) we have a probability density function reflecting the mean and standard deviation of an ongoing process (the red line) overlaid on a Taguchi type loss function (blue line). The cost curve reflects increasing losses (costs) as we deviate from the nominal measurement.
In the Taguchi model any deviation of the process from its nominal value or increase in variability will result in higher cost. The total expected cost is the product of the process’s probability distribution and the cost curve. This is shown in the green area on the plot. By acknowledging that costs of deviation are realistically a continuous rather than a step function we get a far more complete picture of the pernicious effects of process variance.
Behaviors that the Taguchi model reinforces:
- Since costs continuously rise as our process variability increases or our process drifts from the nominal desired state we are far more likely to take action before truly serious consequences arise.
- Since costs decrease continually as we center our process and reduce variability we have an incentive to engage in continuous quality improvement. While there may realistically come a point where costs of improvement outweigh gains we at least have a more realistic view of the tradeoffs involved.
- Improvements to quality performance are more likely to be viewed as a net benefit to the organization.
Understanding that any deviation from nominal target values in a process carries a cost is a key insight that will help you intuitively understand many of the mechanics of SPC.
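This tradeoff can be made concrete with a small calculation. The sketch below computes the expected quadratic (Taguchi-style) loss for a normally distributed process, for which the expectation has the closed form k * ((mu - target)^2 + sigma^2); the loss constant k and the process parameters are illustrative numbers, not values from any real QA program.

```python
def taguchi_expected_loss(k, mu, sigma, target):
    """Expected quadratic loss E[k * (X - T)**2] for a process with mean mu
    and standard deviation sigma: k * ((mu - target)**2 + sigma**2)."""
    return k * ((mu - target) ** 2 + sigma ** 2)

# Illustrative numbers (k = 1, target = 0): both centering the process and
# reducing its spread lower the expected cost, while a goalpost model could
# report zero acknowledged cost for all three processes if they sit in spec.
off_center = taguchi_expected_loss(1.0, 1.5, 1.0, 0.0)  # 3.25
centered = taguchi_expected_loss(1.0, 0.0, 1.0, 0.0)    # 1.00
tightened = taguchi_expected_loss(1.0, 0.0, 0.5, 0.0)   # 0.25
```

Note that the expected loss falls monotonically as the process is centered and tightened, which is the quantitative content of "on-target with minimum variance."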
As we go forward in our exploration of SPC tools it is important to keep this view of the effects of variation in mind. Doing so will help keep you from falling into the many mental traps of the compliance mindset.
The bottom line is that to achieve world-class quality we must have as our process goal:
On-Target with Minimum Variance
In the next installment we will examine how one chooses what to measure for an SPC program.
Statistical process control for radiotherapy quality assurance.
Todd Pawlicki, Matthew Whitaker and Arthur L. Boyer, Med. Phys. 32, 2777 (2005)
Understanding Statistical Process Control
D. J. Wheeler and D. S. Chambers, SPC Press, 1992.
Advanced Topics in Statistical Process Control
D. J. Wheeler, SPC Press, 1995.
Taguchi Techniques for Quality Engineering
Phillip J. Ross, McGraw-Hill, 1988
Variation and control of process behavior
Todd Pawlicki, Matthew Whitaker, International Journal of Radiation Oncology • Biology • Physics, Vol. 71, Issue 1, S210–S214, 2008
PS: We would love to keep you up to date on all of our news and developments. Consider following us on LinkedIn to get our latest news.
Continuing from his white paper on the sources of MR distortions, Dr. Richard Mallozzi explores distortions that can remain even after gradient corrections have been applied to MRI images. With MR image distortion measurement being critical for MR guided RT, this white paper provides timely guidance on the issue.
With the current developments and interest in using MR images for RT planning and guidance, this white paper by Dr. Richard Mallozzi provides a timely and valuable overview of the causes of MR image distortion. Image Owl provides an accurate and convenient way of analyzing your MR image distortion with our MR distortion analysis service:
Your RT QA data is very important to your organization. QA data represents an enormous commitment of physicist and therapist time to measure and analyze. You want to ensure that sensitive machine performance data is not shared outside your organization. Furthermore, you need to retain and produce that data to demonstrate compliance to accreditation bodies and regulatory authorities.
To this end we expect our data to be:
- Available when and where we want to see and use it. It is not acceptable to be told that the network is down for expected reasons such as maintenance or unexpected reasons such as power outages. Users want access to their data on a variety of devices, not just on a dedicated workstation.
- Secure from unintended use or abuse. We don't want to worry that we have missed some crucial security loophole. Most of us are not world-class security experts and don't aspire to become one. We have enough on our plates.
- Durable and not subject to becoming lost or corrupted over time. Not only might we want to retrieve historical data for further analysis but increasingly we need to maintain such data in a highly durable fashion for regulatory and accreditation requirements.
Our data faces constant external and internal threats that can compromise its availability, security and durability.
These threats can be categorized broadly as:
- Physical threats: These range from power outages, through unauthorized physical access to facilities, to catastrophic natural events.
- Network threats: These are perhaps the most ubiquitous threats in our interconnected world. They range from blunt denial-of-service attacks to more sophisticated intrusions into data systems.
- Organizational threats: Vulnerabilities caused by poor organizational practices or sometimes simply a lack of depth and expertise in an organization.
Image Owl has years of experience in providing and managing secure online analysis services and databases. We have chosen to use Amazon Web Services (AWS) to host and manage our cloud infrastructure. This allows us to focus on providing the user with unparalleled QA data management and analysis solutions while maintaining the highest levels of availability, security and durability.
A resilient QA system from Image Owl based on a modern cloud platform such as Amazon Web Services can address these threats through:
1. World-Class Physical Security
The first questions to ask yourself are whether your data is all contained in one physical location and whether that location is truly secure. Even if you have backups in remote locations, how accessible are they, and how easily could you restore your system to its previous state?
Some threats to consider:
- Fire: Consider not only fires in the facility itself but also mandatory evacuations due to events such as forest fires, train derailments, and industrial chemical fires and explosions.
- Flooding: Over the years a number of therapy and diagnostic facilities have experienced catastrophic flooding due to hurricanes or river flooding. The tendency of RT facilities to be in basements does not help here!
- Power outages leading to loss of climate control: Extended power outages with no environmental controls can play havoc with data storage systems.
- Physical intrusion: Intruders may not care what they damage, destroy, or remove, or they may be well aware and mounting a targeted attack. How easily can an unauthorized person access your data storage and networks?
- Decommissioning Risks: An often underappreciated threat is data leaks during decommissioning of devices where data is not completely removed and destroyed.
AWS employs a dispersed strategy to mitigate physical threats. Its data centers are in physically diverse locations within each of its service regions. AWS ensures that the centers are not subject to common natural disaster threats and are not dependent on common services such as power utilities and water supply. Users of AWS can further diversify their exposure by electing to use multiple regions for their services around the world for both performance and security reasons.
Access to these facilities is strictly controlled, even within the Amazon organization, and they are monitored 24/7 by both human and automated security systems. The facilities have state-of-the-art fire detection, prevention, and suppression capabilities and can generate power independently for extended periods of time.
Most importantly, your data is always held in a minimum of three dispersed locations. If any copy goes offline or is destroyed, new copies are automatically and seamlessly generated. Should an instance of your server or other AWS services go down or become excessively slow, new instances are initiated. To the end user this switchover is completely transparent.
2. Superior Network Security
Network threats abound no matter whether your infrastructure resides in a cloud service or in your own server facility. These are just some of the more obvious threats to networked systems.
- Distributed denial-of-service (DDoS) attacks seek to overwhelm a service with torrents of requests. Recognized denial-of-service attacks reached levels of 28 per hour in 2014 (Preimesberger, Chris (May 28, 2014). "DDoS Attack Volume Escalates as New Methods Emerge". eWeek).
- Man-in-the-middle attacks occur when communications between two trusted parties are intercepted and manipulated, often to gain sensitive information and access privileges. The malware that enables such attacks often arrives in the guise of spam email.
- Packet sniffers (sometimes referred to as protocol analyzers) can intercept and decode individual data packets on a network. Originally developed to analyze and troubleshoot network problems, they have also been used by law enforcement and national security agencies to eavesdrop on network traffic. They can be used to reconstruct exchanges of data between parties, especially when the traffic is unencrypted.
- Access points to networks represent potential vulnerabilities, in both hardware and software.
- IP spoofing happens when a server presents itself with a trusted IP address that does not belong to it in order to gain access to other systems or sensitive information.
- Port Scanning refers to the practice of probing a server’s port addresses to find unsecured open ports to exploit.
This is merely the tip of the iceberg and network threats are continually evolving.
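For the technically inclined: transport-layer encryption is the standard defense against the sniffing and interception threats listed above. Here is a minimal Python sketch of a client-side TLS context that refuses unverified connections. It is purely illustrative and does not describe Image Owl's or AWS's actual configuration.

```python
import ssl

# Build a client-side TLS context for "encryption in transit".
# create_default_context() already enables certificate and hostname
# verification; we assert the settings rather than trust them blindly.
ctx = ssl.create_default_context()

assert ctx.verify_mode == ssl.CERT_REQUIRED  # reject unverified peers
assert ctx.check_hostname is True            # certificate must match host

# Refuse legacy protocol versions vulnerable to downgrade attacks
# (available on Python 3.7+).
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

A context configured this way can then be passed to any stdlib networking call that accepts an `ssl_context` argument, so every connection it wraps is both encrypted and authenticated.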
The AWS service originated in Amazon's need to quickly scale its operations to meet seasonal demand in the holiday season. Amazon is the world's largest online retailer, and its success is absolutely dependent on its systems' reliability and security. This operational experience translates into an almost unparalleled depth of knowledge and skill in maintaining uptime and security. By using AWS we tap into those capabilities for a nominal cost.
- AWS services employ state-of-the-art practices when designing secure networks. Special attention is paid to external network access points to ensure that hardware and software vulnerabilities cannot be exploited and that redundant backups exist should they come under attack.
- A key component of network security is encryption of the data both at rest and at all times in transit. Image Owl makes full use of the encryption services offered by AWS to ensure data security.
- A corporate firewall exists between Amazon's retail and corporate operations and its AWS services. Although Amazon is a large company, access to AWS infrastructure is strictly controlled, with multiple layers of authentication required.
- Amazon's systems are designed to be fault tolerant: sections of the service can go down with minimal impact on the customer experience.
- The distributed and independent nature of AWS resources mitigates the effects of denial-of-service attacks. As one of the cloud's most prominent entities, Amazon is experienced at defending its networks from attack and has developed numerous proprietary technologies to combat these threats.
- Again, encryption at all points along the data transmission chain ensures that eavesdropping and packet sniffing will be fruitless enterprises.
- AWS systems operate on the principle that all access is denied unless explicit permission is given. Access permissions can be managed at a very granular level, limiting users to very specific actions.
- Crosstalk between server instances is strictly controlled, even when they are owned by the same user.
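For readers curious what "deny unless explicitly permitted" looks like in practice, here is a toy Python sketch of that access model. The policy format, user names, and function are our own invention for illustration only; this is not the AWS IAM API.

```python
# Explicit grants: anything not listed here is denied.
# Users, resources, and actions are hypothetical examples.
POLICY = {
    "alice": {"qa-results": {"read", "write"}},
    "bob":   {"qa-results": {"read"}},
}

def is_allowed(user, resource, action):
    """Grant access only when an explicit permission exists."""
    grants = POLICY.get(user, {})                 # unknown user -> no grants
    return action in grants.get(resource, set())  # unknown resource -> deny

assert is_allowed("alice", "qa-results", "write")
assert not is_allowed("bob", "qa-results", "write")     # never granted
assert not is_allowed("mallory", "qa-results", "read")  # unknown user
```

The key design point is that the default return value is a denial; granting access requires an explicit entry, so a forgotten or mistyped rule fails closed rather than open.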
Ask yourself whether your organization's IT department has the same resources, depth of experience, and track record as Amazon. They may be very good, but are they absolutely world-class?
3. Strong Organizational Security
Organizational threats can undermine your system even if you have excellent physical and network security. Organizational security is largely a matter of organizational discipline and sound practices.
- A small IT organization can be severely hampered by the departure of one or two key people with deep institutional and network architecture knowledge.
- Data breaches have often occurred through the accidental or deliberate removal of data by employees.
- Unsafe or out-of-date systems (often the result of poor hardware and software maintenance practices) expose significant vulnerabilities. It is surprising, for example, how many Windows XP desktops are still in use in the medical field despite the operating system no longer being supported by Microsoft.
- Simple data-access practices such as password strength requirements, password expiration, and the adoption of multi-factor authentication help prevent some of the most common causes of data breaches.
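As a simple illustration of the password-strength checks mentioned above, here is a minimal Python sketch. The rules and length threshold are arbitrary examples of our own choosing; a real deployment should also screen against breached-password lists and pair passwords with multi-factor authentication.

```python
import re

def password_ok(pw, min_length=12):
    """Accept a password only if it meets some basic complexity rules.

    These particular rules are illustrative, not a recommendation.
    """
    return (len(pw) >= min_length
            and re.search(r"[a-z]", pw) is not None   # a lowercase letter
            and re.search(r"[A-Z]", pw) is not None   # an uppercase letter
            and re.search(r"\d", pw) is not None)     # a digit

assert not password_ok("linac123")       # too short, no uppercase
assert password_ok("Linac4-Qa-2015!!")   # long and mixed
```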
AWS has a number of state-of-the-art access control technologies that it employs internally and makes available to its customers; Image Owl makes use of all of these technologies. Access to AWS services is very strictly controlled, with extensive background checks and monitoring of employees. Credentials are limited and expire automatically when no longer needed.
Think about the maintenance and testing budget your organization's IT department has. Does it maintain a continuous program to thoroughly test software and hardware upgrades?
AWS ensures that hardware and software are fully tested and up to date before becoming operational. AWS's maintenance budget and scale allow for the continual testing and upgrading of equipment to the latest standards.
AWS also provides logging services that leave a very detailed audit trail should an incident occur.
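To illustrate the idea behind a tamper-evident audit trail, here is a toy Python sketch in which each log entry's hash covers the previous entry's hash, so altering any record invalidates everything after it. This is purely conceptual; it is not the AWS logging service, and the record format is our own invention.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append(log, event):
    """Append an event, chaining its hash to the previous entry."""
    prev = log[-1]["hash"] if log else GENESIS
    record = {"event": event, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)

def verify(log):
    """Recompute every hash; any edited record breaks the chain."""
    prev = GENESIS
    for rec in log:
        body = {"event": rec["event"], "prev": rec["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = []
append(log, "alice viewed daily-output report")
append(log, "bob edited tolerance table")
assert verify(log)
log[0]["event"] = "nothing happened"  # tamper with history...
assert not verify(log)                # ...and verification fails
```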
4. HIPAA/HITECH Compliance
The cloud infrastructure provided by AWS is certified to be HIPAA and HITECH compliant.
HIPAA requires that Protected Health Information (PHI) be maintained securely. AWS's encryption throughout the process ensures that PHI cannot be read or exposed. Access policies limit access to trusted individuals. Your data is encrypted, and AWS cannot decrypt it.
HIPAA also requires that disaster recovery plans be in place, and AWS's physical and network security tools ensure that these are up to date and effective. Image Owl makes full use of these tools.
Even though Image Owl's service does not handle patient-specific information, our policy is to comply with HIPAA in case the system should ever come into contact with patient information.
Securing your data is not a simple task; it requires extensive resources and dedication. Although your organization may have many of these elements to one degree or another, ask yourself whether data security is really your institution's core competency, or whether your organization's true calling lies in another area, such as delivering outstanding patient treatment and care.
Where do you really want to place the focus of your organization’s innovation and resources?
In our view, maintaining your QA data in the cloud with Image Owl products is not only convenient but also the responsible choice from a security and reliability point of view. It allows you to concentrate on delivering world-class treatment and care to your patients without distraction.
Image Owl is pleased to offer a complimentary benchmarking service for users of Catphan® phantoms. This convenient service produces a complete image quality report from any of the currently manufactured Catphan® models.
Whether you are commissioning new imaging equipment or improving your QA process on existing equipment, this service provides accurate and insightful results that will get your program off on the right foot.
The service could not be simpler to initiate: simply go to our registration page and register your Catphan® phantom. If you qualify for this offer, you will receive credentials for the service and instructions for uploading your phantom images to the Catphan® QA service. There is no software to install, and the process is completely automated. Your report will be ready to view shortly after uploading.
Image Owl’s engineers and scientists have decades of experience on the cutting edge of CT imaging to bring you the highest level of accuracy. Every algorithm and measurement in Catphan® QA has been extensively reviewed and validated to ensure that best in class practices are employed throughout the analysis process. The analysis algorithms are clearly explained in the supporting documentation so users can be confident that they understand the science behind the measurements.
We hope that you enjoy this service and that you consider making us a permanent part of your QA toolkit.
PS: We would love to keep you up to date on all of our developments. Consider following us on LinkedIn to get our latest news.
Thank you to everyone who took the time to drop by the Image Owl booth at AAPM in Anaheim this week. We really appreciate your feedback. We learned some interesting things from you this week:
1. The rise of MR planning and guidance in RT has certainly increased interest in MR image QA solutions, and in MR distortion in particular. We showed our automated MR distortion analysis service for the Magphan® Quantitative Imaging Phantom, made by our partners at The Phantom Laboratory.
2. We were very pleased at the response to our analysis service for the Phantom Lab's Tomophan Digital Breast Tomosynthesis phantom. This is the first phantom to completely address the QA needs of this rapidly growing imaging modality. We heard from many of you that our solution fills an unmet need for comprehensive QA in this field.
3. With the number of RT sites seeking accreditation on the rise, many of you are seeking ways to manage your QA information without compromising your ability to choose the QA hardware and software most appropriate to your facility. Interest in the QA Pilot product from our friends at Standard Imaging was very strong. This complete and convenient service allows you to customize your QA program to your needs using equipment you choose.
Learning from our customers is a key part of attending the AAPM meeting as an exhibitor.
Thank you from all of us at Image Owl.