COMP Report: CPQR Technical Quality Control Guidelines for Data Management Systems

Abstract
The Canadian Organization of Medical Physicists (COMP), in close partnership with the Canadian Partnership for Quality Radiotherapy (CPQR), has developed a series of Technical Quality Control (TQC) guidelines for radiation treatment equipment. These guidelines outline the performance objectives that equipment should meet in order to ensure an acceptable level of radiation treatment quality. The TQC guidelines have been rigorously reviewed and field tested in a variety of Canadian radiation treatment facilities. The development process enables rapid review and update to keep the guidelines current with changes in technology (the most up-to-date version of this guideline can be found on the CPQR website). This particular TQC details recommended quality control testing of radiation data management systems.

The fundamental definition of a data management system (DMS) has not changed since the publication of the Canadian Association of Provincial Cancer Agencies (CAPCA) quality control document for data management systems in 2008: a DMS is the information infrastructure directly related to the planning, delivery, quality assurance, and archival of patient treatments. In its simplest incarnation, a DMS can be a single computer. However, in the typical radiation treatment clinic, a DMS comprises many separate entities or systems that manage, store, and exchange information of many types and formats via various methods and protocols. The complexity of computer systems in radiation oncology clinics has increased tremendously over the past several years. In part, this increase is due to the evolution of radiation treatment technology: the increasing complexity of treatment delivery systems, which themselves contain multiple computerized systems; the ongoing evolution of onboard imaging systems; the increased variety and quantity of diagnostic and simulation imaging studies involved in the planning process; and the ever-increasing evolution and scope of record/verify electronic medical record systems. The ongoing transition of many clinics toward a paperless or "paperlight" environment is further increasing the quantity and variety of data stored and managed by computerized systems in the clinic. And of course, beyond the walls of our clinics, the world of information technology and data management is expanding at a relentless pace, so the hospital infrastructure and systems that often form the backbone of our radiation clinics' data management systems are also evolving rapidly.
A comprehensive quality assurance program for a DMS should consider all of the separate components in the DMS, the exchange of data between components, and the procedures governing that exchange. Accordingly, the program could have three general categories: 1. Quality assurance of computerized systems: performance and functionality of each individual component in the DMS, data integrity within each component; 2. Quality assurance of data exchange: data exchange between components in the DMS (multiple formats, multiple protocols, via interface, or manual data transfer); and 3. Quality assurance of procedures (including data entry and data interpretation).
Key features of a quality assurance program should include: assembling a multidisciplinary team with regular meetings and clearly established roles and responsibilities; project management of scheduled upgrades and systematic tracking and evaluation of hardware and software failures and issues, and subsequent root-cause analysis.
Each radiation treatment clinic's DMS is unique, making it impossible to prescribe a universal, one-size-fits-all quality assurance program. Instead, this guidance document offers a step-by-step approach to aid the medical physicist in designing a tailored, context-specific quality assurance program for each unique DMS (see Appendix 1). The lists of test categories included in Tables 1 and 2 and the specific tests detailed in Section 5 are meant to be comprehensive but not prescriptive, serving as a recipe box from which the qualified medical physicist can select the appropriate tests for their unique DMS. Furthermore, testing frequencies must be established based on in-depth knowledge of the relevant clinical processes; the suggestions made in this document serve as a reasonable baseline that should be modified to suit a given DMS. Some of the tests chosen for the DMS quality assurance program will likely be the responsibility of IT personnel. Others will be the responsibility of the medical physicist. It is probable that some of the tests will require collaboration and input from the appropriate vendor. The approach described here is adapted in part from that suggested by the IPEM. 3
In many clinics, IT personnel will be responsible for physical networks and hardware, as well as existing software and hospital information management systems. Consequently, it is entirely possible that the medical physicist responsible for a radiation treatment clinic's DMS will not have the necessary resources, knowledge, and/or access to design and implement the DMS quality assurance program on their own. In a comprehensive report, 3 the IPEM highly recommends that management of the radiation treatment DMS be maintained by medical physicists, with the support of dedicated IT specialists. Another model requires responsible medical physicists with solid IT skills to act as application owners for all "medical device tier" systems: those clinical systems directly affecting patient care. The maintenance of physical and virtual networks and computers and associated software falls under the responsibility of IT personnel. For safe and effective delivery of radiotherapy, a high degree of collaboration and a certain degree of knowledge overlap is needed between the responsible medical physicist and IT personnel. This requires the assigned IT support staff to be on site. 4 For further guidance regarding appropriate training for radiotherapy IT professionals, see Siochi et al., 2009, Information Technology Resource Management in Radiation Oncology. 4 It is recognized that existing organizational structures vary widely between individual radiotherapy clinics. While in some cases the IT specialists are members of the radiotherapy department, this is certainly not always the case. At a minimum, consultation and ongoing collaboration with the responsible IT specialists is recommended. A multidisciplinary team should be established to share responsibility for the DMS quality assurance program. The roles and responsibilities of all team members must be clearly defined to ensure accountability. 5 This collaboration will ensure that responsible IT personnel are aware of the details, complexities, and critical nature of the systems used for radiotherapy, as well as ensure appropriate resources for management, testing, maintenance, security, troubleshooting, training, and support. 3

3 | TESTING TOOLS

3.A | Software
While many of the tests suggested here can be executed manually using test data, certain categories of tests are better suited to automated testing. Specialized testing tools exist that have the capability to manage, script, and automate soak or endurance tests, network performance tests, and/or data integrity checks. Examples include, but are not limited to, Microsoft Visual Studio Testing Tools and Services, AgileLoad, HeavyLoad, SolarWinds, and Inflectra SpiraTest.
Some tools are designed for specific types of testing, whereas other more comprehensive software suites offer tools to script and execute tests based on a schedule and include project management components to store and track the results of each test.
Manual end-to-end testing with a large number of test scenarios can be resource-intensive. Automated testing tools are available that mimic a user's interaction with a workstation by sending data as if a user were interacting with the environment. A set of test data can be constructed and should be chosen to include the range of clinically relevant scenarios. Network communication from a sending computer to a receiving computer can be replicated. Some testing software will record a user interaction and store it as a test script, which can then be modified and/or replicated. Certain types of tests cannot be fully automated and require user interaction. For these manual tests, the software can prescribe the specific steps a tester is required to follow (including test data and order), state the expected behavior at each step, and allow the tester to document the results.
The integration of such testing tools in a clinic's DMS requires collaboration with IT personnel and the appropriate vendors. Also note that the timing of automated tests should include sufficient delays between executions to avoid placing an artificial load on the system (which could artificially produce errors).
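As an illustration of the scheduling consideration above, the following minimal Python sketch runs a repeated (soak) test against a placeholder test action, inserting a fixed delay between executions so the test harness itself does not place an artificial load on the system. The function `send_test_plan`, the iteration count, and the delay are hypothetical values chosen for illustration, not recommendations.

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def send_test_plan() -> bool:
    """Placeholder for a scripted test action, e.g., exporting a test
    plan across a data link and verifying receipt. A real harness would
    call the testing tool's API or a custom script here."""
    return True

ITERATIONS = 100      # length of the soak test
DELAY_SECONDS = 60    # pause between executions to avoid placing an
                      # artificial load on the system under test

failures = 0
for i in range(ITERATIONS):
    if not send_test_plan():
        failures += 1
        logging.warning("Iteration %d failed", i)
    time.sleep(DELAY_SECONDS)

logging.info("Soak test complete: %d/%d iterations failed", failures, ITERATIONS)
```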

3.B | Checksums and data validation
A checksum is a type of redundancy check that can be used to evaluate data integrity following transmission across a network (or data link), or following any other manipulation that could introduce error.
An algorithm is used to calculate binary or other values representing the data packet; these values can be compared at the beginning and end of each test point. If the output values are not identical, then the data being tested have changed in some way. Checksum algorithms can be executed against various data, including very large data items that are otherwise difficult or time-consuming to compare. Checksum algorithms usually employ a cryptographic hash (or similar) function which, given a specific data value, will always produce exactly the same result.
Further, there are other approaches that operate similarly to checksums, for example, cyclic redundancy checks (CRCs). Each has its benefits and weaknesses, and the choice of tools must be evaluated based on reliability, cost, and criticality of the data/component/link being tested. These data validation methods are not usually necessary on a routine basis, but are useful when verifying the integrity of user-specific data, such as treatment planning beam models, during upgrades, software revisions, and maintenance releases.
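As a minimal illustration of the checksum approach described above, the following Python sketch uses the standard library's SHA-256 implementation to compare a file before and after a transfer or upgrade. The file names are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks so that
    very large data items (e.g., image sets) can be handled."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical example: the same beam model exported before and after
# an upgrade, or a file captured at each end of a data link.
before = sha256_of(Path("beam_model_pre_upgrade.dat"))
after = sha256_of(Path("beam_model_post_upgrade.dat"))

if before == after:
    print("PASS: data are bit-for-bit identical")
else:
    print("FAIL: data have changed in some way")
```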

3.C | Virtualizations
Generally, virtualization refers to the creation of a virtual version of a device, such as a server, network storage device, operating system, application, or service. Virtualization should only be performed with the support of the appropriate vendors as not all DMS components lend themselves to virtualization. In some cases, DMS performance can be negatively impacted.
When testing virtualized environments, such as but not limited to Citrix virtualization, the unique architecture of these services needs to be taken into account. These environments reduce the need to manage many applications and/or desktop environments across multiple workstations and operating systems, and thus allow workstations that span multiple network security domains (or other complex configuration differences) to access the applications, or possibly entire virtual desktops, through a single managed network port and protocol. Application virtualization relies on a locally installed receiver application that communicates from the remote workstation to the application server, where the application is running. Modern receiver applications are generally add-ons to the workstation's web browser.
Virtualized environments require special testing considerations.
The interface presented to users is virtualized: the user is presented with an image of the user interface of an application or desktop that is running on a remote server. This can introduce latency and lead to possible data input errors from the user, particularly in graphically intensive tasks such as contouring, registration, and dose display and manipulation. Virtual environments add complexity to automated tests that are meant to replicate user input, as most of these tests are configured to input data and submit as if from a user's workstation. Tests can be operated from within a virtual desktop through the virtualization services, but this does not replicate data passing through the "screen scraper" running on a local workstation.
Since remote applications are running from a server (virtualized or not), soak or endurance testing (see Designator L5 in Table 1) against the virtualization service is important: if this service falters or fails, then any application in the DMS provided through the virtualization service may be impeded or unavailable.
When testing for failure (see Designator L4 in Table 1), the effect of a session timeout on an application when a user attempts to reconnect should be investigated. What is the state of the session and its data after the connection times out, and is it recoverable? For the majority of applications on a modern virtualization environment, sessions should be recoverable for reconnection within some period of time, ideally harmonized with security and automatic disconnection settings (e.g., screen saver settings, automatic log-off after inactivity).
Modern DMS infrastructures now virtualize servers, as they offer lower cost of ownership, reduced footprints (hence reduced power and cooling requirements), more efficient data migration and backup/restores, and easier customization of hardware. They do, however, require additional layers of maintenance (hypervisors) and management, which can affect overall DMS performance. Because they are often hosted through shared storage (SAN) environments, not necessarily within the confines of the radiation oncology treatment center or location, they are susceptible to data loss due to catastrophic events, large-scale system degradation, and outages.
While the management of virtualized servers falls within the scope of IT professionals, given the risks to the integrity of the RO-IT infrastructure, changes to virtualized server settings should be communicated to the DMS multidisciplinary team.

4 | RELATED TECHNICAL QUALITY CONTROL GUIDELINES
In order to comprehensively assess data management system performance, additional guideline tests for integrated systems, as outlined in related CPQR Technical Quality Control (TQC) guidelines, must also be completed and documented, as applicable. Related TQC guidelines, 6 available at cpqr.ca, include:
• CT Simulators
• Treatment Planning Systems
• Medical Linear Accelerators and Multileaf Collimators.

5.A | Notes on tests for DMS links
Tests in this section are applicable to each data link (joining two computerized systems within the DMS). In addition to the tests suggested here, vendor recommendations for commissioning, acceptance testing, and regular quality assurance should be followed.

L1 Data transfer integrity of general/demographics and treatment parameter data
Test: For an appropriate range of clinical data, compare data sent vs. data received. Manual or automated tests can be performed using checksums, or custom scripts driven by a bank of test data.
Tolerances: Different systems may have different levels of accuracy and may also use differing naming conventions. This can lead to errors, for example, due to rounding or truncation of data. Tolerances need to be established (whether zero or nonzero) wherever data transfer occurs. To facilitate data transfer integrity tests, it is very helpful to construct data transfer tables for each data link. A data transfer table should include a full list of all parameters transferred and the tolerances associated with the transfer. It is important to also be aware of any differences between the internal format or convention of the raw data and that displayed to the user. Data dictionaries can be valuable resources in the construction of these tables. Note that the selection of an appropriate range of clinical data is a nontrivial task requiring careful consideration of all clinically relevant treatment scenarios. A library of test cases can be constructed in the treatment planning system for use as needed and should be updated to reflect new and emerging treatment scenarios.
Suggested frequency: At commissioning, and following any change to the DMS components connected by the data link that could affect clinical data (including data formats, storage, display, tolerances, transfer protocols, etc.). A data transfer integrity test may be appropriate as part of routine patient quality assurance for certain clinical protocols, though it is likely this test will be of more limited scope. An example could be to compare critical treatment data given by the treatment console against a screen capture of approved plan data.
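The following minimal Python sketch illustrates how a data transfer table with per-parameter tolerances might drive an automated sent-vs.-received comparison. The parameter names, tolerance values, and test data are illustrative assumptions only; a clinic's actual table would be built from its own data dictionaries and data links.

```python
# A data transfer table mapping each transferred parameter to its
# allowed absolute difference (0 = must match exactly).
TRANSFER_TABLE = {
    "patient_id": 0,
    "gantry_angle_deg": 0.1,   # nonzero, e.g., to allow for rounding
    "monitor_units": 0.5,
}

def compare(sent: dict, received: dict) -> list[str]:
    """Return a list of discrepancies that exceed tolerance."""
    errors = []
    for param, tol in TRANSFER_TABLE.items():
        s, r = sent[param], received[param]
        if tol == 0:
            if s != r:
                errors.append(f"{param}: sent {s!r}, received {r!r}")
        elif abs(float(s) - float(r)) > tol:
            errors.append(f"{param}: |{s} - {r}| exceeds tolerance {tol}")
    return errors

# Hypothetical test case drawn from a bank of test data.
sent = {"patient_id": "TEST-001", "gantry_angle_deg": 181.05, "monitor_units": 200.0}
received = {"patient_id": "TEST-001", "gantry_angle_deg": 181.1, "monitor_units": 200.0}
print(compare(sent, received) or "PASS")
```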

L2 Data transfer integrity of images and imaging data
A Geometric integrity (scale, accuracy)
Test: Use a geometric test phantom with known dimensions and compare data before and after transfer, including appropriate scaling and/or processing.

B Coordinate frame and patient orientation
Test: Use a test phantom whose orientation is clearly identifiable. For all relevant orientations and positions, confirm that images are correctly transferred and interpreted.
Suggested frequency: At commissioning, and following any change to the DMS components connected by the data link that could affect imaging data (e.g., upgrade of CBCT software). This test is often part of existing quality assurance of imaging systems.

C Image quality and file integrity
Test: Image quality: Using an appropriate phantom, evaluate image contrast, noise, and image intensity (e.g., HU value).
Identify data degradation or distortion (e.g., due to compression). Compare values before and after image transfer. Compare against baseline or tolerance values as appropriate.
Test: File integrity: Using checksums or other tools, evaluate the integrity of the imaging files before and after transfer.
Note that this test is required in addition to the above tests as it is possible for errors in integrity to be introduced that will not be visually apparent or detectable within the software used for image analysis.
Suggested frequency: At commissioning, and following any change to the DMS components connected by the data link that could affect imaging data (e.g., upgrade of CBCT software). This test is often part of existing quality assurance of imaging systems.
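As one possible implementation of the image-intensity comparison above, the following Python sketch reads the same phantom slice as exported from the sending and receiving systems and compares Hounsfield unit (HU) statistics. It assumes the pydicom and numpy packages are available; the file names are hypothetical placeholders.

```python
import numpy as np
import pydicom

def hu_array(path: str) -> np.ndarray:
    """Read a CT slice and convert stored pixel values to Hounsfield units."""
    ds = pydicom.dcmread(path)
    return ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

pre = hu_array("phantom_slice_sent.dcm")       # captured before transfer
post = hu_array("phantom_slice_received.dcm")  # captured after transfer

# A nonzero maximum difference flags degradation (e.g., lossy
# compression) that may not be visually apparent.
max_diff = np.max(np.abs(pre - post))
print(f"Max HU difference:       {max_diff:.2f}")
print(f"Mean / noise (sent):     {pre.mean():.1f} / {pre.std():.1f} HU")
print(f"Mean / noise (received): {post.mean():.1f} / {post.std():.1f} HU")
```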

L3 Data transfer integrity of electronic documents
Test: Verify that the transfer of electronic documents occurs as expected and that data format and integrity are maintained.
Tests should include all relevant document formats. Checksums or other appropriate tools should be used in addition to visual inspection, as errors can be introduced that will not inhibit document processing software from opening and manipulating the file.

ACKNOWLEDGMENTS
We would like to thank the many people who participated in the production of this guideline. The production of this manuscript has been made possible through a financial contribution from Health Canada, through the Canadian Partnership Against Cancer.

CONFLICT OF INTEREST
The authors have no conflicts of interest.

APPENDIX 1 METHODOLOGY FOR BUILDING A DMS QUALITY ASSURANCE PROGRAM
This appendix provides additional information on how to develop a robust quality assurance program for a DMS.

STEP 1: IDENTIFY THE COMPUTERIZED SYSTEMS IN YOUR DMS
A DMS is usually composed of multiple computerized systems. The components of a DMS are specific to each center and may include one or more computerized systems from the following categories:
• Treatment delivery systems, onboard imaging systems and associated control computers, and other critical computer systems that are directly involved in delivering, monitoring, or controlling the delivery of radiation.
• Imaging systems such as CT, PET/CT, or magnetic resonance simulators and other diagnostic imaging equipment.
• Ancillary radiation oncology software within the DMS (e.g., independent monitor unit calculation software, quality assurance tools, patient safety and event tracking systems, etc.).

STEP 3: CATEGORIZATION OF EACH DATA TRANSFER LINK
The most comprehensive approach to designing the quality assurance program for a DMS would include testing each of the components and data links identified in the prior two steps. In the context of limited resources, however, the responsible medical physicist may be forced to design a program of more limited scope. To ensure the highest possible effectiveness and efficiency of the quality assurance program, consider first performing a risk analysis. As suggested by the IPEM's Report 93, 3 one possible approach is to categorize each data link by two simple criteria:
1) How important is the parameter to the treatment process?
a. Critical importance: The parameter must be transferred accurately and without delay; an error or delay directly impacts the safety of the delivered treatment.
b. Moderate importance: The parameter should be transferred, but a minor delay or error does not directly impact the safe delivery of the treatment, or a work-around is available that limits or eliminates the impact on the delivered treatment.
c. Low importance: The parameter is not necessary for the safe delivery of the treatment, or a delay or error in the transfer of this parameter has no effect on the safe delivery of treatment.
Note that each center should independently assess the criticality of each parameter for the safe delivery of patient care as this is highly dependent on the configuration of a given DMS. Also note that the categorization of certain parameters may be different in an emergency vs. nonemergency treatment scenario.
2) Consider the probability or risk of failure of each data link and assign a level of "High," "Medium," or "Low" risk. Factors that could lead to a higher risk of failure include: manual entry of data, or wherever human error can be introduced; incomplete data transfers or exchanges (where some correction or manual entry is required); exchange based on proprietary methods that may be less transparent to the user; exchange with systems outside of the clinic, where many more variables may be unknown; exchange over custom interfaces (vs. "off-the-shelf," rigorously tested interfaces, though these also can lead to the introduction of errors); and corrections or changes to original treatment data (requiring manual correction or re-import of partial treatment data). The availability of support and the known stability of the data link or systems involved could also be considered.
The extent of data redundancy and network rerouting capabilities in the event of catastrophic failures may also be factored into the risk analysis for more complex architectures.
A data link table can be constructed. For each data link, the sender, receiver, data type, and method of transfer can be included, as well as the assigned level of importance and level of risk. The table can then be sorted based on the combined importance and risk "score" of each data link. An example is included in Table A1.
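The following minimal Python sketch illustrates one way to encode and sort such a data link table by a combined importance/risk score. The example links and the multiplicative scoring scheme are illustrative assumptions, not prescribed values.

```python
# Numeric weights for the two categorization criteria (assumed scheme).
IMPORTANCE = {"critical": 3, "moderate": 2, "low": 1}
RISK = {"high": 3, "medium": 2, "low": 1}

# Hypothetical data links: (sender, receiver, data type, transfer
# method, importance, risk).
data_links = [
    ("TPS", "R&V", "treatment plan", "DICOM-RT", "critical", "medium"),
    ("CT sim", "TPS", "CT images", "DICOM", "critical", "low"),
    ("R&V", "billing", "workload data", "custom interface", "low", "high"),
]

def score(link) -> int:
    """Combined importance/risk score used to prioritize testing."""
    *_, importance, risk = link
    return IMPORTANCE[importance] * RISK[risk]

# Highest-priority links (largest combined score) are listed first.
for link in sorted(data_links, key=score, reverse=True):
    print(f"score={score(link)}: {link}")
```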
Other risk analysis methods, such as failure mode effects analysis (FMEA) could also be utilized. Regardless of the method, the goal is to establish clear priorities for which elements of the DMS should be tested when it is not possible to develop an exhaustive program. The risk analysis also aids in establishing testing frequencies later in the quality assurance program design process, and can help define the scope of responsibilities for medical physicists, IT personnel, and vendors.

STEP 4: ESTABLISH THE SCOPE OF THE DMS QUALITY ASSURANCE PROGRAM
The next step of the process is to establish the scope of the DMS quality assurance program using the system map and Table 1.
Systems that are usually within the scope of a DMS quality assurance program:
• R&V/EMR, including application servers;
• Radiation therapy databases, storage, and archival systems; and
• Any other computerized system in the radiotherapy network that handles clinical data and is not excluded for the reasons outlined below.
Systems that may not be within the scope of a DMS quality assurance program:
• Treatment delivery systems and associated control computers.
Where these systems are included in existing quality assurance programs, the physicist should evaluate whether the existing procedures cover all relevant aspects of data management and quality assurance. Where appropriate, consider additional or modified quality assurance procedures as needed (refer to Step 5). Consider that the transfer of certain data between DMS components may be validated as part of patient-specific quality assurance. Where this is the case, ensure that the patient-specific quality assurance procedure is documented and that all relevant aspects of data quality are addressed (see Step 5 for guidance on the types of tests that may apply).
Finally, identify external systems that are maintained by hospital IT staff or manufacturers through service contracts and are therefore outside the scope of your clinic's quality assurance responsibilities.
Remember that application servers and hardware may be physically located outside the radiation treatment clinic and maintained by hospital IT personnel, so clear lines of communication must be established. Interdepartmental policies and procedures that formalize this communication pipeline should be in place and should be revised on an annual basis or whenever a major change to the DMS occurs.
It may be useful to update the data link and component tables to include only those elements that are within the scope of the DMS quality assurance program; however, it is recommended to document the responsible party and/or applicable quality assurance program for each element that is considered out of scope. Note that appropriate tests and testing frequencies are highly dependent on the specific configuration and processes that govern your DMS. As such, this document cannot be prescriptive; rather, it can list possible tests for data links and DMS components, and can give guidance regarding testing frequencies. It is the responsibility of the medical physicist, in collaboration with IT experts and manufacturers, to determine what is appropriate in the context of each unique DMS. An example of a resulting quality assurance program corresponding to the example DMS presented in Figs. A1-A6 and Tables A1-A3 is presented in Table A4.

Quality assurance of procedures
Quality assurance of procedures covers the procedures governing the exchange of data between components of the DMS, including procedures for generating, entering, and interpreting the data. Procedures must be designed to be robust in the presence of potential data errors. 5 End-to-end testing based on clinical processes is perhaps the single most important test of a DMS and should be performed at commissioning and acceptance, and following a change to any component of the DMS with the potential to affect treatment data. Equally importantly, this approach requires a clear understanding of the clinical processes that rely on the DMS. Ideally, all clinical processes that generate patient data will be documented. Documenting the clinical processes greatly facilitates standardization, and the documentation and standardization of processes are known to reduce errors and improve quality. Examining the documented processes in conjunction with the data management system "map" allows the development of a quality assurance program following a risk-based approach.
When developing a quality assurance program for a DMS, it is important to build in mechanisms for adapting to changes, whether to a single component of the DMS, to a clinical process, or to the entire DMS. Process and system maps become obsolete quickly, and it is important to maintain these as living documents.

Contingency planning
One of the challenges of a radiation oncology DMS is the provision for contingency planning in the event of periodic, planned, or unexpected outages. The quality assurance of the DMS against such outages is a function of the risk and frequency associated with each outage, along with the clinical needs of the centre. For example, during a scheduled DMS component upgrade when components may be offline, there may remain a need for emergent radiation therapy treatments. The inaccessibility of the patient database may limit the functionality of the R&V system such that the linear accelerator may only be used with a direct connection to the R&V system. Provisions for "offline" treatment should not only include patient treatment records, but also consider the reliance on connectivity to authentication servers and the EMR, which may or may not also be offline. Testing of such contingencies is best performed when databases, authentication servers, and image and document servers have planned upgrades and are expected to be offline.
The EMR may rely on document servers and redundant data architectures, which themselves may be subject to periodic, planned, or unexpected outages. Again, testing of backup servers and fault-tolerant systems is best performed during planned outages.
The same strategy for contingency testing holds true for inter/intranet connections between the components of the DMS.

APPENDIX 2 SITE-SPECIFIC DMS QUALITY ASSURANCE PROGRAM EXAMPLE
This appendix provides an example of how the principles of the guideline may be applied to a specific DMS.

TOLERANCES
The specific tests required will depend highly on the infrastructure and configuration of the institution's DMS, as previously discussed.
W2 = Completeness of schedule status and workload data is verified via built-in audit reports and/or custom reports within MOSAIQ. From designator C2 in Table 2 in this document.
W3 = Monitor user and system logs for unusual activity. From designator C2 in Table 2 in this document.
Step 1: Identify the computerized systems in your DMS