
Commissioning a new CT simulator II: virtual simulation software

Published online by Cambridge University Press:  01 September 2007

D. Kearns*
Affiliation:
Department of Medical Physics, Northern Ireland Cancer Centre, Belfast City Hospital, Belfast, N. Ireland
M. McJury
Affiliation:
Department of Medical Physics, Northern Ireland Cancer Centre, Belfast City Hospital, Belfast, N. Ireland
*Correspondence to: D. Kearns, Department of Medical Physics, Northern Ireland Cancer Centre, Belfast City Hospital, Lisburn Road, Belfast BT9 7AB, N. Ireland. E-mail: denise.kearns@mpa.n-i.nhs.uk

Abstract

This paper continues the discussion on the commissioning tests performed on a new GE Lightspeed RT wide-bore computed tomography (CT) scanner, focusing on the GE Advantage Sim software (version 6.0).

The tests performed and phantoms used to assess the virtual simulator functionality, including the 3D image display, contouring, treatment unit beam parameters, digitally reconstructed radiograph generation and image quality, isocentre generation and multi-modality image registration, are described.

The series of tests performed showed the virtual simulation software to be working within acceptance tolerances suggested in the literature and baseline data have been obtained against which future comparisons of system performance have been made. Where no tolerances were available, we have suggested suitable values.

Type: Original Article
Copyright © Cambridge University Press 2007

Introduction

Simulation is an important step in the radiotherapy process, whereby patient data are acquired for treatment planning purposes and treatments are verified. Virtual simulation software enables volumetric data obtained from patient computed tomography (CT) scans to be used together with a ‘virtual treatment unit’ to simulate the patient treatment. This simulation may take place in the absence of the patient and also offers the possibility of omitting the physical verification step in the process [1]. Digitally reconstructed radiographs (DRRs) can be produced and are the digital equivalent of conventional simulator films.

When a new virtual simulator system is installed, steps must be taken to ensure that it is working within the same tolerances as a conventional simulator and that all its virtual functions are performing accurately. This report continues the discussion on commissioning a new CT simulation system (GE Lightspeed RT wide-bore CT scanner with GE Advantage Sim software 6.0) installed at the Northern Ireland Cancer Centre, focusing on the virtual simulation software. The assessment of the laser marking system, the CT hardware and the interfaces between each component of the system has been discussed in the earlier companion paper to this report (Kearns and McJury, manuscript submitted).

Phantoms

We have constructed in-house a simple Perspex and wire cube phantom to test the CT simulation and treatment planning process (Figure 1). This solid cube phantom (15 cm along each side) has a set of embedded wires that reproducibly define a planning target volume (PTV) of known size and volume. Two ball-bearings positioned on the lateral faces and a centre line etched on the anterior face assist with alignment with the lasers. We also obtained a phantom (kindly loaned to us by Dr John Conway, Weston Park Hospital, Sheffield) to test the DRR reconstruction algorithm. A more detailed description can be found in the section entitled ‘DRR generation’.

Figure 1. Schematic diagram and CT image of the Perspex and wire cube phantom.

Virtual simulator functionality

A summary of tests carried out to assess the virtual simulator functionality can be found in Table 1. The tests are described in detail below.

Table 1. Summary of the tests carried out to assess virtual simulator functionality

3D image display

All commercially available systems offer sophisticated 3D image display with a varied range of tools. Clinicians are able to visualise the extent of tumour targets using data reformatted with multi-planar reconstruction, to view the patient’s skin with surface rendering, and so forth. These tools are generally used qualitatively and, using a phantom of known geometry, correct image display can be checked qualitatively. In addition, phantoms with embedded radio-opaque markers/wires can be used to check other tools such as maximum intensity projection (MIP). We found all aspects of the 3D image display to function correctly with a satisfactory screen-update speed. Advantage Sim calculates and displays measurements with a resolution of one decimal place (e.g. 0.1 mm, 0.1°), but measurement accuracy is generally considerably poorer than this, being limited by the resolution of the 3D model and by other factors such as display settings, acquisition errors and partial volume effects.
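
As a purely illustrative sketch of this kind of check (not part of Advantage Sim), the following Python/NumPy snippet builds a toy CT volume containing a single radio-opaque marker voxel and forms a maximum intensity projection along one axis; the marker position in the MIP can then be compared against the known phantom geometry. The array shape, voxel values and axis choice are assumptions made for illustration only.

    import numpy as np

    # Toy CT volume: water-equivalent background with one radio-opaque marker voxel.
    # Shape (slices, rows, cols) and HU values are illustrative assumptions.
    volume = np.zeros((50, 128, 128), dtype=np.float32)      # ~0 HU background
    volume[25, 64, 40] = 3000.0                               # bright 'wire' marker

    # Maximum intensity projection along the slice (superior-inferior) axis:
    # each (row, col) pixel takes the brightest voxel along that ray.
    mip = volume.max(axis=0)

    # The marker should appear at its known in-plane position.
    row, col = np.unravel_index(np.argmax(mip), mip.shape)
    assert (row, col) == (64, 40), "MIP marker position disagrees with phantom geometry"
    print(f"Marker found in MIP at (row={row}, col={col})")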

Contouring

The virtual simulator will include software tools allowing both manual and auto-contouring. Some systems may also have a specific list of organs that can be auto-contoured. The sophistication and ease of use of these tools vary between systems, and some algorithms can struggle under certain circumstances; for example, auto-contouring an external contour with a radiotherapy shell/mould in situ can prove challenging. A CT scan of a test phantom of complex shape should help to assess qualitatively how well the algorithms perform. Using a phantom of simple geometry and known physical size, the quantitative accuracy of contouring can be assessed by comparing the known physical dimensions of the phantom with the contoured dimensions on screen, using the measurement tool. Auto-contouring should generate contours within one pixel of the physical dimension. Automatic contouring of the surface of our Perspex and wire phantom was accurate to within one pixel of the surface, but posed no real challenge to the system. Auto-contouring of plastic moulds was problematic and often unsatisfactory (a known Advantage Sim 6.0 system limitation).
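
A minimal sketch of this type of quantitative check is given below, using a simple intensity threshold as a stand-in for an auto-contouring algorithm (this is not the Advantage Sim algorithm); the synthetic slice, pixel size and Hounsfield values are assumptions for illustration.

    import numpy as np

    # Synthetic axial slice of a Perspex cube phantom on a 1 mm pixel grid
    # (dimensions and HU values are illustrative assumptions).
    pixel_mm = 1.0
    slice_img = np.full((256, 256), -1000.0)          # air background
    slice_img[53:203, 53:203] = 120.0                 # 150 mm square of Perspex

    # Minimal stand-in for an external auto-contour: threshold midway
    # between air and Perspex, then take the bounding box of the mask.
    mask = slice_img > -440.0
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    width_mm = (cols[-1] - cols[0] + 1) * pixel_mm
    height_mm = (rows[-1] - rows[0] + 1) * pixel_mm

    # The contoured dimensions should match the known 150 mm side
    # to within one pixel (1 mm here).
    for measured in (width_mm, height_mm):
        assert abs(measured - 150.0) <= pixel_mm
    print(f"Contoured size: {width_mm} x {height_mm} mm (expected 150 x 150 mm)")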

The system software includes functionality enabling the user to add a margin to, for example, the clinical target volume (CTV) in order to obtain the PTV. Margins are grown in Advantage Sim by adding or subtracting the defined margins to or from the existing structures along all axes. This should also be tested on a phantom of known physical dimensions, with each ‘grown’ margin measured in terms of the change in area/volume. This should be carried out for both symmetric and asymmetric margins. Using our Perspex and wire phantom, we marked up some simple shapes of known area and volume using the wire markers, and added a range of margins (2D, 3D, isotropic and anisotropic). Using the distance measurement tool to measure the dimensions of the ‘grown’ margins, we found good agreement between measured and expected margins (calculated from knowledge of the position of each of the wire markers): measured areas agreed with those expected to within 0.2%, and deviations were <4% in the case of volumes (given that margin growing tends to round any ‘sharp’ corners, closer agreement is not expected). Area and volume measurements can again be performed on a phantom of known physical dimensions by comparing known areas and volumes with those calculated by the virtual simulation software. Our measurements of simple geometric structures found deviations of <1% between measured and expected areas and volumes.
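
The following sketch illustrates the principle of anisotropic margin growing on a binary structure mask, using per-axis binary dilation and comparing the grown volume with the value expected from the geometry. It assumes 1 mm isotropic voxels and a box-shaped expansion, and is not the Advantage Sim margin algorithm (which, as noted above, rounds sharp corners; an ellipsoidal structuring element would reproduce that behaviour).

    import numpy as np
    from scipy import ndimage

    # 1 mm isotropic voxels (assumption); a 40 x 40 x 40 mm cube 'CTV'.
    ctv = np.zeros((100, 100, 100), dtype=bool)
    ctv[30:70, 30:70, 30:70] = True

    # Anisotropic margins in voxels along (z, y, x): 10, 5, 5 mm.
    margins = (10, 5, 5)
    structure = np.ones(tuple(2 * m + 1 for m in margins), dtype=bool)  # box-shaped kernel
    ptv = ndimage.binary_dilation(ctv, structure=structure)

    # Expected PTV volume if corners are not rounded: grow each side by 2 x margin.
    expected = (40 + 2 * 10) * (40 + 2 * 5) * (40 + 2 * 5)   # mm^3 (voxels)
    measured = int(ptv.sum())
    print(f"CTV: {int(ctv.sum())} mm^3, PTV: {measured} mm^3, expected: {expected} mm^3")
    print(f"Deviation: {100 * abs(measured - expected) / expected:.2f} %")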

Beam parameters

The virtual simulation software allows the user to create virtual treatment beams once the system is configured with user-supplied machine parameters corresponding to the departmental treatment units. Each aspect of the treatment machine beam simulation should be fully tested (both magnitude and direction); for example, the gantry, collimator and couch rotations should all be checked, and the virtual set field size should be assessed against a phantom of known physical dimensions. The machine configuration can often be checked quickly by comparison with the treatment planning system (TPS) configuration. The beam simulation scale (i.e. IEC 1217) should be checked for agreement with the TPS and the record and verify (R+V) system.

DRR generation

The DRRs produced by the virtual simulator must be of sufficient quality to visualise anatomical detail in order to verify the positioning of treatment fields when compared with portal images taken during treatment. The DRRs must therefore be geometrically accurate to eliminate the possibility of errors in patient set-up and treatment. Film magnification should be tested over the range of treatment distances. This may be carried out by making use of the known distances between points in a phantom, for example between the wires embedded in our Perspex and wire cube phantom. A range of magnifications was tested between 80 and 120 cm source to surface distance (SSD) and was found to be within 1.0 mm of that expected.
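
The expected magnification follows from simple divergent-beam geometry (similar triangles). The sketch below computes the expected on-screen separation of two phantom wires, projected to the isocentre plane, for a range of SSDs; the source-to-isocentre distance, wire separation and depth are assumed values for illustration only.

    # Minimal sketch of the expected DRR magnification from divergent-beam
    # geometry (similar triangles). Distances in cm are illustrative assumptions.
    SAD = 100.0          # source-to-isocentre distance of the virtual machine

    def separation_at_isocentre_plane(true_sep_cm, ssd_cm, depth_cm, sad_cm=SAD):
        """Separation of two in-plane markers, projected onto the isocentre plane.

        true_sep_cm : physical separation of the markers in the phantom
        ssd_cm      : source-to-surface distance used for the DRR
        depth_cm    : depth of the marker plane below the phantom surface
        """
        source_to_markers = ssd_cm + depth_cm
        return true_sep_cm * sad_cm / source_to_markers

    # Two wires 10 cm apart, 7.5 cm deep in a 15 cm cube, DRRs at several SSDs.
    for ssd in (80.0, 100.0, 120.0):
        expected = separation_at_isocentre_plane(10.0, ssd, 7.5)
        print(f"SSD {ssd:5.1f} cm -> expected on-screen separation {expected:.2f} cm")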

The DRR is produced by tracing ray lines from a virtual source position through the CT data on to an image plane [2]. In standard DRR mode, for each ray line the integral of the voxel densities along the line is used to generate the image. The ray-tracing algorithm must be assessed to ensure the DRRs are geometrically accurate. This can be achieved by constructing a simple phantom (Figure 2) containing markers on the top and bottom faces. A useful design is one in which pairs of markers appear coincident on the DRR image when DRRs are produced correctly, for a number of different SSDs (see Figure 2). This should be tested for the range of SSDs used clinically, for example 80–120 cm. Using this phantom (kindly loaned to us by Dr John Conway, Weston Park Hospital, Sheffield), we found the deviation in coincidence to be <1 mm for measurements made at 80, 100 and 120 cm. This corresponds to an angular deviation in the ray lines of <0.1°. Care should be taken, as the SSD and isocentre to surface distance (ISD) calculated by Advantage Sim use the distances from the beam source and isocentre to the surface of the 3D patient model; these values will be affected if the beam intersects an object such as the patient couch.

Figure 2. Phantom used to assess DRR ray tracing algorithm. When reconstructed at 80 cm SSD, the two markers line up as shown.
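
To make the ray-line integral concrete, the sketch below computes a greatly simplified ‘DRR’ of a toy volume by summing voxel values along one axis, i.e. a parallel-beam approximation; a real virtual simulator traces divergent rays from the virtual source through the volume to the image plane. The volume shape and values are assumptions for illustration.

    import numpy as np

    # Toy CT volume (z, y, x) with a denser rod embedded in water-equivalent material.
    # Shapes and attenuation values are illustrative assumptions.
    volume = np.zeros((64, 128, 128), dtype=np.float32)
    volume[:, 40:88, 40:88] = 1.0          # water-equivalent block
    volume[:, 60:68, 60:68] = 3.0          # denser rod running along z

    # Simplified 'DRR': integrate voxel values along the anterior-posterior (y) axis,
    # i.e. a parallel-beam approximation of the ray-line integral. A real DRR traces
    # divergent rays from the virtual source through the volume to the image plane.
    drr = volume.sum(axis=1)

    print("DRR image shape:", drr.shape)                 # one pixel per (z, x) ray
    print("Ray integral through rod :", float(drr.max()))
    print("Ray integral, water only :", float(drr[0, 44]))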

The DRRs should be tested to confirm that contours defined on the axial CT source images are projected accurately onto the DRR. Contours representing thin wires, for example, should be drawn on a phantom image. The length of these contours can then be compared with their lengths represented on the DRR projection. This should be done for a range of depths and for treatment beams at a range of gantry angles. Our measurements showed a deviation of <1 mm for a range of contour lengths at depths between 5 and 20 cm with the SSD varied from 80 to 120 cm.

DRR image quality

The purpose of the DRR is to allow adequate visualisation of the target volume, contoured structures and the bony landmarks used for patient set-up verification, and the image quality of the DRR must be adequate to achieve these aims. DRR image quality may be assessed in a similar fashion to that of portal images, for example. Using a suitable phantom, it is possible to assess a number of standard image quality characteristics. It is recommended in the literature [3] that the following image quality tests on DRRs be carried out both at initial commissioning and at regular intervals thereafter:

  1. Low-contrast resolution;

  2. High-contrast resolution;

  3. Spatial linearity;

  4. Performance of the DRR output device.

To assess DRR image quality, we used the Catphan® 600 CT test object (The Phantom Laboratory Inc., Salem, NY, USA). The Catphan® 600 is a cylindrical test object comprising several different disc inserts or modules, enabling a variety of image quality tests to be performed.

To obtain a measure of high-contrast resolution, the module containing aluminium line pairs with resolutions ranging from 1 to 21 line pairs per cm (in-plane resolution of 5–0.24 mm) was used. The phantom was aligned along the superior–inferior axis and scanned with both 2.5 and 5.0 mm slice width/spacing, which are our routine clinical scanning parameters. When the data were transferred to the virtual simulator workstation, the DRR depth control tool was used to generate a DRR based on reconstructing only the axial slice of the phantom containing the line-pair module (Figure 3). To obtain a qualitative measurement of high-contrast resolution, the reconstructed DRRs were assessed for the number of line pairs visible. In both cases, for high-resolution DRRs, seven line pairs per cm were visible, corresponding to an in-plane resolution of 0.71 mm. Standard-resolution DRRs showed five line pairs per cm visible, corresponding to 1.0 mm in-plane resolution. Future assessments of high-contrast resolution will be compared with these values. More objective analysis is also possible, as the phantom has a pin insert enabling measurement of the point spread function and the associated modulation transfer function. The low-contrast sensitivity was assessed in terms of visualisation of the supra-slice and sub-slice targets, which have nominal contrast levels of 0.3–1.0%. Qualitatively, all supra-slice targets were visualised, and sub-slice targets were visualised to the limit of 3 mm length, 5 mm diameter at 0.3% nominal contrast. The phantom also includes a ‘flood field’ module for image uniformity assessment. Mean pixel values in five regions of interest (top, bottom, right, left and centre) were measured and found to be within one standard deviation of each other.

Figure 3. Examples of the DRRs reconstructed on Advantage Sim of the Catphan® 600 CTP528 high-resolution module and the CTP515 low-contrast module.
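
As an example of the more objective analysis mentioned above, the modulation transfer function can be obtained as the normalised Fourier transform of the measured point (or line) spread function. The sketch below uses a synthetic Gaussian line spread function and an assumed pixel pitch in place of measured data, purely to illustrate the calculation.

    import numpy as np

    # Assumed pixel pitch and a synthetic Gaussian line spread function (LSF);
    # in practice the LSF/PSF would be sampled from the pin insert image.
    pixel_mm = 0.25
    x = (np.arange(128) - 64) * pixel_mm
    sigma_mm = 0.4
    lsf = np.exp(-0.5 * (x / sigma_mm) ** 2)
    lsf /= lsf.sum()                                    # normalise to unit area

    # The MTF is the magnitude of the Fourier transform of the LSF,
    # normalised to unity at zero spatial frequency.
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs_lp_per_cm = np.fft.rfftfreq(lsf.size, d=pixel_mm) * 10.0   # cycles/mm -> lp/cm

    # Report the frequencies at which the MTF falls to 50% and 10%.
    for level in (0.5, 0.1):
        f = freqs_lp_per_cm[np.argmax(mtf < level)]
        print(f"MTF {int(level * 100)}% at ~{f:.1f} lp/cm")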

Spatial linearity may be measured simply by scanning a phantom of known dimensions and comparing the corresponding dimensions in the resulting DRR. This was carried out here by measuring the distance between points in DRRs of our previously scanned cube phantom. Deviations were found to be <0.5 mm.

It is useful for future qualitative comparisons to acquire some baseline anatomical images at commissioning. Using typical anthropomorphic phantoms, such as the RANDO® (The Phantom Laboratory Inc., Salem, NY, USA), images of a range of clinical sites should be acquired. The range should include varying tumour regions (e.g. head, pelvis and chest), image reconstruction kernels and DRR filters. Recent images can then be compared annually against these baselines.

Isocentre generation

The isocentre of the target volume can be calculated by the virtual simulation software. Most commercial systems offer a choice in the way the isocentre can be generated and positioned; for example, it can be generated at the geometric centre of a structure or at a surface. Advantage Sim offers three choices for calculating the centre of a structure: it may be defined as (1) the ‘pseudo centre of gravity of the structure volume’, (2) the ‘geometric centre of a 3D box that encloses the 3D structure’, or (3) the ‘geometric centre of a 2D rectangle that encloses the 2D intersection of the structure on the DRR (in the isocentric plane)’. The algorithm is selected by the user. The isocentre may also be set to the midpoint of the patient; in this case, the midpoint is defined as ‘the position along the beam axis halfway between the first and last intersections with the 3D patient model’. Care should be taken with this method, as image artefacts or objects, such as the patient table, may distort the model.

The accuracy of the isocentre position can be checked by scanning a phantom of known dimensions and outlining the phantom to generate a structure with a known centre position. The calculated isocentre position should then be compared with the physical phantom isocentre coordinates. Similarly, there will usually be a number of functions available to adjust the position of the isocentre, which should also be checked for correct functionality. Using our geometric phantom, a simple structure was defined with known centre coordinates, and the isocentre adjustment functions were performed with deviations <1 mm.

Once the isocentre is defined, setting the field size can usually be done in a number of ways. The field may be defined completely manually by typing jaw sizes, or graphically by dragging the jaws represented on screen. In addition, it is often possible to automatically conform the jaws to a user-defined 2D or 3D structure. The functionality and accuracy of field size definition may be tested by using a phantom with a structure of known geometric size. We found the functionality to be correct and auto-conformation to be accurate to within 1 mm. While Advantage Sim allows accurate positioning of the collimator jaws and MLC, it should be noted that small differences in positioning may appear significant when projected onto oblique planar views.
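
The three centre definitions quoted above can be illustrated on a binary structure mask as follows. The implementation below is a simple NumPy sketch for an assumed asymmetric test structure, not the vendor's code, but it shows why the three options give different isocentre positions for non-symmetric structures.

    import numpy as np

    def centre_of_gravity(mask):
        """Pseudo centre of gravity of the structure volume (mean voxel index per axis)."""
        idx = np.argwhere(mask)
        return idx.mean(axis=0)

    def bounding_box_centre(mask):
        """Geometric centre of the 3D box enclosing the structure."""
        idx = np.argwhere(mask)
        return (idx.min(axis=0) + idx.max(axis=0)) / 2.0

    def rectangle_centre_2d(mask, z):
        """Geometric centre of the 2D rectangle enclosing the structure in slice z."""
        idx = np.argwhere(mask[z])
        return (idx.min(axis=0) + idx.max(axis=0)) / 2.0

    # Asymmetric test structure: an L-shaped region, so the three answers differ.
    mask = np.zeros((40, 64, 64), dtype=bool)
    mask[10:30, 10:50, 10:20] = True
    mask[10:30, 10:20, 10:50] = True

    print("Centre of gravity   :", centre_of_gravity(mask))
    print("3D box centre       :", bounding_box_centre(mask))
    print("2D rectangle centre :", rectangle_centre_2d(mask, z=20), "(slice 20)")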

Storing and sending information

Often, virtual simulation systems allow users to store standard beam arrangements in a plan library. It is a simple qualitative check to create some standard clinical plans and ensure they are saved and retrieved without parameter change.

As a routine check, data transfer to local and network printers should be verified, paying particular attention to printed DRR quality.

Multimodality image registration

Multimodality imaging for target localisation is an area of increasing interest. For certain sites and conditions, CT images are accepted as being inferior to other modalities such as magnetic resonance imaging (MRI) and positron emission tomography (PET). However, CT is currently the only modality that provides electron density information, which is essential for dose calculation on the TPS. Some systems, such as current generation PET/CT, acquire image datasets that are inherently registered together. In these systems, PET and CT gantries are positioned coaxially side by side around the same couch track, and the patient can be imaged by each system consecutively without moving off the couch. However, the more usual scenario is for the patient to be imaged on two different scanners, often in different locations within the hospital(s). Accordingly, the ability to accurately register a dataset from another modality to the planning CT and use these fused data for localisation and contour definition is important.

Most currently available virtual simulator systems offer image registration utilities as standard. The algorithms underlying these utilities vary and user documentation can be sparse, so it is important to liaise closely with the vendor to obtain details of the ‘black box’. Simplistically, registration systems perform two basic tasks in an iterative or semi-iterative fashion: a calculation of the agreement or similarity between two datasets, and a transformation of the paired data where the agreement is not yet sufficient. Currently, the similarity calculation will usually be based either on ‘chamfer matching’ [4] or on ‘mutual information’ [5,6]. The transformation algorithms are usually based on rigid-body models [7], although there is considerable work towards the use of non-rigid-body models [8]. Of course, in reality much more than this simple two-step process may be happening: images may need optimising or enhancing by segmentation, scaling, interpolation, distortion correction or any of a plethora of other image-processing techniques. Our system (Advantage Fusion) uses a six degrees of freedom (DoF) transform calculation algorithm (linear scaling, translation and rotation) to register (align) the coordinates of the exam ‘to be registered’ with the coordinates of the ‘reference’ exam (rigid-body alignment).
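
To make the ‘similarity’ half of this two-step picture concrete, the sketch below computes mutual information between two images from their joint intensity histogram, the quantity that an MI-based registration algorithm maximises over candidate transforms. This is a textbook formulation with synthetic test images, not the Advantage Fusion implementation; the bin count and image generation are assumptions.

    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Mutual information of two images from their joint intensity histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                                   # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Synthetic 'CT' slice and a shifted copy standing in for a second modality.
    rng = np.random.default_rng(0)
    ct = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)  # spatially correlated image
    shifted = np.roll(ct, shift=5, axis=1)

    # Aligned images share more information than misaligned ones, which is what
    # an MI-based registration algorithm exploits when searching over transforms.
    print("MI(ct, ct)      =", round(mutual_information(ct, ct), 3))
    print("MI(ct, shifted) =", round(mutual_information(ct, shifted), 3))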

For image registration systems to be clinically useful, they must be accurate, precise, flexible, robust, comparatively fast, and require a minimum of user input. There are several steps involved in image registration and the software systems are necessarily complex. Commissioning and routine quality assurance (QA) of these systems are recommended [9]. Unfortunately, there is a dearth of practical information in the literature or from working groups regarding methods and equipment for commissioning/acceptance and QA of image registration systems. The options for validating image registration packages fall into three types. First, validation of registration accuracy can be obtained for a small cohort of patients by grading the accuracy of a set of manually defined anatomical landmarks. This is obviously time-consuming and usually involves considerable input from medical colleagues. Reports suggest observers can detect in-plane translational misregistration errors of >2 mm and rotations of >3–4° with a high degree of precision [10]. Anatomical validation was carried out on our system with the input of a consultant radiologist. Six sets of patient data were used, with MRI data (acquired on a GE Signa 1.5 T Excite HD scanner) registered to CT (GE Lightspeed RT scanner). In each case, both sets of data were axial with a typical slice thickness of 3 mm. T1-weighted MR data were used, as these offer the closest anatomical similarity to CT data. The sites were restricted to the brain to avoid the confounding effect of potential organ movement. The MRI data had no additional image processing beyond that performed by the scanner as standard. For each dataset, three pairs of landmarks were marked by the radiologist, and deviations were generated manually by measuring the offset between landmark pairs on screen. We observed a typical system deviation of 3 mm in each orthogonal axis.
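
The on-screen landmark comparison described above amounts to measuring the offset between paired landmark coordinates after registration. A sketch with hypothetical coordinates is shown below; the landmark values are invented for illustration, and only the arithmetic reflects the method used.

    import numpy as np

    # Hypothetical landmark pairs (mm) after registration: each row pairs a
    # CT landmark with its corresponding registered-MRI landmark. Values are illustrative.
    ct_pts  = np.array([[ 12.0,  45.0, 103.0],
                        [-20.5,  60.2,  88.7],
                        [  5.1, -15.3, 120.4]])
    mri_pts = np.array([[ 14.6,  43.1, 105.2],
                        [-18.0,  62.9,  86.1],
                        [  7.8, -12.4, 123.0]])

    offsets = mri_pts - ct_pts                      # per-axis deviation for each pair
    tre = np.linalg.norm(offsets, axis=1)           # 3D target registration error per pair

    print("Mean per-axis deviation (mm):", np.abs(offsets).mean(axis=0).round(1))
    print("Target registration error (mm):", tre.round(1))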

It is also possible to use phantoms of known dimensions having interchangeable fiducial markers/inserts that can be scanned by several different imaging modalities [11–13]. This technique is certainly quicker and more convenient than the previous one; however, the phantoms are usually simple in design and may present little real challenge to image registration algorithms. Registration accuracy between PET, CT and MRI of within 2 mm has been reported [11]. Using a simple geometric phantom (a Perspex cube with embedded wires), image registration was tested CT to CT. A series of scans was acquired with the phantom at a range of known orientations in the scanner (rotated and tilted). The landmarking algorithm was successful for all orientations and performed with deviations <2 mm in all axes.

Another solution is the use of simulated data. This has the advantages of speed and convenience, and need not suffer from the limitation of overly simple image geometry [14,15]. Simulated data also have the potential to give very accurate quantitative results. They can be generated with relative ease in packages such as MATLAB (The MathWorks, Natick, MA, USA) by transforming and resampling source scan data, and commercial products are also available (Im-Sim, OSL, UK). For a rigorous test, non-linear transformations should be used. Useful source data are, for example, multi-echo T2 MRI scans, where several scans with different echo times are acquired of the same volume, providing intrinsically registered data with different image contrast. In addition, some interesting analytic methods are possible, such as the ‘full circle’ or ‘consistency’ method. Time allowing, perhaps the gold standard test would be a combined approach, using both simulated and real data. Some typical values of registration accuracy are given in Table 2 [16].

Table 2. Typical registration accuracy

*Source: Reproduced by kind permission of Elsevier from Ref. 16 (p. 187).
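
The sketch below illustrates how simulated test data of the kind discussed above can be generated: a source volume is resampled under a known rigid transform (here with SciPy rather than MATLAB), so that a registration algorithm run on the pair can be compared against the ground-truth transform. Chaining the recovered transforms around a loop of such images and checking that the composition returns to identity is the essence of the ‘full circle’/‘consistency’ method. The volume, rotation angle and translation are assumed values for illustration.

    import numpy as np
    from scipy import ndimage

    # Source volume standing in for a planning CT (values and shape are assumptions).
    rng = np.random.default_rng(1)
    src = ndimage.gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=3)

    # Known rigid transform: 5 degree in-plane rotation about the slice (first) axis
    # plus a translation, applied about the volume centre.
    theta = np.deg2rad(5.0)
    rot = np.array([[1, 0,              0            ],
                    [0, np.cos(theta), -np.sin(theta)],
                    [0, np.sin(theta),  np.cos(theta)]])
    shift = np.array([2.0, -3.0, 1.5])                 # voxels

    centre = (np.array(src.shape) - 1) / 2.0
    offset = centre - rot @ centre + shift
    # moving[o] samples src at rot @ (o - centre) + centre + shift, so the two
    # volumes are related by a known rigid transform that a registration run on
    # (src, moving) should recover.
    moving = ndimage.affine_transform(src, rot, offset=offset, order=1)

    print("Simulated dataset generated with a known 5 degree rotation and",
          shift, "voxel translation.")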

Summary

We have presented a detailed report on the commissioning tests performed on a new CT simulator (GE Lightspeed RT wide-bore CT scanner with GE Advantage Sim software) installed at the Northern Ireland Cancer Centre, focusing our discussion on the virtual simulation software. We have attempted to generalise the discussion so that it is applicable to most CT simulation systems currently available.

Phantoms required for initial commissioning and subsequent quality control tests have been described.

Commissioning was carried out on each aspect of the virtual simulator software: (1) the 3D image display, (2) contouring, (3) treatment unit beam parameters, (4) DRR generation, including image quality, (5) isocentre generation and (6) multi-modality image registration. The system was found to be working within the tolerances suggested in the currently available literature; where no tolerances were available, we have suggested suitable values.

References

1. McJury M, Foran B, Conway J, Dixon S, Wilcock K, Brown G, Robinson MH. Optimising the use of virtual and conventional simulation: a clinical and economic analysis. J Radiat Prot 2007; 6:19.
2. Sherouse GW, Chaney EL. The portable virtual simulator. Int J Radiat Oncol Biol Phys 1991; 21:475–481.
3. Mutic S, Palta JR, Butker EK, Das IJ, Huq MS, Loo LN, Salter BJ, McCollough CH, Van Dyk J. Quality assurance for computed-tomography simulators and the computed-tomography-simulation process: report of the AAPM Radiation Therapy Committee Task Group No. 66. Med Phys 2003; 30(10):2762–2792.
4. Borgefors G. Hierarchical chamfer matching: a parametrical edge matching algorithm. IEEE Trans Pattern Anal Mach Intell 1988; 10:849–865.
5. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P. Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging 1997; 16:187–198.
6. Viola P, Wells WM. Alignment by maximization of mutual information. Proceedings of the 5th International Conference on Computer Vision 1995; 16–23.
7. Collins DL, Neelin P, Peters TM, Evans AC. Automatic 3D intersubject registration of MR volumetric data in standardised Talairach space. J Comput Assist Tomogr 1994; 18:192–205.
8. Woods RP, Mazziotta JC, Cherry SR. MRI-PET registration with automated algorithm. J Comput Assist Tomogr 1993; 17:536–546.
9. Fraass B, Doppke K, Hunt M, Kutcher G, Starkschall G, Stern R, Van Dyk J. American Association of Physicists in Medicine Radiation Therapy Committee Task Group 53: quality assurance for clinical radiotherapy treatment planning. Med Phys 1998; 25:1773–1829.
10. West J, Fitzpatrick JM, Wang MY et al. Comparison and evaluation of retrospective intermodality image registration techniques. J Comput Assist Tomogr 1997; 21:554–556.
11. Daisne JF, Sibomana M, Bol A, Cosnard G, Lonneux M, Gregoire V. Evaluation of a multimodality image (CT, MRI and PET) coregistration procedure of phantom and head and neck cancer patients: accuracy, reproducibility and consistency. Radiother Oncol 2003; 69:237–245.
12. Moore CS, Liney GP, Beavis AW. Quality assurance of registration of CT and MRI data sets for treatment planning of radiotherapy for head and neck cancers. J Appl Clin Med Phys 2004; 5:25–35.
13. Mutic S, Dempsey JF, Bosch WR, Low DA, Drzymala RE, Chao KS, Goddu SM, Cutler PD, Purdy JA. Multimodality image registration quality assurance for conformal three-dimensional treatment planning. Int J Radiat Oncol Biol Phys 2001; 51:255–260.
14. Thurfjell L, Pagani M, Andersson JLR. Registration of neuroimaging data: implementation and clinical applications. J Neuroimaging 2000; 10:39–46.
15. Zuk T, Atkins S, Booth K. Approaches to registration using 3D surfaces. In: Loew MH (ed). Medical Imaging: Image Processing. Bellingham, WA: SPIE Press, 1994; 2167:176–187.
16. Hutton BF, Braun M. Software for image registration: algorithms, accuracy, efficacy. Semin Nucl Med 2003; 33:180–192.
17. McGee KP, Das IJ. Commissioning, acceptance testing, and quality assurance of a CT simulator. In: Coia LR, Schultheiss TE, Hanks GE (eds). A Practical Guide to CT Simulation. Madison, WI: Advanced Medical Publishing, 1995: 5–23.
18. Eberl S, Kanno I, Fulton RR, Ryan A, Hutton BF, Fulham MJ. Automated interstudy image registration technique for SPECT and PET. J Nucl Med 1996; 37:137–145.
19. Woods RP, Grafton ST, Holmes CJ, Cherry SR, Mazziotta JC. Automated image registration: I. General methods and intrasubject, intramodality validation. J Comput Assist Tomogr 1998; 22:139–152.
20. Lau YH, Braun M, Hutton BF. Registration of SPET and CT abdominal images using a symmetric correlation ratio. Nucl Med Commun 2000; 21:491.
21. Holden M, Hill DL, Denton ER, Jarosz JM, Cox TC, Rohlfing T, Goodey J, Hawkes DJ. Voxel similarity measures for 3-D serial MR brain image registration. IEEE Trans Med Imaging 2000; 19:94–102.
22. Barnden L, Kwiatek R, Lau Y. Validation of fully automatic brain SPECT to MR co-registration. Eur J Nucl Med 2000; 27:147–154.
23. Wong JCH, Studholme C, Hawkes DJ, Maisey MN. Evaluation of the limits of visual detection of image misregistration in a brain fluorine-18 fluorodeoxyglucose PET-MRI study. Eur J Nucl Med 1997; 24:642–650.
24. Fitzpatrick JM, Hill DLG, Shyr Y, West J, Studholme C, Maurer CR. Visual assessment of the accuracy of retrospective registration of MR and CT images of the brain. IEEE Trans Med Imaging 1998; 17:571–585.
25. Ardekani BA, Braun M, Hutton BF, Kanno I, Iida H. A fully automatic multimodality image registration algorithm. J Comput Assist Tomogr 1995; 19:615–623.