Anniversary Paper: Image processing and manipulation through the pages of Medical Physics.

The language of radiology has gradually evolved from "the film" (the foundation of radiology since Wilhelm Roentgen's 1895 discovery of x-rays) to "the image," an electronic manifestation of a radiologic examination that exists within the bits and bytes of a computer. Rather than simply storing and displaying radiologic images in a static manner, the computational power of the computer may be used to enhance a radiologist's ability to visually extract information from the image through image processing and image manipulation algorithms. Image processing tools provide a broad spectrum of opportunities for image enhancement. Gray-level manipulations such as histogram equalization, spatial alterations such as geometric distortion correction, preprocessing operations such as edge enhancement, and enhanced radiography techniques such as temporal subtraction provide powerful methods to improve the diagnostic quality of an image or to enhance structures of interest within an image. Furthermore, these image processing algorithms provide the building blocks of more advanced computer vision methods. The prominent role of medical physicists and the AAPM in the advancement of medical image processing methods, and in the establishment of the "image" as the fundamental entity in radiology and radiation oncology, has been captured in 35 volumes of Medical Physics.


I. FROM ANALOG FILM TO DIGITAL IMAGES
The radiographic film is an analog device that captures, during exposure, the spatial distribution of light photons that emanate from the phosphor screen in response to the spatial distribution of x-ray photons transmitted through the patient. In its dual role, the radiographic film then serves as an analog device that displays the recorded information as a spatial distribution of transmitted light photons from a source of backlight. As an analog display device, the film must be interpreted by the radiologist purely in a subjective, qualitative manner. Although many aspects of the screen-film-based imaging chain are tuned to accentuate the visual appearance of structures of interest within the anatomic region under evaluation (including beam energy, screen properties, and developer conditions), the developed film remains a static entity that may not be modified to mitigate perceptual limitations of the human eye-brain system. The image captured and displayed on film is very much in accordance with the saying, "What you see is what you get," although magnifying loupes and hot-light techniques may be used, along with considerations of lightboard luminance 1 and ambient lighting conditions, 2 to facilitate human perception of the film image.
Analog image enhancement techniques have been reported to enhance film images based predominantly on optical processing approaches. 3-6 Renner and Luke 3 investigated two incoherent-light-based image processing techniques to improve the perception of conventional tomographic images. The first technique was an all-purpose approach based on the Herschel effect, and the second technique simulated a high-pass spatial filter through a film-copying technique. Liu et al. 5 used three optical processing architectures that convert the film image into a distribution of coherent light and perform mathematical operations on this coherent light distribution (Fig. 1). Optical frequency filters were implemented to enhance features in mammograms. Panchangam et al. 6 used a self-adaptive optical Fourier processing system to selectively display microcalcification clusters or surrounding parenchymal tissue in mammograms. The selection of structures for enhancement was achieved interactively by rotating the analyzer in the optical system.
Despite the qualitative, analog nature of film, quantitative information may be extracted from regions of the film using a microdensitometer, 7 which converts the fraction of light transmitted through the film at different spatial locations into numeric optical density values. A more systematic approach to the quantification of film is through the use of film digitizers, which have been characterized, evaluated, and compared in multiple Medical Physics articles (Fig. 2). 8-13 Yin et al. 8 combined a curve-fitting technique with an angulated slit image to measure the presampling modulation transfer function (MTF) of two laser scanners and an optical drum scanner. Meeder et al. 10 reported several tests designed to evaluate image transfer characteristics of digitizers, including geometric accuracy, characteristic curve linearity, temporal and spatial response to abrupt optical density changes, and noise contributions. Dempsey et al. 12 developed techniques to eliminate interference pattern fluctuation artifacts and light-spread artifacts introduced by film digitizers. The first type of artifact was eliminated through the use of a masked diffusing ground-glass scanning bed, and the second type of artifact was eliminated through application of Fourier-transform-based deconvolution of transmission profiles with measured digitizer line-spread functions.
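In modern terms, the digitizer's core conversion is simple to express. The sketch below (in which the 12-bit output and the 0-4 OD mapping range are illustrative assumptions, not properties of any particular device) converts transmission fractions to optical densities and then to bounded integer pixel values:

```python
import numpy as np

def optical_density(transmission):
    """Convert the fraction of transmitted light to optical density.

    OD = log10(1 / T): a film region transmitting 1% of the incident
    light has OD 2.0; a fully clear region (T = 1) has OD 0.0.
    """
    t = np.asarray(transmission, dtype=float)
    return np.log10(1.0 / t)

# A digitizer samples transmission over a grid of film locations and
# maps each sample to a bounded integer pixel value (here, 12 bits
# spanning an assumed OD range of 0-4).
transmission = np.array([[1.0, 0.1], [0.01, 0.001]])
od = optical_density(transmission)
pixels = np.clip(np.round(od / 4.0 * 4095), 0, 4095).astype(np.uint16)
```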
Hangiandreou et al. 11 compared the performance of charge-coupled device (CCD) digitizers and laser digitizers. A function derived from the Rose model was used to evaluate signal, noise, and useful optical density range, and the investigators found that this function could provide a useful evaluation tool for acceptance testing and quality control. Gonzalez et al. 13 continued investigations of CCD digitizers by comparing noise reduction through an increase in signal resolution versus application of a low-pass filter. Other groups have evaluated digitizers for film dosimetry applications. [14][15][16] The output of the film digitizer is a digital image: a computer file that contains, at discrete addresses that may be mapped to specific spatial locations of the film, bounded integer values representing the average optical density over a small region of the film at each location. Although the film digitizer is becoming an anachronism, the digital image reigns supreme in radiology, with nearly every diagnostic imaging modality now routinely generating digital image files (mammography is the last modality where screen film is still considered superior by some radiologists). Gone (or at least quickly disappearing) are the days of lost films, bulky patient film folders, space-consuming film libraries, and the need to hang prior films.

II. IMAGE INFORMATICS
These aspects of the film-based radiology department have been replaced by an electronic infrastructure for the storage and retrieval of image data generally known as imaging informatics, the foundation of which is the picture archiving and communication system (PACS). Efforts are underway to integrate imaging informatics with the more global hospital information system 17 to achieve an enterprise-wide approach to image, clinical, and patient demographic information. The role of the medical physicist in the development, implementation, and support of PACS and hospital information systems has been the subject of several Point/Counterpoint articles in Medical Physics. [18][19][20] The various roles of medical physicists and the AAPM in the evolution of informatics have been nicely reviewed in the Anniversary Paper by Kagadis et al. 21 The ability to integrate, as seamlessly as possible, the PACS environment with medical image acquisition devices and image display (or other output) devices has proven to be a monumental effort of digital image formats, standards, and communications. Of historical note, in 1982, several years before the American College of Radiology and the National Electrical Manufacturers Association released the standard that would later evolve into what has become known as DICOM (Digital Imaging and Communications in Medicine), the AAPM issued Report No. 10 ("A Standard Format for Digital Image Exchange"). 22 This report stemmed from a task force formed by the AAPM Science Council to "consider the problem of transferring digital image data between devices." The task force concluded that "it is impractical, at the present time, to adopt a standard internal representation for digital image data acquired and processed by commercially available equipment" due to "incompatibility between the hardware and software used by different manufacturers, and nonuniformity of the formats used for recording the image data on magnetic media."
The report, however, provided a magnetic tape format solely for the exchange of digital images. Then, as now (with the DICOM standard), associated descriptive information ("metadata") stored along with the image data was essential to properly understand and manipulate the image structure.
In addition to the metadata stored in the image headers or archive directory structure, methods to interrogate image data could prove useful to a more robust PACS implementation. To facilitate the retrieval of images (specifically radiographic chest images) from a PACS environment, Morishita et al. 23 developed an automated patient recognition method based on an image-matching technique that computes the correlation between two images (e.g., a current and previous image) presumed to belong to the same patient. Two posteroanterior (PA) radiographic chest images acquired from the same patient at different times generally yielded a larger correlation value than PA chest images from two different patients. The method was able to correctly identify mismatched previous and current radiographic chest images in over half of the cases in the investigators' database. A method to verify the accuracy of images retrieved from a PACS was developed by Arimura et al. 24 Their method applied a template-matching technique to the task of differentiating PA and lateral radiographic chest images. The "average" PA chest image and the "average" lateral chest image for small, medium, and large patients in a training database were created as the template images against which a novel image was compared (Fig. 3). The view of each of 1000 test images (500 PA and 500 lateral) was correctly identified.
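The heart of such an image-matching technique is a correlation measure between two images. A minimal sketch, with synthetic arrays standing in for chest radiographs, illustrates why a same-patient pair tends to yield the larger value:

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two equally sized
    images, used here as a simple patient-matching score."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
current = rng.normal(size=(64, 64))
# A prior image of the same patient: similar anatomy plus some change.
same_patient = current + 0.3 * rng.normal(size=(64, 64))
other_patient = rng.normal(size=(64, 64))
# The same-patient pair scores higher than the unrelated pair.
```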
One burden of digital images is the large file sizes that can be attained. Large file sizes can greatly magnify storage media requirements and severely hinder file transfer. Image compression techniques may be used to overcome these issues, and a number of compression strategies have been reported in Medical Physics. [25][26][27][28][29][30][31][32][33][34] Lossless compression of an image manipulates the sequences of bits in the image file so that the decompressed file is exactly the same as the original file. 26 Since practically useful compression ratios generally are not possible with lossless compression techniques, compression strategies for images are "lossy." Lo et al. 27 developed a decomposition-based compression method that uses image splitting and gray-level remapping. Yin et al. 30 developed a compression method using both wavelet-transform and field-masking techniques. Phelan et al. 32 developed a wavelet-based compression method that uses the morphology of wavelet transform coefficients in the wavelet domain to isolate and retain significant coefficients (those that correspond to key image features) and further compress remaining coefficients.
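The principle common to these wavelet methods, transform the image, retain the significant coefficients, and discard the rest, can be sketched with a single-level Haar transform (a deliberate simplification of the published multiresolution schemes):

```python
import numpy as np

def haar_rows(x):
    """Single-level Haar transform along rows: averages | differences."""
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0   # approximation coefficients
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0   # detail coefficients
    return np.hstack([a, d])

def ihaar_rows(y):
    """Exact inverse of haar_rows."""
    n = y.shape[1] // 2
    a, d = y[:, :n], y[:, n:]
    x = np.empty((y.shape[0], 2 * n))
    x[:, 0::2] = a + d
    x[:, 1::2] = a - d
    return x

def haar2(img):
    """Separable 2-D Haar transform (rows, then columns)."""
    return haar_rows(haar_rows(img).T).T

def ihaar2(coef):
    """Inverse 2-D Haar transform."""
    return ihaar_rows(ihaar_rows(coef.T).T)

img = np.add.outer(np.arange(8.0), np.arange(8.0))  # smooth ramp test image
coef = haar2(img)
kept = np.where(np.abs(coef) >= 0.6, coef, 0.0)     # discard small detail coefficients
recon = ihaar2(kept)
ratio = coef.size / np.count_nonzero(kept)          # crude compression ratio
```

Without thresholding the transform is exactly invertible (lossless); thresholding trades a small reconstruction error for a higher ratio of total to retained coefficients.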
FIG. 3. Examples of (a) a PA template image and (b) lateral template images used in an automated system to differentiate PA and lateral radiographic chest images retrieved from a PACS. (Reprinted with permission from Ref. 24.)

Lossy compression, to be effective, requires a balance between high compression ratios and diagnostic fidelity of the decompressed image. Höhne et al. 25 applied Fourier-transform-based compression to digital angiography sequences of the heart and brain and obtained compression ratios between 5:1 and 10:1 without loss of diagnostic information. Cook et al. 28 and Cox et al. 29 both investigated the detectability of low-contrast objects in images compressed through full-frame discrete cosine transform techniques with varying levels of compression. The contrast-detail phantom experiments of Cook et al. identified a statistically significant degradation in detectability for an average compression ratio of 125:1 but not for an average compression ratio of 11:1. Cox et al. introduced simulated noncalcified pulmonary nodules into clinical radiographic chest images and, in a series of two-alternative forced choice observer experiments, observed a measurable decrease in performance across compression ratios of 7:1, 16:1, 44:1, and 127:1. Zhao et al. 31 used the nonprewhitening matched filter to quantify the effect of wavelet-based compression on lesion detectability through a simulation study. The size, amplitude, and associated noise of simulated signals were varied, along with the compression ratio, to identify combinations of parameters that generated equivalent detectability. Using images from six different diagnostic imaging modalities, Thompson et al. 33 compared a wavelet-based compression algorithm against a discrete cosine transform compression method. For compression ratios up to 40:1, the wavelet algorithm demonstrated generally lower average error-metric values and higher peak signal-to-noise ratios (Fig. 4). Fidler et al. 34 evaluated the influence of image information on compression and image degradation using the JPEG standard. Their qualitative and quantitative findings indicated that image degradation is strongly dependent on image information (as computed from image entropy).
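Peak signal-to-noise ratio, one of the fidelity metrics used in such comparisons, can be computed directly from the mean squared error (an 8-bit gray range is assumed here):

```python
import numpy as np

def psnr(original, decompressed, max_value=255.0):
    """Peak signal-to-noise ratio in dB; higher values indicate the
    decompressed image is closer to the original."""
    diff = original.astype(float) - decompressed.astype(float)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

a = np.full((16, 16), 100.0)
b = a + 5.0   # uniform error of 5 gray levels -> MSE = 25
# psnr(a, b) = 10 * log10(255**2 / 25) ≈ 34.15 dB
```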

III. IMAGES ON DISPLAY
Digital images have no inherent associated visual representation. The electronic display device ("soft-copy display") fulfills the role of image presentation for the visual consumption of a radiologist. Medical physicists have been intimately involved in the physical characterization and performance assessment of such displays, which require a unique blend of physics and perception. [35][36][37][38][39][40][41][42][43][44][45][46][47][48][49][50][51][52] These investigations include both cathode-ray tube (CRT) displays and liquid crystal displays (LCDs), in monochrome and color variants. According to AAPM professional guidelines from 1994, "the performance assessment of electronic display devices in healthcare institutions falls within the professional responsibilities of medical physicists." 53 AAPM On-Line Report No. 03 ("Assessment of Display Performance for Medical Imaging Systems") noted that "considering the fundamental importance of display image quality to the overall effectiveness of a diagnostic imaging practice, it is vitally important to assure that electronic display devices do not compromise image quality." 47 This statement was echoed in the Executive Summary of the AAPM TG18 report published in Medical Physics. 46 Although early work on the performance assessment of monitors made use of the Society of Motion Picture and Television Engineers test pattern, 38 the AAPM TG18 test patterns (Fig. 5) were quickly adopted by the medical physics community.
Jung et al. 45 evaluated the performance of 32 LCDs based on the AAPM TG18 document and test patterns. Their evaluation included the angular dependencies of luminance and contrast, an effect further explored by several other groups. 42,43,48 These studies demonstrated the successful clinical implementation of the AAPM TG18 guidelines for medical display performance assessment and the impact of angular response on image contrast. In an effort to advance these guidelines, Jacobs et al. 50 proposed a variable test pattern for the quality assurance assessment of displays. Unlike the test pattern developed by AAPM Task Group 18, the variable pattern included randomly generated elements intended to reduce bias due to observer memory effects.
Observer-based contrast-detail studies have been conducted to quantify the contribution of display-related effects to the detection of simulated lesions in images. 36,37,40,41 Direct comparisons among LCDs and CRT displays have been performed. 49,52 Others have investigated noise in LCDs 44 and the temporal response of LCDs, 51 an important characteristic for proper interpretation of real-time image sequences acquired, for example, during fluoroscopy.

IV. IMAGE RESTORATION
The ability to electronically manipulate images is a powerful benefit of digital image files. Image manipulation techniques have been performed to overcome the limitations of displays or to correct artifacts due to the image acquisition process. [54][55][56][57][58]39,[59][60][61][62][63][64][65][66] Moseley and Munro, 39 for example, developed a method to display portal images that permits the user to optimize display contrast without saturating parts of the image. The difference between the average signal in small regions and the global average was subtracted from the original image to reduce changes in average signal that occur over large spatial dimensions without obscuring changes in signal that occur over small spatial dimensions. A method to remove veiling glare from fluoroscopic images was developed by Seibert et al. 54 based on deconvolution of the acquired images with the point spread function that describes the veiling glare, a concept that was further advanced by Close et al. 60 to account for spatial variability of the point spread function.
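The background-flattening idea, subtracting the difference between a local average and the global average, can be sketched as follows (the box window and its size are illustrative choices, not those of the original report):

```python
import numpy as np

def box_mean(img, k):
    """Local mean over a k x k window (k odd), edge-padded,
    computed with an integral image."""
    p = k // 2
    padded = np.pad(img.astype(float), p, mode="edge")
    s = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)

def flatten_background(img, k=31):
    """Subtract the difference between the local and global mean:
    slowly varying background is suppressed, while detail varying
    within the window is preserved."""
    return img - (box_mean(img, k) - img.mean())

ramp = np.add.outer(np.linspace(0.0, 100.0, 64), np.zeros(64))
flat = flatten_background(ramp)
# the slowly varying ramp is largely removed: flat varies far less than ramp
```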
Geometric distortion introduced by the physics of the imaging system has been analyzed and corrected. Cerveri et al. 61 reported two techniques to correct geometric distortion introduced by image intensifiers, a local unwarping polynomial approach and a hierarchical radial basis function network (Fig. 6). Fantozzi et al. 62 developed a thin-plate-spline global-correction technique, while Yan et al. 65 presented an approach that incorporated a moving least squares method combined with polynomial fitting. These groups all conducted an array of evaluations and comparisons on simulated and real image data to assess the sensitivity of the various methods to specific distortions (e.g., pincushion, sigmoidal, and local distortion). In the context of magnetic resonance imaging (MRI), Sekihara and Kohno 57 presented an image restoration technique to overcome the effects of nonuniform static magnetic fields in modified echo-planar imaging, and Baldwin et al. 64 used a reversed gradient method to separate and correct geometric inaccuracies due to inhomogeneities in the background field and nonlinearities in the applied gradient. Zhang et al. 66 used deconvolution to restore the resolution of digital autoradiography images to improve the correlation of the radiotracer distribution in a tissue section with histology and immunohistochemistry findings.

V. NOISE REDUCTION
Most medical images are inherently noisy due to the desire to keep the radiation dose as low as possible or to maintain short scan times. Techniques to reduce image noise and improve image contrast, therefore, have been the topic of numerous investigations. Buades et al. 67 provide an excellent overview of a wide range of popular denoising techniques.
A large number of such studies have been conducted in the context of radionuclide imaging. Webb et al. 55 described a constrained deconvolution approach applied to single-photon emission computed tomography (SPECT) images of the liver to improve cold-object contrast. Penny et al. 56 investigated an adaptive constrained least-squares method to perform planar image restoration (in terms of reduced noise and improved contrast) once the system MTF is known; subjective studies supported use of a "coarseness function" designed to minimize the energy in the second derivative of the restored image. This method was later extended to SPECT images. 58 A wavelet-based neural network filter was developed by Qian and Clarke 59 to reduce noise in gamma-camera imaging of beta-emitting isotopes required for the management of antibody therapy. Other areas where noise reduction is particularly important include dual-energy imaging 68,69 and low-dose computed tomography (CT); 70 noise reduction techniques developed in these areas are often modality specific and require access to the raw data.
Nonlinear diffusion schemes are extremely powerful for noise reduction in medical images. Xia et al. 71 used such techniques to improve the quality of breast CT images. Improved results were obtained when denoising was applied to the projection data rather than to reconstructed images (Fig. 7). A drawback of these schemes is their slow speed; therefore, other researchers have developed simpler techniques such as adaptive mean filtering, 72 the sigma filter, 73 and the SUSAN filter. 74 In the study by Hilts and Duzenli, 75 many such techniques were compared for performing dosimetry using polyacrylamide gels with CT. Schilham et al. 76 developed an extension of the SUSAN filter that adapts the filtering strength to local noise characteristics; emphysema scores of ultralow-dose CT scans subjected to this filter were similar to scores obtained from clinical-dose scans of the same patients.
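As an illustration of these simpler filters, a basic sigma filter averages only those neighbors whose values lie close to that of the center pixel, smoothing noise while largely sparing edges (the global noise estimate below is a crude stand-in for the local estimates used in practice):

```python
import numpy as np

def sigma_filter(img, k=3, nsigma=2.0):
    """Basic sigma filter: each pixel becomes the mean of the window
    values lying within nsigma * sigma of the center value."""
    sigma = np.std(img)  # crude global noise estimate (an assumption)
    p = k // 2
    padded = np.pad(img.astype(float), p, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + k, j:j + k]
            mask = np.abs(window - img[i, j]) <= nsigma * sigma
            out[i, j] = window[mask].mean()  # center always included
    return out

rng = np.random.default_rng(1)
noisy = 10.0 + 0.5 * rng.normal(size=(32, 32))
smoothed = sigma_filter(noisy)
# the filtered image fluctuates less than the noisy input
```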

VI. ENHANCED VISUALIZATION
Radiologists routinely interpret radiographic images along with previous images of the same patient for comparison to observe changes in anatomic structure or pathologic developments. An image processing technique known as temporal subtraction facilitates the visualization of pathologic change over a temporal sequence of patient images. [77][78][79][80][81] The resulting temporal subtraction images improve radiologists' ability to identify subtle focal lesions (e.g., lung cancers) and to recognize diffuse changes (Fig. 8).
Kano et al. 77 reported a temporal subtraction process for radiographic chest images based on small regions of interest automatically placed within the lung fields. A cross-correlation method and polynomial fitting were applied to determine shift values between the coordinates of the two images. Temporal subtraction images improved the detection of interval change in tumors and other opacities and assisted radiologists in the assessment of pleural effusions, heart size, air fluid levels, and pneumothoraces as they changed over time. Ishida et al. 78 improved the quality of subtraction images of the chest through an iterative warping approach. Armato et al. 82 later developed an automated approach to the evaluation of temporal subtraction image quality.
A temporal subtraction technique based on nonlinear warping was used by Shiraishi et al. 81 to create temporal subtraction images of radionuclide whole-body bone scans; these temporal subtraction images were used as part of a computerized scheme for the automated detection of interval change. Other investigators have developed methods to associate related structures in temporally sequential images without directly constructing a temporal subtraction image. For example, Hadjiiski et al. 79 and Timp et al. 80 developed regional registration methods to identify corresponding mass lesions in temporal pairs of mammograms.
Temporal subtraction methods typically define one image from the image pair as the mask image to which the other image is registered through some combination of global and local registration methods; the net effect is a spatial "warping" of one image, which is then subtracted from the other. Consequently, image registration algorithms provide the foundation for temporal subtraction. Image registration is itself a rich topic in medical image research, with significant contributions to the field made by medical physicists. A search for the term "image registration" in the title, abstract, or keywords of the 8166 articles indexed on the Medical Physics web site to date yielded 214 hits that involve multimodality images, [83][84][85][86] atlas-based models, 87,88 two-view or bilateral comparisons, 89-92 radiotherapy patient setup, 93-97 respiratory motion correction, 98-104 and a wide variety of other applications.
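Reduced to its essentials, the mask-and-subtract operation can be sketched with a brute-force global shift standing in for the published warping methods; stable anatomy cancels, and interval change survives the subtraction:

```python
import numpy as np

def best_shift(mask, follow, max_shift=5):
    """Exhaustively search integer (dy, dx) shifts for the one that
    best aligns `follow` with `mask` (a stand-in for the global and
    local registration used in practice)."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = np.sum(mask * np.roll(follow, (dy, dx), axis=(0, 1)))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

def temporal_subtraction(mask, follow):
    """Align (here: shift) the follow-up image onto the mask image and
    subtract; residual signal indicates interval change."""
    dy, dx = best_shift(mask, follow)
    return np.roll(follow, (dy, dx), axis=(0, 1)) - mask

prior = np.zeros((32, 32))
prior[10:14, 10:14] = 100.0                    # stable anatomy
current = np.roll(prior, (2, 3), axis=(0, 1))  # patient repositioned
current[25, 25] = 50.0                         # new interval change
diff = temporal_subtraction(prior, current)
# stable anatomy cancels; only the new focus remains in `diff`
```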
Unlike temporal subtraction, which may be performed retrospectively on existing images through image manipulation and requires no alteration to the manner in which the radiographic images are acquired, energy subtraction (or dual-energy imaging) requires dedicated hardware to capture a "low-energy image" and a "high-energy image" of the patient during the same radiographic examination. These two images may be mathematically combined to create a pair of diagnostically distinct images, for example a "soft tissue image" predominantly depicting structures with attenuation close to that of water and a "bone image" predominantly depicting structures with attenuation close to that of calcium. Dual-energy imaging has been applied to breast imaging, 105 cardiac imaging, 106,107 chest imaging, 108,109 angiography, 110,111 and bone mineral content assessment. 112 In chest radiography, dual-energy imaging serves a two-fold role in the evaluation of lung cancer: first, the soft-tissue image eliminates superimposed bone that might obscure subtle lung nodules, and second, calcified nodules may be differentiated from noncalcified nodules, since only calcified nodules will appear on the bone image (Fig. 9). Armato et al. 113 used energy subtraction chest images as input to a temporal subtraction process to demonstrate the potential of these combined enhanced visualization techniques.
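The mathematical combination is typically a weighted subtraction of log-transformed images, with the weight chosen to cancel one material. A minimal simulation (with purely illustrative attenuation coefficients) shows bone vanishing from the soft-tissue image:

```python
import numpy as np

def weighted_log_subtraction(low, high, w):
    """Combine log-transformed low- and high-energy images; the
    weight w is chosen so that one material cancels."""
    return np.log(high) - w * np.log(low)

# Purely illustrative linear attenuation coefficients (not measured values).
mus_low, mus_high = 0.30, 0.20   # soft tissue at low / high energy
mub_low, mub_high = 0.90, 0.40   # bone at low / high energy

t_soft = np.array([[10.0, 10.0]])  # soft-tissue thickness map
t_bone = np.array([[0.0, 2.0]])    # second pixel adds bone
low = np.exp(-(mus_low * t_soft + mub_low * t_bone))
high = np.exp(-(mus_high * t_soft + mub_high * t_bone))

w = mub_high / mub_low             # this weight cancels the bone signal
soft_image = weighted_log_subtraction(low, high, w)
# both pixels are equal in soft_image: bone no longer contributes
```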

VII. IMAGE PREPROCESSING
A vast array of image preprocessing techniques has been developed and applied to medical images to achieve a wide range of results. The images generated by these preprocessing techniques may be used directly to attain some clinical benefit (for example, improved lesion conspicuity) or as input to higher level computer vision schemes, one prominent example of which is computer-aided diagnosis (CAD). Furthermore, a series of such techniques may be performed in an appropriate order to create more advanced preprocessing methods.
Image preprocessing techniques consist of operations designed to suppress image information not relevant to the specific task or to enhance key image features. Such techniques include unsharp masking, [114][115][116][117] global contrast enhancement, 115,116,118 histogram equalization, 119-121 edge enhancement, 122,123 and selective enhancement. 124 Loo et al. 114 used statistical decision theory that incorporated the observer's visual transfer function to compute the signal-to-noise ratio of radiologic patterns after processing with an unsharp-masking technique. The calculated results agreed with qualitative results obtained from an observer performance study and demonstrated that unsharp masking improves the detectability of simple objects if parameters of the technique are selected properly. Brailean et al. 118 used similar statistical decision theory to demonstrate the superiority of an expectation maximization algorithm over global contrast enhancement and unsharp masking for the enhancement of small objects in radiographic images. Global contrast enhancement and unsharp masking applied to computed radiography (CR) portal images were found by Wilenzick and Merritt 115 to generate images at least as good as the best portal films obtained with then-conventional commercial radiotherapy screen-film systems. Weiser et al. 116 demonstrated a statistical improvement in the perception of anatomic detail for similarly preprocessed CR portal images. Stahl et al. 117 extended the concept of unsharp masking to a multiscale architecture that hierarchically enhanced structures over a range of sizes (Fig. 10).
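The basic unsharp-masking operation adds a weighted high-frequency residual, the difference between the image and a blurred copy, back to the image. A minimal sketch, using a simple box blur rather than any particular published kernel:

```python
import numpy as np

def box_mean(img, k):
    """Local mean over a k x k window (k odd), edge-padded,
    computed with an integral image."""
    p = k // 2
    padded = np.pad(img.astype(float), p, mode="edge")
    s = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)

def unsharp_mask(img, k=5, weight=1.0):
    """sharpened = original + weight * (original - blurred): the
    high-frequency residual is amplified, sharpening edges."""
    return img + weight * (img - box_mean(img, k))

step = np.zeros((32, 32))
step[:, 16:] = 100.0
sharp = unsharp_mask(step)
# characteristic overshoot at the edge: values exceed 100 and dip below 0
```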
Kim et al. 120 evaluated the impact of image preprocessing techniques such as global contrast enhancement and histogram equalization on a rigid three-dimensional/two-dimensional registration method. Histogram equalization was found to be a key preprocessing step for accurate registration. Histogram equalization was used by Lehmann et al. 121 to increase the sharpness of coronary arteries prior to application of an image-based metric for measuring the extent of motion in angiographic images and by Crooks and Fallone 119 to improve visualization of double-exposure portal images and facilitate the beam verification process.
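Histogram equalization itself reduces to mapping each gray level through the normalized cumulative histogram. A minimal sketch for 8-bit images:

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization: map each gray level through the
    normalized cumulative histogram so that pixel values spread
    across the available gray range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]

# A low-contrast image occupying only gray levels 100-110.
low_contrast = np.repeat(np.arange(100, 111, dtype=np.uint8), 10)
equalized = equalize(low_contrast)
# the narrow input range is stretched toward the full 0-255 range
```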
FIG. 10. Radiographic image of the sacrum processed (a) with a standard unsharp-masking algorithm and (b) with a multiscale algorithm for the hierarchical enhancement of structures. (Reprinted with permission from Ref. 117.)

Edges are key features in medical images as they provide visual cues for structure boundaries, fine textural detail, and many pathologic processes. Consequently, image processing techniques that enhance edges have a prominent role in diagnostic radiology and radiation oncology. Leszczynski et al. 122 developed an edge extraction algorithm to delineate the treatment field in portal images; their method was based on a derivative of Gaussian operator. Crooks and Fallone 123 used local histogram analysis to create a general edge enhancement algorithm that did not also substantially enhance image noise. The enhancement (and subsequent detection) of edges is an essential component of many image segmentation techniques and an important preprocessing step for a variety of CAD applications. 125,126
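A derivative of Gaussian operator of the kind used for field-edge extraction can be sketched in one dimension (the kernel parameters are illustrative):

```python
import numpy as np

def gaussian_derivative_kernel(sigma=1.5, radius=4):
    """Sampled first derivative of a Gaussian; convolving with it
    responds strongly at intensity edges."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return -x * g / (sigma ** 2 * g.sum())

def edge_response(profile, sigma=1.5):
    """Edge strength along a 1-D profile (e.g., a line crossing a
    portal-image field border)."""
    return np.convolve(profile, gaussian_derivative_kernel(sigma), mode="same")

profile = np.r_[np.zeros(20), np.full(20, 100.0)]  # field border near index 20
response = edge_response(profile)
# the magnitude of the response peaks at the border
```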

VIII. SEGMENTATION
Segmentation is a holy grail in computer vision and one of the most widely studied subjects in medical image processing. Segmentation is often a prerequisite for compound analysis systems, such as CAD applications. Consequently, many segmentation techniques have been reported for the structures and organs of interest in the major areas of CAD research, such as breast, lung, and colon. CAD applications that incorporate segmentation are comprehensively reviewed in the Anniversary Paper by Giger et al. 127 Image segmentation is crucial for accurate planning of radiotherapy, and the large number of such procedures makes it likely that segmentation is performed more often as a clinical procedure in radiation oncology than in all other medical specialties combined. 128 Consequently, a large body of work is devoted to segmentation for the purpose of radiotherapy treatment. On segmentation of the prostate alone, ten studies have appeared in Medical Physics since 2000, mainly focusing on two- and three-dimensional ultrasound, but more recently also on CT and MRI. A recent Point/Counterpoint article debated whether segmentation methods for radiation therapy treatment planning should be standardized and calibrated, 128 an issue that is important for all applications of medical image segmentation.
The segmentation of organs and lesions is essential for many image-based quantification tasks. Duryea et al. 129 used semiautomated segmentation tools to measure the joint destruction on wrist CT scans from patients with rheumatoid arthritis. Hardisty et al. 130 analyzed bone metastases by segmenting the vertebral body and the trabecular centrum of tumor-involved and healthy vertebrae. Zhuge et al. 131 segmented aortic aneurysms in CT angiograms to measure volume and morphological aspects useful for treatment planning. Angelie et al. 132 segmented the left ventricular ͑LV͒ myocardial borders in cardiovascular MR to measure LV function parameters such as the ejection fraction and wall motion. Mao et al. 133 and Gill et al. 134 extracted the carotid arteries from ultrasound images to estimate the degree of stenosis. Yuan et al. 135 developed a two-stage approach to the segmentation of mass lesions on digital mammograms, and Horsch et al. 136 segmented mass lesions on breast ultrasound images. Some topics have attracted a large amount of research because of their evident clinical importance. Examples include the segmentation of lung nodules from thoracic CT scans [137][138][139] and the segmentation of tumors from PET scans. [140][141][142][143] These examples demonstrate the wide range of applications for segmentation-based quantification.
A segmentation result may be used as input for fast and effective visualization by volume or surface rendering algorithms. Manual editing of cutplanes is one of the most time-consuming aspects of volume rendered displays, which are used increasingly in clinical practice. Over the last few years, commercial vendors have integrated into their workstations automated segmentation tools that provide at least an approximate location of structures of interest. Interestingly, high-precision segmentations often are not required for this purpose; a rough delineation of a volume of interest usually suffices, and it is advantageous, for example, simply to remove the sternum in a volume rendering of the heart.
Methodologically, segmentation methods have shifted from rule-based systems to supervised approaches that learn a model of the object to be segmented. Model attributes may include object size, shape, location in an image (either absolute or relative to other structures), and appearance. For model-based approaches, training data consisting of images with the desired segmentations are required to train the system. The test phase, in which a previously unseen image is segmented, can be viewed as fitting the precomputed model to the novel image. Popular examples of such approaches include active shape models, 144 active appearance models, 145 and m-reps. 146 Many methods that employ pixel labeling are also supervised, although such methods do not contain an explicit model of the shape of the object to be segmented.
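The supervised pixel-labeling strategy can be sketched in a few lines. The example below is a deliberately minimal illustration and not any of the cited methods: it assumes just two hand-picked features per pixel (raw intensity and Gaussian-smoothed intensity) and a nearest-class-mean classifier, where a practical system would use far richer features and a stronger classifier. All function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pixel_features(image):
    """Per-pixel feature vectors: raw intensity plus a smoothed intensity.
    (A real system would compute many more features per pixel.)"""
    return np.stack([image, gaussian_filter(image, sigma=2.0)], axis=-1)

def train_nearest_mean(images, label_maps, n_classes=2):
    """Training phase: learn one mean feature vector per class from
    training images with known (expert-drawn) segmentations."""
    feats = np.concatenate([pixel_features(im).reshape(-1, 2) for im in images])
    labels = np.concatenate([lm.ravel() for lm in label_maps])
    return np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

def segment(image, class_means):
    """Test phase: label each pixel of a previously unseen image with the
    class whose learned mean feature vector is nearest."""
    f = pixel_features(image).reshape(-1, class_means.shape[1])
    d = ((f[:, None, :] - class_means[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1).reshape(image.shape)
```

Note that, as the text observes, no explicit shape model is present here; spatial coherence enters only implicitly through the smoothed-intensity feature.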
A good illustration of this shift is the segmentation of lung fields in chest radiographs. Several groups have presented rule-based techniques to achieve lung segmentation. 147-150 These methods are representative of many others, published in the 1980s and 1990s, that employ sequences of classic image processing techniques to arrive at a segmentation. The "art" of developing such methods is to choose appropriate combinations of techniques and proper values for the many parameters that govern the behavior of the method. Pixel labeling also has been used for lung field segmentation, 151-153 and more recent studies in this area invariably use supervised methods. 154-156 A major advantage of these supervised methods is that, theoretically, they are applicable to many different tasks, provided that suitable sets of training images are available. In practice, however, many of these methods require changes and adjustments for particular applications, which is the topic of many recent publications. Pilgram et al., 157 for example, presented several modifications to active shape models to allow their application to proximal femur segmentation in pelvic radiographs. It should be noted that rule-based schemes, both automated and semiautomated, are still being developed for situations in which clinical practice requires fast algorithms, such as in the recent work of Bekes et al. 158 Another nonsupervised methodology that has attracted much attention is level sets, used, for example, by Zhuge et al. 159 to prevent boundary leakage through poorly resolved edges.
Atlas-based segmentation uses the paradigm of interpatient registration for the purpose of segmentation: by registering a novel image to a reference image with known segmentation (the atlas), the obtained transformation can be applied to the segmentation (label propagation) to yield a segmentation of the novel image. The general applicability of this methodology is its major attraction. The number of applications and possible variations on the basic approach is large. 87,88 Finally, we note the recent interest in segmentation strategies for the analysis of four-dimensional (three spatial dimensions plus time) images. 160-163 Due to the ever-increasing capabilities of modern scanners, this trend toward four-dimensional image acquisition certainly will continue. The challenges for investigators developing appropriate image segmentation tools will continue as well.
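The atlas-based label-propagation paradigm described above can be illustrated with a simplified sketch. The "registration" below is restricted, purely for brevity, to a pure translation estimated by FFT cross-correlation; real atlas methods use deformable registration, but the propagation step is the same in spirit: apply the estimated transformation to the atlas label map, with nearest-neighbor interpolation so that labels remain integral. All function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def register_translation(novel, atlas):
    """Estimate the (row, col) shift aligning the atlas image to the
    novel image via FFT-based cross-correlation. (A stand-in for the
    deformable registration used by real atlas-based methods.)"""
    corr = np.fft.ifft2(np.fft.fft2(novel) * np.conj(np.fft.fft2(atlas))).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # Unwrap circular peaks into signed shifts.
    return [int(p) if p <= s // 2 else int(p - s)
            for p, s in zip(peak, corr.shape)]

def propagate_labels(atlas_labels, transform):
    """Label propagation: apply the estimated transform to the atlas
    label map. Nearest-neighbor interpolation (order=0) keeps the
    propagated labels integral."""
    return nd_shift(atlas_labels, transform, order=0,
                    mode="constant", cval=0)
```

The output of `propagate_labels` is the segmentation of the novel image; variations on the basic approach (multiple atlases, label fusion, deformable transforms) all elaborate on this same transform-then-propagate structure.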

IX. SUMMARY
The shift from the radiographic film to the radiographic image has both allowed and necessitated numerous developments in image processing and manipulation to achieve image restoration, noise reduction, enhanced visualization, image registration, preprocessing for improved structure visualization, and image segmentation. These techniques are powerful as individual applications and provide tremendous benefits as components of advanced computer vision methods. The shift has been accompanied by challenges in image display and image informatics. These challenges and developments all have been actively advanced by medical physicists and the AAPM through the pages of Medical Physics.