Cascaded systems analysis of photon counting detectors

Purpose: Photon counting detectors (PCDs) are an emerging technology with applications in spectral and low-dose radiographic and tomographic imaging. This paper develops an analytical model of PCD imaging performance, including the system gain, modulation transfer function (MTF), noise-power spectrum (NPS), and detective quantum efficiency (DQE). Methods: A cascaded systems analysis model describing the propagation of quanta through the imaging chain was developed. The model was validated in comparison to the physical performance of a silicon-strip PCD implemented on an experimental imaging bench. The signal response, MTF, and NPS were measured and compared to theory as a function of exposure conditions (70 kVp, 1-7 mA), detector threshold, and readout mode (i.e., the option for coincidence detection). The model sheds new light on the dependence of spatial resolution, charge sharing, and additive noise effects on threshold selection and was used to investigate the factors governing PCD performance, including the fundamental advantages and limitations of PCDs in comparison to energy-integrating detectors (EIDs) in the linear regime for which pulse pileup can be ignored. Results: The detector exhibited highly linear mean signal response across the system operating range and agreed well with theoretical prediction, as did the system MTF and NPS. The DQE analyzed as a function of kilovolt (peak), exposure, detector threshold, and readout mode revealed important considerations for system optimization.
The model also demonstrated the important implications of false counts from both additive electronic noise and charge sharing and highlighted the system design and operational parameters that most affect detector performance in the presence of such factors: for example, increasing the detector threshold from 0 to 100 (arbitrary units of pulse height threshold, roughly equivalent to 0.5 and 6 keV energy threshold, respectively) increased the f50 (the spatial frequency at which the MTF falls to a value of 0.50) by ∼30%, with corresponding improvement in DQE. The range in exposure and additive noise for which PCDs yield intrinsically higher DQE was quantified, showing performance advantages under conditions of very low dose, high additive noise, and high-fidelity rejection of coincident photons. Conclusions: The model for PCD signal and noise performance agreed with measurements of detector signal, MTF, and NPS and provided a useful basis for understanding complex dependencies in PCD imaging performance and the potential advantages (and disadvantages) in comparison to EIDs, as well as an important guide to task-based optimization in developing new PCD imaging systems.


Table I. Model parameters for each stage in the imaging chain, with values calculated at nominal operating conditions for the Si-strip PCD system in Fig. 2.
For the silicon-strip detector, the sampling distance in the y direction, b_y, is large, and sampling in the y direction is assumed to be independent (infinitely spaced).
Model parameter   Description                          35 kVp                        70 kVp
q0/X              Incident fluence per exposure (X)    2.88 × 10^5 x-rays/mm^2/mR    6.64 × 10^5 x-rays/mm^2/mR
g1                Photon interaction probability       0.68                          0.26
g2                Gain in secondary quanta             8000 electrons                11 500 electrons
T3                Charge cloud diffusion               σ3 = 0.015 mm
g4                Charge collection efficiency         0.99
T5                Aperture                             a_x = 0.05 mm, a_y = 0.55 mm
σ6                Additive noise                       σ_add = 200 electrons
t7                Threshold                            1500 electrons
III8              Sampling function                    b_x = 0.05 or 0.1 mm

While the underlying physics of PCD systems has been studied extensively over the last decade, with investigation of spectral models,22 detector scatter models,23 and computer simulation,24 there has been less work on the fundamental image quality characteristics, modeling, and analysis of the Fourier metrics of modulation transfer function (MTF), noise-power spectrum (NPS), and detective quantum efficiency (DQE).25,26 A cascaded systems model of signal and noise transfer characteristics, as previously developed for flat-panel detectors (FPDs)27-29 and other types of energy-integrating detectors (EIDs),30-32 would provide a powerful tool for system development and for understanding the factors that govern imaging performance, especially in the early stages of system design, development, and optimization. Recent work by Tanguay et al.33 provides a basis for cascaded systems analysis of PCDs. Such work highlights the distinction between previously established models of EIDs (for example, FPDs) and PCD systems involving a signal threshold stage. Specifically, a PCD model should consider the propagation of the probabilistic distribution of image quanta arising from a single x-ray interaction through each stage, rather than simply following the mean signal, MTF, and NPS.
The distribution is modeled as a binomial selection process including both the magnitude (probability density) and spatial distribution at each stage, thereby enabling the application of a threshold at the appropriate point in the imaging chain. The threshold amounts to acceptance of signal above a given energy (recorded as "counts") and rejection of signal below that energy, potentially eliminating electronic noise but, as shown below, affecting the mean signal and spatial resolution as well. The work reported below extends the cascaded systems framework reported by Tanguay et al.25 to consider the spatially dependent transfer implications of thresholding as well as charge sharing, additive noise, and count-rate-independent spectral distortion. The model is also validated in comparison to physical measurements with a Si-strip PCD and exercised as a guide to optimizing system performance, in selecting optimal threshold values and examining the effect of detector design on DQE. Finally, the model is used to highlight the fundamental advantages (and disadvantages) of photon counting in comparison to energy integration.

2.A. A cascaded systems model for PCDs
The PCD imaging chain is modeled as a cascade of stages, where each stage represents a physical process in which the distribution of image quanta changes in number (amplification or loss) or spatial distribution (blur or integration), or is sampled at discrete locations. For PCDs, there is an additional stage (Stage 7 below) corresponding to the application of a threshold, which imparts important effects on the mean signal, charge sharing, electronic noise, false counts, and sampling effects, all with direct influence on DQE. Preliminary analysis was presented previously34 and is expanded here to include details of the analytical model and a broader investigation of PCD performance.
The sections below describe the model for signal and noise propagation in the PCD imaging chain, with the gain stages reflecting a binomial selection process as described by Tanguay et al.25 and extension to the spatially varying implications of threshold-dependent gain. For the specific PCD used in this work (MicroDose Si-strip detector, Philips Healthcare, Solna, Sweden), model parameters are derived below and summarized in Table I. Because the system was operated well below the exposure rate at which pulse pileup effects become significant (specifically, ∼6 × 10^3 x-rays/pixel/s, or ∼100 x-rays/pixel for a 15 ms x-ray pulse, compared to the count rate limit of ∼3 × 10^6 x-rays/pixel/s, or ∼4.5 × 10^4 x-rays/pixel per 15 ms pulse, as shown below and in previous work35), such nonlinearities were not considered in the current model.
Stage 0: Incident x-ray spectrum. The incident x-ray spectrum was simulated using the spektr implementation36 of TASMIP,37 nominally 35 kVp with added filtration of 2 mm Al and 4 cm water, or 70 kVp with added filtration of 2 mm Al and 1.2 mm Cu, approximating objects in breast or extremity applications, respectively. As illustrated in Fig. 1(A) for a 70 kVp beam, the normalized spectrum, q0^norm(E0), gives the probability distribution of one incident photon having energy E0.
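The Stage 0 normalization can be sketched numerically. The snippet below is a minimal numpy sketch, with a toy Gaussian spectral shape standing in for a spektr/TASMIP output (the bin range and shape are illustrative assumptions, not the simulated 70 kVp spectrum):

```python
import numpy as np

# Toy spectrum: photon counts per 1 keV energy bin, 10-70 keV,
# standing in for a spektr/TASMIP output (illustrative shape only).
energies = np.arange(10, 71)                       # keV
q0 = np.exp(-0.5 * ((energies - 40) / 12.0) ** 2)  # toy spectral shape

# Normalize so q0_norm(E0) is the probability that one incident
# photon has energy E0 (sums to 1 over all bins).
q0_norm = q0 / q0.sum()

mean_energy = (energies * q0_norm).sum()           # mean photon energy, keV
```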
Stage 1: Interaction of x-ray quanta. Propagation of the distribution through Stage 1 considers energy-dependent interactions in the detector. Typical models assume a mean interaction probability (ḡ1) derived from the total cross section of the detector material and a binomial selection process with variance in the gain given by ḡ1(1 − ḡ1). However, the distribution of energies absorbed by the detector is important in analysis of PCD systems, and photoelectric and Compton interactions impart distinct energy distributions that must be considered in the propagation of signal and noise. The supplementary material57 available via EPAPS provides a detailed description of the distinct distributions arising from photoelectric and Compton interactions, similar to the analysis by Hajdok et al.38 The resulting distribution of interactions at Stage 1 is the combination of these two contributions, where q1PE represents the normalized distribution of photons undergoing a photoelectric interaction in the detector, and q1C represents that undergoing a Compton interaction. The latter (q1C) accounts for both the scenario of scatter followed by escape [giving the low-energy peak in Fig. 1(A)] and scatter followed by reabsorption (with the total energy deposited equal to the energy of the incident photon).

[Fig. 1. Distribution of quanta at various stages in the imaging chain. Table I provides a summary of system parameters. (A) Normalized incident photon spectrum at 70 kVp (Stage 0) and spectrum of interacting photons (Stage 1). (B) Stage 5 distribution of quanta collected at a distance x from the site of interaction (taken here as y = 0, with aperture size a_x = 0.05 mm and a_y = 0.55 mm). (C) Distribution of quanta at Stage 7 counted at any location (x) for a given threshold (t7), also shown for the case y = 0.]
The relative contribution of photoelectric and Compton interactions is combined according to the relative cross sections at each energy [σPE(E0) and σC(E0), respectively], such that σtotal(E0) = σPE(E0) + σC(E0). The probability that a photon passes through the detector without interacting gives the "zero-energy" term in the distribution.

Stage 2: Generation of secondary quanta. Stage 2 describes the conversion of energy to secondary quanta (electron-hole pairs), modeled in terms of g2(n = n2|E), a probability distribution function describing the generation of n electron-hole (e-h) pairs from a single photon interaction at energy E. The mean number of e-h pairs generated at each energy, ḡ2(E) = E/W, is determined by W, the work function of the detector material. The distribution g2 may be modeled as a Poisson process or by a broader distribution characterized by a Poisson excess, but in materials such as Si the variance is reduced according to the Fano factor (F = 0.115). The distribution of secondary quanta is therefore modeled as a Gaussian that includes the Fano factor,

ĝ2(N|E) ∝ exp[−(N − ḡ2(E))^2 / (2σ2^2)],

with variance σ2^2 = Fḡ2, where N is the number of quanta (continuous variable). The distribution was discretized by evaluating ĝ2(N|E) at positive integer values of N and normalizing so that g2(n|E) = ĝ2(N = n|E) / Σi ĝ2(N = i|E).

Stage 3: Spatial spreading of secondary quanta. At Stage 3, secondary quanta undergo spatial redistribution (stochastic scatter) according to a point spread function, p3(ξ,η), modeled as a Gaussian,

p3(ξ,η) = [1/(2πσ3^2)] exp[−(ξ^2 + η^2)/(2σ3^2)],

where σ3 is the characteristic width of charge carrier diffusion, and (ξ,η) are spatial dimensions in the x and y domains, respectively, corresponding to the relocation of a secondary quantum.
For a blur stage, the probability of one quantum continuing to the next stage is ∫∫ p3(ξ,η) dξ dη = 1, indicating that a spatial blurring stage propagates all quanta from the previous stage, and the number of quanta is preserved. For simplicity, we assume a normally incident x-ray photon and symmetric diffusion, though previous work39,40 has suggested asymmetric spread of secondary quanta for obliquely incident x-rays.
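The Stage 2 discretization can be sketched as follows: evaluate a Fano-limited Gaussian at integer numbers of e-h pairs and renormalize. The values W ≈ 3.6 eV and F = 0.115 for Si are standard material parameters; the 30 keV test energy is an illustrative choice:

```python
import numpy as np

def g2_distribution(E_keV, W_eV=3.6, F=0.115):
    """Discretized distribution of e-h pairs from one interaction at energy E.
    W (~3.6 eV for Si) and the Fano factor F are material parameters."""
    g2_mean = E_keV * 1000.0 / W_eV          # mean number of e-h pairs, E/W
    var = F * g2_mean                        # Fano-reduced variance
    n = np.arange(0, int(g2_mean + 10 * np.sqrt(var)) + 1)
    g_hat = np.exp(-0.5 * (n - g2_mean) ** 2 / var)
    return n, g_hat / g_hat.sum()            # normalize over integer N

n, p = g2_distribution(30.0)                 # 30 keV photon absorbed in Si
# mean ~ 30000/3.6 e-h pairs; std reduced by the Fano factor
```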
Stage 4: Collection efficiency of secondary quanta. At Stage 4, the loss of secondary quanta due to effects such as e-h absorption or trapping 41 is modeled as a binomial selection with loss characterized by the factor, g 4 . This quantity is assumed independent of position, which is a fair assumption for high-quality semiconductors free of defects.
Stage 5: Integration of secondary quanta. Integration of quanta by the detector aperture at Stage 5 considers the displacement (x, y) between the center of the aperture and the point of interaction, where a_x and a_y are the dimensions of the aperture, accounting for the relative displacement between the aperture and the center of p3(ξ,η). The distribution at Stage 5 is a binomial distribution representing the collection of n5 secondary quanta generated within a virtual aperture located at any point relative to the interaction, with the probability of successfully collecting a single secondary quantum at (x, y) given by p5(x,y), the convolution of p3 with the aperture function. This assumes uniform collection sensitivity across the (rect function) aperture, though the model allows for more complex aperture models, e.g., a trapezoidal function.42 The distribution of secondary quanta computed at all possible locations in the x domain (taking y = 0 for simplicity) is shown in Fig. 1(B), where the probability distribution (vertical axis) is shown as an intensity map at each x location (horizontal axis) about the point of interaction. Each column in Fig. 1(B) is the now familiar distribution of quanta (including, for example, the Compton peak at low n), and the distribution is modulated at increased x according to the binomial selection. The model thereby describes both the statistical (n) and spatial (x) distribution of quanta, with reduced probability of counts recorded at greater distance from the site of interaction (<5 × 10^3 electrons collected for |x| > 0.04 mm in Fig. 1(B)).

Stage 6: Additive noise. Stage 6 models the addition of electronics noise prior to readout, modeled as a Gaussian-distributed random variable with characteristic width σ_add. The probability distribution resulting from the addition of electronics noise is that of the sum of two random variables (namely, n5, the number of quanta collected in Stage 5, and n6, the additive noise) and is given by the convolution of their respective individual distributions in Eq. (5a):

q6(n = n6; x, y) = q5(n5 = n; x, y) *n q6^add(n6 = n),    (5a)

where *n denotes convolution over n and q6^add(n6) is a discretized Gaussian of width σ_add. Note that q6^add is discretized in the same manner as the distribution of secondary quanta discussed in Stage 2.
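The convolution in Stage 6 can be sketched directly with discrete distributions. Here a toy collected-charge distribution stands in for q5 (the true q5 depends on x and y); σ_add = 200 electrons is the Table I value:

```python
import numpy as np

# Stage 6 sketch: the distribution after adding electronics noise is the
# discrete convolution of the Stage-5 distribution with a discretized
# Gaussian noise distribution of width sigma_add. Toy q5 used here.
sigma_add = 200.0                                  # electrons (Table I)
n6 = np.arange(-1000, 1001)                        # additive-noise support
q_add = np.exp(-0.5 * (n6 / sigma_add) ** 2)
q_add /= q_add.sum()

n5 = np.arange(0, 4001)
q5 = np.exp(-0.5 * ((n5 - 2000) / 50.0) ** 2)      # toy collected-charge dist.
q5 /= q5.sum()

q6 = np.convolve(q5, q_add)                        # distribution of n5 + n6
n_out = np.arange(n5[0] + n6[0], n5[-1] + n6[-1] + 1)  # support of the sum
# q6 sums to 1; its variance is var(q5) + sigma_add^2 (independent sum)
```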
Stage 7: Application of the threshold. The probability of collecting a number of secondary quanta exceeding the threshold is calculated for all possible (x, y) locations of the aperture in Stage 5, implying that a single x-ray photon interaction may result in a count above threshold in multiple apertures. As detailed in Appendix A, depending on the characteristics of the detector system [e.g., the radius of charge carrier diffusion (σ3), the additive noise level (σ_add), and the threshold (t7)], such multiple counts ("double counts") can degrade signal fidelity by introducing false counts. Similarly, additive noise registering above threshold is a potential source of false counts. The relationship of detector threshold to false counts arising from charge sharing and/or additive noise is described in Appendices A and B.

Stage 8: Sampling. Finally, the signal is sampled at Stage 8, represented by multiplication of q7(x, y; t7) with a spatial-domain comb function, III8(x, y; b_x, b_y), with the parameters b_x and b_y equal to the sampling distance (pixel pitch) in the x and y directions and (x0, y0) the relative displacement between the point of photon interaction and the center of the apertures. Sampling corresponds to convolution in the Fourier domain between the (Fourier transform of the) presampling signal and a comb function at intervals of the sampling frequency (1/b_x). While q7 gives the likelihood of recording a count at a given threshold, the recorded signal itself is binary: 0 if the signal is below threshold and 1 if the signal (including true and false counts) is above threshold.
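The thresholding in Stage 7 reduces, at each location, to a tail sum of the charge-plus-noise distribution above t7. A minimal sketch with a toy distribution (the Gaussian pulse-height shape and its parameters are illustrative, not the detector's):

```python
import numpy as np

def count_probability(q, n_values, t7):
    """Probability that collected charge (plus noise) exceeds threshold t7,
    i.e., the tail sum of the distribution q over n_values > t7 (Stage 7)."""
    return q[n_values > t7].sum()

# Toy distribution: Gaussian around 1800 electrons with noise width 200.
n = np.arange(0, 4001)
q = np.exp(-0.5 * ((n - 1800) / 200.0) ** 2)
q /= q.sum()

p_count = count_probability(q, n, 1500)   # t7 = 1500 electrons (Table I)
# roughly the upper 1.5-sigma tail of this toy pulse-height distribution
```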

2.B. Fourier metrics of imaging performance
The geometry of the Si-strip detector ( Fig. 2 and Table I) allows analysis of MTF, NPS, and DQE in terms of a single spatial-frequency ( f x ) dimension (1D), since the aperture in the y direction (a y = 550 µm) is much greater than the aperture width in the x direction (a x = 50 µm) and the MTF in the y direction is determined by the aperture of a precollimator (not integral to the detector and not used in this work). Additionally, the individual Si wafers are isolated by metal septa in the y direction, preventing electron scatter between adjacent wafers; therefore, correlation in the y direction is considered negligible. The model detailed above provides a general basis for the 2D MTF, NPS, and DQE, and in Secs. 2.B.1-2.B.3, each metric is shown for 2D. In Secs. 3 and 4, analysis is shown in the 1D ( f x ) domain, since the central slice is a nearly complete representation of the Fourier characteristics of the system due to the large a y aperture.

2.B.1. MTF
The presampling MTF is computed from the point spread function [PSF(x, y)] at a given threshold t, taking MTF(u, v; t) = F_xy{PSF(x, y; t)}, where u and v are the Fourier coordinates associated with the x and y directions, respectively. The notation t is interchangeable with t7 (Stage 7, above). Theoretical calculation of the PSF and MTF is based on the distribution q7 shown above, and measurement of the presampling MTF is described below (Sec. 3.B).
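The PSF-to-MTF computation can be sketched numerically as the normalized modulus of the discrete Fourier transform of an area-normalized PSF. The toy PSF below is a Gaussian with the Table I diffusion width σ3 = 0.015 mm; the fine sampling grid is an illustrative choice:

```python
import numpy as np

dx = 0.001                                   # mm, fine presampling grid
x = np.arange(-2.0, 2.0, dx)
psf = np.exp(-0.5 * (x / 0.015) ** 2)        # toy Gaussian PSF, sigma_3
psf /= psf.sum()                             # area-normalized

T = np.abs(np.fft.rfft(psf))                 # modulus of the transfer function
mtf = T / T[0]                               # normalize so MTF(0) = 1
freq = np.fft.rfftfreq(x.size, d=dx)         # spatial frequency, cycles/mm
```

For a Gaussian PSF this reproduces the analytic MTF(f) = exp(-2*pi^2*sigma^2*f^2), a useful check of the normalization conventions.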

2.B.2. NPS
The NPS at a threshold, t, is computed from the distribution q7, where q̄0 is the incident x-ray fluence (photons/mm^2), γ is the system gain, and * denotes Fourier-domain convolution.

2.B.3. DQE
The system gain, MTF, and NPS are combined to yield the DQE,

DQE(u, v; t) = q̄0 γ̄^2(t) MTF^2(u, v; t) / NPS(u, v; t).

As described in Stage 7 and Appendix A, "false" counts are defined as the recording of a photon interaction when an interaction did not occur (e.g., due to electronic noise) or when an interaction also resulted in a count in another detector element (e.g., due to charge sharing); such counts represent a source of variability in estimation of the total number of photon interactions. The analysis of DQE therefore distinguishes true counts [i.e., the true system gain, γtrue(t), defined as the probability of one incident photon yielding exactly one count in the corresponding detector element] from the total counts arising from a single incident photon [i.e., the total system gain γ(t) in Eq. (8b)].
The effects of false counts on the DQE-in particular, the dependence of charge sharing and additive noise effects on detector threshold, and the potential to reduce false counts via coincidence detection-are investigated in Sec. 4.B.
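Combining the three quantities into a DQE can be sketched as below, assuming the standard cascaded-systems form DQE(u) = q̄0 γ̄^2 MTF^2(u)/NPS(u); all numerical values (fluence, gain, MTF shape, white noise floor) are illustrative, not measured values from this system:

```python
import numpy as np

u = np.linspace(0, 10, 101)                 # spatial frequency, cycles/mm
q0_bar = 1.0e5                              # photons/mm^2 (illustrative)
gamma = 0.25                                # counts per incident photon (toy)
mtf = np.exp(-0.1 * u ** 2)                 # toy MTF
nps = q0_bar * gamma ** 2 * (mtf ** 2 + 0.05)   # toy NPS with a white floor

# Standard cascaded-systems combination of gain, MTF, and NPS into DQE.
dqe = q0_bar * gamma ** 2 * mtf ** 2 / nps
# The white noise floor pulls DQE below 1 at u = 0 and degrades it
# progressively at high frequency, where MTF^2 falls below the floor.
```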

3.A. Imaging bench for photon counting CT
As illustrated in Fig. 2, an imaging bench was built to test PCD imaging performance as predicted by the model and serve as a basis for the development of new PCD CT systems. The bench includes an x-ray source (XRS-125-7K-P, Source-Ray, Ronkonkoma, NY), computer-controlled translation and rotation stages (PK266-03A-P1, Velmex, Bloomfield, NY, with minimum step size 0.00635 mm and 0.0125°, respectively), and an edge-on Si-strip PCD (MicroDose, Philips Healthcare, Solna, Sweden) originally developed for mammography, with (0.05 × 0.550) mm^2 pixel size (x and y directions) and 3.6 mm thickness (z direction), as illustrated in Fig. 2(C). An example CT reconstruction of a hand phantom (natural human skeleton in tissue-equivalent plastic) is shown in Fig. 2(D), acquired on the PCD bench at 70 kVp, 0.075 mAs per projection, 360 projections over 360°, and reconstructed using 3D filtered backprojection.
The detector features coincidence detection logic that identifies when counts are recorded by adjacent pixels within a small time window (τ co ). With coincidence rejection enabled, the two counts are considered to represent the same photon, and a count is assigned only to the pixel with the higher pulse height. The time window τ co determines the so-called dead time extension, as two or more distinct photon interactions occurring over adjacent pixels during this time window will also be considered coincident, resulting in loss of true signal. The model incorporates the effects of coincidence rejection as detailed in Appendix A. Assuming a detector dead time of ∼180 ns as specified by the manufacturer, a coincidence rejection dead time extension of τ co = 20 ns, 35 and fluence as reported in Table I for a 70 kVp beam, only ∼0.1% of all incident photons are within the dead time window even with coincidence circuitry active. These coincident counts are postprocessed such that the pixel recording the larger pulse height is taken to record the "true" count, and the other count is rejected. In the nominal readout mode, coincidence detection is on by default. For measurements without coincidence detection, the system deactivates every other pixel (giving pixel pitch b x = 0.1 mm) such that a coincident event in adjacent pixels will never be recorded. This scenario of b x = 2a x is not intended for typical image acquisition, but is included as a testing mode for investigating the effects of charge sharing. The measurement of presampling MTF is not affected by sampling distance, so the measurements presented in Sec. 4 are not affected by sampling effects resulting from turning off coincidence rejection.
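The ~0.1% figure can be checked with a rough Poisson-arrival calculation, assuming the nominal count rate from Sec. 2.A and the 180 ns dead time plus 20 ns coincidence extension; the combination of these numbers into one probability is our sketch, not the paper's calculation:

```python
import math

rate = 6.0e3          # x-rays/pixel/s at nominal conditions (Sec. 2.A)
tau = 180e-9 + 20e-9  # dead time plus coincidence extension, seconds

# Probability that another photon arrives within tau of a given count,
# assuming Poisson arrivals: 1 - exp(-rate*tau) ~ rate*tau for small values.
p_within = 1.0 - math.exp(-rate * tau)
# on the order of 1e-3, consistent with the ~0.1% quoted in the text
```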
Measurements were performed at a tube voltage of 70 kVp, added filtration of 2 mm Al plus 1.2 mm of Cu (approximating attenuation by 10 cm water), tube current varied from 1 to 7 mA, x-ray pulse duration of 15 ms, and detector readout at 1 frame/s. A basic calculation of tube output in spektr 36 suggests that a typical exposure (70 kVp, 4 mA, 15 ms pulse, 10 cm water equivalent filtration) with a source-detector distance of 653 mm and pixel size of (0.05 × 0.55) mm 2 amounts to fewer than 100 photons per pixel per frame, which is well below the count rate limit of this PCD. 35

3.B. Measurement of detector signal, MTF, and NPS
The performance of the Si-strip PCD system (Fig. 2) was evaluated in terms of the mean signal, MTF, NPS, and DQE. To relate the measured and predicted detector response, the threshold in secondary quanta (t, with units of secondary quanta) must be converted to the detector pulse height threshold (D, in arbitrary units of detector threshold corresponding to pulse height in millivolts). Section II of the supplementary material provides further discussion of the empirical relationship of t and D, a mapping governed by the pulse shaper and digitizer in converting charge carriers to a voltage pulse height.57 A comparator determines whether a count is above the voltage threshold D, and the pulse height increases monotonically with the number of charge carriers. Saturation occurs when the pulse height exceeds the capacity of the pulse shaper and digitizer, resulting in a nonlinear mapping between charge carrier threshold and pulse height threshold. The mapping was described by an empirical fit of the form D = c1 − c2 e^(−c3 t) (for t > c4) and D = c2 c3 e^(−c2 c3 t) (for t ≤ c4), where c1 describes the maximum detector threshold, c2 the zero-threshold offset, c3 the saturation rolloff, and c4 the range of linear operation. The parameters c1-c4 were determined by fitting the predicted mean signal q̄0 γ(t) to the measured detector signal (at 70 kVp, 4 mA, 15 ms pulse time) and were found to be consistent with previous work35 in mammography (e.g., ∼25 keV saturation energy and ∼3 keV noise floor). These parameters were sufficient for testing the model and could be adjusted for other potential applications, e.g., a higher saturation energy for CT applications involving a higher-energy incident spectrum.
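The saturating branch of this empirical mapping can be fit by ordinary nonlinear least squares. A sketch using scipy.optimize.curve_fit on synthetic data (the coefficient values and noise level here are illustrative, not the detector's actual calibration):

```python
import numpy as np
from scipy.optimize import curve_fit

def mapping(t, c1, c2, c3):
    """Saturating branch of the threshold mapping, D = c1 - c2*exp(-c3*t)."""
    return c1 - c2 * np.exp(-c3 * t)

# Synthetic "measured" data with illustrative coefficients and noise.
t = np.linspace(500, 4000, 50)                     # charge threshold, electrons
rng = np.random.default_rng(0)
D = mapping(t, 180.0, 200.0, 1.0e-3) + rng.normal(0, 1.0, t.size)

popt, pcov = curve_fit(mapping, t, D, p0=(150.0, 150.0, 5e-4))
# popt recovers (c1, c2, c3) to within the added noise
```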
The mean detector signal was measured from flood-field images acquired at various settings of detector threshold. Gain correction was performed to account for residual pixel-to-pixel differences after trimming the individual thresholds to adjust for varying pulse height amplification. An empirical fit of the mean signal response (Fig. 3) with and without coincidence rejection suggested a coincidence rejection efficiency of r_m = 0.35. This relatively low rejection efficiency is likely due to "leakage" resulting from an inability of the circuitry to distinguish between two simultaneous pulses of similar (saturated) pulse heights at 70 kVp.35 The MTF was measured using a 0.5 mm thick tungsten edge abutting and parallel to the face of the detector. The beam was collimated to ∼1 × 1 cm^2 at the face of the x-ray tube to minimize off-focal radiation. An oversampled edge-spread function (ESF) was formed from 30 images of the tungsten edge, with the edge translated in increments of 6.35 µm by the computer-controlled translation stage between images. Data from a continuous row of pixels were analyzed for each image, and the oversampled ESF was generated by interleaving the ESF images according to the displacement of the tungsten edge. A numerical derivative of the ESF was computed, and the tails of the resulting line spread function (LSF) at less than 1% of the peak magnitude were smoothed by a sliding (1 × 7) mean filter to reduce high-frequency noise.43,44 The discrete Fourier transform of the area-normalized LSF was computed to arrive at the presampling MTF.
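The ESF-to-MTF pipeline above can be sketched as follows: differentiate the oversampled ESF, area-normalize the LSF, and Fourier transform. A synthetic error-function edge (blur width 0.015 mm, sampled at the 6.35 µm translation increment) stands in for the interleaved tungsten-edge data, and the tail smoothing step is omitted for brevity:

```python
import numpy as np
from math import erf

dx = 0.00635                                  # mm, edge translation increment
x = np.arange(-1.0, 1.0, dx)
esf = np.array([0.5 * (1 + erf(xi / (0.015 * np.sqrt(2)))) for xi in x])

lsf = np.gradient(esf, dx)                    # numerical derivative of the ESF
lsf /= lsf.sum()                              # area-normalized LSF

mtf = np.abs(np.fft.rfft(lsf))                # presampling MTF estimate
mtf /= mtf[0]
freq = np.fft.rfftfreq(x.size, d=dx)          # spatial frequency, cycles/mm
```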
The NPS was measured from an ensemble of flood-field images at detector threshold values ranging from D = 0 to D = 150 (t ranging from 0 to 2.4 × 10^3 electrons). For each threshold, 30 flood-field images were acquired (70 kVp, 2 mm Al and 1.2 mm Cu added filtration, 1-7 mA, 15 ms pulse length). Each flood-field image comprised ∼100 individual edge-on Si-strip detector wafers, with ∼1500 pixels per wafer. These data were processed in 58 regions of interest (300 contiguous pixels each, within a single wafer), yielding a total of 1740 noise realizations for each tube current and threshold setting. These data were linearly detrended to account for the anode heel effect, and each image was gain-corrected at each threshold. The mean signal was subtracted from each noise realization, and the squared modulus of the Fourier transform was computed and normalized by the pixel pitch and number of pixels in each realization.45 The resulting 1740 NPS estimates were averaged to yield the ensemble NPS.
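The NPS estimate described above (mean subtraction, |FFT|^2, normalization by pixel pitch and number of pixels, averaging over realizations) can be sketched with synthetic white noise standing in for the detrended flood-field data; the per-pixel variance is an illustrative value:

```python
import numpy as np

rng = np.random.default_rng(1)
b_x = 0.05                                   # mm, pixel pitch
n_pix, n_real = 300, 1740                    # ROI size, realizations (Sec. 3.B)
var = 25.0                                   # counts^2 per pixel (toy)

nps = np.zeros(n_pix)
for _ in range(n_real):
    roi = rng.normal(0.0, np.sqrt(var), n_pix)   # toy noise realization
    roi -= roi.mean()                            # mean subtraction
    nps += np.abs(np.fft.fft(roi)) ** 2 * b_x / n_pix   # |FFT|^2 * pitch / N

nps /= n_real                                # ensemble average
# For white noise the NPS is flat with magnitude variance * pitch.
```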

4.A. Comparison of theory and measurement
The predictions of mean signal, MTF, and NPS derived from cascaded systems analysis were compared to measurements at nominal parameters (70 kVp, without coincidence rejection) as described in Sec. 3. As shown in Fig. 3(A), the detector signal response was measured at various tube currents and detector thresholds, with the mean signal predicted as in Sec. 2. The mean signal decreases monotonically as the detector threshold is increased, reflecting the larger number of counts rejected at higher threshold. As shown in Fig. 3(B) for a fixed threshold (D = 100), the mean signal, q̄0 γ(t), is linear with exposure (tube current). The system gain for one incident photon, γ, does not exhibit a dependence on exposure over the range of tube currents investigated in the current system. A slight deviation from linearity is observed only at tube current greater than ∼7 mA, confirming that the PCD operates at count rates well below the pileup regime throughout the 1-7 mA range. As shown in Fig. 3(C), the MTF was evaluated at low (D = 0) and nominal (D = 100) detector thresholds, showing an improvement in transfer characteristics at higher threshold. As the distance between the point of photon interaction and the center of the pixel increases, the likelihood of collecting secondary quanta in the pixel is reduced (per Stages 5 and 7 of the model). Therefore, raising the threshold rejects signal collected far from the point of interaction and improves the effective PSF by reducing the relative contribution of adjacent apertures from multiple counts or detector crosstalk. (See supplementary material Sec. III for analysis of the PSF at various levels of detector threshold.57) A potential disadvantage of a higher threshold is reduced signal, as shown in Fig. 3(A). Overall, theory and measurement were in reasonably good agreement, quantified in terms of Pearson's correlation coefficient (R value in a linear regression of measured versus theoretical values).
The correlation coefficient was greater than 0.93 for all results shown in Figs. 3-5 except where specifically noted.
The spatial-frequency-dependent NPS is shown in Fig. 4(A) and is found to be largely uncorrelated at all exposure levels for the nominal operating threshold, despite the detector crosstalk caused by charge sharing evident in the PSF. The whitening of the NPS arises from undersampling associated with doubling of the sampling distance (b_x = 0.1 mm) in this readout mode (see Sec. 3.A) without a corresponding doubling of aperture size. If the detector could be fully sampled in this readout mode (and the aliasing effect correspondingly reduced), then the broadening of the PSF would be more clearly evident in bandlimiting of the NPS.26,46 Such results are demonstrated in the supplementary material, Sec. III.A.57 As shown in Fig. 4(B), the individual pixel noise computed from the standard deviation of a single pixel in successive frames was seen to increase with the square root of exposure, as expected for a Poisson-distributed random variable, in good agreement with model predictions. In Fig. 4(C), the individual pixel noise is plotted as a function of threshold (with pixel noise given by the integral of the 2D NPS over the Nyquist region). The frequency dependence of the NPS (not shown) was verified in this readout mode to be white (uncorrelated) for all threshold values D > 20. At the lowest threshold settings, the magnitude of the noise increases dramatically due to additive noise. This effect is predicted by the model (Appendix B); however, individual pixels with varying behavior in the pulse shapers and small differences in gain (detector trim differences) were found to count additive noise in varying degrees, and the error in gain calibration resulted in a slight overestimation of the noise at D < 10 (giving a reduced correlation coefficient of 0.81).
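The pixel-noise check used in Fig. 4(C), namely that pixel variance equals the integral of the NPS over the Nyquist region (Parseval's relation), can be sketched in 1D for a white NPS; the magnitude and pitch values are illustrative:

```python
import numpy as np

# Pixel variance as the integral of a white NPS over the Nyquist region (1D).
b_x = 0.1                                    # mm, pitch in this readout mode
S0 = 1.25                                    # mm, toy white NPS magnitude

f_nyq = 1.0 / (2.0 * b_x)                    # Nyquist frequency, cycles/mm
pixel_variance = S0 * 2.0 * f_nyq            # integral over [-f_nyq, f_nyq]
pixel_noise = np.sqrt(pixel_variance)        # standard deviation per pixel
```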

4.B. Effect of charge sharing on PCD performance
The effects of charge sharing on PCD performance primarily involve a contribution of false counts [calculated from q7(x, y; t)] from multiple adjacent pixels recording a count for the same photon interaction. As discussed in Appendix A, the effective PSF is a weighted combination of rect functions representing the PSF of a single count, a double count, etc. As the threshold is reduced, the contribution from instances of multiple counting increases, giving greater weight to the multiple-count rect functions and broadening the overall PSF. Supplementary material Sec. III.A provides further analysis and discussion.57 The effects of charge sharing on signal response are shown in Fig. 5(A), where the predicted and measured mean signals are shown for the detector operated with anticoincidence circuitry enabled. A discrepancy at high threshold might be expected due to channel leakage associated with high-energy charge sharing events as previously reported35 for this detector (with reduced correlation coefficients of 0.91 and 0.93 for the 4 and 7 mA cases, respectively). These effects are evident in the measurements of mean signal and spatial resolution, as seen in the 7 mA measurements in Fig. 5. Channel leakage can cause the majority of counts recorded at high threshold to be saturated, which can confound the coincidence rejection logic, resulting in a higher than expected signal. For the nominal detector threshold, however, the linearity of the signal response was preserved when the coincidence rejection circuitry was enabled, indicating that the dead time loss remained largely unchanged. In comparing the mean signal with coincidence rejection [Fig. 5(A)] to that without coincidence rejection [Fig. 3(A)], we observe little or no effect on the number of counts reported at high thresholds, since most counts at high thresholds are single counts, whereas at low threshold (D < ∼50) the signal is reduced by ∼10%-20% due to the large proportion of double counts.
Analysis of the count coefficients, w_m (detailed in Appendix A), showed that coincidence rejection reduced the ratio of double counts (w_2) to single counts (w_1), which is reflected in Fig. 5 as an improvement in the presampling MTF at all thresholds [e.g., comparing Figs. 5(B) and 3(C)]. The improvement is most pronounced at low thresholds, where charge sharing events are strongest, although it is still apparent to a smaller degree at the nominal threshold of D = 100 [Fig. 5(B)]. The dependence of the MTF on threshold in coincidence rejection mode is less pronounced than predicted by the model (with a reduced Pearson's coefficient of 0.91 and 0.92 for D = 0 and D = 100, respectively), which is attributed in part to the channel leakage effect. The improved MTF [i.e., narrower PSF as shown in Fig. 5(C)] indicates improved spatial resolution associated with coincidence rejection.
The effects of charge sharing on spatial-frequency-dependent detector performance are further illustrated in Figs. 6(A)-6(D) as a function of exposure conditions (beam energy) and PCD parameters (threshold, charge carrier diffusion, and pixel size). As shown in Fig. 6(A), the DQE at low threshold (D = 0) and low energy (35 kVp) suffers without coincidence rejection due to an increase in false counts. At the same energy, raising the threshold improves both the high-frequency DQE and the zero-frequency DQE. A detector with perfect coincidence rejection efficiency (CRE, r_m = 1) was simulated, and the resulting DQE at low and high thresholds is shown as dotted lines. With perfect coincidence rejection, the threshold effect is eliminated. At higher tube voltage [70 kVp, Fig. 6(B)], the detector performance at low and high thresholds without coincidence rejection mirrors that at low energy, but with an overall decrease in the DQE due to a reduction in quantum detection efficiency. For a detector with perfect coincidence detection, the DQE at high threshold is worse than that at low threshold due to rejection of true low-energy counts arising from Compton interactions.

FIG. 6 (caption, in part). The reduction of DQE(0) at low coincidence rejection efficiency is due to a reduction in the true count fraction, shown in (E). In (F), the optimal detector threshold is shown at 35 and 70 kVp, demonstrating a strong dependence of optimal threshold on kVp. The reduction of charge sharing effects by coincidence rejection is shown to benefit DQE(0) computed as a function of (G) charge carrier diffusion radius and (H) detector threshold.

[...] events to true counts.57 The effect is also somewhat apparent at 35 kVp, but at 70 kVp a larger portion of interactions are Compton, making the effect more pronounced.
In Fig. 6(C), the effect of charge carrier diffusion is shown for three different diffusion lengths (σ_3). Improving the charge carrier spread function (reducing σ_3) increases both the high-frequency performance (as expected from reduced blur) and improves DQE(0) due to a reduction in false counts from charge sharing. Similarly, changing the aperture or pixel size [Fig. 6(D)] reflects a tradeoff between a loss in high-frequency performance due to a reduction in the Nyquist frequency and an improvement in the low-frequency DQE from a reduction of charge sharing. It should be noted that increasing the aperture size has distinct implications for PCDs compared, for example, to FPDs. Increasing the aperture size does not give an appreciable improvement in the ratio of signal to additive noise, unlike in EIDs.47 Instead, the benefit of larger apertures stems from a reduction in the chance that a single photon will contribute secondary quanta to multiple pixels, i.e., a reduced probability of charge sharing.
The true count fraction, k, is defined as k = γ_true/γ and shown in Fig. 6(E). The true count fraction computed as a function of threshold (dotted line) indicates that more than half of all recorded counts at low threshold (D < 20) are false counts (from both charge sharing and additive noise), but at high thresholds (D > 150), almost all counts are true. A "perfect" threshold set equal to the maximum possible number of secondary quanta generated by a photon interaction would yield k = 1, but the system gain (γ) would be nearly zero. The solid line shows the true count fraction as a function of the ratio of diffusion radius (σ_3) to aperture size (a_x). For diffusion radius much smaller than aperture size, nearly all secondary quanta generated by a photon interaction are collected by a single pixel, rendering charge sharing negligible. On the other hand, if the charge carrier diffusion radius is much greater than the aperture size, almost all photon interactions result in multiple pixels receiving some secondary quanta, increasing false counts. Figure 6(F) shows the optimal detector threshold for various levels of coincidence rejection efficiency at 35 and 70 kVp. The optimal threshold is that which maximizes DQE by best separating the additive noise from the signal and balancing the reduction in charge sharing against the corresponding reduction in true signal. For a 35 kVp beam, a high threshold is optimal: increasing the threshold rejects relatively few true counts (the low-energy Compton peak is small, Fig. 7) compared to the large number of false counts resulting from charge sharing. For a 70 kVp beam, there is a much larger Compton peak compared to the additive noise, and the optimal threshold balances the rejection of both additive noise and Compton signal against charge sharing rejection.
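The dependence of the true count fraction k on the ratio of diffusion radius to aperture size can be illustrated with a simple one-dimensional Monte Carlo sketch. The Gaussian charge cloud, the threshold expressed as a fraction of the total liberated charge, and all numerical values are assumptions for illustration, not the paper's full model.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)

def count_fractions(sigma_ratio, threshold_frac, n_photons=20000):
    """1D toy model of charge sharing: a Gaussian charge cloud of width
    sigma (in units of the aperture a_x) lands at a random position in a
    pixel; each pixel collecting more than `threshold_frac` of the total
    charge records a count. Returns (gamma, gamma_true, k)."""
    x0 = rng.uniform(0.0, 1.0, n_photons)   # interaction position in home pixel
    edges = np.arange(-3.0, 5.0)            # pixel edges, units of a_x
    # Fraction of the cloud collected by each pixel (Gaussian CDF differences)
    cdf = 0.5 * (1 + np.vectorize(erf)((edges[None, :] - x0[:, None])
                                       / (sigma_ratio * np.sqrt(2))))
    frac = np.diff(cdf, axis=1)             # shape (n_photons, n_pixels)
    counted = frac > threshold_frac
    m = counted.sum(axis=1)                 # counts per photon
    gamma = m.mean()                        # total system gain
    gamma_true = counted[:, 3].mean()       # home pixel is bin [0,1) -> index 3
    return gamma, gamma_true, gamma_true / gamma

for s in (0.05, 0.5, 2.0):
    g, gt, k = count_fractions(s, threshold_frac=0.1)
    print(s, round(g, 2), round(k, 2))
```

As σ_3/a_x grows, the total gain γ rises above unity while k falls, mirroring the trend of the solid line in Fig. 6(E).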
Figures 6(G) and 6(H) illustrate PCD performance in terms of DQE(0) for a number of pertinent parameters. The zero-frequency DQE is shown for brevity, and since the DQE(u) curves of Figs. 6(A)-6(D) are nearly flat over a fairly broad range of parameters, the analysis conveys many of the pertinent ramifications of the system design parameters under consideration. Figure 6(G) shows that as charge carrier diffusion increases, coincidence rejection becomes increasingly important. Figure 6(H) shows that for a given coincidence rejection efficiency, the detector threshold needs to be selected to balance tradeoffs among additive noise, charge sharing, and collection of true signal, as discussed in relation to Fig. 6(F).

4.C. Effects of additive noise on PCD performance and spectral resolution
As detailed in Appendix B, the effects of additive noise on PCD performance primarily involve a contribution of false counts at low thresholds (largely determined by the behavior of the pulse shaper and the ASIC pulse height gain) and a "blurring" effect on the detected energy spectrum. As seen in Fig. 7(A), for the PCD operating without coincidence detection, the nominal additive noise contribution to the signal is relatively small for energies greater than 1 keV. However, compared with a PCD capable of perfect coincidence rejection [Fig. 7(B)], nearly all of the signal at E < 10 keV (at 35 kVp) [and E < 30 keV at 70 kVp in Fig. 7(A)] is due to charge sharing events registered as counts. Even with perfect coincidence rejection, however, a threshold that is high enough to reject additive noise unavoidably rejects a portion of the low-energy counts resulting from Compton interactions. This is tantamount to a reduction of the quantum detection efficiency of the detector and contributes to the different ranges of optimal detector threshold seen in Fig. 6.
The effects of additive noise and coincidence rejection efficiency on DQE(0) are shown in Figs. 7(C) and 7(D). In addition to the strong dependence of optimal threshold on beam energy shown in Fig. 6(F), the optimum depends on additive noise and coincidence rejection efficiency. Without coincidence rejection [Fig. 7(C)], the optimal threshold exhibits a weak dependence on additive noise until the magnitude of the additive noise overcomes the charge sharing effects contributing to false counts (σ_add > 1500). In Fig. 7(D), on the other hand, a detector with perfect coincidence rejection has an optimal threshold with a much stronger dependence on additive noise, since there is no contribution of false counts from charge sharing at low thresholds. The result is intuitive in that reduction of charge sharing effects allows a lower threshold (and higher signal) with reduced effect of additive noise.

4.D. Potential advantages (and disadvantages) of photon counting
While EIDs (for example, FPDs) have been a widespread base technology for x-ray detection for more than 15 years, and PCDs have become increasingly prevalent over the last decade, the fundamental advantages and disadvantages of each have been assessed with only limited rigor. For PCDs, the benefit of reduced (effectively zero) electronics noise is often noted by virtue of thresholding, as is the improved energy weighting and the potential for energy discrimination and spectral imaging.48 The latter effects are outside the scope of the current work and were not included in the analysis below. However, at least a portion of the low-energy Compton interactions is lost in selecting a threshold that rejects the entirety of the additive noise distribution [for example, as in Figs. 7(A) and 7(B)]. To compare the fundamental advantages and disadvantages of the thresholding step, consider a hypothetical EID with the same nominal parameters as the PCD system described above (Table I). For purposes of this analysis, the only difference between the PCD and the hypothetical EID is the ability of the former to threshold the detected signal at a voltage pulse height corresponding to the number of collected secondary quanta and the resulting binary nature of the recorded signal. The effects of dead time loss (pulse pileup and chance coincidence) are ignored, as discussed above. The EID therefore has equivalent quantum detection efficiency (g_1), gain and spread in secondary quanta (g_2 and T_3), aperture size (T_5), electronics noise (σ_add), etc. and was modeled according to well-established cascaded systems analysis in previous work.49 As shown in Fig. 8(A), a PCD system operating with perfect coincidence rejection (r_m = 1) at a typical threshold level (E = 6 keV, approximately equal to D = 100 for the system in Sec.
3.A) selected to reject additive noise suffers a slight reduction in DQE(0) compared to an identically parameterized EID due to the loss of low-energy Compton interactions. Reduction of the threshold (e.g., to E = 0.5 keV, approximately equal to D = 5 for the system in Sec. 3.A) results in an improvement at lower additive noise values but causes the PCD to suffer as additive noise is increased. Furthermore, Figs. 8(B) and 8(C) show that as the coincidence rejection efficiency decreases, the advantage of PCDs at all levels of additive noise and dose decreases appreciably, but the PCD still maintains regions of operation at high noise and low dose that are advantageous in comparison to the hypothetical EID. This analysis allows selection of an optimal threshold that balances the tradeoffs between the loss in Compton counts and the influence of additive noise. With reduced coincidence rejection efficiency, false counts from charge sharing occur at low-energy thresholds [see Fig. 6(A)] and remove the advantage of a low threshold, which would otherwise preserve the low-energy Compton signal.

FIG. 8. Performance of a PCD in comparison to a hypothetical EID of equivalent design (but without the ability for signal thresholding). The plot shows DQE(0) for the two systems as a function of additive noise at (A) perfect coincidence rejection efficiency (r_m = 1), (B) imperfect coincidence rejection efficiency (r_m = 0.5), and (C) no coincidence rejection (r_m = 0). The DQE(0) for the PCD is nearly independent of dose and was evaluated at two threshold settings [nominal 6 keV (solid black line) and a low-energy threshold of 0.5 keV (dotted black line)]. The DQE(0) for the EID is shown at three dose levels: 10 µR (dotted gray line), 5 µR (solid gray line), and 1 µR (dashed gray line).
Further comparison between the PCD and the hypothetical, identical EID is shown as a function of dose, additive noise, incident energy, threshold, and coincidence rejection efficiency in Fig. 9. These calculations show the range of operating conditions at low dose and/or high additive noise for which the application of a threshold in the PCD system is beneficial in comparison to the energy-integrating system. Note, however, that the hypothetical EID enjoys a quantum gain (ḡ_2 ≈ 8000 electrons) that is much larger than that of typical scintillators and less efficient semiconductors, so the performance of the hypothetical system at low dose (and higher additive noise) is better than should be expected for a realistic EID. The point here is to illustrate the fundamental advantage (and disadvantage) of thresholding, all other factors being equal; analysis for a realistic EID (viz., an FPD) is shown below.
Perfect coincidence rejection is assumed in Fig. 9(A) for a 35 kVp beam, leaving additive noise as the only contributor of false counts for the PCD. The threshold was fixed at E = 10 keV to give good separation of the additive noise (σ_add = 0-3000 electrons) from the true signal spectrum. In the regime of low additive noise and high dose, the EID is shown to slightly outperform the PCD, because the EID is operating in close to ideal circumstances (strongly quantum limited) while the PCD suffers a small loss due to the rejection of the Compton interactions. Spatial frequency was evaluated and found to have minimal effect on the relative performance of the PCD versus the EID. Further evaluation of DQE(u,v) is within the capability of the model and of interest for future work. In Fig. 9(B), the identical scenario is shown for a 70 kVp beam. At higher incident energies, the Compton interaction cross section comprises a larger portion of the total cross section, and a threshold of E = 10 keV therefore rejects a larger portion of the interacting photons. This results in a stronger reduction in DQE(0) and a reduction in the dose range for which the PCD is advantageous in comparison to the EID.
The relative performance of EIDs and PCDs with suboptimal coincidence rejection and different thresholds is discussed further in supplementary material Sec. III.B, showing that the region where PCD performance is equivalent or superior to that of the hypothetical EID depends strongly on coincidence rejection efficiency.57 Figure 9(C) summarizes these results (at a fixed dose and additive noise level) as a function of coincidence rejection efficiency and detector threshold. The EID performance is independent of these parameters. For a given additive noise level, dose, and coincidence rejection efficiency, there is an optimal threshold at which PCD performance is maximized (indicated with the dashed black line). At low-energy threshold and low coincidence rejection efficiency, the PCD performance suffers due to the introduction of false counts from charge sharing effects. As the threshold is increased above ∼25 keV, the PCD again suffers in comparison to the EID due to the rejection of true signal (high-energy Compton interactions and photoelectric interactions).
In Fig. 9(D), the spatial-frequency-dependent DQE of the PCD and hypothetical EID is shown at fixed dose, additive noise, threshold, and coincidence rejection efficiency. The PCD shows a modest improvement at low spatial frequency and a more pronounced improvement in comparison to the EID at higher frequencies. To provide a realistic basis of comparison, and because the gain at Stage 2 is so high (higher than would be expected for a typical EID employing a scintillator), the performance is plotted in comparison to that of a "typical" FPD as modeled in previous work.49-51 The FPD was modeled according to a CsI:Tl scintillator (150 mg/cm² thickness), ḡ_2 = 900, T_3 determined by the scintillator thickness, ḡ_4 = 0.99, T_5 given by a sinc function for a_x = 0.05 mm, and σ_add = 1000 e. This more realistic representation of an energy-integrating FPD exhibits lower gain in Stage 2 and a blurrier Stage 3, achieving performance comparable to the PCD only down to ∼1 mR. As shown in Fig. 9(D), the DQE for the FPD (gray dashed curve) is higher at low spatial frequency due to the higher atomic number of the CsI:Tl scintillator (higher g_1); however, the high-frequency performance is limited by blur in the scintillator, which greatly exceeds that of charge carrier diffusion in the silicon-strip detector.

CONCLUSIONS
A cascaded systems model has been developed for the analysis of the imaging performance characteristics of PCD systems, building upon more than a decade of well-established modeling of signal and noise propagation in energy-integrating detectors (e.g., FPDs) and extending the analysis to include effects that are unique to PCDs. The model was validated in comparison to measurements and illustrates the threshold dependence of the MTF, extending previous work by Acciavatti and Maidment.26 The model also complements recent work in detector physics specifically modeling a single stage (e.g., spectral distortion effects52,53) to include a more complete framework for spatial-frequency-dependent signal and noise characteristics. Among the findings of the current work are (i) illustration of numerous important factors of system performance, such as the effects of detector threshold on DQE as reported by Tanguay et al.,33 (ii) revelation of the effects of charge sharing and false counts, (iii) validation of theoretical predictions in comparison to experimental measurements on a PCD benchtop across a wide range of exposure and detector operating conditions, (iv) formulation of a framework for system optimization, and (v) a basis for more rigorous understanding of the potential advantages (and disadvantages) of PCDs in comparison to conventional energy integrators. The model also introduces the concept of a threshold-dependent effective PSF given by a weighted combination of aperture functions and incorporates coincidence rejection in the analytical framework for PCD imaging performance.
The current model is not without limitations, as acknowledged in part in Sec. 2. Among these is the assumption that pulse pileup-in which multiple photons are incident onto the same pixel during the dead time of the detector-and chance coincidence-in which multiple photons are incident on adjacent pixels at a nearly coincident time and blocked by the anti-coincidence logic-are negligible. As demonstrated in the experimental measurements, however, these assumptions appear to be valid for the PCD used in this work over nearly the entire range of operating techniques [kilovolt (peak) and milliamperes], where only a small departure from linearity was observed at the highest exposure rates. For PCD systems with larger pixel size, more compact geometry, and increased dead time compared to the Si detector considered in this work, pileup effects may degrade detector response. Extension of the analytical model to encompass both charge sharing (as shown in this work) and pileup effects (to be considered in future work) would benefit system design and optimization for such configurations.
The model was shown to demonstrate reasonable agreement with the measured signal and noise characteristics of the PCD, including spatial resolution (MTF), spectral characteristics (signal versus threshold), and stochastic noise characteristics (NPS). These aspects of detector performance are important to consider as such PCD systems begin to enter application in various areas of radiographic/mammographic and tomographic imaging. Such characteristics not only govern the fundamental low-dose performance of such systems but also the ability for energy discrimination, which is the subject of future work in extension of the model. The model adapts equally well to a variety of PCD configurations distinct from the Si-strip detector considered in this work. For example, the model allows analysis of the potential benefit of improved quantum detection efficiency of detector materials such as CdTe (higher atomic number) weighed against the tradeoffs of poor charge transport of holes (increased σ 3 and reduced g 4 ) and K-fluorescence. In particular, K-fluorescent photons can degrade the resolution and increase charge sharing events. This can be incorporated in the model as a parallel cascade 54 representing spatial relocation at Stage 2. As in previous work, 55 the distribution for photoelectric and Compton interactions would be split in Stage 1, treated independently in Stages 2-4 (including K-fluorescent effects) and recombined in Stage 5. Finally, the model provides a rigorous basis for understanding the fundamental performance advantages and limits of PCDs in comparison to conventional energy integrators. Future work includes extension of the model to distributions of quanta in the temporal domain (pulse pileup and chance coincidence), a more complete detector scatter model (Rayleigh and multiple Compton scatter), extension of the model to other detector material types, 3D image reconstruction, and material decomposition in spectral CT.

ACKNOWLEDGMENTS
The research was supported by NIH Grant Nos. 2R01-CA-112163 and T32EB010021. The authors extend thanks to Håkan Langemark, Mathias Beer, and Vignesh Natarajan (Philips Healthcare) for assistance with instrumentation of the PCD used in this work.

APPENDIX A: THE EFFECT OF CHARGE SHARING ON PCD PERFORMANCE
As described in Sec. 2.A, a single photon can contribute counts to multiple detector elements. For example, if the spatial extent of the charge carrier cloud arising at Stage 3 spans more than a single pixel, all pixels that collect charge carriers from a photon interaction have a chance to record a count, resulting in a system gain that can be greater than unity. This phenomenon, called charge sharing or pixel crosstalk, causes a loss of fidelity in the recorded data. Both the true system gain, γ_true(t), and the total system gain, γ(t), can be computed from the distribution q_7(x,y;t) derived in Stage 7 as an expectation value of the sampled signal, with consideration for the sampling step in Stage 8 yielding an "unmodulated" signal of 1 when a count is recorded and a signal of 0 if a count is not recorded. A count coefficient, ŵ_m(t), can be computed to describe the likelihood that a single incident photon at threshold t will contribute counts to exactly m pixels. For example, m = 1 corresponds to a coefficient ŵ_1(t) describing the probability of one pixel recording one count, m = 2 corresponds to the probability ŵ_2(t) of two pixels recording one count each (for a total of two counts), etc. By definition, ŵ_0 = 0, since it is assumed that if a photon generates a detectable signal, then the likelihood of zero counts is zero. Note that the likelihood of interaction of the photon is included in the calculation of q_7 from Stage 1. The effects of charge sharing can be at least partially mitigated in PCD systems, such as the one in this work, by detection of temporally coincident counts in adjacent pixels.
This effect can be included in the count coefficients as a modification of ŵ_m for m > 1 by a CRE coefficient, denoted r_m,

    w_m(t) = (1 − r_m) ŵ_m(t) for m > 1, with w_1(t) = ŵ_1(t) + Σ_{m>1} r_m ŵ_m(t).    (A2)

A rejection efficiency of r_m = 1 corresponds to perfect coincidence detection (a photon interaction contributes counts only to the pixel directly beneath it) and a rejection efficiency of 0 means nothing is rejected (e.g., a PCD without coincidence detection). The expected mean signal resulting from a single photon interaction, γ(t), can then be computed as

    γ(t) = ⟨ Σ_{m≥1} m w_m(t) ⟩_{x_0},    (A3)

where γ(t) can be equivalently interpreted as the total system gain (detector counts per incident x-ray) and x_0 represents the phase difference between the sampling matrix and the photon interaction.56 The expected true system gain is then γ_true(t) = w_1(t), where γ_true(t) can be equivalently interpreted as the likelihood that one incident photon will contribute one count to the detector element directly under the point of interaction. The count coefficients, w_m(t), are ideally limited simply by the quantum detection efficiency (the likelihood of the photon interacting at Stage 1) and the integration of secondary quanta at Stage 5 resulting in a count above threshold. However, with a sufficiently broad spread (σ_3) of secondary quanta at Stage 3, a small aperture size (a_x) at Stage 5, or a low threshold (t) at Stage 7, the terms w_m for m > 1 can be nonzero, meaning that more than one pixel records a count from a single interaction. As a result, γ(t) can in principle exceed unity due to false (alternatively, "double") counts. The model describes how both charge sharing and additive noise can result in such false counts, presenting a source of error that must be accounted for in the propagation of signal and noise.
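The count-coefficient bookkeeping can be sketched as follows, assuming a single rejection efficiency r for all multiplicities and assuming a rejected multiple-count event is resolved into a single count in the home pixel (consistent with the description of r_m = 1 above); the coefficients ŵ_m used here are hypothetical.

```python
def apply_cre(w_hat, r):
    """Modify raw count coefficients w_hat[m] (probability that one photon
    yields counts in exactly m pixels, m = 1, 2, ...) by a coincidence
    rejection efficiency r: a fraction r of each multiple-count event is
    resolved into a single count in the home pixel. Sketch only, assuming
    a single r for all m."""
    w = dict(w_hat)
    restored = sum(r * w_hat[m] for m in w_hat if m > 1)
    for m in list(w):
        if m > 1:
            w[m] = (1 - r) * w_hat[m]
    w[1] = w_hat.get(1, 0.0) + restored
    return w

def gains(w):
    gamma = sum(m * wm for m, wm in w.items())   # total counts per photon
    gamma_true = w.get(1, 0.0)                   # true system gain
    return gamma, gamma_true

w_hat = {1: 0.5, 2: 0.3, 3: 0.1}                 # hypothetical coefficients
for r in (0.0, 0.5, 1.0):
    g, gt = gains(apply_cre(w_hat, r))
    print(r, round(g, 2), round(gt, 2))
```

With r = 1, γ = γ_true (every detected photon contributes exactly one count to the home pixel); with r = 0, γ exceeds unity through the double and triple counts.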
Furthermore, the PSF associated with a single photon counted by the detector is a rect function, since the detector records photons in a binary fashion (0 or 1) and is therefore unmodulated. However, again due to the charge sharing effect, the width of this rect function is in multiples of a_x, i.e., one photon can contribute counts to one pixel resulting in a rect of width a_x, two pixels resulting in a rect of width 2a_x, etc. The "effective" point spread function is computed as a weighted sum of such rect functions,56 with the weights proportional to the count coefficients computed in Eq. (A2),

    PSF(x;t) ∝ Σ_{m≥1} w_m(t) rect(x / (m a_x)),    (A5)

normalized to unit area.
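Since the Fourier transform of a rect of width m·a_x is proportional to sinc(m·a_x·u), the presampling MTF of this effective PSF follows as a normalized weighted sum of sinc functions. A minimal sketch, with hypothetical count coefficients (np.sinc is the normalized sinc, sin(πx)/(πx)):

```python
import numpy as np

def effective_mtf(u, w, a_x=0.05):
    """Presampling MTF of the threshold-dependent effective PSF: the
    normalized weighted sum of rects of width m*a_x transforms to a
    weighted sum of sinc functions. w maps multiplicity m -> count
    coefficient w_m; a_x in mm; u in cycles/mm."""
    num = sum(m * wm * np.sinc(m * a_x * u) for m, wm in w.items())
    den = sum(m * wm for m, wm in w.items())
    return num / den

u = np.linspace(0, 10, 6)            # cycles/mm
low  = {1: 0.5, 2: 0.4, 3: 0.1}      # hypothetical low-threshold coefficients
high = {1: 0.95, 2: 0.05}            # hypothetical high-threshold coefficients
print(np.round(effective_mtf(u, low), 3))
print(np.round(effective_mtf(u, high), 3))
```

The coefficient set with more double and triple counts yields a lower MTF at nonzero frequencies, consistent with the broadening of the PSF at low threshold described in Sec. 4.B.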

APPENDIX B: THE EFFECT OF ADDITIVE NOISE ON PCD PERFORMANCE AND SPECTRAL RESOLUTION
There are two potential benefits to application of a threshold in PCDs. The first aims to achieve the highest data fidelity by optimally separating the true count distribution from the false-count distribution. A second benefit is provided by the ability to distinguish incident energies. To this end, the threshold t (in units of secondary quanta) can be approximately remapped to an energy threshold by E_t(t) ≈ tW, where W is the average energy required to liberate a single charge carrier in the detector material and E_t is the energy threshold equivalent. For notational convenience, the subscripts are dropped in further analysis. It is useful to cite the threshold alternatively in terms of detector threshold (D, units proportional to pulse height), charge carrier threshold (t, units of charge carriers), and energy threshold (E, units of energy). From Appendix A, an expected system gain can be computed for any threshold. The threshold can be converted to energy as above, and the numerical derivative can be performed to arrive at the detected energy spectrum, γ_det(E) = d/dE γ[t(E)]. The detected spectrum for a PCD operating with and without coincidence rejection is shown in Figs. 7(A) and 7(B). It is important to note that the low-energy "noise" associated with the detected spectrum in detectors without coincidence rejection or some other form of charge sharing rejection can be almost entirely attributed to false counts resulting from charge sharing. This effect is compounded by the presence of additive noise (discussed below), but can be mitigated with proper false-count correction.
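The threshold-to-spectrum conversion can be sketched numerically. The value of W for silicon and the smooth toy gain curve are assumptions; the negative sign reflects the convention that γ falls with increasing threshold, so the spectrum is the negative slope of the threshold scan.

```python
import numpy as np

W = 3.6e-3   # keV per charge carrier in silicon (approx. 3.6 eV) -- assumption

def detected_spectrum(gamma_of_t, t_grid):
    """Convert a threshold scan of the system gain, gamma(t), into a
    detected energy spectrum by remapping t -> E = t*W and taking the
    numerical derivative, gamma_det(E) = -d gamma/dE."""
    E = t_grid * W
    return E, -np.gradient(gamma_of_t, E)

# Toy threshold scan: gain falls off around a 20 keV "peak" (assumed shape)
t = np.linspace(0, 10000, 500)
gamma = 0.5 * (1 - np.tanh((t * W - 20.0) / 2.0))
E, spec = detected_spectrum(gamma, t)
print(E[np.argmax(spec)])   # spectrum peaks near the 20 keV edge
```

A measured threshold scan would replace the toy gamma curve; the same differentiation then recovers the detected spectra shown in Figs. 7(A) and 7(B).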
Additive noise contributes to the low-energy portion of the distribution in quanta and can contribute false counts if the threshold is selected too low. Common practice is to select a threshold well above the additive noise level (i.e., t several times larger than σ_add), but as can be seen in the energy spectrum of Fig. 7(B), doing so will also set the threshold above the energy of Compton interactions, resulting in a loss of true signal as well. Reducing the threshold increases the detector signal associated with low-energy interactions, but it increases the probability that a recorded count is due to additive noise instead of a true interaction. For a given threshold, the additive noise distribution for a noninteracting photon is

    q̂_6(n; x) = q_5(n = 0; x) ∗ q_6^add(n),

where q̂_6 is a convolution of the probability of no photon interaction [q_5(n = 0; x)] with the additive noise term defined in Eq. (5b). The resulting additive noise distribution after thresholding (denoted q̂_7) is given by

    q̂_7(x; t_7) = ∫_{t_7}^∞ q̂_6(n; x) dn

and is the probability of recording a false count at a given threshold t_7 and any location x due solely to additive noise. The expected false signal from additive noise is derived in a similar manner as the expected gain in Eq. (A3), with the factor m_add empirically determined from measurement of noise recorded when no photon is incident. For the PCD system under consideration, m_add = 3 provided a reasonable match to measured dark-field distributions, accounting for pulse shaper behavior and pulse height sampling rate. This semiempirical approach accounts for the ratio of false-count events resulting from instances where a photon was not incident on the detector, as discussed in Tanguay et al.,25 but characterization of the innate detector processes leading to this false-count ratio is outside the scope of this work.
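Under the common assumption of Gaussian additive noise, the thresholded term reduces to a tail probability. A minimal sketch: the no-interaction probability and the noise magnitude below are illustrative values, not the bench parameters.

```python
from math import erfc, sqrt

def false_count_probability(t, sigma_add, p_no_interaction=0.9):
    """Probability of a false count from additive noise alone: the
    probability that no photon interacts times the Gaussian-tail
    probability that zero-mean additive noise (std sigma_add, in
    charge carriers) exceeds the threshold t. Sketch assuming
    Gaussian additive noise; parameters are illustrative."""
    return p_no_interaction * 0.5 * erfc(t / (sigma_add * sqrt(2)))

for t in (0, 1000, 3000):
    print(t, false_count_probability(t, sigma_add=1000))
```

The rapid falloff of the tail probability with threshold is why a threshold a few multiples of σ_add above zero suppresses additive-noise false counts almost entirely, at the cost of the low-energy Compton counts discussed above.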