992 results for Quantitative reconstruction
Abstract:
Objective: The PEM Flex Solo II (Naviscan, Inc., San Diego, CA) is currently the only commercially available positron emission mammography (PEM) scanner. This scanner does not apply corrections for count rate effects, attenuation, or scatter during image reconstruction, potentially affecting the quantitative accuracy of images. This work measures the overall quantitative accuracy of the PEM Flex system and determines the contributions of error due to count rate effects, attenuation, and scatter. Materials and Methods: Gelatin phantoms were designed to simulate breasts of different sizes (4–12 cm thick) with varying uniform background activity concentration (0.007–0.5 μCi/cc), cysts, and lesions (2:1, 5:1, and 10:1 lesion-to-background ratios). The overall error was calculated from ROI measurements in the phantoms with a clinically relevant background activity concentration (0.065 μCi/cc). The error due to count rate effects was determined by comparing the overall error at multiple background activity concentrations to the error at 0.007 μCi/cc. A point source and cold gelatin phantoms were used to assess the errors due to attenuation and scatter. The maximum pixel values in gelatin and in air were compared to determine the effect of attenuation. Scatter was evaluated by comparing the sum of all pixel values in gelatin and in air. Results: The overall error in the background was negative in phantoms of all thicknesses, with the exception of the 4-cm phantoms (0%±7%), and it increased in magnitude with thickness (-34%±6% for the 12-cm phantoms). All lesions exhibited large negative error (-22% for the 2:1 lesions in the 4-cm phantom), which increased in magnitude with thickness and with lesion-to-background ratio (-85% for the 10:1 lesions in the 12-cm phantoms). The error due to count rate in phantoms with a 0.065 μCi/cc background was negative (-23%±6% for 4-cm thickness) and decreased in magnitude with thickness (-7%±7% for 12 cm).
Attenuation was a substantial source of negative error that increased in magnitude with thickness (-51%±10% to -77%±4% for the 4- to 12-cm phantoms, respectively). Scatter contributed a relatively constant amount of positive error (+23%±11%) across all thicknesses. Conclusion: Corrections for count rate, attenuation, and scatter will be essential for the PEM Flex Solo II to produce quantitatively accurate images.
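The error figures throughout this abstract follow the usual percent-quantification-error convention. A minimal sketch (the ROI reading below is illustrative, not a value taken from the study):

```python
def percent_error(measured, true_value):
    """Percent quantification error: 100 * (measured - true) / true."""
    return 100.0 * (measured - true_value) / true_value

# Illustrative only: an ROI reading of 0.043 uCi/cc in a phantom filled to
# 0.065 uCi/cc gives an error of about -34%, the magnitude reported for the
# background of the 12-cm phantoms.
print(round(percent_error(0.043, 0.065), 1))  # -> -33.8
```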
Abstract:
PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability, there is a need for methods that match different PET/CT systems by eliminating this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and aims in particular at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was established mathematically, together with new methods to handle the point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows a Transconvolution function to be determined that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other, which, under certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga-filled spheres was developed.
To iteratively determine and represent these point spread functions, exponential density functions combined with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were matched. A Hann window served as the modulation transfer function of the virtual PET; its apodization properties suppressed spatial frequencies above a critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were then used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method's parameterization of point spread functions allowed a full characterization of the imaging properties of the examined tomographs. CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution offers a new, comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
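The core operation, building a kernel from one point spread function and the regularized inverse of the other, can be sketched in one dimension. The Gaussian PSFs and the Wiener-style floor `eps` below are illustrative stand-ins for the paper's measured PSFs and Hann-windowed virtual system:

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Unit-sum 1-D Gaussian PSF centred on the grid."""
    x = np.arange(n) - n // 2
    psf = np.exp(-0.5 * (x / sigma) ** 2)
    return psf / psf.sum()

def transconvolve(image, psf_a, psf_b, eps=1e-3):
    """Map an image blurred by psf_a onto the image psf_b would have produced.

    Works in the Fourier domain: multiply by H_b and divide by H_a, with a
    Wiener-style floor eps so near-zero frequencies of H_a are suppressed
    rather than amplified (a crude stand-in for the band-limiting the paper
    obtains from the Hann-windowed virtual PET).
    """
    Ha = np.fft.fft(np.fft.ifftshift(psf_a))
    Hb = np.fft.fft(np.fft.ifftshift(psf_b))
    Ht = Hb * np.conj(Ha) / (np.abs(Ha) ** 2 + eps)  # transconvolution kernel
    return np.real(np.fft.ifft(np.fft.fft(image) * Ht))
```

Blurring a test object with a sharp PSF (scanner A) and transconvolving it toward a broader target PSF (the virtual system) reproduces, up to the regularization error, the image the broader system would have measured directly.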
Abstract:
PURPOSE To determine the image quality of an iterative reconstruction (IR) technique in low-dose MDCT (LDCT) of the chest of immunocompromised patients in an intraindividual comparison with filtered back projection (FBP), and to evaluate the dose reduction capability. MATERIALS AND METHODS 30 chest LDCT scans were performed in immunocompromised patients (Brilliance iCT; 20-40 mAs; mean CTDIvol: 1.7 mGy). The raw data were reconstructed using FBP and the IR technique (iDose4™, Philips, Best, The Netherlands) at seven iteration levels. 30 routine-dose MDCT (RDCT) scans reconstructed with FBP served as controls (mean exposure: 116 mAs; mean CTDIvol: 7.6 mGy). Three blinded radiologists scored subjective image quality and lesion conspicuity. Quantitative parameters including CT attenuation and objective image noise (OIN) were determined. RESULTS In LDCT, high iDose4™ levels led to a significant decrease in OIN (FBP vs. iDose4™ level 7: subscapular muscle 139.4 vs. 40.6 HU). The high iDose4™ levels provided significant improvements in image quality and in artifact and noise reduction compared with LDCT FBP images. The conspicuity of subtle lesions was limited in LDCT FBP images; it improved significantly at high iDose4™ levels (> level 4). LDCT with iDose4™ level 6 was determined to be of image quality equivalent to RDCT with FBP. CONCLUSION iDose4™ substantially improves image quality and lesion conspicuity and reduces noise in low-dose chest CT. Compared with RDCT, high iDose4™ levels provide equivalent image quality in LDCT, suggesting a potential dose reduction of almost 80%.
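Two of the quantitative endpoints are easy to make concrete: objective image noise as the standard deviation of HU values in a homogeneous ROI, and the dose saving implied by the reported CTDIvol values. A sketch (the ROI samples in the test are invented; only the CTDIvol figures come from the abstract):

```python
import numpy as np

def objective_image_noise(roi_hu):
    """Objective image noise (OIN): sample standard deviation of the CT
    numbers (in HU) inside a homogeneous ROI, e.g. the subscapular muscle."""
    return float(np.std(np.asarray(roi_hu, dtype=float), ddof=1))

def dose_reduction(ld_ctdivol, rd_ctdivol):
    """Fractional dose saving of the low-dose vs. routine-dose protocol."""
    return 1.0 - ld_ctdivol / rd_ctdivol

# With the study's mean CTDIvol values, 1.7 mGy (LDCT) vs. 7.6 mGy (RDCT):
print(round(dose_reduction(1.7, 7.6), 3))  # -> 0.776, i.e. "almost 80%"
```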
Abstract:
We present quantitative reconstructions of regional vegetation cover in north-western Europe, western Europe north of the Alps, and eastern Europe for five time windows in the Holocene [around 6k, 3k, 0.5k, 0.2k, and 0.05k calendar years before present (bp)] at a 1° × 1° spatial scale, with the objective of producing vegetation descriptions suitable for climate modelling. The REVEALS model was applied to 636 pollen records from lakes and bogs to reconstruct the past cover of 25 plant taxa grouped into 10 plant-functional types and three land-cover types [evergreen trees, summer-green (deciduous) trees, and open land]. The model corrects for some of the biases in pollen percentages by using pollen productivity estimates and fall speeds of pollen, and by applying simple but robust models of pollen dispersal and deposition. The emerging patterns of tree migration and deforestation between 6k bp and modern time in the REVEALS estimates agree with our general understanding of the vegetation history of Europe based on pollen percentages. However, the degree of anthropogenic deforestation (i.e. cover of cultivated and grazing land) at 3k, 0.5k, and 0.2k bp is significantly higher than deduced from pollen percentages. This is also the case at 6k bp in some parts of Europe, in particular Britain and Ireland. Furthermore, the relationship between summer-green and evergreen trees, and between individual tree taxa, differs significantly when expressed as pollen percentages or as REVEALS estimates of tree cover. For instance, where Pinus is dominant over Picea in pollen percentages, Picea is dominant over Pinus in the REVEALS estimates. These differences play a major role in the reconstruction of European landscapes and in the study of land cover-climate interactions, biodiversity, and human resources.
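The productivity correction is the heart of the Pinus/Picea reversal described above. A deliberately stripped-down sketch (dispersal, fall-speed, and basin-size terms omitted; the PPE values are invented for illustration):

```python
import numpy as np

def productivity_corrected_cover(counts, ppe):
    """Divide pollen counts by relative pollen productivity estimates (PPE)
    and renormalize, so prolific pollen producers no longer dominate the
    percentages. REVEALS additionally models pollen dispersal and deposition;
    that part is omitted here.
    """
    adjusted = np.asarray(counts, dtype=float) / np.asarray(ppe, dtype=float)
    return adjusted / adjusted.sum()

# Invented example: equal pollen percentages of Pinus (a prolific producer,
# PPE ~ 4) and Picea (PPE ~ 1) imply a Picea-dominated vegetation cover.
print(productivity_corrected_cover([50, 50], [4.0, 1.0]))  # -> [0.2 0.8]
```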
Abstract:
The concentrations of chironomid remains in lake sediments are very variable and, therefore, chironomid stratigraphies often include samples with a low number of counts. Thus, the effect of low count sums on reconstructed temperatures is an important issue when applying chironomid‐temperature inference models. Using an existing data set, we simulated low count sums by randomly picking subsets of head capsules from surface‐sediment samples with a high number of specimens. Subsequently, a chironomid‐temperature inference model was used to assess how the inferred temperatures are affected by low counts. The simulations indicate that the variability of inferred temperatures increases progressively with decreasing count sums. At counts below 50 specimens, a further reduction in count sum can cause a disproportionate increase in the variation of inferred temperatures, whereas at higher count sums the inferences are more stable. Furthermore, low count samples may consistently infer too low or too high temperatures and, therefore, produce a systematic error in a reconstruction. Smoothing reconstructed temperatures downcore is proposed as a possible way to compensate for the high variability due to low count sums. By combining adjacent samples in a stratigraphy, to produce samples of a more reliable size, it is possible to assess if low counts cause a systematic error in inferred temperatures.
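The subsampling experiment is straightforward to emulate. In this sketch a toy linear inference (the mean of per-specimen temperature optima) stands in for a real chironomid-temperature transfer function, which the abstract does not specify:

```python
import numpy as np

rng = np.random.default_rng(0)

def subsample_variability(optima, count_sums, n_rep=500):
    """Repeatedly draw n head capsules without replacement from one large
    surface sample and record the spread of the inferred temperature.

    `optima` holds one temperature optimum per specimen, so the subsample
    mean acts as a toy stand-in for a WA/WA-PLS inference model.
    """
    spread = {}
    for n in count_sums:
        estimates = [rng.choice(optima, size=n, replace=False).mean()
                     for _ in range(n_rep)]
        spread[n] = float(np.std(estimates))
    return spread
```

With a 500-specimen surface sample, the spread of inferred temperatures grows steadily as the count sum shrinks, mirroring the disproportionate variability the authors report below about 50 specimens.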
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
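The per-pixel filter-selection step of "a posteriori" methods can be sketched as follows. For clarity the error is measured against a known reference image; real renderers must estimate it statistically from the samples themselves (e.g. via SURE or cross-validation), since no reference exists at render time:

```python
import numpy as np

def box3(img):
    """3x3 box filter with edge padding: one candidate reconstruction filter."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def select_filter_per_pixel(noisy, filters, reference):
    """Apply every candidate filter, then keep, per pixel, the output of the
    filter with the smallest squared error."""
    outputs = np.stack([f(noisy) for f in filters])   # shape (F, H, W)
    errors = (outputs - reference[None]) ** 2         # per-pixel squared error
    best = np.argmin(errors, axis=0)                  # shape (H, W)
    return np.take_along_axis(outputs, best[None], axis=0)[0]
```

Because the selection minimizes squared error pixel-wise, the combined image is never worse than any single filter: noisy flat regions pick the box filter, while pixels that blurring would degrade keep the unfiltered estimate.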
Abstract:
In an attempt to document the palaeoecological affinities of individual extant and extinct dinoflagellate cysts, Late Pliocene and Early Pleistocene dinoflagellate cyst assemblages have been compared with geochemical data from the same samples. Mg/Ca ratios of Globigerina bulloides were measured to estimate spring-summer sea-surface temperatures at four North Atlantic IODP/DSDP sites. Currently, our Pliocene-Pleistocene database contains 204 dinoflagellate cyst samples calibrated to geochemical data. This palaeo-database is compared with modern North Atlantic and global datasets. The focus is on the quantitative relationship between Mg/Ca-based (i.e. spring-summer) sea-surface temperature (SSTMg/Ca) and dinoflagellate cyst distributions. In general, extant species are shown to have comparable spring-summer SST ranges in the past and today, demonstrating that our new approach is valid for inferring spring-summer SST ranges for extinct species. For example, Habibacysta tectata represents SSTMg/Ca values between 10° and 15°C when it exceeds 30% of the assemblage, and Invertocysta lacrymosa exceeds 15% when SSTMg/Ca values are between 18.6° and 23.5°C. However, comparing Pliocene and Pleistocene SSTMg/Ca values with present-day summer values for the extant Impagidinium pallidum suggests a greater tolerance of higher temperatures in the past. This species occupies more than 5% of the assemblage at SSTMg/Ca values of 11.6-17.9°C in the Pliocene and Pleistocene, whereas present-day summer SSTs are around -1.7 to 6.9°C. This observation calls into question the value of Impagidinium pallidum as a reliable indicator of cold waters in older deposits, and may explain its bipolar distribution.
Abstract:
The application of quantitative and semiquantitative methods to assemblage data from dinoflagellate cysts shows potential for interpreting past environments, both in terms of paleotemperature estimates and in recognizing water masses and circulation patterns. Estimates of winter sea-surface temperature (WSST) were produced by using the Impagidinium Index (II) method, and by applying a winter-temperature transfer function (TFw). Estimates of summer sea-surface temperature (SSST) were produced by using a summer-temperature transfer function (TFs), two methods based on a temperature-distribution chart (ACT and ACTpo), and a method based on the ratio of gonyaulacoid:protoperidinioid specimens (G:P). WSST estimates from the II and TFw methods are in close agreement except where Impagidinium species are sparse. SSST estimates from TFs are more variable. The value of the G:P ratio for the Pliocene data in this paper is limited by the apparent sparsity of protoperidinioids, which results in monotonous SSST estimates of 14-26°C. The ACT methods show two biases for the Pliocene data set: taxonomic substitution may force 'matches' yielding incorrect temperature estimates, and the method is highly sensitive to the end-points of species distributions. Dinocyst assemblage data were applied to reconstruct Pliocene sea-surface temperatures between 3.5-2.5 Ma from DSDP Hole 552A, and ODP Holes 646B and 642B, which are presently located beneath cold and cool-temperate waters north of 56°N. Our initial results suggest that at 3.0 Ma, WSSTs were a few degrees C warmer than the present and that there was a somewhat reduced north-south temperature gradient. For all three sites, it is likely that SSSTs were also warmer, but by an unknown, perhaps large, amount. Past oceanic circulation in the North Atlantic was probably different from the present.
Abstract:
Botanical data are widely used as terrestrial proxy data for climate reconstructions. Using a newly established method based on probability density functions (the pdf-method), the temperature development throughout the last interglacial, the Eemian, is reconstructed for the two German sites Bispingen and Gröbern and the French site La Grande Pile. The results are compared with previous reconstructions using other methods. After a steep increase in January as well as July temperatures in the early phase of the interglacial, the reconstructed most probable climate appears to be slightly warmer than today. While the temperature is reconstructed as relatively stable throughout the Eemian, a certain tendency towards cooler January temperatures is evident. January temperatures decreased from approx. 2-3 °C in the early part to approx. -3 °C in the later part at Bispingen, and from approx. 2 °C to approx. -1 °C at Gröbern and La Grande Pile. A major drop to about -8 °C marks the very end of the interglacial at all three sites. While these results agree well with other proxy data and former reconstructions based on the indicator species method, they differ significantly from reconstructions based on the modern pollen analogue technique ("pollen transfer functions"). The lack of modern analogues is assumed to be the main reason for the discrepancies. It is concluded that any reconstruction method needs to be evaluated carefully in this respect when used for periods lacking modern analogous plant communities.
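The pdf-method reduces to multiplying per-taxon climate pdfs and reading off the mode of the joint density. A sketch with Gaussian kernels and invented parameters (the real method estimates each taxon's pdf from its modern geographic distribution):

```python
import numpy as np

def most_probable_temperature(taxa, t_grid):
    """Multiply the climate pdfs of all taxa co-occurring in a sample and
    return the temperature at the joint density's maximum.

    `taxa` is a list of (mean, sd) pairs, one Gaussian kernel per taxon
    (normalization constants cancel in the argmax); both the Gaussian form
    and the values used below are illustrative.
    """
    joint = np.ones_like(t_grid)
    for mu, sd in taxa:
        joint *= np.exp(-0.5 * ((t_grid - mu) / sd) ** 2)
    return float(t_grid[np.argmax(joint)])

t = np.linspace(-15.0, 20.0, 3501)  # 0.01 degree resolution
# Two equally tolerant taxa with January optima of 0 and 6 degrees C jointly
# point to the intermediate value:
print(round(most_probable_temperature([(0.0, 2.0), (6.0, 2.0)], t), 2))  # -> 3.0
```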