21 results for Quantitative reconstruction
Abstract:
A lack of quantitative high-resolution paleoclimate data from the Southern Hemisphere limits the ability to examine current trends within the context of long-term natural climate variability. This study presents a temperature reconstruction for southern Tasmania based on analyses of a sediment core from Duckhole Lake (43.365°S, 146.875°E). The relationship between non-destructive whole-core scanning reflectance spectroscopy measurements in the visible spectrum (380–730 nm) and the instrumental temperature record (AD 1911–2000) was used to develop a calibration-in-time, reflectance spectroscopy-based temperature model. Results showed that a trough in reflectance from 650 to 700 nm, which represents chlorophyll and its derivatives, was significantly correlated with annual mean temperature. A calibration model was developed (R = 0.56, p < 0.05, root mean squared error of prediction (RMSEP) = 0.21°C, five-year filtered data, calibration period AD 1911–2000) and applied down-core to reconstruct annual mean temperatures in southern Tasmania over the last c. 950 years. This indicated that temperatures were initially cool c. AD 1050, but steadily increased until the late AD 1100s. After a brief cool period in the AD 1200s, temperatures again increased. Temperatures steadily decreased during the AD 1600s and remained relatively stable until the start of the 20th century, when they rapidly decreased before increasing from the AD 1960s onwards. Comparisons with high-resolution temperature records from western Tasmania, New Zealand and South America revealed some similarities, but also highlighted differences in temperature variability across the mid-latitudes of the Southern Hemisphere. These differences are likely due to a combination of factors, including spatial variability in climate between and within regions, and differences between records that document seasonal (i.e. warm season/late summer) versus annual temperature variability. This highlights the need for further records from the mid-latitudes of the Southern Hemisphere in order to constrain past natural spatial and seasonal/annual temperature variability in the region, and to accurately identify and attribute changes to natural variability and/or anthropogenic activities.
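To make the calibration-in-time approach concrete, here is a minimal sketch of how such a model could be fitted and applied, assuming a simple linear regression of a chlorophyll-related reflectance index on instrumental annual mean temperature and a leave-one-out estimate of the RMSEP; the paper's exact index definition and error-estimation procedure may differ, and all names here are hypothetical.

```python
import numpy as np

def calibrate_and_reconstruct(index_cal, temp_cal, index_downcore):
    """Fit a linear calibration-in-time model (reflectance index vs.
    instrumental temperature) and apply it down-core.

    index_cal, temp_cal : 1-D arrays covering the calibration period
    index_downcore      : reflectance index for the full core
    """
    # Ordinary least-squares fit: temperature = a * index + b
    a, b = np.polyfit(index_cal, temp_cal, 1)
    predicted = a * index_cal + b

    # Pearson correlation of the calibration model
    r = np.corrcoef(predicted, temp_cal)[0, 1]

    # Leave-one-out RMSEP (one plausible way to estimate prediction error)
    errors = []
    for i in range(len(index_cal)):
        mask = np.arange(len(index_cal)) != i
        ai, bi = np.polyfit(index_cal[mask], temp_cal[mask], 1)
        errors.append(ai * index_cal[i] + bi - temp_cal[i])
    rmsep = np.sqrt(np.mean(np.square(errors)))

    # Apply the calibration down-core to reconstruct past temperatures
    return a * index_downcore + b, r, rmsep
```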
Abstract:
PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability, there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials.

METHODS To solve the problem of image normalization, the theory of Transconvolution was established mathematically, together with new methods to handle the point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows one to determine a Transconvolution function that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other, which, when certain boundary conditions are met (such as the use of linear acquisition and image reconstruction methods), is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating ⁶⁸Ge/⁶⁸Ga-filled spheres was developed. To iteratively determine and represent these point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function of the virtual PET system; its apodization properties suppressed spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system.

RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to re-establishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.
CONCLUSIONS By matching different tomographs to a virtual, standardized imaging system, Transconvolution offers a new, comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and well-defined partial volume effect.
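As a rough illustration of the idea behind Transconvolution (not the authors' implementation), the sketch below applies the kernel in the Fourier domain, where convolution with one PSF and the inverse of the other becomes a multiplication and a division; the `eps` Tikhonov-style regularizer is an assumption standing in for the band-limiting role the paper assigns to the virtual system's Hann-window MTF. All array names are hypothetical.

```python
import numpy as np

def transconvolve(image, psf_system, psf_virtual, eps=1e-3):
    """Transform `image`, as seen by one tomograph, into the image a
    virtual system with PSF `psf_virtual` would have produced.

    PSFs are assumed centered and zero-padded to the image shape.
    The Transconvolution kernel is the virtual PSF convolved with the
    (regularized) inverse of the measured system PSF.
    """
    IMG = np.fft.fftn(image)
    H_sys = np.fft.fftn(np.fft.ifftshift(psf_system))
    H_virt = np.fft.fftn(np.fft.ifftshift(psf_virtual))

    # Regularized inverse of the system transfer function; in the paper
    # the virtual system's band-limited MTF (a Hann window) suppresses
    # the high frequencies where the inversion would otherwise blow up.
    T = H_virt * np.conj(H_sys) / (np.abs(H_sys) ** 2 + eps)

    return np.real(np.fft.ifftn(IMG * T))
```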
Abstract:
PURPOSE To determine the image quality of an iterative reconstruction (IR) technique in low-dose MDCT (LDCT) of the chest of immunocompromised patients in an intraindividual comparison with filtered back projection (FBP), and to evaluate its dose reduction capability.

MATERIALS AND METHODS 30 chest LDCT scans were performed in immunocompromised patients (Brilliance iCT; 20–40 mAs; mean CTDIvol: 1.7 mGy). The raw data were reconstructed using FBP and the IR technique (iDose4™, Philips, Best, The Netherlands) set to seven iteration levels. 30 routine-dose MDCT (RDCT) scans reconstructed with FBP served as controls (mean exposure: 116 mAs; mean CTDIvol: 7.6 mGy). Three blinded radiologists scored subjective image quality and lesion conspicuity. Quantitative parameters, including CT attenuation and objective image noise (OIN), were determined.

RESULTS In LDCT, high iDose4™ levels led to a significant decrease in OIN (subscapular muscle: 139.4 HU with FBP vs. 40.6 HU with iDose4™ level 7). The high iDose4™ levels provided significant improvements in image quality, artifact reduction and noise reduction compared with LDCT FBP images. The conspicuity of subtle lesions was limited in LDCT FBP images and improved significantly at high iDose4™ levels (above level 4). LDCT with iDose4™ level 6 was determined to be of equivalent image quality to RDCT with FBP.

CONCLUSION iDose4™ substantially improves image quality and lesion conspicuity and reduces noise in low-dose chest CT. Compared with RDCT, high iDose4™ levels provide equivalent image quality in LDCT, suggesting a potential dose reduction of almost 80%.
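The abstract does not spell out how the objective image noise was computed; the conventional definition, assumed in the hypothetical sketch below, is the standard deviation of CT attenuation (in HU) within a homogeneous region of interest such as the subscapular muscle.

```python
import numpy as np

def objective_image_noise(ct_volume, roi_mask):
    """Mean attenuation and objective image noise (OIN) within a ROI.

    ct_volume : array of CT numbers in HU
    roi_mask  : boolean array selecting a homogeneous region
                (e.g. the subscapular muscle)
    """
    hu = ct_volume[roi_mask]
    # OIN is conventionally the standard deviation of HU in the ROI
    return float(np.mean(hu)), float(np.std(hu))
```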
Abstract:
We present quantitative reconstructions of regional vegetation cover in north-western Europe, western Europe north of the Alps, and eastern Europe for five time windows in the Holocene [around 6k, 3k, 0.5k, 0.2k, and 0.05k calendar years before present (BP)] at a 1° × 1° spatial scale, with the objective of producing vegetation descriptions suitable for climate modelling. The REVEALS model was applied to 636 pollen records from lakes and bogs to reconstruct the past cover of 25 plant taxa grouped into 10 plant functional types and three land-cover types [evergreen trees, summer-green (deciduous) trees, and open land]. The model corrects for some of the biases in pollen percentages by using pollen productivity estimates and fall speeds of pollen, and by applying simple but robust models of pollen dispersal and deposition. The emerging patterns of tree migration and deforestation between 6k BP and modern time in the REVEALS estimates agree with our general understanding of the vegetation history of Europe based on pollen percentages. However, the degree of anthropogenic deforestation (i.e. cover of cultivated and grazed land) at 3k, 0.5k, and 0.2k BP is significantly higher than deduced from pollen percentages. This is also the case at 6k BP in some parts of Europe, in particular Britain and Ireland. Furthermore, the relationship between summer-green and evergreen trees, and between individual tree taxa, differs significantly when expressed as pollen percentages or as REVEALS estimates of tree cover. For instance, where Pinus is dominant over Picea in pollen percentages, Picea is dominant over Pinus in the REVEALS estimates. These differences play a major role in the reconstruction of European landscapes and in the study of land cover-climate interactions, biodiversity and human resources.
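A deliberately simplified, hypothetical sketch of the correction idea behind REVEALS follows: pollen counts are deflated by taxon-specific pollen productivity estimates (PPEs) and a dispersal/deposition factor before renormalizing to fractional cover. The published model derives that factor from pollen fall speeds and basin size via an explicit dispersal-deposition model, which is omitted here; all parameter names are assumptions.

```python
import numpy as np

def reveals_like_cover(pollen_counts, ppe, deposition_factor):
    """Toy REVEALS-style correction of pollen percentages.

    pollen_counts     : raw counts per taxon at a site
    ppe               : pollen productivity estimate per taxon
    deposition_factor : per-taxon dispersal/deposition weight (in REVEALS
                        derived from pollen fall speed and basin radius)
    """
    # High-producing, well-dispersed taxa are deflated; the result is
    # renormalized so the estimated covers sum to 1.
    adjusted = np.asarray(pollen_counts, float) / (
        np.asarray(ppe, float) * np.asarray(deposition_factor, float))
    return adjusted / adjusted.sum()
```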
Abstract:
The concentrations of chironomid remains in lake sediments are very variable and, therefore, chironomid stratigraphies often include samples with a low number of counts. Thus, the effect of low count sums on reconstructed temperatures is an important issue when applying chironomid‐temperature inference models. Using an existing data set, we simulated low count sums by randomly picking subsets of head capsules from surface‐sediment samples with a high number of specimens. Subsequently, a chironomid‐temperature inference model was used to assess how the inferred temperatures are affected by low counts. The simulations indicate that the variability of inferred temperatures increases progressively with decreasing count sums. At counts below 50 specimens, a further reduction in count sum can cause a disproportionate increase in the variation of inferred temperatures, whereas at higher count sums the inferences are more stable. Furthermore, low count samples may consistently infer too low or too high temperatures and, therefore, produce a systematic error in a reconstruction. Smoothing reconstructed temperatures downcore is proposed as a possible way to compensate for the high variability due to low count sums. By combining adjacent samples in a stratigraphy, to produce samples of a more reliable size, it is possible to assess if low counts cause a systematic error in inferred temperatures.
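The resampling experiment described above can be sketched as follows (a hypothetical reimplementation, not the authors' code); the chironomid-temperature inference model itself, e.g. a weighted-averaging transfer function, is abstracted as a user-supplied callable.

```python
import numpy as np

def subsampling_experiment(assemblage, infer_temperature,
                           count_sums=(25, 50, 100), n_rep=1000, seed=0):
    """Simulate low count sums: draw random subsets of head capsules from
    a high-count surface-sediment sample and record how the spread of
    model-inferred temperatures grows as the count sum shrinks.

    assemblage        : 1-D integer array of head-capsule counts per taxon
    infer_temperature : callable mapping a taxon-abundance vector to degC
    """
    rng = np.random.default_rng(seed)
    # Expand counts into individual specimens, labelled by taxon index
    specimens = np.repeat(np.arange(len(assemblage)), assemblage)
    spread = {}
    for n in count_sums:
        temps = []
        for _ in range(n_rep):
            picked = rng.choice(specimens, size=n, replace=False)
            counts = np.bincount(picked, minlength=len(assemblage))
            temps.append(infer_temperature(counts))
        # Variability of inferred temperatures at this count sum
        spread[n] = float(np.std(temps))
    return spread
```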
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
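As a toy illustration of the "a posteriori" strategy described above, the hypothetical sketch below splits the Monte Carlo samples into two independent half-buffers, filters one at several Gaussian bandwidths (a stand-in for the more sophisticated filter banks in the surveyed work), estimates each candidate's per-pixel error against the other half, and keeps the locally best filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def a_posteriori_select(half_a, half_b, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Select, per pixel, the reconstruction filter with the lowest
    estimated error.

    half_a, half_b : 2-D images averaged from two disjoint halves of the
                     Monte Carlo samples (statistically independent)
    sigmas         : candidate Gaussian filter bandwidths
    """
    # Candidate reconstructions of one half-buffer
    candidates = [gaussian_filter(half_a, s) for s in sigmas]

    # Squared error of each candidate against the independent half;
    # real techniques smooth this noisy error estimate as well.
    errs = np.stack([(c - half_b) ** 2 for c in candidates])

    # Per-pixel index of the best filter, then assemble the output
    best = np.argmin(errs, axis=0)
    stacked = np.stack(candidates)
    return np.take_along_axis(stacked, best[None], axis=0)[0]
```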