968 results for PET module DOI calibration


Relevance: 100.00%

Abstract:

High-resolution, small-bore PET systems suffer from a tradeoff between system sensitivity and image quality. Long crystals are necessary for high system sensitivity, but they allow mispositioning of the line of response due to parallax error, and this mispositioning blurs resolution. One means to allow long crystals without introducing parallax error is to determine the depth of interaction (DOI) of the gamma-ray interaction within the detector module. While DOI has been investigated previously, newly available solid-state photomultipliers (SSPMs) are well suited to PET applications and enable new modules for investigation. Depth of interaction in full modules is a relatively new field, and so even where high-performance DOI-capable modules exist, appropriate means to characterize and calibrate them do not. This work presents an investigation of DOI-capable arrays and techniques for characterizing and calibrating those modules. The methods introduced here accurately and reliably characterize and calibrate energy, timing, and event interaction positioning. Additionally presented are a characterization of the spatial resolution of DOI-capable modules and a measurement of DOI effects for different angles between detector modules. These arrays have been built into a prototype PET system that delivers better than 2.0 mm resolution with a single-sided stopping power in excess of 95% for 511 keV gammas. The noise properties of SSPMs scale with the active area of the detector face, so the best signal-to-noise ratio is achieved by reading out each SSPM photodetector pixel in parallel rather than multiplexing signals together. This work additionally investigates several algorithms for improving timing performance using timing information from multiple SSPM pixels when light is distributed among several photodetectors.
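The multi-pixel timing idea can be sketched as an inverse-variance weighted combination of per-pixel trigger times. This is only an illustrative model, not the thesis's actual algorithms: the variance law (timing jitter shrinking with collected light) and the `jitter_floor` parameter are assumptions.

```python
import numpy as np

def combine_pixel_times(timestamps, amplitudes, jitter_floor=0.1):
    """Combine per-pixel trigger times into one event time estimate.

    Assumed model: a pixel's timing variance scales inversely with
    the light it collected, so pixels seeing more photons get more
    weight. `jitter_floor` (ns) is a hypothetical jitter scale.
    """
    timestamps = np.asarray(timestamps, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)
    variances = jitter_floor**2 / amplitudes   # assumed variance model
    weights = 1.0 / variances
    return float(np.sum(weights * timestamps) / np.sum(weights))

# Pixels with more light pull the estimate toward their timestamps.
t = combine_pixel_times([10.2, 10.0, 10.8], [50.0, 30.0, 5.0])
```

Parallel readout makes such weighting possible; with multiplexed signals the per-pixel timestamps would not be individually available.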

Relevance: 100.00%

Abstract:

Next-generation PET scanners must fulfill very demanding requirements in terms of spatial, energy, and timing resolution. The performance of modern scanners is inherently limited by the use of standard photomultiplier tubes. The use of Silicon Photomultipliers (SiPMs) is proposed for the construction of a 4.8 × 4.8 cm² 4D-PET module intended to replace the standard PMT-based PET block detector. The module will be based on a continuous LYSO crystal read out on two faces by Silicon Photomultipliers. A high-granularity detection surface made of SiPM matrices with 1.5 mm pitch will be used to determine the x-y photon hit position with submillimeter accuracy, while a low-granularity surface of 16 mm² SiPM pixels will provide the fast timing information (t) used to implement the time-of-flight (TOF) technique. The spatial information collected by the two detector layers will be combined to measure the depth of interaction (DOI) of each event (z). The use of large-area multi-pixel SiPM detectors requires the development of a multichannel data acquisition (DAQ) system as well as dedicated front-end electronics, in order not to degrade the intrinsic detector capabilities and to manage the many channels. The paper describes progress on the proof-of-principle module under construction at the University of Pisa.
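A common way to extract depth from a crystal read out on two faces is the asymmetry of the light collected on each side. The linear mapping and the crystal thickness below are idealized assumptions for illustration; a real module would be calibrated against measured asymmetry curves rather than this toy formula.

```python
def doi_from_dual_readout(a_top, a_bottom, crystal_len_mm=20.0):
    """Estimate depth of interaction from dual-ended light sharing.

    For a crystal read out on two faces, the light-collection
    asymmetry is, to first order, monotonic in depth. The linear
    model and crystal_len_mm (a hypothetical thickness) are
    illustrative only.
    """
    asym = (a_top - a_bottom) / (a_top + a_bottom)  # in [-1, 1]
    return 0.5 * (asym + 1.0) * crystal_len_mm      # mm from bottom face

# Equal light on both faces implies an interaction at mid-depth.
z = doi_from_dual_readout(1.0, 1.0)
```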

Relevance: 50.00%

Abstract:

PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability, there is a need for methods to match different PET/CT systems by eliminating this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials.

METHODS: To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows a Transconvolution function to be determined that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other, which, when certain boundary conditions are adhered to, such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga-filled spheres was developed. To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization suppresses spatial frequencies above a critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system.

RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method's parameterization of point spread functions allowed a full characterization of the imaging properties of the examined tomographs.

CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
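The core operation, convolving with one PSF's inverse, becomes a division in Fourier space. A minimal 1-D sketch, not the authors' implementation: the `eps` guard and the handling of near-zero spectral bins are simplifying assumptions, and the real method's Hann-window target ensures the division stays well conditioned.

```python
import numpy as np

def transconvolve(image, psf_source, psf_target, eps=1e-6):
    """Map a 1-D image from one system's PSF to another's (sketch).

    In Fourier space the transconvolution kernel is the target PSF
    spectrum divided by the source PSF spectrum; the target's
    apodization must suppress frequencies before the source spectrum
    vanishes, and `eps` guards the remaining near-zero bins.
    """
    F_img = np.fft.fft(image)
    F_src = np.fft.fft(np.fft.ifftshift(psf_source))  # center PSF at index 0
    F_tgt = np.fft.fft(np.fft.ifftshift(psf_target))
    kernel = np.zeros_like(F_src)
    mask = np.abs(F_src) > eps
    kernel[mask] = F_tgt[mask] / F_src[mask]
    return np.real(np.fft.ifft(F_img * kernel))
```

When source and target PSFs coincide, the kernel is unity and the image passes through unchanged, which is a convenient sanity check for any implementation.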

Relevance: 40.00%

Abstract:

A set of bottled waters from a single natural spring, distributed worldwide in polyethylene terephthalate (PET) bottles, has been used to examine the effects of storage in plastic polymer material on the isotopic composition (δ18O and δ2H values) of the water. All samples analyzed were subjected to the same packaging procedure but experienced different conditions of temperature and humidity during storage. Water sorption and the diffusive transfer of water and water vapor through the wall of the PET bottle may cause isotopic exchange between water within the bottle and water vapor in the air near the PET-water interface. Changes of about +4 parts per thousand for δ2H and +0.7 parts per thousand for δ18O were measured for water after 253 days of storage in a PET bottle. The results of this study clearly indicate the need to use glass bottles for storing water samples intended for isotopic studies. It is imperative to transfer PET-bottled natural waters to glass bottles before using them as calibration materials or potential international working standards. Copyright (C) 2008 John Wiley & Sons, Ltd.
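For readers unfamiliar with the notation, δ values express an isotope ratio relative to a reference standard in parts per thousand (per mil). A minimal sketch; the VSMOW reference ratios below are standard literature values, not data from this study:

```python
# Reference isotope ratios of VSMOW (Vienna Standard Mean Ocean Water)
R_VSMOW_2H = 155.76e-6    # 2H/1H
R_VSMOW_18O = 2005.20e-6  # 18O/16O

def delta_permil(r_sample, r_reference):
    """delta value in parts per thousand: (R_sample/R_ref - 1) * 1000."""
    return (r_sample / r_reference - 1.0) * 1000.0

# A +4 per-mil shift in delta(2)H, as reported after 253 days of PET
# storage, corresponds to only a 0.4% change in the absolute ratio,
# yet is large against typical analytical precision.
shifted_ratio = R_VSMOW_2H * 1.004
```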

Relevance: 40.00%

Abstract:

This thesis describes the implementation of calibration, format-translation, and data-conditioning software for radiometric tracking data of deep-space spacecraft. All of the propagation-media noise rejection techniques available as features in the code are covered in their mathematical formulation, performance, and software implementation. Some techniques are retrieved from the literature and the current state of the art, while other algorithms have been conceived ex novo. All three typical deep-space refractive environments (solar plasma, ionosphere, troposphere) are dealt with by specific subroutines. Particular attention has been reserved for the GNSS-based tropospheric path delay calibration subroutine, since it is the bulkiest module of the software suite in terms of both the sheer number of lines of code and development time. The software is currently in its final stage of development and, once completed, will serve as a pre-processing stage for orbit determination codes. Calibration of transmission-media noise sources in radiometric observables has proved to be an essential operation to perform on radiometric data in order to meet the increasingly demanding error budget requirements of modern deep-space missions. A completely autonomous, all-around propagation-media calibration package is a novelty in orbit determination, although standalone codes are currently employed by ESA and NASA. The described software is planned to be compatible with the current standards for tropospheric noise calibration used by both agencies, such as AMC, TSAC, and ESA IFMS weather data, and it natively works with the Tracking Data Message (TDM) file format adopted by CCSDS as a standard aimed at promoting and simplifying inter-agency collaboration.
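As one illustration of the physics such a tropospheric subroutine must handle, the hydrostatic part of the zenith path delay is commonly modeled with the Saastamoinen formula. This is a textbook expression, shown here as a sketch, not the thesis's actual implementation:

```python
import math

def zenith_hydrostatic_delay(pressure_hpa, lat_rad, height_m):
    """Saastamoinen zenith hydrostatic delay in meters.

    pressure_hpa: surface pressure at the station (hPa)
    lat_rad:      geodetic latitude (radians)
    height_m:     station height (m)
    """
    # Gravity correction for latitude and height
    f = 1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.28e-6 * height_m
    return 0.0022768 * pressure_hpa / f

# At sea level and standard pressure the zenith delay is roughly
# 2.3 m, which is why it cannot be ignored in radiometric observables.
zhd = zenith_hydrostatic_delay(1013.25, 0.0, 0.0)
```

The wet component, by contrast, is far more variable, which is what motivates GNSS-based estimation rather than a surface-pressure model alone.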

Relevance: 40.00%

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, driven primarily by the large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
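The kind of transport-delay and sensor-lag processing described can be sketched as inverting a delay-plus-first-order-lag sensor model. This model, the explicit differentiation, and the wrap-around edge handling are illustrative assumptions; a real pipeline would also filter noise before inverting the lag, since differentiation amplifies it.

```python
import numpy as np

def compensate_sensor(y, dt, delay_s, tau_s):
    """Undo a pure transport delay and a first-order sensor lag.

    Assumed sensor model: the true signal x is delayed by `delay_s`
    seconds and smoothed by a first-order lag with time constant
    `tau_s`. Inverting the lag uses x = y + tau * dy/dt; the edges
    wrap around in this sketch (np.roll).
    """
    y = np.asarray(y, dtype=float)
    x = y + tau_s * np.gradient(y, dt)   # invert the first-order lag
    shift = int(round(delay_s / dt))     # advance by the transport delay
    return np.roll(x, -shift)
```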

Relevance: 40.00%

Abstract:

This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models; the current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data are explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, in order to prevent extrapolation during the optimization process, is proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models are proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but that are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To keep the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and improved efficiency, is intended to improve rather than replace the manual calibration process.
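The role of a dynamic constraint model can be sketched as a discrete second-order lag converting a commanded trajectory into an "achieved" one. The explicit-Euler integration and the natural-frequency/damping parameters are illustrative assumptions, not the authors' fitted models:

```python
def second_order_response(commanded, dt, wn, zeta):
    """Discrete second-order lag: commanded -> achieved parameter.

    Integrates x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u with explicit
    Euler. In practice wn (rad/s) and zeta would be identified from
    measured actuator data; the values used here are hypothetical.
    """
    x, v = commanded[0], 0.0
    achieved = []
    for u in commanded:
        a = wn * wn * (u - x) - 2.0 * zeta * wn * v
        v += a * dt
        x += v * dt
        achieved.append(x)
    return achieved
```

Feeding the achieved (not commanded) trajectory into the emission and torque models is what keeps the optimizer from exploiting physically unreachable step changes.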

Relevance: 40.00%

Abstract:

Ion beam therapy is a valuable method for the treatment of deep-seated and radio-resistant tumors thanks to the favorable depth-dose distribution characterized by the Bragg peak. Hadrontherapy facilities take advantage of the well-defined ion range, resulting in a highly conformal dose in the target volume, while the dose to critical organs is reduced compared with photon therapy. The necessity of monitoring delivery precision, i.e. the ion range, is unquestionable, so different approaches have been investigated, such as the detection of prompt photons or of the annihilation photons of positron-emitter nuclei created during the therapeutic treatment. Based on the measurement of the induced β+ activity, our group has developed various in-beam PET prototypes: the one under test is composed of two planar detector heads, each consisting of four modules with a total active area of 10 × 10 cm². A single detector module is made of a LYSO crystal matrix coupled to a position-sensitive photomultiplier and is read out by dedicated front-end electronics. Preliminary data taking was performed at the Italian National Centre for Oncological Hadron Therapy (CNAO, Pavia), using proton beams in the energy range of 93-112 MeV impinging on a plastic phantom. The measured activity profiles are presented and compared with simulations based on the FLUKA Monte Carlo package.

Relevance: 30.00%

Abstract:

Antimony is a common catalyst in the synthesis of the polyethylene terephthalate used to manufacture food-grade bottles. However, antimony residues in the final products are transferred to juices, soft drinks, or water, and the literature reports toxicity associated with antimony. In this work, a green, fast, and direct method to quantify antimony, sulfur, iron, and copper in PET bottles by X-ray fluorescence spectrometry is presented. Concentrations of 2.4 to 11 mg Sb kg-1 were found in the 20 samples analyzed. Coupling the multielemental technique with chemometric treatment also made it possible to classify PET samples as bottle-grade PET or recycled-PET blends by their Fe content.

Relevance: 30.00%

Abstract:

Rangel EM, Mendes IA, Carnio EC, Marchi Alves LM, Godoy S, Crispim JA. Development, implementation, and assessment of a distance module in endocrine physiology. Adv Physiol Educ 34: 70-74, 2010; doi: 10.1152/advan.00070.2009. This study aimed to develop, implement, and assess a distance module in endocrine physiology in TelEduc for undergraduate nursing students at a public university in Brazil, with a sample of 44 students. Stage 1 consisted of developing the module through the process of creating a distance course by means of the Web. Stage 2 was the planning of the module's practical functioning, and stage 3 was the planning of student evaluations. In the experts' assessment, the module complied with pedagogical and technical requirements most of the time. In the practical functioning stage, 10 h were dedicated to on-site activities and 10 h to distance activities. Most students (93.2%) were women, between 19 and 23 yr of age (75%). The internet was the most used means of staying updated for 23 students (59.0%), and 30 students (68.2%) accessed it from the teaching institution. A personal computer was used by 23 students (56.1%), and most of them (58.1%) learned to use it on their own. Access to the forum was more dispersed (variation coefficient: 86.80%) than access to the chat (variation coefficient: 65.14%). Average participation was 30 students in the forums and 22 students in the chat. Students' final grades in the module averaged 8.5 (SD: 1.2). TelEduc was shown to be efficient in supporting the teaching-learning process of endocrine physiology.

Relevance: 30.00%

Abstract:

Multifilter rotating shadowband radiometer (MFRSR) calibration values for aerosol optical depth (AOD) retrievals were determined by means of the general method formulated by Forgan [Appl. Opt. 33, 4841 (1994)] at a polluted urban site. The precision obtained is comparable with that of the classical method, the Langley plot, applied on clean mountaintops distant from pollution sources. The AOD retrieved over São Paulo City with both calibration procedures is compared with Aerosol Robotic Network data. The results are similar and, except for the shortest wavelength (415 nm), the MFRSR's AOD is systematically overestimated by about 0.03. (c) 2008 Optical Society of America.
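The classical Langley calibration mentioned as the reference method can be sketched in a few lines. This idealized version assumes a stable optical depth over the measurement series and omits the instrument-specific corrections that Forgan's general method addresses:

```python
import numpy as np

def langley_v0(airmass, signal):
    """Langley calibration: extrapolate the instrument signal to m = 0.

    Beer-Lambert: ln V = ln V0 - m * tau, so a straight-line fit of
    ln(signal) against airmass m gives ln V0 as the intercept and
    tau as minus the slope (valid only while tau is stable).
    """
    slope, intercept = np.polyfit(airmass, np.log(signal), 1)
    return np.exp(intercept), -slope   # (top-of-atmosphere V0, tau)
```

Because tau rarely stays stable at a polluted urban site, the intercept drifts from day to day there, which is precisely why a general method is needed away from clean mountaintops.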

Relevance: 30.00%

Abstract:

The application of laser-induced breakdown spectrometry (LIBS) to the direct analysis of plant materials is a great challenge that still requires effort for its development and validation. To this end, a series of experimental approaches has been carried out to show that LIBS can be used as an alternative to wet-acid-digestion-based methods for the analysis of agricultural and environmental samples. The large amount of information in LIBS spectra of these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, compared with univariate regression on line emission intensities. In the present work, the performance of univariate and multivariate calibration, the latter based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest gave the best conditions for the analysis. In this particular application, the two models showed similar performance, but PLSR appeared more robust owing to a lower occurrence of outliers compared with the univariate method. The data suggest that efforts concerning sample presentation and the fitness of standards for LIBS analysis are needed to fulfill the boundary conditions for matrix-independent development and validation. (C) 2009 Elsevier B.V. All rights reserved.
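A minimal PLS1 (NIPALS) sketch of the kind of multivariate calibration compared here, mapping spectra to a single analyte concentration. This toy implementation is illustrative only, not the authors' chemometric pipeline:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Fit a PLS1 model (NIPALS) from spectra X to concentrations y.

    Returns a regression vector plus centering terms so that
    predictions are (X_new - x_mean) @ b + y_mean.
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                 # covariance-maximizing direction
        w /= np.linalg.norm(w)
        t = Xc @ w                    # scores
        tt = t @ t
        p = Xc.T @ t / tt             # X loadings
        q = (yc @ t) / tt             # y loading
        Xc = Xc - np.outer(t, p)      # deflate for the next component
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    b = W @ np.linalg.solve(P.T @ W, Q)
    return b, x_mean, y_mean

def pls1_predict(X, b, x_mean, y_mean):
    return (np.asarray(X, float) - x_mean) @ b + y_mean
```

Restricting X to spectral regions containing only the analyte's lines, as the abstract recommends, reduces the chance that the latent components latch onto interfering matrix features.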

Relevance: 30.00%

Abstract:

This study evaluated two different support materials (ground tire and polyethylene terephthalate [PET]) for biohydrogen production in anaerobic fluidized bed reactors (AFBRs) treating synthetic wastewater containing glucose (4000 mg L(-1)). The AFBRs, which contained either ground tire (R1) or PET (R2) as support material, were inoculated with thermally pretreated anaerobic sludge, operated at a temperature of 30 °C, and run over a range of hydraulic retention times (HRTs) between 1 and 8 h. Reactor R1, operating at an HRT of 2 h, showed better performance than reactor R2, reaching a maximum hydrogen yield of 2.25 mol H(2) mol(-1) glucose with 1.3 mg of biomass (as total volatile solids) attached to each gram of ground tire. Subsequent 16S rRNA gene sequencing and phylogenetic analysis of particle samples revealed that reactor R1 favored the presence of hydrogen-producing bacteria such as Clostridium, Bacillus, and Enterobacter. (C) 2010 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Recently, the development of industrial processes has brought about the emergence of technologically complex systems. This development has generated the need for research on mathematical techniques capable of dealing with project complexity and validation. Fuzzy models have received particular attention in the area of nonlinear system identification and analysis due to their capacity to approximate nonlinear behavior and deal with uncertainty. A fuzzy rule-based model suitable for approximating many systems and functions is the Takagi-Sugeno (TS) fuzzy model. TS fuzzy models are nonlinear systems described by a set of if-then rules that give local linear representations of an underlying system; such models can approximate a wide class of nonlinear systems. In this paper, a performance analysis of a TS fuzzy inference system for the calibration of electronic compass devices is presented. The contribution of the evaluated TS fuzzy inference system is to reduce the error in data acquired from a digital electronic compass. For reliable operation of the TS fuzzy inference system, adequate error measurements must be taken, and the error noise must be filtered before the TS fuzzy inference system is applied. The proposed method demonstrated an effectiveness of 57% at reducing the total error in the tests considered. (C) 2011 Elsevier Ltd. All rights reserved.
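The TS structure described, if-then rules with local linear consequents combined by firing strength, can be sketched for a scalar input. The Gaussian membership functions and all parameter values below are illustrative assumptions, not the paper's fitted compass-calibration model:

```python
import numpy as np

def ts_infer(x, centers, sigmas, coeffs):
    """First-order Takagi-Sugeno inference for a scalar input.

    Rule i: IF x is near centers[i] (Gaussian membership with width
    sigmas[i]) THEN y_i = a_i * x + b_i, with (a_i, b_i) in coeffs.
    The output is the firing-strength-weighted average of the local
    linear models.
    """
    x = float(x)
    w = np.exp(-0.5 * ((x - np.asarray(centers)) / np.asarray(sigmas)) ** 2)
    y_local = np.array([a * x + b for a, b in coeffs])
    return float(np.sum(w * y_local) / np.sum(w))
```

For compass calibration, the rule parameters would be identified from measured heading-error data, so the blended local models approximate the nonlinear error curve.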