977 results for Image Reconstruction
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Therapy with proton beams has proven more effective than conventional radiotherapy for oncology treatment. However, its planning uses photon-beam computed tomography, which does not account for the fundamental differences between the interactions of X-rays and protons with matter. There is currently a great effort to develop tomography with proton beams, and for image reconstruction it is necessary to know the most likely trajectory of the proton beam. In this work the most likely trajectory of a proton beam in a homogeneous water target was calculated, taking inelastic nuclear interactions into account. The lateral deflection of the proton beam was also computed analytically. The calculations used programs based on the Monte Carlo method: SRIM 2006 (Stopping and Range of Ions in Matter) and MCNPX (Monte Carlo N-Particle eXtended) v2.50; the analytical calculation employed Wolfram Mathematica v7.0. We obtained how different nuclear reaction models modify the trajectory of the proton beam and compared the analytical and Monte Carlo results.
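The analytical lateral-deflection calculation mentioned above can be sketched with the Highland approximation for multiple Coulomb scattering; the momentum, velocity and radiation-length values below are illustrative assumptions, not figures from the thesis:

```python
import math

def highland_sigma_theta(p_mev, beta, thickness_cm, x0_cm=36.08, z=1):
    """Highland approximation for the RMS multiple-scattering angle (radians).

    p_mev: proton momentum in MeV/c; beta: v/c; thickness_cm: path length
    in the medium; x0_cm: radiation length (~36.08 cm for liquid water).
    """
    t = thickness_cm / x0_cm
    return 13.6 / (beta * p_mev) * z * math.sqrt(t) * (1 + 0.038 * math.log(t))

def lateral_sigma(p_mev, beta, thickness_cm):
    """RMS lateral displacement of the beam after the slab (cm),
    assuming small-angle scattering accumulated uniformly along the path."""
    theta0 = highland_sigma_theta(p_mev, beta, thickness_cm)
    return thickness_cm * theta0 / math.sqrt(3)

# Example: a ~200 MeV proton (p ~ 644 MeV/c, beta ~ 0.566) in 10 cm of water.
sigma = lateral_sigma(644.0, 0.566, 10.0)
```

For these example values the RMS lateral displacement comes out on the millimetre scale, which is precisely the deviation a most-likely-trajectory estimate for pCT has to account for.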
Abstract:
In recent years, the use of proton beams in radiotherapy has seen outstanding progress (SMITH, 2006). Up to now, computed tomography (CT) has been a prerequisite for treatment planning in this kind of therapy because it provides the electron density distribution required for the calculation of dose and dose intervals. However, the use of CT images for proton treatment planning ignores fundamental differences in the physical interaction processes of photons and protons and is therefore potentially inaccurate (SADROZINSKI, 2004). Proton CT (pCT) can in principle directly measure the density distribution in a patient needed for the dose distribution (SCHULTE et al., 2004). One important problem that must be solved is the implementation of image reconstruction algorithms. In this sense, it is necessary to know how the presence of materials of different density and composition interferes with the energy deposited by ionization and Coulomb excitation along the beam's trajectory. The study was conducted in two stages; in both, the program SRIM (The Stopping and Range of Ions in Matter) was used to simulate the interaction of pencil-beam-type proton beams. In the first stage, energies in the range of 100-250 MeV were used (ZIEGLER, 1999). The targets were set to 50 mm in length for the 100 MeV beam, due to its short range, and 70 mm for 150, 200 and 250 MeV. The target was composed of liquid water with a 6 mm layer of cortical bone (ICRP). Nine simulations were performed, varying the position of the heterogeneity in 5 mm steps. In the second stage the 250 MeV energy was removed from the simulations, due to its greater energy and weaker interaction. The targets were reduced to 50 mm thick to standardize the simulations. The bone layer was divided into two equal parts, which were placed at the ends of the target... (Complete abstract: click electronic access below)
Abstract:
This thesis investigates the possibilities and limitations of MR lung imaging with hyperpolarized 3-He at guiding magnetic fields reduced relative to the usual field strengths. Functional imaging techniques (dynamic imaging, diffusion-weighted imaging and determination of the oxygen partial pressure) are considered in particular. Experimentally, this is done through in vivo measurements on a 0.1 T whole-body scanner. To systematically study MR imaging under the influence of diffusing contrast agents, analytical simulations, Monte Carlo studies and reference experiments are carried out, with particular attention to the influence of diffusion and susceptibility artifacts on morphological and diffusion-weighted imaging. The development and comparison of different concepts for generating MR guiding fields leads to the invention of a novel principle for the large-volume homogenization of magnetic fields. This principle is implemented in the form of a particularly compact transport container for nuclear-spin-polarized noble gases. The thesis includes a detailed discussion of MR imaging techniques in theory and practice in order to connect them to the investigations carried out. Parts of these studies were funded by the European Space Agency ESA (Contract No. 15308/01/NL/PA).
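The diffusion-weighted measurements discussed above amount to inverting a mono-exponential signal model to obtain an apparent diffusion coefficient (ADC); the signal values and b-value below are hypothetical illustrations, not measured data:

```python
import math

def apparent_diffusion_coefficient(s0, sb, b):
    """Two-point ADC estimate from the mono-exponential model
    S(b) = S0 * exp(-b * D).

    s0, sb: signal without and with diffusion weighting;
    b: diffusion weighting (b-value) in s/cm^2.
    Returns D in cm^2/s.
    """
    return math.log(s0 / sb) / b

# Free 3He diffuses with D ~ 0.88 cm^2/s; restriction by alveolar walls
# lowers the measured ADC. Hypothetical signals, b = 1.6 s/cm^2:
adc = apparent_diffusion_coefficient(1.0, 0.7, 1.6)
```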
Abstract:
The subject of this thesis lies in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those in which a set of measured data is analysed in order to extract as much information as possible about a model that is assumed to represent a system in the real world. We study two inverse problems, in the fields of classical and quantum physics: QCD condensates from tau-decay data, and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, under rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. Electrical impedance tomography (EIT) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations; this method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements.
A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung and noninvasive monitoring of heart function and blood flow.
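The linearisation idea behind the second EIT algorithm can be sketched in a few lines: small boundary-voltage perturbations relate to conductivity changes through a sensitivity (Jacobian) matrix, and the ill-posed linear system is stabilised by Tikhonov regularisation. The Jacobian below is a random stand-in, not a real finite-element sensitivity matrix:

```python
import numpy as np

# Linearised EIT: dv ~ J dsigma, where dv stacks boundary-voltage changes
# over all measurement sets and dsigma is the pixelwise conductivity change.
rng = np.random.default_rng(0)
n_meas, n_pix = 64, 32
J = rng.standard_normal((n_meas, n_pix))   # stand-in sensitivity matrix
dsigma_true = np.zeros(n_pix)
dsigma_true[10] = 1.0                      # a single conductive anomaly
dv = J @ dsigma_true                       # simulated boundary data

# Tikhonov-regularised least-squares solution of the ill-posed system.
lam = 1e-3                                 # regularisation parameter
dsigma = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ dv)
```

With noise-free data and mild regularisation the anomaly is recovered at the correct pixel; in practice the choice of `lam` trades resolution against noise amplification.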
Abstract:
In this work we study a polyenergetic and multimaterial model for breast image reconstruction in Digital Tomosynthesis, taking into consideration the variety of materials forming the object and the polyenergetic nature of the X-ray beam. The modelling of the problem leads to the resolution of a high-dimensional nonlinear least-squares problem that, owing to its nature as an ill-posed inverse problem, needs some kind of regularization. We test two main classes of methods: the Levenberg-Marquardt method (together with the Conjugate Gradient method for the computation of the descent direction) and two limited-memory BFGS-like methods (L-BFGS). We perform experiments for different values of the regularization parameter (constant or varying at each iteration), tolerances and stopping conditions. Finally, we analyse the performance of the several methods by comparing relative errors, numbers of iterations, run times and the quality of the reconstructed images.
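The Levenberg-Marquardt scheme named above damps the Gauss-Newton step with a parameter that shrinks when a step succeeds and grows when it fails. A generic sketch on a toy exponential fit, assuming nothing about the thesis's actual tomosynthesis code:

```python
import numpy as np

def levenberg_marquardt(r, jac, x0, mu=1.0, iters=50):
    """Minimise 0.5*||r(x)||^2. Each step solves
    (J^T J + mu*I) dx = -J^T r; mu is halved on a successful step
    and doubled on a rejected one."""
    x = x0.copy()
    for _ in range(iters):
        J, res = jac(x), r(x)
        dx = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -J.T @ res)
        if np.linalg.norm(r(x + dx)) < np.linalg.norm(res):
            x, mu = x + dx, mu * 0.5      # accept step, trust the model more
        else:
            mu *= 2.0                     # reject step, damp harder
    return x

# Toy problem: recover a and b in y = a*exp(-b*t) from exact data (a=2, b=0.5).
t = np.linspace(0, 4, 20)
y = 2.0 * np.exp(-0.5 * t)
r = lambda x: x[0] * np.exp(-x[1] * t) - y
jac = lambda x: np.column_stack([np.exp(-x[1] * t),
                                 -x[0] * t * np.exp(-x[1] * t)])
fit = levenberg_marquardt(r, jac, np.array([1.0, 1.0]))
```

For large mu the step approaches damped gradient descent, for small mu it approaches Gauss-Newton; the regularizing effect of mu is one reason the method suits ill-posed problems like the one studied here.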
Abstract:
The rapid technical advances in computed tomography have led to an increased number of clinical indications. Unfortunately, at the same time the radiation exposure of the population has also increased because of the growing total number of CT examinations. In the last few years various publications have demonstrated the feasibility of radiation dose reduction for CT examinations with no compromise in image quality and no loss in interpretation accuracy. The majority of the proposed methods for dose optimization are easy to apply and are independent of the detector array configuration. This article reviews indication-dependent principles (e.g. application of reduced tube voltage for CT angiography, selection of the collimation and the pitch, reducing the total number of imaging series, lowering the tube voltage and tube current for non-contrast CT scans), manufacturer-dependent principles (e.g. accurate application of automatic tube current modulation, use of adaptive image noise filters and use of iterative image reconstruction) and general principles (e.g. appropriate patient centering in the gantry, avoiding over-ranging of the CT scan, lowering the tube voltage and tube current for survey CT scans) which lead to radiation dose reduction.
Abstract:
Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border-security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction methods to compensate for turbulence effects. While many image reconstruction methods have been proposed, their suitability for use in man-portable embedded systems is uncertain. To be effective, these systems must operate over significant variations in turbulence conditions while subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with a minimal level of computational complexity. Speckle imaging methods have recently been proposed as well suited for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. Design parameters are selected by parametric evaluation of system performance as factors external to the system are varied. The precise control necessary for such an evaluation is made possible using image sets of turbulence-degraded imagery developed using a novel technique for simulating anisoplanatic image formation over long horizontal paths. System performance is statistically evaluated over multiple reconstructions using the mean squared error (MSE) to evaluate reconstruction quality. In addition to the more general design parameters, the relative performance of the bispectrum and Knox-Thompson phase recovery methods is also compared.
As an outcome of this work it can be concluded that speckle imaging techniques are robust to the variation in turbulence conditions and user-controlled parameters expected when operating during the day over long horizontal paths. Speckle imaging systems that incorporate 15 or more image frames and 4 estimates of the object phase per reconstruction provide up to a 45% reduction in MSE and a 68% reduction in its deviation. In addition, the Knox-Thompson phase recovery method is shown to produce images in half the time required by the bispectrum. The quality of images reconstructed using the Knox-Thompson and bispectrum methods is also found to be nearly identical. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
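Part of the benefit of using 15 or more frames is statistical: combining independent short-exposure frames averages down noise, and hence MSE, roughly as 1/N. A minimal stand-in with random images (not actual turbulence-degraded imagery or a full speckle reconstruction):

```python
import numpy as np

def mse(reference, image):
    """Mean squared error between a reference object and a reconstruction,
    the quality metric used to rank parameter choices above."""
    return np.mean((reference.astype(float) - image.astype(float)) ** 2)

rng = np.random.default_rng(1)
truth = rng.random((32, 32))
# 15 noisy frames of the same scene; averaging reduces the noise
# variance, and hence the MSE, roughly in proportion to the frame count.
frames = [truth + 0.2 * rng.standard_normal(truth.shape) for _ in range(15)]
single = mse(truth, frames[0])
averaged = mse(truth, np.mean(frames, axis=0))
```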
Abstract:
The purpose of this research was to develop a working physical model of the focused plenoptic camera and to develop software that can process the measured image intensity, reconstruct it into a full-resolution image, and produce a depth map from the corresponding rendered image. The plenoptic camera is a specialized imaging system designed to acquire spatial, angular, and depth information in a single intensity measurement. This camera can also computationally refocus an image by adjusting the patch size used to reconstruct the image. The published methods have been vague and conflicting, so the motivation behind this research was to decipher the work that has been done in order to develop a working proof-of-concept model. This thesis outlines the theory behind plenoptic camera operation and shows how the measured intensity from the image sensor can be turned into a full-resolution rendered image with its corresponding depth map. The depth map can be created by a cross-correlation of adjacent sub-images created by the microlenslet array (MLA). The full-resolution image reconstruction can be done by taking a patch from each MLA sub-image and piecing them together like a puzzle. The patch size determines which object plane will be in focus. This thesis also goes through a rigorous explanation of the design constraints involved in building a plenoptic camera. Plenoptic camera data from Adobe were used to help with the development of the algorithms written to create a rendered image and its depth map. Finally, using the algorithms developed from these tests and the knowledge gained in developing the plenoptic camera, a working experimental system was built, which successfully generated a rendered image and its corresponding depth map.
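The patch-and-tile rendering described above can be sketched as follows; the sub-image pitch and patch size are arbitrary illustrative values, and the sensor image is synthetic:

```python
import numpy as np

def render_full_resolution(raw, mla_pitch, patch):
    """Assemble a rendered image by cutting a centred `patch` x `patch`
    window from every MLA sub-image and tiling the windows side by side.

    raw: the sensor image; mla_pitch: sub-image size in pixels.
    The patch size selects which object plane ends up in focus, which is
    how the camera refocuses computationally.
    """
    rows, cols = raw.shape[0] // mla_pitch, raw.shape[1] // mla_pitch
    off = (mla_pitch - patch) // 2
    out = np.zeros((rows * patch, cols * patch), dtype=raw.dtype)
    for i in range(rows):
        for j in range(cols):
            sub = raw[i * mla_pitch + off: i * mla_pitch + off + patch,
                      j * mla_pitch + off: j * mla_pitch + off + patch]
            out[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = sub
    return out

# A uniform sensor image renders to a uniform image for any patch size.
raw = np.full((80, 80), 7.0)
img = render_full_resolution(raw, mla_pitch=10, patch=6)
```

The companion depth-map step would cross-correlate adjacent sub-images and convert the disparity of the best match into depth.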
Abstract:
Objective: The PEM Flex Solo II (Naviscan, Inc., San Diego, CA) is currently the only commercially available positron emission mammography (PEM) scanner. This scanner does not apply corrections for count-rate effects, attenuation or scatter during image reconstruction, potentially affecting the quantitative accuracy of images. This work measures the overall quantitative accuracy of the PEM Flex system and determines the contributions of error due to count-rate effects, attenuation and scatter. Materials and Methods: Gelatin phantoms were designed to simulate breasts of different sizes (4–12 cm thick) with varying uniform background activity concentration (0.007–0.5 μCi/cc), cysts and lesions (2:1, 5:1, 10:1 lesion-to-background ratios). The overall error was calculated from ROI measurements in the phantoms with a clinically relevant background activity concentration (0.065 μCi/cc). The error due to count-rate effects was determined by comparing the overall error at multiple background activity concentrations with the error at 0.007 μCi/cc. A point source and cold gelatin phantoms were used to assess the errors due to attenuation and scatter. The maximum pixel values in gelatin and in air were compared to determine the effect of attenuation. Scatter was evaluated by comparing the sum of all pixel values in gelatin and in air. Results: The overall error in the background was negative in phantoms of all thicknesses, with the exception of the 4-cm thick phantoms (0%±7%), and its magnitude increased with thickness (-34%±6% for the 12-cm phantoms). All lesions exhibited large negative errors (-22% for the 2:1 lesions in the 4-cm phantom), whose magnitude increased with thickness and with lesion-to-background ratio (-85% for the 10:1 lesions in the 12-cm phantoms). The error due to count rate in phantoms with 0.065 μCi/cc background was negative (-23%±6% for 4-cm thickness) and decreased in magnitude with thickness (-7%±7% for 12 cm).
Attenuation was a substantial source of negative error whose magnitude increased with thickness (-51%±10% to -77%±4% in the 4 to 12 cm phantoms, respectively). Scatter contributed a relatively constant amount of positive error (+23%±11%) at all thicknesses. Conclusion: Applying corrections for count rate, attenuation and scatter will be essential for the PEM Flex Solo II to produce quantitatively accurate images.
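The error figures above follow the usual signed percent-error convention, where negative values mean the scanner under-reports activity. A trivial sketch with illustrative numbers (not the study's raw measurements):

```python
def percent_error(measured, true):
    """Signed percent error as used in the phantom study:
    100 * (measured - true) / true.
    Negative values indicate under-reported activity."""
    return 100.0 * (measured - true) / true

# Hypothetical example: a thick-phantom background region reading
# 0.043 uCi/cc against a true concentration of 0.065 uCi/cc.
err = percent_error(0.043, 0.065)
```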
Abstract:
OBJECTIVE: The aim of this study was to visualize and localize the sheep antimicrobials beta-defensins 1, 2, and 3 (SBD-1, SBD-2, SBD-3), sheep neutrophil defensin alpha (SNP-1), and the cathelicidin LL-37 in sheep small intestine after burn injury, our hypothesis being that these compounds would be upregulated in an effort to overcome a compromised endothelial lining. The response to burn injury includes the release of proinflammatory cytokines and systemic immune suppression that, if untreated, can progress to multiple organ failure and death, so protective mechanisms have to be initiated and implemented. METHODS: Tissue sections were probed with antibodies to the antimicrobials, visualized with fluorescently labeled secondary antibodies, and subjected to fluorescence deconvolution microscopy and image reconstruction. RESULTS: In both the sham and burn samples, all of the aforementioned antimicrobials were seen in each of the layers of the small intestine, with the highest concentration localized to the epithelium. SBD-2, SBD-3, and SNP-1 were upregulated in both enterocytes and Paneth cells, while SNP-1 and LL-37 showed increases in both the inner circular and outer longitudinal muscle layers of the muscularis externa following burn injury. Each of the defensins except SBD-1 was also seen between the muscle layers of the externa, and while burn injury caused slight increases of SBD-2, SBD-3, and SNP-1 in this location, LL-37 content was significantly decreased. CONCLUSION: While each of these antimicrobials is present in multiple layers of sheep small intestine, SBD-2, SBD-3, SNP-1, and LL-37 are upregulated in specific layers of the small intestine after burn injury.
Abstract:
PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph to another image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows determining a Transconvolution function to convert one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga filled spheres was developed.
To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization properties suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs. CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
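A one-dimensional sketch of the Transconvolution idea: in the Fourier domain, divide out the measured system's transfer function and multiply in the virtual system's band-limited Hann window, which vanishes above a critical frequency so the division stays numerically safe. The Gaussian stand-in PSF, the critical frequency and the widths below are illustrative assumptions, not the paper's parameterisation:

```python
import numpy as np

n = 256
x = np.arange(n) - n // 2

def gaussian_psf(sigma):
    """Normalised Gaussian stand-in for a scanner point spread function."""
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

obj = np.zeros(n); obj[n // 2] = 1.0             # point object
psf_a = gaussian_psf(4.0)                         # "scanner A" PSF
img_a = np.real(np.fft.ifft(np.fft.fft(obj) *
                            np.fft.fft(np.fft.ifftshift(psf_a))))

# Transconvolution transfer function: Hann MTF of the virtual PET divided
# by scanner A's OTF, taken only where the OTF is non-negligible.
otf_a = np.fft.fft(np.fft.ifftshift(psf_a))
freq = np.fft.fftfreq(n)
f_c = 0.1                                         # critical frequency (illustrative)
hann = np.where(np.abs(freq) < f_c,
                0.5 * (1 + np.cos(np.pi * freq / f_c)), 0.0)
transfer = np.where(np.abs(otf_a) > 1e-8, hann / otf_a, 0.0)

# Image as the virtual standardized system would have seen the object.
img_virtual = np.real(np.fft.ifft(np.fft.fft(img_a) * transfer))
```

Because the Hann window is zero above the critical frequency, no frequency content has to be restored by division against a vanishing OTF, which is the boundary condition the abstract alludes to.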
Abstract:
OBJECTIVE: Human defensins and cathelicidins are families of cationic antimicrobial peptides (AMPs) that play multiple roles in both the innate and adaptive immune systems. They have direct antimicrobial activity against several microorganisms, including burn pathogens. The majority of the components of innate and adaptive immunity either express naturally occurring defensins or are chemoattracted or otherwise functionally affected by them. They also enhance adaptive immunity and wound healing and alter antibody production. The mechanisms underlying the multiple functions of AMPs are not yet clearly understood. Prior studies localizing defensins in normal and burned skin using deconvolution fluorescence scanning microscopy indicate localization of defensins in the nucleus, perinuclear regions, and cytoplasm. The objective of this study is to further confirm the identification of HBD-1 in the nucleus by deconvolution microscopic studies involving image reconstruction and wire-frame modeling. RESULTS: Our study demonstrated the presence of intranuclear HBD-1 in keratinocytes throughout the stratum spinosum by costaining with the nuclear probe DAPI. In addition, the HBD-1 sequence shows some homology with known cationic nuclear localization signal sequences. CONCLUSION: To our knowledge, this is the first report to localize HBD-1 in the nuclear region, suggesting a role for this peptide in gene expression and providing new data that may help determine the mechanisms of defensin functions.
Abstract:
This year marks the 20th anniversary of functional near-infrared spectroscopy and imaging (fNIRS/fNIRI). As the vast majority of commercial instruments developed until now are based on continuous wave technology, the aim of this publication is to review the current state of instrumentation and methodology of continuous wave fNIRI. For this purpose we provide an overview of the commercially available instruments and address instrumental aspects such as light sources, detectors and sensor arrangements. Methodological aspects, algorithms to calculate the concentrations of oxy- and deoxyhemoglobin and approaches for data analysis are also reviewed. From the single-location measurements of the early years, instrumentation has progressed to imaging initially in two dimensions (topography) and then three (tomography). The methods of analysis have also changed tremendously, from the simple modified Beer-Lambert law to sophisticated image reconstruction and data analysis methods used today. Due to these advances, fNIRI has become a modality that is widely used in neuroscience research and several manufacturers provide commercial instrumentation. It seems likely that fNIRI will become a clinical tool in the foreseeable future, which will enable diagnosis in single subjects.
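The modified Beer-Lambert computation mentioned above amounts to inverting a small linear system: at each wavelength the change in optical density is the pathlength-weighted sum of the extinction coefficients times the concentration changes. The extinction coefficients, source-detector distance and differential pathlength factor below are rough illustrative values, not calibrated constants:

```python
import numpy as np

# Modified Beer-Lambert law: dOD(lambda) = (eps_HbO*dC_HbO + eps_HbR*dC_HbR) * d * DPF
eps = np.array([[0.69, 3.84],    # 760 nm: [eps_HbO, eps_HbR] in 1/(mM*cm) (illustrative)
                [1.20, 0.80]])   # 850 nm
d, dpf = 3.0, 6.0                # source-detector distance (cm), differential pathlength factor

def concentration_changes(dod_760, dod_850):
    """Invert the 2x2 system to recover (dC_HbO, dC_HbR) in mM from the
    optical-density changes measured at the two wavelengths."""
    dod = np.array([dod_760, dod_850])
    return np.linalg.solve(eps * d * dpf, dod)

# Forward-simulate a pure oxyhemoglobin increase of 0.001 mM, then recover it.
true_dc = np.array([0.001, 0.0])
dod = (eps * d * dpf) @ true_dc
rec = concentration_changes(*dod)
```

Two wavelengths straddling the hemoglobin isosbestic point (~800 nm) make the system well conditioned, which is why continuous-wave instruments typically pair one wavelength below it with one above.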