994 results for computer tomography
Abstract:
Recently, morphometric measurements of the ascending aorta have been performed with ECG-gated multidetector computed tomography (MDCT) to support the development of novel transcatheter therapies (TCT); nevertheless, the variability of such measurements remains unknown. Thirty patients referred for ECG-gated CT thoracic angiography were evaluated. Continuous reformations of the ascending aorta, perpendicular to the centerline, were obtained automatically with a commercially available computer-aided diagnosis (CAD) system. Maximal-diameter measurements were then made with the CAD system and, separately, manually by two observers. Measurements were repeated one month later. The Bland-Altman method, Spearman coefficients, and the Wilcoxon signed-rank test were used to evaluate the variability, the correlation, and the differences between observers. The interobserver variability for maximal diameter between the two observers was up to 1.2 mm, with limits of agreement [-1.5, +0.9] mm, whereas the intraobserver limits were [-1.2, +1.0] mm for the first observer and [-0.8, +0.8] mm for the second. The intraobserver CAD variability was 0.8 mm. The correlation between the observers and the CAD system was good (0.980-0.986); however, significant differences exist (P < 0.001). The maximum variability observed was 1.2 mm and should be considered in reports of measurements of the ascending aorta. The CAD system is as reproducible as an experienced reader.
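As a hedged illustration of the agreement analysis described above, the sketch below computes a Bland-Altman bias and 95% limits of agreement for paired diameter readings; the values and variable names are hypothetical stand-ins, not the study's data.

```python
import numpy as np

def bland_altman_limits(obs1, obs2):
    """Bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(obs1, float) - np.asarray(obs2, float)
    bias = diff.mean()                 # systematic offset between readers
    sd = diff.std(ddof=1)              # spread of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired maximal-diameter readings (mm) from two observers
d1 = [34.1, 36.5, 31.2, 40.3, 29.8, 37.7]
d2 = [34.6, 36.1, 31.9, 40.0, 30.2, 37.4]
bias, lo, hi = bland_altman_limits(d1, d2)
print(f"bias {bias:+.2f} mm, limits of agreement [{lo:+.2f}, {hi:+.2f}] mm")
```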
A Comparative Analysis between Ultrasonometry and Computer-Aided Tomography to Evaluate Bone Healing
Abstract:
An ultrasonometric and computed-tomographic study of bone healing was undertaken using a model of a transverse mid-shaft osteotomy of sheep tibiae fixed with a semi-flexible external fixator. Fourteen sheep were operated on and divided into two groups of seven according to osteotomy type, either regular or by segmental resection. The animals were killed on the 90th postoperative day and the tibiae resected for in vitro direct-contact transverse and axial measurement of ultrasound propagation velocity (UV), followed by quantitative computer-aided tomography (callus density and volume) through the osteotomy site. The intact left tibiae served as controls, being examined in a symmetrical diaphyseal segment. Regular osteotomies healed with a smaller and more mature callus than resection osteotomies. Axial UV was consistently and significantly higher (p = 0.01) than transverse UV, and both transverse and axial UV were significantly higher for the regular than for the segmental resection osteotomy. Transverse UV did not differ significantly between the intact and operated tibiae (p = 0.20 for regular osteotomy; p = 0.02 for resection osteotomy), but axial UV was significantly higher for the intact tibiae. Tomographic callus density was significantly higher for the regular than for the resection osteotomy, and higher than both for the intact tibiae, presenting a strong positive correlation with UV. Callus volume presented the opposite behavior, with a negative correlation with UV. We conclude that UV is at least as precise as quantitative tomography for providing information about the healing state of both regular and resection osteotomies. (C) 2011 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 30:1076-1082, 2012
Abstract:
Ecosystem engineers that increase habitat complexity are keystone species in marine systems, increasing shelter and niche availability, and therefore biodiversity. For example, kelp holdfasts form intricate structures and host the largest number of organisms in kelp ecosystems. However, methods that quantify 3D habitat complexity have only seldom been used in marine habitats, and never in kelp holdfast communities. This study investigated the role of kelp holdfasts (Laminaria hyperborea) in supporting benthic faunal biodiversity. Computer-aided tomography (CT) scanning was used to quantify the three-dimensional geometrical complexity of holdfasts, including volume, surface area, and surface fractal dimension (FD). Additionally, the number of haptera, the number of haptera per unit of volume, and the age of the kelps were estimated. These measurements were compared to faunal biodiversity and community structure using partial least-squares regression and multivariate ordination. Holdfast volume explained most of the variance observed in biodiversity indices; however, all other complexity measures also contributed strongly to the variance observed. Multivariate ordinations further revealed that surface area and haptera per unit of volume accounted for the patterns observed in faunal community structure. Using 3D image analysis, this study contributes substantially to elucidating the quantitative mechanisms underlying the observed relationship between biodiversity and habitat complexity. Furthermore, the potential of CT scanning as an ecological tool is demonstrated, and a methodology for its use in future similar studies is established. Such spatially resolved image analysis could help identify structurally complex areas as biodiversity hotspots, and may support the prioritization of areas for conservation.
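The surface fractal dimension mentioned above is often estimated from segmented CT volumes by box counting; the minimal sketch below illustrates that approach on a binary 3D array. The array and its size are stand-ins for a real segmented scan, not this study's data or code.

```python
import numpy as np

def box_counting_dimension(volume, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary 3D array by box counting."""
    counts = []
    for s in sizes:
        # Trim each axis to a multiple of the box size s
        a, b, c = (dim // s * s for dim in volume.shape)
        v = volume[:a, :b, :c]
        # Mark boxes of side s that contain at least one occupied voxel
        boxes = v.reshape(a // s, s, b // s, s, c // s, s).any(axis=(1, 3, 5))
        counts.append(boxes.sum())
    # FD is the negative slope of log(count) against log(box size)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Stand-in for a segmented holdfast scan (True = kelp material)
volume = np.random.default_rng(0).random((64, 64, 64)) > 0.7
print(f"estimated FD: {box_counting_dimension(volume):.2f}")
```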
Abstract:
At St Thomas' Hospital, we have developed a computer program on a Titan graphics supercomputer to plan the stereotactic implantation of iodine-125 seeds for the palliative treatment of recurrent malignant gliomas. Use of the Gill-Thomas-Cosman relocatable frame allows planning and surgery to be carried out at different hospitals on different days. Stereotactic computed tomography (CT) and positron emission tomography (PET) scans are performed and the images transferred to the planning computer. The head, tumour and frame fiducials are outlined on the relevant images, and a three-dimensional model is generated. Structures which could interfere with the surgery or radiotherapy, such as major vessels and shunt tubing, can also be outlined and included in the display. Catheter target and entry points are set using a three-dimensional cursor controlled by a set of dials attached to the computer. The program calculates and displays the radiation dose distribution within the target volume for various catheter and seed arrangements. The CT co-ordinates of the fiducial rods are used to convert catheter co-ordinates from CT space to frame space and to calculate the catheter insertion angles and depths. The surgically implanted catheters are after-loaded the next day and the seeds left in place for 4 to 6 days, giving a nominal dose of 50 Gy to the edge of the target volume. Twenty-five patients have been treated so far.
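As a hedged illustration of the final planning step above: once a catheter's entry and target points are expressed in frame space, the insertion depth and angles follow from simple vector geometry. The coordinates and the angle conventions below are hypothetical, not the program's actual conventions.

```python
import numpy as np

def catheter_geometry(entry, target):
    """Insertion depth (mm) and direction angles (deg) from entry and
    target points, both already expressed in frame space."""
    v = np.asarray(target, float) - np.asarray(entry, float)
    depth = np.linalg.norm(v)                       # straight-line depth
    polar = np.degrees(np.arccos(v[2] / depth))     # tilt from the frame z-axis
    azimuth = np.degrees(np.arctan2(v[1], v[0]))    # rotation in the x-y plane
    return depth, polar, azimuth

# Hypothetical frame-space coordinates (mm)
depth, polar, azimuth = catheter_geometry(entry=(10.0, 25.0, 40.0),
                                          target=(18.0, 31.0, 12.0))
print(f"depth {depth:.1f} mm, polar {polar:.1f} deg, azimuth {azimuth:.1f} deg")
```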
Abstract:
We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra Gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min, acquiring a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of (2.2 μm)³ proved sufficient to pinpoint reaction initiation and the organization of drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample in more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelized computer codes allowed us to quantify the geometry of the porosity and the drainage architecture from the very large tomographic datasets (2048³ voxels) in unprecedented detail. We determined the position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped, and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With ongoing dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then, the internal stresses are released and dehydration proceeds efficiently, producing new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.
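A square-root front law of the kind described above is straightforward to fit; the sketch below does so with scipy. All numbers are invented placeholders, not the experiment's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def sqrt_front(t, a):
    """Diffusion-like law: front position grows with the square root of time."""
    return a * np.sqrt(t)

# Placeholder front positions (mm from the sample margin) vs time (min)
t = np.array([10, 45, 80, 115, 150, 185, 220, 260, 310], dtype=float)
x = np.array([0.12, 0.27, 0.35, 0.42, 0.48, 0.54, 0.59, 0.64, 0.70])

(a_fit,), _ = curve_fit(sqrt_front, t, x)
print(f"fitted x(t) = {a_fit:.4f} * sqrt(t)   (a in mm per sqrt(min))")
```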
Abstract:
Firstly, we would like to thank Ms. Alison Brough and her colleagues for their positive commentary on our published work [1] and their appraisal of the utility of our “off-set plane” protocol for anthropometric analysis. The standardized protocols described in our manuscript have wide applications, ranging from forensic anthropology and paleodemographic research to clinical settings such as paediatric practice and orthopaedic surgical design. We affirm that the use of geometrically based reference tools commonly found in computer-aided design (CAD) programs such as Geomagic Design X® is imperative for more automated and precise measurement protocols for quantitative skeletal analysis. Therefore, we stand by our recommendation of software such as Amira and Geomagic Design X® in the contexts described in our manuscript...
Abstract:
Nearly pollution-free solutions of the Helmholtz equation for k-values corresponding to visible light are demonstrated and verified against experimentally measured forward-scattered intensity from an optical fiber. Numerically accurate solutions are, in particular, obtained through a novel reformulation of the H¹-optimal Petrov-Galerkin weak form of the Helmholtz equation. Specifically, within a globally smooth polynomial reproducing framework, the compact and smooth test functions are designed so that their normal derivatives are zero everywhere on the local boundaries of their compact supports. This circumvents the need for a priori knowledge of the true solution on the support boundary and relieves the weak form of any jump boundary terms. For numerical demonstration of the above formulation, we used a multimode optical fiber in an index-matching liquid as the object. The scattered intensity and its normal derivative are computed from the scattered field obtained by solving the Helmholtz equation, using both the new formulation and the conventional finite element method. By comparing the results with the experimentally measured scattered intensity, the stability of the solution through the new formulation is demonstrated and its closeness to the experimental measurements verified.
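One way to see why vanishing normal derivatives remove the boundary terms: applying Green's second identity over the support of a compactly supported test function w gives (notation assumed here, with u the unknown field, k the wavenumber, and Ω_s the support of w):

```latex
\int_{\Omega_s} w\,(\Delta u + k^{2}u)\,\mathrm{d}\Omega
  = \int_{\Omega_s} u\,(\Delta w + k^{2}w)\,\mathrm{d}\Omega
  + \oint_{\partial\Omega_s}\left( w\,\frac{\partial u}{\partial n}
      - u\,\frac{\partial w}{\partial n} \right)\mathrm{d}\Gamma .
```

Since w vanishes on the support boundary by compactness, and the construction above additionally enforces ∂w/∂n = 0 there, both boundary integrals drop out, so no knowledge of u or its flux on the support boundary is required.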
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter on human tissues). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we dictate that structure at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
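As a hedged sketch of one building block of such a simulator, the code below performs a single photon scattering event: an exponentially distributed free path followed by a Henyey-Greenstein deflection, as is standard in tissue-optics Monte Carlo codes. It is not the thesis's implementation; the optical parameters are illustrative, and the importance-sampling/photon-splitting machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def scatter_step(direction, g=0.9, mu_s=10.0):
    """One scattering event: exponential free path (mu_s in 1/mm) plus a
    Henyey-Greenstein deflection of the unit direction vector."""
    step = -np.log(1.0 - rng.random()) / mu_s   # Beer-Lambert path length (mm)
    # Sample cos(theta) from the Henyey-Greenstein phase function
    if g == 0.0:
        cos_t = 2.0 * rng.random() - 1.0        # isotropic limit
    else:
        frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
        cos_t = (1.0 + g * g - frac * frac) / (2.0 * g)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    phi = 2.0 * np.pi * rng.random()            # azimuth is uniform
    ux, uy, uz = direction
    if abs(uz) > 0.9999:                        # avoid the polar singularity
        new_dir = np.array([sin_t * np.cos(phi),
                            sin_t * np.sin(phi),
                            np.copysign(cos_t, uz)])
    else:
        d = np.sqrt(1.0 - uz * uz)
        new_dir = np.array([
            sin_t * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / d + ux * cos_t,
            sin_t * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / d + uy * cos_t,
            -sin_t * np.cos(phi) * d + uz * cos_t,
        ])
    return step, new_dir / np.linalg.norm(new_dir)

step, new_dir = scatter_step(np.array([0.0, 0.0, 1.0]))
print(f"free path {step:.3f} mm, new direction {np.round(new_dir, 3)}")
```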
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do this remarkably well. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
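The two-stage routing just described can be sketched with off-the-shelf models as follows; the feature representation, the choice of random forests, and all data here are assumptions for illustration, not the thesis's actual architecture.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical simulated training data: image features, a structure label
# per image, and per-image layer thicknesses as the regression target.
X = rng.normal(size=(600, 256))
structure = rng.integers(0, 3, size=600)            # e.g., layer configuration
thickness = rng.uniform(0.1, 1.0, size=(600, 4))    # ground-truth layer widths

# Stage 1: a classifier that recognizes the structure of an image
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, structure)

# Stage 2: one regressor ("expert") per structure, trained on its own subset
experts = {
    s: RandomForestRegressor(n_estimators=100, random_state=0)
          .fit(X[structure == s], thickness[structure == s])
    for s in np.unique(structure)
}

def predict(features):
    """Route an unseen image through the classifier, then its expert."""
    s = clf.predict(features[None])[0]
    return s, experts[s].predict(features[None])[0]

s, widths = predict(X[0])
print(f"predicted structure {s}, layer widths {np.round(widths, 3)}")
```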
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
We developed an automated system that registers chest CT scans temporally. Our registration method matches corresponding anatomical landmarks to obtain initial registration parameters. The initial point-to-point registration is then generalized to an iterative surface-to-surface registration method. Our "goodness-of-fit" measure is evaluated at each step in the iterative scheme until the registration performance is sufficient. We applied our method to register the 3D lung surfaces of 11 pairs of chest CT scans and report promising registration performance.
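A common way to realize such an iterative surface-to-surface scheme is an ICP (iterative closest point) loop: match each source point to its nearest target point, re-fit a rigid transform, and repeat until a goodness-of-fit test passes. The sketch below is a generic point-to-point ICP under those assumptions, not the authors' implementation, and the point sets are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    if np.linalg.det(U @ Vt) < 0:    # avoid a reflection
        Vt[-1] *= -1
    R = (U @ Vt).T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Alternate nearest-neighbour matching with rigid re-fitting.
    A real system would stop once a goodness-of-fit measure is satisfied."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)     # closest target point per source point
        R, t = rigid_fit(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Synthetic "lung surface" point sets: dst, and src = dst rigidly displaced
dst = np.random.default_rng(1).normal(size=(500, 3))
th = 0.2
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
src = dst @ Rz.T + np.array([2.0, -1.0, 0.5])
aligned = icp(src, dst)
print("mean residual:", np.linalg.norm(aligned - dst, axis=1).mean())
```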
Abstract:
INTRODUCTION: The characterization of urinary calculi using noninvasive methods has the potential to affect clinical management. CT remains the gold standard for the diagnosis of urinary calculi, but has not reliably differentiated varying stone compositions. Dual-energy CT (DECT) has emerged as a technology to improve CT characterization of anatomic structures. This study aims to assess the ability of DECT to accurately discriminate between different types of urinary calculi in an in vitro model using novel postimage acquisition data processing techniques. METHODS: Fifty urinary calculi were assessed, of which 44 were composed of ≥60% of one component. DECT was performed utilizing 64-slice multidetector CT. The attenuation profiles of the lower-energy (DECT-Low) and higher-energy (DECT-High) datasets were used to investigate whether differences could be seen between different stone compositions. RESULTS: Postimage acquisition processing allowed for identification of the main chemical compositions of urinary calculi: brushite, calcium oxalate-calcium phosphate, struvite, cystine, and uric acid. Statistical analysis demonstrated that this processing identified all stone compositions without obvious graphical overlap. CONCLUSION: Dual-energy multidetector CT with postprocessing techniques allows for accurate discrimination among the main subtypes of urinary calculi in an in vitro model. The ability to better detect stone composition may have implications in determining the optimum clinical treatment modality for urinary calculi from noninvasive, preprocedure radiological assessment.
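One widespread post-acquisition approach, though not necessarily the one used in this study, is to compare each stone's attenuation at the two tube energies, e.g., as a low/high ratio, which rises with effective atomic number. The sketch below classifies a stone from two hypothetical Hounsfield readings; the cut-off values are invented for illustration only.

```python
def classify_stone(hu_low, hu_high):
    """Toy classifier: the low/high-energy attenuation ratio increases with
    effective atomic number. Cut-offs are invented for illustration."""
    ratio = hu_low / hu_high
    if ratio < 1.1:
        return "uric acid (weak energy dependence)"
    if ratio < 1.3:
        return "cystine or struvite (intermediate)"
    return "calcium-bearing (oxalate/phosphate/brushite)"

print(classify_stone(hu_low=620.0, hu_high=410.0))  # hypothetical readings
```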
Abstract:
AIMS: To investigate the potential dosimetric and clinical benefits predicted by using four-dimensional computed tomography (4DCT) compared with 3DCT in the planning of radical radiotherapy for non-small cell lung cancer.
MATERIALS AND METHODS:
Twenty patients were planned using free-breathing 4DCT and then retrospectively delineated on three-dimensional helical scan sets (3DCT). Beam arrangement and total dose (55 Gy in 20 fractions) were matched for the 3D and 4D plans. Plans were compared for differences in planning target volume (PTV) geometry and in normal tissue complication probability (NTCP) for organs at risk, using dose-volume histograms. Tumour control probability and NTCP were modelled using the Lyman-Kutcher-Burman (LKB) model (a sketch of this model follows the abstract). This was compared with a predictive clinical algorithm (Maastro), which is based on patient characteristics, including age, performance status, smoking history, lung function, tumour staging and concomitant chemotherapy, to predict survival and toxicity outcomes. Potential therapeutic gains were investigated by applying isotoxic dose escalation to both plans using constraints for mean lung dose (18 Gy), oesophageal maximum (70 Gy) and spinal cord maximum (48 Gy).
RESULTS:
4DCT-based plans had lower PTV volumes, a lower dose to organs at risk, and lower predicted NTCP rates on LKB modelling (P < 0.006). The clinical algorithm showed no difference in predicted 2-year survival and dyspnoea rates between the groups, but did predict lower oesophageal toxicity with the 4DCT plans (P = 0.001). There was no correlation between LKB modelling and the clinical algorithm for lung toxicity or survival. Dose escalation was possible in 15/20 cases, with a mean increase in dose by a factor of 1.19 (10.45 Gy) using 4DCT compared with 3DCT plans.
CONCLUSIONS:
4DCT can theoretically improve the therapeutic ratio and dose escalation based on dosimetric parameters and mathematical modelling. However, when individual patient characteristics are incorporated, this gain may be less evident in terms of survival and dyspnoea rates. 4DCT allows potential for isotoxic dose escalation, which may lead to improved local control and better overall survival.
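For reference, the LKB model mentioned in the methods reduces a dose-volume histogram to a generalized equivalent uniform dose and maps it through a probit function with three parameters (n, m, TD50). The sketch below is a generic implementation under those standard definitions; the DVH and the parameter values (of the order of published whole-lung values) are illustrative, not this study's.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(doses, volumes, td50=24.5, m=0.18, n=0.87):
    """LKB NTCP from a differential DVH: doses in Gy, volumes as
    fractions of the organ summing to 1. Parameters are illustrative."""
    d = np.asarray(doses, float)
    v = np.asarray(volumes, float)
    geud = np.sum(v * d ** (1.0 / n)) ** n      # Kutcher-Burman DVH reduction
    t = (geud - td50) / (m * td50)              # Lyman probit argument
    return norm.cdf(t)

# Hypothetical differential lung DVH
doses = [2, 6, 10, 14, 18, 22]
volumes = [0.35, 0.25, 0.15, 0.10, 0.10, 0.05]
print(f"NTCP = {lkb_ntcp(doses, volumes):.1%}")
```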
Abstract:
This article reports a relaxation study in an oriented system containing spin-3/2 nuclei using quantum state tomography (QST). The use of QST allowed evaluating the time evolution of all density matrix elements starting from several initial states. Using an appropriate treatment based on Redfield theory, the relaxation rate of each density matrix element was measured and the reduced spectral densities that describe the system's relaxation were determined. All the experimental data could be well described assuming pure quadrupolar relaxation and reduced spectral densities corresponding to a superposition of slow and fast motions. The data were also analyzed in the context of Quantum Information Processing, where the coherence loss of each qubit of the system was determined using the partial trace operation. (C) 2008 Elsevier Inc. All rights reserved.
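A spin-3/2 nucleus has four levels and can encode two qubits, so the per-qubit analysis mentioned above amounts to a partial trace of a 4x4 density matrix. The sketch below shows that operation on a hypothetical two-qubit state; it is generic linear algebra, not the article's analysis code.

```python
import numpy as np

def partial_trace(rho, keep):
    """Reduce a 4x4 two-qubit density matrix to one qubit.
    keep=0 retains the first qubit; keep=1 retains the second."""
    r = rho.reshape(2, 2, 2, 2)                 # indices (a, b, a', b')
    if keep == 0:
        return np.trace(r, axis1=1, axis2=3)    # sum over the second qubit
    return np.trace(r, axis1=0, axis2=2)        # sum over the first qubit

# Hypothetical two-qubit state: the Bell state (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())
print(partial_trace(rho, keep=0))               # maximally mixed: I/2
```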
Abstract:
Computed tomographic scanning is a precise, noninvasive surveying technique that enables professionals to improve the precision of implant placement by building a prototype that allows the fabrication of surgical guides. The authors present a clinical case of anterior tooth rehabilitation with a frozen homogenous bone graft and an immediately loaded titanium implant using computer-guided surgery. Multislice computed tomography was performed, and a prototype was built. All procedures were first carried out on the prototype before being performed in the patient. This technique allows better surgical planning, makes the procedures more accurate, and reduces surgery time.