944 results for Tomography


Relevance:

20.00%

Publisher:

Abstract:

AIM: To test the hypothesis that computed tomography (CT)-based signs might precede symptomatic malignant spinal cord compression (MSCC) in men with metastatic castration-resistant prostate cancer (mCRPC). MATERIALS AND METHODS: A database was used to identify suitable mCRPC patients. Staging CT images were retrospectively reviewed for signs preceding MSCC. Signs of malignant paravertebral fat infiltration and epidural soft-tissue disease were defined and assessed on serial CT in 34 patients with MSCC and 58 control patients. The presence and evolution of these features were summarized using descriptive statistics. RESULTS: In MSCC patients, CT performed a median of 28 days before the diagnostic magnetic resonance imaging (MRI) demonstrated significant epidural soft tissue in 28 (80%) patients. The median time to MSCC from the combination of overt malignant paravertebral and epidural disease was 2.7 (range 0-14.6) months. Conversely, these signs were uncommon in the control cohort. CONCLUSIONS: Significant malignant paravertebral and/or epidural disease on CT precedes MSCC in up to 80% of mCRPC patients and should prompt closer patient follow-up and consideration of early MRI evaluation. These CT-based features require further prospective validation.

Relevance:

20.00%

Publisher:

Abstract:

Summary: The Australian Microscopy & Microanalysis Research Facility (AMMRF) operates a national atom probe laboratory at The University of Sydney. This paper provides a brief review and update of the technique of atom probe tomography (APT), together with a summary of recent research applications at Sydney in the science and technology of materials. We describe recent instrumentation advances, such as the use of laser pulsing to effect time-controlled field evaporation, the introduction of wide field-of-view detectors, where the solid angle for observation is increased by up to a factor of ∼20, as well as innovations in specimen preparation. We conclude that these developments have opened APT to a range of new materials that were previously either difficult or impossible to study using this technique because of their poor conductivity or brittleness.

Relevance:

20.00%

Publisher:

Abstract:

Electrical impedance tomography is applied to the problem of detecting, locating, and tracking fractures in ballistics gelatin. The hardware developed is intended to be physically robust and is built from off-the-shelf components. Fractures were created in two separate ways: by shooting a .22 caliber bullet into the gelatin and by injecting saline solution into the gelatin. The .22 caliber bullet created an air gap, which was seen as an increase in resistivity. The saline solution created a fluid-filled gap, which was seen as a decrease in resistivity. A double linear array was used to take data for each of the fracture mechanisms, and a two-dimensional cross section was inverted from the data. The results were validated by visually inspecting the samples during the fracture event. It was found that although reconstruction errors were present, it was possible to reconstruct a representation of the resistive cross section. Simulations were performed to better understand the reconstructed cross sections and to demonstrate the capability of a ring array, which was not experimentally tested.
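A linearized, one-step difference reconstruction of the kind used in such experiments can be sketched as a regularized least-squares solve. This is an illustrative sketch, not the authors' pipeline: the sensitivity (Jacobian) matrix below is synthetic random data, whereas in practice it would come from a finite-element forward model of the electrode array.

```python
import numpy as np

def reconstruct_conductivity_change(J, dv, lam=1e-6):
    """Linearized difference EIT: solve (J^T J + lam*I) dsigma = J^T dv."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dv)

rng = np.random.default_rng(0)
J = rng.normal(size=(64, 16))   # synthetic sensitivity matrix: 64 measurements, 16 pixels
true_dsigma = np.zeros(16)
true_dsigma[5] = -0.3           # air gap: conductivity down (resistivity up)
true_dsigma[11] = 0.2           # saline-filled gap: conductivity up (resistivity down)
dv = J @ true_dsigma            # simulated boundary-voltage changes
recon = reconstruct_conductivity_change(J, dv)
print(np.round(recon, 3))
```

With noise-free data the regularized solve recovers the two anomalies almost exactly; real measurements require heavier regularization.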

Relevance:

20.00%

Publisher:

Abstract:

The study of volcano deformation data can provide information on magma processes and help assess the potential for future eruptions. In employing inverse deformation modeling on these data, we attempt to characterize the geometry, location and volume/pressure change of a deformation source. Techniques currently used to model sheet intrusions (e.g., dikes and sills) often require significant a priori assumptions about source geometry and can require testing a large number of parameters. Moreover, surface deformations are a non-linear function of the source geometry and location, requiring Monte Carlo inversion techniques that lead to long computation times. Recently, ‘displacement tomography’ models have been used to characterize magma reservoirs by inverting deformation data for volume changes using a grid of point sources in the subsurface. The computations involved in these models are less intensive because no assumptions are made about the source geometry and location, and the relationship between the point sources and the surface deformation is linear. In this project, seeking a less computationally intensive technique for fracture sources, we tested whether this displacement tomography method for reservoirs could be used for sheet intrusions. We began by simulating the opening of three synthetic dikes of known geometry and location using an established deformation model for fracture sources. We then sought to reproduce the displacements and volume changes undergone by the fractures using the sources employed in the tomography methodology. Results of this validation indicate that the volumetric point sources are not appropriate for locating fracture sources; however, they may provide useful qualitative information on the volume changes occurring in the surrounding rock, and thereby indirectly indicate the source location.
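The computational advantage cited above, that surface displacement is linear in the point-source volume changes, can be illustrated with a small least-squares inversion. The Mogi-type point-source kernel and all geometry and values below are illustrative assumptions, not the project's actual model:

```python
import numpy as np

def mogi_uz(x_sta, src_x, src_depth, nu=0.25):
    """Vertical surface displacement per unit volume change for a Mogi-type point source."""
    r2 = (x_sta - src_x) ** 2 + src_depth ** 2
    return (1.0 - nu) / np.pi * src_depth / r2 ** 1.5

stations = np.linspace(-5000.0, 5000.0, 40)   # surface station x-coordinates (m)
src_x = np.array([-1000.0, 0.0, 1500.0])      # grid of candidate point sources (m)
src_d = np.array([2000.0, 3000.0, 2500.0])    # source depths (m)

# Green's matrix: displacement at each station per unit volume change of each source.
# Because the forward model is linear in the volume changes, the inversion is a
# single least-squares solve rather than a Monte Carlo search.
G = np.column_stack([mogi_uz(stations, x, d) for x, d in zip(src_x, src_d)])

true_dV = np.array([0.0, 2.0e6, 0.0])         # only the middle source inflates (m^3)
uz_obs = G @ true_dV                          # synthetic surface displacements

dV_est, *_ = np.linalg.lstsq(G, uz_obs, rcond=None)
print(np.round(dV_est / 1e6, 3))              # recovered volume changes (x 1e6 m^3)
```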

Relevance:

20.00%

Publisher:

Abstract:

This thesis focuses on advanced reconstruction methods and Dual Energy (DE) Computed Tomography (CT) applications for proton therapy, aiming to improve patient positioning and to investigate approaches for dealing with metal artifacts. To tackle the first goal, an algorithm for post-processing input DE images was developed. The outputs are tumor- and bone-canceled images, which help in recognizing structures in the patient's body. We showed that positioning error is substantially reduced using contrast-enhanced images, suggesting the potential of this application. If positioning plays a key role in delivery, even more important is the quality of the planning CT. Here, modern CT scanners offer the possibility of tackling challenging cases, such as the treatment of tumors close to metal implants. Possible approaches for dealing with the artifacts introduced by such rods were investigated experimentally at the Paul Scherrer Institut (Switzerland) by simulating several treatment plans on an anthropomorphic phantom. In particular, we examined cases in which no correction, manual correction, or the Iterative Metal Artifact Reduction (iMAR) algorithm was used to correct the artifacts, using both Filtered Back Projection and Sinogram Affirmed Iterative Reconstruction as image reconstruction techniques. Moreover, direct stopping-power calculation from DE images with iMAR was also considered as an alternative approach. Delivered dose measured with Gafchromic EBT3 films was compared with the dose calculated in the Treatment Planning System. Residual positioning errors, daily machine-dependent uncertainties, and film quenching were taken into account in the analyses. Although plans with multiple fields appeared more robust than single-field plans, results generally showed better agreement between prescribed and delivered dose when using iMAR, especially when combined with the DE approach. We thus demonstrated the potential of these advanced algorithms to improve dosimetry for plans in the presence of metal implants.
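The film-versus-TPS dose comparison can be reduced, in simplified form, to a pass-rate computation. The sketch below uses a plain global dose-difference criterion with illustrative numbers; the actual analysis would use measured film doses and typically a full gamma-index test with a distance-to-agreement criterion:

```python
import numpy as np

def dose_difference_pass_rate(measured, planned, tol_pct=3.0):
    """Fraction of points where the measured dose is within tol_pct percent
    (relative to the maximum planned dose) of the planned dose."""
    measured = np.asarray(measured, float)
    planned = np.asarray(planned, float)
    rel_diff = np.abs(measured - planned) / planned.max() * 100.0
    return float((rel_diff <= tol_pct).mean())

planned  = np.array([1.00, 1.02, 0.98, 1.01, 0.50, 0.10])   # Gy, illustrative TPS doses
measured = np.array([1.01, 1.00, 0.97, 1.05, 0.52, 0.30])   # Gy, illustrative film doses
print(dose_difference_pass_rate(measured, planned))
```

A higher pass rate with iMAR-corrected plans would correspond to the better agreement the thesis reports.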

Relevance:

20.00%

Publisher:

Abstract:

Tumor functional volume (FV) and its mean activity concentration (mAC) are quantities derived from positron emission tomography (PET). They are used to estimate radiation dose for therapy, to evaluate disease progression, and as prognostic indicators for predicting outcome. PET images have low resolution and high noise and are affected by the partial volume effect (PVE). Manually segmenting each tumor is cumbersome and hard to reproduce. To solve this problem I developed the iterative deconvolution thresholding segmentation (IDTS) algorithm, which segments the tumor, measures the FV, corrects for the PVE, and calculates the mAC. The algorithm corrects for the PVE without needing to estimate the camera's point spread function (PSF) and does not require optimization for a specific camera. The algorithm was tested in physical phantom studies, where hollow spheres (0.5-16 ml) were used to represent tumors with a homogeneous activity distribution. It was also tested on irregularly shaped tumors with a heterogeneous activity profile, acquired using physical and simulated phantoms. The physical phantom studies were performed with different signal-to-background ratios (SBR) and different acquisition times (1-5 min). The algorithm was applied to ten clinical datasets, and the results were compared with manual segmentation and with fixed-percentage thresholding methods called T50 and T60, in which 50% and 60% of the maximum intensity, respectively, is used as the threshold. The average errors in FV and mAC calculation were 30% and -35% for the 0.5 ml tumor, and ~5% for the 16 ml tumor. The overall FV error was ~10% for heterogeneous tumors in the physical and simulated phantom data. The FV and mAC errors for clinical images compared with manual segmentation were around -17% and 15%, respectively.
In summary, my algorithm has the potential to be applied to data acquired from different cameras, as it does not depend on knowing the camera's PSF. The algorithm can also improve dose estimation and treatment planning.
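The fixed-percentage thresholding baselines (T50/T60) against which the algorithm was compared are straightforward to state in code. This is a sketch of that baseline with illustrative voxel size and intensities, not the IDTS algorithm itself:

```python
import numpy as np

def fixed_threshold_segment(img, fraction=0.5, voxel_volume_ml=0.01):
    """Fixed-percentage thresholding (e.g. T50 with fraction=0.5):
    keep voxels at or above fraction * maximum intensity, then compute
    functional volume (FV) and mean activity concentration (mAC)."""
    mask = img >= fraction * img.max()
    fv_ml = mask.sum() * voxel_volume_ml   # functional volume in ml
    mac = img[mask].mean()                 # mean activity concentration
    return mask, fv_ml, mac

# Synthetic "tumor": a bright 5x5x5 cube on a uniform background
img = np.full((20, 20, 20), 100.0)         # background activity
img[5:10, 5:10, 5:10] = 1000.0             # tumor voxels
mask, fv, mac = fixed_threshold_segment(img, fraction=0.5)
print(round(fv, 2), mac)   # 125 voxels * 0.01 ml = 1.25 ml, mAC = 1000.0
```

On noisy, partial-volume-affected data this fixed threshold systematically misestimates FV for small lesions, which is the motivation for the deconvolution-based correction described above.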

Relevance:

20.00%

Publisher:

Abstract:

Purpose: To determine whether pupil dilation affects biometric measurements and intraocular lens (IOL) power calculation made using the new swept-source optical coherence tomography-based optical biometer (IOLMaster 700©; Carl Zeiss Meditec, Jena, Germany). Procedures: Eighty-one eyes of 81 patients evaluated for cataract surgery were prospectively examined using the IOLMaster 700© before and after pupil dilation with tropicamide 1%. The measurements made were: axial length (AL), central corneal thickness (CCT), aqueous chamber depth (ACD), lens thickness (LT), mean keratometry (MK), white-to-white distance (WTW), and pupil diameter (PD). The Holladay 2 and SRK/T formulas were used to calculate IOL power. Agreement between measurement modes (with and without dilation) was assessed through intraclass correlation coefficients (ICC) and Bland-Altman plots. Results: Mean patient age was 75.17 ± 7.54 years (range: 57–92). Of the variables determined, CCT, ACD, LT and WTW varied significantly with pupil dilation. Excellent intraobserver correlation was observed between measurements made before and after pupil dilation. Mean IOL power calculated using the Holladay 2 and SRK/T formulas was unmodified by pupil dilation. Conclusions: Pupil dilation produces statistically yet not clinically significant differences in some IOLMaster 700© measurements. However, it does not affect mean IOL power calculation.
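Bland-Altman agreement, as used in this study, amounts to the bias and 95% limits of agreement of the paired differences. The sample values below are illustrative, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    measurement modes applied to the same eyes."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Axial length (mm) before vs. after dilation: illustrative paired values
al_undilated = [23.10, 24.52, 22.98, 25.01, 23.75, 24.10]
al_dilated   = [23.12, 24.50, 22.99, 25.03, 23.74, 24.12]
bias, (lo, hi) = bland_altman(al_undilated, al_dilated)
print(round(bias, 4), round(lo, 4), round(hi, 4))
```

A bias near zero with narrow limits, as here, is what "statistically but not clinically significant" differences look like on a Bland-Altman plot.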

Relevance:

20.00%

Publisher:

Abstract:

Purpose: To compare measurements taken using a swept-source optical coherence tomography-based optical biometer (IOLMaster 700) and an optical low-coherence reflectometry biometer (Lenstar 900), and to determine the clinical impact of differences in their measurements on intraocular lens (IOL) power predictions. Methods: Eighty eyes of 80 patients scheduled to undergo cataract surgery were examined with both biometers. The measurements made using each device were axial length (AL), central corneal thickness (CCT), aqueous depth (AQD), lens thickness (LT), mean keratometry (MK), white-to-white distance (WTW), and pupil diameter (PD). The Holladay 2 and SRK/T formulas were used to calculate IOL power. Differences in measurements between the two biometers were determined using the paired t-test. Agreement was assessed through intraclass correlation coefficients (ICC) and Bland–Altman plots. Results: Mean patient age was 76.3±6.8 years (range 59–89). Using the Lenstar, AL and PD could not be measured in 12.5% and 5.25% of eyes, respectively, while the IOLMaster 700 took all measurements in all eyes. The variables CCT, AQD, LT, and MK varied significantly between the two biometers. According to the ICCs, correlation between measurements made with the two devices was excellent except for WTW and PD. Using the SRK/T formula, IOL power predictions based on the data from the two devices were statistically different, but the differences were not clinically significant. Conclusions: No clinically relevant differences were detected between the biometers in terms of their measurements and IOL power predictions. Using the IOLMaster 700, it was easier to obtain biometric measurements in eyes with less transparent ocular media or longer AL.

Relevance:

20.00%

Publisher:

Abstract:

Purpose: The purpose of this study was to develop and validate a multivariate predictive model to detect glaucoma using a combination of retinal nerve fiber layer (RNFL), retinal ganglion cell-inner plexiform layer (GCIPL), and optic disc parameters measured using spectral-domain optical coherence tomography (OCT). Methods: Five hundred eyes from 500 participants and 187 eyes of another 187 participants were included in the study and validation groups, respectively. Patients with glaucoma were classified into five groups based on visual field damage. The sensitivity and specificity of all glaucoma OCT parameters were analyzed. Receiver operating characteristic (ROC) curves and areas under the ROC curve (AUC) were compared. Three predictive multivariate models (quantitative, qualitative, and combined) using a combination of the best OCT parameters were constructed. A diagnostic calculator was created using the combined multivariate model. Results: The parameters with the best AUCs were: inferior RNFL, average RNFL, vertical cup/disc ratio, minimum GCIPL, and inferior-temporal GCIPL. Comparisons among the parameters did not show the GCIPL parameters to be better than those of the RNFL in early and advanced glaucoma. The highest AUC was that of the combined predictive model (0.937; 95% confidence interval, 0.911–0.957), which was significantly (P = 0.0001) higher than those of the isolated parameters in early and advanced glaucoma. The validation group displayed results similar to those of the study group. Conclusions: The best GCIPL, RNFL, and optic disc parameters showed a similar ability to detect glaucoma. The combined predictive formula improved glaucoma detection compared with the best isolated parameters evaluated. The diagnostic calculator achieved good classification of participants in both the study and validation groups.
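The AUC comparisons above rest on the standard interpretation of the area under the ROC curve: the probability that a randomly chosen diseased case scores higher than a randomly chosen healthy one. A minimal rank-based computation, with illustrative model scores:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a positive (glaucomatous) case scores
    higher than a negative (healthy) one; ties count as 1/2
    (equivalent to the Mann-Whitney U statistic divided by n_pos * n_neg)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative outputs of a combined multivariate model (not the study's data)
glaucoma = [0.9, 0.8, 0.75, 0.6, 0.55]
healthy  = [0.4, 0.5, 0.55, 0.3, 0.2]
print(roc_auc(glaucoma, healthy))   # 0.98
```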

Relevance:

20.00%

Publisher:

Abstract:

The aim of this study was to assess the accuracy of lung ultrasound (LUS) for diagnosing interstitial lung disease (ILD) in Sjögren's syndrome (SjS), in patients who have alterations in pulmonary function tests (PFT) or respiratory symptoms. LUS was correlated with high-resolution chest computed tomography (HRCT), considered the imaging gold-standard technique for diagnosing ILD. This is a pilot, multicenter, cross-sectional, consecutive-case study. The inclusion criteria were: age ≥18 years, SjS according to the AECG 2002 criteria, and respiratory symptoms (dyspnea, cough) or alterations in PFT. LUS was performed following the International Consensus Conference on Lung Ultrasound protocol for interstitial syndrome (B pattern). Of the 50 patients in follow-up, 13 (26%) met the inclusion criteria. All were women, with a mean age of 63.62 years (range 39–88). 78.6% of the cases had primary SjS (SLE, RA, n = 2). The intra-rater reliability was k = 1 according to Gwet's AC1 and GI index (probability of chance agreement, e(K), by Cohen, of 0.52). LUS had a sensitivity of 1 (95% CI 0.398–1.0), a specificity of 0.89 (95% CI 0.518–0.997), and a positive likelihood ratio of 9.00 (95% CI 7.1–11.3) for detecting ILD. The Pearson correlation was r = 0.84 (p < 0.001). To check the accuracy of LUS in diagnosing ILD, a bilateral yes/no criterion for interstitial pattern was chosen; the AUC reached significance, 0.94 (0.07) (95% CI 0.81–1.0, p = 0.014). LUS correlates excellently with HRCT in SjS patients affected by ILD and might be a useful technique in daily clinical practice for assessing pulmonary disease in the sicca syndrome. © 2016 SIMI
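The reported sensitivity, specificity, and positive likelihood ratio follow directly from a 2x2 table against the HRCT reference. The counts below are illustrative values chosen to reproduce the quoted figures for a 13-patient cohort, not the study's actual table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive likelihood ratio (LR+)
    from a 2x2 diagnostic table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1.0 - spec) if spec < 1.0 else float("inf")
    return sens, spec, lr_pos

# 13 patients total: 4 true positives, 1 false positive, 0 false negatives,
# 8 true negatives (illustrative assumption)
sens, spec, lr = diagnostic_metrics(tp=4, fp=1, fn=0, tn=8)
print(round(sens, 2), round(spec, 2), round(lr, 2))   # 1.0 0.89 9.0
```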

Relevance:

10.00%

Publisher:

Abstract:

Non-Alcoholic Fatty Liver Disease (NAFLD) is a condition that is frequently seen but seldom investigated. Until recently, NAFLD was considered benign, self-limiting and unworthy of further investigation, an opinion based on retrospective studies with relatively small numbers and scant follow-up of histology data (1). The prevalence in adults in the USA is 30%, and NAFLD is recognized as a common and increasing form of liver disease in the paediatric population (1). Australian data from New South Wales suggest a prevalence of NAFLD in "healthy" 15-year-olds of 10% (2). Non-alcoholic fatty liver disease is a condition in which fat progressively invades the liver parenchyma. The degree of infiltration ranges from simple steatosis (fat only) to steatohepatitis (fat and inflammation), steatohepatitis plus fibrosis (fat, inflammation and fibrosis), and cirrhosis (replacement of liver texture by scarred, fibrotic and non-functioning tissue). Non-alcoholic fatty liver is diagnosed by exclusion rather than inclusion. None of the currently available diagnostic techniques - liver biopsy, liver function tests (LFT) or imaging (ultrasound, computerised tomography (CT) or magnetic resonance imaging (MRI)) - is specific for non-alcoholic fatty liver. An association exists between NAFLD, Non-Alcoholic Steatohepatitis (NASH) and irreversible liver damage, cirrhosis and hepatoma. However, a more pervasive aspect of NAFLD is its association with the Metabolic Syndrome. This syndrome is characterised by increased insulin resistance (IR), and NAFLD is thought to be its hepatic representation. Those with NAFLD have an increased risk of death (3), and it is an independent predictor of atherosclerosis and cardiovascular disease (1). Liver biopsy is considered the gold standard for the diagnosis (4), and for the grading and staging, of non-alcoholic fatty liver disease.
Fatty liver is diagnosed when there is macrovesicular steatosis with displacement of the nucleus to the edge of the cell and at least 5% of the hepatocytes are seen to contain fat (4). Steatosis represents fat accumulation in liver tissue without inflammation. However, it is only called non-alcoholic fatty liver disease when alcohol intake of >20-30 g per day (5) has been excluded. Non-alcoholic and alcoholic fatty liver are identical on histology (4). LFTs are indicative, not diagnostic: they indicate that a condition may be present but cannot identify which condition it is. When a patient presents with raised fasting blood glucose, low HDL (high-density lipoprotein) and elevated fasting triacylglycerols, they are likely to have NAFLD (6). Of the imaging techniques, MRI is the least variable and the most reproducible. With CT scanning, liver fat content can be semi-quantitatively estimated: with increasing hepatic steatosis, liver attenuation values decrease by 1.6 Hounsfield units for every milligram of triglyceride deposited per gram of liver tissue (7). Ultrasound permits early detection of fatty liver, often in the preclinical stages before symptoms are present and serum alterations occur. Earlier, accurate reporting of this condition will allow appropriate intervention, resulting in better patient health outcomes.
References
1. Chalasami N. Does fat alone cause significant liver disease: It remains unclear whether simple steatosis is truly benign. American Gastroenterological Association Perspectives, February/March 2008. www.gastro.org/wmspage.cfm?parm1=5097 (viewed 20 October 2008).
2. Booth M, George J, Denney-Wilson E. The population prevalence of adverse concentrations with adiposity of liver tests among Australian adolescents. Journal of Paediatrics and Child Health. 2008 November.
3. Catalano D, Trovato GM, Martines GF, Randazzo M, Tonzuso A. Bright liver, body composition and insulin resistance changes with nutritional intervention: a follow-up study. Liver Int. 2008 February;1280-9.
4. Choudhury J, Sanysl A. Clinical aspects of fatty liver disease. Semin Liver Dis. 2004;24(4):349-62.
5. Dionysus Study Group. Drinking factors as cofactors of risk for alcohol induced liver change. Gut. 1997;41:845-50.
6. Preiss D, Sattar N. Non-alcoholic fatty liver disease: an overview of prevalence, diagnosis, pathogenesis and treatment considerations. Clin Sci. 2008;115:141-50.
7. American Gastroenterological Association. Technical review on nonalcoholic fatty liver disease. Gastroenterology. 2002;123:1705-25.
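The CT relation quoted above (reference 7), a decrease of about 1.6 Hounsfield units per milligram of triglyceride per gram of liver tissue, supports a simple back-of-envelope estimate. The baseline attenuation below is an assumed illustrative value, not a figure from the text:

```python
def estimated_liver_attenuation(baseline_hu, tg_mg_per_g):
    """Estimate liver attenuation from triglyceride content, using the
    reported ~1.6 HU decrease per mg triglyceride per gram of liver tissue."""
    return baseline_hu - 1.6 * tg_mg_per_g

def estimated_tg_from_hu(baseline_hu, measured_hu):
    """Invert the relation to estimate triglyceride content (mg per g)."""
    return (baseline_hu - measured_hu) / 1.6

# A normal liver measures roughly 50-65 HU; 60 HU baseline is an assumption.
print(estimated_liver_attenuation(60.0, 25.0))   # ~20 HU
print(estimated_tg_from_hu(60.0, 20.0))          # ~25 mg/g
```

This is what makes CT a semi-quantitative estimate: the attenuation drop scales linearly with fat content, but the baseline varies between patients.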

Relevance:

10.00%

Publisher:

Abstract:

Introduction Many bilinguals will have had the experience of unintentionally reading something in a language other than the intended one (e.g. MUG to mean mosquito in Dutch rather than a receptacle for a hot drink, as one of the possible intended English meanings), of finding themselves blocked on a word for which many alternatives suggest themselves (but, somewhat annoyingly, not in the right language), of their accent changing when stressed or tired and, occasionally, of starting to speak in a language that is not understood by those around them. These instances where lexical access appears compromised and control over language behavior is reduced hint at the intricate structure of the bilingual lexical architecture and the complexity of the processes by which knowledge is accessed and retrieved. While bilinguals might tend to blame word finding and other language problems on their bilinguality, these difficulties per se are not unique to the bilingual population. However, what is unique, and yet far more common than is appreciated by monolinguals, is the cognitive architecture that subserves bilingual language processing. With bilingualism (and multilingualism) the rule rather than the exception (Grosjean, 1982), this architecture may well be the default structure of the language processing system. As such, it is critical that we understand more fully not only how the processing of more than one language is subserved by the brain, but also how this understanding furthers our knowledge of the cognitive architecture that encapsulates the bilingual mental lexicon. The neurolinguistic approach to bilingualism focuses on determining the manner in which the two (or more) languages are stored in the brain and how they are differentially (or similarly) processed. 
The underlying assumption is that the acquisition of more than one language requires at the very least a change to or expansion of the existing lexicon, if not the formation of language-specific components, and this is likely to manifest in some way at the physiological level. There are many sources of information, ranging from data on bilingual aphasic patients (Paradis, 1977, 1985, 1997) to lateralization (Vaid, 1983; see Hull & Vaid, 2006, for a review), recordings of event-related potentials (ERPs) (e.g. Ardal et al., 1990; Phillips et al., 2006), and positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) studies of neurologically intact bilinguals (see Indefrey, 2006; Vaid & Hull, 2002, for reviews). Following the consideration of methodological issues and interpretative limitations that characterize these approaches, the chapter focuses on how the application of these approaches has furthered our understanding of (1) selectivity of bilingual lexical access, (2) distinctions between word types in the bilingual lexicon and (3) control processes that enable language selection.