999 results for Parallel imaging
Abstract:
This thesis aims to contribute to the study and analysis of the factors related to digital radiographic image acquisition techniques, diagnostic quality and radiation dose management in digital radiology systems. The methodology is organized in two components. The observational component is based on a retrospective, cross-sectional study design. Data collected from CR and DR systems allowed the evaluation of the technical exposure parameters used in digital radiography, of the absorbed dose and of the detector exposure index. Within this methodological framework (retrospective and cross-sectional), it was also possible to carry out studies of diagnostic quality in digital systems: observer studies based on images archived in the PACS. The experimental component of the thesis consisted of phantom experiments to evaluate the relationship between dose and image quality. These experiments characterized the physical properties of digital radiology systems by manipulating the exposure parameters and assessing their influence on dose and image quality. Using a contrast-detail phantom, anthropomorphic phantoms and an animal-bone phantom, it was possible to obtain objective measures of diagnostic quality and of object detectability. Several conclusions can be drawn from this research. Quantitative measurements of detector performance are the basis of the optimization process, allowing the physical parameters of digital radiology systems to be measured and determined. The exposure parameters used in clinical practice show that current practice does not comply with the European reference guidelines.
There is a need to evaluate, improve and implement a reference standard for the optimization process, through new good-practice guidelines adjusted to digital systems. Exposure parameters influence patient dose, but the perceived quality of the digital image does not appear to be affected by variations in exposure. The studies carried out with both phantom and patient images show that overexposure is a potential risk in digital radiology. The assessment of diagnostic image quality showed no substantial degradation of image quality when the dose was reduced. The study and implementation of new diagnostic reference levels adjusted to digital radiology systems is proposed. As a contribution of the thesis, a model (STDI) for the optimization of digital radiology systems is proposed.
Abstract:
This article reports on a-Si:H-based low-leakage blue-enhanced photodiodes for dual-screen x-ray imaging detectors. Doped nanocrystalline silicon was incorporated in both the n- and p-type regions to reduce absorption losses for light incoming from the top and bottom screens. The photodiode exhibits a dark current density of 900 pA/cm² and an external quantum efficiency of up to 90% at a reverse bias of 5 V. In the case of illumination through the tailored p-layer, the quantum efficiency of 60% at a 400 nm wavelength is almost double that of the conventional a-Si:H n-i-p photodiode.
Abstract:
In this paper we present results on the optimization of device architectures for colour and imaging applications, using a device with a TCO/pinpi'n/TCO configuration. The effect of the applied voltage on the colour selectivity is discussed. The spectral response curves show rather good separation between the red, green and blue primary colours. By combining the information obtained under positive and negative applied bias, a colour image is acquired without colour filters or pixel architecture. A low-level image-processing algorithm is used for the colour image reconstruction.
Abstract:
In recent years the Data Mining field has seen considerable growth and consolidation. Several efforts seek to establish standards in the area, among them SEMMA and CRISP-DM. Both emerged as industry standards and define a set of sequential steps intended to guide the implementation of data mining applications. The question arises of whether there are substantial differences between them and the traditional KDD process. This paper draws a parallel between SEMMA, CRISP-DM and the KDD process and examines the similarities between them.
Abstract:
Advances in digital technology led to the development of digital x-ray detectors that are currently in wide use for projection radiography, including Computed Radiography (CR) and Digital Radiography (DR). Digital Imaging Systems for Plain Radiography addresses the current technological methods available to medical imaging professionals to ensure the optimization of the radiological process concerning image quality and reduction of patient exposure. Based on extensive research by the authors and reference to the current literature, the book addresses how exposure parameters influence the diagnostic quality in digital systems, what the current acceptable radiation doses are for useful diagnostic images, and at what level the dose could be reduced to maintain an accurate diagnosis. The book is a valuable resource for both students learning the field and for imaging professionals to apply to their own practice while performing radiological examinations with digital systems.
Abstract:
Claustrophobia causes great discomfort to patients who need to undergo Magnetic Resonance examinations, mainly due to the physical design of most equipment. This study aimed to maximize the success rate of Magnetic Resonance Imaging (MRI) clinical studies in claustrophobic patients through the identification of facilitative strategies.
Abstract:
Master's dissertation in Nuclear Medicine.
Abstract:
Many solid tumors respond poorly to systemic chemotherapy, local radiotherapy or surgical resection. They are responsible for premature morbidity and decreased patient survival. Radiofrequency ablation is an emerging technique that is becoming more widespread throughout the world because it is minimally invasive, image-guided, and offers the possibility of an effective and less costly approach. The procedure can be performed percutaneously, guided by several imaging modalities such as Ultrasound, Computed Tomography and Magnetic Resonance. This article aims to present the state of the art of this technique, focusing on the technical aspects and applications of radiofrequency ablation.
Abstract:
Susceptibility-weighted imaging (SWI) is a relatively new contrast in MR imaging. Previous studies have found an effect of caffeine on the contrast generated by SWI images. The present study investigates the effect of caffeine on the contrast-to-noise ratio (CNR) in SWI.
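To make the metric under study concrete, a common definition of CNR takes the absolute difference of the mean signals of two regions of interest divided by the standard deviation of a background-noise region. The sketch below uses synthetic intensities; the ROI sizes and signal levels are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnr(roi_a, roi_b, background):
    """Contrast-to-noise ratio between two regions of interest."""
    return abs(roi_a.mean() - roi_b.mean()) / background.std()

# Synthetic example: a vein-like low-signal ROI against surrounding tissue.
vein = rng.normal(40.0, 5.0, size=(16, 16))     # hypoattenuating ROI
tissue = rng.normal(100.0, 5.0, size=(16, 16))  # surrounding tissue ROI
noise = rng.normal(0.0, 5.0, size=(32, 32))     # background-noise region

print(round(cnr(vein, tissue, noise), 1))       # roughly (100 - 40) / 5 = 12
```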
Abstract:
We present new Rayleigh-wave dispersion maps of the western Iberian Peninsula for periods between 8 and 30 s, obtained from correlations of seismic ambient noise, following the recent increase in seismic broadband network density in Portugal and Spain. Group velocities have been computed for each station pair using the empirical Green's functions generated by cross-correlating one-day-length seismic ambient-noise records. The resulting high-path density allows us to obtain lateral variations of the group velocities as a function of period in cells of 0.5 degrees x 0.5 degrees with an unprecedented resolution. As a result we were able to address some of the unknowns regarding the lithospheric structure beneath SW Iberia. The dispersion maps allow the imaging of the major structural units, namely the Iberian Massif, and the Lusitanian and Algarve Meso-Cenozoic basins. The Cadiz Gulf/Gibraltar Strait area corresponds to a strong low-velocity anomaly, which can be followed to the largest period inverted, although slightly shifted to the east at longer periods. Within the Iberian Massif, second-order perturbations in the group velocities are consistent with the transitions between tectonic units composing the massif.
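The cross-correlation step behind the empirical Green's functions can be illustrated in miniature: energy common to two simultaneous noise records produces a correlation peak at the inter-station travel time. The sketch below is synthetic and not the authors' processing chain; the record length, noise level and 30-sample delay are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048
lag = 30                                   # simulated inter-station travel time, in samples

source = rng.normal(size=n + lag)          # common ambient-noise wavefield
station_a = source[lag:] + 0.5 * rng.normal(size=n)   # sees each source sample 'lag' samples early
station_b = source[:n] + 0.5 * rng.normal(size=n)     # delayed copy with independent local noise

xcorr = np.correlate(station_b, station_a, mode="full")
best_lag = xcorr.argmax() - (n - 1)        # positive lag: station_b lags station_a
print(best_lag)                            # should recover the simulated 30-sample delay
```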
Abstract:
The formation of amyloid structures is a neuropathological feature that characterizes several neurodegenerative disorders, such as Alzheimer's and Parkinson's disease. Up to now, the definitive diagnosis of these diseases can only be accomplished by immunostaining of post mortem brain tissue with dyes such as Thioflavin T and Congo red. Aiming at early in vivo diagnosis of Alzheimer's disease (AD), several amyloid-avid radioprobes have been developed for β-amyloid imaging by positron emission tomography (PET) and single-photon emission computed tomography (SPECT). The aim of this paper is to present a perspective on the available amyloid imaging agents, especially those that have been selected for clinical trials and are at different stages of the US Food and Drug Administration (FDA) approval process.
Abstract:
Objective - To describe and validate the simulation of the basic features of the GE Millennium MG gamma camera using the GATE Monte Carlo platform. Material and methods - Crystal size and thickness, parallel-hole collimation and a realistic energy acquisition window were simulated in the GATE platform. GATE results were compared to experimental data under the following imaging conditions: a point source of 99mTc at different positions during static imaging, and tomographic acquisitions using two different energy windows. The agreement between the number of events expected and the number detected in the simulation was assessed with the Mann–Whitney–Wilcoxon test. Comparisons were made regarding the measurement of sensitivity and of spatial resolution, both static and tomographic. Simulated and experimental spatial resolutions for tomographic data were compared with the Kruskal–Wallis test to assess simulation accuracy for this parameter. Results - There was good agreement between simulated and experimental data. The number of decays expected, compared with the number of decays registered, showed a small deviation (≤0.007%). The sensitivity comparisons between static acquisitions for different source-to-collimator distances (1, 5, 10, 20, 30 cm) with energy windows of 126–154 keV and 130–158 keV showed differences of 4.4%, 5.5%, 4.2%, 5.5%, 4.5% and 5.4%, 6.3%, 6.3%, 5.8%, 5.3%, respectively. For the tomographic acquisitions, the mean differences were 7.5% and 9.8% for the energy windows 126–154 keV and 130–158 keV. Comparison of simulated and experimental spatial resolutions for tomographic data showed no statistically significant differences at the 95% confidence level. Conclusions - Adequate simulation of the system's basic features using the GATE Monte Carlo simulation platform was achieved and validated.
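The validation logic above can be sketched with the same non-parametric tests. The numbers below are fabricated for illustration (they are not the paper's measurements); the point is that a p-value above 0.05 indicates no statistically significant difference at the 95% level, i.e. the simulation is consistent with the experiment.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(42)
# Hypothetical spatial-resolution values (e.g. FWHM in mm, assumed units):
experimental = rng.normal(10.0, 0.5, size=12)
simulated = experimental + rng.normal(0.0, 0.2, size=12)  # close agreement by design

u_stat, p_mw = mannwhitneyu(experimental, simulated)      # Mann-Whitney-Wilcoxon
h_stat, p_kw = kruskal(experimental, simulated)           # Kruskal-Wallis

# Both p-values should exceed 0.05 for these nearly identical samples.
print(p_mw > 0.05, p_kw > 0.05)
```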
Abstract:
Introduction - Myocardial Perfusion Imaging (MPI) is a very important tool in the assessment of Coronary Artery Disease (CAD) patients, and worldwide data demonstrate increasingly wide use and clinical acceptance. Nevertheless, it is a complex process and quite vulnerable to artefacts of various kinds, some of which seriously affect the overall quality and the clinical utility of the data obtained. One of the most inconvenient, yet relatively frequent (20% of cases), artefacts is related to patient motion during image acquisition. In most of those situations the data are evaluated and a decision is made between (A) accepting the results as they are, considering that the "noise" so introduced does not affect the final clinical information too seriously, or (B) repeating the acquisition process. Another possibility is to use the motion-correction software included in the software package of any current gamma camera. The aim of this study is to compare the quality of the final images obtained after the application of motion-correction software and after repetition of the image acquisition. Material and Methods - Thirty MPI cases affected by motion artefacts and subsequently repeated were used. A group of three independent expert Nuclear Medicine clinicians (blinded to the origin of the images) was invited to evaluate the 30 sets of three images, one set per patient: (A) the original, motion-uncorrected image; (B) the original, motion-corrected image; and (C) the second acquisition, without motion. The results were statistically analysed.
Results and Conclusion - The results demonstrate that motion-correction software is useful essentially when the amplitude of movement is not too large (a threshold that proved hard to define precisely, owing to discrepancies between clinicians and other factors, namely differences between camera brands); when the amplitude of movement is too large, the percentage of agreement between clinicians is much higher and repetition of the examination is unanimously considered indispensable.
Abstract:
Introduction: A major focus of the data mining process - especially of machine learning research - is to automatically learn to recognize complex patterns and help take adequate decisions strictly based on the acquired data. Since imaging techniques such as MPI (Myocardial Perfusion Imaging) in Nuclear Cardiology can represent a large part of the daily workflow and generate gigabytes of data, computerized analysis may offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology in the evaluation of MPI stress studies and in the decision on whether or not to continue the evaluation of each patient. The objective pursued was to automatically classify each patient test into one of three groups: "Positive", "Negative" and "Indeterminate". "Positive" patients would proceed directly to the rest part of the exam, "Negative" patients would be directly exempted from continuation, and only the "Indeterminate" group would require the clinician's analysis, thus saving clinicians' effort, increasing workflow fluidity at the technologist's level and probably saving patients' time. Methods: The WEKA v3.6.2 open-source software was used for a comparative analysis of three WEKA algorithms ("OneR", "J48" and "Naïve Bayes") in a retrospective study on the "SPECT Heart Dataset", available at the University of California, Irvine, Machine Learning Repository, using the corresponding clinical results, signed off by expert nuclear cardiologists, as the reference. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances" and "Receiver Operating Characteristic (ROC) Areas" were considered.
Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed - and apparently supported by the findings - that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of the scintigraphic data obtained in MPI, namely after the stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both Technologists and Nuclear Cardiologists. As a continuation of this study, it is planned to use more patient information and to significantly increase the population under study in order to improve system accuracy.
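The Naïve Bayes workflow described above can be sketched outside WEKA with scikit-learn. The UCI SPECT Heart dataset describes each patient with 22 binary features; the sketch below fabricates a stand-in of the same shape (the class-conditional feature probabilities are assumptions, not the real data) and reports the same kinds of metrics the study considered.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import precision_score, roc_auc_score

rng = np.random.default_rng(7)
n_patients, n_features = 267, 22           # sizes mimic the UCI SPECT Heart dataset
y = rng.integers(0, 2, size=n_patients)    # 1 = "Positive"-like, 0 = "Negative"-like
# Binary features whose probability of being 1 depends on the label (assumed 0.3 vs 0.7).
X = (rng.random((n_patients, n_features)) < (0.3 + 0.4 * y[:, None])).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = BernoulliNB().fit(X_tr, y_tr)        # Naive Bayes for binary features

precision = precision_score(y_te, clf.predict(X_te))
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"precision={precision:.2f}  ROC AUC={auc:.2f}")
```

A three-way "Positive"/"Negative"/"Indeterminate" triage, as proposed in the study, could then be obtained by thresholding `clf.predict_proba` at an upper and a lower cut-off and deferring the middle band to the clinician.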