940 results for medical image segmentation


Relevance: 80.00%

Abstract:

To investigate the degree of T2 relaxometry changes over time in groups of patients with familial mesial temporal lobe epilepsy (FMTLE) and asymptomatic relatives. We conducted both cross-sectional and longitudinal analyses of T2 relaxometry with Aftervoxel, in-house software for medical image visualization. The cross-sectional study included 35 subjects (26 with FMTLE and 9 asymptomatic relatives) and 40 controls; the longitudinal study comprised 30 subjects (21 with FMTLE and 9 asymptomatic relatives; mean interval between MRIs of 4.4 ± 1.5 years) and 16 controls. To increase the size of the patient and relative groups, we combined data acquired on 2 scanners (2T and 3T) and obtained z-scores using their respective controls. A general linear model in SPSS 21® was used for statistical analysis. In the cross-sectional analysis, elevated T2 relaxometry was identified in subjects with seizures and intermediate values in asymptomatic relatives compared to controls. Subjects with MRI signs of hippocampal sclerosis presented elevated T2 relaxometry in the ipsilateral hippocampus, while patients and asymptomatic relatives with normal MRI presented elevated T2 values in the right hippocampus. The longitudinal analysis revealed a significant increase in T2 relaxometry in the ipsilateral hippocampus exclusively in patients with seizures. The longitudinal increase of T2 signal in patients with seizures suggests an interaction between ongoing seizures and the underlying pathology, causing progressive damage to the hippocampus. The identification of elevated T2 relaxometry in asymptomatic relatives and in patients with normal MRI suggests that genetic factors may be involved in the development of some mild hippocampal abnormalities in FMTLE.
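
Pooling 2T and 3T data in this way relies on converting each subject's T2 value to a z-score against the controls acquired on the same scanner. A minimal sketch of that normalization, using hypothetical relaxometry values per scanner:

```python
import numpy as np

# Hypothetical T2 relaxometry values (ms) for controls of each scanner
controls_2T = np.array([101.0, 103.5, 99.8, 102.2, 100.4])
controls_3T = np.array([96.1, 97.9, 95.3, 98.4, 96.8])

def to_z(values, scanner_controls):
    """Express values as z-scores relative to the controls of the same scanner."""
    mu, sigma = scanner_controls.mean(), scanner_controls.std(ddof=1)
    return (values - mu) / sigma

# Patients scanned on different machines become comparable after normalization
patients_2T = np.array([108.2, 111.0])
patients_3T = np.array([103.7, 105.9])
pooled_z = np.concatenate([to_z(patients_2T, controls_2T), to_z(patients_3T, controls_3T)])
print(pooled_z)
```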

Relevance: 80.00%

Abstract:

Machado-Joseph disease (SCA3) is the most frequent spinocerebellar ataxia worldwide and is characterized by remarkable phenotypic heterogeneity. MRI-based studies in SCA3 have focused on the cerebellum and its connections, but little is known about cord damage in the disease and its clinical relevance. To evaluate spinal cord damage in SCA3 through quantitative analysis of MRI scans, a group of 48 patients with SCA3 and 48 age- and gender-matched healthy controls underwent MRI on a 3T scanner. We used T1-weighted 3D images to estimate the cervical spinal cord area (CA) and eccentricity (CE) at three C2/C3 levels based on a semi-automatic image segmentation protocol. The Scale for the Assessment and Rating of Ataxia (SARA) was employed to quantify disease severity. The two groups (SCA3 and controls) differed significantly in CA (49.5 ± 7.3 vs 67.2 ± 6.3 mm², p < 0.001) and CE values (0.79 ± 0.06 vs 0.75 ± 0.05, p = 0.005). In addition, CA showed a significant correlation with SARA scores in the patient group (p = 0.010), whereas CE was not associated with SARA scores (p = 0.857). In the multiple variable regression, disease duration was the only variable associated with CA (coefficient = -0.629, p = 0.025). SCA3 is therefore characterized by cervical cord atrophy and antero-posterior flattening, and the spinal cord area correlated with disease severity. This suggests that quantitative analysis of spinal cord MRI might be a useful biomarker in SCA3.
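
Given a binary cross-sectional mask of the cord at the C2/C3 level, area and eccentricity of the kind reported here can be read off a fitted ellipse. The following is only a minimal sketch, not the paper's semi-automatic protocol; the mask file and pixel spacing are assumptions.

```python
import numpy as np
from skimage import measure

# Hypothetical binary mask of the cord cross-section and in-plane pixel size in mm
cord_mask = np.load("cord_mask_c2c3.npy").astype(int)
pixel_mm = 0.5

props = measure.regionprops(measure.label(cord_mask))[0]
area_mm2 = props.area * pixel_mm ** 2      # cervical cord area (CA)
eccentricity = props.eccentricity          # eccentricity (CE) of the fitted ellipse; 0 = circle, 1 = line
print(f"CA = {area_mm2:.1f} mm², CE = {eccentricity:.2f}")
```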

Relevance: 80.00%

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance: 80.00%

Abstract:

The increasing adoption of information systems in healthcare has led to a scenario where patient information security is increasingly regarded as a critical issue. Allowing patient information to be put in jeopardy may lead to irreparable physical, moral, and social damage to the patient, and may shake the credibility of the healthcare institution. Medical images play a crucial role in this context, given their importance in diagnosis, treatment, and research. It is therefore vital to take measures to prevent tampering and to determine their provenance, which demands the adoption of security mechanisms that assure information integrity and authenticity. A number of works in this field follow two major approaches: the use of metadata and the use of watermarking. However, both approaches still have limitations that must be properly addressed. This paper presents a new method that uses cryptographic means to improve the trustworthiness of medical images, providing a stronger link between the image and the information on its integrity and authenticity, without compromising image quality for the end user. The use of Digital Imaging and Communications in Medicine (DICOM) structures is also an advantage for ease of development and deployment.
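
The abstract does not detail the cryptographic construction, so the following is only a generic sketch of binding integrity and authenticity information to an image: hash the DICOM pixel data and sign the hash with a private key (pydicom plus the cryptography package; the file name and key handling are assumptions, not the paper's scheme).

```python
import hashlib
import pydicom
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

ds = pydicom.dcmread("study/slice001.dcm")            # hypothetical DICOM file
pixel_digest = hashlib.sha256(ds.PixelData).digest()  # integrity: fingerprint of the pixel data

private_key = ec.generate_private_key(ec.SECP256R1()) # in practice, the institution's signing key
signature = private_key.sign(pixel_digest, ec.ECDSA(hashes.SHA256()))  # authenticity: who produced it

# Verification by the receiving end
try:
    private_key.public_key().verify(signature, pixel_digest, ec.ECDSA(hashes.SHA256()))
    print("image integrity and origin verified")
except InvalidSignature:
    print("image was tampered with or signed by another party")
```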

Relevance: 80.00%

Abstract:

Extracting human postural information from video sequences has proved a difficult research question. The most successful approaches to date have been based on particle filtering, whereby the underlying probability distribution is approximated by a set of particles. The shape of the underlying observational probability distribution plays a significant role in determining the success, in both accuracy and efficiency, of any visual tracker. In this paper we compare approaches used by other authors and present a cost-path approach that is commonly used in image segmentation problems but is currently not widely used in tracking applications.
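
As a reference point for the particle-filtering framework the paper builds on (not its cost-path observation model), here is a minimal sketch of one predict-weight-resample step over a 1D state, with a placeholder Gaussian observation likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles = 500
particles = rng.normal(0.0, 1.0, n_particles)      # initial guesses of a 1D posture state

def observation_likelihood(states, measurement, sigma=0.5):
    # Placeholder Gaussian likelihood; the shape of this distribution is what
    # the paper argues strongly affects tracker accuracy and efficiency.
    return np.exp(-0.5 * ((states - measurement) / sigma) ** 2)

def particle_filter_step(particles, measurement, motion_noise=0.2):
    particles = particles + rng.normal(0.0, motion_noise, particles.size)  # predict (random-walk motion)
    weights = observation_likelihood(particles, measurement)
    weights /= weights.sum()                                               # weight by the observation model
    idx = rng.choice(particles.size, particles.size, p=weights)            # resample proportionally to weight
    return particles[idx]

for z in [0.1, 0.3, 0.6, 0.9]:                      # hypothetical measurements over four frames
    particles = particle_filter_step(particles, z)
print("state estimate:", particles.mean())
```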

Relevance: 80.00%

Abstract:

Neurobiological models support an involvement of white matter tracts in the pathophysiology of obsessive-compulsive disorder (OCD), but there has been little systematic evaluation of white matter volumes in OCD using magnetic resonance imaging (MRI). We investigated potential differences in the volume of the cingulum bundle (CB) and the anterior limb of the internal capsule (ALIC) in OCD patients (n = 19) relative to asymptomatic control subjects (n = 15). White matter volumes were assessed using a 1.5T MRI scanner. Between-group comparisons were carried out after spatial normalization and image segmentation using optimized voxel-based morphometry. Correlations between regional white matter volumes in OCD subjects and symptom severity ratings were also investigated. We found significant global white matter reductions in OCD patients compared to control subjects. The voxel-based search for regional abnormalities (with covariance for total white matter volume) showed no specific white matter volume deficits in the brain regions predicted a priori to be affected in OCD (CB and ALIC). However, large clusters of significant positive correlation with OCD severity scores were found bilaterally in the ALIC. These findings provide evidence of OCD-related ALIC abnormalities and suggest a connectivity dysfunction within frontal-striatal-thalamic-cortical circuits. Further studies are warranted to better define the role of such white matter alterations in the pathophysiology of OCD and may provide clues for more effective targeting of neurosurgical treatments for OCD.
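
A minimal sketch of the kind of voxel-wise correlation with severity scores reported here, assuming spatially normalized white matter maps already exist on disk. The file names and the use of a plain Pearson correlation are assumptions; the study itself used optimized VBM within a full statistical framework.

```python
import numpy as np
import nibabel as nib
from scipy import stats

# Hypothetical normalized white matter maps (one per OCD patient) and their severity scores
paths = [f"wm_patient_{i:02d}.nii.gz" for i in range(19)]
severity = np.loadtxt("severity_scores.txt")

maps = np.stack([nib.load(p).get_fdata() for p in paths])   # shape: (subjects, x, y, z)
flat = maps.reshape(maps.shape[0], -1)

# Voxel-wise Pearson correlation between white matter value and symptom severity
r = np.full(flat.shape[1], np.nan)
for v in np.where(flat.std(axis=0) > 0)[0]:                 # skip constant background voxels
    r[v], _ = stats.pearsonr(flat[:, v], severity)

r_map = r.reshape(maps.shape[1:])                           # clusters of high r would be inspected (e.g. ALIC)
```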

Relevance: 80.00%

Abstract:

Fetal rat lung explants have recently become an essential tool for molecular research into the regulating mechanisms of branching morphogenesis. The development of accurate and reliable segmentation techniques may be essential to improve research outcomes. This work presents an image processing method to measure the perimeter and area of lung branches on fetal rat explants. The algorithm starts by reducing the noise corrupting the image with a pre-processing stage. The outcome is input to a watershed operation that automatically segments the image into primitive regions. An image pixel is then selected within the lung explant epithelium, allowing region growing across neighbouring watershed regions. This growing process is controlled by the statistical distribution of each region. When compared with manual segmentation, the results show the same tendency for lung development. High similarities were harder to obtain in the last two days of culture, due to the increased number of peripheral airway buds and the complexity of the lung architecture. However, with the semiautomatic measurements the standard deviation was lower and the results were more consistent between independent researchers.
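
A minimal sketch of a watershed-plus-region-growing pipeline of the kind described, using scikit-image. The file name, seed pixel, acceptance test (mean within two standard deviations of the growing region), and parameter values are assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, io, segmentation

img = io.imread("explant_day3.tif", as_gray=True)     # hypothetical explant image
smoothed = filters.gaussian(img, sigma=2)             # pre-processing: noise reduction
gradient = filters.sobel(smoothed)                    # edges guide the watershed
labels = segmentation.watershed(gradient)             # over-segmentation into primitive regions

seed = (240, 310)                                     # hypothetical pixel inside the epithelium
grown = {labels[seed]}
seed_vals = smoothed[labels == labels[seed]]
mean, std = seed_vals.mean(), seed_vals.std()

changed = True
while changed:
    changed = False
    mask = np.isin(labels, list(grown))
    border = ndi.binary_dilation(mask) & ~mask        # watershed regions touching the grown area
    for lab in np.unique(labels[border]):
        if lab in grown:
            continue
        region_mean = smoothed[labels == lab].mean()
        if abs(region_mean - mean) <= 2 * std + 1e-6: # statistical acceptance test (assumed form)
            grown.add(lab)
            changed = True
    mask = np.isin(labels, list(grown))
    mean, std = smoothed[mask].mean(), smoothed[mask].std()

area_px = mask.sum()                                      # branch area in pixels
perimeter_px = segmentation.find_boundaries(mask).sum()   # rough perimeter estimate in pixels
```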

Relevance: 80.00%

Abstract:

Pectus excavatum is the most common deformity of the thorax, and its pre-operative diagnosis usually includes a Computed Tomography (CT) examination. Aiming to eliminate the high radiation exposure of CT, this work presents a new methodology for the replacement of CT by a laser scanner (radiation-free) in the treatment of pectus excavatum using personally modeled prostheses. The complete elimination of CT involves determining the ribs' external outline, at the point of maximum sternal depression for prosthesis placement, from chest wall skin surface information acquired by a laser scanner. The developed solution relies on artificial neural networks trained with data vectors from 165 patients. Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Backpropagation, and One Step Secant learning algorithms were used. The training procedure used soft tissue thicknesses determined with image processing techniques that automatically segment the skin and rib cage. The developed solution was then used to determine the rib outline in data from 20 patient scans. Tests revealed that rib position can be estimated with an average error of about 6.82 ± 5.7 mm for the left and right sides of the patient. This error range is well below that of current manual prosthesis modeling (11.7 ± 4.01 mm), even without CT imaging, indicating a considerable step towards replacing CT with a 3D scanner for prosthesis personalization.
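
The abstract names MATLAB-style training algorithms (Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Backpropagation, One Step Secant). The sketch below is only an analogous regression setup in scikit-learn, with L-BFGS swapped in as solver and hypothetical feature/target files standing in for the thickness vectors and rib outlines.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# X: per-patient soft-tissue thickness features along the chest wall (assumed file and shape)
# y: rib outline coordinates at the maximum sternal depression level (assumed file and shape)
X = np.load("thickness_features.npy")    # shape (165, n_features)
y = np.load("rib_outline_targets.npy")   # shape (165, n_outputs)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# scikit-learn does not expose the paper's training algorithms, so L-BFGS is used as a stand-in
net = MLPRegressor(hidden_layer_sizes=(30,), solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)

pred = net.predict(X_te)
mean_err_mm = np.mean(np.abs(pred - y_te))   # average absolute error, analogous to the reported mm error
print(f"mean absolute error: {mean_err_mm:.2f} mm")
```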

Relevance: 80.00%

Abstract:

Master's degree in Informatics Engineering (Mestrado em Engenharia Informática)

Relevance: 80.00%

Abstract:

Brain dopamine transporter imaging by Single Photon Emission Computed Tomography (SPECT) with 123I-FP-CIT (DaTScan™) has become an important tool in the diagnosis and evaluation of Parkinson syndromes. This diagnostic method allows the visualization of a portion of the striatum, where the healthy pattern resembles two symmetric commas, enabling evaluation of the presynaptic dopamine system, in which dopamine transporters are responsible for the release of dopamine into the synaptic cleft and its reuptake into the nigrostriatal nerve terminals, where it is stored or degraded. In daily practice, assessment of DaTScan™ commonly relies on visual evaluation alone for diagnosis. However, this process is complex and subjective, as it depends on the observer's experience and is associated with high intra- and inter-observer variability. Studies have shown that semiquantification can improve the diagnosis of Parkinson syndromes. Semiquantification requires image segmentation methods based on regions of interest (ROIs): ROIs are drawn over specific (striatum) and nonspecific (background) uptake areas, and specific binding ratios are then calculated. The low adherence to semiquantification in the diagnosis of Parkinson syndromes is related not only to the time it requires but also to the need for a database of reference values adapted to the population concerned and to the examination protocol of each department; studies have concluded that such adaptation increases the reproducibility of semiquantification. The aim of this investigation was to create and validate a database of healthy controls for dopamine transporter imaging with DaTScan™, named DBRV. The database was adapted to the protocol of the Nuclear Medicine Department and to the population of the Infanta Cristina Hospital in Badajoz, Spain.
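
The specific binding ratio mentioned above is commonly computed as (striatal counts − background counts) / background counts. A minimal sketch, assuming binary ROI masks for the striatum and a nonspecific reference region over a reconstructed SPECT slice (array and file names are hypothetical):

```python
import numpy as np

slice_counts = np.load("datscan_slice.npy")     # reconstructed 123I-FP-CIT SPECT slice
striatum_roi = np.load("striatum_roi.npy")      # binary mask over the striatum (specific uptake)
background_roi = np.load("background_roi.npy")  # binary mask over a nonspecific reference region

striatal_mean = slice_counts[striatum_roi > 0].mean()
background_mean = slice_counts[background_roi > 0].mean()

# Specific binding ratio: specific uptake expressed relative to the nonspecific background
sbr = (striatal_mean - background_mean) / background_mean
print(f"specific binding ratio: {sbr:.2f}")
```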

Relevance: 80.00%

Abstract:

Forensic document analysis is one of the areas of the Forensic Sciences, responsible for verifying the authenticity of documents. Documents can be of different types, with currency and handwriting being the forensic evidence that most frequently motivates the analysis. Associating new technologies with this analysis process allows a better evaluation of such evidence and makes the process faster. This thesis is based on the forensic analysis of two types of documents: euro banknotes and handwritten forms. The work aimed to develop image processing and analysis techniques for evidence of these types, in order to extract measures that allow their authenticity to be assessed. Banknote images were acquired by spectral imaging, with four acquisition modalities defined: transmitted visible light, reflected visible light, ultraviolet A, and ultraviolet C. For each of these acquisition modalities, two protocols were also defined: front and back. Images of the handwritten documents were acquired by scanning them with the automatic document feeder of a multifunction device. For the banknote images, several image processing and analysis algorithms specific to this type of evidence were developed. These algorithms allow the segmentation of the image's region of interest, the segmentation of the sub-regions containing the security marks to be evaluated, and the extraction of some features. For the images of the handwritten documents, segmentation algorithms were also developed to obtain all the sub-regions of interest of the forms, so that the various elements can be analysed. For this type of evidence, an analysis algorithm was additionally developed for the elements corresponding to the writing of a numeric sequence, which yields the images of the individual characters. The work carried out and the results obtained allowed the definition of image acquisition protocols for these types of evidence. The automatic segmentation and analysis algorithms developed throughout this work can be valuable aids in the document authenticity analysis process, which until now has been done manually. The results of the studies performed on the various pieces of evidence are also presented, namely the performance of the different algorithms analysed, as well as some of the difficulties encountered during the process. A discussion of the adopted methodology and of the results is also presented, together with proposals for future work, namely feature extraction and the implementation of classifiers capable of assessing the authenticity of documents.
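
The step that isolates individual characters from a numeric sequence can be illustrated with thresholding plus connected-component analysis. This is only a sketch under assumed conditions (dark ink on light paper, a pre-cropped field image, an arbitrary noise threshold), not the thesis's algorithm.

```python
from skimage import io, filters, measure

form = io.imread("numeric_field.png", as_gray=True)    # hypothetical crop of the numeric-sequence field
binary = form < filters.threshold_otsu(form)           # dark ink on light paper
labels = measure.label(binary)                         # connected components = candidate characters

chars = []
for region in sorted(measure.regionprops(labels), key=lambda r: r.bbox[1]):  # left-to-right order
    if region.area < 20:                               # drop small specks (assumed noise threshold)
        continue
    minr, minc, maxr, maxc = region.bbox
    chars.append(form[minr:maxr, minc:maxc])           # image of one character
print(f"{len(chars)} character images extracted")
```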

Relevance: 80.00%

Abstract:

This document gives a detailed description of the modular integration of a script into the OsiriX software. The purpose of this script is to determine the central diameter of the aorta from a Computed Tomography scan. To this end, concepts related to digital image processing, associated technologies such as the DICOM standard, and software development are covered. As a preliminary study, several medical image viewers are analysed, both research-oriented and commercial. Two distinct implementations of the plugin were produced: the first version invokes the processing script using the study file stored on disk; the second passes data through a shared memory block and uses the Java Native Interface framework. Finally, the whole process of affixing the CE Marking to a class IIa medical device and obtaining the declaration of conformity from a Notified Body is demonstrated. The Mac OS X and Linux operating systems and the Java, Objective-C, and Python programming languages were used.
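
The abstract does not describe how the script measures the diameter, so the following is only a generic sketch of one way to derive a central aortic diameter from a segmented CT slice (the file names, the lumen mask, and the equivalent-circular-diameter choice are assumptions):

```python
import numpy as np
import pydicom

ds = pydicom.dcmread("ct_slice.dcm")                 # hypothetical axial slice at the measurement level
spacing_mm = float(ds.PixelSpacing[0])               # in-plane pixel size (assumed square pixels)

aorta_mask = np.load("aorta_mask.npy")               # hypothetical binary segmentation of the aortic lumen
area_mm2 = aorta_mask.sum() * spacing_mm ** 2
diameter_mm = 2.0 * np.sqrt(area_mm2 / np.pi)        # equivalent circular diameter of the cross-section
print(f"central aortic diameter ~ {diameter_mm:.1f} mm")
```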

Relevance: 80.00%

Abstract:

New technologies applied to image processing and pattern recognition have seen great progress in recent decades. Their application spans several areas of science, notably forensic ballistics. Studying evidence (cartridge cases and projectiles) found at a crime scene with image processing and analysis techniques is relevant because, when fired, firearms imprint unique marks on the discharged cases and projectiles, making it possible to relate evidence fired by the same weapon. Manually comparing evidence found at a crime scene with evidence in a database, in terms of visual parameters, is a time-consuming approach. This work aimed to develop automatic techniques for processing and analysing images of evidence acquired with a comparison optical microscope, based on computational algorithms. These were developed using open-source libraries and tools. For the acquisition of ballistic evidence images, four acquisition modalities were defined: Planar, Multifocus, Microscan, and Multiscan. Processing algorithms specially developed for this purpose were applied to the acquired images. These algorithms provide image segmentation, feature extraction, and image alignment; the latter is intended to correlate the evidence and produce a quantitative value (metric) indicating how similar the pieces of evidence are. Based on the work carried out and the results obtained, microscopy image acquisition protocols were defined that enable imaging of the regions amenable to study, together with algorithms that automate the subsequent alignment of evidence images, an advantage over the manual comparison process.
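
The alignment-plus-metric idea can be illustrated with a translation-only registration followed by normalized cross-correlation. The thesis's actual algorithms are not specified here, and the file names are hypothetical; real case marks would also require handling rotation and illumination differences.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io
from skimage.registration import phase_cross_correlation

ref = io.imread("case_reference.png", as_gray=True)     # hypothetical reference cartridge-case image
qry = io.imread("case_query.png", as_gray=True)         # hypothetical questioned image

shift, error, _ = phase_cross_correlation(ref, qry)     # estimate the translation between the two images
aligned = ndi.shift(qry, shift)                         # bring the query into the reference frame

# Normalized cross-correlation as a simple similarity metric between the aligned images
a = ref - ref.mean()
b = aligned - aligned.mean()
similarity = (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
print(f"similarity metric: {similarity:.3f}")
```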