927 results for Computer-assisted image processing
Abstract:
With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may have difficulty perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, wavefront aberration, the point spread function and the modulation transfer function. The ocular aberration of the computer user was first measured with a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was generated from the resized aberration, with the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, with aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use.
The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing a significant improvement in recognition accuracy. The merit and necessity of dynamic precompensation were also substantiated by comparing it with static precompensation. Its visual benefit was further confirmed by the subjective assessments collected from the evaluation participants.
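The core precompensation idea can be sketched in one dimension: divide the target's spectrum by the blur transfer function, so that the eye's subsequent blur reproduces the intended image. The PSF and signal below are invented, and a naive O(n²) DFT stands in for the dissertation's Zernike-based wavefront model; this is a toy illustration, not the author's implementation:

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform (O(n^2)); fine for a toy example."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n) for t in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def circular_blur(signal, kernel):
    """Circular convolution modelling the eye's blur (toy PSF)."""
    n = len(signal)
    return [sum(signal[(t - s) % n] * kernel[s] for s in range(n)) for t in range(n)]

def precompensate(target, kernel, eps=1e-9):
    """Inverse filter: divide the target spectrum by the blur transfer function."""
    T, H = dft(target), dft(kernel)
    P = [t / h if abs(h) > eps else 0j for t, h in zip(T, H)]
    return [v.real for v in dft(P, inverse=True)]

# Invented PSF: the eye spreads each point onto its circular neighbours.
psf = [0.6, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2]
target = [0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0]

pre = precompensate(target, psf)
perceived = circular_blur(pre, psf)   # what the aberrated eye would "see"
```

Blurring the precompensated signal recovers the intended target, which is exactly the effect the method aims for on screen content.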
Abstract:
To investigate the degree of T2 relaxometry changes over time in groups of patients with familial mesial temporal lobe epilepsy (FMTLE) and asymptomatic relatives. We conducted both cross-sectional and longitudinal analyses of T2 relaxometry with Aftervoxel, an in-house software tool for medical image visualization. The cross-sectional study included 35 subjects (26 with FMTLE and 9 asymptomatic relatives) and 40 controls; the longitudinal study comprised 30 subjects (21 with FMTLE and 9 asymptomatic relatives; mean MRI interval of 4.4 ± 1.5 years) and 16 controls. To increase the size of the patient and relative groups, we combined data acquired on 2 scanners (2T and 3T) and obtained z-scores using their respective controls. A general linear model in SPSS 21 was used for statistical analysis. In the cross-sectional analysis, elevated T2 relaxometry was identified in subjects with seizures, with intermediate values in asymptomatic relatives, compared to controls. Subjects with MRI signs of hippocampal sclerosis presented elevated T2 relaxometry in the ipsilateral hippocampus, while patients and asymptomatic relatives with normal MRI presented elevated T2 values in the right hippocampus. The longitudinal analysis revealed a significant increase in T2 relaxometry in the ipsilateral hippocampus exclusively in patients with seizures. The longitudinal increase of T2 signal in patients with seizures suggests an interaction between ongoing seizures and the underlying pathology, causing progressive damage to the hippocampus. The identification of elevated T2 relaxometry in asymptomatic relatives and in patients with normal MRI suggests that genetic factors may be involved in the development of some mild hippocampal abnormalities in FMTLE.
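The scanner-pooling step, z-scoring each subject against the control group acquired on the same scanner so that 2T and 3T data share one scale, can be sketched as follows. The T2 values below are invented for illustration:

```python
from statistics import mean, stdev

def control_zscores(patient_values, control_values):
    """z-score each patient measurement against its own scanner's control group."""
    mu, sigma = mean(control_values), stdev(control_values)
    return [(v - mu) / sigma for v in patient_values]

# Hypothetical T2 relaxometry values (ms) from two scanners with different baselines
controls_2t = [102.0, 104.0, 103.0, 101.0, 105.0]
patients_2t = [110.0, 108.0]
controls_3t = [88.0, 90.0, 89.0, 91.0, 87.0]
patients_3t = [96.0, 94.0]

# After normalization, values from both scanners live on a common scale
pooled = control_zscores(patients_2t, controls_2t) + control_zscores(patients_3t, controls_3t)
```

Despite raw T2 values differing by about 15 ms between scanners, the pooled z-scores are directly comparable, which is what allows the combined group analysis.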
Abstract:
The search for an Alzheimer's disease (AD) biomarker is one of the most relevant contemporary research topics due to the high prevalence and social costs of the disease. Functional connectivity (FC) of the default mode network (DMN) is a plausible candidate for such a biomarker. We evaluated 22 patients with mild AD and 26 age- and gender-matched healthy controls. All subjects underwent resting functional magnetic resonance imaging (fMRI) in a 3.0 T scanner. To identify the DMN, seed-based FC of the posterior cingulate was calculated. We also measured the sensitivity/specificity of the method, and verified a correlation with cognitive performance. We found a significant difference between patients with mild AD and controls in average z-scores: DMN, whole cortical positive (WCP) and absolute values. DMN individual values showed a sensitivity of 77.3% and specificity of 70%. DMN and WCP values were correlated to global cognition and episodic memory performance. We showed that individual measures of DMN connectivity could be considered a promising method to differentiate AD, even at an early phase, from normal aging. Further studies with larger numbers of participants, as well as validation of normal values, are needed for more definitive conclusions.
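The seed-based FC computation reduces to correlating the seed time series with each voxel's time series and applying Fisher's z-transform so the maps can be averaged and compared across subjects. A plain-Python sketch with invented BOLD series (not the study's data or pipeline):

```python
from math import atanh, sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def seed_fc_map(seed_ts, voxel_ts_list):
    """Correlate the seed with every voxel, then Fisher z-transform (z = atanh(r))."""
    return [atanh(pearson(seed_ts, v)) for v in voxel_ts_list]

# Hypothetical BOLD time series: one posterior-cingulate seed, three voxels
seed = [0.1, 0.5, -0.2, 0.3, 0.0, 0.4, -0.1, 0.2]
voxels = [
    [0.2, 0.6, -0.1, 0.2, 0.1, 0.5, -0.2, 0.1],     # strongly coupled
    [-0.1, -0.4, 0.3, -0.2, 0.0, -0.5, 0.2, -0.1],  # anti-correlated
    [0.0, 0.1, 0.0, -0.1, 0.1, 0.0, -0.1, 0.1],     # weakly coupled
]
zmap = seed_fc_map(seed, voxels)
```

Thresholding such a z-map is what delineates the DMN around the posterior cingulate seed.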
Abstract:
This study was designed to evaluate the correlation between computed tomography findings, data from the physical examination, and the Friedman Staging System (FSS) in patients with obstructive sleep apnea (OSA). We performed a retrospective evaluation of the medical records of 33 patients (19 male and 14 female) with a mean body mass index of 30.38 kg/m² and mean age of 49.35 years. Among these patients, 14 presented with severe OSA, 7 with moderate OSA, 7 with mild OSA, and 5 were healthy. The patients were divided into 2 groups according to the FSS: group A comprised patients with FSS stage I or II, and group B comprised patients with FSS stage III. With the Fisher exact test, a positive relationship was found between FSS stage and the apnea-hypopnea index (P = .011) and between FSS stage and body mass index (P = .012). There was no correlation of age (P = .55) or gender (P = .53) with FSS stage. An analysis of variance comparing upper airway volume between the 2 groups yielded P = .018. In this sample the FSS and upper airway volume showed an inverse correlation and were useful in analyzing the mechanisms of airway collapse in patients with OSA.
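The Fisher exact test used for the FSS associations can be sketched for a 2×2 table: sum the hypergeometric probabilities of all tables with the same margins that are no more likely than the observed one. The example table below is Fisher's classic tea-tasting data, not data from this study:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def prob(k):  # P(top-left cell = k) under fixed margins (hypergeometric)
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Sum over all tables at most as probable as the observed one
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs + 1e-12)

# Classic "lady tasting tea" table [[3, 1], [1, 3]]: p = 34/70
p = fisher_exact_two_sided(3, 1, 1, 3)
```

With small samples like the 33 patients here, this exact test is preferred over a chi-squared approximation.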
Abstract:
OBJECTIVE: To evaluate the effects of carbon dioxide infiltration on adipocytes in the abdominal wall. METHODS: Fifteen volunteers underwent CO2 infusion sessions for three consecutive weeks (two sessions per week, with intervals of two to three days between sessions). The volume of carbon dioxide infused per session, at previously marked points, was always calculated from the surface of the area to be treated, with a fixed infused volume of 250 mL/100 cm² of treated surface. The infiltration points were marked 2 cm apart from each other. At each point, 10 mL was injected per session at a flow rate of 80 mL/min. Fragments of subcutaneous tissue were collected from the anterior abdominal wall before and after treatment. The number of adipocytes and their histomorphological changes (mean diameter, perimeter, length, width, and number of adipocytes per observation field) were measured by computerized cytometry. The results were analyzed with the paired Student's t-test at a 5% significance level (p<0.05). RESULTS: A significant reduction was found in the number of adipocytes in the abdominal wall and in their area, diameter, perimeter, length, and width after hypercapnia (p=0.0001). CONCLUSION: Percutaneous CO2 infiltration reduces the population and modifies the morphology of adipocytes in the anterior abdominal wall.
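The paired Student's t statistic behind this before/after comparison is a one-liner over per-subject differences. The adipocyte counts below are invented for illustration; in practice the statistic is compared against the t distribution with n-1 degrees of freedom at the 5% level:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Paired Student's t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-subject before/after differences."""
    diffs = [b - a for b, a in zip(before, after)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical adipocyte counts per observation field, before and after CO2
before = [52, 48, 50, 55, 47, 51]
after = [44, 41, 45, 46, 42, 44]
t = paired_t(before, after)   # compare against t(n-1) critical value at 5%
```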
Abstract:
PURPOSE: The ability to predict and understand which biomechanical properties of the cornea are responsible for the stability or progression of keratoconus may be an important clinical and surgical tool for the eye-care professional. We have developed a finite element model of the cornea that predicts keratoconus-like behavior and its evolution based on the material properties of the corneal tissue. METHODS: Corneal material properties were modeled using bibliographic data, and corneal topography was based on literature values from a schematic eye model. Commercial software was used to simulate mechanical and surface properties when the cornea was subjected to different local parameters, such as elasticity. RESULTS: The simulation showed that, depending on the initial corneal surface shape, changes in local material properties and different intraocular pressure values induce a localized protuberance and an increase in curvature relative to the remaining portion of the cornea. CONCLUSIONS: This technique provides a quantitative and accurate approach to understanding the biomechanical nature of keratoconus. The implemented model has shown that changes in the local material properties of the cornea and in intraocular pressure are intrinsically related to keratoconus pathology and its shape/curvature.
Abstract:
OBJECTIVE: To develop the instrumentation and software for wide-angle corneal topography using the traditional Placido disc. The goal is to allow corneal topographers based on the Placido technique to map a larger region of the cornea through a simple adaptation of the target. METHODS: Using the traditional Placido disc of a conventional corneal topographer, 9 LEDs (light-emitting diodes) were fitted to the conical housing so that the volunteer patient could fixate in different directions. For each direction, Placido images were digitized and processed to build, through an algorithm involving sophisticated computer graphics techniques, a complete three-dimensional map of the entire cornea. RESULTS: The results presented here show that a region up to 100% larger can be mapped with this technique, allowing the clinician to map the cornea almost to the limbus. Results are presented for a spherical calibration surface and also for an in vivo cornea with a high degree of astigmatism, showing curvature and elevation. CONCLUSION: We believe this new technique can improve several procedures, such as contact lens fitting and algorithms for customized ablations for hyperopia, among others.
Abstract:
Axial X-ray computed tomography (CT) scanning provides a convenient means of recording the three-dimensional form of soil structure. The technique has been used for nearly two decades, but initial development concentrated on qualitative description of images. More recently, increasing effort has been put into quantifying the geometry and topology of macropores likely to contribute to preferential flow in soils. Here we describe a novel technique for tracing connected macropores in CT scans. After object extraction, three-dimensional mathematical morphological filters are applied to quantify the reconstructed structure. These filters consist of sequences of so-called erosions and/or dilations of a 32-face structuring element to describe object distances and volumes of influence. The tracing and quantification methodologies were tested on a set of undisturbed soil cores collected in a Swiss pre-alpine meadow, where a new earthworm species (Aporrectodea nocturna) was accidentally introduced. Given the reduced number of samples analysed in this study, the results presented only illustrate the potential of the method to reconstruct and quantify macropores. Our results suggest that the introduction of the new species induced very limited change to the soil structure; for example, no difference in total macropore length or mean diameter was observed. However, in the zone colonised by the new species, individual macropores tended to have a longer average length, be more vertical and be further apart at some depths. Overall, the approach proved well suited to the analysis of the three-dimensional architecture of macropores. It provides a framework for the analysis of complex structures, which are less satisfactorily observed and described using 2D imaging. (C) 2002 Elsevier Science B.V. All rights reserved.
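The morphological quantification step can be illustrated with plain-Python binary erosion and dilation on a small 3D grid. A 6-connected cross is used here as a simple stand-in for the paper's 32-face structuring element, and the "macropore" volume is synthetic:

```python
def erode(vol, neighbours):
    """Binary erosion: a voxel survives only if it and all its
    structuring-element neighbours are set."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    def get(z, y, x):
        return 0 <= z < nz and 0 <= y < ny and 0 <= x < nx and vol[z][y][x]
    return [[[vol[z][y][x] and all(get(z + dz, y + dy, x + dx) for dz, dy, dx in neighbours)
              for x in range(nx)] for y in range(ny)] for z in range(nz)]

def dilate(vol, neighbours):
    """Binary dilation: a voxel is set if it or any neighbour is set."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    def get(z, y, x):
        return 0 <= z < nz and 0 <= y < ny and 0 <= x < nx and vol[z][y][x]
    return [[[vol[z][y][x] or any(get(z + dz, y + dy, x + dx) for dz, dy, dx in neighbours)
              for x in range(nx)] for y in range(ny)] for z in range(nz)]

# 6-connected cross as a simple stand-in for the 32-face structuring element
CROSS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

# A 3x3x3 block of "macropore" voxels inside a 5x5x5 volume
vol = [[[1 if 1 <= z <= 3 and 1 <= y <= 3 and 1 <= x <= 3 else 0
         for x in range(5)] for y in range(5)] for z in range(5)]

core = erode(vol, CROSS)          # only the centre voxel survives
recovered = dilate(core, CROSS)   # dilation grows it back into a cross
```

Counting the voxels that survive successive erosions is one way such filters describe object sizes and volumes of influence.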
Abstract:
Functional brain imaging techniques such as functional MRI (fMRI), which allow the in vivo investigation of the human brain, have been increasingly employed to address the neurophysiological substrates of emotional processing. Despite the growing number of fMRI studies in the field, when taken separately these individual imaging studies demonstrate contrasting findings and variable pictures, and are unable to definitively characterize the neural networks underlying each specific emotional condition. Different imaging packages, as well as the statistical approaches used for image processing and analysis, probably play a detrimental role by increasing the heterogeneity of findings. In particular, it is unclear to what extent the observed neurofunctional response of the brain cortex during emotional processing depends on the fMRI package used in the analysis. In this pilot study, we performed a double analysis of an fMRI dataset using emotional faces. The Statistical Parametric Mapping (SPM) version 2.6 (Wellcome Department of Cognitive Neurology, London, UK) and XBAM 3.4 (Brain Imaging Analysis Unit, Institute of Psychiatry, King's College London, UK) programs, which use parametric and non-parametric analysis, respectively, were used to assess our results. Both packages revealed that processing of emotional faces was associated with increased activation in the brain's visual areas (occipital, fusiform and lingual gyri), the cerebellum, the parietal cortex, the cingulate cortex (anterior and posterior cingulate), and the dorsolateral and ventrolateral prefrontal cortex. However, the blood oxygenation level-dependent (BOLD) response in the temporal regions, insula and putamen was evident in the XBAM analysis but not in the SPM analysis. Overall, SPM and XBAM analyses revealed comparable whole-group brain responses. Further studies are needed to explore the between-group compatibility of the different imaging packages in other cognitive and emotional processing domains. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
An efficient representation method for arbitrarily shaped image segments is proposed. The method includes an effective way to select a wavelet basis to approximate a given image segment, yielding improved image quality and reduced computational load.
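The basis-selection idea can be illustrated in one dimension: each candidate basis is scored by the reconstruction error after keeping only the k largest-magnitude coefficients, and the basis with the best energy compaction wins. This sketch compares a Haar transform against no transform on an invented piecewise-constant segment; the paper's actual 2D, arbitrary-shape method is more involved:

```python
def haar_fwd(x):
    """Full multi-level Haar transform of a power-of-two-length signal."""
    out, n = list(x), len(x)
    while n > 1:
        half = n // 2
        avg = [(out[2 * i] + out[2 * i + 1]) / 2 ** 0.5 for i in range(half)]
        det = [(out[2 * i] - out[2 * i + 1]) / 2 ** 0.5 for i in range(half)]
        out[:n] = avg + det
        n = half
    return out

def haar_inv(c):
    """Inverse of haar_fwd."""
    out, n = list(c), 1
    while n < len(out):
        merged = []
        for a, d in zip(out[:n], out[n:2 * n]):
            merged += [(a + d) / 2 ** 0.5, (a - d) / 2 ** 0.5]
        out[:2 * n] = merged
        n *= 2
    return out

def topk_error(x, fwd, inv, k):
    """Squared error after keeping only the k largest-magnitude coefficients."""
    c = fwd(x)
    keep = set(sorted(range(len(c)), key=lambda i: abs(c[i]), reverse=True)[:k])
    y = inv([v if i in keep else 0.0 for i, v in enumerate(c)])
    return sum((a - b) ** 2 for a, b in zip(x, y))

identity = lambda x: list(x)

# A piecewise-constant "segment": Haar compacts it into two coefficients,
# so its top-2 error is far below keeping the two largest raw samples.
seg = [4.0, 4.0, 4.0, 4.0, 1.0, 1.0, 1.0, 1.0]
err_haar = topk_error(seg, haar_fwd, haar_inv, 2)
err_raw = topk_error(seg, identity, identity, 2)
```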
Abstract:
Dental implant recognition in patients without available records is a time-consuming task that is not straightforward. The traditional method is a completely user-dependent process in which an expert compares a 2D X-ray image of the dental implant with a generic database. Due to the high number of implants available and the similarity between them, automatic or semi-automatic frameworks to aid implant model detection are essential. In this study, a novel computer-aided framework for dental implant recognition is proposed. The method relies on image processing concepts, namely: (i) a segmentation strategy for semi-automatic implant delineation; and (ii) a machine learning approach for implant model recognition. Although the segmentation technique is the main focus of the current study, preliminary details of the machine learning approach are also reported. Two different scenarios are used to validate the framework: (1) comparison of the semi-automatic contours against manual implant contours in 125 X-ray images; and (2) classification of 11 known implants using a large reference database of 601 implants. In experiment 1, a Dice metric of 0.97±0.01, a mean absolute distance of 2.24±0.85 pixels and a Hausdorff distance of 11.12±6 pixels were obtained. In experiment 2, 91% of the implants were successfully recognized while reducing the reference database to 5% of its original size. Overall, the segmentation technique achieved accurate implant contours. Although the preliminary classification results prove the concept of the current work, more features and an extended database should be used in future work.
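The contour-evaluation metrics from experiment 1 are straightforward to state in code. Below is a minimal sketch of the Dice coefficient and the Hausdorff distance on hypothetical binary masks given as pixel-coordinate sets (the real evaluation also used the mean absolute distance between contour points):

```python
def dice(a, b):
    """Dice similarity between two binary masks given as sets of pixel
    coordinates: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (Euclidean)."""
    def directed(u, v):
        return max(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for qx, qy in v) for px, py in u)
    return max(directed(a, b), directed(b, a))

# Hypothetical delineations: manual vs semi-automatic implant masks,
# the automatic one missing the leftmost pixel column
manual = {(x, y) for x in range(10) for y in range(20)}   # 200 px
auto = {(x, y) for x in range(1, 10) for y in range(20)}  # 180 px

score = dice(auto, manual)
hd = hausdorff(auto, manual)
```

Dice measures region overlap while the Hausdorff distance captures the worst-case boundary deviation, which is why the two are usually reported together.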
Abstract:
A crucial method for investigating patients with coronary artery disease (CAD) is the calculation of the left ventricular ejection fraction (LVEF). It is, consequently, imperative to precisely estimate the value of LVEF, which can be done with myocardial perfusion scintigraphy. The present study therefore aimed to establish and compare the estimation performance of the quantitative parameters of two reconstruction methods: filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM). Methods: A beating-heart phantom with known values of end-diastolic volume, end-systolic volume, and LVEF was used. Quantitative gated SPECT/quantitative perfusion SPECT software was used to obtain these quantitative parameters in a semiautomatic mode. The Butterworth filter was used in FBP, with cutoff frequencies between 0.2 and 0.8 cycles per pixel combined with orders of 5, 10, 15, and 20. Sixty-three reconstructions were performed using 2, 4, 6, 8, 10, 12, and 16 OSEM subsets, combined with 2, 4, 6, 8, 10, 12, 16, 32, and 64 iterations. Results: With FBP, the end-diastolic, end-systolic, and stroke volumes rise as the cutoff frequency increases, whereas the value of LVEF diminishes. The same pattern is observed with OSEM reconstruction. However, OSEM gives a more precise estimation of the quantitative parameters, especially with the combinations of 2 iterations × 10 subsets and 2 iterations × 12 subsets. Conclusion: OSEM reconstruction presents better estimations of the quantitative parameters than does FBP. This study recommends the use of 2 iterations with 10 or 12 subsets for OSEM, and a cutoff frequency of 0.5 cycles per pixel with orders of 5, 10, or 15 for FBP, as the best estimations for left ventricular volume and ejection fraction quantification in myocardial perfusion scintigraphy.
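The Butterworth filter swept in the FBP reconstructions has a simple closed-form magnitude response. This sketch uses the common definition |H(f)| = 1 / sqrt(1 + (f/fc)^(2n)) with the cutoff in cycles per pixel, as in the abstract; individual packages may parameterize it slightly differently:

```python
def butterworth(f, cutoff, order):
    """Magnitude response of a Butterworth low-pass filter:
    |H(f)| = 1 / sqrt(1 + (f / cutoff) ** (2 * order))."""
    return 1.0 / (1.0 + (f / cutoff) ** (2 * order)) ** 0.5

# At the cutoff the response is always 1/sqrt(2), regardless of order;
# above it, higher orders roll off faster (sharper filter)
h_cut = butterworth(0.5, 0.5, 10)
h_tail_low = butterworth(0.7, 0.5, 5)
h_tail_high = butterworth(0.7, 0.5, 20)
```

Raising the cutoff admits more high-frequency content, which is consistent with the reported volume increases at higher cutoff frequencies.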
Abstract:
Measurements in civil engineering load tests usually require considerable time and complex procedures. Measurements are therefore usually constrained by the number of sensors, resulting in a restricted monitored area. Image processing analysis is an alternative that enables measurement over the complete area of interest with a simple and effective setup. In this article, photo sequences taken during load displacement tests were captured by a digital camera and processed with image correlation algorithms. Three different image processing algorithms were applied to real images taken from tests on specimens of PVC and Plexiglas. The data obtained from the image processing algorithms were also compared with the data from physical sensors. Complete displacement and strain maps were obtained. Results show that the accuracy of the measurements obtained by photogrammetry is equivalent to that from the physical sensors, but with much less equipment and fewer setup requirements. © 2015 Computer-Aided Civil and Infrastructure Engineering.
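The displacement measurement behind image correlation can be sketched as an exhaustive search for the shift that maximizes the zero-normalized cross-correlation between a reference window and the deformed image. This is a 1D toy with a synthetic intensity profile, not a reimplementation of the article's algorithms:

```python
def ncc(a, b):
    """Zero-normalized cross-correlation of two equal-length windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def find_shift(reference, deformed, window, max_shift):
    """Displacement (in pixels) that best aligns a reference window with the
    deformed image, found by exhaustive NCC search."""
    ref = reference[window[0]:window[1]]
    best, best_score = 0, -2.0
    for s in range(-max_shift, max_shift + 1):
        cand = deformed[window[0] + s:window[1] + s]
        if len(cand) == len(ref):
            score = ncc(ref, cand)
            if score > best_score:
                best, best_score = s, score
    return best

# Synthetic intensity profile and a copy rigidly shifted by 3 px
profile = [0, 1, 4, 9, 5, 2, 8, 7, 3, 6, 1, 0, 2, 5, 9, 4]
shifted = [0, 0, 0] + profile[:-3]
dx = find_shift(profile, shifted, (4, 10), 4)
```

Repeating this search for a grid of windows over the photo sequence is what yields the full displacement map, from which strains are derived.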
Abstract:
It is well known that ROVs require human intervention to guarantee the success of their assignment, as well as the safety of the equipment. However, as their teleoperation is quite complex to perform, there is a need for assisted teleoperation. This study takes on that challenge by developing vision-based assisted teleoperation maneuvers, since a standard camera is present in any ROV. The proposed approach is a visual servoing solution that allows the user to select between several standard image processing methods, and it is applied to a 3-DOF ROV. The most interesting characteristic of the presented system is the exclusive use of camera data to improve the teleoperation of an underactuated ROV. Through the comparison and evaluation of standard implementations of different vision methods, and the execution of simple maneuvers to acquire experimental results, it is demonstrated that the teleoperation of a small ROV can be drastically improved without the need to install additional sensors.
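A minimal sketch of the image-based visual servoing idea: the velocity command is proportional to the negative image-plane feature error. The gain, target, and single-integrator "plant" below are invented for illustration and bear no relation to the ROV's actual dynamics or controller:

```python
def visual_servo_step(feature, target, gain=0.5):
    """Proportional image-based control: velocity command proportional to
    the negative image-plane error (a 2D pixel error here)."""
    return [-gain * (f - t) for f, t in zip(feature, target)]

def simulate(feature, target, steps=20, gain=0.5):
    """Toy plant: the tracked feature moves exactly as commanded each step."""
    for _ in range(steps):
        v = visual_servo_step(feature, target, gain)
        feature = [f + vi for f, vi in zip(feature, v)]
    return feature

# Feature starts 40 px right / 30 px below the image-centre target
final = simulate([360.0, 270.0], [320.0, 240.0])
```

Each step shrinks the pixel error by the gain factor, so the tracked feature converges to the target without any sensor other than the camera.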