875 results for Image acquisition and representation
Abstract:
The aim was to propose a strategy for finding reasonable compromises between image noise and dose as a function of patient weight. The weighted CT dose index (CTDIw) was measured on a multidetector-row CT unit using CTDI test objects of 16, 24 and 32 cm in diameter at 80, 100, 120 and 140 kV. These test objects were then scanned in helical mode using a wide range of tube currents and voltages with a reconstructed slice thickness of 5 mm. For each set of acquisition parameters, image noise was measured, and the Rose model observer was used to test two strategies for proposing a reasonable compromise between dose and low-contrast detection performance: (1) the use of a unique noise level for all test object diameters, and (2) the use of a unique dose efficacy level, defined as the noise reduction per unit dose. Published data were used to define four weight classes, and an acquisition protocol was proposed for each class. The protocols have been applied in clinical routine for more than one year. CTDIvol values of 6.7, 9.4, 15.9 and 24.5 mGy were proposed for the weight classes 2.5-5, 5-15, 15-30 and 30-50 kg, with image noise levels in the range of 10-15 HU. The proposed method allows patient dose and image noise to be controlled in such a way that dose reduction does not impair the detection of low-contrast lesions. The proposed values correspond to high-quality images and can be reduced if only high-contrast organs are assessed.
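A short illustration of the dose-noise arithmetic behind such protocols: if image noise is quantum-limited, it scales roughly as 1/sqrt(dose), so a single reference measurement fixes the dose needed for any target noise level. The sketch below rests on that assumption; the function name and arguments are illustrative and not taken from the paper.

```python
def dose_for_target_noise(noise_ref, dose_ref, noise_target):
    """CTDIvol needed to reach a target noise level, assuming quantum-limited
    noise: sigma = sigma_ref * sqrt(dose_ref / dose)."""
    return dose_ref * (noise_ref / noise_target) ** 2

# Example: 15 HU measured at a reference 10 mGy implies
# 10 * (15/10)**2 = 22.5 mGy to reach 10 HU.
print(dose_for_target_noise(15.0, 10.0, 10.0))
```

The quadratic cost of lowering noise is exactly why the paper trades a fixed noise target against a dose-efficacy criterion for the larger test objects.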
Free-breathing whole-heart coronary MRA with 3D radial SSFP and self-navigated image reconstruction.
Abstract:
Respiratory motion is a major source of artifacts in cardiac magnetic resonance imaging (MRI). Free-breathing techniques with pencil-beam navigators efficiently suppress respiratory motion and minimize the need for patient cooperation. However, the correlation between the measured navigator position and the actual position of the heart may be adversely affected by hysteretic effects, navigator position, and temporal delays between the navigators and the image acquisition. In addition, irregular breathing patterns during navigator-gated scanning may result in low scan efficiency and prolonged scan time. The purpose of this study was to develop and implement a self-navigated, free-breathing, whole-heart 3D coronary MRI technique that would overcome these shortcomings and improve the ease-of-use of coronary MRI. A signal synchronous with respiration was extracted directly from the echoes acquired for imaging, and the motion information was used for retrospective, rigid-body, through-plane motion correction. The images obtained from the self-navigated reconstruction were compared with the results from conventional, prospective, pencil-beam navigator tracking. Image quality was improved in phantom studies using self-navigation, while equivalent results were obtained with both techniques in preliminary in vivo studies.
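The abstract does not spell out the correction step, so the following is only a sketch of one standard approach: a rigid through-plane translation can be applied retrospectively via the Fourier shift theorem, since a shift in image space is a linear phase ramp in k-space. The respiratory signal driving the shift is assumed to come from the repeatedly sampled k-space centre, as is common in self-navigated radial imaging; all names are illustrative.

```python
import numpy as np

def shift_through_plane(volume, dz_mm, voxel_mm):
    """Apply a rigid through-plane (z) shift to a 3D volume using the
    Fourier shift theorem: f(z - dz) <-> F(kz) * exp(-2j*pi*kz*dz)."""
    nz = volume.shape[2]
    kz = np.fft.fftfreq(nz)                          # cycles per sample along z
    phase = np.exp(-2j * np.pi * kz * dz_mm / voxel_mm)
    spec = np.fft.fft(volume, axis=2)
    return np.real(np.fft.ifft(spec * phase, axis=2))
```

In a self-navigated reconstruction, a shift like this would be applied per respiratory bin before combining the data, replacing the prospective pencil-beam navigator.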
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, highlight links between documents produced by the same modus operandi or the same source, and thus support forensic intelligence efforts. Inspired by previous research work on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by the comparison of profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can be easily operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a first fast triage method that may help target more resource-intensive profiling methods (based on a visual, physical or chemical examination of documents, for instance). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
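The comparison step is concrete enough to sketch: extract a one-dimensional profile from a filtered region of interest (here, the column-wise mean of the Hue channel) and score document pairs with the Canberra distance, d(x, y) = sum_i |x_i - y_i| / (|x_i| + |y_i|). Everything below other than the filter and metric choice is an illustrative assumption, not the paper's prototype.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv
from scipy.spatial.distance import canberra

def hue_profile(roi_rgb):
    """Column-wise mean-hue profile of a region of interest.

    roi_rgb -- float RGB array in [0, 1], shape (rows, cols, 3).
    """
    hue = rgb_to_hsv(roi_rgb)[..., 0]
    return hue.mean(axis=0)

# Toy regions of interest standing in for two scanned documents.
rng = np.random.default_rng(0)
roi_a = rng.random((50, 200, 3))
roi_b = rng.random((50, 200, 3))

# Small Canberra distances suggest documents from a common source.
d = canberra(hue_profile(roi_a), hue_profile(roi_b))
print(d)
```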
Abstract:
Confocal and two-photon microscopy have become essential tools in biological research, and today many investigations are not possible without their help. The valuable advantage that these two techniques offer is the ability of optical sectioning. Optical sectioning makes it possible to obtain 3D visualization of the structures and, hence, valuable information about the structural relationships and the geometrical and morphological aspects of the specimen. The achievable lateral and axial resolutions of confocal and two-photon microscopy, as in other optical imaging systems, are both defined by the diffraction limit. Any aberration and imperfection present during imaging results in broadening of the calculated theoretical resolution, blurring, and geometrical distortions in the acquired images that interfere with the analysis of the structures, and lowers the fluorescence collected from the specimen. The aberrations may have different causes, and they can be classified by their sources: specimen-induced aberrations, optics-induced aberrations, illumination aberrations, and misalignment aberrations. This thesis presents an investigation and study of image enhancement. The goal of this thesis was approached in two different directions. Initially, we investigated the sources of the imperfections. We propose methods to eliminate or minimize aberrations introduced during image acquisition by optimizing the acquisition conditions. The impact on resolution of using a coverslip whose thickness is mismatched with the one the objective lens is designed for was shown, and a novel technique was introduced to define the proper value on the correction collar of the lens. The amount of spherical aberration with regard to the numerical aperture of the objective lens was investigated, and it was shown that, depending on the purpose of our imaging tasks, different numerical apertures must be used. The deformed beam cross-section of the single-photon excitation source was corrected, and the resulting enhancement of resolution and image quality was shown. Furthermore, the dependency of the scattered light on the excitation wavelength was shown empirically. In the second part, we continued the study of the image enhancement process with deconvolution techniques. Although deconvolution algorithms are widely used to improve the quality of images, how well a deconvolution algorithm performs depends strongly on the point spread function (PSF) of the imaging system supplied to the algorithm and on its level of accuracy. We investigated approaches for obtaining a more precise PSF. Novel methods to improve the pattern of the PSF and reduce the noise are proposed. Furthermore, multiple sources for extracting the PSFs of the imaging system are introduced, and the empirical deconvolution results obtained with each of these PSFs are compared. The results confirm that a greater improvement is attained by applying the in situ PSF during the deconvolution process.
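Since the thesis's deconvolution results hinge on the PSF supplied to the algorithm, a brief sketch of the classic Richardson-Lucy scheme shows where a measured (e.g., in situ) PSF enters. This is the generic algorithm, not the thesis's specific implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Richardson-Lucy deconvolution of a 2D image with a measured PSF.

    Multiplicative update: est <- est * (K_flip * (obs / (K * est))),
    where * denotes convolution and K_flip is the point-reflected PSF.
    """
    psf = psf / psf.sum()              # normalize so flux is preserved
    psf_flip = psf[::-1, ::-1]         # adjoint (correlation) kernel
    est = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode='same')
        ratio = observed / np.maximum(blurred, eps)
        est *= fftconvolve(ratio, psf_flip, mode='same')
    return est
```

Swapping a theoretical PSF for one measured in situ changes only the `psf` argument, which is precisely why the accuracy comparison in the thesis can be run PSF-by-PSF.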
Abstract:
Body image refers to an individual's internal representation of his/her outer self (Cash, 1994; Thompson, Heinberg, Altabe, & Tantleff-Dunn, 1999). It is a multidimensional construct which includes an individual's attitudes towards his/her own physical characteristics (Bane & McAuley, 1998; Cash, 1994; Cash, 2004; Davison & McCabe, 2005; Muth & Cash, 1997; Sabiston, Crocker, & Munroe-Chandler, 2005). Social comparison is the process of thinking about the self in relation to others in order to determine if one's opinions and abilities are adequate and to assess one's social status (Festinger, 1954; Wood, 1996). Research investigating the role of social comparisons on body image has provided some information on the types and nature of the comparisons that are made. The act of making social comparisons may have a negative impact on body image (van den Berg et al., 2007). Although exercise may improve body image, the impact of social comparisons in exercise settings may be less positive, and there may be differences in the social comparison tendencies between non- or infrequent exercisers and exercisers. The present study examined the nature of social comparisons that female college-aged non- or infrequent exercisers and exercisers made with respect to their bodies, and the relationship of these social comparisons to body image attitudes. Specifically, the frequency and direction of comparisons on specific targets and body dimensions were examined in both non- or infrequent exercisers and exercisers. Finally, the relationship between body-image attitudes and the frequency and direction with which body-related social comparisons were made was examined for non- or infrequent exercisers and exercisers. One hundred and fifty-two participants completed the study (n = 70 non- or infrequent exercisers; n = 82 exercisers). Participants completed measures of social physique anxiety (SPA), body dissatisfaction, body esteem, body image cognitions, leisure time physical activity, and social comparisons. Results suggested that both groups (non- or infrequent exercisers and exercisers) generally made social comparisons, most frequently with same-sex friends and least frequently with same-sex parents. Also, both groups made more appearance-related comparisons than non-appearance-related comparisons. Further, both groups made more negative comparisons with almost all targets. However, non- or infrequent exercisers generally made more negative comparisons on all body dimensions, while exercisers made negative comparisons only on weight and body shape dimensions. MANOVAs were conducted to examine whether any differences in social comparisons existed between the two groups. Results of the MANOVAs indicated that the frequency of comparisons with targets, the frequency of comparisons on body dimensions, and the direction of comparisons with targets did not differ based on exercise status. However, the direction of comparison on specific body dimensions revealed a significant difference based on exercise status (F(7, 144) = 3.26, p < .05; η² = .132). Follow-up ANOVAs showed significant differences on five variables: physical attractiveness (F(1, 150) = 6.33, p < .05; η² = .041); fitness (F(1, 150) = 11.89, p < .05; η² = .073); co-ordination (F(1, 150) = 5.61, p < .05; η² = .036); strength (F(1, 150) = 12.83, p < .05; η² = .079); and muscle mass or tone (F(1, 150) = 17.34, p < .05; η² = .104), with exercisers making more positive comparisons than non- or infrequent exercisers.
The results from the regression analyses for non- or infrequent exercisers showed that appearance orientation was a significant predictor of the frequency of social comparisons (B = .429, SE B = .154, β = .312, p < .01). Also, trait body image measures accounted for significant variance in the direction of social comparisons (F(9, 57) = 13.43, p < .001, adjusted R² = .68). Specifically, SPA (B = -.583, SE B = .186, β = -.446, p < .01) and body esteem-weight concerns (B = .522, SE B = .207, β = .432, p < .01) were significant predictors of the direction of comparisons. For exercisers, regressions revealed that specific trait measures of body image significantly predicted the frequency of comparisons (F(9, 71) = 8.67, p < .001, adjusted R² = .463). Specifically, SPA (B = .508, SE B = .147, β = .497, p < .01) and appearance orientation (B = .457, SE B = .134, β = .335, p < .01) were significant predictors of the frequency of social comparisons. Lastly, for exercisers, the results for the regression of body image measures on the direction of social comparisons were also significant (F(9, 70) = 14.65, p < .001, adjusted R² = .609), with body dissatisfaction (B = .368, SE B = .143, β = .362, p < .05), appearance orientation (B = .256, SE B = .123, β = .175, p < .05), and fitness orientation (B = .423, SE B = .194, β = .266, p < .05) being significant predictors of the direction of social comparison. The results indicated that young women made frequent social comparisons regardless of exercise status. However, exercisers made more positive comparisons on all the body dimensions than non- or infrequent exercisers. Also, certain trait body image measures may be good predictors of one's body comparison tendencies. However, the measures which predict comparison tendencies may be different for non- or infrequent exercisers and exercisers. Future research should examine the effects of social comparisons in different populations (i.e., males, the obese, older adults, etc.). Implications for practice and research were discussed.
Abstract:
Atherosclerosis is a disease that, through the accumulation of lipid plaques, causes hardening of the arterial wall and narrowing of the lumen. These lesions are generally located in the coronary, carotid, aortic, renal, digestive and peripheral arterial segments. Among peripheral sites, involvement of the lower limbs is particularly frequent. Indeed, the severity of these arterial lesions is usually assessed by the degree of stenosis (a >50% reduction of the lumen diameter) on angiography, magnetic resonance imaging (MRI), computed tomography or ultrasound. To plan a surgical intervention, however, a 3D representation of the arterial geometry is preferable. Cross-sectional imaging methods (MRI and computed tomography) are very effective at generating good-quality three-dimensional images, but their use is expensive and invasive for patients. 3D ultrasound could be a very promising imaging avenue for localizing and quantifying stenoses. This imaging modality offers distinct advantages such as convenience, low cost for a non-invasive diagnosis (no irradiation and no nephrotoxic contrast agent), and the option of Doppler analysis to quantify blood flow. Since medical robots have already been used successfully in surgery and orthopedics, our team designed a new robotic 3D ultrasound system to detect and quantify lower-limb stenoses. With this new technology, a radiologist manually teaches the robot an ultrasound scan of the vessel of interest. The robot then repeats the learned trajectory with very high precision, simultaneously controls the ultrasound image acquisition process at a constant sampling step, and safely limits the force applied by the probe on the patient's skin. Consequently, reconstructing a 3D arterial geometry of the lower limbs from this system could enable highly reliable localization and quantification of stenoses. The objective of this research project was therefore to validate and optimize this robotic 3D ultrasound imaging system. The reliability of a 3D geometry reconstructed from a robotic reference frame depends strongly on the positioning accuracy and on the calibration procedure. Accordingly, the positioning accuracy of the robotic arm was evaluated throughout its workspace with a phantom specially designed to simulate the configuration of the lower-limb arteries (article 1 - chapter 3). In addition, a Z-shaped crossed-wire phantom was designed to ensure an accurate calibration of the robotic system (article 2 - chapter 4). These optimized methods were used to validate the system for clinical application and to find the transformation that converts the coordinates of the 2D ultrasound image into the Cartesian frame of the robotic arm. From these results, any object scanned by the robotic system can be characterized for an adequate 3D reconstruction. Vascular phantoms compatible with several imaging modalities were used to simulate different arterial representations of the lower limbs (article 2 - chapter 4, article 3 - chapter 5).
The reconstructed geometries were validated through comparative analyses. The accuracy of this robotic 3D ultrasound imaging system in localizing and quantifying stenoses was also determined. These evaluations were performed in vivo to assess the potential of using such a system clinically (article 3 - chapter 5).
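The transformation mentioned above, from 2D ultrasound pixel coordinates to the robot's Cartesian frame, is conventionally written as a chain of homogeneous transforms. The sketch below assumes a 4x4 image-to-probe calibration matrix (the quantity estimated with the Z-wire phantom) and a probe pose reported by the arm; all names and arguments are illustrative, not taken from the thesis.

```python
import numpy as np

def pixel_to_world(u, v, sx, sy, T_probe_image, T_robot_probe):
    """Map a 2D ultrasound pixel (u, v) into the robot's Cartesian frame.

    sx, sy        -- pixel spacing in mm (scan-converter settings)
    T_probe_image -- 4x4 calibration matrix (image -> probe frame),
                     estimated with the Z-wire phantom
    T_robot_probe -- 4x4 probe pose reported by the robotic arm
    """
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])  # image plane is z = 0
    return (T_robot_probe @ T_probe_image @ p_image)[:3]
```

Stacking the mapped pixels of every 2D frame along the learned trajectory is what yields the 3D arterial geometry; any error in the calibration matrix propagates directly into the reconstruction, which is why the phantom validation matters.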
Abstract:
We present a set of techniques that can be used to represent and detect shapes in images. Our methods revolve around a particular shape representation based on the description of objects using triangulated polygons. This representation is similar to the medial axis transform and has important properties from a computational perspective. The first problem we consider is the detection of non-rigid objects in images using deformable models. We present an efficient algorithm to solve this problem in a wide range of situations, and show examples in both natural and medical images. We also consider the problem of learning an accurate non-rigid shape model for a class of objects from examples. We show how to learn good models while constraining them to the form required by the detection algorithm. Finally, we consider the problem of low-level image segmentation and grouping. We describe a stochastic grammar that generates arbitrary triangulated polygons while capturing Gestalt principles of shape regularity. This grammar is used as a prior model over random shapes in a low level algorithm that detects objects in images.
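To make the underlying representation concrete, the sketch below triangulates a simple polygon by ear clipping. This is a generic textbook procedure under the stated assumptions (simple, counter-clockwise polygon), not the authors' algorithm; the medial-axis-like structure they exploit comes from the dual graph of such a triangulation.

```python
def _cross(o, a, b):
    # z-component of (a - o) x (b - o); positive for a left turn
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def _in_triangle(p, a, b, c):
    # p inside or on triangle abc (abc counter-clockwise)
    return (_cross(a, b, p) >= 0 and _cross(b, c, p) >= 0
            and _cross(c, a, p) >= 0)

def triangulate(poly):
    """Ear-clipping triangulation of a simple, counter-clockwise polygon.

    poly -- list of (x, y) vertices; returns a list of index triples.
    """
    idx = list(range(len(poly)))
    tris = []
    while len(idx) > 3:
        for k in range(len(idx)):
            i, j, l = idx[k-1], idx[k], idx[(k+1) % len(idx)]
            a, b, c = poly[i], poly[j], poly[l]
            if _cross(a, b, c) <= 0:          # reflex corner: not an ear
                continue
            others = (poly[m] for m in idx if m not in (i, j, l))
            if any(_in_triangle(p, a, b, c) for p in others):
                continue                       # another vertex blocks the ear
            tris.append((i, j, l))
            idx.pop(k)                         # clip the ear and restart
            break
    tris.append(tuple(idx))
    return tris

print(triangulate([(0, 0), (2, 0), (2, 1), (1, 0.5), (0, 1)]))
```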
Abstract:
A set of NIH Image macro programs was developed to make qualitative and quantitative analyses from digital stereo pictures produced by scanning electron microscopes. These tools were designed for image alignment, anaglyph representation, animation, reconstruction of true elevation surfaces, reconstruction of elevation profiles, true-scale elevation mapping and, for the quantitative approach, surface area and roughness calculations. Limitations regarding processing time, scanning techniques and programming concepts are also discussed.
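Two of the listed operations are easy to make concrete. A red-cyan anaglyph places the left view in the red channel and the right view in green and blue, and for a symmetric-tilt SEM stereo pair the elevation is commonly recovered from parallax as z = p / (2 sin(theta/2)), with theta the total tilt difference. The numpy sketch below assumes aligned, equal-sized frames; the function names are illustrative, not the macro names.

```python
import numpy as np

def make_anaglyph(left, right):
    """Red-cyan anaglyph from an aligned gray-scale stereo pair:
    left view -> red channel, right view -> green and blue channels."""
    return np.dstack([left, right, right])

def elevation_from_parallax(parallax_px, pixel_size, tilt_deg):
    """Height from measured parallax for a symmetric-tilt stereo pair
    with total tilt angle theta: z = p / (2 * sin(theta / 2))."""
    return parallax_px * pixel_size / (2.0 * np.sin(np.radians(tilt_deg) / 2.0))
```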
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Background Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific, and they do not provide a clear approach for shaping a new command line tool from a prototype shell script. Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code. Conclusion In this article, we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
Abstract:
Behavioral and neurophysiological studies suggest that skill learning can be mediated by discrete, experience-driven changes within specific neural representations subserving the performance of the trained task. We have shown that a few minutes of daily practice on a sequential finger opposition task induced large, incremental performance gains over a few weeks of training. These gains did not generalize to the contralateral hand nor to a matched sequence of identical component movements, suggesting that a lateralized representation of the learned sequence of movements evolved through practice. This interpretation was supported by functional MRI data showing that a more extensive representation of the trained sequence emerged in primary motor cortex after 3 weeks of training. The imaging data, however, also indicated important changes occurring in primary motor cortex during the initial scanning sessions, which we proposed may reflect the setting up of a task-specific motor processing routine. Here we provide behavioral and functional MRI data on experience-dependent changes induced by a limited amount of repetitions within the first imaging session. We show that this limited training experience can be sufficient to trigger performance gains that require time to become evident. We propose that skilled motor performance is acquired in several stages: “fast” learning, an initial, within-session improvement phase, followed by a period of consolidation of several hours duration, and then “slow” learning, consisting of delayed, incremental gains in performance emerging after continued practice. This time course may reflect basic mechanisms of neuronal plasticity in the adult brain that subserve the acquisition and retention of many different skills.
Abstract:
This paper presents the digital imaging results of a collaborative research project working toward the generation of an on-line interactive digital image database of signs from ancient cuneiform tablets. An important aim of this project is the application of forensic analysis to the cuneiform symbols to identify scribal hands. Cuneiform tablets are amongst the earliest records of written communication, and could be considered one of the original information technologies; an accessible, portable and robust medium for communication across distance and time. The earliest examples are up to 5,000 years old, and the writing technique remained in use for some 3,000 years. Unfortunately, only a small fraction of these tablets can be made available for display in museums, and much important academic work has yet to be performed on the very large numbers of tablets to which there is necessarily restricted access. Our paper will describe the challenges encountered in the 2D image capture of a sample set of tablets held in the British Museum, explaining the motivation for attempting 3D imaging and the results of initial experiments scanning the smaller, more densely inscribed cuneiform tablets. We will also discuss the tractability of 3D digital capture, representation and manipulation, and investigate the requirements for scalable data compression and transmission methods. Additional information can be found on the project website: www.cuneiform.net
Abstract:
Today several different unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied for clustering pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named Enhanced Independent Component Analysis Mixture Model (EICAMM), which was built by proposing some modifications to the Independent Component Analysis Mixture Model (ICAMM). These improvements were proposed by considering some of the model's limitations as well as by analyzing how it should be changed in order to become more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segmenting images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results obtained by the proposals presented herein.
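The shape of the pre-processing pipeline (denoising followed by Sobel edge detection) is easy to sketch. Note that the paper's Sparse Code Shrinkage denoiser is swapped here for a plain Gaussian filter to keep the example short, so this illustrates the pipeline's structure, not SCS itself.

```python
import numpy as np
from scipy import ndimage

def preprocess(image, sigma=1.0):
    """Denoise-then-edge pre-processing for segmentation.

    A Gaussian filter stands in for the paper's SCS denoiser; the Sobel
    step matches the paper. Returns the edge-magnitude image that would
    be fed to the clustering model.
    """
    smooth = ndimage.gaussian_filter(image.astype(float), sigma)
    gx = ndimage.sobel(smooth, axis=1)   # horizontal gradient
    gy = ndimage.sobel(smooth, axis=0)   # vertical gradient
    return np.hypot(gx, gy)              # gradient magnitude
```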
Abstract:
Background Schizophrenia has been associated with semantic memory impairment, and previous studies report a difficulty in accessing semantic category exemplars (Moelter et al. 2005 Schizophr Res 78:209–217). The anterior temporal cortex (ATC) has been implicated in the representation of semantic knowledge (Rogers et al. 2004 Psychol Rev 111(1):205–235). We conducted a high-field (4T) fMRI study with the Category Judgment and Substitution Task (CJAST), an analogue of the Hayling test. We hypothesised that differential activation of the temporal lobe would be observed in schizophrenia patients versus controls. Methods Eight schizophrenia patients (7M : 1F) and eight matched controls performed the CJAST, involving a randomised series of 55 common nouns (from five semantic categories) across three conditions: semantic categorisation, anomalous categorisation and word reading. High-resolution 3D T1-weighted images and GE EPI with BOLD contrast and sparse temporal sampling were acquired on a 4T Bruker MedSpec system. Image processing and analyses were performed with SPM2. Results Differential activation in the left ATC was found for anomalous categorisation relative to category judgment in patients versus controls. Conclusions We examined semantic memory deficits in schizophrenia using a novel fMRI task. Since the ATC corresponds to an area involved in accessing abstract semantic representations (Moelter et al. 2005), these results suggest that schizophrenia patients utilise the same neural network as healthy controls; however, the network is compromised in the patients, and the different ATC activity might be attributable to a weakening of category-to-category associations.
Abstract:
Introduction: Recently developed portable dental X-ray units increase the mobility of forensic odontologists and allow more efficient X-ray work in a disaster field, especially when used in combination with digital sensors. This type of machine might also have potential for application in remote areas, military and humanitarian missions, dental care of patients with mobility limitations, as well as imaging in operating rooms. Objective: To evaluate the radiographic image quality acquired by three portable X-ray devices in combination with four image receptors and to evaluate their medical physics parameters. Materials and methods: Images of five samples consisting of four teeth and one formalin-fixed mandible were acquired by one conventional wall-mounted X-ray unit, MinRay® 60/70 kVp, used as a clinical standard, and three portable dental X-ray devices: AnyRay® 60 kVp, Nomad® 60 kVp and Rextar® 70 kVp, in combination with a phosphor image plate (PSP), a CCD, or a CMOS sensor. Three observers evaluated the images for standard image quality as well as forensic diagnostic quality on a 4-point rating scale. Furthermore, all machines underwent tests for occupational as well as patient dosimetry. Results: Statistical analysis showed good imaging quality for all systems, with the combination of Nomad® and PSP yielding the best score. A significant difference in image quality between the combinations of the four X-ray devices and four sensors was established (p < 0.05). For patient safety, the exposure rate was determined; exit dose rates for MinRay® at 60 kVp, MinRay® at 70 kVp, AnyRay®, Nomad® and Rextar® were 3.4 mGy/s, 4.5 mGy/s, 13.5 mGy/s, 3.8 mGy/s and 2.6 mGy/s, respectively. The kVp of the AnyRay® system was the most stable, with a ripple of 3.7%. Short-term variations in the tube output of all the devices were less than 10%. AnyRay® presented a higher estimated effective dose than the other machines. Occupational dosimetry showed that doses at the operator's hand were lowest with protective shielding (Nomad®: 0.1 µGy) and were also low when using remote control (distance > 1 m: Rextar® < 0.2 µGy, MinRay® < 0.1 µGy). Conclusions: The present study demonstrated the feasibility of using three portable X-ray systems for specific indications, based on acceptable image quality and sufficient accuracy of the machines, following the standard guidelines for radiation hygiene.