897 results for Method of Theoretical Images
Abstract:
A method for characterizing the microroughness of samples in optical coating technology is developed. Measurements over different spatial-frequency ranges are composed into a single power spectral density (PSD) covering a large bandwidth. Characteristic parameters are then extracted by fitting the PSD to a suitable combination of theoretical models. The method allows us to combine microroughness measurements performed with different techniques, and the fitting procedure can be adapted to any behavior of a combined PSD. The method has been applied to a set of ion-beam-sputtered fluoride vacuum-UV coatings with an increasing number of alternating low- and high-index layers. Conclusions about roughness development and microstructural growth are drawn.
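The composition-and-fit idea above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual procedure: it merges PSD segments measured over different spatial-frequency ranges into one master PSD and fits a single inverse-power-law (fractal) component by linear regression in log-log space, whereas the paper fits a combination of theoretical models.

```python
import math

def combine_psds(segments):
    """Merge PSD measurements from different instruments/frequency
    ranges into one master PSD, sorted by spatial frequency.
    Each segment is a list of (frequency, psd_value) pairs."""
    return sorted(p for seg in segments for p in seg)

def fit_fractal_psd(psd):
    """Least-squares fit of PSD(f) = A * f**(-n) by linear regression
    in log-log space; returns (A, n)."""
    xs = [math.log(f) for f, s in psd]
    ys = [math.log(s) for f, s in psd]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -slope
```

In practice each instrument's segment would come with its own calibration; the merge step here simply concatenates and sorts, keeping overlapping bands.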
Abstract:
Real-world images are complex objects, difficult to describe but at the same time possessing a high degree of redundancy. A very recent study [1] on the statistical properties of natural images reveals that natural images can be viewed through different partitions which are essentially fractal in nature. One particular fractal component, related to the most singular (sharpest) transitions in the image, seems to be highly informative about the whole scene. In this paper we show how to decompose the image into its fractal components. We will see that the most singular component is related to (but not coincident with) the edges of the objects present in the scene. We propose a new, simple method to reconstruct the image from the information contained in that most informative component. We will see that the quality of the reconstruction depends strongly on the capability to extract the relevant edges when determining the most singular set. We discuss the results from the perspective of coding, proposing this method as a starting point for future developments.
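The reconstruction from a most singular component can be illustrated with a linear propagation kernel acting on the image gradient restricted to a pixel set. The sketch below is a toy under stated assumptions (the standard Fourier kernel form k·v̂/|k|² is assumed; the paper's exact method may differ): with the full gradient the operation is exactly invertible on an odd-sized grid, and zeroing the gradient off the most singular set yields an approximate reconstruction of the kind discussed above.

```python
import numpy as np

def reconstruct_from_msm(image, mask):
    """Reconstruct an image from its gradient restricted to a pixel set
    (e.g. the most singular manifold), via a linear propagation kernel
    in Fourier space: I_hat(k) = (k . v_hat(k)) / (2j*pi*|k|^2)."""
    ny, nx = image.shape
    ky, kx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx),
                         indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                        # avoid 0/0; DC restored below
    I_hat = np.fft.fft2(image)
    # spectral gradient of the image (exact on the discrete grid)
    gx = np.real(np.fft.ifft2(2j * np.pi * kx * I_hat))
    gy = np.real(np.fft.ifft2(2j * np.pi * ky * I_hat))
    # keep the gradient only on the chosen (most singular) set
    gx, gy = gx * mask, gy * mask
    # propagate the restricted gradient field back to an image
    num = kx * np.fft.fft2(gx) + ky * np.fft.fft2(gy)
    rec_hat = num / (2j * np.pi * k2)
    rec_hat[0, 0] = I_hat[0, 0]           # restore the mean grey level
    return np.real(np.fft.ifft2(rec_hat))
```

With `mask` equal to an edge map (ones on the most singular set, zeros elsewhere) the output is the approximate reconstruction; with a mask of all ones the original image is recovered exactly.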
Abstract:
This paper presents the segmentation of the bilateral parotid glands in Head and Neck (H&N) CT images using active-contour-based atlas registration. We compare segmentation results from three atlas selection strategies: (i) selection of the "single most similar" atlas for each image to be segmented, (ii) fusion of segmentation results from multiple atlases using STAPLE, and (iii) fusion of segmentation results using majority voting. Among these three approaches, fusion using majority voting provided the best results. Finally, we present a detailed evaluation of the majority voting strategy on a dataset of eight images (provided as part of the H&N auto-segmentation challenge conducted in conjunction with the MICCAI-2010 conference).
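Of the three fusion strategies, majority voting is the simplest to state: a voxel is labeled foreground when more than half of the propagated atlas segmentations agree. A minimal sketch over binary label maps stored as flat lists (an illustrative representation, not the authors' code):

```python
def majority_vote(label_maps):
    """Fuse binary segmentations propagated from several atlases.
    Each map is a flat list of 0/1 voxel labels; a voxel is foreground
    when a strict majority of the maps mark it as foreground."""
    n = len(label_maps)
    fused = []
    for voxel_labels in zip(*label_maps):
        fused.append(1 if sum(voxel_labels) * 2 > n else 0)
    return fused
```

STAPLE, by contrast, weights each atlas by an estimated sensitivity/specificity rather than counting votes equally.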
Abstract:
Naive scale invariance is not a true property of natural images. Natural monochrome images possess a much richer geometrical structure, which is particularly well described in terms of multiscaling relations. This means that the pixels of a given image can be decomposed into sets, the fractal components of the image, with well-defined scaling exponents [Turiel and Parga, Neural Comput. 12, 763 (2000)]. Here it is shown that hyperspectral representations of natural scenes also exhibit multiscaling properties, observing the same kind of behavior. A precise measure of the informational relevance of the fractal components is also given, and it is shown that there are important differences between the intrinsically redundant red-green-blue system and the decorrelated one defined in Ruderman, Cronin, and Chiao [J. Opt. Soc. Am. A 15, 2036 (1998)].
Abstract:
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that will be identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as the depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). The curvature is however not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with high spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds as compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development.
In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as a part of the FreeSurfer Software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then further used for the creation of an outer surface, which serves as a basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm, as described in our validation study(1). This process is iterated repeatedly with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), where the folding index at each point is computed as the ratio of the cortical area contained in a sphere divided by the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
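In this formulation the lGI of one region of interest is simply a ratio of surface areas; computed over many overlapping ROIs it yields a cortical map. A trivial sketch (the area values are assumed to come from the FreeSurfer surfaces, which are not reproduced here):

```python
def local_gyrification_index(cortical_area, outer_area):
    """lGI for one circular region of interest: ratio of the full
    (buried + visible) cortical surface area to the area of the
    matched ROI on the outer hull surface."""
    if outer_area <= 0:
        raise ValueError("outer surface area must be positive")
    return cortical_area / outer_area

def lgi_map(cortical_areas, outer_areas):
    """Map of lGI values over many overlapping ROIs: each value is the
    amount of cortex per unit of visible outer surface in that ROI."""
    return [c / o for c, o in zip(cortical_areas, outer_areas)]
```

An lGI of 3.0, for instance, means three times more cortical surface is folded into the ROI than is visible on the outer hull.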
Abstract:
Background Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to localize the caudate structure suitably. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentations. Method We present CaudateCut: a new fully-automatic method of segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy-function data and boundary potentials. In particular, we exploit information concerning intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure. Results We apply the novel CaudateCut method to the segmentation of the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as to a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion CaudateCut generates segmentation results that are comparable to gold-standard segmentations and that are reliable for analysing the neuroanatomical abnormalities that differentiate healthy controls from children with ADHD.
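CaudateCut's objective has the generic Graph Cut form: unary data potentials plus pairwise boundary potentials paid wherever neighboring pixels receive different labels. The sketch below evaluates only that generic form; the paper's specific intensity, geometry, contextual and multi-scale edgeness terms are not reproduced, and minimization (max-flow/min-cut) is left to a dedicated solver.

```python
def graphcut_energy(labels, data_cost, neighbors, boundary_cost, lam=1.0):
    """Energy of a labeling in the generic Graph Cut form:
    E(L) = sum_p D_p(l_p) + lam * sum_{(p,q)} B_pq * [l_p != l_q].
    data_cost[p][l] is the unary cost of giving pixel p label l;
    boundary_cost[(p, q)] is paid only across label discontinuities."""
    energy = sum(data_cost[p][l] for p, l in enumerate(labels))
    energy += lam * sum(boundary_cost[(p, q)]
                        for (p, q) in neighbors
                        if labels[p] != labels[q])
    return energy
```

Graph Cut segmentation then searches for the labeling of lowest energy; low-contrast structures like the caudate need carefully designed D_p and B_pq, which is exactly what the paper contributes.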
Abstract:
The goal of this work is to develop a method to objectively compare the performance of a digital and a screen-film mammography system in terms of image quality. The method takes into account the dynamic range of the image detector, the detection of high and low contrast structures, the visualisation of the images and the observer response. A test object, designed to represent a compressed breast, was constructed from various tissue equivalent materials ranging from purely adipose to purely glandular composition. Different areas within the test object permitted the evaluation of low and high contrast detection, spatial resolution and image noise. All the images (digital and conventional) were captured using a CCD camera to include the visualisation process in the image quality assessment. A mathematical model observer (non-prewhitening matched filter), which calculates the detectability of high and low contrast structures using spatial resolution, noise and contrast, was used to compare the two technologies. Our results show that for a given patient dose, the detection of high and low contrast structures is significantly better for the digital system than for the conventional screen-film system studied. The method of using a test object with a large tissue composition range combined with a camera to compare conventional and digital imaging modalities can be applied to other radiological imaging techniques. In particular, it could be used to optimise the process of radiographic reading of soft-copy images.
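The non-prewhitening matched-filter observer combines the three measured ingredients named above (contrast, spatial resolution via the MTF, and noise via the noise power spectrum). A standard textbook form of its detectability index is the following (the exact variant used in the paper may differ):

```latex
{d'}^{2}_{\mathrm{NPW}} \;=\;
\frac{\left[\displaystyle\iint \lvert \Delta S(u,v)\rvert^{2}\,
      \mathrm{MTF}^{2}(u,v)\, du\, dv\right]^{2}}
     {\displaystyle\iint \lvert \Delta S(u,v)\rvert^{2}\,
      \mathrm{MTF}^{2}(u,v)\,\mathrm{NPS}(u,v)\, du\, dv}
```

Here \(\Delta S(u,v)\) is the frequency spectrum of the signal to be detected (carrying the contrast), so that better resolution or lower noise at the signal's frequencies directly raises \(d'\).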
Abstract:
Although molecular typing of Pseudomonas aeruginosa is important to understand the local epidemiology of this opportunistic pathogen, it remains challenging. Our aim was to develop a simple typing method based on the sequencing of two highly variable loci. Single-strand sequencing of three highly variable loci (ms172, ms217, and oprD) was performed on a collection of 282 isolates recovered between 1994 and 2007 (from patients and the environment). As expected, the resolution of each locus alone [number of types (NT) = 35-64; index of discrimination (ID) = 0.816-0.964] was lower than that of a combination of two loci (NT = 78-97; ID = 0.966-0.971). As each pairwise combination of loci gave similar results, we selected the most robust combination, ms172 [reverse; R] and ms217 [R], to constitute the double-locus sequence typing (DLST) scheme for P. aeruginosa. This combination gave: (i) a complete genotype for 276/282 isolates (typability of 98%), (ii) 86 different types, and (iii) an ID of 0.968. Analysis of multiple isolates from the same patients or taps showed that DLST genotypes are generally stable over a period of several months. The high typability, discriminatory power, and ease of use of the proposed DLST scheme make it a method of choice for local epidemiological analyses of P. aeruginosa. Moreover, the possibility of giving an unambiguous definition of types allowed the development of an Internet database ( http://www.dlst.org ) accessible to all.
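The index of discrimination (ID) quoted above is Simpson's index of diversity applied to typing: the probability that two isolates drawn at random (without replacement) belong to different types. A minimal sketch:

```python
from collections import Counter

def discrimination_index(type_assignments):
    """Simpson's index of discrimination for a typing scheme:
    ID = 1 - sum_i n_i*(n_i - 1) / (N*(N - 1)),
    where n_i is the number of isolates of type i and N the total."""
    counts = Counter(type_assignments).values()
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))
```

An ID of 0.968, as reported for the ms172[R]/ms217[R] combination, means two random isolates are distinguished 96.8% of the time.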
Abstract:
Objectives To consider the various specific substance-taking activities in sport, an examination of three psychological models of doping behaviour utilised by researchers is presented, in order to evaluate their real and potential impact and to improve the relevance and efficiency of anti-doping campaigns. Design Adopting the notion of a "research program" (Lakatos, 1978) from the philosophy of science, a range of studies into the psychology of doping behaviour are classified and critically analysed. Method Theoretical and practical parameters of three research programs are critically evaluated: (i) cognitive; (ii) drive; and (iii) situated-dynamic. Results The analysis reveals the diversity of the theoretical commitments of the research programs and their practical consequences. The «cognitive program» assumes that athletes are accountable for acts that reflect the endeavour to attain sporting and non-sporting goals. Attitudes, knowledge and rational decisions are understood to be the basis of doping behaviour. The «drive program» characterises the variety of traces and consequences on psychological and somatic states arising from the athlete's experience with sport. Doping behaviour here is conceived of as a solution to reduce unconscious psychological and somatic distress. The «situated-dynamic program» considers a broader context of athletes' doping activity and its evolution during a sport career. Doping is considered as emergent and self-organized behaviour, grounded on temporally critical couplings between athletes' actions and situations and the specific dynamics of their development during the sporting life course.
Conclusions These hypothetical, theoretical and methodological considerations offer a more nuanced understanding of doping behaviours, making an effective contribution to anti-doping education and research by enabling researchers and policy personnel to become more critically reflective about their explicit and implicit assumptions regarding models of explanations for doping behaviour.
Abstract:
Background/Purpose: Since the end of 2009, an ultrasound score called SONAR has been implemented for RA patients as a routine tool in the SCQM registry (Swiss Clinical Quality Management registry for rheumatic diseases). A cross-sectional evaluation of patients with active disease and clinical remission according to the DAS28ESR and the novel ACR/EULAR remission criteria from 2010 clearly indicated good correlational external validity of synovial pathologies with clinical disease activity in RA (2012 EULAR meeting). Objective: The objective of this study was to evaluate the sensitivity to change of B-mode and Power-Doppler scores in a longitudinal perspective, along with the changes in DAS28ESR between two consecutive visits, among the patients included in the SCQM registry. Methods: All patients who had at least two SONAR scores and simultaneous DAS28ESR evaluations between December 2009 and June 2012 were included in this study. The data came from 20 different operators, working mostly in hospitals but also in private practices, who had received prior training over 3 days in a reference center. The SONAR score includes a semi-quantitative B-mode and Power-Doppler evaluation of 22 joints, each scored from 0 to 3, for a maximum of 66 points per score. The selection of these 22 joints was done in analogy to a 28-joint count and further restricted to joint regions with published standard ultrasound images. Both elbows and wrist joints were dynamically scanned from the dorsal side, and the knee joints from a longitudinal suprapatellar view in flexion and in joint extension. The bilateral evaluation of the second to fifth metacarpophalangeal and proximal interphalangeal joints was done from a palmar view in full extension, and the Power-Doppler scoring from a dorsal view with the hand and fingers positioned in best relaxation. Results: Of the 657 RA patients with at least one score performed, 128 RA patients with 2 or more consultations with DAS28ESR and a complete SONAR data set could be included. 
The mean (SD) time between the two evaluations was 9.6 months (54). The mean (SD) DAS28ESR was 3.5 (1.3) at the first visit and was significantly lower (mean 3.0, SD 2.0, p < 0.0001) at the second visit. The mean (SD) total B-mode score was 12 (9.5) at baseline and 9.6 (7.6) at follow-up (p = 0.0004). The Power-Doppler score was 2.9 (5.7) at entry and 1.9 (3.6) at the second visit (p < 0.0001). The Pearson r correlation between change in DAS28ESR and change in the B-mode score was 0.44 (95% CI: 0.29, 0.57, p < 0.0001), and 0.35 (95% CI: 0.16, 0.50, p = 0.0002) for the Power-Doppler score. A clinically relevant change in DAS (≥1.1) was associated with a change of total B-mode score ≥3 in 23/32 patients and a change of Doppler score ≥0.5 in 19/26. Conclusion: This study confirms that the SONAR score is sensitive to change and provides a complementary method of assessing RA disease activity to the DAS that could be very useful in daily practice.
Abstract:
A discussion is presented of daytime sky imaging and of techniques that may be applied to the analysis of full-color sky images to infer cloud macrophysical properties. Two different types of sky-imaging systems developed by the authors are described, one of which has been developed into a commercially available instrument. Retrievals of fractional sky cover from automated processing methods are compared to human retrievals, both from direct observations and from visual analyses of sky images. Although some uncertainty exists in fractional sky cover retrievals from sky images, for the commercially available sky imager this uncertainty is no greater than that attached to human observations. Thus, the application of automatic digital image processing techniques to sky images is a useful method to complement, or even replace, traditional human observations of sky cover and, potentially, cloud type. Additionally, the possibility of inferring other cloud parameters, such as cloud brokenness and solar obstruction, further enhances the usefulness of sky imagers.
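A common ingredient of automated fractional-sky-cover retrieval from full-color images (assumed here for illustration; the abstract does not detail the authors' processing chain) is thresholding each pixel's red/blue ratio, since clear sky scatters blue light much more strongly than clouds do:

```python
def fractional_sky_cover(pixels, threshold=0.6):
    """Classify each (r, g, b) pixel as cloud when its red/blue ratio
    exceeds a threshold (clear sky is strongly blue, clouds are grey
    to white), then return the cloudy fraction of the image.
    The default threshold of 0.6 is an illustrative value only."""
    cloudy = sum(1 for r, _, b in pixels if b > 0 and r / b > threshold)
    return cloudy / len(pixels)
```

Real systems refine this with per-pixel thresholds near the sun and horizon, which is one source of the retrieval uncertainty discussed above.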
Abstract:
Résumé: The subject of this thesis can be summarized by the famous paradox of evolutionary biology concerning the maintenance of polymorphism in the face of selection, and by the equation for the change in gametic frequencies over time due to selection. The frequency of a gamete xi at generation (t + 1) is: [equation truncated in source]. This equation is used to generate the data employed throughout this work for 2, 3 and 4 diallelic loci. The potential of heterozygote advantage to maintain polymorphism is the subject of the first part. Since the common definition of heterozygote advantage applies only to a single locus with 2 alleles, this advantage is redefined for a multilocus system on the basis of previous studies. Using 5 different definitions of heterozygote advantage, I show that this advantage cannot be a general mechanism for the maintenance of polymorphism under selection. The study of the influence of undetected loci on evolutionary processes, the second part of this thesis, is motivated by molecular work aiming to discover the number of loci coding for a trait. Most of these studies underestimate the number of loci. I show that undetected loci increase the probability of observing polymorphism under selection. Moreover, conclusions about the factors maintaining polymorphism can be misleading if not all loci are detected. In the third part, I examine the expected additive variance after a bottleneck for selected traits. A previous study shows that the level of additive variance after a bottleneck increases with the number of loci. I show that the level of additive variance after a bottleneck does increase (compared with neutral traits), but independently of the number of loci. 
By contrast, the recombination rate has a strong influence, among other things by regenerating the gametes lost in the bottleneck. The last part of this thesis describes a package for the statistical software R. This package iterates the equation above while varying the selection, recombination and population-size parameters for 2, 3 and 4 diallelic loci. This thesis shows that using a multilocus system yields results that do not conform to those from single-locus systems (the standard reference in population genetics). The package therefore opens interesting perspectives in population genetics. Abstract: The subject of this PhD thesis can be summarized by one famous paradox of evolutionary biology: the maintenance of polymorphism in the face of selection, and one classical equation of theoretical population genetics: the changes in gametic frequencies due to selection and recombination. The frequency of gamete xi at generation (t + 1) is given by: [equation truncated in source]. This equation is used to generate data on selection at two, three, and four diallelic loci for the different parts of this work. The first part focuses on the potential of heterozygote advantage to maintain genetic polymorphism. Results of previous studies are used to (re)define heterozygote advantage for multilocus systems, since the classical definition is for one diallelic locus. Using five different definitions of heterozygote advantage, I show that heterozygote advantage is not a general mechanism for the maintenance of polymorphism. The study of the influence of undetected loci on evolutionary processes (the second part of this work) is motivated by molecular work that aims at discovering the loci coding for a trait. In most of these works, some coding loci remain undetected. I show that undetected loci increase the probability of maintaining polymorphism under selection. 
In addition, conclusions about the factors that maintain polymorphism can be misleading if not all loci are considered. It is therefore only when all loci are detected that exact conclusions about the level of maintained polymorphism, or about the factor(s) maintaining it, can be drawn. In the third part, the focus is on the expected release of additive genetic variance after a bottleneck for selected traits. A previous study shows that the expected release of additive variance increases with the number of loci. I show that the expected release of additive variance after a bottleneck increases for selected traits (compared with neutral ones), but that this increase is a function of the recombination rate, not of the number of loci. Finally, the last part of this PhD thesis describes a package for the statistical software R that implements the equation given above. It allows data to be generated for different scenarios of selection, recombination, and population size. This package opens perspectives for theoretical population genetics, which mainly focuses on one locus, while this work shows that increasing the number of loci does not necessarily lead to straightforward results.
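At a single diallelic locus with no recombination, the gametic recursion iterated by the R package reduces to the textbook viability-selection equation p' = p(p·w_AA + q·w_Aa)/w̄. A sketch of that special case (the thesis' package itself is written in R and handles 2-4 loci with recombination):

```python
def next_allele_freq(p, w_AA, w_Aa, w_aa):
    """One generation of viability selection at one diallelic locus:
    p' = p * (p*w_AA + q*w_Aa) / w_bar, with q = 1 - p and w_bar the
    population mean fitness.  A single-locus special case of the
    multilocus gametic recursion described in the abstract."""
    q = 1.0 - p
    w_bar = p * p * w_AA + 2.0 * p * q * w_Aa + q * q * w_aa
    return p * (p * w_AA + q * w_Aa) / w_bar
```

Iterating this with heterozygote advantage (w_Aa largest) drives the frequency to the interior equilibrium p* = (w_Aa - w_aa) / ((w_Aa - w_AA) + (w_Aa - w_aa)), the classic single-locus case whose multilocus generalization the first part of the thesis examines.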
Abstract:
Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitened model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. These calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d', as a function of detector air kerma; a linear relationship was found between d' and contrast-to-noise ratio. The values of threshold thickness used to specify acceptable performance in the European Guidelines for 0.10 and 0.25 mm diameter discs were equivalent to threshold calculated detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in the image quality specification, quality control and optimization of digital x-ray systems for screening mammography.
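On a discrete frequency grid the detectability index reduces to sums over the measured MTF and normalized NPS samples. The sketch below is a simplified illustration on a 1-D grid with a frequency-independent contrast, and it omits the eye filter (the full NPWE observer would multiply the MTF by the eye's visual transfer function):

```python
def detectability_index(contrast, mtf, nnps, df):
    """Simplified NPW-style detectability index on a 1-D frequency grid:
    d'^2 = [sum C^2 MTF^2 df]^2 / [sum C^2 MTF^2 NNPS df].
    mtf and nnps are samples at frequency steps of width df."""
    num = sum(contrast ** 2 * m ** 2 for m in mtf) * df
    den = sum(contrast ** 2 * m ** 2 * w for m, w in zip(mtf, nnps)) * df
    return (num ** 2 / den) ** 0.5
```

With such a d', the relationships reported above (d' vs threshold gold thickness, d' vs contrast-to-noise ratio) can be evaluated numerically as the detector air kerma is varied.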
Abstract:
In this paper we introduce a highly efficient reversible data hiding system. It is based on dividing the image into tiles and shifting the histogram of each image tile between its minimum and maximum frequency. Data are then inserted at the pixel level with the largest frequency, to maximize data hiding capacity. The scheme exploits the special properties of medical images, where the histograms of their non-overlapping image tiles mostly peak around some gray values while the rest of the spectrum is mainly empty. The zeros (or minima) and peaks (maxima) of the histograms of the image tiles are then relocated to embed the data, so the grey values of some pixels are modified. High capacity, high fidelity, reversibility and multiple data insertions are the key requirements of data hiding in medical images. We show how histograms of image tiles of medical images can be exploited to achieve these requirements. Compared with a data hiding method applied to the whole image, our scheme can achieve a 30%-200% capacity improvement with better image quality, depending on the medical image content. Additional advantages of the proposed method include hiding data in the regions of non-interest and better exploitation of spatial masking.
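The tile-level embedding can be sketched with the classic histogram-shifting steps: shift the bins between the peak and an empty (zero) bin by one, then encode one payload bit at each pixel holding the peak value. The toy below assumes the zero bin lies above the peak and is truly empty; it demonstrates reversibility for one tile, not the paper's full multi-tile scheme:

```python
def hs_embed(pixels, bits, peak, zero):
    """Histogram-shifting embedding in one image tile.  Assumes
    zero > peak and that the zero bin is empty: values strictly between
    peak and zero shift up by 1 (freeing bin peak+1), then each pixel
    equal to the peak encodes one payload bit (0: stay, 1: move up)."""
    assert zero > peak
    payload = iter(bits)
    out = []
    for v in pixels:
        if peak < v < zero:
            out.append(v + 1)              # histogram shift
        elif v == peak:
            out.append(v + next(payload))  # embed one bit
        else:
            out.append(v)
    return out

def hs_extract(pixels, peak, zero):
    """Recover both the payload and the original tile (reversibility)."""
    bits, orig = [], []
    for v in pixels:
        if v == peak:
            bits.append(0)
            orig.append(peak)
        elif v == peak + 1:
            bits.append(1)
            orig.append(peak)
        elif peak + 1 < v <= zero:
            orig.append(v - 1)             # undo the shift
        else:
            orig.append(v)
    return bits, orig
```

The capacity of a tile equals the count of its peak-valued pixels, which is why tiling medical images (whose tile histograms are sharply peaked) raises capacity compared with one global histogram.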
Abstract:
Thanks to the continuous progress made in recent years, medical imaging has become an important tool in the diagnosis of various pathologies. In particular, magnetic resonance imaging (MRI) makes it possible to obtain images with remarkably high resolution without the use of ionizing radiation, and it is consequently widely applied for a broad range of conditions in all parts of the body. Contrast agents are used in MRI to improve tissue discrimination. Different categories of contrast agents are clinically available, the most widely used being gadolinium chelates. One can distinguish between extracellular gadolinium chelates, such as Gd-DTPA, and hepatobiliary gadolinium chelates, such as Gd-BOPTA. The latter are able to enter hepatocytes, from where they are partially excreted into the bile to an extent that depends on the contrast agent and the animal species. Owing to this property, hepatobiliary contrast agents are particularly interesting for MRI of the liver. Indeed, a change in signal intensity can result from a change in transport functions, signaling the presence of impaired hepatocytes, e.g. in the case of focal (such as cancer) or diffuse (such as cirrhosis) liver diseases. Although the excretion mechanism into the bile is well known, the uptake mechanisms of hepatobiliary contrast agents into hepatocytes are still not completely understood, and several hypotheses have been proposed. As a good knowledge of these transport mechanisms is required for an efficient MRI diagnosis of the functional state of the liver, more fundamental research is needed, and an efficient MRI-compatible in vitro model would be an asset. So far, most data concerning these transport mechanisms have been obtained by MRI with in vivo models, or by a method of detection other than MRI with cellular or sub-cellular models. 
At present, however, no in vitro model is available for the study and quantification of contrast agents by MRI, notably because high cellular densities are needed to allow detection and because no metallic devices can be used inside the magnet room, which is incompatible with most tissue or cell cultures that require controlled temperature and oxygenation. The aim of this thesis is thus to develop an MRI-compatible in vitro cellular model to study the transport of hepatobiliary contrast agents, in particular Gd-BOPTA, into hepatocytes directly by MRI. A better understanding of this transport, and especially of its modification in case of hepatic disorder, could subsequently permit this knowledge to be extrapolated to humans and the kinetics of hepatobiliary contrast agents to be used as a tool for the diagnosis of hepatic diseases.