190 results for Iterative Closest Point (ICP) Algorithm


Relevance: 20.00%

Publisher:

Abstract:

POCT (point-of-care tests) have great potential in ambulatory infectious disease medicine, thanks to their speed of execution and their impact on antibiotic administration and on the diagnosis of certain communicable diseases. Some tests have been used for several years (detection of Streptococcus pyogenes in pharyngitis, anti-HIV antibodies, S. pneumoniae urinary antigen, Plasmodium falciparum antigen). New indications concern respiratory infections, childhood diarrhoea (rotavirus, enterohaemorrhagic E. coli) and sexually transmitted infections. POCT based on nucleic acid detection have just been introduced (group B streptococcus in pregnant women before delivery, and detection of carriage of methicillin-resistant Staphylococcus aureus). POCT have great potential in the diagnosis of ambulatory infectious diseases, owing to their impact on antibiotic administration and on the prevention of communicable diseases. Some have been in use for a long time (S. pyogenes antigen, HIV antibodies) or a short time (S. pneumoniae antigen, P. falciparum). The major additional indications will be community-acquired lower respiratory tract infections, infectious diarrhoea in children (rotavirus, enterotoxigenic E. coli) and, hopefully, sexually transmitted infections. Easy to use, these tests based on antigen-antibody reactions allow a rapid diagnosis in less than one hour; the new generation of POCT relying on nucleic acid detection has just been introduced into practice (detection of GBS in pregnant women, carriage of MRSA) and will be extended to many pathogens.

Relevance: 20.00%

Publisher:

Abstract:

Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that remain identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). Curvature, however, is not straightforward to comprehend, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface area.

To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with high spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate differences in cortical folding across species. In our implementation, which we named the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development.

In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then used to create an outer surface, which serves as the basis for the lGI calculation. A circular region of interest is delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm, as described in our validation study(1). This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1).

Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), where the folding index at each point is computed as the ratio of the cortical area contained in a sphere divided by the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
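The full lGI pipeline ships with FreeSurfer; purely for illustration, the core ratio reduces to a few lines. The sketch below assumes the two triangle meshes and the matched ROI face masks are already available (all names are hypothetical; the ROI-matching step itself is not reproduced here):

```python
import numpy as np

def triangle_areas(vertices, faces):
    # vertices: (N, 3) float array; faces: (M, 3) array of vertex indices.
    a = vertices[faces[:, 0]]
    b = vertices[faces[:, 1]]
    c = vertices[faces[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

def local_gi(outer_v, outer_f, pial_v, pial_f, roi_outer, roi_pial):
    # roi_outer / roi_pial: boolean masks over the faces of the outer hull
    # and of the pial (cortical) surface, selecting the circular ROI and
    # its matched cortical patch.
    hull_area = triangle_areas(outer_v, outer_f)[roi_outer].sum()
    cortex_area = triangle_areas(pial_v, pial_f)[roi_pial].sum()
    # Buried + visible cortex relative to the visible hull: values well
    # above 1 indicate strongly folded cortex within the ROI.
    return cortex_area / hull_area
```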

Relevance: 20.00%

Publisher:

Abstract:

Abstract: This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems. Résumé: This research concerns the development and application of so-called unsupervised learning methods, targeting the analysis of forensic case data and the classification of hyperspectral remote sensing images. First, an unsupervised classification methodology based on the symbolic optimization of an inter-sample distance measure is proposed. This measure is obtained by optimizing a cost function related to the preservation of a point's neighborhood structure between the space of the initial variables and the space of principal components. The method is applied to the analysis of forensic case data and compared with a range of existing methods. Second, a method based on the joint optimization of feature selection and classification is implemented in a neural network and applied to various databases, including two hyperspectral images. The neural network is trained with a stochastic gradient algorithm, which makes the technique applicable to very high resolution images. The results show that this technique can classify very large databases without difficulty and yields results that compare favorably with existing methods.
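The thesis's joint feature-extraction-and-clustering network is not reproduced in the abstract. As a minimal stand-in for the general idea of clustering fitted by stochastic gradient descent (all names and parameters below are assumed, and the actual model there is a neural network, not k-means), online k-means makes the scaling and out-of-sample points concrete:

```python
import numpy as np

def sgd_kmeans(data, k=10, lr=0.05, epochs=5, seed=0):
    # Online (stochastic-gradient) k-means: each sample pulls its nearest
    # center a small step toward it, so a pass over the data touches one
    # sample at a time -- this is what lets the approach scale to huge sets.
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)].astype(float)
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            x = data[i]
            j = np.argmin(np.linalg.norm(centers - x, axis=1))
            centers[j] += lr * (x - centers[j])  # gradient step on the winner
    return centers

def assign(data, centers):
    # Out-of-sample assignment is a plain forward pass: new samples get
    # labels without re-running the clustering.
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```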

Relevance: 20.00%

Publisher:

Abstract:

Children under 10 years of age passively smoking 14 cigarettes! From April 2010 to April 2011, the exposure of 148 children (81 boys and 67 girls) was tested: 10 children under one year of age, 25 aged 1 to 5, 19 aged 5 to 10, 30 aged 10 to 15 and 64 aged 15 to 18. Ten of them are smokers, and the youngest, aged 14, smokes 10 cigarettes a day. Their parents, or sometimes the young people themselves, voluntarily ordered a free MoNIC badge via the websites of CIPRET Valais, Vaud and Geneva. The results concerning these children's exposure are striking and deserve attention. Across all children, the mean nicotine concentration in their indoor environment, measured with the MoNIC devices, was 0.5 mg/m3, with maxima of up to 21 mg/m3. For the group of children under 10 (26 boys and 28 girls, all non-smokers), the nicotine concentration is not negligible (mean 0.069 mg/m3, min 0, max 0.583 mg/m3). Converting this result into the equivalent number of passively inhaled cigarettes yields figures ranging from 0 to 14 cigarettes per day*, with a mean of 1.6 cig/day. Even more surprising, children under one year of age (4 boys and 6 girls) passively inhale, within the family setting, an average of 1 cigarette (min 0, max 2.2). For the two other groups, 10-15 and 15-18 years, the maximum values approach 22 cigarettes. Note, however, that unlike for the younger children, this result is influenced by the fact that some of these adolescents are themselves active smokers. * When the exposure duration exceeded one day (8 hours), the number of hours was always divided by 8. The resulting figure gives the equivalent number of cigarettes passively smoked over eight hours. It is therefore an average, meaning that over this period the children may have been exposed irregularly to values above or below it. [Authors]
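Read literally, the footnote's normalization amounts to the following (our paraphrase of the authors' description, not a formula taken from the study):

```latex
\text{cigarette equivalents per 8 h}
  = \frac{\text{cigarette equivalents over the whole exposure}}
         {\text{exposure duration in hours} \,/\, 8}
```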

Relevance: 20.00%

Publisher:

Abstract:

RÉSUMÉ This thesis deals with the development of algorithmic methods for automatically discovering the morphological structure of the words of a corpus. It considers in particular languages approaching the introflectional type, such as Arabic or Hebrew. The linguistic tradition describes the morphology of these languages in terms of discontinuous units: consonantal roots and vocalic patterns. This kind of structure is a challenge for current machine learning systems, which generally operate with continuous units. The strategy adopted here treats the problem as a sequence of two subproblems. The first is phonological: it consists in dividing the symbols (phonemes, letters) of the corpus into two groups corresponding as closely as possible to the phonetic consonants and vowels. The second is morphological in nature and builds on the results of the first: it consists in establishing the inventory of roots and patterns of the corpus and determining their rules of combination. The scope and limits of an approach based on two hypotheses are examined: (i) the distinction between consonants and vowels can be inferred from their tendency to alternate in the speech chain; (ii) roots and patterns can be identified with the sequences of consonants and vowels discovered previously. The proposed algorithm uses a purely distributional method to partition the symbols of the corpus. It then applies analogical principles to identify a set of strong candidates for root or pattern status, and to enlarge this set progressively. This extension is subject to an evaluation procedure based on the minimum description length principle, in the spirit of LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA and evaluated on a corpus of Arabic nouns with respect to its ability to describe the plural system. This study shows that complex linguistic structures can be discovered while making only a minimum of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and seeks to determine when and why this cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for explaining the strengths and weaknesses of such an approach. ABSTRACT This dissertation is concerned with the development of algorithmic methods for the unsupervised learning of natural language morphology, using a symbolically transcribed wordlist. It focuses on the case of languages approaching the introflectional type, such as Arabic or Hebrew. The morphology of such languages is traditionally described in terms of discontinuous units: consonantal roots and vocalic patterns. Inferring this kind of structure is a challenging task for current unsupervised learning systems, which generally operate with continuous units. In this study, the problem of learning root-and-pattern morphology is divided into a phonological and a morphological subproblem.
The phonological component of the analysis seeks to partition the symbols of a corpus (phonemes, letters) into two subsets that correspond well with the phonetic definition of consonants and vowels; building around this result, the morphological component attempts to establish the list of roots and patterns in the corpus, and to infer the rules that govern their combinations. We assess the extent to which this can be done on the basis of two hypotheses: (i) the distinction between consonants and vowels can be learned by observing their tendency to alternate in speech; (ii) roots and patterns can be identified as sequences of the previously discovered consonants and vowels respectively. The proposed algorithm uses a purely distributional method for partitioning symbols. Then it applies analogical principles to identify a preliminary set of reliable roots and patterns, and gradually enlarges it. This extension process is guided by an evaluation procedure based on the minimum description length principle, in line with the approach to morphological learning embodied in LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA; it is evaluated with regard to its ability to account for the system of plural formation in a corpus of Arabic nouns. This thesis shows that complex linguistic structures can be discovered without recourse to a rich set of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and attempts to determine where and why such cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for understanding the advantages and weaknesses of this approach.
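The abstract does not spell out the distributional method, but hypothesis (i) is in the spirit of Sukhotin's classic consonant-vowel discrimination algorithm. Here is a minimal sketch of that algorithm for illustration; the thesis's actual procedure may differ:

```python
from collections import defaultdict

def sukhotin(words):
    # Count how often each pair of distinct symbols are adjacent in a word.
    adj = defaultdict(lambda: defaultdict(int))
    for w in words:
        for a, b in zip(w, w[1:]):
            if a != b:
                adj[a][b] += 1
                adj[b][a] += 1
    symbols = set(adj)
    # Row sums of the adjacency matrix; all symbols start as consonants.
    sums = {s: sum(adj[s].values()) for s in symbols}
    vowels = set()
    while True:
        rest = symbols - vowels
        if not rest:
            break
        c = max(rest, key=lambda s: sums[s])
        if sums[c] <= 0:
            break
        # The symbol that most often neighbors the others is declared a
        # vowel; its contribution is discounted from the remaining sums.
        vowels.add(c)
        for s in rest - {c}:
            sums[s] -= 2 * adj[s][c]
    return vowels  # the remaining symbols are the consonants

# e.g. sukhotin(["kataba", "kutiba", "kitaab"]) returns {'a', 'i', 'u'}
```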

Relevance: 20.00%

Publisher:

Abstract:

A major issue in the application of waveform inversion methods to crosshole georadar data is the accurate estimation of the source wavelet. Here, we explore the viability and robustness of incorporating this step into a time-domain waveform inversion procedure through an iterative deconvolution approach. Our results indicate that, at least in non-dispersive electrical environments, such an approach provides remarkably accurate and robust estimates of the source wavelet even in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity. Our results also indicate that the proposed source wavelet estimation approach is relatively insensitive to ambient noise and to the phase characteristics of the starting wavelet. Finally, there appears to be little-to-no trade-off between the wavelet estimation and the tomographic imaging procedures.
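The abstract does not give the estimator's exact form. A common frequency-domain formulation of such a deconvolution step is water-level-regularized least squares; the sketch below follows that generic recipe (all names, and the stabilization scheme, are assumptions, not the paper's stated algorithm):

```python
import numpy as np

def estimate_wavelet(observed, simulated, water_level=1e-3):
    # observed:  (n_traces, n_samples) recorded crosshole traces
    # simulated: synthetics for the current model computed with a delta
    #            (impulse) source, so the spectral division below yields
    #            the source-wavelet spectrum directly
    O = np.fft.rfft(observed, axis=-1)
    S = np.fft.rfft(simulated, axis=-1)
    # Least-squares spectral division, stacked over all traces:
    num = np.sum(np.conj(S) * O, axis=0)
    den = np.sum(np.abs(S) ** 2, axis=0)
    den = np.maximum(den, water_level * den.max())  # water-level stabilization
    return np.fft.irfft(num / den, n=observed.shape[-1])

# Iteration: wavelet -> updated synthetics -> new wavelet estimate,
# interleaved with the tomographic model updates until both stabilize.
```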

Relevance: 20.00%

Publisher:

Abstract:

Infections related to vascular access devices are one of the main causes of nosocomial infections. They include colonization of the device by microorganisms, insertion-site infections, and the bacteremias and fungemias attributed to them. On average, a bacteremia complicates 3 to 5 of every 100 venous lines, or 2 to 14 episodes per 1000 catheter-days. This proportion is only the visible part of the iceberg, since most episodes of clinical sepsis without an apparent associated infectious focus are currently considered secondary to vascular access devices. Therapeutic principles are presented after a brief review of the pathophysiology of these infections. Several preventive approaches are then discussed, including recent data on the use of catheters impregnated with disinfectants or antibiotics.

Relevance: 20.00%

Publisher:

Abstract:

The epithelial Na+ channel (ENaC) belongs to a new class of channel proteins called the ENaC/DEG superfamily involved in epithelial Na+ transport, mechanotransduction, and neurotransmission. The role of ENaC in Na+ homeostasis and in the control of blood pressure has recently been demonstrated by the identification of mutations in ENaC beta and gamma subunits causing hypertension. The function of ENaC in Na+ reabsorption depends critically on its ability to discriminate between Na+ and other ions such as K+ or Ca2+. ENaC is virtually impermeant to K+ ions, and the molecular basis for its high ionic selectivity is largely unknown. We have identified a conserved Ser residue in the second transmembrane domain of the ENaC alpha subunit (alphaS589), which when mutated allows larger ions such as K+, Rb+, Cs+, and divalent cations to pass through the channel. The relative ion permeability of each of the alphaS589 mutants is inversely related to the ionic radius of the permeant ion, indicating that alphaS589 mutations increase the molecular cutoff of the channel by modifying the pore geometry at the selectivity filter. Proper geometry of the pore is required to tightly accommodate Na+ and Li+ ions and to exclude larger cations. We provide evidence that ENaC discriminates between cations mainly on the basis of their size and their energy of dehydration.

Relevance: 20.00%

Publisher:

Abstract:

Developmental constraints have been postulated to limit the space of feasible phenotypes and thus shape animal evolution. These constraints have been suggested to be strongest during either early or mid-embryogenesis, corresponding to the early conservation model or the hourglass model, respectively. Conflicting results have been reported, but recent studies of animal transcriptomes have favored the hourglass model. Studies usually report descriptive statistics calculated for all genes over all developmental time points. This introduces dependencies between the sets of compared genes and may lead to biased results. Here we overcome this problem using an alternative modular analysis. We used the Iterative Signature Algorithm to identify distinct modules of genes co-expressed specifically in consecutive stages of zebrafish development. We then performed a detailed comparison of several gene properties between modules, allowing for a less biased and more powerful analysis. Notably, our analysis corroborated the hourglass pattern at the regulatory level: the sequences of regulatory regions were most conserved for genes expressed in mid-development. In contrast to some previous studies, however, we found no such pattern at the level of gene sequence, age, or expression. The early conservation model was supported by gene duplication and gene birth, which were rarest for genes expressed in early development. Finally, for all gene properties, we observed the least conservation for genes expressed in late development or adulthood, consistent with both models. Overall, the modular approach showed that different levels of molecular evolution follow different patterns of developmental constraints: both models are valid, but with respect to different genomic features.
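For readers unfamiliar with it, the Iterative Signature Algorithm alternates between scoring conditions from a gene set and genes from a condition set, thresholding each time, until a fixed point is reached. The sketch below follows the published description of ISA in simplified form (single standardized matrix, thresholds and seeding assumed), not this study's exact settings:

```python
import numpy as np

def _z(v):
    # Standardize a vector; return zeros if it is constant.
    s = v.std()
    return (v - v.mean()) / s if s > 0 else np.zeros_like(v)

def isa_module(E, t_gene=2.0, t_cond=1.0, n_iter=100, seed=0):
    # E: expression matrix (genes x conditions), assumed standardized.
    rng = np.random.default_rng(seed)
    g = (rng.random(E.shape[0]) < 0.05).astype(float)  # random seed gene set
    for _ in range(n_iter):
        c = _z(E.T @ g)                       # score conditions by gene set
        c = np.where(np.abs(c) > t_cond, c, 0.0)
        g_new = _z(E @ c)                     # score genes by those conditions
        g_new = np.where(g_new > t_gene, g_new, 0.0)
        if np.allclose(g_new, g):             # fixed point = one module
            break
        g = g_new
    return g != 0, c != 0                     # gene mask, condition mask
```

Running this from many random seeds and keeping the distinct fixed points yields the kind of stage-specific modules the study compares.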

Relevance: 20.00%

Publisher:

Abstract:

3D dose reconstruction is a verification of the delivered absorbed dose. Our aim was to describe and evaluate a 3D dose reconstruction method applied to phantoms in the context of narrow beams. A solid-water phantom and a phantom containing a bone-equivalent material were irradiated on a 6 MV linac. The transmitted dose was measured using one array of a 2D ion-chamber detector. The dose reconstruction was obtained by an iterative algorithm. A phantom set-up error and interfraction organ motion were simulated to test the algorithm's sensitivity. In all configurations, convergence was obtained within three iterations. The reconstructed dose agreed locally with the planned dose to within 3%/3 mm, except at a few points in the penumbra. The reconstructed primary fluences were consistent with the planned ones, which validates the whole reconstruction process. These results validate our method in a simple geometry and for narrow beams. The method is sensitive to a set-up error of a heterogeneous phantom and to interfraction motion of a heterogeneous organ.
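The abstract leaves the iterative algorithm unspecified. One generic scheme consistent with "convergence within three iterations" is a multiplicative fixed-point correction of the primary fluence; the sketch below uses assumed names and is not the paper's actual algorithm:

```python
import numpy as np

def reconstruct_fluence(measured, forward, fluence0, n_iter=10, tol=0.01):
    # measured: transmitted dose sampled by the 2D ion-chamber array
    # forward:  function mapping a primary-fluence map to the transmitted
    #           dose it would produce (the dose engine for this geometry)
    # fluence0: starting estimate, e.g. the planned primary fluence
    fluence = fluence0.astype(float).copy()
    for _ in range(n_iter):
        predicted = forward(fluence)
        ratio = measured / np.maximum(predicted, 1e-12)
        fluence *= ratio                  # multiplicative correction
        if np.max(np.abs(ratio - 1.0)) < tol:
            break
    return fluence  # fed back into the dose engine to obtain the 3D dose
```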