881 results for Automated segmentation
Abstract:
In the subject of fingerprints, the rise of computer tools has made it possible to create powerful automated search algorithms. These algorithms make it possible, inter alia, to compare a fingermark to a fingerprint database and therefore to establish a link between the mark and a known source. With the growth of the capacities of these systems and of data storage, as well as increasing collaboration between police services at the international level, the size of these databases increases. The current challenge for the field of fingerprint identification consists of the growth of these databases, which makes it possible to find impressions that are very similar but come from distinct fingers. At the same time, however, these data and these systems allow a description of the variability between different impressions from the same finger and between impressions from different fingers. This statistical description of the within- and between-finger variabilities, computed on the basis of minutiae and their relative positions, can then be used in a statistical approach to interpretation. The computation of a likelihood ratio, employing simultaneously the comparison between the mark and the print of the case, the within-variability of the suspect's finger, and the between-variability of the mark with respect to a database, can then be based on representative data. Thus, these data allow an evaluation which may be more detailed than that obtained by the application of rules established long before the advent of these large databases or by the specialist's experience. The goal of the present thesis is to evaluate likelihood ratios computed from the scores of an automated fingerprint identification system when the source of the tested and compared marks is known. These ratios must support the hypothesis that is known to be true. Moreover, they should support this hypothesis more and more strongly as information is added in the form of additional minutiae. For the modeling of within- and between-variability, the necessary data were defined and acquired for one finger of a first donor and two fingers of a second donor. The database used for between-variability includes approximately 600,000 inked prints. The minimal number of observations necessary for a robust estimation was determined for the two distributions used. Factors which influence these distributions were also analyzed: the number of minutiae included in the configuration and the configuration as such for both distributions, as well as the finger number and the general pattern for between-variability, and the orientation of the minutiae for within-variability. In the present study, the only factor for which no influence was shown is the orientation of the minutiae. The results show that likelihood ratios derived from the scores of an AFIS can be used for evaluation. Relatively low rates of likelihood ratios supporting the hypothesis known to be false were obtained. The maximum rate of likelihood ratios supporting the hypothesis that the two impressions were left by the same finger, when the impressions in fact came from different fingers, is 5.2%, for a configuration of 6 minutiae. When a 7th and then an 8th minutia are added, this rate drops to 3.2% and then to 0.8%. In parallel, for these same configurations, the likelihood ratios obtained are on average of the order of 100, 1,000, and 10,000 for 6, 7, and 8 minutiae when the two impressions come from the same finger.
These likelihood ratios can therefore be an important aid for decision making. Both positive developments linked to the addition of minutiae (a drop in the rate of likelihood ratios which could lead to an erroneous decision, and an increase in the value of the likelihood ratio) were observed systematically within the framework of the study. Approximations based on 3 scores for within-variability and on 10 scores for between-variability were found and showed satisfactory results.
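As a worked illustration of the score-based evaluation described above (the general form only; the specific density models are not given in the abstract), the likelihood ratio contrasts the probability of the observed AFIS score s under the two competing propositions, with the numerator estimated from the within-finger score distribution and the denominator from the between-finger score distribution built against the reference database:

LR = \frac{p(s \mid H_{\text{same finger}})}{p(s \mid H_{\text{different fingers}})}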
Abstract:
In the PhD thesis "Sound Texture Modeling" we deal with the statistical modelling of textural sounds such as water, wind, rain, etc., for synthesis and classification. Our initial model is based on a wavelet tree signal decomposition and the modeling of the resulting sequence by means of a parametric probabilistic model that can be situated within the family of models trainable via expectation maximization (the hidden Markov tree model). Our model is able to capture key characteristics of the source textures (water, rain, fire, applause, crowd chatter), and faithfully reproduces some of the sound classes. In terms of a more general taxonomy of natural events proposed by Graver, we worked on models for natural event classification and segmentation. While the event labels comprise physical interactions between materials that do not have textural properties in their entirety, those segmentation models can help in identifying textural portions of an audio recording useful for analysis and resynthesis. Following our work on concatenative synthesis of musical instruments, we have developed a pattern-based synthesis system that allows a database of units to be explored sonically by means of their representation in a perceptual feature space. Concatenative synthesis with "molecules" built from sparse atomic representations also allows capturing low-level correlations in perceptual audio features, while facilitating the manipulation of textural sounds based on their physical and perceptual properties. We have approached the problem of sound texture modelling for synthesis from different directions, namely a low-level signal-theoretic point of view through a wavelet transform, and a more high-level point of view driven by perceptual audio features in the concatenative synthesis setting. The developed framework provides a unified approach to the high-quality resynthesis of natural texture sounds. Our research is embedded within the Metaverse 1 European project (2008-2011), where our models are contributing as low-level building blocks within a semi-automated soundscape generation system.
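A minimal sketch of the wavelet decomposition step on which this kind of texture model rests, assuming the PyWavelets package; the thesis's actual wavelet-tree parameterisation and hidden Markov tree training are not reproduced here.

import numpy as np
import pywt

def wavelet_tree(signal, wavelet="db4", levels=5):
    """Decompose a 1-D texture sound into a multi-level wavelet coefficient tree."""
    return pywt.wavedec(signal, wavelet, level=levels)  # [coarse approximation, details...]

def resynthesize(coeffs, wavelet="db4"):
    """Reconstruct a signal from (possibly modelled or resampled) coefficients."""
    return pywt.waverec(coeffs, wavelet)

# Example: one second of noise standing in for a recorded texture.
x = np.random.randn(44100)
tree = wavelet_tree(x)
y = resynthesize(tree)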
Abstract:
The large spatial inhomogeneity in the transmit B1 field (B1+) observable in human MR images at high static magnetic fields (B0) severely impairs image quality. To overcome this effect in brain T1-weighted images, the MPRAGE sequence was modified to generate two different images at different inversion times (MP2RAGE). By combining the two images in a novel fashion, it was possible to create T1-weighted images in which the resulting image was free of proton density contrast, T2* contrast, reception bias field, and, to first order, transmit field inhomogeneity. MP2RAGE sequence parameters were optimized using Bloch equations to maximize the contrast-to-noise ratio per unit of time between brain tissues and to minimize the effect of B1+ variations through space. Images of high anatomical quality and excellent brain tissue differentiation, suitable for applications such as segmentation and voxel-based morphometry, were obtained at 3 and 7 T. From such T1-weighted images, acquired within 12 min, high-resolution 3D T1 maps were routinely calculated at 7 T with sub-millimeter voxel resolution (0.65-0.85 mm isotropic). T1 maps were validated in phantom experiments. In humans, the T1 values obtained at 7 T were 1.15 ± 0.06 s for white matter (WM) and 1.92 ± 0.16 s for grey matter (GM), in good agreement with literature values obtained at lower spatial resolution. At 3 T, where whole-brain acquisitions with 1 mm isotropic voxels were acquired in 8 min, the T1 values obtained (0.81 ± 0.03 s for WM and 1.35 ± 0.05 s for GM) were once again found to be in very good agreement with values in the literature. (C) 2009 Elsevier Inc. All rights reserved.
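The abstract does not state the combination formula; as a rough numpy sketch of the kind of bias-cancelling combination described, the expression below is the commonly published MP2RAGE "uniform" combination and is an assumption here, not a quotation from this paper.

import numpy as np

def mp2rage_uniform(gre_ti1, gre_ti2, eps=1e-12):
    """Combine two complex GRE volumes acquired at the two inversion times.

    The ratio cancels proton density, T2* and reception-bias weighting to first
    order, leaving a predominantly T1-weighted image with values in [-0.5, 0.5].
    """
    num = np.real(np.conj(gre_ti1) * gre_ti2)
    den = np.abs(gre_ti1) ** 2 + np.abs(gre_ti2) ** 2
    return num / (den + eps)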
Abstract:
Objective: To compare pressure–volume (P–V) curves obtained with the Galileo ventilator with those obtained with the CPAP method in patients with ALI or ARDS receiving mechanical ventilation. P–V curves were fitted to a sigmoidal equation with a mean R² of 0.994 ± 0.003. Lower inflection (LIP), upper inflection (UIP), and deflation maximum-curvature (PMC) points calculated from the fitted parameters showed good correlation between the methods, with high intraclass correlation coefficients. Bias and limits of agreement for LIP, UIP, and PMC obtained with the two methods in the same patient were clinically acceptable.
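The sigmoidal equation itself is not given in the abstract; a minimal curve-fitting sketch, assuming the commonly used four-parameter sigmoid V = a + b / (1 + exp(-(P - c) / d)) and placeholder data:

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(p, a, b, c, d):
    """Volume as a sigmoidal function of airway pressure."""
    return a + b / (1.0 + np.exp(-(p - c) / d))

pressure = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], dtype=float)        # cmH2O
volume = np.array([0.05, 0.10, 0.22, 0.45, 0.70, 0.85, 0.93, 0.97, 0.99])   # L

params, _ = curve_fit(sigmoid, pressure, volume, p0=[0.0, 1.0, 18.0, 4.0])
a, b, c, d = params
# c is the pressure at the curve's true inflection point; characteristic points
# such as LIP and UIP can then be derived from c and d.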
Abstract:
Coagulase-negative staphylococci (CoNS) are an important cause of nosocomial bacteremia, especially in patients with indwelling devices or those undergoing invasive medical procedures. The identification of species and the accurate and rapid detection of methicillin resistance depend directly on the quality of the identification and susceptibility tests used, whether manual or automated. The objective of this study was to evaluate the accuracy of two automated systems, MicroScan and Vitek, in the identification of CoNS species and the determination of susceptibility to methicillin, considering as gold standards the biochemical tests and the characterization of the mecA gene by polymerase chain reaction, respectively. MicroScan presented better results in the identification of CoNS species (accuracy of 96.8% vs 78.8%, respectively); isolates of the following species had no precise identification: Staphylococcus haemolyticus, S. simulans, and S. capitis. Both systems were similar in the characterization of methicillin resistance. The greatest discrepancies in mecA gene detection were observed among species other than S. epidermidis (S. hominis, S. saprophyticus, S. sciuri, S. haemolyticus, S. warneri, S. cohnii), and those with borderline MICs.
Abstract:
In the context of the investigation of the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study presents investigations into the variability of scores from an AFIS system when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios, which allows the evaluation of mark-to-print comparisons. In particular, this model, through its use of AFIS technology, benefits from the possibility of using a large amount of data, as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores of comparisons between impressions from the same source showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR, which we refer to by the generic term of between-finger variability. The issues addressed in this paper in relation to between-finger variability are the required sample size, the influence of the finger number and general pattern, and the influence of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination as defined by the mark. Furthermore, strategies for obtaining between-finger variability when these elements cannot be conclusively determined from the mark (or, for finger number, from its position with respect to other marks) are presented. These results immediately allow case-by-case estimation of between-finger variability in an operational setting.
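As an illustrative sketch, not the study's actual model, of how the between-finger score distribution supplies the denominator of the LR: assuming the AFIS scores are available as plain arrays, a kernel density estimate of each distribution can be evaluated at the observed score.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
within_scores = rng.normal(1200, 150, size=500)      # mark vs. same-finger prints (placeholder)
between_scores = rng.normal(400, 120, size=10000)    # mark vs. non-matching database (placeholder)

numerator_density = gaussian_kde(within_scores)
denominator_density = gaussian_kde(between_scores)

def likelihood_ratio(score):
    """LR for an observed mark-to-print comparison score."""
    return (numerator_density(score) / denominator_density(score))[0]

print(likelihood_ratio(900.0))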
Abstract:
In image segmentation, clustering algorithms are very popular because they are intuitive and some of them are easy to implement. For instance, k-means is one of the most widely used in the literature, and many authors successfully compare their new proposals with the results achieved by k-means. However, it is well known that clustering-based image segmentation has many problems. For instance, the number of regions of the image has to be known a priori, and different initial seed placements (initial clusters) can produce different segmentation results. Most of these algorithms can be slightly improved by considering the coordinates of the image as features in the clustering process (to take spatial region information into account). In this paper we propose a significant improvement of clustering algorithms for image segmentation. The method is qualitatively and quantitatively evaluated over a set of synthetic and real images, and compared with classical clustering approaches. Results demonstrate the validity of this new approach.
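A minimal sketch of the baseline idea mentioned above, adding pixel coordinates to the feature vector before clustering, assuming scikit-learn and a grayscale image; the paper's proposed improvement is not reproduced here.

import numpy as np
from sklearn.cluster import KMeans

def segment(image, n_regions=4, spatial_weight=0.5):
    """Cluster pixels on (intensity, x, y); spatial_weight scales the coordinate features."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    features = np.column_stack([
        image.ravel().astype(float),
        spatial_weight * xs.ravel() / w,
        spatial_weight * ys.ravel() / h,
    ])
    labels = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(h, w)

# Example on a synthetic two-region image.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
segmentation = segment(img, n_regions=2)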
Abstract:
In this paper, an information theoretic framework for image segmentation is presented. This approach is based on the information channel that goes from the image intensity histogram to the regions of the partitioned image. It allows us to define a new family of segmentation methods which maximize the mutual information of the channel. Firstly, a greedy top-down algorithm which partitions an image into homogeneous regions is introduced. Secondly, a histogram quantization algorithm which clusters color bins in a greedy bottom-up way is defined. Finally, the resulting regions of the partitioning algorithm can optionally be merged using the quantized histogram.
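As a small illustrative sketch of the quantity being maximized (the setup below is an assumption, not the paper's algorithm), the mutual information of the channel from intensity bins to regions can be computed from the joint counts of (bin, region) pairs:

import numpy as np

def channel_mutual_information(image, labels, n_bins=32):
    """I(bin; region) for a grayscale image in [0, 1] and an integer region labelling."""
    bins = np.minimum((image * n_bins).astype(int), n_bins - 1)
    n_regions = int(labels.max()) + 1
    joint = np.zeros((n_bins, n_regions))
    np.add.at(joint, (bins.ravel(), labels.ravel()), 1.0)
    joint /= joint.sum()
    p_bin = joint.sum(axis=1, keepdims=True)
    p_reg = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (p_bin @ p_reg)[nz])))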
Abstract:
Objective: Observing variations in the volumes of grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF) is particularly useful in the study of numerous physiopathological processes; quantitative in vivo measurement of these volumes is therefore of considerable interest both in research and in clinical practice. This study presents and validates a method for automatic brain segmentation with measurement of GM and WM volumes on magnetic resonance images. Material and methods: We use an automatic genetic algorithm to segment the brain into GM, WM, and CSF from three-dimensional T1-weighted magnetic resonance images. A morphometric study was conducted on 136 male and female subjects aged 15 to 74 years. The algorithm was then validated using 5 different approaches: 1. Comparison of volume measurements of a cadaver brain obtained by the automatic method and by water displacement according to the Archimedes method. 2. Comparison of surface measurements on two-dimensional images segmented either by manual tracing or by the automatic method. 3. Evaluation of the reliability of the segmentation by repeated acquisitions and segmentations of the same brain. 4. Use of the GM, WM, and CSF volumes in a study of normal ageing in the population. 5. Comparison with existing data from the literature. Results: we observed a difference of 4.17% between the volumes of a cadaver brain measured by the automatic method and by the Archimedes method, mostly due to tissue remaining after dissection. Comparison of the manual surface-counting method with the automatic method showed no significant difference. Repeated repositioning of the same subject showed very good reliability, with a standard deviation of 0.46% for GM, 1.02% for WM, and 3.59% for CSF, i.e. 0.19% for the total intracranial volume (TIV). The morphometric study corroborates the results of existing anatomical and radiological studies. Conclusion: brain segmentation by a genetic algorithm allows a fully automatic, reliable, and fast measurement of cerebral volumes in vivo in normal individuals.
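A minimal sketch of the volume computation that the validation relies on (counting labelled voxels and scaling by voxel size); the label values and voxel dimensions are placeholder assumptions, not those of the study.

import numpy as np

GM, WM, CSF = 1, 2, 3  # assumed integer labels

def tissue_volumes_ml(labels, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Return GM, WM and CSF volumes in millilitres from a 3-D label array."""
    voxel_ml = np.prod(voxel_size_mm) / 1000.0  # mm^3 -> mL
    return {name: int((labels == value).sum()) * voxel_ml
            for name, value in (("GM", GM), ("WM", WM), ("CSF", CSF))}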
Abstract:
Introduction: Home blood pressure monitoring is recommended by several guidelines and has been shown to be feasible in the elderly. Wrist monitors have recently been proposed for home blood pressure measurement, but their accuracy has not previously been evaluated in elderly patients. Method: Forty-eight participants (33 women and 15 men, mean age 81.3 ± 8.0 years) had their blood pressure measured with a wrist device with a position sensor and with an upper-arm device, in random order and in a seated position. Results: Mean blood pressure readings were systematically lower with the wrist device than with the arm device, both for systolic pressure (120.1 ± 2.2 vs. 130.5 ± 2.2 mmHg, P < 0.001, mean ± standard deviation) and for diastolic pressure (66.0 ± 1.3 vs. 69.7 ± 1.3 mmHg, P < 0.001). Moreover, a difference of 10 mmHg or more between the arm and wrist devices was observed in 54.2% and 18.8% of the systolic and diastolic measurements, respectively. Conclusion: Compared with the arm device, the wrist device with position sensor systematically underestimated both systolic and diastolic blood pressure. The magnitude of the difference is clinically significant and calls into question the use of wrist devices for blood pressure monitoring in the elderly. This study indicates the need to validate blood pressure measuring devices in all age groups, including the elderly.
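A minimal sketch of the paired device comparison reported above (mean bias and the share of readings differing by 10 mmHg or more); the arrays are placeholders, not the study's data.

import numpy as np

arm_systolic = np.array([132.0, 128.0, 141.0, 125.0, 130.0])
wrist_systolic = np.array([120.0, 119.0, 128.0, 121.0, 118.0])

diff = arm_systolic - wrist_systolic
bias = diff.mean()
share_ge_10 = np.mean(np.abs(diff) >= 10.0)
print(f"mean bias = {bias:.1f} mmHg, |diff| >= 10 mmHg in {share_ge_10:.0%} of readings")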
Abstract:
Advances in clinical virology for detecting respiratory viruses have focused on nucleic acid amplification techniques, which have become the reference method for the diagnosis of acute respiratory infections of viral aetiology. Improvements of current commercial molecular assays to reduce hands-on time rely on two strategies: stepwise automation (semi-automation) and complete automation of the whole procedure. Contributions to the former strategy have been the use of automated nucleic acid extractors, multiplex PCR, real-time PCR and/or DNA arrays for the detection of amplicons. Commercial fully automated molecular systems are now available for the detection of respiratory viruses. Some of them could become point-of-care methods, replacing antigen tests for the detection of respiratory syncytial virus and influenza A and B viruses. This article describes laboratory methods for the detection of respiratory viruses. A cost-effective and rational diagnostic algorithm is proposed, considering technical aspects of the available assays, the infrastructure possibilities of each laboratory, and clinical and epidemiological factors of the infection.
Abstract:
Hepatitis B virus (HBV) and hepatitis C virus (HCV) infections pose major public health problems because of their prevalence worldwide. Consequently, screening for these infections is an important part of routine laboratory activity. Serological and molecular markers are key elements in diagnosis, prognosis, and treatment monitoring for HBV and HCV infections. Today, automated chemiluminescence immunoassay (CLIA) analyzers are widely used for virological diagnosis, particularly in high-volume clinical laboratories. Molecular biology techniques are routinely used to detect and quantify viral genomes as well as to analyze their sequence in order to determine their genotype and detect resistance to antiviral drugs. Real-time PCR, which provides high sensitivity and a broad dynamic range, has gradually replaced other signal and target amplification technologies for the quantification and detection of nucleic acids. Next-generation DNA sequencing techniques are still restricted to research laboratories. The serological and molecular marker methods available for HBV and HCV are discussed in this article, along with their utility and limitations for use in chronic hepatitis B (CHB) diagnosis and monitoring.