Abstract:
In the field of fingerprints, the rise of computer tools has made it possible to create powerful automated search algorithms. These algorithms allow, inter alia, the comparison of a fingermark to a fingerprint database and therefore the establishment of a link between the mark and a known source. With the growth in the capacities of these systems and of data storage, as well as increasing international collaboration between police services, the size of these databases increases. The current challenge for the field of fingerprint identification lies in the growth of these databases, which makes it possible to find impressions that are very similar but come from distinct fingers. At the same time, however, these data and these systems allow a description of the variability between different impressions from the same finger and between impressions from different fingers. This statistical description of the within- and between-finger variabilities, computed on the basis of minutiae and their relative positions, can then be used in a statistical approach to interpretation. The computation of a likelihood ratio, which simultaneously employs the comparison between the mark and the print of the case, the within-variability of the suspect's finger, and the between-variability of the mark with respect to a database, can then be based on representative data. These data thus allow an evaluation that may be more detailed than one obtained by applying rules established long before the advent of these large databases, or by the specialist's experience alone. The goal of the present thesis is to evaluate likelihood ratios computed from the scores of an automated fingerprint identification system (AFIS) when the source of the tested and compared marks is known. These ratios must support the hypothesis that is known to be true; moreover, they should support this hypothesis more and more strongly as information is added in the form of additional minutiae. For the modelling of within- and between-variability, the necessary data were defined and acquired for one finger of a first donor and two fingers of a second donor. The database used for between-variability includes approximately 600,000 inked prints. The minimal number of observations necessary for a robust estimation was determined for the two distributions used. Factors that influence these distributions were also analysed: the number of minutiae included in the configuration and the configuration as such, for both distributions; the finger number and the general pattern, for between-variability; and the orientation of the minutiae, for within-variability. In the present study, the only factor for which no influence could be shown is the orientation of the minutiae. The results show that likelihood ratios derived from the scores of an AFIS can be used for evaluation. Relatively low rates of likelihood ratios supporting the hypothesis known to be false were obtained. The maximum rate of likelihood ratios supporting the hypothesis that two impressions were left by the same finger when they actually came from different fingers was 5.2%, for a configuration of 6 minutiae. When a 7th and then an 8th minutia are added, this rate drops to 3.2% and then to 0.8%. In parallel, for these same configurations, the likelihood ratios obtained when the two impressions come from the same finger are on average of the order of 100, 1,000, and 10,000 for 6, 7, and 8 minutiae.
These likelihood ratios can therefore be an important aid for decision making. Both positive effects linked to the addition of minutiae (a drop in the rate of likelihood ratios that could lead to an erroneous decision, and an increase in the value of the likelihood ratio) were observed systematically within the framework of the study. Approximations based on 3 scores for within-variability and on 10 scores for between-variability were found and showed satisfactory results.
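The core of this score-based approach can be illustrated with a short sketch: the likelihood ratio is the within-finger score density divided by the between-finger score density, both evaluated at the observed AFIS comparison score. The thesis does not specify the density model; the kernel density estimates and the placeholder scores below are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import gaussian_kde

def score_likelihood_ratio(score, within_scores, between_scores):
    """LR = f_within(score) / f_between(score), with each density
    estimated from AFIS comparison scores of known origin."""
    f_within = gaussian_kde(within_scores)    # same-finger score model
    f_between = gaussian_kde(between_scores)  # different-finger score model
    return f_within(score)[0] / f_between(score)[0]

# Hypothetical example: LR for a casework comparison score of 1200
rng = np.random.default_rng(0)
within = rng.normal(1500, 200, size=300)     # placeholder same-finger scores
between = rng.normal(400, 150, size=10000)   # placeholder different-finger scores
print(score_likelihood_ratio(1200.0, within, between))
```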
Abstract:
Altitudinal tree lines are mainly constrained by temperature, but they can also be influenced by factors such as human activity, particularly in the European Alps, where centuries of agricultural use have affected the tree line. Over recent decades this trend has been reversed owing to changing agricultural practices and land abandonment. We aimed to combine a statistical land-abandonment model with a forest dynamics model, to take into account the combined effects of climate and human land use on the Alpine tree line in Switzerland. Land-abandonment probability was expressed by a logistic regression function of degree-day sum, distance from the forest edge, soil stoniness, slope, proportion of employees in the secondary and tertiary sectors, proportion of commuters, and proportion of full-time farms. This was implemented in the TreeMig spatio-temporal forest model. Distance from the forest edge and degree-day sum vary through feedback from the dynamics part of TreeMig and from climate change scenarios, while the other variables remain constant for each grid cell over time. The new model, TreeMig-LAb, was tested on theoretical landscapes in which the variables of the land-abandonment model were varied one by one. This confirmed the strong influence of distance from forest and slope on the abandonment probability. Degree-day sum has a more complex role, with opposite influences on land abandonment and forest growth. TreeMig-LAb was also applied to a case study area in the Upper Engadine (Swiss Alps), along with a model in which the abandonment probability was a constant. Two scenarios were used: natural succession only (100% probability) and a probability of abandonment based on past transition proportions in that area (2.1% per decade). The former showed new forest growing in all but the highest-altitude locations. The latter was more realistic in terms of the number of newly forested cells, but their locations were random and the resulting landscape heterogeneous. Using the logistic regression model gave results consistent with observed patterns of land abandonment: existing forests expanded and gaps closed, leading to an increasingly homogeneous landscape.
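A minimal sketch of how such a per-cell logistic abandonment probability can be evaluated; the coefficient names, signs, and values below are hypothetical placeholders, not the fitted TreeMig-LAb parameters, and only three of the seven predictors are shown.

```python
import math

# Hypothetical coefficients (the fitted TreeMig-LAb values are not given here)
INTERCEPT = -2.0
COEFFS = {
    "degree_day_sum": -0.001,    # warmer sites: less abandonment (assumed sign)
    "dist_forest_edge": 0.01,    # remote cells abandoned more readily
    "slope": 0.05,               # steep cells abandoned more readily
}

def abandonment_probability(cell):
    """Logistic regression: p = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = INTERCEPT + sum(b * cell[name] for name, b in COEFFS.items())
    return 1.0 / (1.0 + math.exp(-z))

# One hypothetical grid cell: cool, 50 m from the forest edge, 25 degrees slope
print(abandonment_probability(
    {"degree_day_sum": 900.0, "dist_forest_edge": 50.0, "slope": 25.0}))
```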
Abstract:
The advent and application of high-resolution array-based comparative genome hybridization (array CGH) has led to the detection of large numbers of copy number variants (CNVs) in patients with developmental delay and/or multiple congenital anomalies, as well as in healthy individuals. The notion that CNVs are also abundantly present in the normal population challenges the interpretation of the clinical significance of CNVs detected in patients. In this review we illustrate, based on our own experience, a general clinical workflow that can be used in routine diagnostics for the interpretation of CNVs.
Abstract:
A survey of medical ambulatory practice was carried out in February-March 1981 in the two Swiss cantons of Vaud and Fribourg (total population: 700,000), in which 205 physicians participated. The methodology was inspired by the U.S. National Ambulatory Medical Care Survey, whose data collection instrument was adapted to our conditions; in addition, data were gathered on all referrals prescribed by 154 physicians during two weeks. (The instruments used are presented.) The potential and limits of this type of survey are discussed, as well as the representativeness of the participating physicians and of the recorded visits, which constitute a systematic sample of over 43,000 visits.
Abstract:
Differences between genomes can be due to single nucleotide variants, translocations, inversions, and copy number variants (CNVs, gain or loss of DNA). The latter can range from sub-microscopic events to complete chromosomal aneuploidies. Small CNVs are often benign but those larger than 500 kb are strongly associated with morbid consequences such as developmental disorders and cancer. Detecting CNVs within and between populations is essential to better understand the plasticity of our genome and to elucidate its possible contribution to disease. Hence there is a need for better-tailored and more robust tools for the detection and genome-wide analyses of CNVs. While a link between a given CNV and a disease may have often been established, the relative CNV contribution to disease progression and impact on drug response is not necessarily understood. In this review we discuss the progress, challenges, and limitations that occur at different stages of CNV analysis from the detection (using DNA microarrays and next-generation sequencing) and identification of recurrent CNVs to the association with phenotypes. We emphasize the importance of germline CNVs and propose strategies to aid clinicians to better interpret structural variations and assess their clinical implications.
Abstract:
The classical statistical study of wind speed in the atmospheric surface layer is generally made from the analysis of the three habitual components of the wind data, that is, the W-E component, the S-N component, and the vertical component, considering these components independent. When the goal of the study is Aeolian energy, that is, when wind is studied from an energetic point of view, the squares of the wind components can be considered as compositional variables. To do so, each component has to be divided by the module of the corresponding vector. In this work, the theoretical analysis of the components of the wind as compositional data is presented, together with the conclusions that can be obtained from the point of view of practical applications, as well as those that can be derived from the application of this technique under different weather conditions.
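A minimal sketch of the normalization described above, assuming a three-component wind vector: dividing each component by the vector's module makes the squared parts non-negative and sum to one, i.e. a composition.

```python
import numpy as np

def wind_composition(u, v, w):
    """Squared wind components normalized by the squared module.
    The three parts are non-negative and sum to 1 (compositional data)."""
    vec = np.array([u, v, w], dtype=float)
    module = np.linalg.norm(vec)
    return (vec / module) ** 2

parts = wind_composition(3.0, 4.0, 1.0)
print(parts, parts.sum())  # the parts sum to 1.0
```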
Abstract:
This paper proposes MSISpIC, a probabilistic sonar scan matching algorithm for the localization of an autonomous underwater vehicle (AUV). The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS) and the robot displacement estimated through dead reckoning using a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method is an extension of the pIC algorithm. An extended Kalman filter (EKF) is used to estimate the robot path during the scan, in order to reference all the range and bearing measurements, as well as their uncertainty, to a scan-fixed frame before registering. The major contribution consists of experimentally proving that probabilistic sonar scan matching techniques have the potential to improve DVL-based navigation. The algorithm has been tested on an AUV guided along a 600 m path within an abandoned marina underwater environment, with satisfactory results.
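A minimal sketch of the referencing step, under simplifying assumptions: each beam is projected into the scan-fixed frame using the estimated 2-D robot pose at the time the beam was taken. The full method also propagates the pose and measurement uncertainty through the EKF, which is omitted here; the function name and numbers are ours.

```python
import numpy as np

def beam_to_scan_frame(pose, rng_m, bearing_rad):
    """Project one (range, bearing) sonar beam into the scan-fixed frame.
    pose = (x, y, yaw): estimated robot pose when the beam was taken."""
    x, y, yaw = pose
    return np.array([x + rng_m * np.cos(yaw + bearing_rad),
                     y + rng_m * np.sin(yaw + bearing_rad)])

# Hypothetical beam taken mid-scan, after the robot has drifted and turned
print(beam_to_scan_frame(pose=(0.4, -0.1, 0.05), rng_m=12.0, bearing_rad=0.3))
```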
Abstract:
PURPOSE: The aim of the present report is to describe abnormal (18)F-fluorodeoxyglucose (FDG) accumulation patterns in the pleura and lung parenchyma in a group of lung cancer patients in whom lung infarction was present at the time of positron emission tomography (PET). METHODS: Between November 2002 and December 2003, a total of 145 patients (102 males, 43 females; age range 38-85 years) underwent whole-body FDG PET for initial staging (n=117) or restaging (n=11) of lung cancer, or for evaluation of solitary pulmonary nodules (n=17). Of these patients, 24 displayed abnormal FDG accumulation in the lung parenchyma that was not consistent with the primary lesion under investigation (ipsilateral n=12, contralateral n=9, or bilateral n=3). Without correlative imaging, this additional FDG uptake would have been considered indeterminate in the differential diagnosis. RESULTS: Of the 24 patients identified as having such lesions, six harboured secondary tumour nodules diagnosed as metastases, while in three the diagnosis of a synchronous second primary lung tumour was established. A further nine patients were identified as having post-stenotic pneumonia and/or atelectasis (n=6) or granulomatous lung disease (n=3). In the remaining six (4% of all patients), a diagnosis of recent pulmonary embolism that topographically matched the additional FDG accumulation (SUV(max) range 1.4-8.6, mean 3.9) was made. Four of these six patients were known to have pulmonary embolism, and hence false positive interpretation was avoided by correlating the PET findings with those of the pre-existing diagnostic work-up. The remaining two patients harboured small occult infarctions that mimicked satellite nodules in the lung periphery. Based on histopathological results, the abnormal FDG accumulation in these two patients was attributed to the inflammatory reaction and tissue repair associated with the pathological cascade of pulmonary embolism. CONCLUSION: In patients with pulmonary malignancies, synchronous lung infarction may induce pathological FDG accumulation that can mimic active tumour manifestations. Awareness of this potential pitfall may allow avoidance of false positive FDG PET interpretations.
Abstract:
Assessment of volume status is often challenging in daily clinical practice. One of the clinician's tasks is to prevent or treat organ system failures that arise from a mismatch between oxygen transport and metabolic needs. Renal failure is a frequently encountered in-hospital diagnosis that is known to significantly alter the prognosis. In patients with acute renal failure in particular, the consequences of inadequate volume management further increase morbidity and mortality.
Abstract:
In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) is a topic of ongoing interest. Part of the discussion centres on issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. A further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single, fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic: using Bayes' theorem, it provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N (the number of contributors) and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to show that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that relies on categorical assumptions about N.
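A minimal sketch of the probabilistic procedure, under strong simplifying assumptions that are ours rather than the paper's: independent loci, 2N alleles drawn i.i.d. from population frequencies, and no dropout or peak information. The allele frequencies and the uniform prior below are hypothetical.

```python
from itertools import combinations

def p_allele_set(freqs, n):
    """P(exactly these distinct alleles are observed | n contributors),
    by inclusion-exclusion over which alleles could have been missed,
    assuming the 2n alleles are i.i.d. draws from population frequencies."""
    k = len(freqs)
    return sum((-1) ** (k - r) * sum(sub) ** (2 * n)
               for r in range(k + 1)
               for sub in combinations(freqs, r))

def posterior_n(loci_freqs, n_values=(1, 2, 3, 4, 5)):
    """Posterior distribution over N given the allele sets seen at each
    locus, with a uniform prior over the candidate values of N."""
    unnorm = {n: 1.0 for n in n_values}
    for freqs in loci_freqs:
        for n in n_values:
            unnorm[n] *= p_allele_set(freqs, n)
    z = sum(unnorm.values())
    return {n: p / z for n, p in unnorm.items()}

# Hypothetical mixed stain: two loci showing 4 and 3 distinct alleles
print(posterior_n([[0.12, 0.08, 0.20, 0.05], [0.30, 0.10, 0.15]]))
```

Note that for N = 1 the first locus (four distinct alleles from only two draws) is impossible, so the posterior automatically assigns it essentially zero mass; this is the inferential impasse that a fixed minimum-N policy has to handle by other means.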
Abstract:
OBJECTIVE: Low and high body mass index (BMI) values have been shown to increase health risks and mortality, and result in variations in fat-free mass (FFM) and body fat mass (BF). Currently, there are no published ranges for a fat-free mass index (FFMI; kg/m(2)), a body fat mass index (BFMI; kg/m(2)), and percentage of body fat (%BF). The purpose of this population study was to determine predicted FFMI and BFMI values in subjects with low, normal, overweight, and obese BMI. METHODS: FFM and BF were determined in 2986 healthy white men and 2649 white women, aged 15 to 98 y, by a previously validated 50-kHz bioelectrical impedance analysis equation. FFMI, BFMI, and %BF were calculated. RESULTS: FFMI values were 16.7 to 19.8 kg/m(2) for men and 14.6 to 16.8 kg/m(2) for women within the normal BMI range. BFMI values were 1.8 to 5.2 kg/m(2) for men and 3.9 to 8.2 kg/m(2) for women within the normal BMI range. BFMI values were 8.3 and 11.8 kg/m(2) in men and women, respectively, for obese BMI (>30 kg/m(2)). Normal ranges for %BF were 13.4% to 21.7% for men and 24.6% to 33.2% for women. CONCLUSION: BMI alone cannot provide information about the respective contribution of FFM or fat mass to body weight. This study presents FFMI and BFMI values that correspond to low, normal, overweight, and obese BMIs. FFMI and BFMI provide information about body compartments, regardless of height.
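The indices are simple height-normalized decompositions of BMI; a short sketch (the function name and example subject are ours, units assumed to be kilograms and metres):

```python
def body_composition_indices(ffm_kg, bf_kg, height_m):
    """FFMI and BFMI normalize body compartments by height squared,
    so FFMI + BFMI = BMI (since weight = FFM + BF)."""
    ffmi = ffm_kg / height_m ** 2
    bfmi = bf_kg / height_m ** 2
    pct_bf = 100.0 * bf_kg / (ffm_kg + bf_kg)
    return ffmi, bfmi, ffmi + bfmi, pct_bf

# Hypothetical subject: 60 kg FFM, 15 kg BF, 1.75 m tall
print(body_composition_indices(60.0, 15.0, 1.75))
# -> FFMI ~19.6, BFMI ~4.9, BMI ~24.5, %BF = 20.0
```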
Abstract:
In the histomorphological grading of prostate carcinoma, pathologists have regularly assigned comparable scores under the architectural Gleason and the now-obsolete nuclear World Health Organization (WHO) grading systems. Although both systems demonstrate good correspondence between grade and survival, they are based on fundamentally different biological criteria. We tested the hypothesis that this apparent concurrence between the two grading systems originates from an interpretation bias in the minds of diagnostic pathologists, rather than reflecting a biological reality. Three pathologists graded 178 prostatectomy specimens, assigning Gleason and WHO scores on glass slides and on digital images of nuclei isolated from their architectural context. The results were analysed with respect to interdependencies among the grading systems, to tumour recurrence (PSA relapse > 0.1 ng/ml at 48 months), and to robust nuclear morphometry, as assessed by computer-assisted image analysis. WHO and Gleason grades were strongly correlated (r = 0.82) and demonstrated identical prognostic power. However, WHO grades correlated poorly with nuclear morphology (r = 0.19). Grading of nuclei isolated from their architectural context significantly improved accuracy for nuclear morphology (r = 0.55), but the prognostic power was virtually lost. In conclusion, the architectural organization of a tumour, which the pathologist cannot avoid noticing during initial slide viewing at low magnification, unwittingly influences the subsequent nuclear grade assignment. In our study, the prognostic power of the WHO grading system depended on visual assessment of the tumour growth pattern. We demonstrate for the first time the influence that a cognitive bias can have on the generation of an error in diagnostic pathology, and highlight a considerable problem in histopathological tumour grading.
Abstract:
This paper presents and discusses further aspects of the subjectivist interpretation of probability (also known as the 'personalist' view of probabilities) as initiated in earlier forensic and legal literature. It shows that operational devices for eliciting subjective probabilities - in particular the so-called scoring rules - provide additional arguments in support of the standpoint that categorical claims of forensic individualisation do not follow from a formal analysis under that view of probability theory.
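A standard example of such an elicitation device is the quadratic (Brier) scoring rule; a minimal sketch, with the function names and numbers chosen by us for illustration. Because the rule is proper, the expected penalty is minimized by reporting one's true degree of belief, which is what makes it work as an elicitation device.

```python
def brier_penalty(reported_p, outcome):
    """Quadratic scoring rule: penalty (p - x)^2, x = 1 if the event occurred."""
    return (reported_p - outcome) ** 2

def expected_penalty(reported_p, true_belief):
    """Expected Brier penalty for an assessor whose actual belief is true_belief."""
    return (true_belief * brier_penalty(reported_p, 1)
            + (1 - true_belief) * brier_penalty(reported_p, 0))

# Reporting honestly (0.7) scores better in expectation than shading to 0.9
print(expected_penalty(0.7, true_belief=0.7))  # 0.21
print(expected_penalty(0.9, true_belief=0.7))  # 0.25
```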
Abstract:
The paper discusses the maintenance challenges of organisations with a huge number of devices and proposes the use of probabilistic models to assist monitoring and maintenance planning. The proposal assumes connectivity of the instruments so that they can report relevant features for monitoring. Also required is the existence of enough historical records with diagnosed breakdowns to make the probabilistic models reliable and useful for the predictive maintenance strategies based on them. Regular Markov models based on estimated failure and repair rates are proposed to calculate the availability of the instruments, and Dynamic Bayesian Networks are proposed to model the cause-effect relationships that trigger predictive maintenance services, based on the influence between observed features and previously documented diagnostics.
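For the availability part, a minimal sketch assuming the simplest such model, a two-state (up/down) Markov chain with constant failure rate lambda and repair rate mu; the example rates are hypothetical.

```python
def steady_state_availability(failure_rate, repair_rate):
    """Two-state continuous-time Markov chain: A = mu / (lambda + mu),
    i.e. the long-run fraction of time the instrument is up."""
    return repair_rate / (failure_rate + repair_rate)

# Hypothetical instrument: mean time between failures 1000 h, mean repair 8 h
lam = 1.0 / 1000.0  # failures per hour
mu = 1.0 / 8.0      # repairs per hour
print(steady_state_availability(lam, mu))  # ~0.992
```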