990 results for LIKELIHOOD APPROACH
Abstract:
In a distribution problem, and specifically in bankruptcy issues, the Proportional (P) and the Egalitarian (EA) divisions are two of the most popular ways to resolve the conflict. The Constrained Equal Awards rule (CEA) was introduced in the bankruptcy literature to ensure that no agent receives more than her claim, a problem that can arise when using the egalitarian division. We propose an alternative modification, using a convex combination of P and EA. The recursive application of this new rule converges to the CEA rule. Our solution concept ensures a minimum amount to each agent and distributes the remaining estate proportionally. Keywords: Bankruptcy problems, Proportional rule, Equal Awards, Convex combination of rules, Lorenz dominance. JEL classification: C71, D63, D71.
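The rules compared in this abstract can be sketched in a few lines. All function names are illustrative, and the convex weight `theta` stands in for whatever parameterization the paper actually uses:

```python
# Sketch of the classical bankruptcy rules and a convex combination of
# P and EA, as discussed in the abstract. Names are illustrative.

def proportional(estate, claims):
    """P: split the estate in proportion to claims."""
    total = sum(claims)
    return [estate * c / total for c in claims]

def equal_awards(estate, claims):
    """EA: split the estate equally, ignoring claim sizes."""
    n = len(claims)
    return [estate / n] * n

def cea(estate, claims):
    """Constrained Equal Awards: equal shares truncated at each claim,
    i.e. min(c_i, lam) with lam chosen to exhaust the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):  # bisection on the common share lam
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < estate:
            lo = lam
        else:
            hi = lam
    return [min(c, lam) for c in claims]

def convex_rule(estate, claims, theta):
    """theta * EA + (1 - theta) * P: the modification studied here."""
    p = proportional(estate, claims)
    ea = equal_awards(estate, claims)
    return [theta * a + (1 - theta) * b for a, b in zip(ea, p)]
```

Note that, unlike CEA, the plain combination can still award an agent more than her claim whenever the EA share exceeds it, which is the situation the recursive application is meant to resolve.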
Abstract:
The solution to the ‘Contested Garment Problem’, proposed in the Babylonian Talmud, suggests that each agent should receive at least some part of the resources whenever the demand exceeds the available amount. In this context, we propose a new method to define lower bounds on awards, an idea that has underlain the theoretical analysis of bankruptcy problems from its beginning (O’Neill, 1982) to the present day (Dominguez and Thomson, 2006). Specifically, starting from the fact that a society establishes its own set of ‘Commonly Accepted Equity Principles’, our proposal guarantees each agent the smallest amount she gets according to all the admissible rules. As this new bound will not, in general, exhaust the estate, we analyze its recursive application for different sets of equity principles. Keywords: Bankruptcy problems, Bankruptcy rules, Lower bounds, Recursive process
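The bound-and-recurse idea described above can be sketched as follows, with a two-rule admissible set (Proportional and Constrained Equal Awards) standing in for the ‘Commonly Accepted Equity Principles’; all implementations and names are illustrative assumptions, not the paper's:

```python
# Recursive lower-bound scheme: guarantee each agent the minimum she gets
# under any admissible rule, then treat the unallocated remainder as a
# new, smaller bankruptcy problem, and repeat.

def proportional(estate, claims):
    """P: split the estate in proportion to claims."""
    total = sum(claims)
    return [estate * c / total for c in claims]

def constrained_equal_awards(estate, claims):
    """CEA: equal shares truncated at each claim, exhausting the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):  # bisection on the common share
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < estate:
            lo = lam
        else:
            hi = lam
    return [min(c, lam) for c in claims]

def recursive_lower_bound(estate, claims, rules, rounds=500):
    """Apply the lower bound repeatedly until the estate is exhausted."""
    award = [0.0] * len(claims)
    for _ in range(rounds):
        allocations = [rule(estate, claims) for rule in rules]
        bound = [min(a[i] for a in allocations) for i in range(len(claims))]
        award = [w + b for w, b in zip(award, bound)]
        claims = [c - b for c, b in zip(claims, bound)]
        estate -= sum(bound)
        if estate < 1e-12:
            break
    return award
```

With this two-rule set, each round allocates every agent a positive amount, so the remainder shrinks geometrically and the recursion effectively exhausts the estate.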
Abstract:
In the field of fingerprints, the rise of computer tools has made it possible to create powerful automated search algorithms. These algorithms make it possible, inter alia, to compare a fingermark to a fingerprint database and therefore to establish a link between the mark and a known source. With the growth of the capacities of these systems and of data storage, as well as increasing collaboration between police services at the international level, the size of these databases increases. The current challenge for the field of fingerprint identification is the growth of these databases, which makes it possible to find impressions that are very similar but come from distinct fingers. At the same time, however, these data and systems allow a description of the variability between different impressions from the same finger and between impressions from different fingers. This statistical description of the within- and between-finger variabilities, computed on the basis of minutiae and their relative positions, can then be used in a statistical approach to interpretation. The computation of a likelihood ratio, employing simultaneously the comparison between the mark and the print of the case, the within-variability of the suspect's finger and the between-variability of the mark with respect to a database, can then be based on representative data. Thus, these data allow an evaluation that may be more detailed than that obtained by applying rules established long before the advent of these large databases, or than one based on the specialist's experience alone. The goal of the present thesis is to evaluate likelihood ratios computed from the scores of an automated fingerprint identification system (AFIS) when the source of the tested and compared marks is known. These ratios must support the hypothesis known to be true. Moreover, they should support this hypothesis more and more strongly as information is added in the form of additional minutiae.
For the modeling of within- and between-variability, the necessary data were defined and acquired for one finger of a first donor and two fingers of a second donor. The database used for between-variability includes approximately 600,000 inked prints. The minimal number of observations necessary for a robust estimation was determined for the two distributions used. Factors that influence these distributions were also analyzed: the number of minutiae included in the configuration and the configuration as such for both distributions, as well as the finger number and the general pattern for between-variability, and the orientation of the minutiae for within-variability. In the present study, the only factor for which no influence was shown is the orientation of minutiae. The results show that the likelihood ratios resulting from the use of the scores of an AFIS can be used for evaluation. Relatively low rates of likelihood ratios supporting the hypothesis known to be false were obtained. The maximum rate of likelihood ratios supporting the hypothesis that the two impressions were left by the same finger, when the impressions in fact came from different fingers, is 5.2%, for a configuration of 6 minutiae. When a 7th and then an 8th minutia are added, this rate drops to 3.2% and then to 0.8%. In parallel, for these same configurations, the likelihood ratios obtained are on average of the order of 100, 1,000, and 10,000 for 6, 7, and 8 minutiae when the two impressions come from the same finger. These likelihood ratios can therefore be an important aid for decision making. Both positive developments linked to the addition of minutiae (a drop in the rates of likelihood ratios that can lead to an erroneous decision, and an increase in the value of the likelihood ratio) were observed systematically within the framework of the study.
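The score-based likelihood ratio at the core of the thesis can be illustrated as a density ratio: the density of the observed comparison score under the within-finger (same source) model, divided by its density under the between-finger model. The Gaussian score models and all numbers below are assumptions for illustration, not the thesis' fitted distributions:

```python
from statistics import NormalDist

def likelihood_ratio(score, within, between):
    """LR > 1 supports same-finger origin; LR < 1 supports different fingers."""
    return within.pdf(score) / between.pdf(score)

# Hypothetical fitted score distributions (illustrative values only):
within = NormalDist(mu=120.0, sigma=15.0)   # scores for same-finger pairs
between = NormalDist(mu=40.0, sigma=10.0)   # scores for different-finger pairs

lr_high = likelihood_ratio(110.0, within, between)  # high score: LR well above 1
lr_low = likelihood_ratio(40.0, within, between)    # low score: LR well below 1
```

Adding minutiae would, in this picture, sharpen the separation between the two score distributions, which is exactly the behavior the thesis checks (stronger support for the true hypothesis, fewer misleading ratios).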
Approximations based on 3 scores for within-variability and on 10 scores for between-variability were found, and showed satisfactory results.
Abstract:
There is considerable variation in the level of fecal egg excretion during Schistosoma mansoni infections. Within a single endemic area, the distribution of egg counts is typically overdispersed, with the majority of eggs excreted coming from a minority of residents. The purpose of this study was to quantify the influence of genetic factors on patterns of fecal egg excretion in a rural study sample in Brazil. Individual fecal egg excretions, expressed in eggs per gram of feces, were determined by the Kato-Katz method on stool samples collected on three different days. Detailed genealogic information was gathered at the time of sampling, which allowed assignment of 461 individuals to 14 pedigrees containing between 3 and 422 individuals. Using a maximum likelihood variance decomposition approach, we performed quantitative genetic analyses to determine whether genetic factors could partially account for the observed pattern of fecal egg excretion. The quantitative genetic analysis indicated that between 21% and 37% of the variation in S. mansoni egg counts was attributable to additive genetic factors, and that shared environment, as assessed by common household, accounted for a further 12-21% of the observed variation. A maximum likelihood heritability (h²) estimate of 0.44 ± 0.14 (mean ± SE) was found for the 9,604 second- and higher-degree pairwise relationships in the study sample, which is consistent with the upper limit (37%) of the genetic factor determined in the variance decomposition analysis. These analyses point to the significant influence of additive host genes on the pattern of S. mansoni fecal egg excretion in this endemic area.
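The quantities reported above follow from a standard variance decomposition: phenotypic variance splits into additive genetic (A), shared household (C) and residual (E) components, and narrow-sense heritability is the additive share. A toy computation (the component values are illustrative, chosen only to mirror the abstract's upper estimate):

```python
# h2 = var_A / (var_A + var_C + var_E): the additive genetic share of
# total phenotypic variance. Values below are illustrative, not the
# study's fitted variance components.

def heritability(var_additive, var_household, var_residual):
    total = var_additive + var_household + var_residual
    return var_additive / total

h2 = heritability(0.37, 0.12, 0.51)  # additive share of a unit total variance
```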
Abstract:
BACKGROUND AND PURPOSE: Recent evidence suggests that there may be more than one Gilles de la Tourette syndrome (GTS)/tic disorder phenotype. However, little is known about the common patterns of these GTS/tic disorder-related comorbidities. In addition, sex-specific phenomenological data of GTS/tic disorder-affected adults are rare. Therefore, this community-based study used latent class analyses (LCA) to investigate sex-related and non-sex-related subtypes of GTS/tic disorders and their most common comorbidities. METHODS: The data were drawn from the PsyCoLaus study (n = 3691), a population-based survey conducted in Lausanne, Switzerland. LCA were performed on the data of 80 subjects manifesting motor/vocal tics during their childhood/adolescence. Comorbid attention-deficit hyperactivity disorder (ADHD), obsessive-compulsive disorder, depressive, phobia and panic symptoms/syndromes comprised the selected indicators. The resultant classes were characterized by psychosocial correlates. RESULTS: In LCA, four latent classes provided the best fit to the data. We identified two male-related classes. The first class exhibited both ADHD and depression. The second class comprised males with only depression. Class three was a female-related class depicting obsessive thoughts/compulsive acts, phobias and panic attacks. This class manifested high psychosocial impairment. Class four had a balanced sex proportion and comorbid symptoms/syndromes such as phobias and panic attacks. The complementary occurrence of comorbid obsessive thoughts/compulsive acts and ADHD impulsivity was remarkable. CONCLUSIONS: To the best of our knowledge, this is the first study applying LCA to community data of GTS symptoms/tic disorder-affected persons. Our findings support the utility of differentiating GTS/tic disorder subphenotypes on the basis of comorbid syndromes.
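The latent class analysis used above can be sketched as an EM fit of a finite mixture of independent Bernoulli indicators (comorbid symptoms present/absent). This is a generic textbook formulation, not the software or settings used in the PsyCoLaus analysis:

```python
import numpy as np

def latent_class_em(X, n_classes, n_iter=200, seed=0):
    """Fit a latent class model with binary indicators by EM.
    X: (n, d) array of 0/1 symptom indicators.
    Returns class weights pi (k,) and per-class item probabilities theta (k, d)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))  # random start
    for _ in range(n_iter):
        # E-step: posterior class membership for each subject
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T  # (n, k)
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class weights and item probabilities
        nk = resp.sum(axis=0)
        pi = nk / n
        theta = np.clip(resp.T @ X / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta
```

Model selection (the "four latent classes provided the best fit" step) would then compare fits across `n_classes` with an information criterion such as BIC.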
Abstract:
PURPOSE: Retinal detachment (RD) is a major complication of cataract surgery, which can be treated by either primary vitrectomy without indentation or the scleral buckling procedure. The aim of this study is to compare the results of these two techniques for the treatment of pseudophakic RD. PATIENTS AND METHODS: The charts of 40 patients (40 eyes) treated with scleral buckling for a primary pseudophakic RD were retrospectively studied and compared to the charts of 32 patients (32 eyes) treated with primary vitrectomy without scleral buckle during the same period by the same surgeons. To obtain comparable samples, patients with giant retinal tears, vitreous hemorrhage, and severe preoperative proliferative vitreoretinopathy (PVR) were not included. Minimal follow-up was 6 months. RESULTS: The primary success rate was 84% in the vitrectomy group and 82.5% in the ab-externo group. Final anatomical success was observed in 100% of cases in the vitrectomy group and in 95% of cases in the ab-externo group. Final visual acuity was 0.5 or better in 44% of cases in the vitrectomy group and 37.5% in the ab-externo group. The duration of the surgery was significantly shorter in the ab-externo group, whereas the hospital stay tended to be shorter in the vitrectomy group. In the vitrectomy group, postoperative PVR developed in 3 eyes, and new or undetected breaks were responsible for failure of the initial procedure in 2 eyes. CONCLUSION: Primary vitrectomy appears to be as effective as scleral buckling procedures for the treatment of pseudophakic RD.
Abstract:
The article is composed of two sections. The first is a critical review of the three main alternative indices to GDP proposed in recent decades – the Human Development Index (HDI), the Genuine Progress Indicator (GPI), and the Happy Planet Index (HPI) – conducted on the basis of their conceptual foundations, rather than on issues of statistical consistency or mathematical refinement as in most of the literature. The pars construens aims to propose an alternative measure, the composite wealth index, consistent with an approach to development based on the notion of composite wealth, which is in turn derived from an empirical common-sense criterion. Arguably, this approach can be conveyed in an easily understandable and coherent indicator, and is thus appropriate for tracking development in its various dimensions: simple in its formulation, the wealth approach can incorporate social and ecological goals without significant alterations to its conceptual foundations, while reducing arbitrary weighting to a minimum.
The Dynamical Systems Approach to Cognition. Studies of Nonlinear Phenomena in Life Science - Vol 10
Abstract:
This paper characterizes and evaluates the potential of three commercial CT iterative reconstruction methods (ASIR, VEO and iDose4) for dose reduction and image quality improvement. We measured CT number accuracy, standard deviation (SD), noise power spectrum (NPS) and modulation transfer function (MTF) metrics on Catphan phantom images, while five human observers performed four-alternative forced-choice (4AFC) experiments to assess the detectability of low- and high-contrast objects embedded in two pediatric phantoms. Results show that 40% and 100% ASIR, as well as iDose4 levels 3 and 6, do not affect CT number and strongly decrease image noise, with relative SD constant over a large range of dose. However, while ASIR produces a shift of the NPS curve apex, less change is observed with iDose4 with respect to FBP methods. With the second-generation iterative reconstruction VEO, physical metrics are improved even further: SD decreased to 70.4% at 0.5 mGy and spatial resolution improved to 37% (MTF 50%). The 4AFC experiments show that few improvements in detection task performance are obtained with ASIR and iDose4, whereas VEO makes excellent detection possible even at an ultra-low dose (0.3 mGy), leading to a potential dose reduction by a factor of 3 to 7 (67%-86%). In spite of its longer reconstruction time and the fact that clinical studies are still required to complete these results, VEO clearly confirms the tremendous potential of iterative reconstruction for dose reduction in CT and appears to be an important tool for patient follow-up, especially for pediatric patients, for whom the cumulative lifetime dose remains high.
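The NPS metric cited above can be sketched as the ensemble-averaged squared Fourier amplitude of mean-subtracted uniform-region patches. The function below is a generic textbook-style estimate under common normalization conventions, not the implementation used in the study:

```python
import numpy as np

def noise_power_spectrum(rois, pixel_size=1.0):
    """rois: (n, h, w) stack of uniform-region patches from repeated scans.
    Returns the 2-D NPS estimate. With pixel_size=1, nps.sum() / (h * w)
    recovers the mean noise variance of the patches."""
    rois = np.asarray(rois, dtype=float)
    centered = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove the DC term
    spectra = np.abs(np.fft.fft2(centered)) ** 2              # per-patch power
    h, w = rois.shape[1], rois.shape[2]
    return spectra.mean(axis=0) * pixel_size**2 / (h * w)
```

A shift of the NPS apex toward lower frequencies, as reported for ASIR, would appear here as the peak of the 2-D spectrum moving toward the origin.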
Abstract:
The two main alternative methods used to identify key sectors within the input-output approach, the Classical Multiplier method (CMM) and the Hypothetical Extraction method (HEM), are formally and empirically compared in this paper. Our findings indicate that the main distinction between the two approaches stems from the role of internal effects: these internal effects are quantified under the CMM, while under the HEM only external impacts are considered. In our comparison we find, however, that CMM backward measures are more influenced by within-block effects than the forward indices proposed under this approach. The conclusions of this comparison allow us to develop a hybrid proposal that combines the two existing approaches. This hybrid model has the advantage of making it possible to distinguish and disaggregate external effects from those that are purely internal. The proposal is also of interest in terms of policy implications. Indeed, the hybrid approach may provide useful information for the design of 'second best' stimulus policies that aim at a more balanced perspective between overall economy-wide impacts and their sectoral distribution.
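The two methods being compared can be sketched with a small Leontief model: under the CMM, a sector's backward linkage is a column sum of the Leontief inverse, while under the HEM its importance is the economy-wide output lost when its transactions are hypothetically removed. The matrix values below are illustrative, not from the paper:

```python
import numpy as np

def leontief_output(A, final_demand):
    """Gross output x = (I - A)^-1 f for technical-coefficient matrix A."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A, final_demand)

def cmm_backward_linkages(A):
    """Classical multipliers: column sums of the Leontief inverse."""
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - A).sum(axis=0)

def hem_importance(A, final_demand, sector):
    """Total output lost when `sector`'s purchases and sales are zeroed out."""
    A_ext = A.copy()
    A_ext[sector, :] = 0.0   # remove the sector's sales of intermediates
    A_ext[:, sector] = 0.0   # remove the sector's purchases of intermediates
    x_full = leontief_output(A, final_demand).sum()
    x_ext = leontief_output(A_ext, final_demand).sum()
    return x_full - x_ext
```

The contrast the paper draws is visible here: the CMM column sum includes the sector's own (internal) multiplier effect, whereas the HEM difference measures only what the rest of the economy loses when the sector is extracted.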