916 results for Bayes Rule
Abstract:
We present a new domain of preferences under which the majority relation is always quasi-transitive and thus Condorcet winners always exist. We model situations where a set of individuals must choose one individual in the group. Agents are connected through some relationship that can be interpreted as expressing neighborhood, and which is formalized by a graph. Our restriction on preferences is as follows: each agent can freely rank his immediate neighbors, but then he is indifferent between each neighbor and all other agents that this neighbor "leads to". Hence, agents can be highly perceptive regarding their neighbors, while being insensitive to the differences between these and other agents which are further removed from them. We show quasi-transitivity of the majority relation when the graph expressing the neighborhood relation is a tree. We also discuss a further restriction that allows the result to be extended to more general graphs. Finally, we compare the proposed restriction with others in the literature, and conclude that it is independent of any previously discussed domain restriction.
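The pairwise-majority computation behind Condorcet winners is easy to illustrate; a minimal sketch with strict rankings (the paper's graph-induced indifference classes are not modeled here, and the example profile is made up):

```python
def majority_prefers(profile, a, b):
    """Number of voters ranking alternative a strictly above b.

    `profile` is a list of strict rankings, most preferred first.
    """
    return sum(1 for ranking in profile if ranking.index(a) < ranking.index(b))

def condorcet_winner(profile, alternatives):
    """Return an alternative beating every other by strict majority, or None."""
    n = len(profile)
    for a in alternatives:
        if all(majority_prefers(profile, a, b) > n / 2
               for b in alternatives if b != a):
            return a
    return None

# Three voters over {x, y, z}: x beats y (2-1) and z (3-0), so x wins.
profile = [["x", "y", "z"], ["x", "z", "y"], ["y", "x", "z"]]
print(condorcet_winner(profile, ["x", "y", "z"]))  # x
```

With a cyclic (Condorcet paradox) profile the function returns `None`, which is exactly the failure the paper's domain restriction rules out.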
Abstract:
The influence of altitude and latitude on the sizes of some structures of Lutzomyia intermedia was noted; several structures of insects collected in higher localities were larger, in accordance with Bergmann's rule. This influence was more pronounced in two localities of the State of Espírito Santo, probably due to greater differences in altitude. Comparing insects from different latitudes, more differences were noted in comparisons of insects from low-altitude localities than in those of material from higher altitudes. The small number of differences between insects collected in July and in December does not indicate a defined influence of season and temperature on the size of adults. The possible epidemiological implications of these variations are discussed.
Abstract:
BACKGROUND: A simple prognostic model could help identify patients with pulmonary embolism who are at low risk of death and are candidates for outpatient treatment. METHODS: We randomly allocated 15,531 retrospectively identified inpatients who had a discharge diagnosis of pulmonary embolism from 186 Pennsylvania hospitals to derivation (67%) and internal validation (33%) samples. We derived our rule to predict 30-day mortality using classification tree analysis and patient data routinely available at initial examination as potential predictor variables. We used data from a European prospective study to externally validate the rule among 221 inpatients with pulmonary embolism. We determined mortality and nonfatal adverse medical outcomes across derivation and validation samples. RESULTS: Our final model consisted of 10 patient factors (age ≥ 70 years; history of cancer, heart failure, chronic lung disease, chronic renal disease, and cerebrovascular disease; and clinical variables of pulse rate ≥ 110 beats/min, systolic blood pressure < 100 mm Hg, altered mental status, and arterial oxygen saturation < 90%). Patients with none of these factors were defined as low risk. The 30-day mortality rates for low-risk patients were 0.6%, 1.5%, and 0% in the derivation, internal validation, and external validation samples, respectively. The rates of nonfatal adverse medical outcomes were less than 1% among low-risk patients across all study samples. CONCLUSIONS: This simple prediction rule accurately identifies patients with pulmonary embolism who are at low risk of short-term mortality and other adverse medical outcomes. Prospective validation of this rule is important before its implementation as a decision aid for outpatient treatment.
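A rule of this form ("low risk iff none of the listed factors is present") translates directly into code; a minimal sketch, where the dict key names are illustrative but the ten factors and cut-offs are those listed in the abstract:

```python
def is_low_risk(patient):
    """Low risk iff none of the ten factors from the final model is present.

    `patient` is a plain dict; key names here are assumptions, not the
    study's variable names.
    """
    factors = [
        patient["age"] >= 70,                          # years
        patient["history_cancer"],
        patient["history_heart_failure"],
        patient["history_chronic_lung_disease"],
        patient["history_chronic_renal_disease"],
        patient["history_cerebrovascular_disease"],
        patient["pulse"] >= 110,                       # beats/min
        patient["systolic_bp"] < 100,                  # mm Hg
        patient["altered_mental_status"],
        patient["o2_saturation"] < 90,                 # percent
    ]
    return not any(factors)

patient = dict(age=55, history_cancer=False, history_heart_failure=False,
               history_chronic_lung_disease=False,
               history_chronic_renal_disease=False,
               history_cerebrovascular_disease=False, pulse=88,
               systolic_bp=120, altered_mental_status=False, o2_saturation=97)
print(is_low_risk(patient))  # True
```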
Abstract:
In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) presents a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. Further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single, fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is the number of contributors, and the actual value taken by N.
Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
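The probabilistic strategy reduces to Bayes' theorem: a posterior over N proportional to prior times likelihood of the observed alleles. A minimal sketch; the likelihood numbers below are made up for illustration, not the paper's model, which would derive them from the alleles' rates of occurrence:

```python
def posterior_contributors(priors, likelihoods):
    """Posterior P(N | observed alleles) via Bayes' theorem.

    priors: dict mapping N to P(N); likelihoods: dict mapping N to
    P(observed alleles | N). Values are normalized to sum to one.
    """
    unnorm = {n: priors[n] * likelihoods[n] for n in priors}
    z = sum(unnorm.values())
    return {n: v / z for n, v in unnorm.items()}

# Toy example: four distinct alleles at a locus rule out N = 1 outright,
# fit N = 2 well, and are possible but less probable under N = 3.
post = posterior_contributors(
    priors={1: 1 / 3, 2: 1 / 3, 3: 1 / 3},
    likelihoods={1: 0.0, 2: 0.6, 3: 0.3},
)
print(max(post, key=post.get))  # 2 — the most probable number of contributors
```

Note how an incompatible N simply receives posterior probability zero rather than causing the inferential impasse a fixed-N procedure would face.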
Abstract:
Genes affect not only the behavior and fitness of their carriers but also that of other individuals. According to Hamilton's rule, whether a mutant gene will spread in the gene pool depends on the effects of its carrier on the fitness of all individuals in the population, each weighted by its relatedness to the carrier. However, social behaviors may affect not only recipients living in the generation of the actor but also individuals living in subsequent generations. In this note, I evaluate space-time relatedness coefficients for localized dispersal. These relatedness coefficients weight the selection pressures on long-lasting behaviors, which stem from a multigenerational gap between phenotypic expression by actors and the resulting environmental feedback on the fitness of recipients. Explicit values of space-time relatedness coefficients reveal that they can be surprisingly large for typical dispersal rates, even for hundreds of generations in the future.
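Hamilton's rule in its simplest form says a social behavior is favored when relatedness-weighted benefits exceed the cost (rb > c). A minimal sketch of the general inclusive-fitness condition; the numbers are illustrative, and the space-time coefficients discussed above would enter as the weights on future-generation recipients:

```python
def hamilton_spreads(effects):
    """Inclusive-fitness condition: the behavior is favored when the sum of
    relatedness-weighted fitness effects over all recipients is positive.

    effects: list of (relatedness, fitness_effect) pairs; the actor's own
    cost enters with relatedness 1.
    """
    return sum(r * b for r, b in effects) > 0

# Classic rb > c form: cost c = 1 to the actor, benefit b = 3 to a full
# sibling (r = 0.5): 0.5 * 3 - 1 > 0, so the gene can spread.
print(hamilton_spreads([(1.0, -1.0), (0.5, 3.0)]))  # True
```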
Abstract:
In this paper, I consider a general and informationally efficient approach to determine the optimal access rule and show that there exists a simple rule that achieves the Ramsey outcome as the unique equilibrium when networks compete in linear prices without network-based price discrimination. My approach is informationally efficient in the sense that the regulator is required to know only the marginal cost structure, i.e. the marginal cost of making and terminating a call. The approach is general in that access prices can depend not only on the marginal costs but also on the retail prices, which can be observed by consumers and therefore by the regulator as well. In particular, I consider the set of linear access pricing rules, which includes any fixed access price, the Efficient Component Pricing Rule (ECPR) and the Modified ECPR as special cases. I show that in this set, there is a unique access rule that achieves the Ramsey outcome as the unique equilibrium as long as there exists at least a mild degree of substitutability among networks' services.
Abstract:
Studying the geographic variation of phenotypic traits can provide key information about the potential adaptive function of alternative phenotypes. Gloger's rule posits that animals should be dark- vs. light-colored in warm and humid vs. cold and dry habitats, respectively. The rule is based on the assumption that melanin pigments and/or dark coloration confer selective advantages in warm and humid regions. This rule may not apply, however, if genes for color are acting on other traits conferring fitness benefits in specific climes. Covariation between coloration and climate will therefore depend on the relative importance of coloration or melanin pigments and the genetically correlated physiological and behavioral processes that enable an animal to deal with climatic factors. The Barn Owl (Tyto alba) displays three melanin-based plumage traits, and we tested whether geographic variation in these traits at the scale of the North American continent supported Gloger's rule. An analysis of variation of pheomelanin-based reddish coloration and of the number and size of black feather spots in 1,369 museum skin specimens showed that geographic variation was correlated with ambient temperature and precipitation. Owls were darker red in color and displayed larger but fewer black feather spots in colder regions. Owls also exhibited more and larger black spots in regions where the climate was dry in winter. We propose that the associations between pigmentation and ambient temperature are of opposite sign for reddish coloration and spot size vs. the number of spots because selection exerted by climate (or a correlated variable) is plumage-trait-specific, or because plumage traits are genetically correlated with different adaptations.
Abstract:
Directed evolution of life through millions of years, such as increasing adult body size, is one of the most intriguing patterns displayed by fossil lineages. Processes and causes of such evolutionary trends are still poorly understood. Ammonoids (externally shelled marine cephalopods) are well known to have experienced repetitive morphological evolutionary trends in their adult size, shell geometry and ornamentation. This study analyses the evolutionary trends of the family Acrochordiceratidae Arthaber, 1911 from the Early to Middle Triassic (251–228 Ma). Exceptionally large and bed-rock-controlled collections of this ammonoid family were obtained from strata of Anisian age (Middle Triassic) in north-west Nevada and north-east British Columbia. They enable quantitative and statistical analyses of its morphological evolutionary trends. This study demonstrates that the monophyletic clade Acrochordiceratidae underwent the classical evolute-to-involute evolutionary trend (i.e. increasing coiling of the shell), an increase in its adult shell size (conch diameter) and an increase in the indentation of its shell suture shape. These evolutionary trends are statistically robust and seem more or less gradual. Furthermore, they are nonrandom, with sustained shifts in the mean, the minimum and the maximum of the studied shell characters. These results can be classically interpreted as being constrained by the persistence of a common selection pressure on this mostly anagenetic lineage, characterized by relatively moderate evolutionary rates. Increasing involution of ammonites is traditionally interpreted as increasing adaptation, mostly in terms of improved hydrodynamics. However, this trend in ammonoid geometry can also be explained as a case of Cope's rule (increasing adult body size) rather than by a functional explanation of coiling, because both shell diameter and shell involution are possible paths for ammonoids to accommodate size increase.
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex on the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability." Forthcoming in Advances in Applied Probability Vol. 33, No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL) (which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996)): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
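The classical Smith's rule that this paper generalizes orders jobs by the ratio of holding-cost weight to processing time. A minimal sketch of that linear-cost baseline (the paper's extended-class indices for convex costs are more involved and are not implemented here; the job data is illustrative):

```python
def smith_order(jobs):
    """Smith's rule (weighted shortest processing time): sequence jobs in
    nonincreasing order of weight / processing_time, which minimizes total
    weighted completion time for linear holding costs on one machine.

    jobs: list of (name, weight, processing_time) tuples.
    """
    return sorted(jobs, key=lambda job: job[1] / job[2], reverse=True)

jobs = [("a", 2, 4), ("b", 3, 3), ("c", 1, 5)]  # index ratios 0.5, 1.0, 0.2
print([name for name, _, _ in smith_order(jobs)])  # ['b', 'a', 'c']
```

In the paper's dynamic extension, the index of a class changes with the number of its jobs still in the system, so the priority order can change as jobs complete.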
Abstract:
1.1 Fundamentals
Chest pain is a common complaint in primary care patients (1 to 3% of all consultations) (1) and its aetiology can be miscellaneous, from harmless to potentially life-threatening conditions. In primary care practice, the most prevalent aetiologies are chest wall syndrome (43%), coronary heart disease (12%) and anxiety (7%) (2). In up to 20% of cases, potentially serious conditions such as cardiac, respiratory or neoplastic diseases underlie chest pain. In this context, a large number of laboratory tests are run (42%) and over 16% of patients are referred to a specialist or hospitalized (2).
A cardiovascular origin of chest pain can threaten the patient's life, and investigations run to exclude a serious condition can be expensive and involve a large number of exams or referral to a specialist, often without real clinical need. In emergency settings, up to 80% of chest pain cases are due to cardiovascular events (3), and scoring methods have been developed to identify conditions such as coronary heart disease (CHD) quickly and efficiently (4-6). In primary care, a cardiovascular origin is present in only about 12% of patients with chest pain (2), and general practitioners (GPs) need to exclude as safely as possible a potentially serious condition underlying chest pain. A simple clinical prediction rule (CPR) like those available in emergency settings may therefore help GPs and spare time and extra investigations in ruling out CHD in primary care patients. Such a tool may also help GPs reassure patients with more common origins of chest pain.
Abstract:
The paper deals with a bilateral accident situation in which victims have heterogeneous costs of care. With perfect information, efficient care by the injurer rises with the victim's cost. When the injurer cannot observe the victim's type at all, and this fact can be verified by Courts, first-best cannot be implemented with the use of a negligence rule based on the first-best levels of care. Second-best leads the injurer to intermediate care, and the two types of victims to choose the best response to it. This second-best solution can be easily implemented by a negligence rule with second-best as due care. We explore imperfect observation of the victim's type, characterizing the optimal solution and examining the different legal alternatives when Courts cannot verify the injurers' statements. Counterintuitively, we show that there is no difference at all between the use by Courts of a rule of complete trust and a rule of complete distrust towards the injurers' statements. We then relate the findings of the model to existing rules and doctrines in Common Law and Civil Law legal systems.