999 results for Beta Distributions
Abstract:
This paper introduces a mixture model based on the beta distribution, without preestablished means and variances, to analyze a large set of Beauty-Contest data obtained from diverse groups of experiments (Bosch-Domenech et al. 2002). This model gives a better fit of the experimental data, and more precision to the hypothesis that a large proportion of individuals follow a common pattern of reasoning, described as iterated best reply (degenerate), than mixture models based on the normal distribution. The analysis shows that the means of the distributions across the groups of experiments are fairly stable, while the proportions of choices at different levels of reasoning vary across groups.
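A minimal sketch, in Python, of the general idea of fitting a finite beta mixture by maximum likelihood; the data, number of components, and starting values below are illustrative assumptions, not the authors' Beauty-Contest data or estimation procedure.

```python
# Minimal sketch: two-component beta mixture fitted by maximum likelihood.
# Data and starting values are illustrative, not the authors' estimates.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
# Hypothetical choices rescaled to (0, 1); not the Beauty-Contest data set.
data = np.clip(rng.beta(2, 5, size=300), 1e-6, 1 - 1e-6)

def neg_log_lik(params, x):
    # params: logit of the mixing weight, then log shape parameters of both components
    w = 1.0 / (1.0 + np.exp(-params[0]))
    a1, b1, a2, b2 = np.exp(params[1:])
    dens = w * stats.beta.pdf(x, a1, b1) + (1 - w) * stats.beta.pdf(x, a2, b2)
    return -np.sum(np.log(dens))

start = np.array([0.0, np.log(2), np.log(5), np.log(5), np.log(2)])
fit = optimize.minimize(neg_log_lik, start, args=(data,), method="Nelder-Mead")
print(fit.x)  # fitted parameters on the transformed scale
```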
Abstract:
This paper considers the issue of modeling fractional data observed on [0,1), (0,1] or [0,1]. Mixed continuous-discrete distributions are proposed. The beta distribution is used to describe the continuous component of the model since its density can have quite different shapes depending on the values of the two parameters that index the distribution. Properties of the proposed distributions are examined. Also, estimation based on maximum likelihood and conditional moments is discussed. Finally, practical applications that employ real data are presented.
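A minimal sketch of the density of such a mixed continuous-discrete model: point masses at 0 and/or 1 combined with a beta density on (0, 1). The parameter names are assumptions for illustration, not the paper's notation.

```python
# Minimal sketch of a mixed continuous-discrete density on [0, 1]:
# point masses p0 at 0 and p1 at 1, plus a beta(a, b) density on (0, 1).
# Parameter names are illustrative, not the paper's notation.
from scipy import stats

def inflated_beta_density(y, p0, p1, a, b):
    if y == 0.0:
        return p0                      # discrete mass at 0
    if y == 1.0:
        return p1                      # discrete mass at 1
    # continuous component, weighted so the whole measure integrates to 1
    return (1.0 - p0 - p1) * stats.beta.pdf(y, a, b)

print(inflated_beta_density(0.0, 0.1, 0.05, 2.0, 3.0))
print(inflated_beta_density(0.3, 0.1, 0.05, 2.0, 3.0))
```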
Abstract:
2000 Mathematics Subject Classification: 33C90, 62E99.
Abstract:
We give reasons why demographic parameters such as survival and reproduction rates are often modelled well in stochastic population simulation using beta distributions. In practice, it is frequently expected that these parameters will be correlated, for example with survival rates for all age classes tending to be high or low in the same year. We therefore discuss a method for producing correlated beta random variables by transforming correlated normal random variables, and show how it can be applied in practice by means of a simple example. We also note how the same approach can be used to produce correlated uniform, triangular, and exponential random variables.
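A minimal sketch of the transformation described here: draw correlated normal variables, map them to correlated uniforms with the normal CDF, and then to correlated betas with the beta quantile function. The correlation value and beta parameters are illustrative.

```python
# Minimal sketch: correlated beta random variables obtained by transforming
# correlated normal random variables (normal CDF -> uniform -> beta quantile).
# The correlation and beta parameters below are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rho = 0.8                                   # correlation of the underlying normals
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)

u = stats.norm.cdf(z)                       # correlated uniforms on (0, 1)
surv1 = stats.beta.ppf(u[:, 0], 8, 2)       # e.g. survival rate, age class 1
surv2 = stats.beta.ppf(u[:, 1], 6, 3)       # e.g. survival rate, age class 2

print(np.corrcoef(surv1, surv2)[0, 1])      # induced correlation on the beta scale
```

The correlation induced on the beta scale is close to, but not exactly equal to, the correlation of the underlying normals.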
Abstract:
Department of Statistics, Cochin University of Science and Technology
Abstract:
INTRODUCTION Extended-spectrum beta-lactamases (ESBL) and AmpC beta-lactamases (AmpC) are of concern for veterinary and public health because of their ability to cause treatment failure due to antimicrobial resistance in Enterobacteriaceae. The main objective was to assess the relative contribution (RC) of different types of meat to the exposure of consumers to ESBL/AmpC and their potential importance for human infections in Denmark. MATERIAL AND METHODS The prevalence of each genotype of ESBL/AmpC-producing E. coli in imported and nationally produced broiler meat, pork and beef was weighted by the meat consumption patterns. Data originated from the Danish surveillance program for antibiotic use and antibiotic resistance (DANMAP) from 2009 to 2011. DANMAP also provided data about human ESBL/AmpC cases in 2011, which were used to assess a possible genotype overlap. Uncertainty about the occurrence of ESBL/AmpC-producing E. coli in meat was assessed by inspecting beta distributions given the available data of the genotypes in each type of meat. RESULTS AND DISCUSSION Broiler meat represented the largest part (83.8%) of the estimated ESBL/AmpC-contaminated pool of meat compared to pork (12.5%) and beef (3.7%). CMY-2 was the genotype with the highest RC to human exposure (58.3%). However, this genotype is rarely found in human infections in Denmark. CONCLUSION The overlap between ESBL/AmpC genotypes in meat and human E. coli infections was limited. This suggests that meat might constitute a less important source of ESBL/AmpC exposure to humans in Denmark than previously thought - maybe because the use of cephalosporins is restricted in cattle and banned in poultry and pigs. Nonetheless, more detailed surveillance data are required to determine the contribution of meat compared to other sources, such as travelling, pets, water resources, community and hospitals in the pursuit of a full source attribution model.
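A minimal sketch of how uncertainty about an occurrence (prevalence) can be summarized with a beta distribution given s positive samples out of n; the counts and the uniform Beta(1, 1) prior below are assumptions for illustration, not DANMAP figures or the study's exact procedure.

```python
# Minimal sketch (counts are made up, not DANMAP data): prevalence uncertainty
# summarized as Beta(s + 1, n - s + 1), i.e. a uniform prior updated with
# s positives out of n samples.
from scipy import stats

s, n = 12, 200                              # hypothetical positives / sample size
prev = stats.beta(s + 1, n - s + 1)
print(prev.mean(), prev.interval(0.95))     # point estimate and 95% interval
```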
Abstract:
In the area of stress-strength models there has been a large amount of work regarding estimation of the reliability R = Pr(X2 < X1) when X1 and X2 are independent random variables belonging to the same univariate family of distributions. The algebraic form of R = Pr(X2 < X1) has been worked out for most of the well-known distributions, including the normal, uniform, exponential, gamma, Weibull and Pareto. However, there are still many other distributions for which the form of R is not known. We have identified at least 30 distributions with no known form for R. In this paper we consider some of these distributions and derive the corresponding forms for the reliability R. The calculations involve the use of various special functions.
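A minimal sketch of computing R = Pr(X2 < X1) numerically when X1 and X2 are independent beta variables; the parameters are chosen purely for illustration and this is not the paper's derivation.

```python
# Minimal sketch: stress-strength reliability R = Pr(X2 < X1) by numerical
# integration for two independent beta variables (illustrative parameters).
from scipy import stats
from scipy.integrate import quad

a1, b1 = 4.0, 2.0                           # strength X1 ~ Beta(a1, b1)
a2, b2 = 2.0, 3.0                           # stress   X2 ~ Beta(a2, b2)

# R = integral over (0, 1) of f_{X1}(x) * F_{X2}(x) dx
integrand = lambda x: stats.beta.pdf(x, a1, b1) * stats.beta.cdf(x, a2, b2)
R, _ = quad(integrand, 0.0, 1.0)
print(R)
```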
Abstract:
In this paper, we study the relationship between the failure rate and the mean residual life of doubly truncated random variables. Accordingly, we develop characterizations for the exponential, Pareto II and beta distributions. Further, we generalize the identities for the Pearson and the exponential family of distributions given respectively in Nair and Sankaran (1991) and Consul (1995). Applications of these measures in the context of length-biased models are also explored.
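For orientation, a minimal sketch of one common convention for the failure rate and mean residual life of a doubly truncated variable; the paper's exact definitions and notation may differ.

```latex
% One common convention (may differ from the paper's notation): for a lifetime X
% observed only on the interval (t_1, t_2), with density f and distribution F,
\[
  h(x \mid t_1, t_2) \;=\; \frac{f(x)}{F(t_2) - F(x)}, \qquad t_1 < x < t_2,
\]
\[
  m(x \mid t_1, t_2) \;=\; \mathbb{E}\!\left[X - x \,\middle|\, x < X < t_2\right]
  \;=\; \frac{\int_x^{t_2} \bigl(F(t_2) - F(u)\bigr)\,du}{F(t_2) - F(x)}.
\]
```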
Abstract:
Exercises and solutions in PDF
Abstract:
Exercises and solutions in LaTeX
Abstract:
Exercises and solutions in LaTeX
Abstract:
Exercises and solutions in PDF
Abstract:
This thesis describes the procedure and results from four years research undertaken through the IHD (Interdisciplinary Higher Degrees) Scheme at Aston University in Birmingham, sponsored by the SERC (Science and Engineering Research Council) and Monk Dunstone Associates, Chartered Quantity Surveyors. A stochastic networking technique VERT (Venture Evaluation and Review Technique) was used to model the pre-tender costs of public health, heating ventilating, air-conditioning, fire protection, lifts and electrical installations within office developments. The model enabled the quantity surveyor to analyse, manipulate and explore complex scenarios which previously had defied ready mathematical analysis. The process involved the examination of historical material costs, labour factors and design performance data. Components and installation types were defined and formatted. Data was updated and adjusted using mechanical and electrical pre-tender cost indices and location, selection of contractor, contract sum, height and site condition factors. Ranges of cost, time and performance data were represented by probability density functions and defined by constant, uniform, normal and beta distributions. These variables and a network of the interrelationships between services components provided the framework for analysis. The VERT program, in this particular study, relied upon Monte Carlo simulation to model the uncertainties associated with pre-tender estimates of all possible installations. The computer generated output in the form of relative and cumulative frequency distributions of current element and total services costs, critical path analyses and details of statistical parameters. From this data alternative design solutions were compared, the degree of risk associated with estimates was determined, heuristics were tested and redeveloped, and cost significant items were isolated for closer examination. The resultant models successfully combined cost, time and performance factors and provided the quantity surveyor with an appreciation of the cost ranges associated with the various engineering services design options.
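A minimal sketch of the Monte Carlo idea: element costs represented by beta distributions rescaled onto plausible ranges and summed to give a distribution of total services cost. The component names, ranges, and shape parameters are invented for illustration, and the network/critical-path logic of VERT is not reproduced.

```python
# Minimal sketch (illustrative components, ranges and shapes; not the VERT model):
# Monte Carlo simulation of total services cost with beta-distributed elements.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

def beta_cost(low, high, a, b, size):
    # beta-distributed cost rescaled from (0, 1) onto the range (low, high)
    return low + (high - low) * rng.beta(a, b, size)

hvac     = beta_cost(200_000, 350_000, 2.0, 3.0, n)
electric = beta_cost(150_000, 260_000, 2.5, 2.5, n)
lifts    = beta_cost( 80_000, 140_000, 3.0, 2.0, n)

total = hvac + electric + lifts
print(np.mean(total), np.percentile(total, [5, 50, 95]))
```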
Abstract:
Starting from the Operophtera brumata L. records collected between 1973 and 2000 by the Light Trap Network in Hungary, we introduce a simple theta-logistic population dynamical model based only on endogenous and exogenous factors. We create a set of indicators from which we select the elements that most effectively improve the fit. Then we extend the basic model with additive climatic factors. Parameter optimization minimizes the root mean square error, and the best model is chosen according to the Akaike Information Criterion. Finally, we run the calibrated extended model with daily outputs of the regional climate model RegCM3.1, using 1961-1990 as the reference period and 2021-2050 and 2071-2100 as future projections. The results for the three time intervals are fitted with beta distributions and compared statistically. The expected changes are discussed.
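A minimal sketch of theta-logistic population dynamics of the kind referred to here; the parameter values are illustrative and the climatic covariates of the extended model are omitted.

```python
# Minimal sketch of theta-logistic dynamics (illustrative parameters only;
# the paper's extended model also includes additive climatic factors).
import numpy as np

def theta_logistic(n0, r, K, theta, steps):
    # N_{t+1} = N_t * exp(r * (1 - (N_t / K) ** theta))
    n = np.empty(steps + 1)
    n[0] = n0
    for t in range(steps):
        n[t + 1] = n[t] * np.exp(r * (1.0 - (n[t] / K) ** theta))
    return n

trajectory = theta_logistic(n0=10.0, r=0.5, K=100.0, theta=1.2, steps=30)
print(trajectory[-5:])
```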