914 results for distribution (probability theory)


Relevance: 80.00%

Abstract:

This paper presents the techniques of likelihood prediction for generalized linear mixed models. The method of likelihood prediction is explained through a series of examples, from a classical one to more complicated ones. The examples show that, in simple cases, likelihood prediction (LP) coincides with established best frequentist practice such as the best linear unbiased predictor. The paper outlines a way to deal with covariate uncertainty while producing predictive inference. Using a Poisson error-in-variables generalized linear model, it is shown that in complicated cases LP produces better results than already known methods.
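
As a toy illustration of the claim that likelihood prediction coincides with the BLUP in simple cases, here is a minimal sketch for a one-way Gaussian random-effects model with known variance components; the model and all names below are illustrative assumptions, not taken from the paper.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy model (assumed, not the paper's): y_j = mu + u + e_j,
    # u ~ N(0, sigma_u^2), e_j ~ N(0, sigma_e^2), variances known.
    rng = np.random.default_rng(42)
    mu, sigma_u, sigma_e, n = 10.0, 2.0, 1.0, 5
    y = mu + rng.normal(0, sigma_u) + rng.normal(0, sigma_e, size=n)

    # Best linear unbiased predictor (shrinkage) of the random effect u.
    shrink = n * sigma_u**2 / (n * sigma_u**2 + sigma_e**2)
    u_blup = shrink * (y.mean() - mu)

    # Likelihood prediction: maximize the joint log-likelihood of (y, u) in u.
    def neg_joint_loglik(u):
        return (np.sum((y - mu - u) ** 2) / (2 * sigma_e**2)
                + u**2 / (2 * sigma_u**2))

    u_lp = minimize_scalar(neg_joint_loglik).x
    print(u_blup, u_lp)  # identical up to optimizer tolerance in this Gaussian case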

Relevance: 80.00%

Abstract:

The p-median problem is often used to locate P service facilities in a geographically distributed population. Important for the performance of such a model is the distance measure. The distance measure can vary if the accuracy of the road network varies. The first aim of this study is to analyze how the optimal location solutions vary, using the p-median model, when the road network is altered. It is hard to find an exact optimal solution for p-median problems. Therefore, two heuristic solutions are applied in this study: simulated annealing and a classic heuristic. The secondary aim is to compare the optimal location solutions obtained with the different algorithms for a large p-median problem. The investigation is conducted by means of a case study in a rural region with an asymmetrically distributed population, Dalecarlia. The study shows that the use of more accurate road networks gives better solutions for optimal location, regardless of which algorithm is used and regardless of how many service facilities are optimized for. It is also shown that the simulated annealing algorithm is not only much faster than the classic heuristic used here, but in most cases also gives better location solutions.
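
A minimal sketch of the simulated-annealing heuristic for a p-median instance, using Euclidean distances on synthetic points; the actual study uses road-network distances, and all parameters below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 100, size=(200, 2))                  # synthetic demand points
    D = np.linalg.norm(coords[:, None] - coords[None], axis=2)   # stand-in for road distances
    P = 5                                                        # facilities to locate

    def cost(facilities):
        # total distance from every demand point to its nearest open facility
        return D[:, facilities].min(axis=1).sum()

    # simulated annealing over P-subsets of candidate sites
    current = rng.choice(len(coords), P, replace=False)
    best, best_cost, T = current.copy(), cost(current), 100.0
    for step in range(20000):
        cand = current.copy()
        cand[rng.integers(P)] = rng.integers(len(coords))        # move one facility
        if len(set(cand)) < P:
            continue                                             # skip duplicate sites
        delta = cost(cand) - cost(current)
        if delta < 0 or rng.random() < np.exp(-delta / T):
            current = cand
            if cost(current) < best_cost:
                best, best_cost = current.copy(), cost(current)
        T *= 0.9995                                              # geometric cooling
    print(best, best_cost)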

Relevance: 80.00%

Abstract:

Generalized linear mixed models are flexible tools for modeling non-normal data and are useful for accommodating overdispersion in Poisson regression models with random effects. Their main difficulty resides in parameter estimation, because there is no analytic solution for the maximization of the marginal likelihood. Many methods have been proposed for this purpose, and many of them are implemented in software packages. The purpose of this study is to compare the performance of three different statistical principles (marginal likelihood, extended likelihood, and Bayesian analysis) via simulation studies. Real data on contact wrestling are used for illustration.
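
To make the marginal-likelihood principle concrete: for a Poisson random-intercept model, the intractable integral over the random effect can be approximated by Gauss-Hermite quadrature. The sketch below is a standard textbook construction, not the study's implementation; the model and all settings are illustrative.

    import numpy as np
    from scipy.special import gammaln

    # Poisson random-intercept model: y_ij ~ Poisson(exp(b0 + u_i)), u_i ~ N(0, sigma^2)
    nodes, weights = np.polynomial.hermite.hermgauss(20)

    def marginal_loglik(b0, sigma, y):
        # y: list of arrays, one per cluster; the random intercept is
        # integrated out with 20-point Gauss-Hermite quadrature
        total = 0.0
        for yi in y:
            u = np.sqrt(2.0) * sigma * nodes              # change of variables for N(0, sigma^2)
            eta = b0 + u[:, None]                         # shape (nodes, observations)
            logf = (yi * eta - np.exp(eta) - gammaln(yi + 1)).sum(axis=1)
            total += np.log((weights * np.exp(logf)).sum() / np.sqrt(np.pi))
        return total

    rng = np.random.default_rng(1)
    b0_true, sigma_true = 1.0, 0.5
    y = [rng.poisson(np.exp(b0_true + rng.normal(0, sigma_true)), size=8)
         for _ in range(50)]
    print(marginal_loglik(1.0, 0.5, y))   # plug into an optimizer to estimate (b0, sigma)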

Relevance: 80.00%

Abstract:

The issue of information sharing and exchange is one of the most important issues in the areas of artificial intelligence and knowledge-based systems (KBSs), and indeed in the broader areas of computer and information technology. This paper deals with a special case of this issue by carrying out a case study of information sharing between two well-known heterogeneous uncertain reasoning models: the certainty factor model and the subjective Bayesian method. More precisely, this paper discovers a family of exactly isomorphic transformations between these two uncertain reasoning models. More interestingly, among the isomorphic transformation functions in this family, different ones can handle different degrees to which a domain expert is positive or negative when performing such a transformation task. The direct motivation for the investigation lies in a realistic consideration. In the past, expert systems exploited mainly these two models to deal with uncertainty; in other words, many stand-alone expert systems that use the two uncertain reasoning models are available. If there is a reasonable transformation mechanism between these two uncertain reasoning models, we can use the Internet to couple these pre-existing expert systems together so that the integrated systems are able to exchange and share useful information with each other, thereby improving their performance through cooperation. The issue of transformation between heterogeneous uncertain reasoning models is also significant in the research area of multi-agent systems, because different agents in a multi-agent system could employ different expert systems with heterogeneous uncertain reasoning models for their action selection, and information sharing and exchange between agents is unavoidable. In addition, we make clear the relationship between the certainty factor model and probability theory.
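
One concrete bridge between the two models can be sketched with the classical MYCIN-style certainty-factor definition and the PROSPECTOR-style odds-likelihood update. This is a textbook mapping offered for illustration only, not the family of isomorphic transformations derived in the paper.

    def cf_to_posterior(cf, prior):
        # classical MYCIN-style certainty factor -> posterior probability:
        # CF >= 0 raises the prior toward 1, CF < 0 lowers it toward 0
        if cf >= 0:
            return prior + cf * (1 - prior)
        return prior * (1 + cf)

    def posterior_to_likelihood_ratio(posterior, prior):
        # PROSPECTOR-style subjective Bayesian sufficiency measure LS,
        # from the odds update O(H|E) = LS * O(H), with O(p) = p / (1 - p)
        odds = lambda p: p / (1 - p)
        return odds(posterior) / odds(prior)

    prior, cf = 0.3, 0.6
    post = cf_to_posterior(cf, prior)                  # 0.72
    print(post, posterior_to_likelihood_ratio(post, prior))   # LS = 6.0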

Relevance: 80.00%

Abstract:

This paper makes use of the idea of prediction intervals (PIs) to capture the uncertainty associated with wind power generation in power systems. Since the forecasting errors cannot be appropriately modeled using probability distribution functions, here we employ a powerful nonparametric approach called the lower upper bound estimation (LUBE) method to construct the PIs. The proposed LUBE method uses a new framework based on a combination of PIs to overcome the performance instability of the neural networks (NNs) used in the LUBE method. Also, a new fuzzy-based cost function is proposed with the purpose of having more freedom and flexibility in adjusting the NN parameters used for the construction of PIs. In comparison with the other cost functions in the literature, this new formulation allows decision-makers to apply their preferences for satisfying the PI coverage probability and PI normalized average width individually. As the optimization tool, a bat algorithm with a new modification is introduced to solve the problem. The feasibility and satisfactory performance of the proposed method are examined using datasets taken from different wind farms in Australia.
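
The two PI quality measures named above have standard definitions in this literature; a minimal sketch on synthetic data (all values illustrative):

    import numpy as np

    def picp(y, lower, upper):
        # PI coverage probability: fraction of targets inside the interval
        return np.mean((y >= lower) & (y <= upper))

    def pinaw(y, lower, upper):
        # PI normalized average width: mean width scaled by the target range
        return np.mean(upper - lower) / (y.max() - y.min())

    rng = np.random.default_rng(3)
    y = rng.normal(size=500)                     # targets
    center = y + rng.normal(0, 0.2, 500)         # imperfect point forecasts
    lower, upper = center - 0.5, center + 0.5    # candidate interval bounds
    print(picp(y, lower, upper), pinaw(y, lower, upper))

A cost function of the LUBE type trades these two off: coverage should reach a nominal level while the normalized width stays small.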

Relevance: 80.00%

Abstract:

The idea of considering imprecision in probabilities is old, beginning with the work of George Boole, who in 1854 sought to reconcile classical logic, which allows the modeling of complete ignorance, with probabilities. In 1921, John Maynard Keynes in his book made explicit use of intervals to represent imprecision in probabilities. But it was only with the work of Walley in 1991 that principles were established that should be respected by a probability theory dealing with imprecision. With the emergence of the theory of fuzzy sets by Lotfi Zadeh in 1965, another way of dealing with uncertainty and imprecision of concepts appeared. Several ways of incorporating Zadeh's ideas into probabilities were soon proposed, to deal with imprecision either in the events associated with the probabilities or in the values of the probabilities. In particular, beginning in 2003, James Buckley developed a probability theory in which the values of the probabilities are fuzzy numbers. This fuzzy probability follows principles analogous to Walley's imprecise probabilities. On the other hand, the use of real numbers between 0 and 1 as truth degrees, as originally proposed by Zadeh, has the drawback of using very precise values to deal with uncertainty (how can one distinguish an element that satisfies a property to degree 0.423 from one that satisfies it to degree 0.424?). This motivated the development of several extensions of fuzzy set theory that include some kind of imprecision. This work considers the extension proposed by Krassimir Atanassov in 1983, which adds an extra degree of uncertainty to model the hesitation in assigning a membership degree: one value indicates the degree to which the object belongs to the set, while the other indicates the degree to which it does not belong. In Zadeh's fuzzy set theory, this non-membership degree is, by default, the complement of the membership degree. In Atanassov's approach, the non-membership degree is somewhat independent of the membership degree, and the difference between the non-membership degree and the complement of the membership degree reveals the hesitation in assigning a membership degree. This extension is today called Atanassov's intuitionistic fuzzy set theory. It is worth noting that the term "intuitionistic" here has no relation to the term as known in the context of intuitionistic logic. In this work, two proposals for interval probability are developed: the restricted interval probability and the unrestricted interval probability. Two notions of fuzzy probability are also introduced: the constrained fuzzy probability and the unconstrained fuzzy probability. Finally, two notions of intuitionistic fuzzy probability are introduced: the restricted intuitionistic fuzzy probability and the unrestricted intuitionistic fuzzy probability.
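
For a flavor of what an interval probability assessment looks like computationally, here is a minimal sketch using one standard coherence notion from the imprecise-probability literature (reachable probability intervals on the atoms of a finite space); the thesis's restricted/unrestricted definitions need not coincide with it.

    def proper(intervals):
        # a precise distribution must exist inside all the bounds
        L = sum(lo for lo, hi in intervals)
        U = sum(hi for lo, hi in intervals)
        return L <= 1.0 <= U

    def reachable(intervals):
        # every bound is attainable by some distribution respecting the others:
        # u_i + sum of other lowers <= 1 and l_i + sum of other uppers >= 1
        L = sum(lo for lo, _ in intervals)
        U = sum(hi for _, hi in intervals)
        return all(hi + (L - lo) <= 1.0 and lo + (U - hi) >= 1.0
                   for lo, hi in intervals)

    atoms = [(0.10, 0.40), (0.20, 0.50), (0.25, 0.45)]   # [lower, upper] per atom
    print(proper(atoms), reachable(atoms))               # True True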

Relevance: 80.00%

Abstract:

The aim of this work is to interpret and analyze the problem of induction from a perspective founded on set theory and probability theory, as a basis for resolving its negative philosophical implications for systems of inductive logic in general. Owing to the importance of the problem and the relatively recent developments in these fields of knowledge (early 20th century), as well as the visible relations between them and the process of inductive inference, a field of relatively unexplored and promising possibilities has been opened. The key point of the study consists in modeling the information acquisition process using concepts of set theory, followed by a treatment using probability theory. Throughout the study, two major obstacles to the probabilistic justification were identified: the problem of defining the concept of probability and that of defining rationality, as well as the subtle connection between the two. This finding called for greater care in choosing the criterion of rationality to be considered, in order to facilitate the treatment of the problem through specific situations without losing their original characteristics, so that the conclusions can be extended to classic cases such as the question of the continuity of the sunrise.
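
The sunrise question at the end has a classical probabilistic treatment, Laplace's rule of succession, which makes a convenient worked example here; the connection to this thesis's particular framework is only illustrative.

    from fractions import Fraction

    def rule_of_succession(successes, trials):
        # Laplace: with a uniform prior on the unknown success probability,
        # the predictive probability of one more success is (s + 1) / (n + 2)
        return Fraction(successes + 1, trials + 2)

    # after 10,000 consecutive sunrises, the predictive probability of one more:
    print(rule_of_succession(10_000, 10_000))   # 10001/10002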

Relevance: 80.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 80.00%

Abstract:

Forest models are tools for explaining and predicting the dynamics of forest ecosystems. They simulate forest behavior by integrating information on the underlying processes in trees, soil and atmosphere. Bayesian calibration is the application of probability theory to parameter estimation. It is a method, applicable to all models, that quantifies output uncertainty and identifies key parameters and variables. This study aims at testing the Bayesian procedure for calibration on different types of forest models, to evaluate their performances and the uncertainties associated with them. In particular, we aimed at 1) applying a Bayesian framework to calibrate forest models and testing their performance in different biomes and environmental conditions, 2) identifying and solving structure-related issues in simple models, and 3) identifying the advantages of the additional information made available when calibrating forest models with a Bayesian approach. We applied the Bayesian framework to calibrate the Prelued model on eight Italian eddy-covariance sites in Chapter 2. The ability of Prelued to reproduce the estimated Gross Primary Productivity (GPP) was tested over contrasting natural vegetation types that represented a wide range of climatic and environmental conditions. The issues related to Prelued's multiplicative structure were the main topic of Chapter 3: several different MCMC-based procedures were applied within a Bayesian framework to calibrate the model, and their performances were compared. A more complex model was applied in Chapter 4, focusing on the application of the physiology-based model HYDRALL to the forest ecosystem of Lavarone (IT) to evaluate the importance of additional information in the calibration procedure and its impact on model performance, model uncertainty, and parameter estimation. Overall, the Bayesian technique proved to be an excellent and versatile tool to successfully calibrate forest models of different structure and complexity, on different kinds and numbers of variables and with different numbers of parameters involved.
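
The mechanics of Bayesian calibration can be sketched with a generic random-walk Metropolis sampler on a toy process model with Gaussian observation error; the forest models in the study are far richer, and everything below (model, priors, tuning constants) is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(7)

    def f(theta, x):                       # toy "process model"
        return theta[0] * (1 - np.exp(-theta[1] * x))

    x = np.linspace(0, 10, 40)
    theta_true = np.array([5.0, 0.4])
    y_obs = f(theta_true, x) + rng.normal(0, 0.3, x.size)

    def log_post(theta):
        if np.any(theta <= 0) or np.any(theta > 10):   # uniform prior box
            return -np.inf
        resid = y_obs - f(theta, x)
        return -0.5 * np.sum(resid**2) / 0.3**2        # Gaussian likelihood, known sd

    theta, chain = np.array([1.0, 1.0]), []
    lp = log_post(theta)
    for _ in range(20000):
        prop = theta + rng.normal(0, 0.05, 2)          # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:        # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain.append(theta)
    chain = np.array(chain)
    print(chain[10000:].mean(axis=0))                  # posterior mean after burn-in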

Relevance: 80.00%

Abstract:

In the setting of high-dimensional linear models with Gaussian noise, we investigate the possibility of confidence statements connected to model selection. Although there exist numerous procedures for adaptive (point) estimation, the construction of adaptive confidence regions is severely limited (cf. Li in Ann Stat 17:1001–1008, 1989). The present paper sheds new light on this gap. We develop exact and adaptive confidence regions for the best approximating model in terms of risk. One of our constructions is based on a multiscale procedure and a particular coupling argument. Utilizing exponential inequalities for noncentral χ²-distributions, we show that the risk and quadratic loss of all models within our confidence region are uniformly bounded by the minimal risk times a factor close to one.
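
Schematically, and in assumed notation rather than the paper's, the object being covered is the best approximating model in terms of risk, and the stated guarantee has the following shape:

    Y = X\beta + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2 I_n),
    \qquad
    m^\star \in \operatorname*{arg\,min}_m R(m), \quad
    R(m) = \mathbb{E}\,\lVert X\beta - \hat f_m \rVert^2,

    \mathbb{P}\Bigl( R(m) \le (1 + \delta_n)\, R(m^\star)
      \ \text{for all } m \in \hat C_\alpha \Bigr) \;\ge\; 1 - \alpha.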

Relevance: 80.00%

Abstract:

We use Voiculescu’s free probability theory to prove the existence of prime factors, hence answering a longstanding problem in the theory of von Neumann algebras.

Relevance: 80.00%

Abstract:

Despite many diverse theories that address closely related themes, e.g., probability theory, algorithmic complexity, cryptanalysis, and pseudorandom number generation, a near-void remains in constructive methods certified to yield the desired "random" output. Herein, we provide explicit techniques to produce broad sets of both highly irregular finite sequences and normal infinite sequences, based on constructions and properties derived from approximate entropy (ApEn), a computable formulation of sequential irregularity. Furthermore, for infinite sequences, we considerably refine normality by providing methods for constructing diverse classes of normal numbers, classified by the extent to which initial segments deviate from maximal irregularity.
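
For reference, the standard definition of approximate entropy for a length-N sequence u, with block length m and tolerance r, is Pincus's formulation, on which refinements of this kind build:

    C_i^m(r) = \frac{\#\{\, j : \max_{0 \le k < m} |u_{i+k} - u_{j+k}| \le r \,\}}{N - m + 1},
    \qquad
    \Phi^m(r) = \frac{1}{N - m + 1} \sum_{i=1}^{N - m + 1} \log C_i^m(r),

    \mathrm{ApEn}(m, r, N) = \Phi^m(r) - \Phi^{m+1}(r).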

Relevance: 80.00%

Abstract:

Application of electric fields tangent to the plane of a confined patch of fluid bilayer membrane can create lateral concentration gradients of the lipids. A thermodynamic model of this steady-state behavior is developed for binary systems and tested with experiments in supported lipid bilayers. The model uses Flory’s approximation for the entropy of mixing and allows for effects arising when the components have different molecular areas. In the special case of equal area molecules the concentration gradient reduces to a Fermi–Dirac distribution. The theory is extended to include effects from charged molecules in the membrane. Calculations show that surface charge on the supporting substrate substantially screens electrostatic interactions within the membrane. It also is shown that concentration profiles can be affected by other intermolecular interactions such as clustering. Qualitative agreement with this prediction is provided by comparing phosphatidylserine- and cardiolipin-containing membranes.
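
The Fermi-Dirac form mentioned for the equal-area case can be written schematically as follows; the notation here is assumed for illustration, not copied from the paper:

    \phi(x) \;=\; \frac{1}{1 + \exp\!\bigl[\,(E(x) - \mu)/k_B T\,\bigr]},

where \phi(x) is the local mole fraction of the drifting component, E(x) its potential energy in the applied tangential field (linear in x for a uniform field), and \mu a constant fixed by the total amount of that component in the confined patch.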

Relevance: 80.00%

Abstract:

Allelic association between pairs of loci is derived in terms of the association probability ρ as a function of recombination θ, effective population size N, linear systematic pressure v, and time t, predicting both ρrt, the decrease of association from founders, and ρct, the increase by genetic drift, with ρt = ρrt + ρct. These results conform to the Malecot equation, with time replaced by distance on the genetic map, or on the physical map if recombination in the region is uniform. Earlier evidence suggested that ρ is less sensitive to variations in marker allele frequencies than alternative metrics for which there is no probability theory. This robustness is confirmed for six alternatives in eight samples. In none of these 48 tests was the residual variance as small as for ρ. Overall, efficiency was less than 80% for all alternatives, and less than 30% for two of them. Efficiency of the alternatives did not increase when information was estimated simultaneously. The swept radius within which substantial values of ρ are conserved lies between 385 and 893 kb, but deviation of parameters between measures is enormously significant. The large effort now being devoted to allelic association has little value unless the ρ metric, with the strongest theoretical basis and the least sensitivity to marker allele frequencies, is used for mapping of marker association and localization of disease loci.
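
For reference, the Malecot equation invoked above is commonly written in this literature in the following form; the symbols here are assumed for illustration, not copied from the paper:

    \rho_d \;=\; (1 - L)\, M e^{-\varepsilon d} + L,
    \qquad
    \rho_t \;=\; \rho_{rt} + \rho_{ct},

where d is distance on the genetic (or physical) map, M the association at zero distance, \varepsilon the rate of exponential decline, and L the residual association at large distance.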

Relevance: 80.00%

Abstract:

The fundamental question "Are sequential data random?" arises in myriad contexts, often with severe data length constraints. Furthermore, there is frequently a critical need to delineate nonrandom sequences in terms of closeness to randomness, e.g., to evaluate the efficacy of therapy in medicine. We address both these issues from a computable framework via a quantification of regularity: ApEn (approximate entropy), which defines maximal randomness for sequences of arbitrary length, and we indicate its applicability to sequences as short as N = 5 points. An infinite sequence formulation of randomness is introduced that retains the operational (and computable) features of the finite case. In the infinite sequence setting, we indicate how the "foundational" definition of independence in probability theory and the definition of normality in number theory reduce to limit theorems without rates of convergence, and we utilize ApEn to address rates of convergence (of a deficit from maximal randomness), refining the aforementioned concepts in a computationally essential manner. Representative applications among many are indicated to assess (i) random number generation output; (ii) well-shuffled arrangements; and (iii) (the quality of) bootstrap replicates.
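
A minimal, self-contained implementation of ApEn (Pincus's Φ^m difference, including self-matches); the parameter choices below are illustrative only.

    import numpy as np

    def apen(u, m=2, r=0.2):
        # approximate entropy of a 1-D sequence: Phi_m - Phi_{m+1}, where
        # Phi_m averages the log-frequency of length-m template matches
        u = np.asarray(u, dtype=float)
        N = len(u)
        def phi(m):
            blocks = np.array([u[i:i + m] for i in range(N - m + 1)])
            C = [np.mean(np.max(np.abs(blocks - b), axis=1) <= r)
                 for b in blocks]
            return np.mean(np.log(C))
        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(11)
    print(apen(rng.integers(0, 2, 300), m=2, r=0.5))   # fair coin flips: near log 2
    print(apen(np.tile([0, 1], 150), m=2, r=0.5))      # periodic 0101...: near 0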