920 results for Bayesian statistical decision theory
Abstract:
Fractal mathematics has been used to characterize water and solute transport in porous media and also to characterize and simulate porous media properties. The objective of this study was to evaluate the correlation between the soil infiltration parameters sorptivity (S) and time exponent (n) and the parameters fractal dimension (D) and Hurst exponent (H). For this purpose, ten horizontal columns with pure (either clay or loam) and heterogeneous porous media (clay and loam distributed in layers along the column) were simulated following the distribution of a deterministic Cantor bar with a fractal dimension of approximately 0.63. Horizontal water infiltration experiments were then simulated using the Hydrus 2D software. The sorptivity (S) and time exponent (n) parameters of the Philip equation were estimated for each simulation using the nonlinear regression procedure of the statistical software package SAS®. Sorptivity increased with the loam content of the columns, which was attributed to the relation of S to the capillary radius. The time exponent estimated by nonlinear regression was less than the traditional value of 0.5. The fractal dimension estimated from the Hurst exponent was 17.5% lower than the fractal dimension of the Cantor bar used to generate the columns.
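The regression step lends itself to a brief illustration. The following is a minimal sketch, in Python with scipy rather than the SAS NLIN procedure used in the study, of fitting the Philip-type power law I(t) = S * t**n to cumulative horizontal infiltration data; the data and parameter values are synthetic and purely illustrative.

import numpy as np
from scipy.optimize import curve_fit

def philip(t, S, n):
    """Cumulative horizontal infiltration I(t) = S * t**n (classically n = 0.5)."""
    return S * t**n

# synthetic "observations" generated with S = 0.8 and n = 0.45, plus a little noise
rng = np.random.default_rng(0)
t = np.linspace(0.1, 10.0, 50)                    # time
I_obs = 0.8 * t**0.45 + rng.normal(0.0, 0.01, t.size)

(S_hat, n_hat), cov = curve_fit(philip, t, I_obs, p0=[1.0, 0.5])
print(f"S = {S_hat:.3f}, n = {n_hat:.3f}")        # recovers the synthetic S and n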
Abstract:
We have investigated the structure of vertically coupled double quantum dots at zero magnetic field within local-spin-density functional theory. The dots are identical and have a finite width, and the whole system is axially symmetric. We first discuss the effect of thickness on the addition spectrum of a single dot. Next, we describe the structure of the coupled dots as a function of the interdot distance for different electron numbers. Addition spectra, Hund's rule, and molecular-type configurations are discussed. It is shown that self-interaction corrections to the density-functional results do not play a very important role in the calculated addition spectra.
Abstract:
We consider the effects of external multiplicative white noise on the relaxation time of a general representation of a bistable system from the points of view provided by two quite different theoretical approaches: the classical Stratonovich decoupling of correlations and the new method due to Jung and Risken. Experimental results, obtained from a bistable electronic circuit, are compared with the theoretical predictions. We show that the phenomenon of critical slowing down appears as a function of the noise parameters, thereby providing a correct characterization of a noise-induced transition.
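As an illustration of the type of calculation involved, and not of the specific circuit or model studied in the paper, the sketch below integrates a generic Landau-type bistable system with multiplicative white noise, dx = (x - x**3) dt + sigma*x dW in the Stratonovich interpretation, using the stochastic Heun scheme, and estimates a relaxation time as the integral of the normalized stationary autocorrelation function, one standard definition in this literature. All parameter values are arbitrary.

import numpy as np

def simulate(sigma, dt=1e-3, n_steps=400_000, x0=1.0, seed=1):
    """Stochastic Heun (Stratonovich) integration of dx = (x - x**3) dt + sigma*x dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for k in range(n_steps - 1):
        dw = rng.normal(0.0, np.sqrt(dt))
        f, g = x[k] - x[k]**3, sigma * x[k]
        xp = x[k] + f*dt + g*dw                              # predictor step
        x[k+1] = x[k] + 0.5*(f + xp - xp**3)*dt + 0.5*(g + sigma*xp)*dw
    return x

def relaxation_time(x, dt, max_lag=5_000):
    """Integral of the normalized autocorrelation function, estimated via FFT."""
    x = x - x.mean()
    n = len(x)
    spec = np.fft.rfft(x, 2*n)
    acf = np.fft.irfft(spec * np.conj(spec))[:n] / np.arange(n, 0, -1)
    return np.sum(acf[:max_lag] / acf[0]) * dt

for sigma in (0.3, 0.6, 1.0):                                # scan the noise strength
    x = simulate(sigma)
    print(sigma, relaxation_time(x[50_000:], 1e-3))          # discard the transient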
Abstract:
The general theory of nonlinear relaxation times is developed for the case of Gaussian colored noise. General expressions are obtained and applied to the study of the characteristic decay time of unstable states in different situations, including white and colored noise, with emphasis on the case of distributed initial conditions. Universal effects of the coupling between colored noise and random initial conditions are predicted.
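A basic ingredient of such calculations, exponentially correlated Gaussian (colored) noise, is easy to generate as an Ornstein-Uhlenbeck process. The sketch below uses the exact one-step update for a process with correlation time tau and intensity D, so that <eps(t) eps(s)> = (D/tau) * exp(-|t - s|/tau); this parametrization is a common convention assumed here for illustration, not taken from the paper.

import numpy as np

def colored_noise(n_steps, dt, tau, D, seed=0):
    """Ornstein-Uhlenbeck (exponentially correlated Gaussian) noise, exact update."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-dt / tau)
    eps = np.empty(n_steps)
    eps[0] = rng.normal(0.0, np.sqrt(D / tau))           # start in the stationary state
    for k in range(n_steps - 1):
        eps[k+1] = rho * eps[k] + np.sqrt((D / tau) * (1.0 - rho**2)) * rng.normal()
    return eps

# eps can then drive, for instance, the decay of an unstable state,
# x' = a*x - b*x**3 + eps(t), with x(0) itself drawn at random
# (the distributed initial conditions emphasized above).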
Abstract:
We present the relationship between the nonlinear-relaxation-time (NLRT) and quasideterministic approaches to characterizing the decay of an unstable state. The universal character of the NLRT is established. The theoretical results are applied to study the dynamical relaxation of the Landau model in one and in n variables, as well as of a laser model.
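The nonlinear relaxation time itself is straightforward to estimate by Monte Carlo. The sketch below does so for the one-variable Landau model with additive white noise and Gaussian-distributed initial conditions, using the usual definition T = integral of (<x^2(t)> - <x^2>_st) / (<x^2(0)> - <x^2>_st) dt; all parameter values are arbitrary and the code is only a schematic illustration, not the calculation of the paper.

import numpy as np

# Landau model dx/dt = a*x - b*x**3 + xi(t), <xi(t) xi(t')> = 2*D*delta(t - t'),
# decaying from the unstable state x = 0.
a, b, D = 1.0, 1.0, 1e-3
dt, n_steps, n_traj = 1e-3, 20_000, 5_000
sigma0 = 0.05                                    # spread of the Gaussian initial condition

rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma0, n_traj)              # distributed initial conditions
m2 = np.empty(n_steps)                           # ensemble average <x^2>(t)
for k in range(n_steps):
    m2[k] = np.mean(x**2)
    x += (a*x - b*x**3)*dt + np.sqrt(2*D*dt)*rng.normal(size=n_traj)

m2_st = m2[-1]                                   # late-time value taken as stationary
T = np.sum((m2_st - m2) / (m2_st - m2[0])) * dt  # nonlinear relaxation time
print(f"NLRT ~ {T:.2f}")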
Abstract:
Much research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.
Abstract:
We extend the recent microscopic analysis of extremal dyonic Kaluza-Klein (D0-D6) black holes to cover the regime of fast rotation in addition to slow rotation. Rapidly rotating black holes, in contrast to slowly rotating ones, have nonzero angular velocity and possess ergospheres, so they are more similar to the Kerr black hole. The D-brane model reproduces their entropy exactly, but the mass gets renormalized from weak to strong coupling, in agreement with recent macroscopic analyses of rotating attractors. We discuss how the existence of the ergosphere and superradiance manifest themselves within the microscopic model. In addition, we show in full generality how Myers-Perry black holes are obtained as a limit of Kaluza-Klein black holes, and discuss the slow and fast rotation regimes and superradiance in this context.
Abstract:
We conduct a large-scale comparative study on linearly combining superparent-one-dependence estimators (SPODEs), a popular family of seminaive Bayesian classifiers. Altogether, 16 model selection and weighing schemes, 58 benchmark data sets, and various statistical tests are employed. This paper's main contributions are threefold. First, it formally presents each scheme's definition, rationale, and time complexity and hence can serve as a comprehensive reference for researchers interested in ensemble learning. Second, it offers bias-variance analysis for each scheme's classification error performance. Third, it identifies effective schemes that meet various needs in practice. This leads to accurate and fast classification algorithms which have an immediate and significant impact on real-world applications. Another important feature of our study is using a variety of statistical tests to evaluate multiple learning methods across multiple data sets.
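To make the linear combination concrete, the following is a minimal sketch of combining SPODEs over discrete, integer-coded attributes with Laplace smoothing: each SPODE with superparent sp models P(y, x) as P(y, x_sp) * prod_{i != sp} P(x_i | y, x_sp), and the ensemble prediction is sum_sp w_sp * P_sp(y, x). Uniform weights give the AODE special case; the 16 schemes compared in the paper differ in how the weights are selected or fitted, and none of the code below is taken from that work.

import numpy as np
from collections import defaultdict

class SPODEEnsemble:
    """Linearly combined superparent-one-dependence estimators (sketch)."""

    def __init__(self, smoothing=1.0):
        self.m = smoothing                        # Laplace smoothing constant

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.n, self.d = X.shape
        self.classes_ = np.unique(y)
        self.vals_ = [np.unique(X[:, j]) for j in range(self.d)]
        self.c2 = defaultdict(float)              # counts N(y, x_sp = v)
        self.c3 = defaultdict(float)              # counts N(y, x_sp = v, x_j = u)
        for xi, yi in zip(X, y):
            for sp in range(self.d):
                self.c2[(yi, sp, xi[sp])] += 1
                for j in range(self.d):
                    self.c3[(yi, sp, xi[sp], j, xi[j])] += 1
        return self

    def predict_proba(self, x, weights=None):
        w = np.full(self.d, 1.0 / self.d) if weights is None else np.asarray(weights)
        scores = np.zeros(len(self.classes_))
        for sp in range(self.d):
            for c, yc in enumerate(self.classes_):
                # P(y, x_sp) with Laplace smoothing
                p = (self.c2[(yc, sp, x[sp])] + self.m) / (
                    self.n + self.m * len(self.classes_) * len(self.vals_[sp]))
                for j in range(self.d):
                    if j == sp:
                        continue
                    num = self.c3[(yc, sp, x[sp], j, x[j])] + self.m
                    den = self.c2[(yc, sp, x[sp])] + self.m * len(self.vals_[j])
                    p *= num / den                # P(x_j | y, x_sp)
                scores[c] += w[sp] * p
        return scores / scores.sum()

# usage (hypothetical integer-coded data):
#   model = SPODEEnsemble().fit(X_train, y_train)
#   probs = model.predict_proba(x_new)            # uniform weights, i.e. AODE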
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming that the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one that is more extreme, given that the null is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
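As a toy numerical illustration of the quantity discussed above, the snippet below computes a p value for a two-sample comparison on synthetic data; the data and the use of scipy are assumptions made only for the example.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(10.0, 2.0, 30)
treated = rng.normal(11.0, 2.0, 30)     # a true effect of +1 is built into the simulation

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# The p value is NOT the probability that the null hypothesis is true; it is
# P(|T| >= |t_obs| | H0).  In the Neyman-Pearson framework one would instead fix the
# Type I error level alpha in advance and reject H0 only if the statistic falls in the
# corresponding critical region (equivalently, if p <= alpha).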
Abstract:
Surveys of economic climate collect the opinions of managers about the short-term future evolution of their business. Interviews are carried out on a regular basis, and responses measure optimistic, neutral, or pessimistic views about the economic perspectives. We propose a method to evaluate the sampling error of the average opinion derived from this particular type of survey data. Our variance estimate is useful for interpreting historical trends and for deciding whether changes in the index from one period to another are due to a structural change or whether ups and downs can be attributed to sampling randomness. An illustration using real data from a survey of business managers' opinions is discussed.
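A simplified sketch of the idea, under a simple-random-sampling multinomial assumption rather than the actual survey design treated in the paper, is the following estimate of the balance indicator and its standard error; all counts are hypothetical.

import numpy as np

def balance_and_se(n_plus, n_neutral, n_minus):
    """Balance B = p_plus - p_minus and its standard error for a trinomial sample."""
    n = n_plus + n_neutral + n_minus
    p_plus, p_minus = n_plus / n, n_minus / n
    balance = p_plus - p_minus
    var = (p_plus + p_minus - balance**2) / n     # multinomial variance of p_plus - p_minus
    return balance, np.sqrt(var)

B, se = balance_and_se(420, 310, 270)             # hypothetical monthly response counts
print(f"balance = {B:+.3f}, 95% CI half-width = {1.96 * se:.3f}")
# For independent samples, a month-to-month change much larger than
# 1.96 * sqrt(se_1**2 + se_2**2) is unlikely to be pure sampling noise and may
# therefore point to a structural change.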
Abstract:
Pharmacokinetic variability in drug levels represents for some drugs a major determinant of treatment success, since concentrations outside the therapeutic range may lead to toxic reactions, treatment discontinuation, or inefficacy. This is true for most antiretroviral drugs, which exhibit high inter-patient variability in their pharmacokinetics that has been partially explained by genetic and non-genetic factors. The population pharmacokinetic approach is a very useful tool for describing the dose-concentration relationship, quantifying variability in the target population of patients, and identifying influencing factors. It can thus be used to make predictions and to optimize dosage adjustments based on Bayesian therapeutic drug monitoring (TDM). This approach was used to characterize the pharmacokinetics of nevirapine (NVP) in 137 HIV-positive patients followed within the framework of a TDM program. Among the tested covariates, body weight, co-administration of a cytochrome P450 (CYP) 3A4 inducer or of boosted atazanavir, as well as elevated aspartate transaminases, showed an effect on NVP elimination. In addition, a genetic polymorphism in CYP2B6 was associated with reduced NVP clearance. Altogether, these factors could explain 26% of the variability in NVP pharmacokinetics. Model-based simulations were used to compare the adequacy of different dosage regimens with respect to the therapeutic target associated with treatment efficacy. In conclusion, the population approach is very useful for characterizing the pharmacokinetic profile of drugs in a population of interest. The quantification and identification of the sources of variability is a rational approach to making optimal dosage decisions for certain drugs administered chronically.
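To illustrate the Bayesian TDM step in generic terms, the sketch below performs a maximum a posteriori (MAP) update of an individual clearance from a single steady-state concentration under a deliberately simple exposure model; the model structure, parameter values, and measurement are illustrative assumptions and not the nevirapine model of the study.

import numpy as np
from scipy.optimize import minimize_scalar

dose, tau, F = 200.0, 12.0, 0.9          # mg, dosing interval (h), bioavailability
CL_pop, omega, sigma = 3.0, 0.3, 0.2     # L/h, between-subject SD (log scale), proportional error
c_obs = 6.5                              # measured average steady-state concentration (mg/L)

def css(cl):
    """Average steady-state concentration for the assumed simple model."""
    return F * dose / (cl * tau)

def neg_log_posterior(eta):
    cl = CL_pop * np.exp(eta)                       # log-normal individual clearance
    resid = (c_obs - css(cl)) / (sigma * css(cl))   # proportional residual error
    return 0.5 * resid**2 + 0.5 * (eta / omega)**2  # likelihood term + prior on eta

eta_map = minimize_scalar(neg_log_posterior, bounds=(-2.0, 2.0), method="bounded").x
cl_ind = CL_pop * np.exp(eta_map)
print(f"MAP clearance: {cl_ind:.2f} L/h, predicted Css: {css(cl_ind):.2f} mg/L")
# The individualized clearance can then guide dose adjustment toward a target exposure,
# e.g. dose_new = dose * c_target / css(cl_ind).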
Abstract:
BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences when used for a binary classification of subjects into a group who should and a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
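For reference, the snippet below computes the standard decision-curve quantities at a given threshold probability pt; it follows the common definitions of the net benefit for the treated and for the untreated, and does not reproduce the exact weighting the paper uses to combine them into the overall net benefit, nor its case-control adjustment.

import numpy as np

def net_benefits(y, p, pt):
    """Net benefit for the treated and for the untreated at threshold probability pt."""
    y = np.asarray(y, dtype=bool)
    treat = np.asarray(p) >= pt
    n = len(y)
    tp = np.sum(treat & y) / n
    fp = np.sum(treat & ~y) / n
    tn = np.sum(~treat & ~y) / n
    fn = np.sum(~treat & y) / n
    nb_treated = tp - fp * pt / (1.0 - pt)        # classical net benefit
    nb_untreated = tn - fn * (1.0 - pt) / pt      # counterpart for the untreated
    return nb_treated, nb_untreated

# Reference strategies at threshold pt:
#   "treat all":  prevalence - (1 - prevalence) * pt / (1 - pt)
#   "treat none": 0
# In a case-control study the prevalence cannot be estimated from the data and must be
# supplied externally, which is the situation the paper's modified procedure addresses.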