984 results for EXPONENTIAL MODEL
Abstract:
A review is presented of the statistical bootstrap model of Hagedorn and Frautschi. This model is an attempt to apply the methods of statistical mechanics in high-energy physics, while treating all hadron states (stable or unstable) on an equal footing. A statistical calculation of the resonance spectrum on this basis leads to an exponentially rising level density $\rho(m) \sim c\,m^{-3}\,e^{\beta_0 m}$ at high masses.
In the present work, explicit formulae are given for the asymptotic dependence of the level density on quantum numbers, in various cases. Hamer and Frautschi's model for a realistic hadron spectrum is described.
A statistical model for hadron reactions is then put forward, analogous to the Bohr compound-nucleus model in nuclear physics, which makes use of this level density. Some general features of resonance decay are predicted. The model is applied to the process of $\bar{N}N$ annihilation at rest with overall success, and explains the high final-state pion multiplicity, together with the low individual branching ratios into two-body final states, which are characteristic of the process. For more general reactions, the model needs modification to take account of correlation effects. Nevertheless, it is capable of explaining the phenomenon of limited transverse momenta, and the exponential decrease in the production frequency of heavy particles with their mass, as shown by Hagedorn. Frautschi's results on "Ericson fluctuations" in hadron physics are outlined briefly. The value of $\beta_0$ required in all these applications is consistently around $[120~\mathrm{MeV}]^{-1}$, corresponding to a "resonance volume" whose radius is very close to the pion Compton wavelength ƛ_π. The construction of a "multiperipheral cluster model" for high-energy collisions is advocated.
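As a quick numerical illustration of the quoted level density (a sketch only: the constant c is left arbitrary, and β₀ is set to the value [120 MeV]⁻¹ cited above):

```python
import numpy as np

# Hagedorn-type level density rho(m) ~ c * m**-3 * exp(beta0 * m),
# with beta0 = 1/(120 MeV) as quoted in the abstract.
# The overall constant c is arbitrary for this illustration.
beta0 = 1.0 / 120.0   # MeV^-1
c = 1.0

def level_density(m_mev):
    """Asymptotic hadronic level density (arbitrary normalization)."""
    return c * m_mev**-3 * np.exp(beta0 * m_mev)

# The exponential factor dominates the power-law prefactor at large m:
for m in (1000.0, 1500.0, 2000.0):  # MeV
    print(f"m = {m:6.0f} MeV   rho ~ {level_density(m):.3e}")
```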
Abstract:
Instability triggering and transient growth of thermoacoustic oscillations were experimentally investigated, in combination with linear/nonlinear flame transfer function (FTF) methodology, in a model lean-premixed gas turbine combustor operated with CH4 and air at atmospheric pressure. A fully premixed flame with 10 kW thermal power and an equivalence ratio of 0.60 was chosen for detailed characterization of the nonlinear transient behaviors. Flame transfer functions were experimentally determined by simultaneous measurements of inlet velocity fluctuations and heat release rate oscillations, using a constant temperature anemometer and OH*/CH* chemiluminescence emissions, respectively. The phase-resolved variation of the local flame structure at a limit cycle was measured by planar laser-induced fluorescence of OH. Simultaneous measurements of inlet velocity, OH*/CH* emission, and acoustic pressure were performed to investigate the temporal evolution of the system from stable operation to a limit cycle. These measurements allow an unsteady instability triggering event to be described in terms of several distinct stages: (i) initiation of a small perturbation, (ii) exponential amplification, (iii) saturation, (iv) nonlinear evolution of the perturbations towards a new unstable periodic state, (v) quasi-steady low-amplitude periodic oscillation, and (vi) fully developed high-amplitude limit cycle oscillation. Phase-plane portraits of instantaneous inlet velocity and heat release rate clearly show the presence of two different attractors. Depending on its initial position in phase space at infinitesimally small amplitude, the system evolves towards either a high-amplitude or a low-amplitude oscillatory state. This transient phenomenon was analyzed using frequency- and amplitude-dependent damping mechanisms, and compared to subcritical and supercritical bifurcation theories. The results presented in this paper experimentally demonstrate the hypothesis proposed by Preetham et al. on the basis of analytical and computational solutions of the nonlinear G-equation [J. Propul. Power 24 (2008) 1390-1402]. Good quantitative agreement was obtained between measurements and predictions in terms of the conditions for the onset of triggering and the amplitude of triggered combustion instabilities. © 2011 The Combustion Institute.
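The amplitude-dependent triggering described in stages (i)-(vi) can be sketched with a toy quintic amplitude equation of the kind used in subcritical-bifurcation analyses. This is an illustration with arbitrary coefficients, not the authors' model or data; in this toy version the low-amplitude attractor is the quiescent state rather than a low-amplitude oscillation:

```python
# Toy quintic amplitude equation often used to illustrate subcritical
# (triggered) thermoacoustic instability:
#   dA/dt = mu*A + beta*A**3 - gamma*A**5
# All coefficients are illustrative, not fitted to the experiment.
mu, beta, gamma = -0.1, 1.0, 1.0

def evolve(A0, dt=1e-3, t_end=100.0):
    """Integrate the amplitude equation with forward Euler."""
    A = A0
    for _ in range(int(t_end / dt)):
        A += dt * (mu * A + beta * A**3 - gamma * A**5)
    return A

# Two initial amplitudes straddling the unstable threshold (~0.34 here):
# the small one decays, the large one is "triggered" to a limit cycle.
for A0 in (0.2, 0.5):
    print(f"A0 = {A0:.2f} -> final amplitude ~ {evolve(A0):.3f}")
```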
Abstract:
In this article, we report a combined experimental and theoretical study on the luminescence dynamics of localized carriers in disordered InGaN/GaN quantum wells. The luminescence intensity of localized carriers is found to exhibit an unusual non-exponential decay. By adopting a model recently developed by Rubel et al., which takes into account radiative recombination and phonon-assisted hopping transitions between different localized states, the non-exponential decay behavior of the carriers can be quantitatively interpreted. Combined with precise structural characterization, the theoretical simulations show that the localization length of the localized carriers is a key parameter governing their luminescence decay dynamics. © 2006 Optical Society of America.
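As a generic illustration of how such non-exponential ensemble decay arises (this is not the Rubel et al. model, only a sketch): if hopping among localized states gives each carrier a different effective recombination rate, the averaged intensity is a superposition of exponentials and is itself non-exponential:

```python
import numpy as np

# Illustrative only: carriers sample a broad (here log-normal)
# distribution of effective recombination rates, so the ensemble
# intensity is a superposition of exponentials.
rng = np.random.default_rng(0)
rates = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

t = np.linspace(0.0, 10.0, 6)
I = np.exp(-np.outer(t, rates)).mean(axis=1)  # ensemble-averaged intensity

# A single exponential would decay at a constant relative rate;
# here the effective decay visibly slows down with time.
for ti, Ii in zip(t, I):
    print(f"t = {ti:4.1f}   I/I0 = {Ii:.4f}")
```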
Abstract:
A four-level model of the $^{6}P_{7/2}$ excited state of the Eu²⁺ ion in KMgF₃:Eu²⁺ has been proposed. The decay profiles of the $^{6}P_{7/2}$ excited state of Eu²⁺ are bi-exponential, and the physical implication of each term in the fit equation underlying the model is interpreted. The data obtained spectroscopically are in good agreement with the fit results.
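A bi-exponential decay of this kind is typically fitted as $I(t) = A_1 e^{-t/\tau_1} + A_2 e^{-t/\tau_2}$; a minimal sketch with synthetic data follows (all parameter values are illustrative, not the measured ones):

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic bi-exponential decay model of the form used for such fits.
def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic noisy "decay profile" (illustrative parameters only).
t = np.linspace(0.0, 20.0, 200)
rng = np.random.default_rng(1)
data = biexp(t, 1.0, 1.5, 0.4, 8.0) + rng.normal(0.0, 0.01, t.size)

# Least-squares fit recovers the amplitudes and lifetimes of each term.
popt, _ = curve_fit(biexp, t, data, p0=(1.0, 1.0, 0.5, 5.0))
print("fitted (a1, tau1, a2, tau2):", np.round(popt, 3))
```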
Abstract:
This paper studies a dynamic pricing problem faced by a retailer who has limited inventory, is uncertain about the demand rate model, and aims to maximize expected discounted revenue over an infinite time horizon. The retailer doubts his demand model, which is generated from historical data, and views it as an approximation. Uncertainty in the demand rate model is represented by a notion of generalized relative entropy process, and the robust pricing problem is formulated as a two-player zero-sum stochastic differential game. The pricing policy is obtained through the Hamilton-Jacobi-Isaacs (HJI) equation. The existence and uniqueness of the solution of the HJI equation are shown, and a verification theorem is proved to show that the solution of the HJI equation is indeed the value function of the pricing problem. The results are illustrated by an example with an exponential nominal demand rate.
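For intuition on the closing example: with an exponential nominal demand rate of the form $\lambda(p) = \lambda_0 e^{-\alpha p}$ (a standard assumption, not necessarily the paper's exact specification), the static revenue rate is maximized at $p^{*} = 1/\alpha$:

```latex
\lambda(p) = \lambda_0 e^{-\alpha p}, \qquad
r(p) = p\,\lambda(p) = \lambda_0\, p\, e^{-\alpha p}, \qquad
r'(p) = \lambda_0 e^{-\alpha p}\,(1 - \alpha p) = 0
\;\Longrightarrow\; p^{*} = \frac{1}{\alpha}.
```

The paper's robust policy is instead obtained dynamically from the HJI equation; the static optimum above only illustrates the role of the exponential form.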
Abstract:
A parametric regression model for right-censored data with a log-linear median regression function and a transformation in both the response and regression parts, named the parametric Transform-Both-Sides (TBS) model, is presented. The TBS model has a parameter that handles data asymmetry while allowing various different distributions for the error, as long as they are unimodal symmetric distributions centered at zero. The discussion focuses on the estimation procedure with five important error distributions (normal, double-exponential, Student's t, Cauchy, and logistic) and presents properties, associated functions (that is, survival and hazard functions), and estimation methods based on maximum likelihood and on the Bayesian paradigm. These procedures are implemented in TBSSurvival, an open-source, fully documented R package. The use of the package is illustrated, and the performance of the model is analyzed using both simulated and real data sets.
Abstract:
We present TANC, a tree-augmented naive (TAN) classifier based on imprecise probabilities. TANC models prior near-ignorance via the Extreme Imprecise Dirichlet Model (EDM). A first contribution of this paper is the experimental comparison between the EDM and the global Imprecise Dirichlet Model (IDM) using the naive credal classifier (NCC), with the aim of showing that the EDM is a sensible approximation of the global IDM. TANC is able to deal with missing data in a conservative manner by considering all possible completions (without assuming them to be missing-at-random), while avoiding an exponential increase of the computational time. By experiments on real data sets, we show that TANC is more reliable than the Bayesian TAN and that it provides better performance compared to previous TANs based on imprecise probabilities. Yet, TANC is sometimes outperformed by NCC because the learned TAN structures are too complex; this calls for novel algorithms for learning the TAN structures, better suited for an imprecise probability classifier.
Abstract:
In this paper we present TANC, i.e., a tree-augmented naive credal classifier based on imprecise probabilities; it models prior near-ignorance via the Extreme Imprecise Dirichlet Model (EDM) (Cano et al., 2007) and deals conservatively with missing data in the training set, without assuming them to be missing-at-random. The EDM is an approximation of the global Imprecise Dirichlet Model (IDM), which considerably simplifies the computation of upper and lower probabilities; yet, having been only recently introduced, the quality of the provided approximation still needs to be verified. As a first contribution, we extensively compare the output of the naive credal classifier (one of the few cases in which the global IDM can be exactly implemented) when learned with the EDM and with the global IDM; the output of the classifier appears to be identical in the vast majority of cases, thus supporting the adoption of the EDM in real classification problems. Then, by experiments, we show that TANC is more reliable than the precise TAN (learned with a uniform prior), and also that it provides better performance compared to a previous TAN model based on imprecise probabilities (Zaffalon, 2003). TANC treats missing data by considering all possible completions of the training set, while avoiding an exponential increase of the computational time; finally, we present some preliminary results with missing data.
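The exponential blow-up that TANC avoids is easy to see: an instance with k missing binary attributes admits 2^k completions. A minimal sketch (the attribute values are hypothetical, purely for illustration):

```python
from itertools import product

# Naive handling of missing data enumerates every completion of an
# instance; with k missing binary attributes there are 2**k of them.
instance = ["yes", None, "no", None, None]  # None marks a missing value
missing = [i for i, v in enumerate(instance) if v is None]

completions = []
for values in product(["yes", "no"], repeat=len(missing)):
    filled = list(instance)
    for i, v in zip(missing, values):
        filled[i] = v
    completions.append(filled)

print(f"{len(missing)} missing attributes -> {len(completions)} completions")
# 3 missing attributes -> 8 completions; 30 would already give 2**30 ~ 1e9.
```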
Abstract:
Recent developments in high-end processors recognize temperature monitoring and tuning as one of the main challenges towards achieving higher performance, given the growing power and temperature constraints. To address this challenge, one needs both a suitable thermal energy abstraction and corresponding instrumentation. Our model is based on application-specific parameters such as power consumption, execution time, and asymptotic temperature, as well as hardware-specific parameters such as the half time for thermal rise or fall. As observed with our out-of-band instrumentation and monitoring infrastructure, the temperature changes follow a relatively slow capacitor-style charge-discharge process. Therefore, we use the lumped thermal model, which initiates an exponential process whenever there is a change in the processor's power consumption. Initial experiments with two codes – Firestarter and Nekbone – validate our thermal energy model and demonstrate its use for analyzing and potentially improving the application-specific balance between temperature, power, and performance.
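A minimal sketch of the lumped thermal model described above, assuming placeholder values for the asymptotic temperature and the hardware-specific half time (the real values are application- and hardware-dependent):

```python
import numpy as np

# Lumped (capacitor-style) thermal model: after a power step, the core
# temperature relaxes exponentially toward its asymptotic value.
# All numbers below are placeholders, not measured values.
T_start = 40.0        # deg C, temperature before the power step
T_asymptotic = 75.0   # deg C, application-specific steady temperature
half_time = 12.0      # s, time to cover half the remaining gap
tau = half_time / np.log(2.0)   # exponential time constant

def temperature(t):
    """Temperature t seconds after the change in power consumption."""
    return T_asymptotic + (T_start - T_asymptotic) * np.exp(-t / tau)

for t in (0.0, half_time, 5 * tau):  # at t = half_time, half the gap is closed
    print(f"t = {t:6.1f} s   T = {temperature(t):.1f} C")
```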
Abstract:
This work deals with the numerical simulation of the air stripping process for the pre-treatment of groundwater used for human consumption. The model, established in steady state, has an exponential solution that is used, together with the Tau Method, to obtain a spectral approximation of the solution of the system of partial differential equations associated with the model in the transient state.
Abstract:
This thesis is entitled "Bayesian Inference in Exponential and Pareto Populations in the Presence of Outliers". Its main theme is various estimation problems treated using the Bayesian approach, falling under the general category of accommodation procedures for analysing Pareto data containing outliers. Chapter II considers the problem of estimation of parameters in the classical Pareto distribution specified by the density function. Chapter IV discusses the estimation of (1.19) when the sample contains a known number of outliers under three different data-generating mechanisms, viz. the exchangeable model. Chapter V treats the prediction of a future observation based on a random sample that contains one contaminant. Chapter VI is devoted to the study of estimation problems concerning the exponential parameters under a k-outlier model.
Abstract:
We propose a short-range generalization of the p-spin interaction spin-glass model. The model is well suited to test the idea that an entropy collapse underlies the dynamical singularity encountered in structural glasses. The model is studied in three dimensions through Monte Carlo simulations, which reveal fragile-glass behavior with stretched-exponential relaxation and super-Arrhenius behavior of the relaxation time. Our data favor a Vogel-Fulcher behavior of the relaxation time, related to an entropy collapse at the Kauzmann temperature. We, however, encounter difficulties analogous to those found in experimental systems when extrapolating thermodynamic data to low temperatures. We study the spin-glass susceptibility, investigating the behavior of the correlation length in the system. We find that the increase of the relaxation time is accompanied by a very slow growth of the correlation length. We discuss the scaling properties of off-equilibrium dynamics in the glassy regime, finding qualitative agreement with mean-field theory.
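For reference, the super-Arrhenius (Vogel-Fulcher) form diverges at a finite temperature T₀, unlike the Arrhenius form; a small sketch with arbitrary parameters (not fitted to the simulation data):

```python
import numpy as np

# Arrhenius vs Vogel-Fulcher relaxation times; the latter diverges at
# T0 (identified with the Kauzmann temperature in the entropy-collapse
# scenario). All parameters are illustrative.
tau0, A, T0 = 1.0, 5.0, 0.5

def tau_arrhenius(T):
    return tau0 * np.exp(A / T)

def tau_vogel_fulcher(T):
    return tau0 * np.exp(A / (T - T0))

# The two forms separate dramatically as T approaches T0 from above.
for T in (2.0, 1.0, 0.7, 0.6):
    print(f"T = {T:.1f}   Arrhenius: {tau_arrhenius(T):10.3e}"
          f"   Vogel-Fulcher: {tau_vogel_fulcher(T):10.3e}")
```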
Abstract:
The aim of this paper is essentially twofold: first, to describe the use of spherical nonparametric estimators for determining statistical diagnostic fields from ensembles of feature tracks on a global domain, and second, to report the application of these techniques to data derived from a modern general circulation model. New spherical kernel functions are introduced that are more efficiently computed than the traditional exponential kernels. The data-driven techniques of cross-validation, to determine the amount of smoothing objectively, and adaptive smoothing, to vary the smoothing locally, are also considered. Also introduced are techniques for combining seasonal statistical distributions to produce longer-term statistical distributions. Although all calculations are performed globally, only the results for Northern Hemisphere winter (December, January, February) and Southern Hemisphere winter (June, July, August) cyclonic activity are presented, discussed, and compared with previous studies. Overall, results for the two hemispheric winters are in good agreement with previous studies, both model-based and observational.
Abstract:
A simple theoretical model for the intensification of tropical cyclones and polar lows is developed using a minimal set of physical assumptions. These disturbances are assumed to be balanced systems intensifying through the WISHE (Wind-Induced Surface Heat Exchange) intensification mechanism, driven by surface fluxes of heat and moisture into an atmosphere which is neutral to moist convection. The equation set is linearized about a resting basic state and solved as an initial-value problem. A system is predicted to intensify with an exponential perturbation growth rate scaled by the radial gradient of an efficiency parameter which crudely represents the effects of unsaturated processes. The form of this efficiency parameter is assumed to be defined by initial conditions, dependent on the nature of a pre-existing vortex required to precondition the atmosphere to a state in which the vortex can intensify. Evaluation of the simple model using a primitive-equation, nonlinear numerical model provides support for the prediction of exponential perturbation growth. Good agreement is found between the simple and numerical models for the sensitivities of the measured growth rate to various parameters, including surface roughness, the rate of transfer of heat and moisture from the ocean surface, and the scale for the growing vortex.
Abstract:
Statistical methods of inference typically require the likelihood function to be computable in a reasonable amount of time. The class of “likelihood-free” methods termed Approximate Bayesian Computation (ABC) eliminates this requirement, replacing the evaluation of the likelihood with simulation from it. Likelihood-free methods have gained in efficiency and popularity in the past few years, following their integration with Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC) methods in order to better explore the parameter space. They have been applied primarily to estimating the parameters of a given model, but can also be used to compare models. Here we present novel likelihood-free approaches to model comparison, based upon the independent estimation of the evidence of each model under study. Key advantages of these approaches over previous techniques are that they allow the exploitation of MCMC or SMC algorithms for exploring the parameter space, and that they do not require a sampler able to mix between models. We validate the proposed methods using a simple exponential family problem before applying them to a realistic problem from human population genetics: the comparison of different demographic models based upon genetic data from the Y chromosome.
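A minimal sketch of the idea of likelihood-free evidence estimation in a toy exponential-family setting (rejection-ABC with hypothetical models and priors, not the paper's MCMC/SMC algorithms):

```python
import numpy as np

# Toy rejection-ABC: with a common summary statistic and tolerance,
# each model's acceptance rate under prior-predictive simulation
# estimates its evidence up to a shared constant, so the rates can
# be compared across models.
rng = np.random.default_rng(2)
data = rng.exponential(scale=2.0, size=50)   # stand-in "observed" data
s_obs = data.mean()                          # summary statistic
eps = 0.1                                    # ABC tolerance

def abc_evidence(simulate, n=20_000):
    """Fraction of prior-predictive summaries falling within eps of s_obs."""
    hits = sum(abs(simulate() - s_obs) < eps for _ in range(n))
    return hits / n

# Model 1: exponential data, Exponential(1) prior on the scale.
m1 = lambda: rng.exponential(rng.exponential(1.0), size=50).mean()
# Model 2: half-normal data, Exponential(1) prior on the scale.
m2 = lambda: np.abs(rng.normal(0.0, rng.exponential(1.0), size=50)).mean()

print("evidence estimate, model 1:", abc_evidence(m1))
print("evidence estimate, model 2:", abc_evidence(m2))
```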