112 results for Random parameter Logit Model
Abstract:
The generalized Birnbaum-Saunders distribution belongs to a class of lifetime models that includes both lighter- and heavier-tailed distributions. This model adapts well to lifetime data, even in the presence of outliers, and has other attractive theoretical properties and application prospects. However, statistical inference tools may not exist in closed form for this model. Hence, simulation and numerical studies are needed, which require a random number generator. Three different ways to generate observations from this model are considered here. These generators are compared through a goodness-of-fit procedure and through their effectiveness in recovering the true parameter values in Monte Carlo simulations. This goodness-of-fit procedure may also be used as an estimation method, and the quality of this estimation method is studied here. Finally, through a real data set, the generalized and classical Birnbaum-Saunders models are compared by using this estimation method.
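The abstract does not specify the three generators it compares; for orientation, the classical (non-generalized) Birnbaum-Saunders distribution has a well-known sampler based on its standard normal representation. A minimal sketch, assuming only the classical two-parameter form (the function name `rbs` and the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def rbs(n, alpha, beta, rng=None):
    """Sample n values from the classical Birnbaum-Saunders(alpha, beta)
    distribution via its normal representation:
    T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)**2 + 1))**2,  Z ~ N(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    w = alpha * rng.standard_normal(n) / 2.0
    return beta * (w + np.sqrt(w**2 + 1.0)) ** 2

samples = rbs(10_000, alpha=0.5, beta=2.0, rng=np.random.default_rng(42))
# The median of the classical BS distribution is exactly beta,
# so the sample median should land near 2.0.
```

Generators for the generalized model replace Z by a draw from the corresponding symmetric (e.g. heavier-tailed) kernel; the monotone transformation above is unchanged.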
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is interest in studying latent variables. These latent variables are directly considered in Item Response Models (IRM) and are usually called latent traits. A usual assumption for parameter estimation in IRMs, when a single group of examinees is considered, is that the latent traits are random variables following a standard normal distribution. However, many works suggest that this assumption does not hold in many cases. Furthermore, when this assumption fails, the parameter estimates tend to be biased and inference can be misleading. Therefore, it is important to model the distribution of the latent traits properly. In this paper we present an alternative latent trait model based on the so-called skew-normal distribution; see Genton (2004). We use the centred parameterization proposed by Azzalini (1985). This approach ensures model identifiability, as pointed out by Azevedo et al. (2009b). In addition, a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm was built for parameter estimation using an augmented data approach. A simulation study was performed to assess parameter recovery under the proposed model and estimation method, and to measure the effect of the asymmetry level of the latent trait distribution on parameter estimation. A comparison of our approach with other estimation methods (which assume a symmetric normal distribution for the latent traits) was also considered. The results indicated that our algorithm properly recovers all parameters. Specifically, the greater the asymmetry level, the better the performance of our approach compared with the others, mainly for small sample sizes (numbers of examinees).
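The skew-normal distribution used above has a simple stochastic representation that yields a direct sampler. A minimal sketch under the direct parameterization (the paper itself works with Azzalini's centred parameterization for identifiability; the name `rsn` and all parameter values here are illustrative):

```python
import numpy as np

def rsn(n, xi=0.0, omega=1.0, alpha=0.0, rng=None):
    """Sample from the skew-normal SN(xi, omega, alpha) via the standard
    stochastic representation: with delta = alpha / sqrt(1 + alpha^2) and
    U0, U1 iid N(0, 1), Z = delta*|U0| + sqrt(1 - delta^2)*U1 is SN(0,1,alpha)."""
    rng = np.random.default_rng() if rng is None else rng
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0, u1 = rng.standard_normal(n), rng.standard_normal(n)
    z = delta * np.abs(u0) + np.sqrt(1.0 - delta**2) * u1
    return xi + omega * z

x = rsn(100_000, alpha=5.0, rng=np.random.default_rng(7))
delta = 5.0 / np.sqrt(26.0)
mean_theory = delta * np.sqrt(2.0 / np.pi)  # E[X] = xi + omega*delta*sqrt(2/pi)
```

The centred parameterization re-expresses (xi, omega, alpha) in terms of the mean, standard deviation, and skewness of the distribution, which is what removes the identifiability problem near alpha = 0.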
Furthermore, we analyzed a real data set which shows indications of asymmetry in the latent trait distribution. The results obtained using our approach confirmed the presence of strong negative asymmetry in the latent trait distribution. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Grubbs' measurement model is frequently used to compare several measuring devices. It is common to assume that the random terms have a normal distribution. However, such an assumption makes the inference vulnerable to outlying observations, whereas scale mixtures of normal distributions have been an interesting alternative for producing robust estimates while keeping the elegance and simplicity of maximum likelihood theory. The aim of this paper is to develop an EM-type algorithm for parameter estimation, and to use the local influence method to assess the robustness of these parameter estimates under some usual perturbation schemes. In order to identify outliers and to criticize the model building, we use the local influence procedure in a study comparing the precision of several thermocouples. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Prediction of random effects is an important problem with expanding applications. In the simplest context, the problem corresponds to prediction of the latent value (the mean) of a realized cluster selected via two-stage sampling. Recently, Stanek and Singer [Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 119-130] developed best linear unbiased predictors (BLUP) under a finite population mixed model that outperform BLUPs from mixed models and superpopulation models. Their setup, however, does not allow for unequally sized clusters. To overcome this drawback, we consider an expanded finite population mixed model based on a larger set of random variables that span a higher dimensional space than those typically applied to such problems. We show that BLUPs for linear combinations of the realized cluster means derived under such a model have considerably smaller mean squared error (MSE) than those obtained from mixed models, superpopulation models, and finite population mixed models. We motivate our general approach by an example developed for two-stage cluster sampling and show that it faithfully captures the stochastic aspects of sampling in the problem. We also consider simulation studies to illustrate the increased accuracy of the BLUP obtained under the expanded finite population mixed model. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
In this work we study the problem of model identification for a population, employing a discrete dynamic model based on the Richards growth model. The population is subject to interventions due to consumption, such as hunting or the farming of animals. Model identification allows us to estimate the probability of, or the average time for, the population to reach a certain level. Parameter inference for these models is obtained using the likelihood profile technique as developed in this paper. The identification method developed here can be applied to evaluate the productivity of animal husbandry or to assess the risk of extinction of autochthonous populations. It is applied to data on the Brazilian beef cattle herd population, and the time for the population to reach a certain goal level is investigated.
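The abstract does not state the exact discrete dynamics; one common way to discretize the Richards growth model with a consumption (harvest) term is sketched below. All names and parameter values are illustrative stand-ins, not the paper's, and the stochastic terms needed for likelihood-profile inference are omitted:

```python
def richards_step(n, r, k, q, c):
    """One step of a discrete Richards model with consumption c per period:
    N_{t+1} = N_t + r*N_t*(1 - (N_t/K)^q) - c, clipped at zero.
    q = 1 recovers the logistic special case."""
    return max(n + r * n * (1.0 - (n / k) ** q) - c, 0.0)

def simulate(n0, r, k, q, c, steps):
    """Deterministic trajectory of the population under constant offtake."""
    traj = [n0]
    for _ in range(steps):
        traj.append(richards_step(traj[-1], r, k, q, c))
    return traj

# With these toy values the herd grows from 50 toward the stable
# equilibrium of r*n*(1 - n/k) = c, roughly n = 181.6.
traj = simulate(n0=50.0, r=0.3, k=200.0, q=1.0, c=5.0, steps=100)
```

Questions like "average time to reach a goal level" then amount to counting steps until the trajectory (or, in the stochastic version, each simulated path) first crosses that level.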
Abstract:
The general mechanism for the photodegradation of polyethyleneglycol (PEG) by H2O2/UV was determined by studying the photooxidation of small model molecules, namely low molecular weight ethyleneglycols (tetra-, tri-, di-, and ethyleneglycol). After 30 min of irradiation the average molar mass (Mw) of the degraded PEG, analysed by GPC, fell to half of its initial value, with a concomitant increase in polydispersity and in the average number of chain scissions (S), characterizing a random chain-scission process yielding oligomers and smaller ethyleneglycols. HPLC analysis of the photodegradation of the model ethyleneglycols proved that the oxidation mechanism involves consecutive reactions, in which the larger ethyleneglycols give rise, successively, to smaller ones. The photodegradation of ethyleneglycol led to the formation of low molecular weight carboxylic acids, such as glycolic, oxalic and formic acids.
Abstract:
Consider a random medium consisting of N points randomly distributed so that there is no correlation among the distances separating them. This is the random link model, which is the high-dimensionality limit (mean-field approximation) of the Euclidean random point structure. In the random link model, at discrete time steps, a walker moves to the nearest point that has not been visited in the last mu steps (memory), producing a deterministic, partially self-avoiding walk (the tourist walk). We have analytically obtained the distribution of the number n of points explored by the walker with memory mu = 2, as well as the joint distribution of transient and period. This result enables us to explain the abrupt change in exploratory behavior between the cases mu = 1 (memoryless walker, driven by extreme-value statistics) and mu = 2 (walker with memory, driven by combinatorial statistics). In the mu = 1 case, the mean number of newly visited points in the thermodynamic limit (N >> 1) is just <n> = e = 2.72..., while in the mu = 2 case the mean number <n> of visited points grows proportionally to N^(1/2). This result also allows us to establish an equivalence between the random link model with mu = 2 and the random map (uncorrelated back-and-forth distances) with mu = 0, as well as the abrupt change between the probabilities for null transient time and subsequent ones.
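The tourist walk described above is straightforward to simulate. A minimal sketch (function names and sizes are illustrative): draw a symmetric matrix of iid distances for the random link model, then follow the deterministic rule "go to the nearest point not among the last mu visited" until a memory state repeats, and count the distinct points explored.

```python
import numpy as np

def random_link(n, rng):
    """Symmetric matrix of iid uniform 'distances' (random link model)."""
    a = rng.random((n, n))
    d = np.triu(a, 1)
    return d + d.T

def explored(dist, mu, start=0):
    """Distinct points visited by the deterministic tourist walk with
    memory mu, stopped when a memory state (last mu positions) repeats,
    i.e. when the walk has entered its periodic attractor."""
    n = dist.shape[0]
    trail = [start]
    seen = set()
    while True:
        state = tuple(trail[-mu:])        # the memory window fixes the future
        if state in seen:
            break
        seen.add(state)
        forbidden = set(trail[-mu:])      # last mu points are off-limits
        current = trail[-1]
        nxt = min((j for j in range(n) if j not in forbidden),
                  key=lambda j: dist[current, j])
        trail.append(nxt)
    return len(set(trail))

rng = np.random.default_rng(0)
mats = [random_link(200, rng) for _ in range(100)]
mean1 = np.mean([explored(d, mu=1) for d in mats])  # expected near e = 2.72
mean2 = np.mean([explored(d, mu=2) for d in mats])  # grows like N**0.5
```

Averaging over many disorder realizations reproduces the abrupt change the abstract describes: mean1 stays O(1) while mean2 scales with the system size.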
Abstract:
Aims. A model-independent reconstruction of the cosmic expansion rate is essential to a robust analysis of cosmological observations. Our goal is to demonstrate that current data are able to provide reasonable constraints on the behavior of the Hubble parameter with redshift, independently of any cosmological model or underlying gravity theory. Methods. Using type Ia supernova data, we show that it is possible to analytically calculate the Fisher matrix components in a Hubble parameter analysis without assumptions about the energy content of the Universe. We used a principal component analysis to reconstruct the Hubble parameter as a linear combination of the Fisher matrix eigenvectors (principal components). To suppress the bias introduced by the high-redshift behavior of the components, we treated the value of the Hubble parameter at high redshift as a free parameter. We first tested our procedure using a mock sample of type Ia supernova observations, and then applied it to the real data compiled by the Sloan Digital Sky Survey (SDSS) group. Results. In the mock sample analysis, we demonstrate that it is possible to drastically suppress the bias introduced by the high-redshift behavior of the principal components. Applying our procedure to the real data, we show that it allows us to determine the behavior of the Hubble parameter with reasonable uncertainty, without introducing any ad hoc parameterizations. Beyond that, our reconstruction agrees with completely independent measurements of the Hubble parameter obtained from red-envelope galaxies.
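The PCA step described above can be sketched generically: diagonalize a Fisher matrix for binned H(z) values and rebuild H(z) from the best-constrained eigenvectors. In this hedged toy, the Fisher matrix is a random stand-in rather than the analytic supernova one from the paper, and the bin values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((50, 10))
F = W.T @ W                               # toy Fisher matrix, 10 H(z) bins

evals, evecs = np.linalg.eigh(F)          # symmetric eigendecomposition
order = np.argsort(evals)[::-1]           # best-constrained modes first
evals, evecs = evals[order], evecs[:, order]

h_true = np.linspace(200.0, 70.0, 10)     # toy H(z) values per redshift bin
coeffs = evecs.T @ h_true                 # projection onto principal components
h_rec = evecs[:, :5] @ coeffs[:5]         # truncated reconstruction, 5 modes
residual = np.linalg.norm(h_rec - h_true)
```

Truncating to the leading modes trades a small reconstruction bias (the residual above) for much smaller variance, which is why the paper treats the poorly constrained high-redshift behavior separately.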
Abstract:
The dynamics of a dissipative vibro-impact system called the impact-pair is investigated. This system is similar to the Fermi-Ulam accelerator model and consists of an oscillating one-dimensional box containing a point mass moving freely between successive inelastic collisions with the rigid walls of the box. In our numerical simulations, we observed multistable regimes, for which the corresponding basins of attraction present a quite complicated structure with smooth boundaries. In addition, we characterize the system in a two-dimensional parameter space by using the largest Lyapunov exponents, identifying self-similar periodic sets. Copyright (C) 2009 Silvio L.T. de Souza et al.
Abstract:
The Bell-Lavis model for liquid water is investigated through numerical simulations. This lattice-gas model on a triangular lattice presents orientational states and is known to exhibit a highly bonded low-density phase and a loosely bonded high-density phase. We show that the model's liquid-liquid transition is continuous, in contradiction with mean-field results on the Husimi cactus and from the cluster variational method. We define an order parameter which allows interpretation of the transition as an order-disorder transition of the bond network. Our results indicate that the order-disorder transition is in the Ising universality class. A previous proposal of an Ehrenfest second-order transition is discarded. A detailed investigation of anomalous properties has also been undertaken. The line of density maxima in the HDL phase is stabilized by fluctuations, absent in the mean-field solution. (C) 2009 American Institute of Physics. [doi:10.1063/1.3253297]
Abstract:
It is shown that the deviations of the experimental statistics of six chaotic acoustic resonators from Wigner-Dyson random matrix theory predictions are explained by a recent model of randomly missing levels. In these resonators, made of aluminum plates, the larger deviations occur in the spectral rigidities (SRs), while the nearest-neighbor distributions (NNDs) remain close to the Wigner surmise. Good fits to the experimental NNDs and SRs are obtained by adjusting only one parameter: the fraction of levels remaining from the complete spectra. For two Sinai stadiums, one Sinai stadium without planar symmetry, two triangles, and a sixth of the three-leaf clover shape, it was found that 7%, 4%, 7%, and 2%, respectively, of the eigenfrequencies were not detected.
Abstract:
We introduce a simple mean-field lattice model to describe the behavior of nematic elastomers. This model combines the Maier-Saupe-Zwanzig approach to liquid crystals and an extension to lattice systems of the Warner-Terentjev theory of elasticity, with the addition of quenched random fields. We use standard techniques of statistical mechanics to obtain analytic solutions for the full range of parameters. Among other results, we show the existence of a stress-strain coexistence curve below a freezing temperature, analogous to the P-V diagram of a simple fluid, with the disorder strength playing the role of temperature. Below a critical value of disorder, the tie lines in this diagram resemble the experimental stress-strain plateau and may be interpreted as signatures of the characteristic polydomain-monodomain transition. Also, in the monodomain case, we show that random fields may soften the first-order transition between nematic and isotropic phases, provided the samples are formed in the nematic state.
Abstract:
We propose a field theory model for dark energy and dark matter in interaction. Comparing the classical solutions of the field equations with observations of the CMB shift parameter, baryonic acoustic oscillations, lookback time, and the Gold supernovae sample, we observe a possible interaction between the dark sectors, with energy decaying from dark energy into dark matter. The observed interaction alleviates the coincidence problem.
Abstract:
We investigate a neutrino mass model in which the neutrino data is accounted for by bilinear R-parity violating supersymmetry with anomaly mediated supersymmetry breaking. We focus on the CERN Large Hadron Collider (LHC) phenomenology, studying the reach of generic supersymmetry search channels with leptons, missing energy and jets. A special feature of this model is the existence of long-lived neutralinos and charginos which decay inside the detector leading to detached vertices. We demonstrate that the largest reach is obtained in the displaced vertices channel and that practically all of the reasonable parameter space will be covered with an integrated luminosity of 10 fb(-1). We also compare the displaced vertex reaches of the LHC and Tevatron.
Abstract:
It is shown that the recently considered families of generalized matrix ensembles, which give rise to an orthogonally invariant stable Levy ensemble, can be generated by the simple procedure of dividing Gaussian matrices by a random variable. The nonergodicity of this kind of disordered ensemble is investigated. It is shown that the same procedure applied to random graphs gives rise to a family that interpolates between the Erdos-Renyi and scale-free models.