956 results for Multinomial logit models with random coefficients (RCL)
Abstract:
In many research areas (such as public health and environmental contamination) one needs to use data to infer whether some proportion of a population of interest lies below and/or above a given threshold, through the computation of a tolerance interval. The idea is that, once a threshold is given, one computes the tolerance interval or limit (which may be one- or two-sided) and then checks whether it satisfies the threshold. Since this work deals with the computation of one-sided tolerance intervals, for the two-sided case we refer the reader to, for instance, Krishnamoorthy and Mathew [5]. Krishnamoorthy and Mathew [4] computed upper tolerance limits in balanced and unbalanced one-way random effects models, whereas Fonseca et al. [3] did so, following similar ideas, in a two-way nested mixed or random effects model. In the random effects case, Fonseca et al. [3] computed such intervals only for balanced data, whereas in the mixed effects case they did so only for unbalanced data. For the computation of two-sided tolerance intervals in models with mixed and/or random effects we refer, for instance, to Sharma and Mathew [7]. The purpose of this paper is the computation of upper and lower tolerance intervals in a two-way nested mixed effects model with balanced data. For unbalanced data, as mentioned above, Fonseca et al. [3] have already computed the upper tolerance interval. Hence, using the notions presented in Fonseca et al. [3] and Krishnamoorthy and Mathew [4], we present some results on the construction of one-sided tolerance intervals for the balanced case: first the construction for the upper limit, and then that for the lower limit.
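The abstract does not reproduce the construction itself. As a rough illustration of the flavour of such results, here is a minimal Monte Carlo sketch of a generalized-pivotal-quantity upper tolerance limit for the balanced one-way random effects model, in the spirit of Krishnamoorthy and Mathew [4]; the function name, defaults and GPQ details are illustrative, not the paper's own two-way nested derivation.

```python
import numpy as np
from scipy import stats

def upper_tolerance_limit(y, p=0.90, conf=0.95, n_mc=100_000, seed=1):
    """Monte Carlo GPQ sketch of a one-sided upper tolerance limit for the
    balanced one-way random effects model y_ij = mu + tau_i + e_ij.
    y: (a, n) array with a groups and n replicates per group."""
    a, n = y.shape
    ybar = y.mean()                                   # grand mean
    group_means = y.mean(axis=1)
    ssb = n * ((group_means - ybar) ** 2).sum()       # between-group SS
    sse = ((y - group_means[:, None]) ** 2).sum()     # within-group SS

    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_mc)
    U1 = rng.chisquare(a - 1, n_mc)                   # ~ chi2_{a-1}
    U2 = rng.chisquare(a * (n - 1), n_mc)             # ~ chi2_{N-a}

    g_mu = ybar - Z * np.sqrt(ssb / (a * n * U1))     # GPQ for mu
    g_var = np.maximum(ssb / (n * U1) - sse / (n * U2), 0) + sse / U2
    gpq = g_mu + stats.norm.ppf(p) * np.sqrt(g_var)   # GPQ for the p-quantile
    return np.quantile(gpq, conf)                     # (1 - alpha) MC quantile

# Example with simulated data: a = 10 groups, n = 5 replicates.
rng = np.random.default_rng(42)
y = 5 + rng.normal(0, 1, (10, 1)) + rng.normal(0, 0.5, (10, 5))
print(upper_tolerance_limit(y))
```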
Abstract:
We derive nonlinear diffusion equations and equations containing corrections due to fluctuations for a coarse-grained concentration field. To deal with diffusion coefficients with an explicit dependence on the concentration values, we generalize the Van Kampen method of expansion of the master equation to field variables. We apply these results to the derivation of equations of phase-separation dynamics and interfacial growth instabilities.
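For orientation, the standard Van Kampen ansatz that such an expansion generalizes to field variables splits the variable into a macroscopic part plus fluctuations scaled by the inverse square root of the system (or cell) size Ω; the notation below is mine, not the paper's.

```latex
% Van Kampen ansatz, written for a coarse-grained concentration field:
c(\mathbf{r},t) \;=\; \phi(\mathbf{r},t) \;+\; \Omega^{-1/2}\,\xi(\mathbf{r},t)
```

Substituting this into the master equation and collecting powers of Ω^{-1/2} yields, at leading order, the macroscopic (here nonlinear diffusion) equation for φ, and at the next order a linear equation governing the Gaussian fluctuations ξ.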
Abstract:
We present an exact solution for the order parameters that characterize the stationary behavior of a population of Kuramoto phase oscillators under random external fields [Y. Kuramoto, in International Symposium on Mathematical Problems in Theoretical Physics, Lecture Notes in Physics, Vol. 39 (Springer, Berlin, 1975), p. 420]. From these results it is possible to generate the phase diagram of models with an arbitrary distribution of random frequencies and random fields.
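As a numerical counterpart to the exact stationary solution, here is a minimal Euler-integration sketch of mean-field Kuramoto oscillators with quenched random fields; the pinning term h·sin(b_i − θ_i) is one common choice of field coupling, and all parameter values are illustrative.

```python
import numpy as np

def stationary_order_parameter(N=2000, K=2.0, h=1.0, T=200.0, dt=0.05, seed=0):
    """Euler sketch: mean-field Kuramoto oscillators with random natural
    frequencies and quenched random pinning fields of strength h."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_cauchy(N)            # random frequencies (Lorentzian)
    b = rng.uniform(0, 2 * np.pi, N)          # random field directions
    theta = rng.uniform(0, 2 * np.pi, N)
    steps = int(T / dt)
    r_tail = []
    for step in range(steps):
        z = np.exp(1j * theta).mean()         # complex order parameter r e^{i psi}
        theta += dt * (omega
                       + K * np.abs(z) * np.sin(np.angle(z) - theta)
                       + h * np.sin(b - theta))
        if step > steps // 2:                 # average r over the second half
            r_tail.append(np.abs(z))
    return np.mean(r_tail)

print(stationary_order_parameter())
```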
Abstract:
Aim: To evaluate the effects of using distinct alternative sets of climatic predictor variables on the performance, spatial predictions and future projections of species distribution models (SDMs) for rare plants in an arid environment. Location: Atacama and Peruvian Deserts, South America (18°30'S to 31°30'S, 0 to 3,000 m). Methods: We modelled the present and future potential distributions of 13 species of Heliotropium sect. Cochranea, a plant group with a centre of diversity in the Atacama Desert. We developed and applied a sequential procedure, starting from monthly climate variables, to derive six alternative sets of climatic predictor variables. We used them to fit models with eight modelling techniques within an ensemble forecasting framework, and derived climate change projections for each of them. We evaluated the effects of using these alternative sets of predictor variables on the performance, spatial predictions and projections of the SDMs using Generalised Linear Mixed Models (GLMMs). Results: The use of distinct sets of climatic predictor variables did not have a significant effect on overall metrics of model performance, but had significant effects on present and future spatial predictions. Main conclusion: Using different sets of climatic predictors can yield the same model fits but different spatial predictions of current and future species distributions. This represents a new form of uncertainty in model-based estimates of extinction risk that may need to be better acknowledged and quantified in future SDM studies.
Abstract:
This paper presents a methodology to determine the parameters used in the simulation of delamination in composite materials using decohesion finite elements. A closed-form expression is developed to define the stiffness of the cohesive layer. A novel procedure that allows the use of coarser meshes of decohesion elements in large-scale computations is proposed. The procedure ensures that the energy dissipated by the fracture process is correctly computed. It is shown that coarse-meshed models defined using the approach proposed here yield the same results as the models with finer meshes normally used in the simulation of fracture processes.
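The abstract does not state the expressions themselves. For orientation, a widely cited closed form of this type in the cohesive-zone literature relates the interface stiffness to the transverse modulus E₃ and the adjacent sublaminate thickness t through a large penalty parameter α (often taken around 50); treat the following as the generic form from that literature, not necessarily the paper's exact result.

```latex
% Generic cohesive-layer stiffness from the cohesive-zone literature:
K \;=\; \frac{\alpha\,E_3}{t}, \qquad \alpha \gg 1
```

Coarse meshes then become usable provided enough elements span the cohesive zone, whose length scales roughly as l_cz ∝ E₃G_c/(τ⁰)²; in practice this is enforced by lowering the nominal interface strength τ⁰ so that the fracture energy is still dissipated correctly.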
Abstract:
The objective of this paper was to evaluate the potential of neural networks (NN) as an alternative to the basic epidemiological approach for describing epidemics of coffee rust. The NN were developed from the intensities of coffee (Coffea arabica) rust along with the climatic variables collected in Lavras-MG between 13 February 1998 and 20 April 2001. The NN were built with climatic variables selected either by stepwise regression analysis or by the Braincel® system, a software package for NN building. Fifty-nine networks and 26 regression models were tested. The best models were selected based on small values of the mean square deviation (MSD) and of the mean prediction error (MPE); for the regression models, the highest coefficients of determination (R²) were used. The best neural network model had an MSD of 4.36 and an MPE of 2.43%. This model used minimum temperature, production, relative air humidity, and irradiance 30 days before the disease evaluation. The best regression model was developed from the 29 climatic variables selected in the network; its summary statistics were MPE = 6.58%, MSD = 4.36, and R² = 0.80. Neural networks elaborated from a time series were also evaluated to describe the epidemic: using the incidence of coffee rust in the four previous fortnights resulted in a model with MPE = 4.72% and MSD = 3.95.
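The abstract does not describe the network architecture beyond its inputs; as a loose modern analogue (the original used the Braincel® system), here is a minimal scikit-learn sketch of regressing disease intensity on climatic predictors, with entirely synthetic stand-in data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative only: X stands in for climatic predictors (e.g., minimum
# temperature, relative humidity, irradiance lagged 30 days) and y for the
# rust intensity observed each fortnight.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))
y = 20 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2, size=80)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X[:-10], y[:-10])                  # hold out the last 10 fortnights
pred = model.predict(X[-10:])
mae = np.mean(np.abs(pred - y[-10:]))        # mean absolute prediction error
print(f"prediction error = {mae:.2f}")
```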
Abstract:
The present study was conducted at the Department of Rural Engineering and the Department of Animal Morphology and Physiology of FCAV/Unesp, Jaboticabal, SP, Brazil. The objective was to verify the influence of roof slope, exposure and roofing material on the internal temperature of reduced models of animal production facilities. For the research, 48 reduced, demountable models with dimensions 1.00 × 1.00 × 0.50 m were used. The roofs were shed-type, and the models faced North or South, with 24 models for each exposure. Ceramic, galvanized-steel and fiber-cement tiles were used to build the roofs. Slopes were 20, 30, 40 and 50% for the ceramic tiles and 10, 30, 40 and 50% for the other two materials. Inside the models, temperature readings were taken every hour for 12 months. The results were evaluated with a general linear model in a nested 3 × 4 × 2 factorial arrangement, in which the effects of roofing material and exposure were nested within the factor slope. Means were compared by the Tukey test at 5% probability. We observed that increasing the slope, and exposure to the South, lowered the internal temperature of the models at the geographic coordinates of Jaboticabal (SP, Brazil).
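As a sketch of the analysis described (a general linear model with roofing material and exposure nested within slope, followed by Tukey comparisons), here is a statsmodels version on synthetic data; the column names and values are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative data frame: hourly internal temperatures with design factors.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "temp": rng.normal(30, 2, 240),
    "slope": rng.choice(["10", "30", "40", "50"], 240),
    "material": rng.choice(["ceramic", "steel", "fiber"], 240),
    "exposure": rng.choice(["N", "S"], 240),
})

# Material and exposure nested within slope, as in the abstract.
fit = ols("temp ~ C(slope) + C(slope):C(material) + C(slope):C(exposure)",
          data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))
print(pairwise_tukeyhsd(df["temp"], df["slope"], alpha=0.05))
```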
Abstract:
Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, which are both useful to reduce the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of Stochastic Discount Factor (SDF) or pricing kernel as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
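In standard notation (mine, not the paper's), the unifying SDF restriction, its factor representation with state-dependent coefficients, and the implied beta pricing relation read:

```latex
% SDF restriction, factor representation, and beta pricing relation:
1 \;=\; E\!\left[\,m_{t+1}\,R_{i,t+1}\mid \mathcal{I}_t\right],
\qquad
m_{t+1} \;=\; \sum_{k=1}^{K} \lambda_k(s_t)\,F_{k,t+1},
\qquad
E\!\left[\,R_{i,t+1}\mid \mathcal{I}_t\right] - r_{f,t}
 \;=\; \sum_{k=1}^{K} \beta_{ik,t}\,\nu_{k,t}.
```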
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to simulation-based estimates of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995.
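A minimal sketch of the flavour of test described: Mardia-type skewness and kurtosis computed on standardized residuals, with the null distribution obtained by simulation rather than asymptotics. This sketch ignores the regression projection (a full version would simulate residuals from the same design matrix), and all names are mine.

```python
import numpy as np

def mardia_stats(E):
    """Mardia's multivariate skewness b1 and kurtosis b2 for rows of E (n x p)."""
    n, _ = E.shape
    Ec = E - E.mean(axis=0)
    S = Ec.T @ Ec / n
    D = Ec @ np.linalg.solve(S, Ec.T)          # d_ij = e_i' S^{-1} e_j
    return (D ** 3).sum() / n ** 2, (np.diag(D) ** 2).mean()

def mc_pvalue(E_obs, n_rep=999, seed=0):
    """Monte Carlo p-value for Mardia skewness under the Gaussian null; the
    standardized statistic is invariant to the unknown error covariance."""
    rng = np.random.default_rng(seed)
    stat_obs = mardia_stats(E_obs)[0]
    sims = [mardia_stats(rng.standard_normal(E_obs.shape))[0]
            for _ in range(n_rep)]
    return (1 + sum(s >= stat_obs for s in sims)) / (n_rep + 1)

print(mc_pvalue(np.random.default_rng(1).standard_normal((60, 3))))
```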
Abstract:
This thesis presents methods for handling count data in particular and discrete data in general. It is part of a strategic NSERC (CRSNG) project named CC-Bio, whose objective is to assess the impact of climate change on the distribution of plant and animal species. After a brief introduction to the notions of biogeography and to generalized linear mixed models in Chapters 1 and 2 respectively, the thesis is organized around three main ideas. First, in Chapter 3 we introduce a new family of distributions whose components have Poisson or Skellam marginal distributions. This new specification makes it possible to incorporate relevant information about the nature of the correlations among all the components, and we present some of its properties. Unlike the multivariate Poisson distribution that it generalizes, it can handle variables with positive and/or negative correlations. A simulation illustrates the estimation methods in the bivariate case: the results obtained with Bayesian Markov chain Monte Carlo (MCMC) methods indicate a fairly small relative bias, below 5%, for the regression coefficients of the means, whereas the estimates of the covariance term appear somewhat more volatile. Second, Chapter 4 presents an extension of multivariate Poisson regression with gamma-distributed random effects. Since species-abundance data exhibit strong overdispersion, which would make the resulting estimators and standard errors misleading, we favour an approach based on Monte Carlo integration with importance sampling. The approach remains the same as in the previous chapter: the idea is to simulate independent latent variables so as to recover a conventional generalized linear mixed model (GLMM) with gamma-distributed random effects. Although assuming prior knowledge of the dispersion parameters may seem too strong, a sensitivity analysis based on goodness of fit demonstrates the robustness of our method. Third, in the last chapter, we address the definition and construction of a concordance (hence correlation) measure for zero-inflated data through Gaussian copula modelling. Unlike Kendall's tau, whose values lie in an interval whose bounds vary with the frequency of ties between pairs, this measure has the advantage of taking its values on (-1, 1). Initially introduced to model correlations between continuous variables, its extension to the discrete case involves certain restrictions: the new measure can be interpreted as the correlation between the continuous random variables whose discretization constitutes our non-negative discrete observations. Two estimation methods for zero-inflated models are presented, in the frequentist and Bayesian settings, based respectively on maximum likelihood and Gauss-Hermite integration. Finally, a simulation study shows the robustness and the limits of our approach.
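For context, here is a sketch of the classical common-shock (trivariate reduction) construction that the Chapter 3 distribution generalizes: it produces Poisson margins but only non-negative correlation, which is exactly the limitation the new specification removes. Names and parameter values are illustrative.

```python
import numpy as np

def bivariate_poisson(lam1, lam2, lam0, size, seed=0):
    """Trivariate reduction: X1 = Y1 + Y0 and X2 = Y2 + Y0 have Poisson
    margins with means lam1 + lam0 and lam2 + lam0 and covariance lam0 >= 0.
    Differences of independent Poissons give Skellam margins, the other
    building block mentioned in the abstract."""
    rng = np.random.default_rng(seed)
    y0 = rng.poisson(lam0, size)             # common shock shared by both
    x1 = rng.poisson(lam1, size) + y0
    x2 = rng.poisson(lam2, size) + y0
    return x1, x2

x1, x2 = bivariate_poisson(2.0, 3.0, 1.5, 100_000)
print(np.corrcoef(x1, x2)[0, 1])   # ~ lam0 / sqrt((lam1+lam0)(lam2+lam0))
```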
Abstract:
This study analyses some queueing models related to the N-policy, under which a single server is turned on when the queue size reaches a certain number N and turned off when the system is empty; the quantity of interest is the optimal queue size at which to turn the server on. The operating policy is the usual N-policy, but with random N, and in model 2 a system similar to the one described here is considered. The study also analyses a tandem queue with two servers, where the first server is assumed to be a specialized one. In a queueing system under the N-policy, the server remains on vacation until N units accumulate for the first time after it becomes idle. A modified version of the N-policy for an M/M/1 queueing system is considered here. The novel feature of this model is that a busy service unit prevents the access of new customers to servers further down the line. It deals with a queueing model consisting of two servers connected in series with a finite intermediate waiting room of capacity k, where server I is assumed to be a specialized server. For this model, the steady-state probability vector and the stability condition are obtained using the matrix-geometric method.
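The abstract does not give the generator blocks of the tandem model. Purely as an illustration of the matrix-geometric method it cites, here is the standard successive-substitution iteration for the minimal rate matrix R of a quasi-birth-death process, after which the stationary vector satisfies π_k = π_0 R^k; the block matrices A0, A1, A2 depend on the specific model.

```python
import numpy as np

def solve_rate_matrix(A0, A1, A2, tol=1e-12, max_iter=10_000):
    """Successive substitution for the minimal solution of
    R^2 A2 + R A1 + A0 = 0, the equation at the heart of the
    matrix-geometric method."""
    R = np.zeros_like(A0, dtype=float)
    A1_inv = np.linalg.inv(A1)
    for _ in range(max_iter):
        R_new = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    raise RuntimeError("R iteration did not converge")

# Scalar sanity check (an M/M/1 queue): R should converge to lam/mu.
lam, mu = 1.0, 2.0
print(solve_rate_matrix(np.array([[lam]]), np.array([[-(lam + mu)]]),
                        np.array([[mu]])))   # ~ 0.5
```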
Abstract:
The present study emphasized characterizing continuous probability distributions and their weighted versions in the univariate setup. A possible line of further work is to study the properties of weighted distributions for truncated random variables in the discrete setup. The problem of extending the measures to higher dimensions, as well as their weighted versions, is yet to be examined. As the present study focused on length-biased models, the properties of weighted models with various other weight functions, and their functional relationships, also remain to be examined.
Predicting random level and seasonality of hotel prices: a structural equation growth curve approach
Abstract:
This article examines the effect on price of different characteristics of holiday hotels in the sun-and-beach segment, from the hedonic function perspective. Monthly prices of the majority of hotels on the Spanish continental Mediterranean coast were gathered from May to October 1999 from tour operator catalogues. Hedonic functions are specified as random-effect models and parametrized as structural equation models with two latent variables: a random peak season price and a random width of seasonal fluctuations. Characteristics of the hotel and of the region where it is located are used as predictors of both latent variables. Besides hotel category, the region, distance to the beach, availability of parking and room equipment have an effect on the peak price and also on seasonality. 3-star hotels have the highest seasonality and hotels located in the southern regions the lowest, which could be explained by a warmer climate in autumn.
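The paper parametrizes the hedonic function as a structural equation model; a rough mixed-model analogue (a random peak level plus a random seasonal slope, both predicted by hotel category) can be sketched with statsmodels on synthetic data. The column names and the seasonal index below are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per hotel-month; "season" is 0 at the
# peak month and grows toward the shoulder months, so the random intercept
# plays the role of the latent peak price and the random slope the latent
# width of the seasonal fluctuation.
rng = np.random.default_rng(0)
n_hotels, months = 200, 6
df = pd.DataFrame({
    "hotel": np.repeat(np.arange(n_hotels), months),
    "season": np.tile([3, 2, 1, 0, 1, 2], n_hotels),
    "stars": np.repeat(rng.integers(1, 6, n_hotels), months),
})
df["price"] = (40 + 10 * df["stars"] - 5 * df["season"]
               + rng.normal(0, 3, len(df)))

# Random peak level and seasonal width per hotel, with stars predicting both
# (via the main effect and the stars:season interaction).
fit = smf.mixedlm("price ~ stars * season", df,
                  groups=df["hotel"], re_formula="~season").fit()
print(fit.summary())
```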
Abstract:
The crisis that broke out in the United States mortgage market in 2008 and spread throughout the entire financial system exposed the degree of interconnection that currently exists among financial institutions and their links with the productive sector, making plain the need to identify and characterize the systemic risk inherent in the system, so that regulators can pursue stability both for individual institutions and for the system as a whole. Using a model that combines the informative power of networks with a spatial autoregressive (panel-type) specification, this paper shows the importance of supplementing the micro-prudential approach (proposed in Basel II) with a variable that captures the effect of being connected to other institutions, thereby carrying out a macro-prudential analysis (proposed in Basel III).
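In generic notation (mine, not the paper's), a spatial autoregressive panel of the kind described uses the network's weight matrix W so that ρ captures the effect of being connected to other institutions:

```latex
% SAR panel with network weights w_ij, fixed effects mu_i and controls x_it:
y_{it} \;=\; \rho \sum_{j \ne i} w_{ij}\, y_{jt} \;+\; x_{it}'\beta \;+\; \mu_i \;+\; \varepsilon_{it}
```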
Abstract:
The literature on the determinants of labour income has evolved in its theoretical foundations, its methodology and its empirical estimations. Colombia has not been alien to this process, but its local evolution, notable for its fertility, has been relaxing in conceptual rigor: self-employed and salaried workers tend to be treated as relatively similar categories. To show the effect of this relaxation, we carry out joint estimations of the determinants of labour income for salaried and self-employed workers, and then contrast them with more detailed and disaggregated estimations, illustrating the actual biases that arise when the labour characteristics of self-employed and salaried workers are not taken into account.