956 results for Multinomial logit models with random coefficients (RCL)


Abstract:

We consider the time-harmonic Maxwell equations with constant coefficients in a bounded, uniformly star-shaped polyhedron. We prove wavenumber-explicit norm bounds for weak solutions. This result is pivotal for convergence proofs in numerical analysis and may be a tool in the analysis of electromagnetic boundary integral operators.
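
For reference, the time-harmonic Maxwell system with constant coefficients can be written (under the e^{-iωt} time convention, which the abstract does not specify) as

\nabla \times \mathbf{E} = i\omega\mu\,\mathbf{H}, \qquad \nabla \times \mathbf{H} = -\,i\omega\varepsilon\,\mathbf{E} + \mathbf{J},

or, after eliminating H, in the second-order form

\nabla \times \left(\mu^{-1}\,\nabla \times \mathbf{E}\right) - \omega^{2}\varepsilon\,\mathbf{E} = i\omega\,\mathbf{J}.

Here "wavenumber-explicit" means that the constants in the norm bounds are tracked as functions of the wavenumber k = ω(εμ)^{1/2} rather than absorbed into generic constants.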

Abstract:

An analytical model is developed to predict the surface drag exerted by internal gravity waves on an isolated axisymmetric mountain over which there is a stratified flow with a velocity profile that varies relatively slowly with height. The model is linear with respect to the perturbations induced by the mountain, and solves the Taylor–Goldstein equation with variable coefficients using a Wentzel–Kramers–Brillouin (WKB) approximation, formally valid for high Richardson numbers, Ri. The WKB solution is extended to a higher order than in previous studies, enabling a rigorous treatment of the effects of shear and curvature of the wind profile on the surface drag. In the hydrostatic approximation, closed formulas for the drag are derived for generic wind profiles, where the relative magnitude of the corrections to the leading-order drag (valid for a constant wind profile) does not depend on the detailed shape of the orography. The drag is found to vary proportionally to Ri⁻¹, decreasing as Ri decreases for a wind that varies linearly with height, and increasing as Ri decreases for a wind that rotates with height maintaining its magnitude. In these two cases the surface drag is predicted to be aligned with the surface wind. When one of the wind components varies linearly with height and the other is constant, the surface drag is misaligned with the surface wind, especially for relatively small Ri. All these results are shown to be in fairly good agreement with numerical simulations of mesoscale nonhydrostatic models, for high and even moderate values of Ri.
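
For context, a standard form of the Taylor–Goldstein equation for a steady, two-dimensional, Boussinesq perturbation with vertical velocity amplitude ŵ(k, z) is (the abstract does not reproduce it; the paper works with the three-dimensional analogue over an axisymmetric mountain):

\frac{d^{2}\hat{w}}{dz^{2}} + \left[\frac{N^{2}}{U^{2}} - \frac{U''}{U} - k^{2}\right]\hat{w} = 0, \qquad Ri = \frac{N^{2}}{(dU/dz)^{2}},

where N is the buoyancy frequency, U(z) the background wind, and k the horizontal wavenumber; the WKB approximation treats the bracketed coefficient as slowly varying in z, which is formally valid for large Ri.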

Abstract:

Data augmentation is a powerful technique for estimating models with latent or missing data, but applications in agricultural economics have thus far been few. This paper showcases the technique in an application to data on milk market participation in the Ethiopian highlands. There, a key impediment to economic development is an apparently low rate of market participation. Consequently, economic interest centers on the “locations” of nonparticipants in relation to the market and their “reservation values” across covariates. These quantities are of policy interest because they measure the additional inputs necessary for nonparticipants to enter the market. One quantity of primary interest is the minimum amount of surplus milk (the “minimum efficient scale of operations”) that the household must acquire before market participation becomes feasible. We estimate this quantity through a routine application of data augmentation and Gibbs sampling to a random-censored Tobit regression. Incorporating random censoring markedly affects the estimated marketable-surplus requirements of the household, but only slightly affects the covariate-requirement estimates and, in general, leads to more plausible policy estimates than those obtained from the zero-censored formulation.
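
The abstract gives no implementation details; the sketch below is a minimal illustration of data augmentation with Gibbs sampling for an ordinary Tobit model left-censored at zero (the paper's random-censoring extension would additionally model the censoring point). Priors, starting values, and variable names are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.stats import truncnorm, invgamma

def tobit_gibbs(y, X, n_iter=2000, seed=0):
    """Data augmentation + Gibbs sampler for a Tobit model left-censored at 0.

    y : observed outcomes (0 where censored); X : design matrix.
    Flat prior on beta and a Jeffreys-type prior on sigma^2 (illustrative choices).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    cens = (y <= 0)                          # indicator of censored observations
    beta = np.zeros(p)
    sigma2 = 1.0
    ystar = y.astype(float).copy()
    XtX_inv = np.linalg.inv(X.T @ X)
    draws = []
    for _ in range(n_iter):
        # 1) augment: latent y* for censored cases ~ N(X beta, sigma2), truncated to (-inf, 0]
        mu_c = X[cens] @ beta
        sd = np.sqrt(sigma2)
        b = (0.0 - mu_c) / sd                # standardized upper truncation bound
        ystar[cens] = truncnorm.rvs(-np.inf, b, loc=mu_c, scale=sd, random_state=rng)
        # 2) beta | y*, sigma2 ~ N(beta_hat, sigma2 (X'X)^{-1})
        beta_hat = XtX_inv @ X.T @ ystar
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
        # 3) sigma2 | y*, beta ~ Inverse-Gamma(n/2, SSR/2)
        resid = ystar - X @ beta
        sigma2 = invgamma.rvs(n / 2.0, scale=resid @ resid / 2.0, random_state=rng)
        draws.append(np.concatenate([beta, [sigma2]]))
    return np.array(draws)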

Abstract:

In this paper, we develop a novel constrained recursive least squares algorithm for adaptively combining a set of given multiple models. With data available in an online fashion, the linear combination coefficients of the submodels are adapted via the proposed algorithm. We propose to minimize the mean square error with a forgetting factor, and apply a sum-to-one constraint to the combination parameters. Moreover, an l1-norm constraint is also applied to the combination parameters, with the aim of achieving sparsity over the multiple models so that only a subset of models may be selected into the final model. A weighted l2-norm is then applied as an approximation to the l1-norm term. As such, at each time step a closed-form solution for the model combination parameters is available. The contribution of this paper is to derive the proposed constrained recursive least squares algorithm, which is computationally efficient by exploiting matrix theory. The effectiveness of the approach has been demonstrated using both simulated and real time series examples.
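
As a point of reference, the sketch below implements the generic exponentially weighted (forgetting-factor) recursive least squares update on which such an algorithm builds; the paper's sum-to-one and weighted l2-norm (sparsity) constraints are not included here, so this is an unconstrained baseline with illustrative names, not the proposed method.

import numpy as np

class ForgettingRLS:
    """Exponentially weighted recursive least squares for combining submodel outputs."""

    def __init__(self, dim, lam=0.99, delta=100.0):
        self.lam = lam                     # forgetting factor in (0, 1]
        self.w = np.zeros(dim)             # combination coefficients
        self.P = delta * np.eye(dim)       # inverse (weighted) correlation matrix

    def update(self, x, y):
        """x: vector of submodel predictions at time t; y: observed target."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)       # gain vector
        e = y - self.w @ x                 # a priori prediction error
        self.w = self.w + k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.w @ x                  # combined prediction with updated weights

# toy usage: three submodels of a sinusoidal target, two noisy and one uninformative
rng = np.random.default_rng(1)
target = np.sin(0.05 * np.arange(500))
preds = np.column_stack([target + 0.1 * rng.standard_normal(500),
                         target + 0.5 * rng.standard_normal(500),
                         rng.standard_normal(500)])
rls = ForgettingRLS(dim=3, lam=0.98)
for x_t, y_t in zip(preds, target):
    rls.update(x_t, y_t)
print(rls.w)   # most weight should be assigned to the most accurate submodel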

Abstract:

Habitat use and the processes that determine fish distribution were evaluated at the reef flat and reef crest zones of a tropical, algal-dominated reef. Our comparisons indicated significant differences between zones in most of the evaluated environmental characteristics. Significant differences in the abundances of twelve of the thirteen analyzed species were also observed within and between sites. According to null models, non-random patterns of species co-occurrence were significant, suggesting that fish guilds in both zones were non-randomly structured. Unexpectedly, structural complexity negatively affected overall species richness, but had a major positive influence on highly site-attached species such as a damselfish. Depth and substrate composition, particularly macroalgae cover, were positive determinants of fish assemblage structure in the studied reef, prevailing over factors such as structural complexity and live coral cover. Our results conflict with other studies carried out on coral-dominated reefs of the Caribbean and Pacific, supporting the idea that the factors that may potentially influence reef fish composition are highly site-dependent and variable.

Abstract:

In this Letter we deal with a nonlinear Schrödinger equation with chaotic, random, and nonperiodic cubic nonlinearity. Our goal is to study the soliton evolution when the strength of the nonlinearity is perturbed in the space and time coordinates, and to check the soliton's robustness under these conditions. We show that the chaotic perturbation is more effective in destroying the soliton behavior than the random or nonperiodic perturbation. For a real system, the perturbation can be related to, e.g., impurities in crystalline structures, or coupling to a thermal reservoir which, on average, enhances the nonlinearity. We also discuss the relevance of such random perturbations to the dynamics of Bose-Einstein condensates and their collective excitations and transport.
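
The Letter's equation is not reproduced in the abstract; a generic cubic nonlinear Schrödinger equation with a space- and time-dependent nonlinearity strength, of the kind described, can be written as

i\,\frac{\partial \psi}{\partial t} + \frac{1}{2}\,\frac{\partial^{2} \psi}{\partial x^{2}} + g(x,t)\,|\psi|^{2}\psi = 0, \qquad g(x,t) = g_{0} + \delta g(x,t),

where δg(x, t) is the chaotic, random, or nonperiodic perturbation of the nonlinearity strength whose effect on the soliton is compared (the 1/2 factor and sign convention are one common normalization, not necessarily the one used in the Letter).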

Abstract:

We consider the issue of performing residual and local influence analyses in beta regression models with varying dispersion, which are useful for modelling random variables that assume values in the standard unit interval. In such models, both the mean and the dispersion depend upon independent variables. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes. An application using real data is presented and discussed.
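
For context, the varying-dispersion beta regression class referred to here is usually written in the mean-precision parameterization (Ferrari and Cribari-Neto's form; the paper's exact notation may differ), with both parameters linked to covariates:

f(y;\mu,\phi) = \frac{\Gamma(\phi)}{\Gamma(\mu\phi)\,\Gamma((1-\mu)\phi)}\; y^{\mu\phi-1}(1-y)^{(1-\mu)\phi-1}, \qquad 0 < y < 1,

g_{1}(\mu_{t}) = \mathbf{x}_{t}^{\top}\boldsymbol{\beta}, \qquad g_{2}(\phi_{t}) = \mathbf{z}_{t}^{\top}\boldsymbol{\gamma},

where μ_t is the mean, φ_t the precision (the dispersion is a decreasing function of φ_t), and g₁, g₂ are link functions such as the logit and the log.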

Abstract:

Although the asymptotic distributions of the likelihood ratio for testing hypotheses of null variance components in linear mixed models derived by Stram and Lee [1994. Variance components testing in longitudinal mixed effects model. Biometrics 50, 1171-1177] are valid, their proof is based on the work of Self and Liang [1987. Asymptotic properties of maximum likelihood estimators and likelihood tests under nonstandard conditions. J. Amer. Statist. Assoc. 82, 605-610] which requires identically distributed random variables, an assumption not always valid in longitudinal data problems. We use the less restrictive results of Vu and Zhou [1997. Generalization of likelihood ratio tests under nonstandard conditions. Ann. Statist. 25, 897-916] to prove that the proposed mixture of chi-squared distributions is the actual asymptotic distribution of such likelihood ratios used as test statistics for null variance components in models with one or two random effects. We also consider a limited simulation study to evaluate the appropriateness of the asymptotic distribution of such likelihood ratios in moderately sized samples.
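
For the simplest case covered by these results, testing whether a single variance component is zero, the proposed asymptotic null distribution is an equal mixture of chi-square distributions with 0 and 1 degrees of freedom, so a p-value can be computed as in the small helper below (an illustration of the mixture, not code from the paper).

from scipy.stats import chi2

def mixture_pvalue(lrt, df_small=0, df_large=1, w=0.5):
    """P-value of an LRT statistic under a w:(1 - w) mixture of chi-squares.

    With df_small = 0 the first component is a point mass at zero, whose tail
    probability is 1 for lrt <= 0 and 0 otherwise.
    """
    tail_small = float(lrt <= 0) if df_small == 0 else chi2.sf(lrt, df_small)
    return w * tail_small + (1 - w) * chi2.sf(lrt, df_large)

# example: observed LRT statistic of 3.2 for a single variance component
print(mixture_pvalue(3.2))   # 0.5 * P(chi2_1 > 3.2) ≈ 0.037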

Abstract:

Let G be a group. We give some formulas for the first homology and cohomology groups of G with coefficients in an arbitrary G-module Z̃. More explicit calculations are done in the special cases of free groups, abelian groups and nilpotent groups. We also perform calculations for certain G-modules M by reducing them to the case where the coefficient module is Z̃. As a result of the well-known equalities H₁(X, M) = H₁(π₁(X), M) and H¹(X, M) = H¹(π₁(X), M), for any G-module M, we are able to calculate the first homology and cohomology groups of topological spaces with certain local systems of coefficients.

Abstract:

Background: The sensitivity to microenvironmental changes varies among animals and may be under genetic control. It is essential to take this element into account when aiming at breeding robust farm animals. Here, linear mixed models with genetic effects in the residual variance part of the model can be used. Such models have previously been fitted using EM and MCMC algorithms. Results: We propose the use of double hierarchical generalized linear models (DHGLM), where the squared residuals are assumed to be gamma distributed and the residual variance is fitted using a generalized linear model. The algorithm iterates between two sets of mixed model equations, one on the level of observations and one on the level of variances. The method was validated using simulations and also by re-analyzing a data set on pig litter size that was previously analyzed using a Bayesian approach. The pig litter size data contained 10,060 records from 4,149 sows. The DHGLM was implemented using the ASReml software and the algorithm converged within three minutes on a Linux server. The estimates were similar to those previously obtained using Bayesian methodology, especially the variance components in the residual variance part of the model. Conclusions: We have shown that variance components in the residual variance part of a linear mixed model can be estimated using a DHGLM approach. The method enables analyses of animal models with large numbers of observations. An important future development of the DHGLM methodology is to include the genetic correlation between the random effects in the mean and residual variance parts of the model as a parameter of the DHGLM.
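
A minimal sketch of the alternating structure described above, under simplifying assumptions: the mean model is a single-random-effect linear mixed model solved through Henderson's mixed-model equations with per-observation residual variances, and the squared residuals are then modelled by a gamma GLM with a log link whose fitted values become the new residual variances. The genetic effect in the variance part, the leverage correction of the squared residuals, and the estimation of the variance components themselves (which ASReml handles) are all omitted; names and settings are illustrative, not the paper's implementation.

import numpy as np

def solve_mme(X, Z, y, res_var, sigma2_u):
    """Henderson's mixed-model equations with heterogeneous residual variances."""
    W = np.diag(1.0 / res_var)
    q = Z.shape[1]
    C = np.block([[X.T @ W @ X, X.T @ W @ Z],
                  [Z.T @ W @ X, Z.T @ W @ Z + np.eye(q) / sigma2_u]])
    rhs = np.concatenate([X.T @ W @ y, Z.T @ W @ y])
    sol = np.linalg.solve(C, rhs)
    return sol[:X.shape[1]], sol[X.shape[1]:]            # fixed effects, random effects

def gamma_glm_log(Xd, d, n_irls=25):
    """Gamma GLM with log link fitted by iteratively reweighted least squares."""
    g = np.zeros(Xd.shape[1])
    g[0] = np.log(d.mean())                              # assumes an intercept in the first column
    for _ in range(n_irls):
        mu = np.exp(Xd @ g)
        z = Xd @ g + (d - mu) / mu                       # working response; IRLS weights equal 1 for the log link
        g = np.linalg.solve(Xd.T @ Xd, Xd.T @ z)
    return g

def dhglm_sketch(y, X, Z, Xd, sigma2_u=1.0, n_outer=20):
    """Iterate between the observation-level and variance-level sets of equations."""
    res_var = np.full(len(y), y.var())
    for _ in range(n_outer):
        b, u = solve_mme(X, Z, y, res_var, sigma2_u)
        d = (y - X @ b - Z @ u) ** 2                     # squared residuals
        g = gamma_glm_log(Xd, np.maximum(d, 1e-8))       # variance model on squared residuals
        res_var = np.exp(Xd @ g)                         # updated residual variances
    return b, u, g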

Abstract:

This study investigates the one-month-ahead out-of-sample predictive power of a Taylor rule-based model for exchange rate forecasting. We review relevant studies that conclude that macroeconomic models can explain the short-term exchange rate, and we also present studies that are skeptical about the ability of macroeconomic variables to predict exchange rate movements. To contribute to the topic, this work presents its own evidence by implementing the model with the best predictive performance reported by Molodtsova and Papell (2009), the “symmetric Taylor rule model with heterogeneous coefficients, smoothing, and a constant”. To this end, we use a sample of 14 currencies against the US dollar, which allowed the generation of monthly out-of-sample forecasts from January 2000 to March 2014. Following the criterion adopted by Galimberti and Moura (2012), we focus on countries that adopted a floating exchange rate regime and inflation targeting, but we choose currencies of both developed and developing countries. Our results corroborate the study of Rogoff and Stavrakeva (2008) in finding that conclusions about exchange rate predictability depend on the statistical test adopted, so robust and rigorous tests are needed for a proper evaluation of the model. After finding that it is not possible to claim that the implemented model provides more accurate forecasts than a random walk, we evaluate whether the model is at least able to generate “rational”, or “consistent”, forecasts. To do so, we use the theoretical and instrumental framework defined and implemented by Cheung and Chinn (1998) and conclude that the forecasts from the Taylor rule model are “inconsistent”. Finally, we perform Granger causality tests to verify whether the lagged values of the returns predicted by the structural model explain the observed contemporaneous values. We find that the fundamentals-based model is unable to anticipate realized returns.
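
As a minimal, self-contained illustration of the kind of out-of-sample comparison described (not the study's test battery, which relies on more rigorous statistics of the sort discussed by Rogoff and Stavrakeva), the sketch below computes the ratio of a model's out-of-sample RMSE to that of a driftless random walk for one-month-ahead exchange rate changes; the data are simulated and all names are illustrative.

import numpy as np

def oos_rmse_ratio(actual, model_fcst):
    """Ratio of a model's out-of-sample RMSE to that of a driftless random walk.

    actual, model_fcst: realized and predicted one-month-ahead log exchange-rate
    changes; the random walk benchmark predicts a zero change, so values below 1
    favour the model over the benchmark.
    """
    actual = np.asarray(actual, dtype=float)
    model_err = actual - np.asarray(model_fcst, dtype=float)
    rw_err = actual - 0.0
    return np.sqrt(np.mean(model_err ** 2)) / np.sqrt(np.mean(rw_err ** 2))

# illustrative use with simulated monthly returns (171 months, as in Jan 2000 - Mar 2014)
rng = np.random.default_rng(0)
realized = rng.normal(0, 0.03, size=171)
taylor_fcst = 0.2 * realized + rng.normal(0, 0.03, size=171)
print(oos_rmse_ratio(realized, taylor_fcst))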

Abstract:

The power-law size distributions obtained experimentally for neuronal avalanches are important evidence of criticality in the brain. This evidence is supported by the fact that a critical branching process exhibits the same exponent τ ≈ 3/2. Models at criticality have been employed to mimic avalanche propagation and explain the statistics observed experimentally. However, a crucial aspect of neuronal recordings has been almost completely neglected in the models: undersampling. While in a typical multielectrode array hundreds of neurons are recorded, in the same area of neuronal tissue tens of thousands of neurons can be found. Here we investigate the consequences of undersampling in models with three different topologies (two-dimensional, small-world and random network) and three different dynamical regimes (subcritical, critical and supercritical). We found that undersampling modifies avalanche size distributions, extinguishing the power laws observed in critical systems. Distributions from subcritical systems are also modified, but the shape of the undersampled distributions is more similar to that of a fully sampled system. Undersampled supercritical systems can recover the general characteristics of the fully sampled version, provided that enough neurons are measured. Undersampling in two-dimensional and small-world networks leads to similar effects, while the random network is insensitive to sampling density due to the lack of a well-defined neighborhood. We conjecture that neuronal avalanches recorded from local field potentials avoid undersampling effects due to the nature of this signal, but the same does not hold for spike avalanches. We conclude that undersampled branching-process-like models in these topologies fail to reproduce the statistics of spike avalanches.
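
A minimal sketch of the underlying intuition (not the paper's spatially structured models): a branching process with branching ratio sigma produces avalanches whose size distribution follows the ~ s^(-3/2) power law at criticality (sigma = 1), and recording only a fraction of the activity distorts that distribution. Here undersampling is caricatured as independent binomial thinning of each avalanche, whereas the paper subsamples fixed sets of units on specific network topologies; all parameters are illustrative.

import numpy as np

def avalanche_sizes(n_avalanches=20000, sigma=1.0, p_record=0.05,
                    max_size=10**5, seed=0):
    """Avalanche sizes from a branching process, with and without undersampling.

    Each active unit produces Poisson(sigma) offspring in the next generation.
    p_record mimics undersampling: each spike is observed independently with
    this probability, so the recorded size is a binomial thinning of the true size.
    """
    rng = np.random.default_rng(seed)
    true_sizes, seen_sizes = [], []
    for _ in range(n_avalanches):
        active, size = 1, 1
        while active > 0 and size < max_size:
            active = rng.poisson(sigma * active)          # next-generation activity
            size += active
        true_sizes.append(size)
        seen_sizes.append(rng.binomial(size, p_record))   # undersampled size
    return np.array(true_sizes), np.array(seen_sizes)

true_s, seen_s = avalanche_sizes()
# at criticality (sigma = 1) the true sizes follow ~ s^(-3/2);
# the undersampled sizes depart from this power law, as discussed above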

Abstract:

The objective of this study was to evaluate the use of probit and logit link functions for the genetic evaluation of early pregnancy using simulated data. The following simulation/analysis structures were constructed: logit/logit, logit/probit, probit/logit, and probit/probit. The percentages of precocious females were 5, 10, 15, 20, 25 and 30% and were adjusted based on a change in the mean of the latent variable. The parametric heritability (h²) was 0.40. Simulation and genetic evaluation were implemented in the R software. Heritability estimates (ĥ²) were compared with h² using the mean squared error. Pearson correlations between predicted and true breeding values, and the percentage of coincidence between the true and predicted rankings of the 10% of bulls with the highest breeding values (TOP10), were calculated. The mean ĥ² values were under- and overestimated for all percentages of precocious females when the logit/probit and probit/logit models were used. In addition, the mean squared errors of these models were high compared with those obtained with the probit/probit and logit/logit models. Considering ĥ², probit/probit and logit/logit were also superior to logit/probit and probit/logit, providing values close to the parametric heritability. Logit/probit and probit/logit presented low Pearson correlations, whereas the correlations obtained with probit/probit and logit/logit ranged from moderate to high. With respect to the TOP10 bulls, logit/probit and probit/logit presented much lower percentages than probit/probit and logit/logit. The genetic parameter estimates and the predictions of breeding values obtained with the logit/logit and probit/probit models were similar. In contrast, the results obtained with probit/logit and logit/probit were not satisfactory. There is a need to compare the estimation and prediction ability of the logit and probit link functions.
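
The study itself simulates threshold (liability) data and runs the genetic evaluation in R; as a small Python illustration of crossing simulation and analysis link functions (fixed effects only, no animal model, all names illustrative), one can simulate a binary trait from a logistic liability, set the threshold to give a chosen percentage of precocious females, and then fit it with both the matching and the mismatched link:

import numpy as np
import statsmodels.api as sm
from scipy.stats import logistic

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
liability = 0.8 * x + logistic.rvs(size=n, random_state=rng)   # logit simulation model

# choose the threshold so that about 15% of females are scored "precocious"
thr = np.quantile(liability, 1 - 0.15)
y = (liability > thr).astype(int)

X = sm.add_constant(x)
fit_logit = sm.Logit(y, X).fit(disp=0)     # matching analysis link
fit_probit = sm.Probit(y, X).fit(disp=0)   # mismatched analysis link
print(fit_logit.params, fit_probit.params)
# the probit slope is roughly the logit slope divided by ~1.7,
# reflecting the different residual scales of the two latent distributions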

Abstract:

The objective of this work was to analyze the spatial distribution of soil compaction and the influence of soil moisture on penetration resistance, the latter described by the cone index. The soil studied was a Nitosol, and the cone index data were obtained with a penetrometer. Soil resistance was evaluated at five depths (0-10 cm, 10-20 cm, 20-30 cm, 30-40 cm, and below 40 cm), whereas soil moisture content was measured at 0-20 cm and 20-40 cm. Soil water conditions varied among the samplings. The coefficients of variation for the cone index ranged from 16.5% to 45.8%, and those for soil moisture content varied between 8.96% and 21.38%. The results suggested a high correlation between soil resistance, estimated by the cone index, and soil depth; however, the expected relationship with soil moisture was not observed. Spatial dependence was observed in 31 of the 35 data series of cone index and soil moisture. This dependence was fitted with exponential models, with a nugget effect ranging from 0 to 90% of the sill value. In the remaining four data series the behavior was random; therefore, the inverse distance weighting technique was used to map the distribution of the variables that showed no spatial structure. Kriging produced smoother maps than those obtained by inverse distance weighting. Indicator kriging was used to map the spatial variability of the cone index and to recommend better soil management.
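
The abstract mentions mapping the variables without spatial structure by inverse distance weighting; a generic sketch of that interpolator is given below (simulated coordinates and values, not the study's data; the exponent of 2 is the common default, not necessarily the one used).

import numpy as np

def idw_interpolate(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-12):
    """Inverse distance weighting of observations z_obs at xy_obs onto xy_grid points.

    xy_obs: (n, 2) sample coordinates; z_obs: (n,) measured values (e.g. cone index);
    xy_grid: (m, 2) prediction locations; power: the usual IDW distance exponent.
    """
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)  # (m, n) distances
    w = 1.0 / np.maximum(d, eps) ** power
    return (w @ z_obs) / w.sum(axis=1)

# toy example: 50 random samples of a smooth field interpolated onto a 20 x 20 grid
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(50, 2))
z = np.sin(xy[:, 0] / 20) + 0.1 * rng.standard_normal(50)
gx, gy = np.meshgrid(np.linspace(0, 100, 20), np.linspace(0, 100, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_map = idw_interpolate(xy, z, grid).reshape(20, 20)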

Abstract:

The objective was to compare random regression models with different residual variance structures in order to find the best modelling for the trait litter size at birth (LSB). A total of 1,701 LSB records were analyzed using a single-trait random regression animal model. The fixed and random regressions were represented by continuous functions of parity order, fitted with third-order orthogonal Legendre polynomials. To determine the best modelling of the residual variance, heterogeneity of variance was considered through 1 to 7 residual variance classes. The general model of analysis included contemporary group as a fixed effect; fixed regression coefficients to model the mean trajectory of the population; random regression coefficients for the direct additive genetic, common litter, and animal permanent environmental effects; and the random residual effect. The likelihood ratio test, the Akaike information criterion, and the Schwarz Bayesian information criterion indicated the model assuming homogeneity of variance as the one providing the best fit to the data. The heritabilities obtained were close to zero (0.002 to 0.006). The permanent environmental effect increased from the 1st (0.06) to the 5th (0.28) parity, but decreased from that point to the 7th parity (0.18). The common litter effect showed low values (0.01 to 0.02). The use of homogeneous residual variance was more adequate to model the variances associated with litter size at birth in this data set.
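
As a small illustration of the covariates such a random regression model uses, the sketch below builds normalized Legendre polynomial regressors over parity order rescaled to [-1, 1]; here "order 3" is read as degree 3 (four coefficients per function), which is one common convention, and the code is illustrative rather than taken from the paper.

import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(parity, deg=3):
    """Legendre polynomial covariates for a random regression on parity order.

    parity is rescaled to [-1, 1]; columns are the normalized Legendre
    polynomials phi_j = sqrt((2j + 1) / 2) * P_j, j = 0..deg, as commonly
    used in random regression animal models.
    """
    t = np.asarray(parity, dtype=float)
    x = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
    V = legendre.legvander(x, deg)                      # columns P_0 .. P_deg
    norms = np.sqrt((2 * np.arange(deg + 1) + 1) / 2.0)
    return V * norms

# covariates for parities 1 to 7, one row per parity
print(legendre_covariates(np.arange(1, 8)))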