843 results for nonlinear mixed effects models
Abstract:
Of key importance to oil and gas companies is the size distribution of fields in the areas that they are drilling. Recent arguments suggest that there are many more fields yet to be discovered in mature provinces than had previously been thought because the underlying distribution is monotonic not peaked. According to this view the peaked nature of the distribution for discovered fields reflects not the underlying distribution but the effect of economic truncation. This paper contributes to the discussion by analysing up-to-date exploration and discovery data for two mature provinces using the discovery-process model, based on sampling without replacement and implicitly including economic truncation effects. The maximum likelihood estimation involved generates a high-dimensional mixed-integer nonlinear optimization problem. A highly efficient solution strategy is tested, exploiting the separable structure and handling the integer constraints by treating the problem as a masked allocation problem in dynamic programming.
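The paper's estimator treats the number and sizes of undiscovered fields as unknowns, which is what produces the mixed-integer problem. A minimal sketch of the core likelihood, assuming a fixed hypothetical undiscovered pool and estimating only the size-bias exponent of the successive-sampling (discovery-process) model, is given below; all field sizes are made up.

    # Minimal sketch: discovery-process (successive-sampling) likelihood.
    # Fields are discovered without replacement, with probability
    # proportional to size**beta.  Data and pool are hypothetical.
    import numpy as np
    from scipy.optimize import minimize_scalar

    discovered = np.array([500.0, 120.0, 300.0, 40.0, 80.0])  # discovery order
    undiscovered = np.array([60.0, 25.0, 10.0])               # assumed pool

    def neg_log_likelihood(beta):
        remaining = np.concatenate([discovered, undiscovered])
        nll = 0.0
        for size in discovered:
            weights = remaining ** beta
            nll -= np.log(size ** beta / weights.sum())
            remaining = remaining[remaining != size]  # without replacement
        return nll

    res = minimize_scalar(neg_log_likelihood, bounds=(0.01, 3.0),
                          method="bounded")
    print("MLE of discoverability exponent beta:", res.x)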
Abstract:
Emotion research has long been dominated by the “standard method” of displaying posed or acted static images of facial expressions of emotion. While this method has been useful, it cannot investigate the dynamic nature of emotion expression. Although continuous self-report traces have enabled the measurement of dynamic expressions of emotion, no consensus has been reached on the statistical techniques that permit inferences to be made with such measures. We propose Generalized Additive Models (GAMs) and Generalized Additive Mixed Models (GAMMs) as techniques that can account for the dynamic nature of such continuous measures. These models allow us to hold constant shared components of responses that are due to perceived emotion across time, while enabling inference concerning linear differences between groups. The mixed-model GAMM approach is preferred because it can account for autocorrelation in time-series data and allows emotion-decoding participants to be modelled as random effects. To increase confidence in linear differences, we assess methods that address interactions between categorical variables and dynamic changes over time. In addition, we comment on the use of GAMs to assess the effect size of shared perceived emotion and discuss sample sizes. Finally, we address additional uses: the inference of feature detection, continuous-variable interactions, and the measurement of ambiguity.
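A minimal sketch of the GAM component follows: a shared smooth trend over time plus a linear group difference. The random-effects layer of the full GAMM (decoders as random effects) is outside statsmodels' GAM support, where mgcv in R would be the usual tool; all data and column names here are hypothetical.

    # Minimal GAM sketch: shared smooth of time + linear group contrast.
    import numpy as np
    import pandas as pd
    from statsmodels.gam.api import GLMGam, BSplines

    rng = np.random.default_rng(0)
    data = pd.DataFrame({
        "time": np.tile(np.linspace(0, 10, 100), 4),
        "group": np.repeat(["posed", "spontaneous"], 200),
    })
    data["rating"] = (np.sin(data["time"])
                      + 0.3 * (data["group"] == "spontaneous")
                      + rng.normal(0, 0.2, len(data)))

    # Shared smooth of time (the perceived-emotion component) via B-splines.
    bs = BSplines(data[["time"]], df=[10], degree=[3])
    gam = GLMGam.from_formula("rating ~ group", data=data, smoother=bs)
    res = gam.fit()
    print(res.summary())  # the 'group' row is the linear difference of interest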
Abstract:
This thesis investigates the characterization (and modeling) of devices that interface between the digital and analog domains, such as the output buffers of integrated circuits (ICs). Today's wireless terminals are being developed with the software-defined radio concept introduced by Mitola in mind. Ideally, this architecture takes advantage of powerful processors and extends the operation of the digital blocks as close as possible to the antenna. It is therefore not surprising that there is growing concern within the scientific community regarding the characterization of the blocks that interface between the analog and digital domains, digital-to-analog and analog-to-digital converters being two good examples of such circuits. Within high-speed digital circuits, such as Flash memories, a similar role is played by the output buffers. These interface between the digital domain (the logic core) and the analog domain (the IC package and the parasitics associated with the transmission lines), determining the integrity of the transmitted signal. In order to speed up signal-integrity analysis during IC design, it is essential to have models that are simultaneously computationally efficient and accurate. Typically, the extraction/validation of output-buffer models is carried out using data obtained from the simulation of a detailed (transistor-level) model or from experimental results. The latter approach involves no intellectual-property issues; nevertheless, it is rarely mentioned in the literature on output-buffer characterization. Accordingly, this PhD thesis focuses on the development of a new measurement setup for the characterization and modeling of high-speed output buffers, with a natural extension to RF-CMOS switched-mode amplifier devices. Based on a well-defined experimental procedure, a state-of-the-art model is extracted and validated. The measurement setup developed addresses not only the integrity of the output signals but also that of the power-supply bus. In order to determine the sensitivity of the estimated quantities (voltage and current) to the errors present in the several variables associated with the experimental procedure, an uncertainty analysis is also presented.
Abstract:
Communication and cooperation between billions of neurons underlie the power of the brain. How do complex functions of the brain arise from its cellular constituents? How do groups of neurons self-organize into patterns of activity? These are crucial questions in neuroscience. In order to answer them, it is necessary to have a solid theoretical understanding of how single neurons communicate at the microscopic level and how cooperative activity emerges. In this thesis we aim to understand how complex collective phenomena can arise in a simple model of neuronal networks. We use a model with balanced excitation and inhibition and complex network architecture, and we develop analytical and numerical methods for describing its neuronal dynamics. We study how interaction between neurons generates various collective phenomena, such as the spontaneous appearance of network oscillations and seizures, and early warnings of these transitions in neuronal networks. Within our model, we show that phase transitions separate various dynamical regimes, and we investigate the corresponding bifurcations and critical phenomena. This permits us to suggest a qualitative explanation of the Berger effect and to investigate phenomena such as avalanches, band-pass filtering, and stochastic resonance. The role of modular structure in the detection of weak signals is also discussed. Moreover, we find nonlinear excitations that can describe paroxysmal spikes observed in electroencephalograms from epileptic brains, which allows us to propose a method to predict epileptic seizures. Memory and learning are key functions of the brain. There is evidence that these processes result from dynamical changes in the structure of the brain. At the microscopic level, synaptic connections are plastic and are modified according to the dynamics of neurons. Thus, we generalize our cortical model to take synaptic plasticity into account and show that the repertoire of dynamical regimes becomes richer. In particular, we find mixed-mode oscillations and a chaotic regime in neuronal network dynamics.
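As a minimal illustration of how balanced excitation and inhibition can generate network oscillations, the sketch below integrates a standard Wilson-Cowan-style rate model (not the thesis's specific network model); all parameters are illustrative.

    # Minimal Wilson-Cowan-style sketch: coupled excitatory/inhibitory rate
    # equations whose fixed point can lose stability, giving oscillations.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0  # coupling strengths
    tau_e, tau_i = 1.0, 2.0                          # time constants
    ext_e, ext_i = 1.5, 0.0                          # external drive

    E, I = 0.1, 0.1
    dt, steps = 0.01, 5000
    trace = np.empty(steps)
    for t in range(steps):
        dE = (-E + sigmoid(w_ee * E - w_ei * I + ext_e)) / tau_e
        dI = (-I + sigmoid(w_ie * E - w_ii * I + ext_i)) / tau_i
        E, I = E + dt * dE, I + dt * dI
        trace[t] = E

    # A sustained rhythm in the excitatory trace signals network oscillations.
    print("late-time amplitude:", trace[-2000:].max() - trace[-2000:].min())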
Abstract:
This paper studies the application of the simulated method of moments (SMM) for the estimation of nonlinear dynamic stochastic general equilibrium (DSGE) models. Monte Carlo analysis is employed to examine the small-sample properties of SMM in specifications with different curvature. Results show that SMM is computationally efficient and delivers accurate estimates, even when the simulated series are relatively short. However, asymptotic standard errors tend to overstate the actual variability of the estimates and, consequently, statistical inference is conservative. A simple strategy to incorporate priors in a method of moments context is proposed. An empirical application to the macroeconomic effects of rare events indicates that negatively skewed productivity shocks induce agents to accumulate additional capital and can endogenously generate asymmetric business cycles.
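A minimal SMM sketch on a toy AR(1) process (a stand-in for the DSGE model, which is far richer): parameters are chosen so that moments of a simulated series match the data moments, with the simulation shocks held fixed across objective evaluations. All data here are synthetic.

    # Minimal simulated-method-of-moments sketch on an AR(1) toy model.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)

    def simulate(rho, sigma, T, shocks):
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = rho * y[t - 1] + sigma * shocks[t]
        return y

    def moments(y):
        return np.array([y.var(), np.corrcoef(y[:-1], y[1:])[0, 1]])

    data = simulate(0.9, 0.5, 500, rng.standard_normal(500))
    m_data = moments(data)
    shocks = rng.standard_normal(5000)  # fixed draws: same shocks every call

    def smm_objective(theta):
        rho, sigma = theta
        g = moments(simulate(rho, sigma, 5000, shocks)) - m_data
        return g @ g  # identity weighting matrix for simplicity

    res = minimize(smm_objective, x0=[0.5, 1.0], method="Nelder-Mead")
    print("SMM estimates (rho, sigma):", res.x)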
Abstract:
We extend the relativistic mean field theory model of Sugahara and Toki by adding new couplings suggested by modern effective field theories. An improved set of parameters is developed with the goal of testing the ability of models based on effective field theory to describe the properties of finite nuclei and, at the same time, to be consistent with the trends of Dirac-Brueckner-Hartree-Fock calculations at densities away from the saturation region. We compare our calculations with other relativistic nuclear force parameter sets for various nuclear phenomena.
Abstract:
The objective of this paper is to introduce a different approach, called the ecological-longitudinal, to carrying out pooled analysis in time-series ecological studies. Because it gives a larger number of data points and hence increases the statistical power of the analysis, this approach, unlike conventional ones, allows the accommodation of random-effects models, lags, and interactions between pollutants and between pollutants and meteorological variables, which are hard to implement in conventional approaches. Design: The approach is illustrated by providing quantitative estimates of the short-term effects of air pollution on mortality in three Spanish cities, Barcelona, Valencia and Vigo, for the period 1992-1994. Because the dependent variable was a count, a Poisson generalised linear model was first specified. Several modelling issues are worth mentioning. Firstly, because the relations between mortality and explanatory variables were nonlinear, cubic splines were used for covariate control, leading to a generalised additive model (GAM). Secondly, the effects of the predictors on the response were allowed to occur with some lag. Thirdly, the residual autocorrelation, due to imperfect control, was controlled for by means of an autoregressive Poisson GAM. Finally, the longitudinal design demanded consideration of individual heterogeneity, requiring the use of mixed models. Main results: The estimates of the relative risks obtained from the individual analyses varied across cities, particularly those associated with sulphur dioxide. The highest relative risks corresponded to black smoke in Valencia. These estimates were higher than those obtained from the ecological-longitudinal analysis. Relative risks estimated from this latter analysis were practically identical across cities: 1.00638 (95% confidence interval 1.0002, 1.0011) for a black smoke increase of 10 μg/m3 and 1.00415 (95% CI 1.0001, 1.0007) for an increase of 10 μg/m3 of sulphur dioxide. Because the statistical power is higher than in the individual analyses, more interactions were statistically significant, especially those among air pollutants and meteorological variables. Conclusions: Air pollutant levels were related to mortality in the three cities of the study, Barcelona, Valencia and Vigo. These results are consistent with similar studies in other cities and with other multicentric studies, and coherent with both the previous individual analyses for each city and the multicentric studies for all three cities.
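A minimal sketch of the core model follows: a Poisson GAM with a smooth of temperature for covariate control and a lagged pollutant term. The autoregressive correction and the mixed-model layer are beyond this sketch, and all data and coefficients are synthetic.

    # Minimal Poisson GAM sketch: smooth covariate control + lagged pollutant.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.gam.api import GLMGam, BSplines

    rng = np.random.default_rng(2)
    n = 365
    data = pd.DataFrame({
        "temp": 15 + 10 * np.sin(np.arange(n) * 2 * np.pi / 365),
        "black_smoke": rng.gamma(5.0, 8.0, n),
    })
    mu = np.exp(2.5 + 0.0006 * data["black_smoke"]
                + 0.01 * np.cos(data["temp"] / 5))
    data["deaths"] = rng.poisson(mu)
    data["bs_lag1"] = data["black_smoke"].shift(1)   # pollutant at lag 1
    data = data.dropna().reset_index(drop=True)

    bs = BSplines(data[["temp"]], df=[8], degree=[3])
    gam = GLMGam.from_formula("deaths ~ black_smoke + bs_lag1", data=data,
                              smoother=bs, family=sm.families.Poisson())
    res = gam.fit()
    print(np.exp(10 * res.params["black_smoke"]))  # approx. RR per 10-unit rise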
Abstract:
The purpose of this paper is to develop a Bayesian analysis for nonlinear regression models under scale mixtures of skew-normal distributions. This novel class of models provides a useful generalization of symmetrical nonlinear regression models, since the error distributions cover both skewed and heavy-tailed distributions such as the skew-t, skew-slash and skew-contaminated normal distributions. The main advantage of this class of distributions is that they have a convenient hierarchical representation that allows the implementation of Markov chain Monte Carlo (MCMC) methods to simulate samples from the joint posterior distribution. In order to examine the robustness of this flexible class against outlying and influential observations, we present Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence. Further, some discussion of model selection criteria is given. The newly developed procedures are illustrated with two simulation studies and a real data set previously analyzed under normal and skew-normal nonlinear regression models. (C) 2010 Elsevier B.V. All rights reserved.
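A minimal MCMC sketch, assuming a hypothetical exponential-decay regression with skew-normal errors, the simplest member of the class discussed (the skew-t, skew-slash and skew-contaminated normal members would need their hierarchical mixture forms); model and data are illustrative.

    # Minimal Bayesian nonlinear regression with skew-normal errors (PyMC).
    import numpy as np
    import pymc as pm
    import arviz as az

    rng = np.random.default_rng(3)
    x = np.linspace(0.1, 5, 80)
    y = 10 * np.exp(-0.8 * x) + rng.gamma(2.0, 0.2, x.size) - 0.4  # skewed noise

    with pm.Model():
        a = pm.HalfNormal("a", sigma=20)
        b = pm.HalfNormal("b", sigma=5)
        sigma = pm.HalfNormal("sigma", sigma=2)
        alpha = pm.Normal("alpha", 0, 2)          # skewness parameter
        mu = a * pm.math.exp(-b * x)              # nonlinear mean function
        pm.SkewNormal("y_obs", mu=mu, sigma=sigma, alpha=alpha, observed=y)
        idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

    print(az.summary(idata, var_names=["a", "b", "alpha"]))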
Abstract:
Linear mixed models were developed to handle clustered data and have been a topic of increasing interest in statistics for the past 50 years. Generally, the normality (or symmetry) of the random effects is a common assumption in linear mixed models, but it may sometimes be unrealistic, obscuring important features of among-subjects variation. In this article, we utilize skew-normal/independent distributions as a tool for robust modeling of linear mixed models under a Bayesian paradigm. The skew-normal/independent distributions are an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-t, skew-slash and skew-contaminated normal distributions as special cases, providing an appealing robust alternative to the routine use of symmetric distributions in this type of model. The methods developed are illustrated using a real data set from the Framingham cholesterol study. (C) 2009 Elsevier B.V. All rights reserved.
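A minimal sketch of the idea, assuming hypothetical longitudinal data: a Bayesian linear mixed model whose random intercepts are skew-normal rather than normal (again the simplest member of the skew-normal/independent class).

    # Minimal Bayesian LMM with skew-normal random intercepts (PyMC).
    import numpy as np
    import pymc as pm
    import arviz as az

    rng = np.random.default_rng(4)
    n_subj, n_obs = 30, 5
    subj = np.repeat(np.arange(n_subj), n_obs)
    t = np.tile(np.arange(n_obs), n_subj)
    b = rng.gamma(2.0, 1.0, n_subj) - 2.0        # skewed subject effects
    y = 1.0 + 0.5 * t + b[subj] + rng.normal(0, 0.5, subj.size)

    with pm.Model():
        beta0 = pm.Normal("beta0", 0, 10)
        beta1 = pm.Normal("beta1", 0, 10)
        omega = pm.HalfNormal("omega", 5)
        alpha = pm.Normal("alpha", 0, 2)          # random-effect skewness
        u = pm.SkewNormal("u", mu=0, sigma=omega, alpha=alpha, shape=n_subj)
        sigma = pm.HalfNormal("sigma", 5)
        pm.Normal("y_obs", mu=beta0 + beta1 * t + u[subj], sigma=sigma,
                  observed=y)
        idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

    print(az.summary(idata, var_names=["beta1", "alpha"]))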
Abstract:
Mixed models may be defined with or without reference to sampling, and can be used to predict realized random effects, as when estimating the latent values of study subjects measured with response error. When the model is specified without reference to sampling, a simple mixed model includes two random variables, one stemming from an exchangeable distribution of latent values of study subjects and the other, from the study subjects' response error distributions. Positive probabilities are assigned to both potentially realizable responses and artificial responses that are not potentially realizable, resulting in artificial latent values. In contrast, finite population mixed models represent the two-stage process of sampling subjects and measuring their responses, where positive probabilities are only assigned to potentially realizable responses. A comparison of the estimators over the same potentially realizable responses indicates that the optimal linear mixed model estimator (the usual best linear unbiased predictor, BLUP) is often (but not always) more accurate than the comparable finite population mixed model estimator (the FPMM BLUP). We examine a simple example and provide the basis for a broader discussion of the role of conditioning, sampling, and model assumptions in developing inference.
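A minimal sketch of the usual mixed-model BLUP for a realized subject latent value: the subject mean is shrunk toward the overall mean by a reliability weight. Variance components are taken as known, and all numbers are synthetic.

    # Minimal BLUP sketch: shrinkage prediction of realized latent values.
    import numpy as np

    rng = np.random.default_rng(5)
    sigma_b2, sigma_e2, n_rep = 4.0, 1.0, 3          # between/within variances
    latent = rng.normal(10, np.sqrt(sigma_b2), 8)    # realized latent values
    y = latent[:, None] + rng.normal(0, np.sqrt(sigma_e2), (8, n_rep))

    subj_means = y.mean(axis=1)
    grand_mean = y.mean()
    w = sigma_b2 / (sigma_b2 + sigma_e2 / n_rep)     # reliability weight
    blup = grand_mean + w * (subj_means - grand_mean)

    print("raw-mean MSE:", ((subj_means - latent) ** 2).mean())
    print("BLUP MSE    :", ((blup - latent) ** 2).mean())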
Abstract:
Prediction of random effects is an important problem with expanding applications. In the simplest context, the problem corresponds to prediction of the latent value (the mean) of a realized cluster selected via two-stage sampling. Recently, Stanek and Singer [Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 119-130] developed best linear unbiased predictors (BLUP) under a finite population mixed model that outperform BLUPs from mixed models and superpopulation models. Their setup, however, does not allow for unequally sized clusters. To overcome this drawback, we consider an expanded finite population mixed model based on a larger set of random variables that span a higher dimensional space than those typically applied to such problems. We show that BLUPs for linear combinations of the realized cluster means derived under such a model have considerably smaller mean squared error (MSE) than those obtained from mixed models, superpopulation models, and finite population mixed models. We motivate our general approach by an example developed for two-stage cluster sampling and show that it faithfully captures the stochastic aspects of sampling in the problem. We also consider simulation studies to illustrate the increased accuracy of the BLUP obtained under the expanded finite population mixed model. (C) 2007 Elsevier B.V. All rights reserved.
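A minimal simulation sketch of the two-stage setting with unequally sized clusters: the target is the realized finite-population mean of each sampled cluster (measured plus unmeasured units), predicted by shrinking the sample cluster mean. The expanded-model BLUP itself is more involved; this only mimics the setup, with synthetic data and known variance components.

    # Minimal two-stage cluster-sampling sketch with unequal cluster sizes.
    import numpy as np

    rng = np.random.default_rng(6)
    n_clusters = 20
    sizes = rng.integers(5, 40, n_clusters)          # unequal cluster sizes
    mu, sigma_b2, sigma_e2 = 50.0, 9.0, 4.0
    cluster_means = rng.normal(mu, np.sqrt(sigma_b2), n_clusters)

    err_raw, err_shrunk = [], []
    for c in range(n_clusters):
        units = rng.normal(cluster_means[c], np.sqrt(sigma_e2), sizes[c])
        m = max(2, sizes[c] // 3)                    # second-stage sample
        sample = units[:m]
        target = units.mean()                        # realized finite-pop mean
        w = sigma_b2 / (sigma_b2 + sigma_e2 / m)
        shrunk = mu + w * (sample.mean() - mu)
        err_raw.append((sample.mean() - target) ** 2)
        err_shrunk.append((shrunk - target) ** 2)

    print("sample-mean MSE:", np.mean(err_raw))
    print("shrinkage MSE  :", np.mean(err_shrunk))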
Abstract:
This paper presents a two-step pseudo-likelihood estimation technique for generalized linear mixed models with random effects that are correlated between groups. The core idea is to deal with the intractable integrals in the likelihood function by means of a multivariate Taylor approximation. The accuracy of the estimation technique is assessed in a Monte Carlo study. An application with a binary response variable is presented, using a real data set on credit defaults from two Swedish banks. Thanks to the two-step estimation technique, the proposed algorithm outperforms conventional pseudo-likelihood algorithms in terms of computational time.
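A minimal sketch of the underlying difficulty: the marginal likelihood of a random-intercept logit model requires integrating over the random effect. Below, the integral is handled by Gauss-Hermite quadrature rather than the paper's Taylor-approximation pseudo-likelihood, to keep the sketch short; data are synthetic.

    # Minimal marginal-likelihood GLMM sketch via Gauss-Hermite quadrature.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    n_groups, n_obs = 50, 10
    g = np.repeat(np.arange(n_groups), n_obs)
    x = rng.standard_normal(g.size)
    u = rng.normal(0, 1.0, n_groups)
    p_true = 1 / (1 + np.exp(-(0.5 + 1.0 * x + u[g])))
    y = (rng.random(g.size) < p_true).astype(float)

    nodes, weights = np.polynomial.hermite_e.hermegauss(20)  # N(0,1) nodes

    def neg_marginal_loglik(theta):
        b0, b1, log_tau = theta
        tau = np.exp(log_tau)
        ll = 0.0
        for j in range(n_groups):
            idx = g == j
            eta = b0 + b1 * x[idx]
            # group-j likelihood at each quadrature node of the intercept
            lin = eta[:, None] + tau * nodes[None, :]
            p = 1 / (1 + np.exp(-lin))
            lik = np.prod(np.where(y[idx][:, None] == 1, p, 1 - p), axis=0)
            ll += np.log((weights * lik).sum() / weights.sum())
        return -ll

    res = minimize(neg_marginal_loglik, x0=[0.0, 0.0, 0.0],
                   method="Nelder-Mead")
    print("estimates (b0, b1, tau):", res.x[0], res.x[1], np.exp(res.x[2]))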
Abstract:
Generalized linear mixed models are flexible tools for modeling non-normal data and are useful for accommodating overdispersion in Poisson regression models with random effects. Their main difficulty resides in parameter estimation, because there is no analytic solution for the maximization of the marginal likelihood. Many methods have been proposed for this purpose, and many of them are implemented in software packages. The purpose of this study is to compare the performance of three different statistical principles (marginal likelihood, extended likelihood, and Bayesian analysis) via simulation studies. Real data on contact wrestling are used for illustration.
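A minimal sketch of one of the three principles on an overdispersed Poisson model: a Bayesian random-intercept fit via statsmodels' variational approximation. Data and names are synthetic stand-ins.

    # Minimal Bayesian Poisson GLMM sketch (variational fit, statsmodels).
    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

    rng = np.random.default_rng(8)
    n_groups, n_obs = 40, 8
    df = pd.DataFrame({
        "group": np.repeat(np.arange(n_groups), n_obs),
        "x": rng.standard_normal(n_groups * n_obs),
    })
    u = rng.normal(0, 0.7, n_groups)                 # overdispersion source
    df["y"] = rng.poisson(np.exp(0.3 + 0.5 * df["x"] + u[df["group"]]))

    model = PoissonBayesMixedGLM.from_formula(
        "y ~ x", {"group": "0 + C(group)"}, df)
    result = model.fit_vb()                          # variational Bayes fit
    print(result.summary())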
Abstract:
In this paper, we propose nonlinear elliptical models for correlated data with heteroscedastic and/or autoregressive structures. Our aim is to extend the models proposed by Russo et al. [22] by considering a more sophisticated scale structure to deal with variations in data dispersion and/or possible autocorrelation among measurements taken on the same experimental unit. Moreover, to avoid the possible influence of outlying observations, or to take into account the non-normal symmetric tails of the data, we assume elliptical contours for the joint distribution of random effects and errors, which allows us to attribute different weights to the observations. We propose an iterative algorithm to obtain the maximum-likelihood estimates of the parameters and derive the local influence curvatures for some specific perturbation schemes. The motivation for this work comes from a pharmacokinetic indomethacin data set, previously analysed by Bocheng and Xuping [1] under normality.
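A minimal sketch of the nonlinear, heavy-tailed part: a biexponential pharmacokinetic curve (as for indomethacin) fitted by maximum likelihood under Student-t errors, one member of the elliptical class. Random effects and the heteroscedastic/autoregressive scale structures are omitted, and the data are synthetic.

    # Minimal ML fit of a biexponential model with Student-t errors.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import t as student_t

    rng = np.random.default_rng(9)
    time = np.linspace(0.25, 8, 40)
    conc = (2.8 * np.exp(-1.5 * time) + 0.6 * np.exp(-0.2 * time)
            + rng.standard_t(4, time.size) * 0.05)

    def neg_loglik(theta):
        a1, l1, a2, l2, log_s = theta
        mu = a1 * np.exp(-l1 * time) + a2 * np.exp(-l2 * time)
        s = np.exp(log_s)
        return -student_t.logpdf(conc, df=4, loc=mu, scale=s).sum()

    res = minimize(neg_loglik, x0=[2.0, 1.0, 0.5, 0.1, np.log(0.1)],
                   method="Nelder-Mead", options={"maxiter": 5000})
    print("ML estimates (A1, l1, A2, l2):", res.x[:4])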
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models for semiconductors that are later simulated in 2D. In this class of models, the flow of charged particles, that is, of negatively charged electrons and of so-called holes, quasi-particles of positive charge, as well as their energy distributions, is described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here utilizes mixed finite elements as trial functions for the discrete solution. The continuous approximation of the normal fluxes is, from the user's perspective, the most important property of this discretization. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators; at that stage, a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
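A minimal 1D sketch (not the thesis's 2D Energy-Transport scheme) of the lowest-order mixed finite element method for -u'' = f with u = 0 at both ends: the flux sigma = u' is approximated by continuous piecewise-linear functions and u by piecewise constants, so the flux is single-valued across element boundaries, mirroring the continuous-normal-flux property mentioned above.

    # Minimal 1D mixed FEM sketch for -u'' = f on [0, 1].
    import numpy as np

    n = 40                        # number of elements
    h = 1.0 / n
    x = np.linspace(0, 1, n + 1)

    # Mass matrix for the continuous piecewise-linear flux (n+1 dofs).
    A = np.zeros((n + 1, n + 1))
    for k in range(n):
        A[k:k + 2, k:k + 2] += h / 6.0 * np.array([[2, 1], [1, 2]])

    # Coupling matrix: B[i, k] = integral over element k of phi_i'.
    B = np.zeros((n + 1, n))
    for k in range(n):
        B[k, k], B[k + 1, k] = -1.0, 1.0

    f = lambda s: np.pi ** 2 * np.sin(np.pi * s)   # exact u = sin(pi x)
    F = h * f((x[:-1] + x[1:]) / 2)                # element loads (midpoint)

    # Symmetric saddle-point system for (sigma, u).
    K = np.block([[A, B], [B.T, np.zeros((n, n))]])
    rhs = np.concatenate([np.zeros(n + 1), -F])
    sol = np.linalg.solve(K, rhs)
    sigma, u = sol[:n + 1], sol[n + 1:]

    exact_u = np.sin(np.pi * (x[:-1] + x[1:]) / 2)
    print("max |u_h - u| at midpoints:", np.abs(u - exact_u).max())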