31 results for Random effect model
in Aston University Research Archive
Abstract:
Molecular transport in phase space is crucial for chemical reactions because it defines how pre-reactive molecular configurations are found during the time evolution of the system. Using Molecular Dynamics (MD) simulated atomistic trajectories, we test the assumption of normal diffusion in phase space for bulk water at ambient conditions by checking the equivalence of the transport to the random walk model. Contrary to common expectations, we found that some statistical features of the transport in phase space differ from those of normal diffusion models. This implies a non-random character of the path-search process by the reacting complexes in water solutions. Our further numerical experiments show that a significantly long period of non-stationarity in the transition probabilities of the segments of molecular trajectories can account for the observed non-uniform filling of the phase space. Surprisingly, the characteristic periods of the model non-stationarity amount to hundreds of nanoseconds, that is, time scales much longer than the typical lifetime of known liquid water molecular structures (several picoseconds).
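A minimal sketch of the kind of diffusion check described above, assuming a trajectory array of phase-space coordinates (here a simulated random walk stands in for the MD data): under normal diffusion the mean squared displacement (MSD) grows linearly with lag time, so a log-log slope near 1 is the random walk signature.

```python
import numpy as np

def msd(traj, max_lag):
    """Mean squared displacement of a trajectory.

    traj: array of shape (n_frames, n_dims), e.g. phase-space coordinates.
    """
    lags = np.arange(1, max_lag)
    out = np.empty(lags.size)
    for i, lag in enumerate(lags):
        disp = traj[lag:] - traj[:-lag]
        out[i] = (disp ** 2).sum(axis=1).mean()
    return lags, out

# A pure random walk as a stand-in trajectory: MSD ~ t, log-log slope ~= 1.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=(10_000, 3)), axis=0)
lags, m = msd(walk, 1_000)
slope = np.polyfit(np.log(lags), np.log(m), 1)[0]
print(f"log-log MSD slope: {slope:.2f} (close to 1 for normal diffusion)")
```

Deviations from this linear scaling, or MSD statistics that drift over long windows, are the sort of non-random, non-stationary behaviour the abstract reports.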
Abstract:
There is an alternative model of the one-way ANOVA called the 'random effects' model or 'nested' design, in which the objective is not to test specific effects but to estimate the degree of variation of a particular measurement and to compare different sources of variation that influence the measurement in space and/or time. The most important statistics from a random effects model are the components of variance, which estimate the variance associated with each of the sources of variation influencing a measurement. The nested design is particularly useful in preliminary experiments designed to estimate different sources of variation and in the planning of appropriate sampling strategies.
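For a balanced one-way random effects design with k groups and n replicates per group, the components of variance follow from the expected mean squares: E[MS_between] = sigma2_within + n * sigma2_between. A sketch with invented data (not the article's example):

```python
import numpy as np

def variance_components(data):
    """Components of variance for a balanced one-way random effects ANOVA.

    data: array of shape (k_groups, n_reps).
    Returns (sigma2_between, sigma2_within).
    """
    k, n = data.shape
    group_means = data.mean(axis=1)
    ms_between = n * ((group_means - data.mean()) ** 2).sum() / (k - 1)
    ms_within = ((data - group_means[:, None]) ** 2).sum() / (k * (n - 1))
    return max((ms_between - ms_within) / n, 0.0), ms_within

# Invented data: 10 groups, 4 replicates; true components are 25 (between) and 4 (within).
rng = np.random.default_rng(1)
data = 50 + rng.normal(scale=5, size=(10, 1)) + rng.normal(scale=2, size=(10, 4))
print(variance_components(data))
```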
Abstract:
Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry, including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design are discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered. © 2002 The College of Optometrists.
Abstract:
BACKGROUND: The behavioral and psychological symptoms related to dementia (BPSD) are difficult to manage and are associated with adverse patient outcomes. OBJECTIVE: To systematically analyze the data on memantine in the treatment of BPSD. METHODS: We searched MEDLINE, EMBASE, Pharm-line, the Cochrane Centre Collaboration, www.clinicaltrials.gov, www.controlled-trials.com, and PsycINFO (1966-July 2007). We contacted manufacturers and scrutinized the reference sections of articles identified in our search for further references, including conference proceedings. Two researchers (IM and CF) independently reviewed all studies identified by the search strategy. We included 6 randomized, parallel-group, double-blind studies that rated BPSD with the Neuropsychiatric Inventory (NPI) in our meta-analysis. Patients had probable Alzheimer's disease and received treatment with memantine for at least one month. Overall efficacy of memantine on the NPI was established with a t-test for the average difference between means across studies, using a random effects model. RESULTS: Five of the 6 studies identified had NPI outcome data. In these 5 studies, 868 patients were treated with memantine and 882 patients were treated with placebo. Patients on memantine improved by 1.99 on the NPI scale (95% CI -0.08 to -3.91; p = 0.041) compared with the placebo group. CONCLUSIONS: Initial data appear to indicate that memantine decreases NPI scores and may have a role in managing BPSD. However, there are a number of limitations with the current data; the effect size was relatively small, and whether memantine produces significant clinical benefit is not clear.
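The random effects pooling used in meta-analyses of this kind is commonly the DerSimonian-Laird estimator, which adds a between-study variance tau^2 to each study's sampling variance before weighting. A sketch with invented effect sizes, not the study's data:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random effects pooled estimate and 95% CI (DerSimonian-Laird)."""
    effects, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                    # fixed effect weights
    fixed = (w * effects).sum() / w.sum()
    q = (w * (effects - fixed) ** 2).sum()         # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max((q - (len(effects) - 1)) / c, 0.0)  # between-study variance
    w_re = 1.0 / (v + tau2)                        # random effects weights
    pooled = (w_re * effects).sum() / w_re.sum()
    se = (1.0 / w_re.sum()) ** 0.5
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Invented mean NPI differences and sampling variances for five studies:
print(dersimonian_laird([-1.5, -2.5, -0.8, -3.0, -2.0], [0.9, 1.2, 0.7, 1.5, 1.0]))
```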
Abstract:
In Statnote 9, we described a one-way analysis of variance (ANOVA) 'random effects' model in which the objective was to estimate the degree of variation of a particular measurement and to compare different sources of variation in space and time. The illustrative scenario involved the role of computer keyboards in a University communal computer laboratory as a possible source of microbial contamination of the hands. The study estimated the aerobic colony count of ten selected keyboards, with samples taken from two keys per keyboard determined at 9am and 5pm. This type of design is often referred to as a 'nested' or 'hierarchical' design, and the ANOVA estimated the degree of variation: (1) between keyboards, (2) between keys within a keyboard, and (3) between sample times within a key. An alternative to this design is a 'fixed effects' model in which the objective is not to measure sources of variation per se but to estimate differences between specific groups or treatments, which are regarded as 'fixed' or discrete effects. This Statnote describes two scenarios utilizing this type of analysis: (1) measuring the degree of bacterial contamination on 2p coins collected from three types of business property, viz., a butcher's shop, a sandwich shop, and a newsagent, and (2) the effectiveness of drugs in the treatment of a fungal eye infection.
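In the fixed effects model the question is whether the group means differ, answered by the usual F ratio of between-group to within-group mean squares. A minimal sketch with invented colony counts for the three business types (scipy.stats.f_oneway computes exactly this one-way fixed effects test):

```python
from scipy.stats import f_oneway

# Invented aerobic colony counts from 2p coins, five coins per property type:
butcher = [112, 98, 134, 120, 101]
sandwich = [87, 95, 78, 102, 90]
newsagent = [60, 72, 55, 81, 66]

f_stat, p_value = f_oneway(butcher, sandwich, newsagent)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```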
Abstract:
Assessing factors that predict new product success (NPS) holds critical importance for companies, as research shows that despite considerable new product investment, success rates are generally below 25%. Over the decades, meta-analytical attempts have been made to summarize empirical findings on NPS factors. However, market environment changes such as increased global competition, as well as methodological advancements in meta-analytical research, present a timely opportunity to augment their results. Hence, a key objective of this research is to provide an updated and extended meta-analytic investigation of the factors affecting NPS. Using Henard and Szymanski's meta-analysis as the most comprehensive recent summary of empirical findings, this study updates their findings by analyzing articles published from 1999 through 2011, the period following the original meta-analysis. Based on 233 empirical studies (from 204 manuscripts) on NPS, with a total of 2618 effect sizes, this study also takes advantage of more recent methodological developments by re-calculating effects of the meta-analysis employing a random effects model. The study's scope broadens by including overlooked but important additional variables, notably “country culture,” and discusses substantive differences between the updated meta-analysis and its predecessor. Results reveal generally weaker effect sizes than those reported by Henard and Szymanski in 2001, and provide evolutionary evidence of decreased effects of common success factors over time. Moreover, culture emerges as an important moderating factor, weakening effect sizes for individualistic countries and strengthening effects for risk-averse countries, highlighting the importance of further investigating culture's role in product innovation studies, and of tracking changes of success factors of product innovations. Finally, a sharp increase since 1999 in studies investigating product and process characteristics identifies a significant shift in research interest in new product development success factors. The finding that the importance of success factors generally declines over time calls for new theoretical approaches to better capture the nature of new product development (NPD) success factors. One might speculate that the potential to create competitive advantages through an understanding of NPD success factors is reduced as knowledge of these factors becomes more widespread among managers. Results also imply that managers attempting to improve success rates of NPDs need to consider national culture as this factor exhibits a strong moderating effect: Working in varied cultural contexts will result in differing antecedents of successful new product ventures.
Abstract:
This paper provides the most fully comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. We use non-linear, artificial intelligence techniques, namely, recurrent neural networks, evolution strategies and kernel methods in our forecasting experiment. In the experiment, these three methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. There is evidence in the literature that evolutionary methods can be used to evolve kernels; hence, our future work should combine the evolutionary and kernel methods to get the benefits of both.
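The benchmark logic is simple: a naive random walk forecasts tomorrow's value as today's, and a candidate model is useful only if it beats that. A sketch on a simulated series, with scikit-learn's kernel ridge regression standing in for the paper's kernel methods (the series and hyperparameters are invented):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
y = np.zeros(300)                          # simulated "inflation" series, AR(1)
for t in range(1, 300):
    y[t] = 0.7 * y[t - 1] + rng.normal(scale=0.5)

X, target = y[:-1].reshape(-1, 1), y[1:]   # lagged value -> next value
train = 200

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
model.fit(X[:train], target[:train])
pred_kernel = model.predict(X[train:])     # one-step-ahead forecasts
pred_rw = y[train:-1]                      # random walk: forecast = last value

rmse = lambda p: np.sqrt(np.mean((target[train:] - p) ** 2))
print(f"kernel RMSE: {rmse(pred_kernel):.3f}, random walk RMSE: {rmse(pred_rw):.3f}")
```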
Abstract:
Experimental results are presented which show that the indentation size effect for pyramidal and spherical indenters can be correlated. For a pyramidal indenter, the hardness measured in crystalline materials usually increases with decreasing depth of penetration, which is known as the indentation size effect. Spherical indentation also shows an indentation size effect. However, for a spherical indenter, hardness is not affected by depth, but increases with decreasing sphere radius. The correlation for pyramidal and spherical indenter shapes is based on geometrically necessary dislocations and work-hardening. The Nix and Gao indentation size effect model (J. Mech. Phys. Solids 46 (1998) 411) for conical indenters is extended to indenters of various shapes and compared to the experimental results. © 2002 Elsevier Science Ltd. All rights reserved.
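The Nix and Gao relation cited above predicts the depth dependence from geometrically necessary dislocations: H/H0 = sqrt(1 + h*/h), where H0 is the macroscopic hardness and h* a material length scale. A short sketch (parameter values are illustrative, not the paper's fits):

```python
import numpy as np

def nix_gao_hardness(h, H0=1.0, h_star=0.3):
    """Nix-Gao depth-dependent hardness H = H0 * sqrt(1 + h*/h).

    h and h_star share units (e.g. micrometres); values here are illustrative.
    """
    return H0 * np.sqrt(1.0 + h_star / np.asarray(h, float))

depths = np.array([0.05, 0.1, 0.5, 2.0])   # shallow -> deep indents
print(nix_gao_hardness(depths))            # hardness falls toward H0 with depth
```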
Abstract:
Numerous studies find that monetary models of exchange rates cannot beat a random walk model. Such a finding, however, is not surprising given that such models are built upon money demand functions and traditional money demand functions appear to have broken down in many developed countries. In this article, we investigate whether using a more stable underlying money demand function results in improvements in forecasts of monetary models of exchange rates. More specifically, we use a sweep-adjusted measure of US monetary aggregate M1 which has been shown to have a more stable money demand function than the official M1 measure. The results suggest that the monetary models of exchange rates contain information about future movements of exchange rates, but the success of such models depends on the stability of money demand functions and the specifications of the models.
Abstract:
The key to the correct application of ANOVA is careful experimental design and matching the correct analysis to that design. The following points should therefore be considered before designing any experiment:
1. In a single-factor design, ensure that the factor is identified as a 'fixed' or 'random effect' factor.
2. In more complex designs with more than one factor, there may be a mixture of fixed and random effect factors present, so ensure that each factor is clearly identified.
3. Where replicates can be grouped or blocked, the advantages of a randomised blocks design should be considered. There should be evidence, however, that blocking can sufficiently reduce the error variation to counter the loss of DF compared with a randomised design.
4. Where different treatments are applied sequentially to a patient, the advantages of a three-way design in which the different orders of the treatments are included as an 'effect' should be considered.
5. Combining different factors to make a more efficient experiment and to measure possible factor interactions should always be considered.
6. The effect of 'internal replication' should be taken into account in a factorial design in deciding the number of replications to be used. Where possible, each error term of the ANOVA should have at least 15 DF.
7. Consider carefully whether a particular factorial design can be considered to be a split-plot or a repeated measures design. If such a design is appropriate, consider how to continue the analysis, bearing in mind the problem of using post hoc tests in this situation.
Abstract:
The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
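The computational bottleneck the article addresses is visible in the exact Gaussian random field (Gaussian process) posterior, which requires solving against the full n x n covariance matrix; the sparse "basis vector" scheme approximates exactly this object. A minimal exact-posterior sketch with an invented kernel and data:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential covariance between 1-D location sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Exact GP posterior mean/variance; the O(n^3) solves on the full
    covariance matrix K are what sparse basis-vector methods avoid."""
    K = rbf(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    K_s = rbf(x_test, x_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    var = rbf(x_test, x_test).diagonal() - np.einsum(
        "ij,ji->i", K_s, np.linalg.solve(K, K_s.T))
    return mean, var

x = np.linspace(0, 10, 50)
y = np.sin(x) + 0.1 * np.random.default_rng(2).normal(size=50)
print(gp_posterior(x, y, np.array([2.5, 7.5])))
```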
Abstract:
This paper provides the most fully comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely, recurrent neural networks and kernel recursive least squares regression - techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
Abstract:
In order to generate sales promotion response predictions, marketing analysts estimate demand models using either disaggregated (consumer-level) or aggregated (store-level) scanner data. Comparison of predictions from these demand models is complicated by the fact that models may accommodate different forms of consumer heterogeneity depending on the level of data aggregation. This study shows via simulation that demand models with various heterogeneity specifications do not produce more accurate sales response predictions than a homogeneous demand model applied to store-level data, with one major exception: a random coefficients model designed to capture within-store heterogeneity using store-level data produced significantly more accurate sales response predictions (as well as better fit) compared to other model specifications. An empirical application to the paper towel product category adds additional insights. This article has supplementary material online.
Abstract:
Numerous studies find that monetary models of exchange rates cannot beat a random walk model. Such a finding, however, is not surprising given that such models are built upon money demand functions and traditional money demand functions appear to have broken down in many developed countries. In this paper, we investigate whether using a more stable underlying money demand function results in improvements in forecasts of monetary models of exchange rates. More specifically, we use a sweep-adjusted measure of US monetary aggregate M1 which has been shown to have a more stable money demand function than the official M1 measure. The results suggest that the monetary models of exchange rates contain information about future movements of exchange rates, but the success of such models depends on the stability of money demand functions and the specifications of the models.
Abstract:
We use non-parametric procedures to identify breaks in the underlying series of UK household sector money demand functions. Money demand functions are estimated using cointegration techniques and by employing both the Simple Sum and Divisia measures of money. P-star models are also estimated for out-of-sample inflation forecasting. Our findings suggest that the presence of breaks affects both the estimation of cointegrated money demand functions and the inflation forecasts. P-star forecast models based on Divisia measures appear more accurate at longer horizons and the majority of models with fundamentals perform better than a random walk model.
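For context, the P-star model defines the equilibrium price level in logs as p* = m + v* - q* (money stock plus equilibrium velocity minus potential output), and forecasts inflation from the price gap p - p*. A toy sketch with invented series, not the paper's data or specification:

```python
import numpy as np

def price_gap(m, v_star, q_star, p):
    """P-star price gap, all series in logs: p* = m + v* - q*."""
    return p - (m + v_star - q_star)

rng = np.random.default_rng(3)
n = 120
m, v_star, q_star = 4.0 + 0.01 * rng.normal(size=(3, n)).cumsum(axis=1)
p = (m + v_star - q_star) + rng.normal(scale=0.02, size=n)  # prices near p*

gap = price_gap(m, v_star, q_star, p)
infl = np.diff(p)                          # next-period inflation
slope = np.polyfit(gap[:-1], infl, 1)[0]   # gap enters negatively: p > p* damps inflation
print(f"gap coefficient: {slope:.2f}")
```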