25 results for Generalized least squares
at University of Queensland eSpace - Australia
Abstract:
It is shown that variance-balanced designs can be obtained from Type I orthogonal arrays for many general models with two kinds of treatment effects, including ones for interference, with general dependence structures. These designs can be used to obtain optimal and efficient designs. Some examples and design comparisons are given. (C) 2002 Elsevier B.V. All rights reserved.
Abstract:
In this article we investigate the asymptotic and finite-sample properties of predictors of regression models with autocorrelated errors. We prove new theorems associated with the predictive efficiency of generalized least squares (GLS) and incorrectly structured GLS predictors. We also establish the form associated with their predictive mean squared errors as well as the magnitude of these errors relative to each other and to those generated from the ordinary least squares (OLS) predictor. A large simulation study is used to evaluate the finite-sample performance of forecasts generated from models using different corrections for the serial correlation.
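The GLS-versus-OLS comparison described above can be sketched in miniature. The following is a hypothetical illustration, not the paper's setup: it assumes a simple regression with AR(1) errors and a known autocorrelation rho, and applies the classical quasi-differencing (Cochrane-Orcutt style) transform so that GLS reduces to OLS on transformed data.

```python
# Hypothetical sketch: GLS for y = a + b*x with AR(1) errors via
# quasi-differencing. Data and rho are illustrative assumptions.

def ols_slope_intercept(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def gls_ar1(x, y, rho):
    """GLS under AR(1) errors: OLS on quasi-differenced data.
    Transformed model: y_t - rho*y_{t-1} = a*(1-rho) + b*(x_t - rho*x_{t-1})."""
    xs = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
    ys = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    a_star, b = ols_slope_intercept(xs, ys)
    return a_star / (1 - rho), b  # recover the original intercept

# Noise-free data y = 1 + 2x: both estimators recover the true coefficients.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1 + 2 * xi for xi in x]
print(ols_slope_intercept(x, y))  # (1.0, 2.0)
print(gls_ar1(x, y, rho=0.5))     # (1.0, 2.0)
```

With noisy autocorrelated errors the two estimators diverge, and the paper's results concern exactly how their predictive mean squared errors compare.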
Abstract:
In this paper we propose a new identification method based on the residual white noise autoregressive criterion (Pukkila et al., 1990) to select the order of VARMA structures. Results from extensive simulation experiments based on different model structures with varying numbers of observations and numbers of component series are used to demonstrate the performance of this new procedure. We also use economic and business data to compare the model structures selected by this order selection method with those identified in other published studies.
Abstract:
In early generation variety trials, large numbers of new breeders' lines (varieties) may be compared, with each having little seed available. A so-called unreplicated trial has each new variety on just one plot at a site, but includes several replicated control varieties, making up between around 10% and 20% of the trial. The aim of the trial is to choose some (usually around one third) good performing new varieties to go on for further testing, rather than precise estimation of their mean yields. Now that spatial analyses of data from field experiments are becoming more common, there is interest in an efficient layout of an experiment given a proposed spatial analysis and an efficiency criterion. Common optimal design criteria values depend on the usual C-matrix, which is very large, and hence it is time consuming to calculate its inverse. Since most varieties are unreplicated, the variety incidence matrix has a simple form, and some matrix manipulations can dramatically reduce the computation needed. However, there are many designs to compare, and numerical optimisation gives little insight into good design features. Some possible design criteria are discussed, and approximations to their values considered. These allow the features of efficient layouts under spatial dependence to be given and compared. (c) 2006 Elsevier Inc. All rights reserved.
Abstract:
OctVCE is a Cartesian cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries, in particular from explosions. Virtual Cell Embedding (VCE) was chosen as its Cartesian cell kernel for its simplicity and sufficiency for practical engineering design problems. The code uses a finite-volume formulation of the unsteady Euler equations with a second-order explicit Runge-Kutta Godunov (MUSCL) scheme. Gradients are calculated using a least-squares method with a minmod limiter. The flux solvers used are AUSM, AUSMDV and EFM. No fluid-structure coupling or chemical reactions are allowed, but the gas model can be perfect gas, with JWL or JWLB for the explosive products. This report also describes the code's 'octree' mesh adaptive capability and point-inclusion query procedures for the VCE geometry engine. Finally, some space is also devoted to describing code parallelization using the shared-memory OpenMP paradigm. The user manual for the code is to be found in the companion report 2007/13.
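The minmod limiter named in the abstract has a standard definition that is easy to show in isolation. This sketch is illustrative and not taken from the OctVCE source: the limiter keeps the smaller-magnitude of two candidate slopes when they agree in sign and returns zero otherwise, which prevents the reconstruction from creating new extrema near shocks.

```python
# Illustrative sketch (not OctVCE code): the classical minmod limiter
# applied to two candidate slopes from a gradient reconstruction.

def minmod(a, b):
    if a * b <= 0.0:          # opposite signs (or a zero slope): flatten
        return 0.0
    return a if abs(a) < abs(b) else b

print(minmod(0.5, 2.0))   # 0.5  (same sign: keep the shallower slope)
print(minmod(-0.5, 2.0))  # 0.0  (sign change: limit to zero)
```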
Abstract:
The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with fitting of the isotherm data by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solutions is also developed which modifies the original solution having some negativity. The technique yields stable and converged solutions, and is implemented in a package RIDFEC. The package is demonstrated to be robust, yielding results which are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
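The core numerical step described above, regularised least squares followed by a repair of negative components, can be sketched on a toy problem. This is a hedged illustration: the 2x2 system, the regularisation parameter, and the crude clip-to-zero repair are all assumptions for demonstration; RIDFEC itself uses a finite element collocation of the adsorption integral and a more careful non-negativity modification.

```python
# Minimal sketch of ridge (Tikhonov) regularised least squares with a
# crude non-negativity repair. All values are illustrative assumptions.

def ridge_2x2(A, b, lam):
    """Solve (A^T A + lam*I) x = A^T b for a 2-column design matrix A."""
    m00 = sum(r[0] * r[0] for r in A) + lam   # normal-equation matrix
    m01 = sum(r[0] * r[1] for r in A)
    m11 = sum(r[1] * r[1] for r in A) + lam
    v0 = sum(r[0] * bi for r, bi in zip(A, b))  # A^T b
    v1 = sum(r[1] * bi for r, bi in zip(A, b))
    det = m00 * m11 - m01 * m01
    x = [(m11 * v0 - m01 * v1) / det, (m00 * v1 - m01 * v0) / det]
    # Crude repair: clip any negative components to zero
    return [max(xi, 0.0) for xi in x]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, -0.2, 0.9]
print(ridge_2x2(A, b, lam=0.1))  # second component is negative, clipped to 0
```

Clipping changes the fit of the remaining components, which is why the paper develops a modification of the original solution rather than a simple truncation.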
Abstract:
Residence time distribution studies of gas through a rotating drum bioreactor for solid-state fermentation were performed using carbon monoxide as a tracer gas. The exit concentration as a function of time differed considerably from profiles expected for plug flow, plug flow with axial dispersion, and continuous stirred tank reactor (CSTR) models. The data were then fitted by least-squares analysis to mathematical models describing a central plug flow region surrounded by either one dead region (a three-parameter model) or two dead regions (a five-parameter model). Model parameters were the dispersion coefficient in the central plug flow region, the volumes of the dead regions, and the exchange rates between the different regions. The superficial velocity of the gas through the reactor has a large effect on parameter values. Increased superficial velocity tends to decrease dead region volumes, interregion transfer rates, and axial dispersion. The significant deviation from CSTR, plug flow, and plug flow with axial dispersion of the residence time distribution of gas within small-scale reactors can lead to underestimation of the calculation of mass and heat transfer coefficients and hence has implications for reactor design and scaleup. (C) 2001 John Wiley & Sons, Inc.
Abstract:
The majority of past and current individual-tree growth modelling methodologies have failed to characterise and incorporate structured stochastic components. Rather, they have relied on deterministic predictions or have added an unstructured random component to predictions. In particular, spatial stochastic structure has been neglected, despite being present in most applications of individual-tree growth models. Spatial stochastic structure (also called spatial dependence or spatial autocorrelation) eventuates when spatial influences such as competition and micro-site effects are not fully captured in models. Temporal stochastic structure (also called temporal dependence or temporal autocorrelation) eventuates when a sequence of measurements is taken on an individual tree over time, and variables explaining temporal variation in these measurements are not included in the model. Nested stochastic structure eventuates when measurements are combined across sampling units and differences among the sampling units are not fully captured in the model. This review examines spatial, temporal, and nested stochastic structure and instances where each has been characterised in the forest biometry and statistical literature. Methodologies for incorporating stochastic structure in growth model estimation and prediction are described. Benefits from incorporation of stochastic structure include valid statistical inference, improved estimation efficiency, and more realistic and theoretically sound predictions. It is proposed in this review that individual-tree modelling methodologies need to characterise and include structured stochasticity. Possibilities for future research are discussed. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
The small sample performance of Granger causality tests under different model dimensions, degree of cointegration, direction of causality, and system stability are presented. Two tests based on maximum likelihood estimation of error-correction models (LR and WALD) are compared to a Wald test based on multivariate least squares estimation of a modified VAR (MWALD). In large samples all test statistics perform well in terms of size and power. For smaller samples, the LR and WALD tests perform better than the MWALD test. Overall, the LR test outperforms the other two in terms of size and power in small samples.
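The basic mechanics of a Granger causality test can be sketched in its simplest F-test form: compare the residual sum of squares of y regressed on its own lag against a regression that adds the lag of x. This is a hypothetical pure-Python illustration with one lag and synthetic data; the study above works with error-correction and VAR systems and the LR, WALD, and MWALD statistics, not this toy version.

```python
# Hypothetical sketch: a one-lag F-type Granger causality check.
# Data are synthetic and constructed so that x helps predict y.

def solve(M, v):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def rss(X, y):
    """Residual sum of squares of OLS of y on the columns of X."""
    p = len(X[0])
    M = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    v = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    beta = solve(M, v)
    return sum((yi - sum(b * xi for b, xi in zip(beta, r))) ** 2
               for r, yi in zip(X, y))

# y depends on lagged x (plus a small deterministic wiggle).
x = [0.3, 1.2, -0.7, 0.9, 0.1, -1.1, 0.6, 1.4, -0.2, 0.8]
y = [0.0]
for t in range(1, len(x)):
    y.append(0.5 * y[-1] + x[t - 1] + 0.01 * (-1) ** t)

ylag, xlag, yt = y[:-1], x[:-1], y[1:]
n = len(yt)
rss_r = rss([[1.0, yl] for yl in ylag], yt)                       # restricted
rss_u = rss([[1.0, yl, xl] for yl, xl in zip(ylag, xlag)], yt)    # unrestricted
F = ((rss_r - rss_u) / 1) / (rss_u / (n - 3))
print(F > 10.0)  # True: the lag of x clearly helps predict y
```

The small-sample distortions examined in the study arise precisely because such statistics only follow their nominal distributions asymptotically.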
Abstract:
When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
Abstract:
This article examines the efficiency of the National Football League (NFL) betting market. The standard ordinary least squares (OLS) regression methodology is replaced by a probit model. This circumvents potential econometric problems, and allows us to implement more sophisticated betting strategies where bets are placed only when there is a relatively high probability of success. In-sample tests indicate that probit-based betting strategies generate statistically significant profits. Whereas the profitability of a number of these betting strategies is confirmed by out-of-sample testing, there is some inconsistency among the remaining out-of-sample predictions. Our results also suggest that widely documented inefficiencies in this market tend to dissipate over time.
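The betting rule implied by a probit approach can be shown in outline: bet only when the predicted probability of success clears a threshold. This is a hedged sketch; the coefficients and the 0.6 threshold below are invented for illustration and are not estimates from NFL data or from this article.

```python
# Illustrative sketch of a probit-based betting rule. The coefficients
# beta0, beta1 and the threshold are hypothetical, not fitted values.
import math

def probit_prob(score, beta0=-0.1, beta1=0.05):
    """P(success) = Phi(beta0 + beta1*score), Phi the standard normal CDF."""
    z = beta0 + beta1 * score
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def place_bet(score, threshold=0.6):
    """Bet only when the predicted probability of success is high enough."""
    return probit_prob(score) > threshold

print(place_bet(10.0))  # True: Phi(0.4) is about 0.66, above the threshold
print(place_bet(0.0))   # False: Phi(-0.1) is about 0.46
```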
Abstract:
The microwave and thermal cure processes for the epoxy-amine systems N,N,N',N'-tetraglycidyl-4,4'-diaminodiphenyl methane (TGDDM) with diaminodiphenyl sulfone (DDS) and diaminodiphenyl methane (DDM) have been investigated. The DDS system was studied at a single cure temperature of 433 K and a single stoichiometry of 27 wt%, and the DDM system was studied at two stoichiometries, 19 and 32 wt%, and a range of temperatures between 373 and 413 K. The best values of the kinetic rate parameters for the consumption of amines have been determined by a least squares curve fit to a model for epoxy-amine cure. The activation energies for the rate parameters for the MY721/DDM system were determined, as was the overall activation energy for the cure reaction, which was found to be 62 kJ mol(-1). No evidence was found for any specific effect of the microwave radiation on the rate parameters, and both systems were found to be characterized by a negative substitution effect. Copyright (C) 2001 John Wiley & Sons, Ltd.
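The kind of least-squares kinetic fit described above can be sketched in a much-reduced form. This is a hedged illustration: it assumes a pseudo-first-order model c(t) = c0*exp(-k*t) fitted by linear least squares on ln(c), with synthetic data; the actual study fits a full epoxy-amine cure model for amine consumption.

```python
# Hedged sketch: fit a pseudo-first-order rate constant by linear least
# squares on ln(c) vs t. Data are synthetic, not from the cure study.
import math

def fit_first_order(times, conc):
    """Least-squares line through ln(c) vs t gives -k (slope) and ln(c0)."""
    logs = [math.log(c) for c in conc]
    n = len(times)
    mt = sum(times) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return -slope, math.exp(ml - slope * mt)  # (k, c0)

times = [0.0, 1.0, 2.0, 3.0, 4.0]
conc = [2.0 * math.exp(-0.3 * t) for t in times]
k, c0 = fit_first_order(times, conc)
print(round(k, 6), round(c0, 6))  # 0.3 2.0 for this noise-free data
```

Repeating such a fit at several temperatures and regressing ln(k) on 1/T is the standard route to the activation energies the abstract reports.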
Abstract:
This paper examines the trade relationship between the Gulf Cooperation Council (GCC) and the European Union (EU). A simultaneous equation regression model is developed and estimated to assist with the analysis. The regression results, using both the two stage least squares (2SLS) and ordinary least squares (OLS) estimation methods, reveal the existence of feedback effects between the two economic integrations. The results also show that during times of slack in oil prices, the GCC income from its investments overseas helped to finance its imports from the EU.
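The contrast between 2SLS and OLS mentioned above reduces, in the simplest case of one endogenous regressor and one instrument, to the classical IV ratio cov(z, y) / cov(z, x). The data below are synthetic, not the GCC-EU trade data, and the single-instrument setup is an illustrative simplification of a simultaneous equation system.

```python
# Minimal sketch of 2SLS with one instrument, where it collapses to the
# IV ratio cov(z, y) / cov(z, x). Data are synthetic for illustration.

def cov(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n

def iv_slope(z, x, y):
    """Two-stage least squares estimate with a single instrument z."""
    return cov(z, y) / cov(z, x)

# Noise-free construction: x = 2 + 3z and y = 1 + 0.5x, so the slope is 0.5.
z = [1.0, 2.0, 3.0, 4.0, 5.0]
x = [2 + 3 * zi for zi in z]
y = [1 + 0.5 * xi for xi in x]
print(iv_slope(z, x, y))  # 0.5
```

With endogeneity in x, OLS and this IV estimate diverge, which is why the paper reports both 2SLS and OLS results.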