22 results for partial least-squares regression
in University of Queensland eSpace - Australia
Abstract:
In this note we show by counter-example that the direct product of two weak uniquely completable partial latin squares is not necessarily a uniquely completable partial latin square. This counter-example refutes a conjecture by Gower (see [3]) on the direct product of two uniquely completable partial latin squares.
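For concreteness, the direct product construction at issue can be sketched in Python (a minimal sketch; the dict-based representation and the integer flattening convention are illustrative assumptions, not the paper's notation):

```python
# A partial latin square of order n is represented as a dict mapping
# filled cells (row, col) -> symbol, with rows, columns and symbols in
# range(n); empty cells are simply absent from the dict.

def direct_product(A, B, m):
    """Direct product of partial latin squares A and B (B of order m).

    The product cell ((i1, i2), (j1, j2)) is filled with the pair
    (A[i1, j1], B[i2, j2]) exactly when both factor cells are filled;
    pairs are flattened to integers via (x, y) -> x * m + y.
    """
    return {(i1 * m + i2, j1 * m + j2): a * m + b
            for (i1, j1), a in A.items()
            for (i2, j2), b in B.items()}

# Tiny example: two order-2 partial squares give an order-4 partial square.
A = {(0, 0): 0, (1, 1): 1}
B = {(0, 1): 1}
print(direct_product(A, B, 2))
```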
Abstract:
Alcohol dependence is characterized by tolerance, physical dependence, and craving. The neuroadaptations underlying these effects of chronic alcohol abuse are likely due to altered gene expression. Previous gene expression studies using human post-mortem brain demonstrated that several gene families were altered by alcohol abuse. However, most of these changes in gene expression were small. It is not clear whether gene expression profiles have sufficient power to discriminate control from alcoholic individuals, or how consistent gene expression changes are when a relatively large sample size is examined. In the present study, microarray analysis (~47,000 elements) was performed on the superior frontal cortex of 27 individual human cases (14 well-characterized alcoholics and 13 matched controls). A partial least squares statistical procedure was applied to identify genes with altered expression levels in alcoholics. We found that genes involved in myelination, ubiquitination, apoptosis, cell adhesion, neurogenesis, and neural disease showed altered expression levels. Importantly, genes involved in neurodegenerative diseases such as Alzheimer's disease were significantly altered, suggesting a link between alcoholism and other neurodegenerative conditions. A total of 27 genes identified in this study had been shown to be changed by alcohol abuse in previous studies of human post-mortem brain. These results reveal a consistent re-programming of gene expression in alcohol abusers that reliably discriminates alcoholic from non-alcoholic individuals.
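A partial least squares discrimination of the kind described can be sketched with scikit-learn (a minimal illustration on synthetic data standing in for the actual microarray matrix; the planted effect size and component count are assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-in for the expression matrix: 27 cases x 47,000 probes,
# with labels 1 = alcoholic (14 cases) and 0 = control (13 cases).
y = np.array([1] * 14 + [0] * 13)
X = rng.normal(size=(27, 47000))
X[y == 1, :50] += 0.8  # plant a small group difference in 50 "genes"

# PLS regression of class label on expression; cross-validated predictions
# gauge whether the profile discriminates the two groups.
pls = PLSRegression(n_components=2)
scores = cross_val_predict(pls, X, y, cv=5).ravel()
print("CV discrimination accuracy:", np.mean((scores > 0.5) == y))

# Component weights rank genes by contribution to the discrimination.
pls.fit(X, y)
top_genes = np.argsort(np.abs(pls.x_weights_[:, 0]))[::-1][:20]
print("top-ranked probes:", top_genes)
```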
Abstract:
Objective: This study examined a sample of patients in Victoria, Australia, to identify factors in selection for conditional release from an initial hospitalization that occurred within 30 days of entry into the mental health system. Methods: Data were from the Victorian Psychiatric Case Register. All patients first hospitalized and conditionally released between 1990 and 2000 were identified (N = 8,879), and three comparison groups were created. Two groups were hospitalized within 30 days of entering the system: those who were given conditional release and those who were not. A third group was conditionally released from a hospitalization that occurred after or extended beyond 30 days after system entry. Logistic regression identified characteristics that distinguished the first group. Ordinary least-squares regression was used to evaluate the contribution of conditional release early in treatment to reducing inpatient episodes, inpatient days, days per episode, and inpatient days per 30 days in the system. Results: Conditional release early in treatment was used for 11 percent of the sample, or more than a third of those who were eligible for this intervention. Factors significantly associated with selection for early conditional release were those related to a better prognosis (initial hospitalization at a later age and more than an 11th-grade education), a lower likelihood of a diagnosis of dementia or schizophrenia, involuntary status at first inpatient admission, and greater community involvement (being employed and being married). When the analyses controlled for these factors, use of conditional release early in treatment was significantly associated with a reduction in use of subsequent inpatient care.
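The two-stage analysis described (logistic regression for selection, ordinary least-squares regression for outcomes) might be sketched as follows, assuming statsmodels and invented covariates in place of the register's actual variables:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 8879

# Illustrative covariates standing in for the case-register variables.
age_first_hosp = rng.normal(35, 12, n)
educ_gt_11 = rng.integers(0, 2, n)
employed = rng.integers(0, 2, n)
married = rng.integers(0, 2, n)
X = sm.add_constant(np.column_stack([age_first_hosp, educ_gt_11, employed, married]))

# Stage 1: logistic regression for selection into early conditional release.
logit_p = -2.5 + 0.02 * age_first_hosp + 0.4 * educ_gt_11 + 0.3 * employed + 0.3 * married
released = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
selection = sm.Logit(released, X).fit(disp=0)
print(selection.params)

# Stage 2: OLS of an inpatient-use outcome on early release, controlling
# for the same selection-related covariates.
inpatient_days = 60 - 10 * released + 0.1 * age_first_hosp + rng.normal(0, 15, n)
outcome = sm.OLS(inpatient_days, np.column_stack([X, released])).fit()
print("effect of early conditional release:", outcome.params[-1])
```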
Abstract:
In this article we investigate the asymptotic and finite-sample properties of predictors in regression models with autocorrelated errors. We prove new theorems on the predictive efficiency of generalized least squares (GLS) and incorrectly structured GLS predictors. We also derive their predictive mean squared errors and establish the magnitude of these errors relative to each other and to those generated by the ordinary least squares (OLS) predictor. A large simulation study is used to evaluate the finite-sample performance of forecasts generated from models using different corrections for serial correlation.
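A minimal numerical illustration of the GLS-versus-OLS prediction comparison (assuming an AR(1) error structure and statsmodels' GLSAR, neither of which is dictated by the abstract):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, rho = 200, 0.7

# Regression with AR(1) errors: y_t = 1 + 2 x_t + u_t, u_t = rho u_{t-1} + e_t.
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + rng.normal()
y = 1 + 2 * x + u
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()
gls = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)  # estimates rho, then GLS

# One-step-ahead forecast: the GLS predictor can exploit the last residual
# through the estimated autoregressive coefficient.
x_new = 0.5
ols_pred = ols.params @ [1, x_new]
gls_pred = gls.params @ [1, x_new] + gls.model.rho[0] * (y[-1] - gls.params @ X[-1])
print("OLS forecast:", ols_pred, " GLS forecast:", gls_pred)
```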
Abstract:
The majority of past and current individual-tree growth modelling methodologies have failed to characterise and incorporate structured stochastic components. Rather, they have relied on deterministic predictions or have added an unstructured random component to predictions. In particular, spatial stochastic structure has been neglected, despite being present in most applications of individual-tree growth models. Spatial stochastic structure (also called spatial dependence or spatial autocorrelation) eventuates when spatial influences such as competition and micro-site effects are not fully captured in models. Temporal stochastic structure (also called temporal dependence or temporal autocorrelation) eventuates when a sequence of measurements is taken on an individual tree over time, and variables explaining temporal variation in these measurements are not included in the model. Nested stochastic structure eventuates when measurements are combined across sampling units and differences among the sampling units are not fully captured in the model. This review examines spatial, temporal, and nested stochastic structure and instances where each has been characterised in the forest biometry and statistical literature. Methodologies for incorporating stochastic structure in growth model estimation and prediction are described. Benefits from incorporation of stochastic structure include valid statistical inference, improved estimation efficiency, and more realistic and theoretically sound predictions. It is proposed in this review that individual-tree modelling methodologies need to characterise and include structured stochasticity. Possibilities for future research are discussed. (C) 2001 Elsevier Science B.V. All rights reserved.
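As a concrete illustration of one such structure, a sketch of a growth regression with plot-level nested random effects (using statsmodels' MixedLM on invented data; the review itself covers spatial and temporal structures as well):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Simulated data: 40 plots (sampling units), 25 trees each; unobserved
# plot effects induce the nested stochastic structure the review describes.
plots = np.repeat(np.arange(40), 25)
dbh = rng.normal(25, 6, plots.size)               # tree diameter
plot_eff = rng.normal(0, 1.5, 40)[plots]          # micro-site / plot effect
growth = 0.8 + 0.05 * dbh + plot_eff + rng.normal(0, 1, plots.size)
df = pd.DataFrame({"growth": growth, "dbh": dbh, "plot": plots})

# Mixed model: fixed effect of dbh, random intercept per plot. Ignoring
# the plot effect (plain OLS) would understate standard errors.
fit = smf.mixedlm("growth ~ dbh", df, groups=df["plot"]).fit()
print(fit.summary())
```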
Abstract:
This article examines the efficiency of the National Football League (NFL) betting market. The standard ordinary least squares (OLS) regression methodology is replaced by a probit model. This circumvents potential econometric problems, and allows us to implement more sophisticated betting strategies where bets are placed only when there is a relatively high probability of success. In-sample tests indicate that probit-based betting strategies generate statistically significant profits. Whereas the profitability of a number of these betting strategies is confirmed by out-of-sample testing, there is some inconsistency among the remaining out-of-sample predictions. Our results also suggest that widely documented inefficiencies in this market tend to dissipate over time.
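The probit-based betting strategy described can be sketched as follows (a minimal illustration using statsmodels on synthetic game data; the predictor variables and the 0.6 confidence cutoff are assumptions, not the paper's specification):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000

# Synthetic games: 'spread' is the posted line, 'edge' a hypothetical
# predictor; outcome = 1 if the home team covers the spread.
spread = rng.normal(0, 6, n)
edge = rng.normal(0, 1, n)
cover = (0.3 * edge - 0.02 * spread + rng.normal(0, 1, n) > 0).astype(int)
X = sm.add_constant(np.column_stack([spread, edge]))

probit = sm.Probit(cover, X).fit(disp=0)
p_hat = probit.predict(X)

# Bet only when the model is relatively confident, as in the paper's
# high-probability strategies (0.6 is an arbitrary illustrative cutoff).
bets = p_hat > 0.6
print(f"bets placed: {bets.sum()}, win rate: {cover[bets].mean():.3f}")
```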
Abstract:
This paper examines the trade relationship between the Gulf Cooperation Council (GCC) and the European Union (EU). A simultaneous equation regression model is developed and estimated to assist with the analysis. The regression results, using both the two stage least squares (2SLS) and ordinary least squares (OLS) estimation methods, reveal the existence of feedback effects between the two economic integrations. The results also show that during times of slack in oil prices, the GCC income from its investments overseas helped to finance its imports from the EU.
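For reference, the 2SLS estimator used alongside OLS can be sketched with plain NumPy (synthetic data; in the paper the endogenous regressors and instruments come from the GCC-EU trade system):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# Structural equation with an endogenous regressor x and instrument z:
# x is correlated with the structural error e, so OLS is biased.
z = rng.normal(size=n)
e = rng.normal(size=n)
x = 0.8 * z + 0.5 * e + rng.normal(size=n)
y = 1.0 + 2.0 * x + e

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# Stage 1: project x on the instruments; Stage 2: OLS with the fitted x.
x_hat = Z @ ols(Z, x)
beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)
beta_ols = ols(X, y)
print("OLS slope (biased):", beta_ols[1], " 2SLS slope:", beta_2sls[1])
```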
Abstract:
In various signal-channel-estimation problems, the channel being estimated may be well approximated by a discrete finite impulse response (FIR) model with sparsely separated active or nonzero taps. A common approach to estimating such channels involves a discrete normalized least-mean-square (NLMS) adaptive FIR filter, every tap of which is adapted at each sample interval. Such an approach suffers from slow convergence rates and poor tracking when the required FIR filter is "long." Recently, NLMS-based algorithms have been proposed that employ least-squares-based structural detection techniques to exploit possible sparse channel structure and subsequently provide improved estimation performance. However, these algorithms perform poorly when there is a large dynamic range amongst the active taps. In this paper, we propose two modifications to the previous algorithms, which essentially remove this limitation. The modifications also significantly improve the applicability of the detection technique to structurally time varying channels. Importantly, for sparse channels, the computational cost of the newly proposed detection-guided NLMS estimator is only marginally greater than that of the standard NLMS estimator. Simulations demonstrate the favourable performance of the newly proposed algorithm. © 2006 IEEE.
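The standard NLMS baseline that the paper's detection-guided variants improve upon can be sketched as follows (a textbook NLMS in NumPy estimating a sparse FIR channel; the channel taps and step size are illustrative, and the sparse-detection modifications themselves are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)

# Sparse FIR channel: 100 taps, only a few active, with a large dynamic
# range amongst the active taps (the regime the paper targets).
h = np.zeros(100)
h[[3, 40, 41, 90]] = [1.0, -0.5, 0.05, 0.2]

n_samples = 5000
x = rng.normal(size=n_samples)
d = np.convolve(x, h)[:n_samples] + 0.01 * rng.normal(size=n_samples)

# Standard NLMS: every tap adapted at each sample interval (the slow
# baseline that detection-guided estimators improve on for sparse channels).
w = np.zeros_like(h)
mu, eps = 0.5, 1e-6
for n in range(len(h), n_samples):
    u = x[n - len(h) + 1:n + 1][::-1]      # regressor, most recent first
    e = d[n] - w @ u                       # a priori estimation error
    w += mu * e * u / (u @ u + eps)        # normalized LMS update

print("tap estimation error:", np.linalg.norm(w - h))
```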
Abstract:
OctVCE is a Cartesian cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries, in particular from explosions. Virtual Cell Embedding (VCE) was chosen as its Cartesian cell kernel for its simplicity and sufficiency for practical engineering design problems. The code uses a finite-volume formulation of the unsteady Euler equations with a second-order explicit Runge-Kutta Godunov (MUSCL) scheme. Gradients are calculated using a least-squares method with a minmod limiter. Flux solvers used are AUSM, AUSMDV and EFM. No fluid-structure coupling or chemical reactions are allowed, but gas models can be perfect gas and JWL or JWLB for the explosive products. This report also describes the code's 'octree' mesh adaptive capability and point-inclusion query procedures for the VCE geometry engine. Finally, some space is also devoted to describing code parallelization using the shared-memory OpenMP paradigm. The user manual for the code is to be found in the companion report 2007/13.
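The least-squares gradient computation with a minmod limiter mentioned in the report can be illustrated generically (a cell-centred sketch in NumPy; this is not OctVCE's actual implementation, and the way the limiter is applied here is only one illustrative choice):

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: 0 on sign disagreement, else the smaller magnitude."""
    return np.where(a * b <= 0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def ls_gradient(xc, uc, xn, un):
    """Least-squares gradient of u at a cell from its neighbours.

    xc, uc: cell centroid (2,) and value; xn, un: neighbour centroids
    (k, 2) and values. Solves min ||dX g - du|| for the gradient g.
    """
    dX = xn - xc
    du = un - uc
    g, *_ = np.linalg.lstsq(dX, du, rcond=None)
    return g

# One cell with four unit-distance neighbours; u = 2x + 3y sampled exactly.
xc, uc = np.array([0.0, 0.0]), 0.0
xn = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
un = 2 * xn[:, 0] + 3 * xn[:, 1]
g = ls_gradient(xc, uc, xn, un)

# Limit each component against a one-sided difference (illustrative use).
g_lim = minmod(g, np.array([un[0] - uc, un[2] - uc]))
print("unlimited:", g, " limited:", g_lim)
```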
Abstract:
The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with fitting of the isotherm data by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solutions is also developed, which modifies an original solution that has negative components. The technique yields stable and converged solutions, and is implemented in a package, RIDFEC. The package is demonstrated to be robust, yielding results which are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
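The regularized least-squares inversion with enforced non-negativity might be sketched as follows (Tikhonov regularization solved with SciPy's nnls; the kernel below is a generic placeholder for the adsorption integral, not the one used in RIDFEC):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)

# Discretized adsorption integral: isotherm(p_i) = sum_j K(p_i, r_j) f(r_j).
p = np.linspace(0.01, 1.0, 60)      # relative pressures
r = np.linspace(0.5, 10.0, 40)      # pore radii, discretization nodes
K = 1.0 / (1.0 + np.exp(-(p[:, None] * 10 - r[None, :])))  # placeholder kernel

f_true = np.exp(-((r - 3.0) ** 2))  # "true" pore size distribution
data = K @ f_true + 0.01 * rng.normal(size=p.size)

# Tikhonov regularization with non-negativity: augment the system with
# lam * I rows, then solve the non-negative least-squares problem.
lam = 0.1
A = np.vstack([K, lam * np.eye(r.size)])
b = np.concatenate([data, np.zeros(r.size)])
f_hat, _ = nnls(A, b)
print("fit error:", np.linalg.norm(K @ f_hat - data))
```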
Abstract:
Residence time distribution studies of gas through a rotating drum bioreactor for solid-state fermentation were performed using carbon monoxide as a tracer gas. The exit concentration as a function of time differed considerably from the profiles expected for plug flow, plug flow with axial dispersion, and continuous stirred tank reactor (CSTR) models. The data were then fitted by least-squares analysis to mathematical models describing a central plug flow region surrounded by either one dead region (a three-parameter model) or two dead regions (a five-parameter model). Model parameters were the dispersion coefficient in the central plug flow region, the volumes of the dead regions, and the exchange rates between the different regions. The superficial velocity of the gas through the reactor has a large effect on parameter values: increased superficial velocity tends to decrease dead region volumes, interregion transfer rates, and axial dispersion. The significant deviation of the residence time distribution of gas within small-scale reactors from the CSTR, plug flow, and plug flow with axial dispersion models can lead to underestimation of mass and heat transfer coefficients, and hence has implications for reactor design and scale-up. (C) 2001 John Wiley & Sons, Inc.
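In the same spirit, fitting an RTD model to tracer data by least squares can be sketched generically (here a simple open-vessel axial-dispersion RTD fitted with SciPy's curve_fit, standing in for the paper's three- and five-parameter multi-region models):

```python
import numpy as np
from scipy.optimize import curve_fit

def rtd_dispersion(t, tau, Pe):
    """Open-vessel axial-dispersion RTD, E(t) = E(theta)/tau, theta = t/tau."""
    theta = t / tau
    return np.sqrt(Pe / (4 * np.pi * theta)) * np.exp(
        -Pe * (1 - theta) ** 2 / (4 * theta)) / tau

# Synthetic tracer response standing in for the CO tracer data.
rng = np.random.default_rng(8)
t = np.linspace(0.1, 30, 200)
signal = rtd_dispersion(t, tau=8.0, Pe=20.0) + 0.002 * rng.normal(size=t.size)

# Least-squares fit of mean residence time tau and Peclet number Pe;
# bounds keep the optimizer in the physically meaningful region.
(tau_hat, Pe_hat), _ = curve_fit(rtd_dispersion, t, signal,
                                 p0=[5.0, 10.0],
                                 bounds=([0.1, 1.0], [50.0, 200.0]))
print(f"tau = {tau_hat:.2f}, Pe = {Pe_hat:.1f}")
```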