918 results for Generalized Least Squares
Abstract:
As is well known, Hessian-based adaptive filters (such as the recursive-least squares algorithm (RLS) for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms [such as the least-mean-squares algorithm (LMS) or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms of different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error for convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
Abstract:
Data from 9 studies were compiled to evaluate the effects of 20 yr of selection for postweaning weight (PWW) on carcass characteristics and meat quality in experimental herds of control Nellore (NeC) and selected Nellore (NeS), Caracu (CaS), Guzerah (GuS), and Gir (GiS) breeds. These studies were conducted with animals from a genetic selection program at the Experimental Station of Sertaozinho, Sao Paulo State, Brazil. After the performance test (168 d postweaning), bulls (n = 490) from the calf crops born between 1992 and 2000 were finished and slaughtered to evaluate carcass traits and meat quality. Treatments were different across studies. A meta-analysis was conducted with a random coefficients model in which herd was considered a fixed effect and treatments within year and year were considered as random effects. Either calculated maturity degree or initial BW was used interchangeably as the covariate, and least squares means were used in the multiple-comparison analysis. The CaS and NeS had heavier (P = 0.002) carcasses than the NeC and GiS; GuS were intermediate. The CaS had the longest carcass (P < 0.001) and heaviest spare ribs (P < 0.001), striploin (P < 0.001), and beef plate (P = 0.013). Although the body, carcass, and quarter weights of NeS were similar to those of CaS, NeS had more edible meat in the leg region than did CaS bulls. Selection for PWW increased rib-eye area in Nellore bulls. Selected Caracu had the lowest (most favorable) shear force values compared with the NeS (P = 0.003), NeC (P = 0.005), GuS (P = 0.003), and GiS (P = 0.008). Selection for PWW increased body, carcass, and meat retail weights in the Nellore without altering dressing percentage and body fat percentage.
Abstract:
Tuberculosis is an infection caused mainly by Mycobacterium tuberculosis. A first-line antimycobacterial drug is pyrazinamide (PZA), which acts partially as a prodrug: it is activated by a pyrazinamidase that releases the active agent, pyrazinoic acid (POA). Because pyrazinoic acid has difficulty crossing the mycobacterial cell wall, and pyrazinamide-resistant strains do not express the pyrazinamidase, a set of pyrazinoic acid esters has been evaluated as antimycobacterial agents. In this work, a QSAR approach was applied to a set of forty-three pyrazinoates against M. tuberculosis ATCC 27294, using a genetic algorithm function and partial least squares regression (WOLF 5.5 program). The independent variables selected were the Balaban index (I), calculated n-octanol/water partition coefficient (ClogP), van der Waals surface area, dipole moment, and stretching-energy contribution. The final QSAR model (N = 32, r(2) = 0.68, q(2) = 0.59, LOF = 0.25, and LSE = 0.19) was fully validated employing leave-N-out cross-validation and y-scrambling techniques. The test set (N = 11) presented an external prediction power of 73%. In conclusion, the QSAR model generated can be used as a valuable tool to optimize the activity of future pyrazinoic acid esters in the design of new antituberculosis agents.
Abstract:
Histamine is an important biogenic amine, which acts on a group of four G-protein coupled receptors (GPCRs), namely the H(1) to H(4) (H(1)R - H(4)R) receptors. The actions of histamine at H(4)R are related to immunological and inflammatory processes, particularly in the pathophysiology of asthma, and H(4)R ligands with antagonistic properties could be helpful as anti-inflammatory agents. In this work, molecular modeling and QSAR studies of a set of 30 compounds, indole and benzimidazole derivatives, as H(4)R antagonists were performed. The QSAR models were built and optimized using a genetic algorithm function and partial least squares regression (WOLF 5.5 program). The best QSAR model, constructed with the training set (N = 25), presented the following statistical measures: r(2) = 0.76, q(2) = 0.62, LOF = 0.15, and LSE = 0.07, and was validated using the LNO and y-randomization techniques. Four of the five test-set compounds were well predicted by the selected QSAR model, which presented an external prediction power of 80%. These findings can be quite useful in aiding the design of new anti-H(4) compounds with improved biological response.
Abstract:
Chlorpheniramine maleate (CLOR) enantiomers were quantified by ultraviolet spectroscopy and partial least squares regression. The CLOR enantiomers were prepared as inclusion complexes with beta-cyclodextrin and 1-butanol with mole fractions in the range from 50 to 100%. For the multivariate calibration the outliers were detected and excluded and variable selection was performed by interval partial least squares and a genetic algorithm. Figures of merit showed results for accuracy of 3.63 and 2.83% (S)-CLOR for root mean square errors of calibration and prediction, respectively. The ellipse confidence region included the point for the intercept and the slope of 1 and 0, respectively. Precision and analytical sensitivity were 0.57 and 0.50% (S)-CLOR, respectively. The sensitivity, selectivity, adjustment, and signal-to-noise ratio were also determined. The model was validated by a paired t test with the results obtained by high-performance liquid chromatography proposed by the European pharmacopoeia and circular dichroism spectroscopy. The results showed there was no significant difference between the methods at the 95% confidence level, indicating that the proposed method can be used as an alternative to standard procedures for chiral analysis.
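The partial least squares regression at the core of such multivariate calibrations can be sketched with a bare-bones PLS1 (NIPALS) implementation. This is a generic single-response formulation for illustration only; the iPLS/GA variable selection and figures of merit used in the study are not reproduced:

```python
import numpy as np

def pls1_fit(X, y, ncomp):
    # PLS1 regression via NIPALS; X and y are assumed mean-centred
    Xr, yr = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)           # weight vector
        t = Xr @ w                       # score vector
        tt = t @ t
        p = Xr.T @ t / tt                # X loading
        qk = (yr @ t) / tt               # y loading
        Xr = Xr - np.outer(t, p)         # deflate X and y
        yr = yr - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    # regression vector back in the original (centred) X space
    return W @ np.linalg.solve(P.T @ W, q)
```

In practice one centres the spectra, chooses `ncomp` by cross-validation, and reports RMSEC/RMSEP on calibration and prediction sets, as the abstract does.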
Abstract:
OctVCE is a Cartesian cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries, in particular from explosions. Virtual Cell Embedding (VCE) was chosen as its Cartesian cell kernel for its simplicity and sufficiency for practical engineering design problems. The code uses a finite-volume formulation of the unsteady Euler equations with a second-order explicit Runge-Kutta Godunov (MUSCL) scheme. Gradients are calculated using a least-squares method with a minmod limiter. The flux solvers used are AUSM, AUSMDV and EFM. No fluid-structure coupling or chemical reactions are allowed, but the gas models can be perfect gas and JWL or JWLB for the explosive products. This report also describes the code's 'octree' mesh adaptive capability and point-inclusion query procedures for the VCE geometry engine. Finally, some space is also devoted to describing code parallelization using the shared-memory OpenMP paradigm. The user manual for the code is to be found in the companion report 2007/13.
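The gradient/limiter combination mentioned above can be illustrated with a small sketch. These are the generic forms of an unweighted least-squares cell gradient and the scalar minmod limiter; OctVCE's actual stencils and weighting are not reproduced here:

```python
import numpy as np

def ls_gradient(xc, phic, xnbrs, phinbrs):
    # unweighted least-squares cell gradient:
    # minimize sum_j (phi_j - phi_c - g . (x_j - x_c))^2 over g
    dX = np.asarray(xnbrs, dtype=float) - np.asarray(xc, dtype=float)
    dphi = np.asarray(phinbrs, dtype=float) - phic
    g, *_ = np.linalg.lstsq(dX, dphi, rcond=None)
    return g

def minmod(a, b):
    # minmod slope limiter: zero when the slopes disagree in sign,
    # otherwise the slope of smaller magnitude
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b
```

For a linear field the least-squares gradient is exact regardless of neighbour layout, which is one reason the approach suits irregular cut-cell stencils.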
Abstract:
The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with fitting of the isotherm data by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solutions is also developed which modifies the original solution having some negativity. The technique yields stable and converged solutions, and is implemented in a package RIDFEC. The package is demonstrated to be robust, yielding results which are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
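The regularized least-squares step can be sketched as follows. This uses a plain Tikhonov penalty and a crude clip-and-resolve non-negativity fix as stand-ins; the collocation matrix construction and the actual RIDFEC procedure are not reproduced:

```python
import numpy as np

def regularized_ls(A, b, alpha):
    # Tikhonov-regularized least squares: minimize ||A f - b||^2 + alpha ||f||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def nonneg_fix(A, b, alpha):
    # crude non-negativity repair: zero the negative entries of the
    # regularized solution and re-solve on the remaining support
    f = regularized_ls(A, b, alpha)
    support = f > 0
    f[~support] = 0.0
    if support.any():
        As = A[:, support]
        f[support] = np.linalg.solve(
            As.T @ As + alpha * np.eye(support.sum()), As.T @ b)
    return f
```

The regularization parameter `alpha` trades fitting error against solution smoothness; too small and the ill-posed inversion amplifies data noise, too large and real features of the distribution are smeared out.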
Abstract:
Residence time distribution studies of gas through a rotating drum bioreactor for solid-state fermentation were performed using carbon monoxide as a tracer gas. The exit concentration as a function of time differed considerably from profiles expected for plug flow, plug flow with axial dispersion, and continuous stirred tank reactor (CSTR) models. The data were then fitted by least-squares analysis to mathematical models describing a central plug flow region surrounded by either one dead region (a three-parameter model) or two dead regions (a five-parameter model). Model parameters were the dispersion coefficient in the central plug flow region, the volumes of the dead regions, and the exchange rates between the different regions. The superficial velocity of the gas through the reactor has a large effect on parameter values. Increased superficial velocity tends to decrease dead region volumes, interregion transfer rates, and axial dispersion. The significant deviation from CSTR, plug flow, and plug flow with axial dispersion of the residence time distribution of gas within small-scale reactors can lead to underestimation of the calculation of mass and heat transfer coefficients and hence has implications for reactor design and scaleup. (C) 2001 John Wiley & Sons, Inc.
Abstract:
The majority of past and current individual-tree growth modelling methodologies have failed to characterise and incorporate structured stochastic components. Rather, they have relied on deterministic predictions or have added an unstructured random component to predictions. In particular, spatial stochastic structure has been neglected, despite being present in most applications of individual-tree growth models. Spatial stochastic structure (also called spatial dependence or spatial autocorrelation) eventuates when spatial influences such as competition and micro-site effects are not fully captured in models. Temporal stochastic structure (also called temporal dependence or temporal autocorrelation) eventuates when a sequence of measurements is taken on an individual tree over time, and variables explaining temporal variation in these measurements are not included in the model. Nested stochastic structure eventuates when measurements are combined across sampling units and differences among the sampling units are not fully captured in the model. This review examines spatial, temporal, and nested stochastic structure and instances where each has been characterised in the forest biometry and statistical literature. Methodologies for incorporating stochastic structure in growth model estimation and prediction are described. Benefits from incorporation of stochastic structure include valid statistical inference, improved estimation efficiency, and more realistic and theoretically sound predictions. It is proposed in this review that individual-tree modelling methodologies need to characterise and include structured stochasticity. Possibilities for future research are discussed. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
The small sample performance of Granger causality tests under different model dimensions, degree of cointegration, direction of causality, and system stability are presented. Two tests based on maximum likelihood estimation of error-correction models (LR and WALD) are compared to a Wald test based on multivariate least squares estimation of a modified VAR (MWALD). In large samples all test statistics perform well in terms of size and power. For smaller samples, the LR and WALD tests perform better than the MWALD test. Overall, the LR test outperforms the other two in terms of size and power in small samples.
Abstract:
When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
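The augmentation idea can be sketched as follows: treat the time-varying constraint as an extra, (numerically) noise-free "observation" stacked under the data equation, then run an otherwise standard Kalman measurement update. This is a schematic illustration only; the paper's full state-space form and the numerical optimisation of its hyperparameters are not reproduced:

```python
import numpy as np

def kf_update(beta, P, H, y, R):
    # standard Kalman measurement update for y = H beta + v, v ~ N(0, R)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    beta = beta + K @ (y - H @ beta)
    P = P - K @ H @ P
    return beta, P

def constrained_update(beta, P, x_t, y_t, sigma2, c_t, r_t):
    # augment the observation equation with the time-varying linear
    # constraint c_t' beta = r_t, given essentially zero observation noise
    H = np.vstack([x_t, c_t])
    y = np.array([y_t, r_t])
    R = np.diag([sigma2, 1e-12])   # tiny variance keeps S invertible
    return kf_update(beta, P, H, y, R)
```

Because the constraint row carries (near-)zero noise, each filtered estimate satisfies the current period's restriction almost exactly while the data row updates the coefficients in the usual way.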
Abstract:
This article examines the efficiency of the National Football League (NFL) betting market. The standard ordinary least squares (OLS) regression methodology is replaced by a probit model. This circumvents potential econometric problems, and allows us to implement more sophisticated betting strategies where bets are placed only when there is a relatively high probability of success. In-sample tests indicate that probit-based betting strategies generate statistically significant profits. Whereas the profitability of a number of these betting strategies is confirmed by out-of-sample testing, there is some inconsistency among the remaining out-of-sample predictions. Our results also suggest that widely documented inefficiencies in this market tend to dissipate over time.
Abstract:
Objectives. To determine the effect of ion exchange on slow crack growth (SCG) parameters (n, stress corrosion susceptibility coefficient, and sigma(f0), scaling parameter) and Weibull parameters (m, Weibull modulus, and sigma(0), characteristic strength) of a dental porcelain. Methods. 160 porcelain discs were fabricated according to the manufacturer's instructions, polished through 1 μm and divided into two groups: GC (control) and GI (submitted to an ion exchange procedure using a KNO(3) paste at 470 degrees C for 15 min). SCG parameters were determined by the biaxial flexural strength test in artificial saliva at 37 degrees C using five constant stress rates (n = 10). 20 specimens of each group were tested at 1 MPa/s to determine the Weibull parameters. The SPT diagram was constructed using the least-squares fit of the strength data versus probability of failure. Results. Mean values of m and sigma(0) (95% confidence interval), n and sigma(f0) (standard deviation) were, respectively: 13.8 (10.1-18.8) and 60.4 (58.5-62.2), 24.1 (2.5) and 58.1 (0.01) for GC, and 7.4 (5.3-10.0) and 136.8 (129.1-144.7), 36.7 (7.3) and 127.9 (0.01) for GI. Fracture stresses (MPa) calculated using the SPT diagram for lifetimes of 1 day, 1 year and 10 years (at a 5% failure probability) were, respectively, 31.8, 24.9 and 22.7 for GC and 71.2, 60.6 and 56.9 for GI. Significance. For the porcelain tested, the ion exchange process improved strength and resistance to SCG; however, the material's reliability decreased. The predicted fracture stress at 5% failure probability for a lifetime of 10 years was also higher for the ion-treated group. (C) 2009 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
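The least-squares strength-versus-failure-probability fit behind the Weibull parameters can be sketched as follows, using the usual linearization ln(-ln(1 - F)) = m ln(sigma) - m ln(sigma0) with median-rank plotting positions. This is the generic estimator, not the authors' exact SPT construction:

```python
import math

def weibull_ls(strengths):
    # least-squares Weibull fit on the linearized form
    #   ln(-ln(1 - F_i)) = m * ln(sigma_i) - m * ln(sigma0)
    s = sorted(strengths)
    N = len(s)
    xs = [math.log(v) for v in s]
    ys = [math.log(-math.log(1.0 - (i + 0.5) / N)) for i in range(N)]
    # ordinary least squares for slope m and intercept b
    xbar, ybar = sum(xs) / N, sum(ys) / N
    m = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    b = ybar - m * xbar
    sigma0 = math.exp(-b / m)     # y = 0 at sigma = sigma0
    return m, sigma0
```

A higher Weibull modulus m means a narrower strength distribution, which is why the drop from 13.8 (GC) to 7.4 (GI) in the abstract is read as reduced reliability despite the higher characteristic strength.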
Abstract:
The microwave and thermal cure processes for the epoxy-amine systems N,N,N',N'-tetraglycidyl-4,4'-diaminodiphenyl methane (TGDDM) with diaminodiphenyl sulfone (DDS) and diaminodiphenyl methane (DDM) have been investigated. The DDS system was studied at a single cure temperature of 433 K and a single stoichiometry of 27 wt%, and the DDM system was studied at two stoichiometries, 19 and 32 wt%, and a range of temperatures between 373 and 413 K. The best values of the kinetic rate parameters for the consumption of amines have been determined by a least squares curve fit to a model for epoxy-amine cure. The activation energies for the rate parameters of the MY721/DDM system were determined, as was the overall activation energy for the cure reaction, which was found to be 62 kJ mol(-1). No evidence was found for any specific effect of the microwave radiation on the rate parameters, and both systems were found to be characterized by a negative substitution effect. Copyright (C) 2001 John Wiley & Sons, Ltd.
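The activation-energy extraction can be illustrated with a standard Arrhenius least-squares fit of ln k against 1/T (synthetic rate constants for illustration; the cure-kinetics model itself is not reproduced):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy(temps_K, rate_constants):
    # Arrhenius: ln k = ln A - Ea / (R T); the least-squares slope of
    # ln k versus 1/T is -Ea/R
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rate_constants]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R  # Ea in J/mol
```

Fitting rate constants measured at several cure temperatures in this way yields the overall activation energy, reported in the abstract as 62 kJ mol(-1).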
Abstract:
This paper examines the trade relationship between the Gulf Cooperation Council (GCC) and the European Union (EU). A simultaneous equation regression model is developed and estimated to assist with the analysis. The regression results, using both the two stage least squares (2SLS) and ordinary least squares (OLS) estimation methods, reveal the existence of feedback effects between the two economic integrations. The results also show that during times of slack in oil prices, the GCC income from its investments overseas helped to finance its imports from the EU.
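The 2SLS estimator used above can be sketched in a few lines: project the endogenous regressors onto the instruments, then run OLS on the fitted values. This is a generic single-equation illustration with one endogenous regressor and one instrument, not the paper's trade model:

```python
import numpy as np

def ols(X, y):
    # ordinary least squares coefficients
    return np.linalg.lstsq(X, y, rcond=None)[0]

def tsls(y, X_endog, Z):
    # two-stage least squares:
    #   stage 1: regress the endogenous regressors on the instruments Z
    #   stage 2: OLS of y on the stage-1 fitted values
    X_hat = Z @ ols(Z, X_endog)
    return ols(X_hat, y)
```

When a regressor is correlated with the error term, as in a simultaneous system, OLS is biased while 2SLS remains consistent provided the instruments are relevant and exogenous.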