920 results for Multivariate curve resolution-alternating least squares


Relevance: 100.00%

Abstract:

Three new bimetallic oxamato-based magnets with the proligand 4,5-dimethyl-1,2-phenylenebis(oxamato) (dmopba) were synthesized using water or dimethylsulfoxide (DMSO) as solvent. Single-crystal X-ray diffraction provided structures for two of them: [MnCu(dmopba)(H2O)3]n·4nH2O (1) and [MnCu(dmopba)(DMSO)3]n·nDMSO (2). The crystalline structures of both 1 and 2 consist of linearly ordered oxamato-bridged Mn(II)Cu(II) bimetallic chains. The magnetic characterization revealed behaviour typical of ferrimagnetic chains for 1 and 2. Least-squares fits of the experimental magnetic data over the 300-20 K temperature range led to J(MnCu) = -27.9 cm-1, g(Cu) = 2.09 and g(Mn) = 1.98 for 1, and J(MnCu) = -30.5 cm-1, g(Cu) = 2.09 and g(Mn) = 2.02 for 2 (H = -J(MnCu) Σ_i S(Mn,i)·(S(Cu,i) + S(Cu,i-1))). The two-dimensional ferrimagnetic system [Me4N]2n{Co2[Cu(dmopba)]3}·4nDMSO·nH2O (3) was prepared by reaction of Co(II) ions with an excess of [Cu(dmopba)]2- in DMSO. The study of the temperature dependence of the magnetic susceptibility, as well as the temperature and field dependences of the magnetization, revealed cluster-glass-like behaviour for 3.
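The J and g values quoted above come from least-squares fits of magnetic data. A minimal sketch of such a fit with scipy, using a purely illustrative χT model and synthetic data (the actual Mn(II)Cu(II) chain susceptibility expression is more involved than this stand-in):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative chi*T model with exchange coupling J and an average g factor;
# this is NOT the real ferrimagnetic-chain expression, just a fit demo.
def chiT_model(T, J, g):
    return g**2 * (4.375 + 0.4375 * np.exp(-J / T))

T = np.linspace(20.0, 300.0, 50)          # the 300-20 K fit window
rng = np.random.default_rng(0)
chiT_obs = chiT_model(T, -28.0, 2.0) + rng.normal(0.0, 0.01, T.size)

(J_fit, g_fit), _ = curve_fit(chiT_model, T, chiT_obs, p0=(-20.0, 2.1))
print(J_fit, g_fit)  # close to the synthetic "true" J = -28, g = 2.0
```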

Relevance: 100.00%

Abstract:

This article analyzes the Brazilian political system from the local perspective. Following Cox (1997), we review the problems with electoral coordination that emerge from a given institutional framework. Due to the characteristics of the Brazilian Federal system and its electoral rules, linkage between the three levels of government is not guaranteed a priori, but demands a coordinating effort by the parties' leadership. According to our hypothesis, the parties are capable of coordinating their election strategies at different levels in the party system. Regression models based on two-stage least squares (2SLS) and TOBIT, analyzing a panel of Brazilian municipalities with data from the 1994 and 2000 elections, show that the proportion of votes received by a party in a given election correlates closely with its previous votes in majoritarian elections. Despite institutional incentives, the Brazilian party system shows evidence that it is organized nationally to the extent that it links the competition for votes at the three levels of government (National, State, and Municipal).
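The 2SLS estimator used in the study proceeds in two explicit regression stages; a minimal numeric sketch on synthetic data (the variable roles below — z as instrument, x as endogenous regressor — are illustrative only, not the study's actual specification):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # unobserved confounder
x = 2.0 * z + u + rng.normal(size=n)      # endogenous regressor
y = 1.5 * x + u + rng.normal(size=n)      # outcome; true effect is 1.5

# Stage 1: regress x on the instrument, keep fitted values
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress y on the stage-1 fitted values
Xh = np.column_stack([np.ones(n), x_hat])
beta = np.linalg.lstsq(Xh, y, rcond=None)[0]
print(beta[1])  # consistent estimate of 1.5; plain OLS of y on x is biased
```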

Relevance: 100.00%

Abstract:

Soil bulk density values are needed to convert organic carbon content to mass of organic carbon per unit area. However, field sampling and measurement of soil bulk density are labour-intensive, costly and tedious. Near-infrared reflectance spectroscopy (NIRS) is a physically non-destructive, rapid, reproducible and low-cost method that characterizes materials according to their reflectance in the near-infrared spectral region. The aim of this paper was to investigate the ability of NIRS to predict soil bulk density and to compare its performance with published pedotransfer functions. The study was carried out on a dataset of 1184 soil samples originating from a reforestation area in the Brazilian Amazon basin; conventional soil bulk density values were obtained with metallic "core cylinders". The results indicate that modified partial least squares regression on the spectral data is an alternative to the published pedotransfer functions tested in this study for predicting soil bulk density. The NIRS method presented the closest-to-zero accuracy error (-0.002 g cm-3) and the lowest prediction error (0.13 g cm-3), and the coefficient of variation of the validation sets ranged from 8.1 to 8.9% of the mean reference values. Further research is required to assess the limits and specificities of the NIRS method, but it may have advantages for soil bulk density prediction, especially in environments such as the Amazon forest.
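Partial least squares builds latent components that maximize covariance between spectra and the response. A minimal NIPALS PLS1 sketch on synthetic data — not the modified PLS variant the study actually used, and the "spectra" below are random stand-ins:

```python
import numpy as np

def pls1(X, y, n_comp=5):
    """Minimal NIPALS PLS1: returns regression coefficients for centered data."""
    Xc, yc = X - X.mean(0), y - y.mean()
    Xk = Xc.copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yc
        w /= np.linalg.norm(w)           # weight vector
        t = Xk @ w                       # score
        p = Xk.T @ t / (t @ t)           # loading
        W.append(w); P.append(p); q.append(yc @ t / (t @ t))
        Xk = Xk - np.outer(t, p)         # deflate X
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                  # 100 "spectra", 20 "wavelengths"
b_true = np.zeros(20); b_true[:3] = [0.5, -0.3, 0.2]
y = X @ b_true + rng.normal(0.0, 0.01, 100)     # synthetic "bulk density"
b = pls1(X, y)
pred = (X - X.mean(0)) @ b + y.mean()
```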

Relevance: 100.00%

Abstract:

Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of wavelet basis on the quality of the features extracted; four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. The choice of kernel function and parameter value, as well as the choice of feature extractor, are critical decisions, although the choice of wavelet family seems less influential. The statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
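Of the machines compared, the least squares SVM is the simplest to sketch, since training reduces to one linear solve (Suykens' formulation) instead of a quadratic program. A toy example with a Gaussian RBF kernel; the 2-D blobs below merely stand in for EEG feature vectors:

```python
import numpy as np

def rbf(Xa, Xb, sigma=1.0):
    d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM classifier: solve one (n+1)x(n+1) linear system for (bias, alphas)."""
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = np.outer(y, y) * K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
    return sol[0], sol[1:]

def lssvm_predict(X, y, alpha, b, Xq, sigma=1.0):
    return np.sign(rbf(Xq, X, sigma) @ (alpha * y) + b)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
b, alpha = lssvm_fit(X, y)
acc = (lssvm_predict(X, y, alpha, b, X) == y).mean()
```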

Relevance: 100.00%

Abstract:

This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1, the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, Stage 2 estimates the suspicious parameters in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-theta state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2 and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach was confirmed by tests performed on the Hydro-Quebec TransEnergie network.
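The conventional weighted least squares estimator of Stage 3, and the normalized residuals behind the Identification Index, can be sketched for a generic linear measurement model. The Jacobian H below is random, standing in for a real network model:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3))            # measurement Jacobian (hypothetical network)
x_true = np.array([1.0, -0.5, 0.25])   # "true" states
sigma = np.full(8, 0.01)               # measurement standard deviations
z = H @ x_true + rng.normal(0.0, sigma)

W = np.diag(1.0 / sigma**2)            # weights = inverse error variances
G = H.T @ W @ H                        # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)

# Normalized residuals, as used in bad-data / parameter-error tests
r = z - H @ x_hat
Omega = np.diag(sigma**2) - H @ np.linalg.solve(G, H.T)  # residual covariance
r_norm = np.abs(r) / np.sqrt(np.diag(Omega))
```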

Relevance: 100.00%

Abstract:

The applicability of a meshfree approximation method, namely the EFG method, to fully geometrically exact analysis of plates is investigated. Based on a unified nonlinear theory of plates, which allows for arbitrarily large rotations and displacements, a Galerkin approximation via MLS functions is established. A hybrid method of analysis is proposed, where the solution is obtained by the independent approximation of the generalized internal displacement fields and the generalized boundary tractions. A consistent linearization procedure is performed, resulting in a semi-definite generalized tangent stiffness matrix which, for hyperelastic materials and conservative loadings, is always symmetric (even for configurations far from the generalized equilibrium trajectory). Besides the total Lagrangian formulation, an updated version is also presented, which enables the treatment of rotations beyond the parameterization limit. An extension of the arc-length method that includes the generalized domain displacement fields, the generalized boundary tractions, and the load parameter in the constraint equation of the hyper-ellipse is proposed to solve the resulting nonlinear problem. Extending the hybrid-displacement formulation, a multi-region decomposition is proposed to handle complex geometries. A criterion for classifying the stability of equilibria, based on analysis of the bordered Hessian matrix, is suggested. Several numerical examples are presented, illustrating the effectiveness of the method. Differently from standard finite element methods (FEM), the resulting solutions are (arbitrarily) smooth generalized displacement and stress fields. (c) 2007 Elsevier Ltd. All rights reserved.
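The MLS functions at the heart of the EFG method can be illustrated in 1-D: at each evaluation point a small weighted least squares fit is solved, and a linear basis reproduces linear fields exactly. The Wendland-type weight below is one common choice, not necessarily the paper's:

```python
import numpy as np

def mls_value(x, nodes, u, radius=0.3):
    """1-D moving least squares with a linear basis and a compact C2 weight."""
    r = np.abs(x - nodes) / radius
    w = np.where(r < 1, (1 - r) ** 4 * (4 * r + 1), 0.0)  # Wendland C2 weight
    P = np.column_stack([np.ones_like(nodes), nodes])     # linear basis [1, x]
    A = P.T @ (w[:, None] * P)                            # moment matrix
    coef = np.linalg.solve(A, P.T @ (w * u))
    return coef[0] + coef[1] * x

nodes = np.linspace(0.0, 1.0, 21)
u = 2.0 * nodes + 1.0                 # sample a linear field
print(mls_value(0.5, nodes, u))       # ≈ 2.0: linear fields are reproduced exactly
```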

Relevance: 100.00%

Abstract:

In the present work, the sensitivity of NIR spectroscopy toward the evolution of particle size was studied during emulsion homopolymerization of styrene (Sty) and emulsion copolymerization of vinyl acetate-butyl acrylate, conducted in a semibatch stirred tank and a tubular pulsed sieve plate reactor, respectively. All NIR spectra were collected online with a transflectance probe immersed in the reaction medium. The spectral range used for the NIR monitoring was 9,500 to 13,000 cm-1, where the absorbance of the chemical components present is minimal and the changes in the NIR spectrum can be ascribed to the effects of light scattering by the polymer particles. Off-line measurements of the average diameter of the polymer particles by DLS were used as reference values for the development of the multivariate NIR calibration models based on partial least squares. Results indicated that, in the spectral range studied, it is possible to monitor the evolution of the average size of the polymer particles during emulsion polymerization reactions. The inclusion in the calibration models of an additional spectral range, 5,701 to 6,447 cm-1, containing information on absorbances ("chemical information"), was also evaluated.

Relevance: 100.00%

Abstract:

We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The analysis of transient and steady-state performances at each individual node within the network is developed by using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also an improved steady-state performance as compared to an LMS-based scheme. In addition, the new approach attains an acceptable misadjustment performance with lower computational and memory cost, provided the number of regressor vectors and filter length parameters are appropriately chosen, as compared to a distributed recursive-least-squares (RLS) based method.
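A single-node APA update reuses the last K regressor vectors at each step, which is what helps with colored inputs. A minimal sketch (system identification with an AR(1) input; step size, order, and regularization values are illustrative):

```python
import numpy as np

def apa_filter(x, d, M=8, K=4, mu=0.5, eps=1e-4):
    """Affine projection adaptive filter: project onto the last K regressors."""
    w = np.zeros(M)
    for n in range(M + K, len(x)):
        # K most recent length-M regressor vectors, stacked as rows of U
        U = np.array([x[n - k - np.arange(M)] for k in range(K)])
        e = d[n - np.arange(K)] - U @ w
        w = w + mu * U.T @ np.linalg.solve(U @ U.T + eps * np.eye(K), e)
    return w

rng = np.random.default_rng(0)
h = rng.normal(size=8)                        # unknown system
x = np.zeros(2000)                            # colored AR(1) input --
for n in range(1, 2000):                      # the case where APA beats LMS
    x[n] = 0.9 * x[n - 1] + rng.normal()
d = np.convolve(x, h)[:2000] + 0.001 * rng.normal(size=2000)
w = apa_filter(x, d)
```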

Relevance: 100.00%

Abstract:

In Part I ["Fast Transforms for Acoustic Imaging-Part I: Theory," IEEE TRANSACTIONS ON IMAGE PROCESSING], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix which would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming, and 2-D convolutions, and show how these results can be used to accelerate classical and state-of-the-art array imaging algorithms. We also propose using the KAT to accelerate general-purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while having the computational advantages of the KAT.
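The computational advantage of separable (Kronecker) structure rests on the identity that a Kronecker-matrix product can be evaluated with two small matrix multiplications instead of one huge one. A quick numerical check, using numpy's row-major flattening convention:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 30))
B = rng.normal(size=(30, 30))
X = rng.normal(size=(30, 30))

# Direct route: form the full 900x900 Kronecker matrix (O(n^4) memory and work)
y_direct = np.kron(A, B) @ X.reshape(-1)

# Separable route: never forms the big matrix (O(n^3) work)
y_fast = (A @ X @ B.T).reshape(-1)

print(np.allclose(y_direct, y_fast))  # True
```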

Relevance: 100.00%

Abstract:

As is well known, Hessian-based adaptive filters (such as the recursive least-squares (RLS) algorithm for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms (such as the least-mean-squares (LMS) algorithm or the constant-modulus algorithm (CMA)). However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms from different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error of convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
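The convex-combination idea can be sketched for one supervised pair (an LMS branch and an RLS branch) with the usual sigmoid-parameterized mixing weight adapted by stochastic gradient on the combined error. Step sizes and the stationary (random-walk-free) setup below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 3000
h = rng.normal(size=M)                         # unknown system
x = rng.normal(size=N)
d = np.convolve(x, h)[:N] + 0.01 * rng.normal(size=N)

w1 = np.zeros(M)                               # LMS branch
w2 = np.zeros(M); P = 1e3 * np.eye(M)          # RLS branch
a = 0.0                                        # mixing state, lambda = sigmoid(a)
for n in range(M, N):
    u = x[n - np.arange(M)]
    y1, y2 = u @ w1, u @ w2
    lam = 1.0 / (1.0 + np.exp(-a))
    e = d[n] - (lam * y1 + (1 - lam) * y2)     # combined error
    w1 += 0.05 * (d[n] - y1) * u               # LMS update (its own error)
    k = P @ u / (0.999 + u @ P @ u)            # RLS gain, forgetting 0.999
    w2 += k * (d[n] - y2)
    P = (P - np.outer(k, u @ P)) / 0.999
    a += 1.0 * e * (y1 - y2) * lam * (1 - lam) # adapt the mixture
```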

Relevance: 100.00%

Abstract:

Data from 9 studies were compiled to evaluate the effects of 20 yr of selection for postweaning weight (PWW) on carcass characteristics and meat quality in experimental herds of control Nellore (NeC) and selected Nellore (NeS), Caracu (CaS), Guzerah (GuS), and Gir (GiS) breeds. These studies were conducted with animals from a genetic selection program at the Experimental Station of Sertaozinho, Sao Paulo State, Brazil. After the performance test (168 d postweaning), bulls (n = 490) from the calf crops born between 1992 and 2000 were finished and slaughtered to evaluate carcass traits and meat quality. Treatments were different across studies. A meta-analysis was conducted with a random coefficients model in which herd was considered a fixed effect and treatments within year and year were considered as random effects. Either calculated maturity degree or initial BW was used interchangeably as the covariate, and least squares means were used in the multiple-comparison analysis. The CaS and NeS had heavier (P = 0.002) carcasses than the NeC and GiS; GuS were intermediate. The CaS had the longest carcass (P < 0.001) and heaviest spare ribs (P < 0.001), striploin (P < 0.001), and beef plate (P = 0.013). Although the body, carcass, and quarter weights of NeS were similar to those of CaS, NeS had more edible meat in the leg region than did CaS bulls. Selection for PWW increased rib-eye area in Nellore bulls. Selected Caracu had the lowest (most favorable) shear force values compared with the NeS (P = 0.003), NeC (P = 0.005), GuS (P = 0.003), and GiS (P = 0.008). Selection for PWW increased body, carcass, and meat retail weights in the Nellore without altering dressing percentage and body fat percentage.
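Least squares means, as used in the comparisons above, are group predictions from the fitted model evaluated at a common covariate value (here the overall covariate mean). A small ANCOVA sketch with one hypothetical covariate standing in for initial BW; the numbers are synthetic, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
herd = rng.integers(0, 3, n)                  # 3 hypothetical herds
bw = rng.normal(370, 20, n) + 10.0 * herd     # covariate confounded with herd
y = 250 + 5.0 * herd + 0.5 * (bw - 370) + rng.normal(0, 2, n)

# ANCOVA design: intercept, herd dummies (herd 0 as reference), covariate
D = np.column_stack([np.ones(n), herd == 1, herd == 2, bw - 370.0])
beta = np.linalg.lstsq(D, y, rcond=None)[0]

# Least squares means: each herd's prediction at the *overall* mean covariate
adj = beta[3] * (bw.mean() - 370.0)
lsmeans = beta[0] + np.array([0.0, beta[1], beta[2]]) + adj
print(lsmeans)  # adjusted herd means; differences reflect herd, not BW
```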

Relevance: 100.00%

Abstract:

Tuberculosis is an infection caused mainly by Mycobacterium tuberculosis. A first-line antimycobacterial drug is pyrazinamide (PZA), which acts partially as a prodrug activated by a pyrazinamidase that releases the active agent, pyrazinoic acid (POA). Because pyrazinoic acid has some difficulty crossing the mycobacterial cell wall, and pyrazinamide-resistant strains do not express the pyrazinamidase, a set of pyrazinoic acid esters has been evaluated as antimycobacterial agents. In this work, a QSAR approach was applied to a set of forty-three pyrazinoates against M. tuberculosis ATCC 27294, using a genetic algorithm function and partial least squares regression (WOLF 5.5 program). The independent variables selected were the Balaban index (I), calculated n-octanol/water partition coefficient (ClogP), van der Waals surface area, dipole moment, and stretching-energy contribution. The final QSAR model (N = 32, r² = 0.68, q² = 0.59, LOF = 0.25, and LSE = 0.19) was fully validated employing leave-N-out cross-validation and y-scrambling techniques. The test set (N = 11) presented an external prediction power of 73%. In conclusion, the QSAR model generated can be used as a valuable tool to optimize the activity of future pyrazinoic acid esters in the design of new antituberculosis agents.
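The q² statistic reported above is the leave-one-out cross-validated counterpart of r². A minimal sketch of its computation, using ordinary least squares as a stand-in for the PLS model and random "descriptors":

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 for a linear least squares model."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        Xi = np.column_stack([np.ones(mask.sum()), X[mask]])
        b = np.linalg.lstsq(Xi, y[mask], rcond=None)[0]
        pred = b[0] + X[i] @ b[1:]          # predict the held-out compound
        press += (y[i] - pred) ** 2
    return 1.0 - press / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))                # 3 hypothetical descriptors
y = X @ np.array([1.0, -0.5, 0.3]) + rng.normal(0.0, 0.1, 40)
q2 = loo_q2(X, y)
```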

Relevance: 100.00%

Abstract:

Histamine is an important biogenic amine, which acts on a group of four G-protein-coupled receptors (GPCRs), namely the H1 to H4 (H1R-H4R) receptors. The actions of histamine at H4R are related to immunological and inflammatory processes, particularly in the pathophysiology of asthma, and H4R ligands with antagonistic properties could be helpful as anti-inflammatory agents. In this work, molecular modeling and QSAR studies of a set of 30 compounds, indole and benzimidazole derivatives, as H4R antagonists were performed. The QSAR models were built and optimized using a genetic algorithm function and partial least squares regression (WOLF 5.5 program). The best QSAR model, constructed with the training set (N = 25), presented the following statistical measures: r² = 0.76, q² = 0.62, LOF = 0.15, and LSE = 0.07, and was validated using the LNO and y-randomization techniques. Four of the five compounds of the test set were well predicted by the selected QSAR model, which presented an external prediction power of 80%. These findings can be quite useful in the design of new anti-H4 compounds with improved biological response.

Relevance: 100.00%

Abstract:

OctVCE is a Cartesian cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries, in particular from explosions. Virtual Cell Embedding (VCE) was chosen as its Cartesian cell kernel for its simplicity and sufficiency for practical engineering design problems. The code uses a finite-volume formulation of the unsteady Euler equations with a second-order explicit Runge-Kutta Godunov (MUSCL) scheme. Gradients are calculated using a least-squares method with a minmod limiter. Flux solvers used are AUSM, AUSMDV and EFM. No fluid-structure coupling or chemical reactions are allowed, but gas models can be perfect gas and JWL or JWLB for the explosive products. This report also describes the code's 'octree' mesh adaptive capability and point-inclusion query procedures for the VCE geometry engine. Finally, some space is devoted to describing code parallelization using the shared-memory OpenMP paradigm. The user manual for the code is to be found in the companion report 2007/13.
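The minmod limiter mentioned above clips reconstructed slopes so the MUSCL scheme creates no new extrema at discontinuities. A 1-D sketch (the code's actual least-squares gradient stencil is multidimensional; this shows only the limiting step):

```python
import numpy as np

def minmod(a, b):
    """Return the smaller-magnitude slope, or zero when the signs differ."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

# One-sided slopes of a step profile on a uniform grid (dx = 1)
u = np.array([0.0, 0.0, 1.0, 1.0])
left = u[1:-1] - u[:-2]          # backward differences
right = u[2:] - u[1:-1]          # forward differences
slopes = minmod(left, right)
print(slopes)  # [0. 0.] -- the limiter suppresses slopes across the jump
```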

Relevance: 100.00%

Abstract:

The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with the isotherm data fitted by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solutions is also developed, which corrects an initial solution containing some negativity. The technique yields stable and converged solutions, and is implemented in a package, RIDFEC. The package is demonstrated to be robust, yielding results that are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
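The regularized, non-negative least squares problem at the core of this approach can be sketched by augmenting the discretized kernel with a Tikhonov penalty and solving with scipy's NNLS. The kernel and distribution below are synthetic stand-ins, not the adsorption model or the RIDFEC algorithm itself:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic problem: isotherm data b = A f, where f is the pore size distribution
r = np.linspace(0.1, 1.0, 30)                      # pore radius grid
p = np.linspace(0.05, 1.0, 40)                     # "pressure" points
A = np.exp(-(p[:, None] - r[None, :]) ** 2 / 0.02) # hypothetical smooth kernel
f_true = np.exp(-(r - 0.5) ** 2 / 0.005)           # single-mode distribution
b = A @ f_true + 1e-3 * np.random.default_rng(0).normal(size=p.size)

# Tikhonov regularization via augmentation, non-negativity enforced by NNLS:
# minimize ||A f - b||^2 + lam * ||f||^2  subject to  f >= 0
lam = 1e-3
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(r.size)])
b_aug = np.concatenate([b, np.zeros(r.size)])
f, _ = nnls(A_aug, b_aug)
```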