905 results for partial least-squares regression


Relevance:

100.00%

Publisher:

Abstract:

Context. The presence of pulsations in late-type Be stars is still a matter of controversy, and it is an important issue for establishing the relationship between non-radial pulsations and the mass-loss mechanism in Be stars. Aims. To contribute to this discussion, we analyse the photometric time series of the B8IVe star HD 50 209 observed by the CoRoT mission in its seismology field. Methods. We use standard Fourier techniques and linear and non-linear least-squares fitting methods to analyse the CoRoT light curve. In addition, we apply detailed modelling of high-resolution spectra to obtain the fundamental physical parameters of the star. Results. We find four frequencies corresponding to gravity modes with azimuthal orders m = 0, -1, -2, -3 that share the same pulsational frequency in the co-rotating frame. We also find a rotational period with a frequency of 0.679 c/d (7.754 µHz). Conclusions. HD 50 209 is a pulsating Be star, as expected from its position in the HR diagram, close to the SPB instability strip.
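The linear least-squares step of such a frequency analysis can be sketched in a few lines: once a candidate frequency is picked from the Fourier spectrum, the amplitude and phase follow from an ordinary least-squares fit of a sinusoid. This is an illustrative sketch on synthetic data (the 0.679 c/d value is reused only as the trial frequency), not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 30.0, 2000)            # time in days (synthetic)
f = 0.679                                   # trial frequency in cycles/day
y = 1.0 + 0.01 * np.sin(2 * np.pi * f * t + 0.3) \
    + rng.normal(0.0, 0.002, t.size)        # fake normalized light curve

# Linear least squares for y ~ c + a*sin(2*pi*f*t) + b*cos(2*pi*f*t)
A = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * f * t),
                     np.cos(2 * np.pi * f * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
c, a, b = coef
amp = np.hypot(a, b)                        # recovered amplitude
phase = np.arctan2(b, a)                    # recovered phase
```

At a fixed trial frequency the problem is linear, so no iterative non-linear solver is needed; scanning frequencies and refining the best one is where the non-linear fit comes in.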

Least-squares collocation is a mathematical technique used in geodesy to represent the Earth's anomalous gravity field from data that are heterogeneous in type and precision. Using this technique to represent the gravity field requires the statistical characteristics of the data, expressed through a covariance function. The covariances reflect the behavior of the gravity field in magnitude and roughness. From a statistical point of view, the covariance function represents the statistical dependence among quantities of the gravity field at distinct points or, in other words, their tendency to have the same magnitude and the same sign. Determining the covariance functions is necessary both to describe the behavior of the gravity field and to evaluate its functionals. This paper presents the results of a study on plane and spherical covariance functions for determining gravimetric geoid models.
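The prediction step of least-squares collocation can be sketched compactly: given a covariance model C(d), the signal at a new point is C_st (C_tt + D)^-1 l. Everything below (the Gaussian covariance model, the variance C0, the correlation length L, and the observation values) is an illustrative assumption, not data from the study:

```python
import numpy as np

def cov(d, c0=100.0, L=50.0):
    # Isotropic Gaussian covariance model (illustrative C0 in mGal^2 and
    # correlation length L in km) describing how gravity-field quantities
    # at two points co-vary as a function of their separation d.
    return c0 * np.exp(-(d / L) ** 2)

x_obs = np.array([0.0, 20.0, 40.0, 80.0])     # observation sites (km)
l_obs = np.array([12.0, 9.5, 4.0, -1.0])      # observed anomalies (mGal)
noise_var = 1.0                               # observation noise variance D

# Collocation prediction: s_hat = C_st (C_tt + D)^-1 l
C_tt = cov(np.abs(x_obs[:, None] - x_obs[None, :]))
x_new = 30.0
C_st = cov(np.abs(x_new - x_obs))
s_hat = C_st @ np.linalg.solve(C_tt + noise_var * np.eye(len(x_obs)), l_obs)
```

The covariance model is exactly where the "magnitude and roughness" of the field enter: C0 sets the variance and L the smoothness of the predicted signal.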

The title compound, C13H12N4O, crystallizes with two independent molecules in the asymmetric unit. The compound crystallizes as the ZE isomer, where Z and E refer to the configurations around the C=N and N-C bonds, respectively, with an intramolecular N-H⋯N(py) (py is pyridine) hydrogen bond. The dihedral angles between the least-squares planes through the semicarbazone group and the pyridyl ring are 22.70 (9) and 27.26 (9)° for the two molecules. There are also intermolecular N-H⋯O hydrogen bonds.
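A least-squares plane through a group of atoms, and the dihedral angle between two such planes, can be computed with a singular value decomposition of the centred coordinates. The coordinates below are synthetic (a flat hexagon and a copy rotated by 25°), purely to illustrate the computation:

```python
import numpy as np

def plane_normal(points):
    # Unit normal of the least-squares plane through 3-D points: the
    # right singular vector of the centred coordinates with the
    # smallest singular value.
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    return vt[-1]

def dihedral_deg(points_a, points_b):
    n1, n2 = plane_normal(points_a), plane_normal(points_b)
    cos_t = abs(np.dot(n1, n2))          # plane-plane angle lies in [0, 90]
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Two synthetic, planar atom groups (coordinates are illustrative)
ring = np.array([[1.0, 0.0, 0.0], [0.5, 0.87, 0.0],
                 [-0.5, 0.87, 0.0], [-1.0, 0.0, 0.0],
                 [-0.5, -0.87, 0.0], [0.5, -0.87, 0.0]])
# The same hexagon rotated 25 degrees about the x-axis
th = np.radians(25.0)
R = np.array([[1, 0, 0],
              [0, np.cos(th), -np.sin(th)],
              [0, np.sin(th), np.cos(th)]])
tilted = ring @ R.T
angle = dihedral_deg(ring, tilted)
```

For exactly planar groups the recovered angle equals the rotation applied; for real, slightly puckered groups the SVD plane is the least-squares best fit.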

Three new bimetallic oxamato-based magnets with the proligand 4,5-dimethyl-1,2-phenylenebis(oxamato) (dmopba) were synthesized using water or dimethylsulfoxide (DMSO) as solvent. Single-crystal X-ray diffraction provided structures for two of them: [MnCu(dmopba)(H₂O)₃]ₙ·4nH₂O (1) and [MnCu(dmopba)(DMSO)₃]ₙ·nDMSO (2). The crystalline structures of both 1 and 2 consist of linearly ordered oxamato-bridged Mn(II)Cu(II) bimetallic chains. The magnetic characterization revealed behaviour typical of ferrimagnetic chains for 1 and 2. Least-squares fits of the experimental magnetic data over the 300-20 K temperature range led to J(MnCu) = -27.9 cm⁻¹, g(Cu) = 2.09 and g(Mn) = 1.98 for 1, and J(MnCu) = -30.5 cm⁻¹, g(Cu) = 2.09 and g(Mn) = 2.02 for 2 [H = -J(MnCu) Σᵢ S(Mn,i)·(S(Cu,i) + S(Cu,i-1))]. The two-dimensional ferrimagnetic system [Me₄N]₂ₙ{Co₂[Cu(dmopba)]₃}·4nDMSO·nH₂O (3) was prepared by reaction of Co(II) ions with an excess of [Cu(dmopba)]²⁻ in DMSO. The temperature dependence of the magnetic susceptibility, together with the temperature and field dependences of the magnetization, revealed cluster-glass-like behaviour for 3.

Objective: We carry out a systematic assessment of a suite of kernel-based learning machines applied to epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least-squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values of the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated standard and least-squares SVM models reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, from which one can visually inspect their sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least-squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be as relevant. The statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
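The least-squares SVM mentioned above replaces the SVM's inequality constraints with equalities, so training reduces to solving one linear system. The sketch below follows the standard Suykens-style formulation on a toy two-class problem with made-up points (a stand-in for EEG feature vectors); the kernel width and regularization values are arbitrary:

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two point sets
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # Equality constraints turn training into a single linear system
    # in (bias b, coefficients alpha) instead of a quadratic program.
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]

def lssvm_predict(Xnew, Xtr, ytr, b, alpha, sigma=1.0):
    return np.sign(rbf(Xnew, Xtr, sigma) @ (alpha * ytr) + b)

# Toy two-class data (hypothetical feature vectors)
X = np.array([[0.0, 0.0], [0.2, 0.1], [2.0, 2.0], [1.9, 2.2]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_fit(X, y)
preds = lssvm_predict(X, X, y, b, alpha)
```

The trade-off relative to the standard SVM is that all training points get nonzero coefficients, so sparseness is lost in exchange for cheap training.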

This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1, the branches suspected of having parameter errors are identified through an identification index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals exceed a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, Stage 2 estimates the suspicious parameters in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-θ state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2 and is performed via a conventional weighted least-squares estimator. Several simulation results (with IEEE bus systems) demonstrate the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Québec TransÉnergie network.

The applicability of a meshfree approximation method, namely the element-free Galerkin (EFG) method, to fully geometrically exact analysis of plates is investigated. Based on a unified nonlinear theory of plates, which allows for arbitrarily large rotations and displacements, a Galerkin approximation via moving least-squares (MLS) functions is established. A hybrid method of analysis is proposed, in which the solution is obtained by independently approximating the generalized internal displacement fields and the generalized boundary tractions. A consistent linearization procedure is performed, resulting in a semi-definite generalized tangent stiffness matrix which, for hyperelastic materials and conservative loadings, is always symmetric (even for configurations far from the generalized equilibrium trajectory). Besides the total Lagrangian formulation, an updated version is also presented, which enables the treatment of rotations beyond the parameterization limit. To solve the resulting nonlinear problem, an extension of the arc-length method is proposed that includes the generalized domain displacement fields, the generalized boundary tractions, and the load parameter in the constraint equation of the hyper-ellipsis. Extending the hybrid-displacement formulation, a multi-region decomposition is proposed to handle complex geometries. A criterion for classifying the stability of equilibria, based on analysis of the bordered Hessian matrix, is suggested. Several numerical examples are presented, illustrating the effectiveness of the method. Unlike standard finite element methods (FEM), the resulting solutions are (arbitrarily) smooth generalized displacement and stress fields.
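The MLS functions underlying the EFG approximation can be illustrated in one dimension: with a linear basis p(x) = [1, x] and compactly supported weights, the shape functions form a partition of unity and reproduce any linear field exactly. A minimal sketch (node layout, support radius, and weight function are illustrative choices):

```python
import numpy as np

def mls_shape(x, nodes, support=0.6):
    # 1-D moving least-squares shape functions with linear basis
    # p(x) = [1, x] and a compactly supported C1 weight (illustrative
    # sketch of the kind of functions used to build EFG approximations).
    r = np.abs(x - nodes) / support
    w = np.where(r < 1.0, (1.0 - r) ** 2 * (1.0 + 2.0 * r), 0.0)
    P = np.column_stack([np.ones_like(nodes), nodes])   # basis at nodes
    A = (P * w[:, None]).T @ P                          # moment matrix
    p_x = np.array([1.0, x])
    return w * (P @ np.linalg.solve(A, p_x))            # phi_i(x)

nodes = np.linspace(0.0, 1.0, 6)
x0 = 0.37
phi = mls_shape(x0, nodes)
u_nodes = 2.0 + 3.0 * nodes          # a linear field sampled at the nodes
u_x = phi @ u_nodes                  # MLS reproduces linear fields exactly
```

The smoothness of the weight function carries over to the shape functions, which is why the resulting displacement and stress fields are smooth rather than piecewise-polynomial as in standard FEM.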

The classical approach to acoustic imaging consists of beamforming, and produces the source distribution of interest convolved with the array point spread function. This convolution smears the image of interest, significantly reducing its effective resolution. Deconvolution methods have been proposed to enhance acoustic images and have produced significant improvements. Other proposals involve covariance-fitting techniques, which avoid deconvolution altogether. However, in their traditional presentation, these enhanced reconstruction methods have very high computational costs, mostly because they have no means of efficiently transforming back and forth between a hypothetical image and the measured data. In this paper, we propose the Kronecker array transform (KAT), a fast separable transform for array imaging applications. Under the assumption of a separable array, it accelerates imaging techniques by several orders of magnitude with respect to the fastest previously available methods and enables the use of state-of-the-art regularized least-squares solvers. Using the KAT, one can reconstruct images with higher resolution than was previously possible and use more accurate reconstruction techniques, opening new and exciting possibilities for acoustic imaging.
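The separability that makes such a transform fast rests on the Kronecker identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which replaces one huge matrix-vector product with two small matrix products. A sketch of the identity itself, with random matrices standing in for the actual array model:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 7
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
X = rng.standard_normal((n, m))              # "image", stacked as vec(X)

# Naive route: build the full Kronecker product, O((m n)^2) storage
y_full = np.kron(A, B) @ X.flatten(order="F")

# Separable shortcut behind fast transforms of this kind:
# (A kron B) vec(X) = vec(B X A^T), never forming A kron B
y_fast = (B @ X @ A.T).flatten(order="F")
```

The cost drops from O(m²n²) to O(mn(m+n)) per application, which is the source of the orders-of-magnitude speedups for separable arrays.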

We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-squares (LMS) type distributed adaptive filters with colored inputs. The transient and steady-state performance at each individual node within the network is analysed using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also improved steady-state performance compared with an LMS-based scheme. In addition, the new approach attains acceptable misadjustment with lower computational and memory cost than a distributed recursive least-squares (RLS) based method, provided the number of regressor vectors and the filter length are appropriately chosen.
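An APA update projects the weight vector onto the K most recent regressors at each step, which is what improves convergence over LMS for colored inputs. A single-node identification sketch (filter length, step size, the AR(1) input model, and the "true" weights are all illustrative, and the distributed/network aspect is omitted):

```python
import numpy as np

def apa_identify(x, d, M=4, K=3, mu=0.5, eps=1e-4):
    # Affine projection algorithm: each step solves a small K x K system
    # so the update is exact on the K most recent input/output pairs.
    w = np.zeros(M)
    for i in range(M + K - 1, len(x)):
        U = np.array([x[i - k - np.arange(M)] for k in range(K)])  # K x M
        e = d[i - np.arange(K)] - U @ w
        w += mu * U.T @ np.linalg.solve(U @ U.T + eps * np.eye(K), e)
    return w

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.2, 0.1])       # hypothetical unknown system
white = rng.standard_normal(4000)
x = np.empty_like(white)                        # colored (AR(1)) input
x[0] = white[0]
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + white[t]
d = np.convolve(x, w_true)[: len(x)] + 0.001 * rng.standard_normal(len(x))
w_hat = apa_identify(x, d)
```

With K = 1 the update reduces to normalized LMS; increasing K whitens the effective input at the cost of a K x K solve per step, which is the complexity/convergence trade-off discussed above.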

In Part I ["Fast Transforms for Acoustic Imaging-Part I: Theory," IEEE Transactions on Image Processing], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix that would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming, and 2-D convolutions, and show how these results can be used to accelerate classical and state-of-the-art array imaging algorithms. We also propose using the KAT to accelerate general-purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while retaining the computational advantages of the KAT.
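One way to pair a fast separable operator with a general-purpose regularized least-squares solver, in the spirit described above, is to wrap the identity (A ⊗ B) vec(X) = vec(B X Aᵀ) in a matrix-free linear operator and hand it to an iterative solver such as LSQR. The matrices below are random, well-conditioned stand-ins for the actual array model, not the KAT itself:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
m, n = 10, 9
A = np.eye(m) + 0.1 * rng.standard_normal((m, m))   # illustrative factors
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))

def matvec(v):
    # (A kron B) vec(X) = vec(B X A^T), without forming A kron B
    X = np.reshape(v, (n, m), order="F")
    return (B @ X @ A.T).ravel(order="F")

def rmatvec(v):
    V = np.reshape(v, (n, m), order="F")
    return (B.T @ V @ A).ravel(order="F")

Op = LinearOperator((m * n, m * n), matvec=matvec, rmatvec=rmatvec,
                    dtype=float)
x_true = rng.standard_normal(m * n)
y = Op.matvec(x_true)
# Damped (Tikhonov-regularized) least squares via LSQR, matrix-free
x_hat = lsqr(Op, y, damp=1e-8, atol=1e-12, btol=1e-12)[0]
```

Because LSQR only ever calls matvec/rmatvec, the solver's per-iteration cost inherits the separable speedup.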

As is well known, Hessian-based adaptive filters [such as the recursive least-squares (RLS) algorithm for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization] converge much faster than gradient-based algorithms [such as the least-mean-squares (LMS) algorithm or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments in which each family presents the better performance. Given this, we propose a convex combination of algorithms from different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error of convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
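The convex-combination idea can be sketched for one gradient-based (LMS) and one Hessian-based (RLS) component filter: each adapts independently, and a sigmoid-parameterized mixing weight is adapted by stochastic gradient on the overall error. All parameter values below are illustrative, and the clipping of the mixing variable is a common practical choice rather than a detail taken from the paper:

```python
import numpy as np

def combo_filter(x, d, M=4, mu=0.05, lam=0.995, mu_a=50.0):
    # Convex combination y = eta*y_lms + (1-eta)*y_rls, eta = sigmoid(a)
    w1, w2 = np.zeros(M), np.zeros(M)
    P = 100.0 * np.eye(M)                   # RLS inverse correlation matrix
    a = 0.0
    y_out = np.zeros(len(x))
    for i in range(M - 1, len(x)):
        u = x[i - np.arange(M)]
        y1, y2 = w1 @ u, w2 @ u
        eta = 1.0 / (1.0 + np.exp(-a))
        y = eta * y1 + (1.0 - eta) * y2
        e = d[i] - y
        w1 += mu * (d[i] - y1) * u          # LMS (gradient family)
        k = P @ u / (lam + u @ P @ u)       # RLS gain (Hessian family)
        w2 += k * (d[i] - y2)
        P = (P - np.outer(k, u @ P)) / lam
        # Gradient step on the mixing variable, clipped to avoid
        # saturating the sigmoid
        a = np.clip(a + mu_a * e * (y1 - y2) * eta * (1.0 - eta), -4.0, 4.0)
        y_out[i] = y
    return y_out

rng = np.random.default_rng(0)
w_true = np.array([1.0, -0.5, 0.25, -0.125])    # hypothetical plant
x = rng.standard_normal(5000)
d = np.convolve(x, w_true)[: len(x)] + 0.01 * rng.standard_normal(len(x))
y = combo_filter(x, d)
mse_tail = np.mean((d[-500:] - y[-500:]) ** 2)
```

In a time-variant (random-walk) scenario the mixing weight drifts toward whichever component tracks better, which is the behaviour the unified steady-state model describes.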

Data from 9 studies were compiled to evaluate the effects of 20 yr of selection for postweaning weight (PWW) on carcass characteristics and meat quality in experimental herds of control Nellore (NeC) and selected Nellore (NeS), Caracu (CaS), Guzerah (GuS), and Gir (GiS) breeds. These studies were conducted with animals from a genetic selection program at the Experimental Station of Sertãozinho, São Paulo State, Brazil. After the performance test (168 d postweaning), bulls (n = 490) from the calf crops born between 1992 and 2000 were finished and slaughtered to evaluate carcass traits and meat quality. Treatments differed across studies. A meta-analysis was conducted with a random-coefficients model in which herd was considered a fixed effect and year, and treatment within year, were considered random effects. Either calculated maturity degree or initial BW was used interchangeably as the covariate, and least-squares means were used in the multiple-comparison analysis. The CaS and NeS had heavier (P = 0.002) carcasses than the NeC and GiS; GuS were intermediate. The CaS had the longest carcass (P < 0.001) and the heaviest spare ribs (P < 0.001), striploin (P < 0.001), and beef plate (P = 0.013). Although the body, carcass, and quarter weights of NeS were similar to those of CaS, NeS bulls had more edible meat in the leg region than CaS bulls. Selection for PWW increased rib-eye area in Nellore bulls. Selected Caracu had the lowest (most favorable) shear-force values compared with the NeS (P = 0.003), NeC (P = 0.005), GuS (P = 0.003), and GiS (P = 0.008). Selection for PWW increased body, carcass, and retail meat weights in the Nellore without altering dressing percentage or body fat percentage.

OctVCE is a Cartesian-cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries, in particular from explosions. Virtual Cell Embedding (VCE) was chosen as its Cartesian-cell kernel for its simplicity and sufficiency for practical engineering design problems. The code uses a finite-volume formulation of the unsteady Euler equations with a second-order explicit Runge-Kutta Godunov (MUSCL) scheme. Gradients are calculated using a least-squares method with a minmod limiter. The flux solvers used are AUSM, AUSMDV, and EFM. No fluid-structure coupling or chemical reactions are allowed, but the gas models can be perfect gas and JWL or JWLB for the explosive products. This report also describes the code's 'octree' mesh-adaptation capability and the point-inclusion query procedures of the VCE geometry engine. Finally, some space is devoted to describing the code's parallelization using the shared-memory OpenMP paradigm. The user manual for the code is to be found in the companion report 2007/13.
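A least-squares gradient with a minmod limiter, of the kind named above, can be sketched for a single cell: the gradient solves an overdetermined system over the neighbour offsets, and minmod then keeps the smaller-magnitude slope (or zero at an extremum). The stencil and field below are illustrative, not OctVCE's actual data structures:

```python
import numpy as np

def ls_gradient(xc, x_nbrs, u_c, u_nbrs):
    # Unweighted least-squares gradient of a cell-centred field:
    # solve d_k . grad(u) ~ u_k - u_c over all neighbour offsets d_k.
    D = x_nbrs - xc
    du = u_nbrs - u_c
    g, *_ = np.linalg.lstsq(D, du, rcond=None)
    return g

def minmod(a, b):
    # Slope limiter: zero at extrema, smallest-magnitude slope otherwise
    return np.where(a * b <= 0.0, 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b))

# Cell at the origin with four axis-aligned neighbours; field u = 2x - y
xc = np.array([0.0, 0.0])
x_nbrs = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
u = lambda p: 2.0 * p[..., 0] - p[..., 1]
grad = ls_gradient(xc, x_nbrs, u(xc), u(x_nbrs))
# Limiting two one-sided slopes of opposite sign gives zero
slope = minmod(np.array([1.5]), np.array([-0.5]))
```

The least-squares form is attractive on cut and embedded cells because it needs no structured stencil, only whatever neighbours the octree provides.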

The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite-element collocation discretization of the adsorption integral, with the isotherm data fitted by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solutions is also developed, which modifies an original solution that has some negativity. The technique yields stable and converged solutions and is implemented in the package RIDFEC. The package is demonstrated to be robust, yielding results that are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of a relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data.
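The regularize-then-repair idea can be sketched as follows: solve a Tikhonov-regularized least-squares problem, then repeatedly drop components that come out negative and re-solve on the remaining support. This is only a schematic of a clip-and-refit approach on synthetic data, not the RIDFEC algorithm itself:

```python
import numpy as np

def reg_lsq_nonneg(A, b, lam=1e-3):
    # Tikhonov-regularized least squares with a simple non-negativity
    # repair: solve, drop negative components, re-solve on the support.
    n = A.shape[1]
    def solve(cols):
        As = A[:, cols]
        return np.linalg.solve(As.T @ As + lam * np.eye(len(cols)),
                               As.T @ b)
    f = np.zeros(n)
    active = np.arange(n)
    sol = solve(active)
    while (sol < 0).any():
        active = active[sol >= 0]       # clip the negative part away
        sol = solve(active)             # refit on the remaining support
    f[active] = sol
    return f

rng = np.random.default_rng(0)
# Ill-posed "adsorption integral": a smooth kernel mapping the pore-size
# distribution f to an isotherm b (all quantities synthetic)
x = np.linspace(0.0, 1.0, 40)
A = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)
f_true = np.sin(np.pi * x)              # a non-negative distribution
b = A @ f_true + 1e-4 * rng.standard_normal(40)
f_hat = reg_lsq_nonneg(A, b)
```

Because the support only shrinks, the loop terminates, and the returned distribution is non-negative by construction while still fitting the data to within the regularized residual.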

Residence time distribution studies of gas flow through a rotating drum bioreactor for solid-state fermentation were performed using carbon monoxide as a tracer gas. The exit concentration as a function of time differed considerably from the profiles expected for plug flow, plug flow with axial dispersion, and continuous stirred tank reactor (CSTR) models. The data were then fitted by least-squares analysis to mathematical models describing a central plug-flow region surrounded by either one dead region (a three-parameter model) or two dead regions (a five-parameter model). The model parameters were the dispersion coefficient in the central plug-flow region, the volumes of the dead regions, and the exchange rates between the different regions. The superficial velocity of the gas through the reactor has a large effect on the parameter values: increased superficial velocity tends to decrease the dead-region volumes, the inter-region transfer rates, and the axial dispersion. The significant deviation of the residence time distribution of gas within small-scale reactors from the CSTR, plug flow, and plug flow with axial dispersion models can lead to underestimation of mass and heat transfer coefficients, and hence has implications for reactor design and scale-up.
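For the simplest RTD model, an ideal CSTR, the washout curve is C(t) = C0 exp(-t/τ), so τ can be recovered by an ordinary least-squares line fit to ln C(t); the multi-region models in the study generalize this with more parameters. A sketch on synthetic tracer data (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
tau_true, c0 = 12.0, 5.0                 # residence time (s), initial conc.
t = np.linspace(0.0, 60.0, 120)
# Synthetic washout measurements with small multiplicative noise
c = c0 * np.exp(-t / tau_true) * np.exp(rng.normal(0.0, 0.01, t.size))

# ln C = ln C0 - t/tau is linear in t: fit a line by least squares
slope, intercept = np.polyfit(t, np.log(c), 1)
tau_hat = -1.0 / slope
c0_hat = np.exp(intercept)
```

The three- and five-parameter dead-region models of the study require non-linear least squares over several parameters, but the fitting principle is the same: minimize the squared mismatch between the model and the measured exit-concentration curve.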