900 results for weighted least squares
Abstract:
The aim of this study was to investigate the effects of numerous milk compositional factors on milk coagulation properties using Partial Least Squares (PLS). Milk from herds of Jersey and Holstein-Friesian cattle was collected across the year and blended (n=55) to maximise variation in composition and coagulation. The milk was analysed for casein, protein, fat, titratable acidity, lactose, Ca2+, urea content, casein micelle size (CMS), fat globule size, somatic cell count and pH. Milk coagulation properties were defined as coagulation time, curd firmness and curd firmness rate measured by a controlled-strain rheometer. The models derived from PLS had higher predictive power than previous models, demonstrating the value of measuring more milk components. In addition to the well-established relationships with casein and protein levels, CMS and fat globule size were found to have a strong impact on all three models. The study also found a positive impact of fat on milk coagulation properties and a strong relationship between lactose and curd firmness, and between urea and curd firmness rate, all of which warrant further investigation due to the current lack of knowledge of the underlying mechanisms. These findings demonstrate the importance of using a wider range of milk compositional variables for the prediction of milk coagulation properties, and hence as indicators of milk suitability for cheese making.
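For illustration, a PLS regression of this kind can be fitted in a few lines with scikit-learn; the sketch below uses synthetic stand-in data of the same shape of problem (55 samples, 11 compositional predictors, 3 coagulation responses) and an assumed number of latent components, not the study's measurements.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Partial least squares regression of coagulation properties on compositional
    # variables; synthetic stand-in data, illustrative only.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((55, 11))          # 55 blended milk samples, 11 compositional variables
    Y = X[:, :2] @ np.array([[1.0, 0.5, 0.2],
                             [0.3, 1.0, 0.4]]) + 0.1 * rng.standard_normal((55, 3))

    pls = PLSRegression(n_components=3)        # component count is a modelling choice
    pls.fit(X, Y)
    print(pls.score(X, Y))                     # R^2 of the fitted model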
Abstract:
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, our proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike our previous LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and the regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known approaches of the support vector machine and the least absolute shrinkage and selection operator, as well as the LROLS algorithm.
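As background, the LOO errors used for this kind of model selection do not require refitting the model n times: for any linear-in-the-parameters model estimated by regularized least squares they follow from the hat matrix. A minimal NumPy sketch of that standard identity (the Gaussian kernel centres, common width and regularization value below are illustrative placeholders, not the OFR algorithm of the paper):

    import numpy as np

    def loo_mse(Phi, y, lam):
        # Leave-one-out MSE for regularized LS y ~ Phi @ w,
        # via the exact identity e_loo_i = e_i / (1 - h_ii).
        n, m = Phi.shape
        A = Phi.T @ Phi + lam * np.eye(m)
        w = np.linalg.solve(A, Phi.T @ y)
        H = Phi @ np.linalg.solve(A, Phi.T)           # hat (smoother) matrix
        e = y - Phi @ w                               # ordinary residuals
        return np.mean((e / (1.0 - np.diag(H))) ** 2)

    # Illustrative Gaussian RBF design matrix with assumed centres and width
    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, 100)
    y = np.sin(3 * x) + 0.1 * rng.standard_normal(100)
    centres = np.linspace(-1, 1, 10)
    width = 0.3
    Phi = np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2 * width ** 2))
    print(loo_mse(Phi, y, lam=1e-3))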
Abstract:
A practical orthogonal frequency-division multiplexing (OFDM) system can generally be modelled by the Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. In this contribution, we advocate a novel nonlinear equalization scheme for OFDM Hammerstein systems. We model the nonlinear HPA, which represents the static nonlinearity of the OFDM Hammerstein channel, by a B-spline neural network, and we develop a highly effective alternating least squares algorithm for estimating the parameters of the OFDM Hammerstein channel, including the channel impulse response coefficients and the parameters of the B-spline model. Moreover, we also use another B-spline neural network to model the inversion of the HPA's nonlinearity, and the parameters of this inverting B-spline model can easily be estimated using the standard least squares algorithm based on the pseudo training data obtained as a byproduct of the Hammerstein channel identification. Equalization of the OFDM Hammerstein channel can then be accomplished by the usual one-tap linear equalization together with the inverse B-spline neural network model obtained. The effectiveness of our nonlinear equalization scheme for OFDM Hammerstein channels is demonstrated by simulation results.
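To make the alternating least squares idea concrete, here is a minimal NumPy sketch of Hammerstein identification: a static nonlinearity (a polynomial basis stands in for the paper's B-spline network) followed by an FIR channel, with the two least squares problems solved in turn. The signal, nonlinearity and channel below are invented for illustration.

    import numpy as np

    # Hammerstein channel: static nonlinearity f(.) followed by an FIR filter h.
    # Alternating LS: solve for h with the nonlinearity coefficients fixed, then
    # solve for the nonlinearity coefficients with h fixed (polynomial basis as a
    # stand-in for the B-spline model; the (a, h) pair is identifiable up to scale).
    rng = np.random.default_rng(2)
    N, L = 2000, 4
    x = rng.uniform(-1, 1, N)                          # transmitted signal (real-valued toy)
    f_true = lambda v: v + 0.3 * v**3                  # "HPA" nonlinearity
    h_true = np.array([1.0, 0.5, -0.2, 0.1])
    y = np.convolve(f_true(x), h_true)[:N] + 0.01 * rng.standard_normal(N)

    def basis(v, order=3):
        return np.vstack([v**k for k in range(1, order + 1)]).T   # [v, v^2, v^3]

    a = np.array([1.0, 0.0, 0.0])                      # initial nonlinearity coefficients
    for _ in range(10):
        u_hat = basis(x) @ a
        U = np.column_stack([np.concatenate([np.zeros(k), u_hat[:N - k]]) for k in range(L)])
        h = np.linalg.lstsq(U, y, rcond=None)[0]       # LS for the channel taps
        B = basis(x)
        Z = np.column_stack([np.convolve(B[:, j], h)[:N] for j in range(B.shape[1])])
        a = np.linalg.lstsq(Z, y, rcond=None)[0]       # LS for the nonlinearity coefficients

    print(h, a)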
Abstract:
A practical single-carrier (SC) block transmission with frequency domain equalisation (FDE) system can generally be modelled by the Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. For such Hammerstein channels, the standard SC-FDE scheme no longer works. We propose a novel B-spline neural network based nonlinear SC-FDE scheme for Hammerstein channels. In particular, we model the nonlinear HPA, which represents the complex-valued static nonlinearity of the Hammerstein channel, by two real-valued B-spline neural networks, one for modelling the nonlinear amplitude response of the HPA and the other for the nonlinear phase response of the HPA. We then develop an efficient alternating least squares algorithm for estimating the parameters of the Hammerstein channel, including the channel impulse response coefficients and the parameters of the two B-spline models. Moreover, we also use another real-valued B-spline neural network to model the inversion of the HPA's nonlinear amplitude response, and the parameters of this inverting B-spline model can be estimated using the standard least squares algorithm based on the pseudo training data obtained as a byproduct of the Hammerstein channel identification. Equalisation of the SC Hammerstein channel can then be accomplished by the usual one-tap linear equalisation in the frequency domain together with the inverse B-spline neural network model obtained in the time domain. The effectiveness of our nonlinear SC-FDE scheme for Hammerstein channels is demonstrated in a simulation study.
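For completeness, the "one-tap linear equalisation in the frequency domain" step is the standard cyclic-prefix trick: after the prefix is removed, the linear part of the channel is diagonal in the DFT basis, so each bin is equalised by a single division. A minimal zero-forcing sketch with invented parameters (the nonlinearity handling described above is omitted):

    import numpy as np

    # One-tap frequency-domain equalisation of a cyclic-prefixed block through a
    # known FIR channel h (zero-forcing, noiseless, illustrative parameters only).
    rng = np.random.default_rng(3)
    Nfft, L = 64, 4
    h = np.array([1.0, 0.4 + 0.2j, -0.1j, 0.05])
    s = (rng.integers(0, 2, Nfft) * 2 - 1).astype(complex)    # BPSK block
    tx = np.concatenate([s[-L:], s])                          # add cyclic prefix
    rx = np.convolve(tx, h)[L:L + Nfft]                       # drop prefix, keep one block
    H = np.fft.fft(h, Nfft)                                   # channel frequency response
    s_hat = np.fft.ifft(np.fft.fft(rx) / H)                   # one-tap equaliser per bin
    print(np.allclose(s_hat, s))                              # exact recovery without noise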
Abstract:
High bandwidth-efficiency quadrature amplitude modulation (QAM) signaling, widely adopted in high-rate communication systems, suffers from a drawback of high peak-to-average power ratio, which may cause the nonlinear saturation of the high power amplifier (HPA) at the transmitter. Thus, practical high-throughput QAM communication systems exhibit nonlinear and dispersive channel characteristics that must be modeled as a Hammerstein channel. Standard linear equalization becomes inadequate for such Hammerstein communication systems. In this paper, we advocate an adaptive B-spline neural network based nonlinear equalizer. Specifically, during the training phase, an efficient alternating least squares (LS) scheme is employed to estimate the parameters of the Hammerstein channel, including both the channel impulse response (CIR) coefficients and the parameters of the B-spline neural network that models the HPA's nonlinearity. In addition, another B-spline neural network is used to model the inversion of the nonlinear HPA, and the parameters of this inverting B-spline model can easily be estimated using the standard LS algorithm based on the pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. Nonlinear equalization of the Hammerstein channel is then accomplished by linear equalization based on the estimated CIR together with the inverse B-spline neural network model. Furthermore, during the data communication phase, decision-directed LS channel estimation is adopted to track the time-varying CIR. Extensive simulation results demonstrate the effectiveness of our proposed B-spline neural network based nonlinear equalization scheme.
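The decision-directed LS tracking mentioned above simply re-solves the channel least squares problem with hard-detected symbols in place of pilots. A toy BPSK sketch (the hard decisions below are taken directly on the received samples as a crude stand-in for the equaliser output, so this only illustrates the LS step, not the full scheme):

    import numpy as np

    # Decision-directed LS re-estimation of the channel impulse response (CIR):
    # detected symbols are used as if they were training symbols.
    rng = np.random.default_rng(4)
    N, L = 256, 3
    h_true = np.array([0.9, 0.4, -0.2])
    s = rng.choice([-1.0, 1.0], N)
    y = np.convolve(s, h_true)[:N] + 0.05 * rng.standard_normal(N)

    s_det = np.sign(y)                                 # crude hard decisions
    S = np.column_stack([np.concatenate([np.zeros(k), s_det[:N - k]]) for k in range(L)])
    h_hat = np.linalg.lstsq(S, y, rcond=None)[0]       # LS fit of the CIR to the decisions
    print(h_hat)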
Abstract:
4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis, the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function which, upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data. The condition number can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using the bounds, we show that both formulations' sensitivities are related to the error variance balance, the assimilation window length and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
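For reference, the strong-constraint objective function referred to above has the standard form (notation assumed here: background state $x_b$ with error covariance $B$, observations $y_i$ with error covariances $R_i$, observation operators $\mathcal{H}_i$ and model propagators $\mathcal{M}_{0\to i}$; this is the textbook statement, not a result specific to the thesis):

\[
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf{T}} B^{-1} (x_0 - x_b)
+ \tfrac{1}{2}\sum_{i=0}^{N} \bigl(y_i - \mathcal{H}_i(\mathcal{M}_{0\to i}(x_0))\bigr)^{\mathsf{T}} R_i^{-1} \bigl(y_i - \mathcal{H}_i(\mathcal{M}_{0\to i}(x_0))\bigr),
\]

with the model treated as perfect; the weak-constraint formulations add model-error terms and the corresponding extra control variables. The condition number discussed above is $\kappa(\mathbf{H}) = \lambda_{\max}(\mathbf{H})/\lambda_{\min}(\mathbf{H})$ for the Hessian $\mathbf{H}$ of the relevant objective function.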
Abstract:
We use sunspot group observations from the Royal Greenwich Observatory (RGO) to investigate the effects of intercalibrating data from observers with different visual acuities. The tests are made by counting the number of groups RB above a variable cut-off threshold of observed total whole-spot area (uncorrected for foreshortening) to simulate what a lower-acuity observer would have seen. The synthesised annual means of RB are then re-scaled to the full observed RGO group number RA using a variety of regression techniques. It is found that a very high correlation between RA and RB (rAB > 0.98) does not prevent large errors in the intercalibration (for example, sunspot maximum values can be over 30 % too large even for such levels of rAB). In generating the backbone sunspot number (RBB), Svalgaard and Schatten (2015, this issue) force regression fits to pass through the scatter plot origin, which generates unreliable fits (the residuals do not form a normal distribution) and causes sunspot cycle amplitudes to be exaggerated in the intercalibrated data. It is demonstrated that the use of Quantile-Quantile (“Q-Q”) plots to test for a normal distribution is a useful indicator of erroneous and misleading regression fits. Ordinary least squares linear fits, not forced to pass through the origin, are sometimes reliable (although the optimum method is shown to be different when matching peak and average sunspot group numbers). However, other fits are only reliable if non-linear regression is used. From these results it is entirely possible that the inflation of solar cycle amplitudes in the backbone group sunspot number as one goes back in time, relative to related solar-terrestrial parameters, is caused by the use of inappropriate and non-robust regression techniques to calibrate the sunspot data.
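As a pointer to the regression issue discussed above, the sketch below contrasts a fit forced through the origin with an ordinary least squares fit that includes an intercept, and applies a residual normality test in the spirit of the Q-Q diagnostics; the data are synthetic, not the RGO series.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    x = rng.uniform(1, 10, 200)
    y = 2.0 + 1.5 * x + rng.standard_normal(200)               # true relation has a non-zero intercept

    slope_origin = np.sum(x * y) / np.sum(x * x)               # LS fit forced through the origin
    X = np.column_stack([np.ones_like(x), x])
    intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS with intercept

    res_origin = y - slope_origin * x
    res_ols = y - (intercept + slope * x)
    # Shapiro-Wilk normality test of the residuals from each fit
    print(stats.shapiro(res_origin).pvalue, stats.shapiro(res_ols).pvalue)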
Abstract:
We present models for the upper-mantle velocity structure beneath SE and Central Brazil using independent tomographic inversions of P- and S-wave relative arrival-time residuals (including core phases) from teleseismic earthquakes. The events were recorded by a total of 92 stations deployed through different projects, institutions and time periods during the years 1992-2004. Our results show correlations with the main tectonic structures and reveal new anomalies not observed in previous works. All interpretations are based on robust anomalies, which appear in the different inversions for P- and S-waves. The resolution is variable through our study volume and has been analyzed through different theoretical test inversions. High-velocity anomalies are observed in the western portion of the Sao Francisco Craton, supporting the hypothesis that this Craton was part of a major Neoproterozoic plate (San Franciscan Plate). Low-velocity anomalies beneath the Tocantins Province (mainly fold belts between the Amazon and Sao Francisco Cratons) are interpreted as due to lithospheric thinning, which is consistent with the good correlation between intraplate seismicity and low-velocity anomalies in this region. Our results show that the basement of the Parana Basin is formed by several blocks, separated by suture zones, in accordance with the model of Milani & Ramos. The slab of the Nazca Plate can be observed as a high-velocity anomaly beneath the Parana Basin, between the depths of 700 and 1200 km. Further, we confirm the low-velocity anomaly in the NE area of the Parana Basin, which has been interpreted by VanDecar et al. as a fossil conduit of the Tristan da Cunha Plume related to the Parana flood basalt eruptions during the opening of the South Atlantic.
Abstract:
Sea surface gradients derived from the Geosat and ERS-1 satellite altimetry geodetic missions were integrated with marine gravity data from the National Geophysical Data Center and Brazilian national surveys. Using the least squares collocation method, models of free-air gravity anomaly and geoid height were calculated for the coast of Brazil with a resolution of 2′ × 2′. The integration of satellite and shipborne data showed better statistical results in regions near the coast than using satellite data only, suggesting an improvement when compared to the state-of-the-art global gravity models. Furthermore, these results were obtained with considerably less input information than was used by those reference models. The least squares collocation presented a very low content of high-frequency noise in the predicted gravity anomalies. This may be considered essential to improve the high-resolution representation of the gravity field in regions of ocean-continent transition. (C) 2010 Elsevier Ltd. All rights reserved.
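For reference, the least squares collocation predictor used in such work takes the standard form (notation assumed here, not taken from the paper): for observations $\ell$ consisting of signal plus noise, the predicted quantity at the computation points is

\[
\hat{s}_P = C_{P\ell}\,\bigl(C_{\ell\ell} + C_{nn}\bigr)^{-1}\,\ell ,
\]

where $C_{\ell\ell}$ is the auto-covariance of the observed signal, $C_{nn}$ the noise covariance, and $C_{P\ell}$ the cross-covariance between the predicted quantity (e.g. gravity anomaly or geoid height) and the observations; the chosen covariance model governs the smoothing and hence the low high-frequency noise noted above.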
Effects of roads, topography, and land use on forest cover dynamics in the Brazilian Atlantic Forest
Abstract:
Roads and topography can determine patterns of land use and distribution of forest cover, particularly in tropical regions. We evaluated how road density, land use, and topography affected forest fragmentation, deforestation and forest regrowth in a Brazilian Atlantic Forest region near the city of Sao Paulo. We mapped roads and land use/land cover for three years (1962, 1981 and 2000) from historical aerial photographs, and summarized the distribution of roads, land use/land cover and topography within a grid of 94 non-overlapping 100 ha squares. We used generalized least squares regression models for data analysis. Our models showed that forest fragmentation and deforestation depended on topography, land use and road density, whereas forest regrowth depended primarily on land use. However, the relationships between these variables and forest dynamics changed in the two studied periods; land use and slope were the strongest predictors from 1962 to 1981, and past (1962) road density and land use were the strongest predictors for the following period (1981-2000). Roads had the strongest relationship with deforestation and forest fragmentation when the expansion of agriculture and buildings was limited to already deforested areas, and when there was a rapid expansion of development under the influence of Sao Paulo city. Furthermore, the past (1962) road network was more important than the recent road network (1981) when explaining forest dynamics between 1981 and 2000, suggesting a long-term effect of roads. Roads are permanent scars on the landscape and facilitate deforestation and forest fragmentation due to increased accessibility and land valorization, which control land-use and land-cover dynamics. Topography directly affected deforestation, agriculture and road expansion, mainly between 1962 and 1981. Forests are thus in peril where there are more roads, and long-term conservation strategies should consider ways to mitigate roads as permanent landscape features and facilitators of deforestation and forest fragmentation. (C) 2009 Elsevier B.V. All rights reserved.
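For readers unfamiliar with generalized least squares, it differs from OLS by weighting with the inverse of the residual covariance, $\hat{\beta} = (X^{\mathsf{T}}\Omega^{-1}X)^{-1}X^{\mathsf{T}}\Omega^{-1}y$, which is how spatial or temporal autocorrelation among units is accommodated. A minimal sketch with synthetic data and an assumed AR(1)-type correlation structure (not the study's landscape data):

    import numpy as np

    rng = np.random.default_rng(6)
    n = 94                                              # same number of units as the 100 ha grid
    x = rng.uniform(0, 1, n)
    rho = 0.6
    Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # assumed correlation
    e = np.linalg.cholesky(Sigma) @ rng.standard_normal(n)
    y = 1.0 + 2.0 * x + e

    X = np.column_stack([np.ones(n), x])
    W = np.linalg.inv(Sigma)                            # inverse residual covariance
    beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    print(beta_gls, beta_ols)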
Abstract:
In this paper we deal with robust inference in heteroscedastic measurement error models. Rather than the normal distribution, we postulate a Student t distribution for the observed variables. Maximum likelihood estimates are computed numerically. Consistent estimation of the asymptotic covariance matrices of the maximum likelihood and generalized least squares estimators is also discussed. Three test statistics are proposed for testing hypotheses of interest, with the asymptotic chi-square distribution which guarantees correct asymptotic significance levels. Results of simulations and an application to a real data set are also reported. (C) 2009 The Korean Statistical Society. Published by Elsevier B.V. All rights reserved.
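As a toy illustration of computing maximum likelihood estimates numerically under a Student t assumption, the sketch below fits a t location-scale model by minimising the negative log-likelihood; it is a simplified stand-in, not the paper's heteroscedastic measurement error model.

    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(7)
    data = 2.0 + 1.5 * rng.standard_t(df=4, size=500)           # synthetic heavy-tailed sample

    def negloglik(theta):
        mu, log_sigma, log_nu = theta                           # log-parameterise positive parameters
        return -np.sum(stats.t.logpdf(data, df=np.exp(log_nu), loc=mu, scale=np.exp(log_sigma)))

    res = optimize.minimize(negloglik, x0=np.array([0.0, 0.0, np.log(5.0)]), method="Nelder-Mead")
    mu_hat, sigma_hat, nu_hat = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
    print(mu_hat, sigma_hat, nu_hat)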
Abstract:
Moving-least-squares (MLS) surfaces undergoing large deformations need periodic regeneration of the point set (point-set resampling) so as to keep the point-set density quasi-uniform. Previous work by the authors dealt with algebraic MLS surfaces, and proposed a resampling strategy based on defining the new points at the intersections of the MLS surface with a suitable set of rays. That strategy has very low memory requirements and is easy to parallelize. In this article new resampling strategies with reduced CPU-time cost are explored. The basic idea is to choose as the set of rays the lines of a regular, Cartesian grid, and to fully exploit this grid: as a data structure for search queries, as a spatial structure for traversing the surface in a continuation-like algorithm, and also as an approximation grid for an interpolated version of the MLS surface. It is shown that in this way a very simple and compact resampling technique is obtained, which cuts the resampling cost by half with affordable memory requirements.
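The moving least squares idea underlying such surfaces is a locally weighted fit recomputed at every evaluation point. A minimal 1D curve-fitting sketch with a Gaussian weight is given below to illustrate the principle only; the authors' algebraic surface formulation and resampling machinery are not reproduced.

    import numpy as np

    def mls_eval(xq, x, y, h=0.2):
        # Local linear fit at the query point xq by weighted least squares,
        # with weights decaying with distance to xq (Gaussian kernel, width h).
        w = np.exp(-((x - xq) / h) ** 2)
        B = np.column_stack([np.ones_like(x), x - xq])   # local basis centred at xq
        A = (B * w[:, None]).T @ B
        b = (B * w[:, None]).T @ y
        return np.linalg.solve(A, b)[0]                  # value of the local fit at xq

    rng = np.random.default_rng(8)
    x = np.sort(rng.uniform(0, 1, 200))
    y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(200)
    print([mls_eval(q, x, y) for q in (0.25, 0.5, 0.75)])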
Abstract:
Partition of Unity Implicits (PUI) has recently been introduced for surface reconstruction from point clouds. In this work, we propose a PUI method that employs a set of well-observed solutions in order to produce geometrically pleasant results without requiring time-consuming or mathematically overloaded computations. One feature of our technique is the use of multivariate orthogonal polynomials in the least-squares approximation, which allows the recursive refinement of the local fittings in terms of the degree of the polynomial. However, since the use of high-order approximations based only on the number of available points is not reliable, we introduce the concept of coverage domain. In addition, the method relies on the use of an algebraically defined triangulation to handle two important tasks in PUI: the spatial decomposition and an adaptive polygonization. As the spatial subdivision is based on tetrahedra, the generated mesh may present poorly-shaped triangles, which are improved in this work by means of a specific vertex displacement technique. Furthermore, we also address sharp features and raw data treatment. A further contribution is based on the PUI locality property, which leads to an intuitive scheme for improving or repairing the surface by means of editing local functions.
Abstract:
We report a statistical analysis of Doppler broadening coincidence data of electron-positron annihilation radiation in silicon using a ²²Na source. The Doppler broadening coincidence spectrum was fit using a model function that included positron annihilation at rest with 1s, 2s, 2p, and valence band electrons. In-flight positron annihilation was also fit. The response functions of the detectors accounted for backscattering, combinations of Compton effects, pileup, ballistic deficit, and pulse-shaping problems. The procedure allows the quantitative determination of positron annihilation with core and valence electron intensities as well as their standard deviations directly from the experimental spectrum. The results obtained for the core and valence band electron annihilation intensities were 2.56(9)% and 97.44(9)%, respectively. These intensities are consistent with published experimental data treated by conventional analysis methods. This new procedure has the advantage of allowing one to distinguish additional effects from those associated with the detection system response function. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Mebendazole (MBZ) is a common benzimidazole anthelmintic that exists in three different polymorphic forms, A, B, and C. Polymorph C is the pharmaceutically preferred form due to its adequate aqueous solubility. No single-crystal structure determination depicting the nature of the crystal packing and the molecular conformation and geometry had previously been performed on this compound. The crystal structure of mebendazole form C is solved here for the first time. Mebendazole form C crystallizes in a triclinic centrosymmetric space group, and the molecule is practically planar, since the least-squares plane through the methyl benzimidazolylcarbamate moiety fits its constituent atoms closely. However, the benzoyl group is twisted by 31(1) degrees from the benzimidazole ring, and likewise the torsion angle between the benzene and carbonyl moieties is 27(1) degrees. These twists and other interesting intramolecular geometry features were viewed as a consequence of the intermolecular contacts occurring within the mebendazole C structure. Among these features, a decrease in conjugation through the imine nitrogen atom of the benzimidazole core and a further resonance path crossing the carbamate group were described. Finally, the X-ray powder diffractogram of a form C-rich mebendazole mixture was overlaid on the pattern calculated from the mebendazole crystal structure. (C) 2008 Wiley-Liss, Inc. and the American Pharmacists Association. J Pharm Sci 98:2336-2344, 2009