922 results for sums of squares
Abstract:
We use sunspot group observations from the Royal Greenwich Observatory (RGO) to investigate the effects of intercalibrating data from observers with different visual acuities. The tests are made by counting the number of groups R_B above a variable cut-off threshold of observed total whole-spot area (uncorrected for foreshortening) to simulate what a lower-acuity observer would have seen. The synthesised annual means of R_B are then re-scaled to the full observed RGO group number R_A using a variety of regression techniques. It is found that a very high correlation between R_A and R_B (r_AB > 0.98) does not prevent large errors in the intercalibration (for example, sunspot maximum values can be over 30% too large even for such levels of r_AB). In generating the backbone sunspot number (R_BB), Svalgaard and Schatten (2015, this issue) force regression fits to pass through the scatter-plot origin, which generates unreliable fits (the residuals do not form a normal distribution) and causes sunspot cycle amplitudes to be exaggerated in the intercalibrated data. It is demonstrated that the use of quantile-quantile ("Q-Q") plots to test for a normal distribution is a useful indicator of erroneous and misleading regression fits. Ordinary least-squares linear fits, not forced to pass through the origin, are sometimes reliable (although the optimum method is shown to differ when matching peak and average sunspot group numbers). However, other fits are only reliable if non-linear regression is used. From these results, it is possible that the inflation of solar cycle amplitudes in the backbone group sunspot number as one goes back in time, relative to related solar-terrestrial parameters, is caused entirely by the use of inappropriate and non-robust regression techniques to calibrate the sunspot data.
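A minimal sketch of the residual diagnostic described above, using synthetic data in place of the RGO series: it compares an ordinary least-squares fit having a free intercept with a fit forced through the origin, then checks each set of residuals for normality with a Q-Q plot (via scipy.stats.probplot). All variable names and data are illustrative, not the authors' actual pipeline.

```python
# Illustrative only: synthetic stand-in for the R_A / R_B comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r_a = rng.uniform(1, 12, 200)                      # "true" group numbers
r_b = 0.8 * r_a - 0.5 + rng.normal(0, 0.4, 200)    # lower-acuity proxy with an offset

# Ordinary least squares with a free intercept.
slope, intercept, r_val, _, _ = stats.linregress(r_b, r_a)
resid_free = r_a - (slope * r_b + intercept)

# Fit forced through the origin: slope = sum(xy) / sum(x^2).
slope0 = np.sum(r_b * r_a) / np.sum(r_b**2)
resid_origin = r_a - slope0 * r_b

# Q-Q diagnostic: probplot's r statistic is near 1 for normal residuals.
_, (_, _, qq_free) = stats.probplot(resid_free)
_, (_, _, qq_origin) = stats.probplot(resid_origin)
print(f"correlation r_AB = {r_val:.3f}")
print(f"Q-Q normality r: free intercept {qq_free:.4f}, through origin {qq_origin:.4f}")
```

Even with r_AB near 1, the origin-forced residuals show systematic structure on the Q-Q plot, which is the warning sign the abstract describes.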
Abstract:
Optimal state estimation requires minimising a weighted, nonlinear, least-squares objective function in order to obtain the best estimate of the current state of a dynamical system. The minimisation is often non-trivial owing to the large scale of the problem, the relative sparsity of the observations and the nonlinearity of the objective function. To simplify the problem, the solution is often found via a sequence of linearised objective functions. The condition number of the Hessian of the linearised problem is an important indicator of the convergence rate of the minimisation and the expected accuracy of the solution. In the standard formulation the convergence is slow, indicating an ill-conditioned objective function. A transformation to different variables is often used to improve the conditioning by changing, or preconditioning, the Hessian. The literature offers little explanation of the causes of ill-conditioning of the optimal state estimation problem or of the effect of preconditioning on the condition number. This paper derives descriptive theoretical bounds on the condition number of both the unpreconditioned and the preconditioned system in order to better understand the conditioning of the problem. We use these bounds to explain why the standard objective function is often ill-conditioned and why a standard preconditioning reduces the condition number. We also use the bounds on the preconditioned Hessian to identify the main factors that affect the conditioning of the system. We illustrate the results with simple numerical experiments.
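A toy illustration of the conditioning argument (not the paper's actual variational system): the standard state-estimation Hessian S = B^(-1) + H^T R^(-1) H, and its preconditioned form I + B^(1/2) H^T R^(-1) H B^(1/2) obtained from the control-variable transform x = B^(1/2) v. The matrices and sizes below are assumed for illustration.

```python
# Toy state-estimation Hessian, purely illustrative of the conditioning claim.
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 20
H = rng.normal(size=(m, n))                   # toy observation operator
R = np.eye(m)                                 # observation-error covariance
# Correlated background-error covariance; conditioning worsens with correlation.
idx = np.arange(n)
B = 0.5 ** np.abs(idx[:, None] - idx[None, :])

hess = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H

# Standard preconditioning via the control-variable transform x = B^{1/2} v,
# giving S_pre = I + B^{1/2} H^T R^{-1} H B^{1/2}.
w, V = np.linalg.eigh(B)
B_half = V @ np.diag(np.sqrt(w)) @ V.T
hess_pre = np.eye(n) + B_half @ H.T @ np.linalg.inv(R) @ H @ B_half

print("cond(unpreconditioned):", np.linalg.cond(hess))
print("cond(preconditioned):  ", np.linalg.cond(hess_pre))
```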
Abstract:
We prove that
$$\sum_{k,\ell=1}^{N} \frac{(n_k, n_\ell)^{2\alpha}}{(n_k n_\ell)^{\alpha}} \ll N^{2-2\alpha} (\log N)^{b(\alpha)}$$
holds for arbitrary integers $1 \le n_1 < \cdots$
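A quick numerical sanity check of the left-hand side, reading (n_k, n_ℓ) as the greatest common divisor as is standard for such GCD sums, for the sample sequence n_k = k and an assumed exponent alpha:

```python
# Evaluate the GCD sum for n_k = k, k = 1..N, at a sample exponent alpha.
from math import gcd

def gcd_sum(N: int, alpha: float) -> float:
    """Sum over k, l of gcd(n_k, n_l)**(2*alpha) / (n_k * n_l)**alpha for n_k = k."""
    return sum(
        gcd(k, l) ** (2 * alpha) / (k * l) ** alpha
        for k in range(1, N + 1)
        for l in range(1, N + 1)
    )

alpha = 0.75
for N in (100, 200, 400):
    s = gcd_sum(N, alpha)
    # Compare against the proven order of magnitude N^(2-2*alpha), log factors aside.
    print(N, s, s / N ** (2 - 2 * alpha))
```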
Abstract:
The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by imposing an l1-norm constraint on the parameter vector to promote model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which the weighting factors are set as the inverse of the associated parameter-estimate magnitudes, which are readily available in the adaptive learning environment. ZA-RLS-II is computationally more efficient than ZA-RLS-I because it exploits known results from linear algebra as well as the sparsity of the system. The proposed algorithms are proven to converge, and adaptive sparse channel estimation is used to demonstrate the effectiveness of the proposed approach.
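A compact sketch of the idea, not the paper's exact ZA-RLS-I/II updates: a standard exponentially weighted RLS recursion followed by a zero-attraction step in which the l1 penalty is approximated by an l2 penalty weighted by 1/(|w_i| + eps), so small coefficients are shrunk toward zero. The knobs gamma, eps and lam are assumed values, not taken from the paper.

```python
# Illustrative zero-attracting RLS sketch; gamma, eps, lam are assumed knobs.
import numpy as np

def za_rls(X, d, lam=0.99, gamma=1e-3, eps=1e-6):
    """X: (T, n) regressor rows; d: (T,) desired signal. Returns weights (n,)."""
    T, n = X.shape
    w = np.zeros(n)
    P = 1e3 * np.eye(n)                         # inverse correlation estimate
    for t in range(T):
        x = X[t]
        k = P @ x / (lam + x @ P @ x)           # gain vector
        e = d[t] - w @ x                        # a priori error
        w = w + k * e                           # standard RLS update
        P = (P - np.outer(k, x @ P)) / lam      # Riccati update
        # Zero attraction: the adaptively weighted l2 proxy for the l1 norm
        # shrinks each coefficient by ~ gamma * sign(w_i), pulling small
        # coefficients toward zero.
        w = w - gamma * w / (np.abs(w) + eps)
    return w

# Sparse toy channel: only 2 of 16 taps are non-zero.
rng = np.random.default_rng(2)
w_true = np.zeros(16); w_true[[3, 11]] = [1.0, -0.5]
X = rng.normal(size=(500, 16))
d = X @ w_true + 0.01 * rng.normal(size=500)
print(np.round(za_rls(X, d), 3))
```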
Abstract:
The aim of this study was to evaluate the effects of replacing soybean meal with urea, at three dietary levels, on the milk protein fractions (casein, whey protein and non-protein nitrogen) of dairy cows. Nine mid-lactation Holstein cows were used in a 3 x 3 Latin square arrangement composed of 3 treatments, 3 periods of 21 days each, and 3 squares. The treatments consisted of three diets fed to lactating cows, which were randomly assigned to three groups of three animals: (A) no urea inclusion, providing 100% of crude protein (CP), rumen undegradable protein (RUP) and rumen degradable protein (RDP) requirements, using soybean meal and sugarcane as roughage; (B) urea inclusion at 7.5 g/kg DM in partial substitution of the soybean meal CP equivalent; (C) urea inclusion at 15 g/kg DM in partial substitution of the soybean meal CP equivalent. Rations were isoenergetic and isonitrogenous (160 g/kg DM of crude protein and 6.40 MJ/kg DM of net energy for lactation). When the data were analyzed by simple polynomial regression, no differences were observed among treatments in milk CP content, true protein, casein, whey protein, non-casein and non-protein nitrogen, or urea. The milk true protein:crude protein and casein:true protein ratios were not influenced by replacing soybean meal with urea in the diet. Based on the results, it can be concluded that the addition of urea at up to 15 g/kg of diet dry matter in substitution of soybean meal, when fed to lactating dairy cows, did not alter milk crude protein concentration, casein, whey protein or the non-protein fractions. (c) 2007 Elsevier B.V. All rights reserved.
Abstract:
Eddy-covariance measurements of net ecosystem exchange of CO(2) (NEE) and estimates of gross ecosystem productivity (GEP) and ecosystem respiration (R(E)) were obtained in a 2-4 year old Eucalyptus plantation during two years with very different winter rainfall. In the first (drier) year, the annual NEE, GEP and R(E) were lower than the sums in the second (normal) year; conversely, the total respiratory costs of assimilated carbon were higher in the dry year than in the normal year. Although the net primary production (NPP) in the first year was 23% lower than that of the second year, the decrease in the carbon use efficiency (CUE = NPP/GEP) was 11%, and autotrophic respiration utilized more resources in the first, dry year than in the second, normal year. The time variations in NEE were followed by NPP because, in these young Eucalyptus plantations, NEE is very largely dominated by NPP and heterotrophic respiration plays only a relatively minor role. During the dry season, a pronounced hysteresis was observed in the relationship between NEE and photosynthetically active radiation, and NEE fluxes were inversely proportional to humidity saturation deficit values greater than 0.8 kPa. Nighttime fluxes of CO(2) during calm conditions, when the friction velocity (u*) was below the threshold (0.25 m s(-1)), were estimated based on a Q(10) temperature-dependence relationship adjusted separately for different classes of soil moisture content, which regulated the temperature sensitivity of ecosystem respiration. (C) 2010 Elsevier B.V. All rights reserved.
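The Q(10) gap-filling model referenced above has the standard form R = R_ref * Q10^((T - T_ref)/10). A minimal sketch of fitting it separately per soil-moisture class, on synthetic data with an assumed reference temperature:

```python
# Fit R = R_ref * Q10**((T - T_REF)/10) to nighttime fluxes, separately per
# soil-moisture class. Data, class count and T_REF are illustrative choices.
import numpy as np
from scipy.optimize import curve_fit

T_REF = 15.0  # reference temperature, deg C (assumed)

def q10_model(temp, r_ref, q10):
    return r_ref * q10 ** ((temp - T_REF) / 10.0)

rng = np.random.default_rng(3)
temp = rng.uniform(5, 25, 300)
moisture_class = rng.integers(0, 3, 300)            # 3 soil-moisture classes
# Synthetic "truth": temperature sensitivity varies with soil moisture.
q10_true = np.array([1.6, 2.0, 2.4])[moisture_class]
resp = 2.0 * q10_true ** ((temp - T_REF) / 10.0) + rng.normal(0, 0.1, 300)

for cls in range(3):
    mask = moisture_class == cls
    (r_ref, q10), _ = curve_fit(q10_model, temp[mask], resp[mask], p0=(1.0, 2.0))
    print(f"class {cls}: R_ref = {r_ref:.2f}, Q10 = {q10:.2f}")
```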
Effects of roads, topography, and land use on forest cover dynamics in the Brazilian Atlantic Forest
Abstract:
Roads and topography can determine patterns of land use and distribution of forest cover, particularly in tropical regions. We evaluated how road density, land use, and topography affected forest fragmentation, deforestation and forest regrowth in a Brazilian Atlantic Forest region near the city of Sao Paulo. We mapped roads and land use/land cover for three years (1962, 1981 and 2000) from historical aerial photographs, and summarized the distribution of roads, land use/land cover and topography within a grid of 94 non-overlapping 100 ha squares. We used generalized least squares regression models for data analysis. Our models showed that forest fragmentation and deforestation depended on topography, land use and road density, whereas forest regrowth depended primarily on land use. However, the relationships between these variables and forest dynamics changed between the two studied periods; land use and slope were the strongest predictors from 1962 to 1981, and past (1962) road density and land use were the strongest predictors for the following period (1981-2000). Roads had the strongest relationship with deforestation and forest fragmentation when the expansion of agriculture and buildings was limited to already deforested areas, and when there was a rapid expansion of development under the influence of the city of Sao Paulo. Furthermore, the past (1962) road network was more important than the recent road network (1981) in explaining forest dynamics between 1981 and 2000, suggesting a long-term effect of roads. Roads are permanent scars on the landscape and facilitate deforestation and forest fragmentation through increased accessibility and land valorization, which control land-use and land-cover dynamics. Topography directly affected deforestation, agriculture and road expansion, mainly between 1962 and 1981. Forests are thus in peril where there are more roads, and long-term conservation strategies should consider ways to mitigate roads as permanent landscape features and facilitators of deforestation and forest fragmentation. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Partition of Unity Implicits (PUI) have recently been introduced for surface reconstruction from point clouds. In this work, we propose a PUI method that employs a set of well-observed solutions in order to produce geometrically pleasant results without requiring time-consuming or mathematically overloaded computations. One feature of our technique is the use of multivariate orthogonal polynomials in the least-squares approximation, which allows the recursive refinement of the local fittings in terms of the degree of the polynomial. However, since the use of high-order approximations based only on the number of available points is not reliable, we introduce the concept of coverage domain. In addition, the method relies on an algebraically defined triangulation to handle two important tasks in PUI: the spatial decomposition and an adaptive polygonization. As the spatial subdivision is based on tetrahedra, the generated mesh may present poorly-shaped triangles, which we improve by means of a specific vertex displacement technique. Furthermore, we also address sharp features and raw data treatment. A further contribution is based on the PUI locality property, which leads to an intuitive scheme for improving or repairing the surface by means of editing local functions.
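As an illustration of the local least-squares ingredient (a generic sketch, not the paper's orthogonal-polynomial formulation): fitting a quadric height field z ≈ c0 + c1·x + c2·y + c3·x² + c4·xy + c5·y² to a neighbourhood of points with numpy.linalg.lstsq. Raising the degree only where the neighbourhood supports a reliable fit mirrors the coverage-domain idea described above.

```python
# Generic local least-squares quadric fit to a point neighbourhood;
# illustrative of the local-fitting step, not the paper's orthogonal basis.
import numpy as np

def fit_local_quadric(pts):
    """pts: (m, 3) points in a local frame. Returns 6 quadric coefficients."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Design matrix for z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2.
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

rng = np.random.default_rng(4)
xy = rng.uniform(-1, 1, (60, 2))
z = 0.3 * xy[:, 0] ** 2 - 0.2 * xy[:, 0] * xy[:, 1] + 0.05 * rng.normal(size=60)
print(np.round(fit_local_quadric(np.column_stack([xy, z])), 3))
```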
Abstract:
The representation of interfaces by means of the algebraic moving-least-squares (AMLS) technique is addressed. This technique, in which the interface is represented by an unconnected set of points, is interesting for evolving fluid interfaces since there is no surface connectivity. The position of the surface points can thus be updated without concerns about the quality of any surface triangulation. We introduce a novel AMLS technique especially designed for evolving-interface applications that we denote RAMLS (for Robust AMLS). The main advantages with respect to previous AMLS techniques are increased robustness, computational efficiency, and freedom from user-tuned parameters. Further, we propose a new front-tracking method based on the Lagrangian advection of the unconnected point set that defines the RAMLS surface. We assume that a background Eulerian grid is defined with some grid spacing h. The advection of the point set makes the surface evolve in time. The point cloud can be regenerated at any time (in particular, we regenerate it each time step) by intersecting the gridlines with the evolved surface, which guarantees that the density of points on the surface is always well balanced. The intersection algorithm is essentially a ray-tracing algorithm, well studied in computer graphics, in which a line (ray) is traced so as to detect all intersections with a surface. Also, the tracing of each gridline is independent and can thus be performed in parallel. Several tests are reported assessing first the accuracy of the proposed RAMLS technique, and then of the front-tracking method based on it. Comparison with previous Eulerian, Lagrangian and hybrid techniques encourages further development of the proposed method for fluid mechanics applications. (C) 2008 Elsevier Inc. All rights reserved.
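A minimal sketch of the point-cloud regeneration step as described (gridline-surface intersection located by sign change and refined by bisection), with an analytic implicit sphere standing in for the tracked RAMLS front. Tracing the y- and z-gridlines in the same way, each line independently, gives the balanced, parallelizable resampling the abstract describes.

```python
# Regenerate surface points by intersecting x-direction gridlines with an
# implicit surface phi = 0 (a sphere stands in for the tracked front).
import numpy as np

def phi(p):
    return np.linalg.norm(p) - 0.7              # signed distance to a sphere

def bisect_on_segment(f, a, b, iters=40):
    """Bisection for f(a) * f(b) < 0 along a segment [a, b] in R^3."""
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

h = 0.25                                        # background grid spacing
grid = np.arange(-1, 1 + h, h)
points = []
for y in grid:
    for z in grid:
        # Walk each x-gridline; a sign change brackets an intersection.
        line = [np.array([x, y, z]) for x in grid]
        for a, b in zip(line, line[1:]):
            if phi(a) * phi(b) < 0:
                points.append(bisect_on_segment(phi, a, b))
print(len(points), "surface points regenerated")
```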
Abstract:
We report a statistical analysis of Doppler broadening coincidence data of electron-positron annihilation radiation in silicon using a (22)Na source. The Doppler broadening coincidence spectrum was fit using a model function that included positron annihilation at rest with 1s, 2s, 2p, and valence band electrons. In-flight positron annihilation was also fit. The response functions of the detectors accounted for backscattering, combinations of Compton effects, pileup, ballistic deficit, and pulse-shaping problems. The procedure allows the quantitative determination of positron annihilation with core and valence electron intensities as well as their standard deviations directly from the experimental spectrum. The results obtained for the core and valence band electron annihilation intensities were 2.56(9)% and 97.44(9)%, respectively. These intensities are consistent with published experimental data treated by conventional analysis methods. This new procedure has the advantage of allowing one to distinguish additional effects from those associated with the detection system response function. (C) 2009 Elsevier B.V. All rights reserved.
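A toy version of the component-fitting idea: two Gaussian components plus a flat background stand in for the paper's full response-function model, and parameter standard deviations come from the fit covariance. All shapes and numbers are illustrative.

```python
# Fit a spectrum as a sum of two Gaussian components plus a flat background;
# a stand-in for the multi-component model fit described above.
import numpy as np
from scipy.optimize import curve_fit

def model(E, a1, s1, a2, s2, bg):
    """Two zero-centred Gaussians of different widths plus background."""
    g1 = a1 * np.exp(-0.5 * (E / s1) ** 2)      # narrow: valence-like component
    g2 = a2 * np.exp(-0.5 * (E / s2) ** 2)      # broad: core-like component
    return g1 + g2 + bg

E = np.linspace(-10, 10, 400)                   # energy offset axis (illustrative)
rng = np.random.default_rng(5)
truth = model(E, 100.0, 1.5, 4.0, 4.5, 0.5)
counts = rng.poisson(truth).astype(float)

popt, pcov = curve_fit(model, E, counts, p0=(80, 1, 2, 4, 1))
perr = np.sqrt(np.diag(pcov))                   # parameter standard deviations
a1, s1, a2, s2, _ = popt
# Relative intensities from component areas (area ~ amplitude * width).
core = a2 * s2 / (a1 * s1 + a2 * s2)
print(f"core fraction ~ {100 * core:.1f}% (parameter errors: {np.round(perr, 2)})")
```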
Abstract:
Mebendazole (MBZ) is a common benzimidazole anthelmintic that exists in three different polymorphic forms, A, B, and C. Polymorph C is the pharmaceutically preferred form due to its adequate aqueous solubility. No single-crystal structure determination depicting the nature of the crystal packing and the molecular conformation and geometry had previously been performed on this compound. The crystal structure of mebendazole form C is solved here for the first time. Mebendazole form C crystallizes in a triclinic centrosymmetric space group, and the molecule is practically planar, since the least-squares plane through the methyl benzimidazolylcarbamate fragment fits its constituent atoms closely. However, the benzoyl group is twisted by 31(1) degrees from the benzimidazole ring, and likewise the torsional angle between the benzene and carbonyl moieties is 27(1) degrees. These bends and other interesting intramolecular geometry features are interpreted as consequences of the intermolecular contacts occurring within the mebendazole C structure. Among these features, a decrease in conjugation through the imine nitrogen atom of the benzimidazole core and a further resonance path crossing the carbamate one are described. Finally, the X-ray powder diffractogram of a form-C-rich mebendazole mixture was overlaid on the pattern calculated from the mebendazole crystal structure. (C) 2008 Wiley-Liss, Inc. and the American Pharmacists Association. J Pharm Sci 98:2336-2344, 2009
Abstract:
The glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH) is an attractive target for the development of novel antitrypanosomatid agents. In the present work, comparative molecular field analysis (CoMFA) and comparative molecular similarity index analysis (CoMSIA) were conducted on a large series of selective inhibitors of trypanosomatid GAPDH. Four statistically significant models were obtained (r(2) > 0.90 and q(2) > 0.70), indicating their predictive ability for untested compounds. The models were then used to predict the potency of an external test set, and the predicted values were in good agreement with the experimental results. Molecular modeling studies provided further insight into the structural basis for selective inhibition of trypanosomatid GAPDH.
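CoMFA/CoMSIA models are typically partial least squares (PLS) regressions judged by the fitted r(2) and cross-validated q(2). A generic sketch of that evaluation, using synthetic descriptors and sklearn's PLSRegression rather than the actual field descriptors used here:

```python
# Generic PLS model with fitted r2 and leave-one-out cross-validated q2,
# the two statistics quoted above; descriptors here are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)
X = rng.normal(size=(40, 200))                  # field-like descriptor matrix
w = rng.normal(size=200) * (rng.random(200) < 0.05)
y = X @ w + 0.5 * rng.normal(size=40)           # synthetic activity response

pls = PLSRegression(n_components=5)
pls.fit(X, y)
r2 = r2_score(y, pls.predict(X))                # fitted r2
y_cv = cross_val_predict(PLSRegression(n_components=5), X, y, cv=LeaveOneOut())
q2 = r2_score(y, y_cv)                          # cross-validated q2
print(f"r2 = {r2:.2f}, q2 = {q2:.2f}")
```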
Abstract:
Motivated by a characterization of the complemented subspaces of Banach spaces X isomorphic to their squares X^2, we introduce the concept of P-complemented subspaces of Banach spaces. In this way, the well-known Pelczynski decomposition method can be seen as a Schroeder-Bernstein type theorem. We then give a complete description of the Schroeder-Bernstein type theorems for this new notion of complementability. By contrast, some very elementary questions on P-complementability are refinements of the Square-Cube Problem, closely connected with some Banach spaces introduced by W.T. Gowers and B. Maurey in 1997. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
A new method is presented for the spectrophotometric determination of total polyphenol content in wine. The procedure is a modified CUPRAC method based on the reduction of Cu(II) by polyphenols, in hydroethanolic medium (pH 7.0) in the presence of neocuproine (2,9-dimethyl-1,10-phenanthroline), yielding Cu(I) complexes with a maximum absorption peak at 450 nm. The absorbance values are linear (r = 0.998, n = 6) with tannic acid concentrations from 0.4 to 3.6 mu mol L(-1). The limit of detection obtained was 0.41 mu mol L(-1) and the relative standard deviation 1.2% (1 mu mol L(-1); n = 8). Recoveries between 80% and 110% (mean value of 95%) were obtained for total polyphenol determination in 14 commercial and 2 synthetic wine samples (with and without sulphite). The proposed procedure is about 1.5 times more sensitive than the official Folin-Ciocalteu method. The sensitivities of both methods were compared by the analytical responses of several polyphenols tested in each method. (C) 2010 Elsevier Ltd. All rights reserved.
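A generic sketch of the calibration arithmetic behind figures like these: a linear fit over the stated concentration range, a limit of detection from the common 3.3·sigma_blank/slope convention, and the relative standard deviation of replicates. The absorbance data and blank standard deviation below are illustrative numbers, not the paper's measurements.

```python
# Linear calibration, LOD and RSD arithmetic for a colorimetric assay;
# concentrations, absorbances and the blank SD are illustrative values.
import numpy as np
from scipy import stats

conc = np.array([0.4, 1.0, 1.6, 2.2, 2.9, 3.6])               # umol/L standards
absb = np.array([0.055, 0.131, 0.208, 0.287, 0.375, 0.462])   # A at 450 nm

fit = stats.linregress(conc, absb)
print(f"slope = {fit.slope:.4f} L/umol, r = {fit.rvalue:.4f}")

sd_blank = 0.0016                               # SD of blank absorbance (assumed)
lod = 3.3 * sd_blank / fit.slope                # common LOD convention
print(f"LOD ~ {lod:.2f} umol/L")

# Relative standard deviation of replicate measurements at one concentration.
replicates = np.array([0.130, 0.132, 0.129, 0.131, 0.133, 0.130, 0.131, 0.132])
print(f"RSD = {100 * replicates.std(ddof=1) / replicates.mean():.1f}%")
```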
Abstract:
This paper describes a chemotaxonomic analysis of a database of triterpenoid compounds from the Celastraceae family using principal component analysis (PCA). The numbers of occurrences of thirty types of triterpene skeleton in different tribes of the family were used as variables. The study shows that PCA applied to chemical data can contribute to an intrafamilial classification of Celastraceae, since some questionable taxon affinities were observed, and the chemotaxonomic inferences about genera are in agreement with the previously proposed phylogeny. The inclusion of Hippocrateaceae within Celastraceae is supported by the triterpene chemistry.
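A generic sketch of this kind of chemotaxonomic PCA, using a toy tribe-by-skeleton occurrence matrix; the counts and tribe labels are placeholders, not the paper's dataset.

```python
# PCA on a toy tribe-by-skeleton occurrence matrix, mirroring the analysis
# described above; counts and labels are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
tribes = [f"tribe_{i}" for i in range(8)]
counts = rng.poisson(3, size=(8, 30)).astype(float)   # 30 skeleton types

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(counts))
for name, (pc1, pc2) in zip(tribes, scores):
    print(f"{name}: PC1 = {pc1:+.2f}, PC2 = {pc2:+.2f}")
```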