12 results for linear production set
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
The objective of this work was to develop and validate linear regression models to estimate the production of dry matter by Tanzania grass (Megathyrsus maximus, cultivar Tanzania) as a function of agrometeorological variables. For this purpose, data on the growth of this forage grass from 2000 to 2005, under dry-field conditions in São Carlos, SP, Brazil, were correlated to the following climatic parameters: minimum and mean temperatures, degree-days, and potential and actual evapotranspiration. Simple linear regressions were performed between agrometeorological variables (independent) and the dry matter accumulation rate (dependent). The estimates were validated with independent data obtained in São Carlos and Piracicaba, SP, Brazil. The best statistical results in the development and validation of the models were obtained with the agrometeorological parameters that consider thermal and water availability effects together, such as actual evapotranspiration, accumulation of degree-days corrected by water availability, and the climatic growth index, based on average temperature, solar radiation, and water availability. These variables can be used in simulations and models to predict the production of Tanzania grass.
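The core statistical tool in the abstract above is ordinary least squares for a simple linear model. A minimal sketch of that fit is shown below; the degree-day and dry matter values are hypothetical illustrations, not the study's data.

```python
# Hedged sketch: ordinary least squares for a simple linear model relating an
# agrometeorological variable (e.g. accumulated degree-days) to the dry matter
# accumulation rate. The numbers below are hypothetical, not the paper's data.

def fit_simple_linear(x, y):
    """Return (intercept, slope) minimizing the sum of squared residuals."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical growth-period observations: degree-days vs. kg DM/ha per day.
degree_days = [120.0, 150.0, 180.0, 210.0, 240.0]
dm_rate     = [ 38.0,  49.0,  58.0,  71.0,  80.0]

a, b = fit_simple_linear(degree_days, dm_rate)
predicted = [a + b * x for x in degree_days]  # fitted accumulation rates
```

Validation with independent data, as described in the abstract, would compare `predicted` values against observations from a site not used in the fit.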
Abstract:
In this paper we obtain asymptotic expansions, up to order n^(-1/2) and under a sequence of Pitman alternatives, for the nonnull distribution functions of the likelihood ratio, Wald, score, and gradient test statistics in the class of symmetric linear regression models. This is a wide class of models which encompasses the t model and several other symmetric distributions with longer-than-normal tails. The asymptotic distributions of all four statistics are obtained for testing a subset of regression parameters. Furthermore, in order to compare the finite-sample performance of these tests in this class of models, Monte Carlo simulations are presented. An empirical application to a real data set is considered for illustrative purposes. (C) 2011 Elsevier B.V. All rights reserved.
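A Monte Carlo study of finite-sample test performance, as described above, typically estimates the empirical rejection rate under the null. The sketch below does this for the likelihood ratio test in the simplest normal location model (not the paper's symmetric regression models); it is an illustration of the simulation idea only.

```python
# Hedged sketch: Monte Carlo estimate of the finite-sample size of the
# likelihood ratio (LR) test for H0: mu = 0 in a normal model with unknown
# variance. Under H0 the LR statistic is asymptotically chi-square with 1 df.
import math
import random

def lr_statistic(y):
    """LR statistic n * log(RSS0 / RSS1) for H0: mu = 0."""
    n = len(y)
    ybar = sum(y) / n
    rss0 = sum(yi ** 2 for yi in y)            # restricted fit (mu = 0)
    rss1 = sum((yi - ybar) ** 2 for yi in y)   # unrestricted fit
    return n * math.log(rss0 / rss1)

random.seed(42)
n, reps, crit = 50, 2000, 3.841  # 3.841 = chi-square(1) 5% critical value
rejections = sum(lr_statistic([random.gauss(0.0, 1.0) for _ in range(n)]) > crit
                 for _ in range(reps))
empirical_size = rejections / reps  # should be close to the nominal 0.05
```

The paper's simulations do the analogous computation for the LR, Wald, score, and gradient statistics under symmetric error distributions with heavier tails.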
Abstract:
Estimates of evapotranspiration on a local scale are important information for agricultural and hydrological practices. However, equations that estimate potential evapotranspiration from temperature data alone, although simple to use, are usually less trustworthy than the Food and Agriculture Organization (FAO) Penman-Monteith standard method. The present work describes two correction procedures for temperature-based potential evapotranspiration estimates, making the results more reliable. Initially, the standard FAO Penman-Monteith method was evaluated with a complete climatological data set for the period between 2002 and 2006. Then, temperature-based estimates by the Camargo and Jensen-Haise methods were adjusted by error autocorrelation evaluated over biweekly and monthly periods. In a second adjustment, simple linear regression was applied. The adjusted equations were validated with climatic data available for the year 2001. Both proposed methodologies showed good agreement with the standard method, indicating that they can be used for local potential evapotranspiration estimates.
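The second correction procedure mentioned above (simple linear regression against the standard method) can be sketched as follows; the evapotranspiration values are hypothetical, not the study's São Carlos data.

```python
# Hedged sketch: calibrate a temperature-based potential evapotranspiration
# (ETo) estimate against the FAO Penman-Monteith reference via simple linear
# regression. Values below are illustrative placeholders.

def fit_adjustment(et_temp, et_pm):
    """Regress reference ETo (Penman-Monteith) on the temperature-based ETo."""
    n = len(et_temp)
    mx = sum(et_temp) / n
    my = sum(et_pm) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(et_temp, et_pm))
    sxx = sum((x - mx) ** 2 for x in et_temp)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical biweekly ETo totals (mm): temperature method vs. FAO-PM.
et_temp = [40.0, 55.0, 62.0, 70.0, 85.0]
et_pm   = [36.0, 48.0, 55.0, 61.0, 74.0]

a, b = fit_adjustment(et_temp, et_pm)
adjusted = [a + b * x for x in et_temp]  # corrected temperature-based values
```

By construction of least squares, the corrected estimates are unbiased on the calibration set (their residuals against the reference sum to zero); validation then uses an independent year, as in the study.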
Abstract:
Background: Heavy-flavor production in p + p collisions is a good test of perturbative-quantum-chromodynamics (pQCD) calculations. Modification of heavy-flavor production in heavy-ion collisions relative to binary-collision scaling from p + p results, quantified with the nuclear-modification factor (R_AA), provides information on both cold- and hot-nuclear-matter effects. Midrapidity heavy-flavor R_AA measurements at the Relativistic Heavy Ion Collider have challenged parton-energy-loss models and resulted in upper limits on the viscosity-to-entropy ratio that are near the quantum lower bound. Such measurements have not been made in the forward-rapidity region. Purpose: Determine transverse-momentum (p_T) spectra and the corresponding R_AA for muons from heavy-flavor meson decay in p + p and Cu + Cu collisions at √s_NN = 200 GeV and y = 1.65. Method: Results are obtained using the semileptonic decay of heavy-flavor mesons into negative muons. The PHENIX muon-arm spectrometers measure the p_T spectra of inclusive muon candidates. Backgrounds, primarily due to light hadrons, are determined with a Monte Carlo calculation using a set of input hadron distributions tuned to match measured hadron distributions in the same detector, and are statistically subtracted. Results: The charm-production cross section in p + p collisions at √s = 200 GeV, integrated over p_T and in the rapidity range 1.4 < y < 1.9, is found to be dσ_cc̄/dy = 0.139 ± 0.029 (stat) +0.051/−0.058 (syst) mb. This result is consistent with a perturbative fixed-order-plus-next-to-leading-log calculation within scale uncertainties and is also consistent with expectations based on the corresponding midrapidity charm-production cross section measured by PHENIX. The R_AA for heavy-flavor muons in Cu + Cu collisions is measured in three centrality bins for 1 < p_T < 4 GeV/c. Suppression relative to binary-collision scaling (R_AA < 1) increases with centrality.
Conclusions: Within experimental and theoretical uncertainties, the measured charm yield in p + p collisions is consistent with state-of-the-art pQCD calculations. Suppression in central Cu + Cu collisions suggests the presence of significant cold-nuclear-matter effects and final-state energy loss.
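As a reminder (this is the standard definition, not a result of the paper), the nuclear-modification factor used throughout this abstract compares the heavy-ion yield to the binary-collision-scaled p + p yield:

```latex
R_{AA}(p_T) \;=\; \frac{dN_{AA}/dp_T}{\langle N_{\mathrm{coll}} \rangle \, dN_{pp}/dp_T}
```

Here ⟨N_coll⟩ is the average number of binary nucleon-nucleon collisions in the centrality class; R_AA = 1 corresponds to binary-collision scaling, and R_AA < 1 signals suppression.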
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that a minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q.
Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce that). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. It concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
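The norm relationship described above can be illustrated on a toy graph: minimizing the ℓ1 cut energy with weights w^q is the same problem as minimizing the ℓq energy with weights w, and for large q its minimizer already coincides with the ℓ∞ (max-edge) minimizer. The brute-force sketch below uses a tiny hand-made graph, not the GC_max/GC_sum implementations from the paper.

```python
# Hedged toy illustration: compare the l1 min-cut, the l1 min-cut with weights
# raised to a power q, and the l-infinity (max-edge) min-cut on a 3-node graph.
from itertools import combinations

# Undirected weighted edges; "s"/"t" play the roles of object/background seeds.
edges = [("s", "a", 7.0), ("a", "t", 4.0), ("a", "t", 4.0)]
inner = ["a"]  # nodes free to fall on either side of the cut

def cuts():
    """Yield every s-side node set S with s in S and t not in S."""
    for r in range(len(inner) + 1):
        for sub in combinations(inner, r):
            yield {"s", *sub}

def cut_edges(S):
    """Weights of edges crossing the cut (S, complement of S)."""
    return [w for u, v, w in edges if (u in S) != (v in S)]

def argmin_l1(q):
    """Best cut for the l1 energy with weights raised to the power q."""
    return min(cuts(), key=lambda S: sum(w ** q for w in cut_edges(S)))

best_l1  = argmin_l1(1)   # classic min-cut: cuts the single weight-7 edge
best_lq  = argmin_l1(8)   # large q: prefers cutting the two weight-4 edges
best_max = min(cuts(), key=lambda S: max(cut_edges(S)))  # l-infinity energy
```

Here the ℓ1 minimizer cuts the single heavy edge (total 7 < 4 + 4), while both the large-q and the ℓ∞ minimizers cut the two light edges, whose maximum weight 4 is smaller than 7, matching the convergence statement in the abstract.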
Abstract:
We present a stochastic approach to nonequilibrium thermodynamics based on the expression of the entropy production rate advanced by Schnakenberg for systems described by a master equation. From the microscopic Schnakenberg expression we get the macroscopic bilinear form for the entropy production rate in terms of fluxes and forces. This is performed by placing the system in contact with two reservoirs with distinct sets of thermodynamic fields and by assuming an appropriate form for the transition rate. The approach is applied to an interacting lattice gas model in contact with two heat and particle reservoirs. On a square lattice, a continuous symmetry breaking phase transition takes place such that at the nonequilibrium ordered phase a heat flow sets in even when the temperatures of the reservoirs are the same. The entropy production rate is found to have a singularity at the critical point of the linear-logarithm type.
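For reference, the Schnakenberg expression invoked above has the following standard form for a system with master equation dynamics (transition rates W_ij from state j to state i and probabilities P_i); this is the textbook statement, not a new result of the paper:

```latex
\frac{dP_i}{dt} \;=\; \sum_j \bigl( W_{ij} P_j - W_{ji} P_i \bigr),
\qquad
\Pi \;=\; \frac{1}{2} \sum_{i,j} \bigl( W_{ij} P_j - W_{ji} P_i \bigr)
          \ln \frac{W_{ij} P_j}{W_{ji} P_i} \;\ge\; 0 .
```

Each term in Π pairs a net probability current with its conjugate log-ratio "force", which is what allows the macroscopic bilinear flux-force form described in the abstract to be extracted.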
Abstract:
We introduce a new Integer Linear Programming (ILP) approach for solving Integer Programming (IP) problems with bilinear objectives and linear constraints. The approach relies on a series of ILP approximations of the bilinear IP. We compare this approach with standard linearization techniques on random instances and on a set of real-world product bundling problems. (C) 2011 Elsevier B.V. All rights reserved.
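The "standard linearization techniques" used as the comparison baseline typically replace each bilinear product of binary variables x·y by a new variable z with linear constraints z ≤ x, z ≤ y, z ≥ x + y − 1, z ≥ 0. The brute-force sketch below verifies that these constraints force z = x·y on binary points; it is a generic illustration, not the paper's method.

```python
# Hedged sketch: the standard linearization of a binary bilinear product.
# For x, y in {0, 1}, the constraints below admit exactly one integer z,
# and that z equals x * y.

def feasible_z(x, y):
    """All integer z in {0, 1} satisfying the linearization constraints."""
    return [z for z in (0, 1)
            if z <= x and z <= y and z >= x + y - 1 and z >= 0]

for x in (0, 1):
    for y in (0, 1):
        zs = feasible_z(x, y)
        assert zs == [x * y]  # unique feasible z equals the bilinear product
```

Applying this substitution to every product term turns the bilinear objective into a linear one, at the cost of extra variables and constraints, which is the trade-off the paper's ILP-approximation approach is compared against.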
Abstract:
The ALICE Collaboration reports the measurement of the relative J/ψ yield as a function of the charged-particle pseudorapidity density dN_ch/dη in pp collisions at √s = 7 TeV at the LHC. J/ψ particles are detected for p_T > 0, in the rapidity interval |y| < 0.9 via decay into e⁺e⁻, and in the interval 2.5 < y < 4.0 via decay into μ⁺μ⁻ pairs. An approximately linear increase of the J/ψ yields normalized to their event average, (dN_J/ψ/dy)/⟨dN_J/ψ/dy⟩, with (dN_ch/dη)/⟨dN_ch/dη⟩ is observed in both rapidity ranges, where dN_ch/dη is measured within |η| < 1 and p_T > 0. In the highest multiplicity interval, with ⟨dN_ch/dη⟩_bin = 24.1, corresponding to four times the minimum-bias multiplicity density, an enhancement relative to the minimum-bias J/ψ yield by a factor of about 5 at 2.5 < y < 4 (8 at |y| < 0.9) is observed. (C) 2012 CERN. Published by Elsevier B.V. All rights reserved.
Abstract:
The correlation between soil fertility and seed physiological potential is very important in the area of seed technology, but published results on this theme are contradictory. For this reason, this study aimed to evaluate the correlations between soil chemical properties and the physiological potential of soybean seeds. At georeferenced points, both soil and seeds were sampled for analysis of soil fertility and seed physiological potential. Data were assessed by the following analyses: descriptive statistics, Pearson's linear correlation, and geostatistics. The adjusted parameters of the semivariograms were used to produce maps of the spatial distribution of each variable. Organic matter content, Mn, and Cu showed significant effects on seed germination. Most variables studied presented moderate to high spatial dependence. Germination and accelerated aging of seeds, as well as P, Ca, Mg, Mn, Cu, and Zn, showed a better fit to the spherical semivariogram; organic matter, pH, and K had a better fit to the Gaussian model; and V% and Fe showed a better fit to the linear model. The values for the range of spatial dependence varied from 89.9 m for P to 651.4 m for Fe. These values should be considered when new samples are collected for assessing soil fertility in this production area.
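Two of the tools named in the abstract can be sketched compactly: Pearson's linear correlation coefficient and the spherical semivariogram model. The parameter values below are illustrative (only the 89.9 m range for P is taken from the abstract); they are not the study's fitted nugget and sill.

```python
# Hedged sketch: Pearson's linear correlation and the spherical
# semivariogram model used in geostatistical analysis.
import math

def pearson(x, y):
    """Pearson's linear correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def spherical(h, nugget, sill, rng):
    """Spherical semivariogram: rises from the nugget and levels off at
    nugget + sill once the lag distance h reaches the range rng."""
    if h >= rng:
        return nugget + sill
    r = h / rng
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

# Illustrative checks: the sill is reached exactly at the range (89.9 m is
# the range cited for P; nugget 0.1 and sill 1.0 are hypothetical).
gamma_at_range = spherical(89.9, 0.1, 1.0, 89.9)
r_perfect = pearson([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

Beyond the range, the semivariogram is constant, which is what "range of spatial dependence" means operationally: samples farther apart than the range are spatially uncorrelated for that variable.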
Abstract:
A systematic approach to modeling nonlinear systems using norm-bounded linear differential inclusions (NLDIs) is proposed in this paper. The resulting NLDI model is suitable for the application of linear control design techniques and, therefore, it is possible to fulfill certain specifications for the underlying nonlinear system, within an operating region of interest in the state space, using a linear controller designed for this NLDI model. Hence, a procedure to design a dynamic output-feedback controller for the NLDI model is also proposed in this paper. One of the main contributions of the proposed modeling and control approach is the use of the mean-value theorem to represent the nonlinear system by a linear parameter-varying model, which is then mapped into a polytopic linear differential inclusion (PLDI) within the region of interest. To avoid the combinatorial problem that is inherent to polytopic models for medium- and large-sized systems, the PLDI is transformed into an NLDI, and the whole process is carried out ensuring that all trajectories of the underlying nonlinear system are also trajectories of the resulting NLDI within the operating region of interest. Furthermore, it is also possible to choose a particular structure for the NLDI parameters to reduce the conservatism in the representation of the nonlinear system by the NLDI model, and this feature is another important contribution of this paper. Once the NLDI representation of the nonlinear system is obtained, the paper proposes the application of a linear control design method to this representation. The design is based on quadratic Lyapunov functions and formulated as a search problem over a set of bilinear matrix inequalities (BMIs), which is solved using a two-step separation procedure that maps the BMIs into a set of corresponding linear matrix inequalities (LMIs). Two numerical examples are given to demonstrate the effectiveness of the proposed approach.
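For orientation, one common statement of a norm-bounded LDI is the following (the paper's exact parameterization may differ; this is the textbook form, given here only to fix notation):

```latex
\dot{x} = A x + B u + B_p\, p, \qquad
q = C_q x + D_q u, \qquad
p = \Delta(t)\, q, \quad \|\Delta(t)\| \le 1 .
```

The uncertainty block Δ(t) absorbs the nonlinearity: any trajectory of the original system inside the operating region is reproduced by some admissible Δ(t), so a linear controller certified for every such Δ is certified for the nonlinear system there.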
Abstract:
The performance, carcass traits, and litter humidity of broilers fed increasing levels of glycerine derived from biodiesel production were evaluated. In this experiment, 1,575 broilers were distributed, according to a completely randomized experimental design, into five treatments with seven replicates of 45 birds each. Treatments consisted of a control diet and four diets containing 2.5, 5.0, 7.5, or 10% glycerine. The experimental diets contained equal nutritional levels and were based on corn, soybean meal, and soybean oil. The glycerine included in the diets contained 83.4% glycerol, 1.18% sodium, and 208 ppm methanol, with a calculated energy value of 3,422 kcal AMEn/kg. Performance parameters (weight gain, feed intake, feed conversion ratio, live weight, and livability) were monitored when broilers were 7, 21, and 42 days of age. On day 43, litter humidity was determined in each pen, and 14 birds per treatment were sacrificed for the evaluation of carcass traits. During the period from 1 to 7 days, there was a positive linear effect of the treatments on weight gain, feed intake, and live weight. Livability decreased linearly during the period from 1 to 21 days. During the entire experimental period, no significant effects were observed on performance parameters or carcass traits, but there was a linear increase in litter humidity. Therefore, the inclusion of up to 5% glycerine in the diet did not affect broiler performance during the total rearing period.
Abstract:
An important feature of computer systems developed for the agricultural sector is the ability to handle the heterogeneity of data generated in different processes. Most problems related to this heterogeneity arise from the lack of a standard across the different computing solutions proposed. An efficient solution is to create a single standard for data exchange. The study of the actual process involved in cotton production was based on research developed by the Brazilian Agricultural Research Corporation (EMBRAPA) that reports all phases, as a result of the compilation of several theoretical and practical studies related to the cotton crop. The proposition of a standard starts with the identification of the most important classes of data involved in the process and includes an ontology, that is, the systematization of concepts related to the production of cotton fiber, resulting in a set of classes, relations, functions, and instances. The results are used as a reference for the development of computational tools, transforming implicit knowledge into applications that support the knowledge described. This research is based on data from the Midwest of Brazil. The cotton process was chosen as a case study because Brazil is one of the major players in this market and several improvements are required for system integration in this segment.