847 results for Linear fitting
Abstract:
Background: Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate the use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as a model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results: Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters.
Using Akaike’s information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
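As a rough sketch of the kind of simulated design described above (sire effects on both the trait mean and the residual variance, i.e. micro-environmental sensitivity), the data-generating step might look as follows. This is not the authors' ASReml code, and the variance values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sires, n_off = 100, 100        # the design size recommended in the abstract
var_sire_mean = 0.25             # hypothetical sire variance for the mean
var_sire_logvar = 0.05           # hypothetical sire variance for log residual variance

# Independent sire effects on the mean and on the log residual variance;
# a genetic correlation could be added via a bivariate normal draw.
u_mean = rng.normal(0.0, np.sqrt(var_sire_mean), n_sires)
u_logv = rng.normal(0.0, np.sqrt(var_sire_logvar), n_sires)

sire = np.repeat(np.arange(n_sires), n_off)
resid_sd = np.exp(0.5 * (np.log(1.0) + u_logv[sire]))  # base residual variance 1.0
y = u_mean[sire] + rng.normal(0.0, resid_sd)

print(y.shape)  # one record per offspring: (10000,)
```

Fitting such a model (the DHGLM step) is what ASReml's double hierarchical generalized linear model does; the sketch only shows why within-sire variances differ between sires when genetic variance for micro-environmental sensitivity exists.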
Abstract:
We present a new version of the hglm package for fitting hierarchical generalized linear models (HGLM) with spatially correlated random effects. A CAR family for conditional autoregressive random effects was implemented. Eigen decomposition of the matrix describing the spatial structure (e.g. the neighborhood matrix) was used to transform the CAR random effects into an independent, but heteroscedastic, Gaussian random effect. A linear predictor is fitted for the random effect variance to estimate the parameters in the CAR model. This gives a computationally efficient algorithm for moderately sized problems (e.g. n < 5000).
Abstract:
We present a new version (> 2.0) of the hglm package for fitting hierarchical generalized linear models (HGLMs) with spatially correlated random effects. CAR() and SAR() families for conditional and simultaneous autoregressive random effects were implemented. Eigen decomposition of the matrix describing the spatial structure (e.g., the neighborhood matrix) was used to transform the CAR/SAR random effects into an independent, but heteroscedastic, Gaussian random effect. A linear predictor is fitted for the random effect variance to estimate the parameters in the CAR and SAR models. This gives a computationally efficient algorithm for moderately sized problems.
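The eigen-decomposition trick described above can be sketched numerically; the neighborhood matrix, ρ and τ below are illustrative values, not defaults taken from the package:

```python
import numpy as np

# Hypothetical neighborhood matrix W for 5 regions on a line (illustrative).
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0

rho, tau = 0.4, 2.0  # CAR dependence and scale parameters (illustrative)

# Eigen decomposition of the symmetric neighborhood matrix: W = V diag(lam) V'.
lam, V = np.linalg.eigh(W)

# For u ~ N(0, tau * inv(I - rho*W)), the transformed effect v = V' u has
# independent components with heteroscedastic variances tau / (1 - rho*lam_i).
var_v = tau / (1.0 - rho * lam)

# Verify against the covariance of the original CAR effect.
cov_u = tau * np.linalg.inv(np.eye(5) - rho * W)
cov_v = V.T @ cov_u @ V
assert np.allclose(np.diag(cov_v), var_v)
assert np.allclose(cov_v - np.diag(var_v), 0.0, atol=1e-9)
```

Because the transformed effects are independent, their variances can be modelled with an ordinary linear predictor, which is what makes the algorithm efficient.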
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and Bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The last model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. © 2013 American Dairy Science Association.
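For illustration, the Legendre covariates used in such random regression models are built from days in milk standardized to [-1, 1]; the values below are hypothetical, not taken from the study:

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical test days, standardized to [-1, 1], the Legendre domain.
dim = np.array([5, 50, 150, 305])
x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0

# Covariate matrix for a third-order (cubic) fit: columns are P0(x)..P3(x),
# as used for the additive genetic curve in the model described above.
Phi = np.column_stack([legendre.legval(x, np.eye(4)[k]) for k in range(4)])
print(Phi.shape)  # (4, 4): one row per test day, one column per polynomial
```

Each animal's genetic lactation curve is then a linear combination of these columns with animal-specific random regression coefficients.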
Abstract:
Within the nutritional context, microminerals are often supplemented in poultry feed in quantities exceeding requirements, in an attempt to ensure adequate animal performance. Dose-response experiments are very common in the determination of optimal nutrient levels in feed and usually rely on regression models for this purpose. Standard regression analysis, however, generally imposes a priori a functional form on the relationship between the response variable and the doses. Isotonic regression is a least-squares estimation method that generates estimates which preserve the ordering of the data. In the theory of isotonic regression this ordering information is essential, and it is expected to increase fitting efficiency. The objective of this work was to use isotonic regression methodology as an alternative way of analyzing data on Zn deposition in the tibia of male birds of the Hubbard line. We considered plateau-response models of quadratic polynomial and linear-exponential form. In addition to these models, we also proposed fitting a logarithmic model to the data, and the efficiency of the methodology was evaluated by Monte Carlo simulation under different scenarios for the parametric values. Isotonization of the data yielded an improvement in all the fitting-quality measures evaluated. Among the models used, the logarithmic model produced parameter estimates most consistent with values reported in the literature.
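Isotonic regression itself is computed with the pool-adjacent-violators algorithm (PAVA); a minimal sketch, with hypothetical dose-group means standing in for the Zn-deposition data:

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: least-squares fit constrained to be non-decreasing."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, sizes = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); sizes.append(1)
        # Merge adjacent blocks while the ordering is violated.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            merged = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2] += wts[-1]; sizes[-2] += sizes[-1]
            vals[-2] = merged
            vals.pop(); wts.pop(); sizes.pop()
    return np.repeat(vals, sizes)

# Hypothetical mean responses by increasing dose, with one ordering violation.
y = [20.0, 35.0, 33.0, 50.0]
print(pava(y))  # [20. 34. 34. 50.]
```

The violating pair (35, 33) is pooled to its weighted mean 34, which is exactly the "isotonization" step applied to the data before model fitting.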
Abstract:
We investigated and compared the linear and nonlinear optical properties of thin films and planar waveguides made of several conjugated polymers (MEH-PPV and P3AT) and of polymers with π-electron systems in the side chain (PVK and PS). PVK and PS have relatively small values of the nonlinear refractive index n2 at 532 nm, namely (1.2 ± 0.5)×10⁻¹⁴ cm²/W and (2.6 ± 0.5)×10⁻¹⁴ cm²/W.
We investigated the linear optical constants of several P3ATs, in particular the influence of regioregularity and of the chain length of the alkyl substituents. We identified the polymer best suited for waveguide applications, named P3BT-ra. We investigated the linear optical properties of thin P3BT-ra films prepared by spin coating from various solvents with different boiling points. We found that P3BT-ra films cast from toluene solutions are the waveguides best suited for intensity-dependent prism-coupling experiments, because they show low waveguide attenuation losses at λ = 1064 nm.
We measured the dispersion of the waveguide attenuation loss αgw, of the nonlinear refractive index n2 and of the nonlinear absorption coefficient β2 of P3BT-ra waveguides in the range of 700 - 1500 nm. We observed large values of the nonlinear refractive index of up to 1.5×10⁻¹³ cm²/W at 1150 nm. We found that the figures of merit for all-optical switching are fulfilled in the wavelength range 1050 - 1200 nm, which corresponds to the low-energy tail of the two-photon absorption.
The figures of merit of P3BT-ra are among the best values reported so far for conjugated polymers. We found that P3BT-ra is a promising candidate for integrated-optical switches, because it shows a good combination of a large third-order nonlinearity, low waveguide attenuation losses and sufficient photostability.
We compared the measured dispersion of αgw, n2 and β2 with theory. By curve fitting of the dispersion of αgw we found that Rayleigh scattering is the dominant loss mechanism in MEH-PPV and P3BT-ra waveguides. A quantum-mechanical approach was used to calculate the third-order nonlinear susceptibility χ(3) in order to simulate the measured spectra of n2 and β2 of P3BT-ra and MEH-PPV. This can explain that saturable absorption and two-photon absorption are the main effects causing the dispersion of n2 and β2.
Abstract:
Marginal generalized linear models can be used for clustered and longitudinal data by fitting a model as if the data were independent and using an empirical estimator of parameter standard errors. We extend this approach to data where the number of observations correlated with a given one grows with sample size and show that parameter estimates are consistent and asymptotically Normal with a slower convergence rate than for independent data, and that an information sandwich variance estimator is consistent. We present the two problems that motivated this work: the modelling of patterns of HIV genetic variation and the behavior of clustered-data estimators when clusters are large.
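The fit-as-if-independent strategy with an information sandwich (cluster-robust) variance estimator can be sketched for ordinary least squares; the simulated data and parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical clustered data: a shared cluster-level noise term induces
# within-cluster correlation that the naive OLS variance ignores.
n_clusters, m = 50, 8
cluster = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)
y = 1.0 + 0.5 * x + rng.normal(size=n_clusters)[cluster] + rng.normal(size=n_clusters * m)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # fitted as if observations were independent
e = y - X @ beta

# Sandwich covariance: bread = (X'X)^-1, meat = sum over clusters of score outer products.
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for g in range(n_clusters):
    Xg, eg = X[cluster == g], e[cluster == g]
    sg = Xg.T @ eg
    meat += np.outer(sg, sg)
cov_sandwich = bread @ meat @ bread
print(np.sqrt(np.diag(cov_sandwich)))  # cluster-robust standard errors
```

The point estimates come from the independence working model; only the standard errors are corrected, which is the approach the abstract extends to growing clusters.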
Abstract:
In the simultaneous estimation of a large number of related quantities, multilevel models provide a formal mechanism for efficiently making use of the ensemble of information for deriving individual estimates. In this article we investigate the ability of the likelihood to identify the relationship between signal and noise in multilevel linear mixed models. Specifically, we consider the ability of the likelihood to diagnose conjugacy or independence between the signals and noises. Our work was motivated by the analysis of data from high-throughput experiments in genomics. The proposed model leads to a more flexible family. However, we further demonstrate that adequately capitalizing on the benefits of a well-fitting, fully specified likelihood in terms of gene ranking is difficult.
Abstract:
In nature, several types of landforms have simple shapes: as they evolve they tend to take on an ideal, simple geometric form such as a cone, an ellipsoid or a paraboloid. Volcanic landforms are possibly the best examples of this 'ideal' geometry, since they develop as regular surface features due to the point-like (circular) or fissure-like (linear) manifestation of volcanic activity. In this paper, we present a geomorphometric method of fitting the 'ideal' surface onto the real surface of regular-shaped volcanoes through a number of case studies (Mt. Mayon, Mt. Somma, Mt. Semeru, and Mt. Cameroon). Volcanoes with circular, as well as elliptical, symmetry are addressed. For the best surface fit, we use the minimization library MINUIT, which is made freely available by CERN (European Organization for Nuclear Research). This library enables us to handle all the available surface data (every point of the digital elevation model) in a one-step, half-automated way regardless of the size of the dataset, and to consider simultaneously all the relevant parameters of the selected problem, such as the position of the center of the edifice, apex height, and cone slope, thanks to the high performance of the adopted procedure. Fitting the geometric surface, along with calculating the related error, demonstrates the twofold advantage of the method. Firstly, we can determine quantitatively to what extent a given volcanic landform is regular, i.e. how much it follows an expected regular shape. Deviations from the ideal shape due to degradation (e.g. sector collapse and normal erosion) can be used in erosion rate calculations. Secondly, if we have a degraded volcanic landform, whose geometry is not clear, this method of surface fitting reconstructs the original shape with the maximum precision. Obviously, in addition to volcanic landforms, this method is also capable of constraining the shapes of other regular surface features such as aeolian, glacial or periglacial landforms.
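The idea of fitting an 'ideal' cone by least squares can be sketched as follows. This toy version keeps the cone center fixed, which makes the problem linear; the full MINUIT-based procedure also optimizes the center coordinates and handles elliptical symmetry:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 'volcano': an ideal cone with apex height 2500 m and slope 0.3,
# centered at (0, 0), sampled at random DEM points with noise (illustrative).
x = rng.uniform(-5000, 5000, 2000)
y = rng.uniform(-5000, 5000, 2000)
r = np.hypot(x, y)                       # radial distance from the known center
z = 2500.0 - 0.3 * r + rng.normal(0.0, 20.0, x.size)

# With the center fixed, z = h - s*r is linear in (h, s), so ordinary
# least squares recovers apex height h and cone slope s directly.
A = np.column_stack([np.ones_like(r), -r])
(h, s), *_ = np.linalg.lstsq(A, z, rcond=None)
print(f"apex height {h:.0f} m, slope {s:.3f}")  # close to the true 2500 m and 0.3
```

The residuals of such a fit quantify how closely the real edifice follows the ideal shape, which is the regularity measure the paper uses.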
Abstract:
Vector reconstruction of objects from an unstructured point cloud obtained with a LiDAR-based system (light detection and ranging) is one of the most promising methods to build three-dimensional models of orchards. The cylinder-fitting method for woody structure reconstruction of leafless trees from point clouds obtained with a mobile terrestrial laser scanner (MTLS) has been analysed. The advantage of this method is that it performs reconstruction in a single step. The most time-consuming part of the algorithm is generation of the cylinder direction, which must be recalculated at the inclusion of each point in the cylinder. The tree skeleton is obtained at the same time as the cluster of cylinders is formed. The method does not guarantee a unique convergence and the reconstruction parameter values must be carefully chosen. A balanced processing of clusters has also been defined, which has proven to be very efficient in terms of processing time by following the hierarchy of branches, predecessors and successors. The algorithm was applied to simulated MTLS data of virtual orchard models and to MTLS data of real orchards. The constraints applied in the method have been reviewed to ensure better convergence and simpler use of parameters. The results obtained show a correct reconstruction of the woody structure of the trees, and the algorithm runs in linear-logarithmic time.
Abstract:
Numerical modelling methodologies are important because of their application to engineering and scientific problems, since there are processes for which analytical mathematical expressions cannot be obtained. When the only available information is a set of experimental values for the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology based on the Galerkin formulation of the finite element method to obtain representations of relationships, defined a priori, between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space and has derivatives defined in a general sense in this space. This approach leads to a linear system with a structure that allows a fast solver algorithm. The algorithm can be used in a variety of fields, being a multidisciplinary tool. The validity of the methodology is studied through two real applications: a problem in hydrodynamics and an engineering problem related to fluids, heat and transport in an energy generation plant. The predictive capacity of the methodology is also tested using a cross-validation method.
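A one-dimensional sketch of the idea: fit a piecewise-linear (hat-function) approximation to scattered data by least squares. The data and mesh are illustrative; the resulting normal equations are banded (tridiagonal in 1-D), which is the kind of structure that permits a fast solver:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scattered measurements of an unknown response y(x).
x = rng.uniform(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)

# Piecewise-linear 'hat' basis on a uniform mesh: the least-squares fit of
# the nodal values is a finite-element-style approximation of y(x).
nodes = np.linspace(0.0, 1.0, 11)
h = nodes[1] - nodes[0]
B = np.clip(1.0 - np.abs(x[:, None] - nodes[None, :]) / h, 0.0, None)

coef, *_ = np.linalg.lstsq(B, y, rcond=None)  # nodal values of the fitted surface
fit = B @ coef
print(f"residual mean square: {np.mean((y - fit) ** 2):.4f}")
```

In d dimensions the same construction applies with multivariate element shape functions, and the hyper-surface is evaluated anywhere by interpolating the fitted nodal values.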
Abstract:
We explore both the rheology and complex flow behavior of monodisperse polymer melts. Adequate quantities of monodisperse polymer were synthesized in order that both the materials rheology and microprocessing behavior could be established. In parallel, we employ a molecular theory for the polymer rheology that is suitable for comparison with experimental rheometric data and numerical simulation for microprocessing flows. The model is capable of matching both shear and extensional data with minimal parameter fitting. Experimental data for the processing behavior of monodisperse polymers are presented for the first time as flow birefringence and pressure difference data obtained using a Multipass Rheometer with an 11:1 constriction entry and exit flow. Matching of experimental processing data was obtained using the constitutive equation with the Lagrangian numerical solver, FLOWSOLVE. The results show the direct coupling between molecular constitutive response and macroscopic processing behavior, and differentiate flow effects that arise separately from orientation and stretch. (c) 2005 The Society of Rheology.
Abstract:
We studied the visual mechanisms that encode edge blur in images. Our previous work suggested that the visual system spatially differentiates the luminance profile twice to create the 'signature' of the edge, and then evaluates the spatial scale of this signature profile by applying Gaussian derivative templates of different sizes. The scale of the best-fitting template indicates the blur of the edge. In blur-matching experiments, a staircase procedure was used to adjust the blur of a comparison edge (40% contrast, 0.3 s duration) until it appeared to match the blur of test edges at different contrasts (5% - 40%) and blurs (6 - 32 min of arc). Results showed that lower-contrast edges looked progressively sharper. We also added a linear luminance gradient to blurred test edges. When the added gradient was of opposite polarity to the edge gradient, it made the edge look progressively sharper. Both effects can be explained quantitatively by the action of a half-wave rectifying nonlinearity that sits between the first and second (linear) differentiating stages. This rectifier was introduced to account for a range of other effects on perceived blur (Barbieri-Hesse and Georgeson, 2002 Perception 31 Supplement, 54), but it readily predicts the influence of the negative ramp. The effect of contrast arises because the rectifier has a threshold: it not only suppresses negative values but also small positive values. At low contrasts, more of the gradient profile falls below threshold and its effective spatial scale shrinks in size, leading to perceived sharpening.
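The contrast effect attributed to the thresholded rectifier can be illustrated numerically. The profile, threshold value and scale measure below are assumptions for illustration, not the authors' exact model:

```python
import numpy as np

x = np.linspace(-60, 60, 2001)   # position, in min of arc
sigma = 16.0                     # blur of a Gaussian-integral test edge

def effective_scale(contrast, threshold=0.02):
    # First derivative of a Gaussian-blurred edge is a Gaussian of scale sigma.
    grad = contrast * np.exp(-x**2 / (2 * sigma**2))
    # Half-wave rectifier with a threshold (acting between the two
    # differentiating stages): suppress negative AND small positive values.
    rect = np.where(grad > threshold, grad - threshold, 0.0)
    # Spatial scale of the surviving profile (std of position under rect).
    w = rect / rect.sum()
    return np.sqrt(np.sum(w * x**2))

hi, lo = effective_scale(0.40), effective_scale(0.05)
print(hi, lo)  # the low-contrast profile has the smaller effective scale
```

At 5% contrast a larger fraction of the gradient falls below the fixed threshold, so the rectified profile is narrower, matching the reported perceived sharpening of low-contrast edges.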
Abstract:
Fitting a linear regression to data provides much more information about the relationship between two variables than a simple correlation test. A goodness-of-fit test of the line should always be carried out: 'r squared' estimates the strength of the relationship between Y and X, ANOVA tests whether a statistically significant line is present, and the 't' test assesses whether the slope of the line is significantly different from zero. In addition, it is important to check whether the data fit the assumptions for regression analysis and, if not, whether a transformation of the Y and/or X variables is necessary.
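A worked example of the quantities mentioned (r squared, the ANOVA F statistic for the regression, and the t statistic for the slope), computed on hypothetical data:

```python
import numpy as np

# Hypothetical (X, Y) data for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

n = x.size
slope, intercept = np.polyfit(x, y, 1)
yhat = intercept + slope * x

ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares
ss_res = np.sum((y - yhat) ** 2)       # residual sum of squares
r2 = 1.0 - ss_res / ss_tot

# ANOVA F for the regression; with one predictor, F equals t^2 for the slope.
f_stat = (ss_tot - ss_res) / (ss_res / (n - 2))
se_slope = np.sqrt(ss_res / (n - 2) / np.sum((x - x.mean()) ** 2))
t_stat = slope / se_slope

print(f"r2 = {r2:.4f}, F = {f_stat:.1f}, t = {t_stat:.2f}")
```

Comparing F and t against the F(1, n-2) and t(n-2) distributions gives the significance tests the abstract describes; the residuals should still be inspected to check the regression assumptions.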