921 results for Generalised Linear Modeling
Abstract:
A single-generation dataset consisting of 1,730 records from a selection program for high growth rate in giant freshwater prawn (GFP, Macrobrachium rosenbergii) was used to derive prediction equations for meat weight and meat yield. Models were based on body traits [body weight, total length and abdominal width (AW)] and carcass measurements (tail weight and exoskeleton-off weight). Lengths and width were adjusted for the systematic effects of selection line, male morphotypes and female reproductive status, and for the covariables of age at slaughter within sex and body weight. Body and meat weights adjusted for the same effects (except body weight) were used to calculate meat yield (expressed as percentage of tail weight/body weight and exoskeleton-off weight/body weight). The edible meat weight and yield in this GFP population ranged from 12 to 15 g and 37 to 45%, respectively. The simple (Pearson) correlation coefficients between body traits (body weight, total length and AW) and meat weight were moderate to very high and positive (0.75–0.94), but the correlations between body traits and meat yield were negative (−0.47 to −0.74). There were strong positive linear relationships between body trait measurements and meat weight, whereas relationships of body traits with meat yield were moderate and negative. Stepwise multiple regression analysis showed that the best model to predict meat weight included all body traits, with a coefficient of determination (R²) of 0.99 and a correlation between observed and predicted meat weight of 0.99. The corresponding figures for meat yield were 0.91 and 0.95, respectively. Body weight or length was the best predictor of meat weight, explaining 91–94% of observed variance when fitted alone in the model. By contrast, abdominal width explained a lower proportion (69–82%) of total variance in the single-trait models.
It is concluded that in practical breeding programs, improvement of meat weight can easily be achieved through indirect selection on combinations of body traits. Improvement of meat yield, albeit more difficult, is also possible by genetic means, with 91% of the variation in the trait explained by the body and carcass traits examined in this study.
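As an illustration of the kind of multiple-regression prediction equation described above, the sketch below fits meat weight on three body traits by ordinary least squares and reports R². The data are synthetic stand-ins (the study's records are not reproduced here), so the coefficients and fit are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the body traits:
# body weight (g), total length (cm), abdominal width (cm).
n = 200
body_weight = rng.normal(40.0, 8.0, n)
total_length = 12.0 + 0.1 * body_weight + rng.normal(0.0, 0.3, n)
abdom_width = 2.0 + 0.02 * body_weight + rng.normal(0.0, 0.1, n)

# Assume meat weight is roughly linear in the traits plus noise.
meat_weight = (0.30 * body_weight + 0.2 * total_length
               + 1.0 * abdom_width + rng.normal(0.0, 0.5, n))

# Ordinary least squares with an intercept.
X = np.column_stack([np.ones(n), body_weight, total_length, abdom_width])
beta, *_ = np.linalg.lstsq(X, meat_weight, rcond=None)

pred = X @ beta
ss_res = np.sum((meat_weight - pred) ** 2)
ss_tot = np.sum((meat_weight - meat_weight.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.3f}")
```

A stepwise procedure like the paper's would additionally add or drop traits one at a time based on how much each improves the fit.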
Abstract:
The mesoscale simulation of a lamellar mesophase based on a free energy functional is examined with the objective of determining the relationship between the parameters in the model and molecular parameters. Attention is restricted to a symmetric lamellar phase with equal volumes of hydrophilic and hydrophobic components. Apart from the lamellar spacing, there are two parameters in the free energy functional. One of the parameters, r, determines the sharpness of the interface, and it is shown how this parameter can be obtained from the interface profile in a molecular simulation. The other parameter, A, provides an energy scale. Analytical expressions are derived relating the parameters r and A to the bending and compression moduli, and relating the permeation constant in the macroscopic equation to the Onsager coefficient in the concentration diffusion equation. The linear hydrodynamic response predicted by the theory is verified by carrying out a mesoscale simulation using the lattice-Boltzmann technique and confirming that the analytical predictions agree with the simulation results. A macroscale model based on the layer thickness field and the layer normal field is proposed, and the relationship between the parameters in the macroscale model and the parameters in the mesoscale free energy functional is obtained.
Abstract:
This paper presents a maximum likelihood method for estimating growth parameters for an aquatic species that incorporates growth covariates and takes into consideration multiple tag-recapture data. Individual variability in asymptotic length, age at tagging, and measurement error are also considered in the model structure. Using distribution theory, the log-likelihood function is derived under a generalised framework for the von Bertalanffy and Gompertz growth models. Due to the generality of the derivation, covariate effects can be included for both models, with seasonality and tagging effects investigated. Method robustness is established via comparison with the Fabens, improved Fabens, James, and non-linear mixed-effects growth models, with the maximum likelihood method performing best. The method is further illustrated with an application to blacklip abalone (Haliotis rubra), for which a strong growth-retarding tagging effect that persisted for several months was detected.
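The Fabens model used above as a comparison method fits the von Bertalanffy parameters directly to tag-recapture length increments via ΔL = (L∞ − L1)(1 − exp(−kΔt)). A minimal sketch on simulated data (the parameter values are invented for illustration, not taken from the abalone study):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Simulated tag-recapture data under assumed "true" parameters.
Linf_true, k_true = 140.0, 0.4
n = 300
L1 = rng.uniform(40.0, 120.0, n)   # length at tagging (mm)
dt = rng.uniform(0.2, 2.0, n)      # time at liberty (years)
dL = (Linf_true - L1) * (1.0 - np.exp(-k_true * dt)) \
    + rng.normal(0.0, 1.5, n)      # observed growth increment

def fabens(X, Linf, k):
    """Fabens form of the von Bertalanffy model for length increments."""
    L1, dt = X
    return (Linf - L1) * (1.0 - np.exp(-k * dt))

params, _ = curve_fit(fabens, (L1, dt), dL, p0=[120.0, 0.2])
Linf_hat, k_hat = params
print(f"Linf = {Linf_hat:.1f}, k = {k_hat:.3f}")
```

The paper's maximum likelihood method generalises this by adding covariates, individual variability in asymptotic length, and measurement error to the likelihood.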
Abstract:
A modeling paradigm is proposed for covariate, variance and working correlation structure selection for longitudinal data analysis. Appropriate selection of covariates is pertinent to correct variance modeling, and selecting the appropriate covariates and variance function is vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria share a common theoretical root based on approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC contains little information on correlation structures. The proposed model selection strategies are outlined, and a Monte Carlo assessment of their finite sample properties is reported. Two longitudinal studies are used for illustration.
Abstract:
The method of generalised estimating equations for regression modelling of clustered outcomes allows for specification of a working matrix that is intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation for the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis, namely (i) the discrepancy between the working correlation structure and the unobservable true correlation structure, (ii) the method by which the correlation parameters are estimated and (iii) the 'design', by which we refer to both the structures of the predictor matrices within clusters and the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that the choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
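For a Gaussian outcome with identity link, the generalised estimating equation with an exchangeable working correlation reduces to iterated generalised least squares, which makes the roles of the working structure and the moment estimator of the correlation parameter easy to see. A self-contained numpy sketch on simulated clustered data (cluster count, cluster size, and parameter values are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate 100 clusters of size 4 with an exchangeable
# (compound-symmetric) true correlation of 0.5.
n_clusters, m, rho_true = 100, 4, 0.5
beta_true = np.array([1.0, 2.0])
R_true = rho_true * np.ones((m, m)) + (1 - rho_true) * np.eye(m)
Lch = np.linalg.cholesky(R_true)

X_list, y_list = [], []
for _ in range(n_clusters):
    Xc = np.column_stack([np.ones(m), rng.normal(size=m)])
    y_list.append(Xc @ beta_true + Lch @ rng.normal(size=m))
    X_list.append(Xc)

# Alternate between a moment estimate of the exchangeable correlation
# parameter and a weighted (GLS) fit of the mean parameters.
beta = np.zeros(2)
for _ in range(20):
    resid = [y - X @ beta for X, y in zip(X_list, y_list)]
    sigma2 = np.mean([r @ r / m for r in resid])
    # Average off-diagonal product of within-cluster residuals.
    off = np.mean([(np.sum(r) ** 2 - r @ r) / (m * (m - 1)) for r in resid])
    rho = off / sigma2
    R = rho * np.ones((m, m)) + (1 - rho) * np.eye(m)
    Rinv = np.linalg.inv(R)
    A = sum(X.T @ Rinv @ X for X in X_list)
    b = sum(X.T @ Rinv @ y for X, y in zip(X_list, y_list))
    beta = np.linalg.solve(A, b)

print("beta:", beta, "rho:", rho)
```

Misspecifying the working structure (e.g. using independence here) would leave beta consistent but, as the abstract notes, can cost efficiency.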
Abstract:
This paper presents an approach, based on the Lean production philosophy, for rationalising the processes involved in the production of specification documents for construction projects. Current construction literature erroneously depicts the process of creating construction specifications as a linear one. This traditional understanding of the specification process often culminates in process wastes. On the contrary, the evidence suggests that, though generalised, the activities involved in producing specification documents are nonlinear. Drawing on the outcome of participant observation, this paper presents an optimised approach for representing construction specifications. The actors typically involved in producing specification documents are identified, the processes suitable for automation are highlighted, and the central role of tacit knowledge is integrated into a conceptual template of construction specifications. By applying the transformation, flow, value (TFV) theory of Lean production, the paper argues that value creation can be realised by eliminating the wastes associated with the traditional preparation of specification documents, with a view to integrating specifications into digital models such as Building Information Models (BIM). The paper thus presents the TFV theory as a method for optimising current approaches to generating construction specifications, based on a revised specification-writing model.
Abstract:
Modeling the distributions of species, especially of invasive species in non-native ranges, involves multiple challenges. Here, we developed some novel approaches to species distribution modeling aimed at reducing the influences of such challenges and improving the realism of projections. We estimated species-environment relationships with four modeling methods run with multiple scenarios of (1) sources of occurrences and geographically isolated background ranges for absences, (2) approaches to drawing background (absence) points, and (3) alternate sets of predictor variables. We further tested various quantitative metrics of model evaluation against biological insight. Model projections were very sensitive to the choice of training dataset. Model accuracy was much improved by using a global dataset for model training, rather than restricting data input to the species' native range. AUC score was a poor metric for model evaluation and, if used alone, was not a useful criterion for assessing model performance. Projections away from the sampled space (i.e. into areas of potential future invasion) were very different depending on the modeling methods used, raising questions about the reliability of ensemble projections. Generalized linear models gave very unrealistic projections far away from the training region. Models that efficiently fit the dominant pattern, but exclude highly local patterns in the dataset and capture interactions as they appear in data (e.g. boosted regression trees), improved generalization of the models. Biological knowledge of the species and its distribution was important in refining choices about the best set of projections. A post-hoc test conducted on a new Parthenium dataset from Nepal validated the excellent predictive performance of our "best" model. We showed that vast stretches of currently uninvaded geographic areas on multiple continents harbor highly suitable habitats for Parthenium hysterophorus L. (Asteraceae; parthenium).
However, discrepancies between model predictions and parthenium invasion in Australia indicate successful management for this globally significant weed.
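The AUC criticised above is a pure ranking statistic: it equals the probability that a randomly chosen presence point is scored higher than a randomly chosen background point, and says nothing about calibration or geographic realism. A minimal rank-based implementation (assuming continuous, untied suitability scores):

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: probability that a random presence is scored
    above a random background point (the Mann-Whitney U statistic)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    # Note: ties are not handled; assumes continuous scores.
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Perfect separation gives 1.0; perfectly inverted scores give 0.0.
print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```

Because only the ordering of scores enters, two models with identical AUC can produce very different suitability maps, which is one reason the abstract cautions against relying on AUC alone.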
Abstract:
Two oxazolidine-2-thiones, thio-analogs of linezolid, were synthesized and their antibacterial properties evaluated. Unlike the oxazolidinones, the thio-analogs did not inhibit the growth of Gram-positive bacteria. A molecular modeling study was carried out to aid understanding of this unexpected finding.
Abstract:
The linear spin-1/2 Heisenberg antiferromagnet with exchanges J1 and J2 between first and second neighbors has a bond-order wave (BOW) phase that starts at the fluid-dimer transition at J2/J1 = 0.2411 and is particularly simple at J2/J1 = 1/2. The BOW phase has a doubly degenerate singlet ground state, broken inversion symmetry, and a finite energy gap Em to the lowest triplet state. The interval 0.4 < J2/J1 < 1.0 has large Em and small finite-size corrections. Exact solutions are presented up to N = 28 spins with either periodic or open boundary conditions, and for thermodynamics up to N = 18. The elementary excitations of the BOW phase with large Em are topological spin-1/2 solitons that separate BOWs of opposite phase in a regular array of spins. The molar spin susceptibility χM(T) is exponentially small for T << Em and increases nearly linearly with T to a broad maximum. J1-J2 spin chains approximate the magnetic properties of the BOW phase of Hubbard-type models and provide a starting point for modeling alkali-tetracyanoquinodimethane salts.
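The exact solutions referred to above come from exact diagonalization of the J1-J2 chain. A small-scale sketch (N = 8 here for speed; the paper reaches N = 28 with symmetry-adapted bases) verifies the exact double degeneracy of the singlet ground state at the Majumdar-Ghosh point J2/J1 = 1/2, where the ground energy of the periodic ring is −3NJ1/8:

```python
import numpy as np

# Spin-1/2 operators; using S+/S- keeps the Hamiltonian real symmetric.
Sz = np.array([[0.5, 0.0], [0.0, -0.5]])
Sp = np.array([[0.0, 1.0], [0.0, 0.0]])
Sm = Sp.T
I2 = np.eye(2)

def site_op(op, i, N):
    """Embed a single-site operator at site i into the 2^N-dim space."""
    mats = [I2] * N
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_bond(i, j, N):
    """S_i . S_j = Sz_i Sz_j + (S+_i S-_j + S-_i S+_j) / 2."""
    return (site_op(Sz, i, N) @ site_op(Sz, j, N)
            + 0.5 * (site_op(Sp, i, N) @ site_op(Sm, j, N)
                     + site_op(Sm, i, N) @ site_op(Sp, j, N)))

def j1j2_hamiltonian(N, J1, J2):
    """Periodic J1-J2 chain: J1 sum S_i.S_{i+1} + J2 sum S_i.S_{i+2}."""
    H = np.zeros((2 ** N, 2 ** N))
    for i in range(N):
        H += J1 * heisenberg_bond(i, (i + 1) % N, N)
        H += J2 * heisenberg_bond(i, (i + 2) % N, N)
    return H

N = 8
E = np.linalg.eigvalsh(j1j2_hamiltonian(N, 1.0, 0.5))
# The two dimer coverings of the ring are exact, degenerate ground
# states at the Majumdar-Ghosh point, each with energy -3*N/8 = -3.
print(E[0], E[1], E[2])
```

Thermodynamic quantities such as χM(T) follow from the full spectrum, which is why the thermodynamics in the paper stop at the smaller N = 18.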
Abstract:
The constitutive model for a magnetostrictive material and its effect on the structural response is presented in this article. The magnetostrictive material considered is TERFENOL-D. Like a piezoelectric material, this material has two constitutive laws, one being the sensing law and the other the actuation law, both of which are highly coupled and non-linear. For the purpose of analysis, the constitutive laws can be characterized as coupled or uncoupled, and linear or nonlinear. The coupled model is studied without assuming any explicit direct relationship with the magnetic field. In the linear coupled model, which is assumed to preserve magnetic flux line continuity, the elastic modulus, the permeability and the magneto-elastic constant are assumed constant. In the nonlinear coupled model, the nonlinearity is decoupled and solved separately for the magnetic domain and the mechanical domain using two nonlinear curves, namely the stress vs. strain curve and the magnetic flux density vs. magnetic field curve. This is performed by two different methods: in the first, the magnetic flux density is computed iteratively, while in the second, an artificial neural network is used, wherein the trained network gives the necessary strain and magnetic flux density for a given magnetic field and stress level. The effect of nonlinearity is demonstrated on a simple magnetostrictive rod.
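In one-dimensional form, the linear coupled model pairs an actuation law and a sensing law through a shared magneto-elastic constant. The sketch below evaluates both laws for a rod; the material constants are rough order-of-magnitude values assumed for illustration, not manufacturer data for TERFENOL-D:

```python
import numpy as np

# Linear coupled constitutive laws (1-D):
#   strain S = s_H * T + d * H     (actuation)
#   flux   B = d  * T + mu_T * H   (sensing)
# Assumed illustrative constants:
s_H = 1.0 / 30e9   # elastic compliance at constant field (1/Pa)
d = 1.5e-8         # magneto-elastic constant (m/A)
mu_T = 5.0e-6      # permeability at constant stress (H/m)

# Stack both laws as one linear map from (stress, field) to (strain, flux).
C = np.array([[s_H, d],
              [d,   mu_T]])

T, H = 10e6, 40e3  # applied stress (Pa) and magnetic field (A/m)
S, B = C @ np.array([T, H])
print(f"strain = {S:.3e}, flux density = {B:.3f} T")
```

The nonlinear coupled model described above replaces the constants s_H and mu_T with the stress-strain and B-H curves and solves the two domains iteratively.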
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. But when estimating class frequencies, the study variable is binary or polytomous, so logistic-type assisting models (e.g. logistic or probit models) should be preferred over the linear one. However, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. But in the case of a strong assisting model, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG, especially if the domain sample size is small or the assisting model is strong.
Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample fit model and the census fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
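The GREG-lin estimator discussed above has a compact form: fit the assisting linear model on the sample, sum its predictions over the whole population, and add the design-weighted sample residuals. A numpy sketch under simple random sampling (population, model, and sample sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic finite population; the auxiliary x is known for every unit,
# y is observed only on the sample.
N = 10_000
x = rng.uniform(0.0, 10.0, N)
y = 3.0 + 2.0 * x + rng.normal(0.0, 2.0, N)
true_total = y.sum()

# Simple random sample without replacement; design weight w = N/n.
n = 400
s = rng.choice(N, size=n, replace=False)
w = N / n

# Linear fixed-effects assisting model (GREG-lin) fitted on the sample.
Xs = np.column_stack([np.ones(n), x[s]])
B, *_ = np.linalg.lstsq(Xs, y[s], rcond=None)

# GREG total = model predictions summed over the whole population
#            + design-weighted sum of the sample residuals.
Xu = np.column_stack([np.ones(N), x])
greg_total = (Xu @ B).sum() + w * (y[s] - Xs @ B).sum()
ht_total = w * y[s].sum()   # Horvitz-Thompson estimator, for comparison
print(greg_total, ht_total, true_total)
```

An L-GREG estimator for a class frequency would replace the linear fit with a logistic-type model for a binary study variable, keeping the same prediction-plus-weighted-residuals structure.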
Abstract:
Financial time series tend to behave in a manner that is not directly drawn from a normal distribution. Asymmetries and nonlinearities are usually seen, and these characteristics need to be taken into account. Making forecasts and predictions of future return and risk is therefore complicated; existing models for predicting risk help to a certain degree, but the complexity of financial time series data makes the task difficult. The essays in this dissertation support introducing nonlinearities and asymmetries to obtain better models and forecasts of both mean and variance, and linear and nonlinear models are accordingly introduced. The advantage of nonlinear models is that they can take asymmetries into account. Asymmetric patterns usually mean that large negative returns appear more often than positive returns of the same magnitude, which goes hand in hand with the fact that negative returns are associated with higher risk than positive returns of the same magnitude. These models are important because of their ability to produce the best possible estimates and predictions of future returns and risk.
Abstract:
This paper examines how volatility in financial markets can best be modeled. The examination investigates how good linear and nonlinear volatility models are at absorbing skewness and kurtosis. It is carried out on the Nordic stock markets of Finland, Sweden, Norway and Denmark. Different linear and nonlinear models are applied, and the results indicate that a linear model can almost always be used for modeling the series under investigation, even though nonlinear models perform slightly better in some cases. These results indicate that the markets under study are exposed to asymmetric patterns only to a certain degree. Negative shocks generally have a more prominent effect on the markets, but these effects are not particularly strong. However, in terms of absorbing skewness and kurtosis, nonlinear models outperform linear ones.
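A concrete way to read "absorbing kurtosis": a volatility model absorbs excess kurtosis if the returns standardized by its fitted conditional volatility look roughly Gaussian. The sketch below simulates a GARCH(1,1) process (a standard volatility model; the parameter values are illustrative, not estimates from the Nordic markets) and compares the excess kurtosis of raw and standardized returns:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate returns from a GARCH(1,1) process with Gaussian shocks.
omega, alpha, beta = 0.05, 0.10, 0.85
T = 20_000
r = np.zeros(T)
sigma2 = np.full(T, omega / (1 - alpha - beta))  # unconditional variance
for t in range(1, T):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

# Raw returns are leptokurtic even with Gaussian shocks; dividing out
# the conditional volatility should absorb the excess kurtosis.
print(excess_kurtosis(r))                    # noticeably positive
print(excess_kurtosis(r / np.sqrt(sigma2)))  # close to zero
```

Asymmetric (nonlinear) extensions such as threshold or exponential GARCH additionally let negative shocks raise volatility more than positive ones, which is the pattern the paper reports as present but weak in the Nordic markets.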