67 results for Generalized Linear Model


Relevance: 80.00%

Abstract:

A 2-year longitudinal survey was carried out to investigate factors affecting milk yield in crossbred cows on smallholder farms in and around an urban centre. Sixty farms were visited at approximately 2-week intervals and details of milk yield, body condition score (BCS) and heart girth measurements were collected. Fifteen farms were within the town (U), 23 farms were approximately 5 km from town (SU), and 22 farms approximately 10 km from town (PU). Sources of variation in milk yield were investigated using a general linear model by a stepwise forward selection and backward elimination approach to judge important independent variables. Factors considered for the first step of formulation of the model included location (PU, SU and U), calving season, BCS at calving, at 3 months postpartum and at 6 months postpartum, calving year, herd size category, source of labour (hired and family labour), calf rearing method (bucket and partial suckling) and parity number of the cow. Daily milk yield (including milk sucked by calves) was determined by calving year (p < 0.0001), calf rearing method (p = 0.044) and BCS at calving (p < 0.0001). Only BCS at calving contributed to variation in volume of milk sucked by the calf, lactation length and lactation milk yield. BCS at 3 months after calving was improved on farms where labour was hired (p = 0.041) and BCS change from calving to 6 months was more than twice as likely to be negative on U than SU and PU farms. It was concluded that milk production was predominantly associated with BCS at calving, lactation milk yield increasing quadratically from score 1 to 3. BCS at calving may provide a simple, single indicator of the nutritional status of a cow population.
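The stepwise forward-selection approach described above can be sketched as a greedy loop that adds whichever candidate predictor most improves a fit criterion. The sketch below uses AIC on simulated data; the variable names (e.g. `bcs_at_calving`) and numbers are illustrative, not the study's records:

```python
import numpy as np

def fit_ols(X, y):
    """Least-squares fit; returns coefficients and residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return beta, rss

def forward_select(candidates, y, n):
    """Greedy forward selection: repeatedly add the column that most reduces AIC."""
    selected = []
    X = np.ones((n, 1))                          # intercept-only start
    _, rss = fit_ols(X, y)
    aic = n * np.log(rss / n) + 2 * X.shape[1]
    improved = True
    while improved and candidates:
        improved, best = False, None
        for name, col in candidates.items():
            Xc = np.column_stack([X, col])
            _, rss_c = fit_ols(Xc, y)
            aic_c = n * np.log(rss_c / n) + 2 * Xc.shape[1]
            if aic_c < aic:
                aic, best, improved = aic_c, name, True
        if best is not None:
            X = np.column_stack([X, candidates.pop(best)])
            selected.append(best)
    return selected

# Simulated example: yield driven by BCS at calving, plus one noise covariate.
rng = np.random.default_rng(0)
n = 200
bcs = rng.uniform(1, 4, n)
noise_var = rng.normal(size=n)
y = 5 + 2 * bcs + rng.normal(scale=0.5, size=n)
chosen = forward_select({"bcs_at_calving": bcs, "irrelevant": noise_var}, y, n)
print(chosen)
```

A backward-elimination pass, as in the study, would then try dropping each selected term in turn and keep the removals that improve the criterion.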

Relevance: 80.00%

Abstract:

A 2-year longitudinal survey was carried out to investigate factors affecting reproduction in crossbred cows on smallholder farms in and around an urban centre. Sixty farms were visited at approximately 2-week intervals and details of reproductive traits and body condition score (BCS) were collected. Fifteen farms were within the town (U), 23 farms were approximately 5 km from town (SU), and 22 farms approximately 10 km from town (PU). Sources of variation in reproductive traits were investigated using a general linear model (GLM) by a stepwise forward selection and backward elimination approach to judge important independent variables. Factors considered for the first step of formulation of the model included location (PU, SU and U), type of insemination, calving season, BCS at calving, at 3 months postpartum and at 6 months postpartum, calving year, herd size category, source of labour (hired and family labour), calf rearing method (bucket and partial suckling) and parity number of the cow. The effects of the independent variables identified were then investigated using a non-parametric survival technique. The number of days to first oestrus was increased on the U site (p = 0.045) and when family labour was used (p = 0.02). The non-parametric test confirmed the effect of site (p = 0.059), but the effect of labour was not significant. The number of days from calving to conception was reduced by hiring labour (p = 0.003) and using natural service (p = 0.028). The non-parametric test confirmed the effects of type of insemination (p = 0.0001) while also identifying extended calving intervals on U and SU sites (p = 0.014). Labour source was again non-significant. Calving interval was prolonged on U and SU sites (p = 0.021), by the use of AI (p = 0.031) and by the use of family labour (p = 0.001). The non-parametric test confirmed the effect of site (p = 0.008) and insemination type (p < 0.0001) but not of labour source.
It was concluded that under favourable conditions (PU site, hired labour and natural service) calving intervals of around 440 days could be achieved.
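The non-parametric survival analysis used above can be illustrated with a Kaplan-Meier estimator, the standard non-parametric survival curve. The sketch below runs it on hypothetical days-to-first-oestrus data (the numbers are invented for illustration):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve; events = 1 if observed, 0 if censored."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                    # animals still "at risk"
        d = np.sum((times == t) & (events == 1))        # events at this time
        s *= 1.0 - d / at_risk
        surv.append((float(t), s))
    return surv

# Hypothetical days-to-first-oestrus (1 = oestrus observed, 0 = censored)
days  = [30, 45, 45, 60, 75, 90, 90, 120]
event = [1,  1,  0,  1,  1,  0,  1,  1]
curve = kaplan_meier(days, event)
print(curve)
```

Comparing such curves between groups (e.g. U vs. SU/PU sites) with a log-rank-type test is the usual way such site effects are assessed non-parametrically.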

Relevance: 80.00%

Abstract:

We introduce a procedure for association based analysis of nuclear families that allows for dichotomous and more general measurements of phenotype and inclusion of covariate information. Standard generalized linear models are used to relate phenotype and its predictors. Our test procedure, based on the likelihood ratio, unifies the estimation of all parameters through the likelihood itself and yields maximum likelihood estimates of the genetic relative risk and interaction parameters. Our method has advantages in modelling the covariate and gene-covariate interaction terms over recently proposed conditional score tests that include covariate information via a two-stage modelling approach. We apply our method in a study of human systemic lupus erythematosus and the C-reactive protein that includes sex as a covariate.
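The likelihood-ratio machinery underlying the test can be shown on a much simpler nested comparison than the family-based model above. This toy test of a binomial proportion is only a sketch of the general recipe: twice the log-likelihood difference between the fitted and null models, referred to a chi-square distribution:

```python
import math

def lrt_binomial(k, n, p0=0.5):
    """Likelihood-ratio test of H0: p = p0 for k successes in n trials.
    With one degree of freedom, the chi-square survival function is erfc(sqrt(x/2))."""
    p_hat = k / n
    loglik = lambda p: k * math.log(p) + (n - k) * math.log(1 - p)
    stat = 2 * (loglik(p_hat) - loglik(p0))    # 2 * (max log-lik - null log-lik)
    pval = math.erfc(math.sqrt(stat / 2))
    return stat, pval

stat, pval = lrt_binomial(70, 100)
print(stat, pval)
```

In the procedure above, the same idea is applied to the full likelihood in the genetic relative risk and interaction parameters rather than to a single proportion.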

Relevance: 80.00%

Abstract:

BACKGROUND: The widespread occurrence of feminized male fish downstream of some wastewater treatment works has led to substantial interest from ecologists and public health professionals. This concern stems from the view that the effects observed have a parallel in humans, and that both phenomena are caused by exposure to mixtures of contaminants that interfere with reproductive development. The evidence for a "wildlife-human connection" is, however, weak: Testicular dysgenesis syndrome, seen in human males, is most easily reproduced in rodent models by exposure to mixtures of antiandrogenic chemicals. In contrast, the accepted explanation for feminization of wild male fish is that it results mainly from exposure to steroidal estrogens originating primarily from human excretion. OBJECTIVES: We sought to further explore the hypothesis that endocrine disruption in fish is multicausal, resulting from exposure to mixtures of chemicals with both estrogenic and antiandrogenic properties. METHODS: We used hierarchical generalized linear and generalized additive statistical modeling to explore the associations between modeled concentrations and activities of estrogenic and antiandrogenic chemicals in 30 U.K. rivers and feminized responses seen in wild fish living in these rivers. RESULTS: In addition to the estrogenic substances, antiandrogenic activity was prevalent in almost all treated sewage effluents tested. Further, the results of the modeling demonstrated that feminizing effects in wild fish could be best modeled as a function of their predicted exposure to both antiandrogens and estrogens or to antiandrogens alone. CONCLUSION: The results provide a strong argument for a multicausal etiology of widespread feminization of wild fish in U.K. rivers involving contributions from both steroidal estrogens and xeno-estrogens and from other (as yet unknown) contaminants with antiandrogenic properties.
These results may add further credence to the hypothesis that endocrine-disrupting effects seen in wild fish and in humans are caused by similar combinations of endocrine-disrupting chemicals.

Relevance: 80.00%

Abstract:

Bayesian decision procedures have already been proposed for and implemented in Phase I dose-escalation studies in healthy volunteers. The procedures have been based on pharmacokinetic responses reflecting the concentration of the drug in blood plasma and are conducted to learn about the dose-response relationship while avoiding excessive concentrations. However, in many dose-escalation studies, pharmacodynamic endpoints such as heart rate or blood pressure are observed, and it is these that should be used to control dose-escalation. These endpoints introduce additional complexity into the modeling of the problem relative to pharmacokinetic responses. Firstly, there are responses available following placebo administrations. Secondly, the pharmacodynamic responses are related directly to measurable plasma concentrations, which in turn are related to dose. Motivated by experience of data from a real study conducted in a conventional manner, this paper presents and evaluates a Bayesian procedure devised for the simultaneous monitoring of pharmacodynamic and pharmacokinetic responses. Account is also taken of the incidence of adverse events. Following logarithmic transformations, a linear model is used to relate dose to the pharmacokinetic endpoint and a quadratic model to relate the latter to the pharmacodynamic endpoint. A logistic model is used to relate the pharmacokinetic endpoint to the risk of an adverse event.
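The three model layers described above (log-linear dose-to-PK, quadratic PK-to-PD, logistic PK-to-adverse-event) can be sketched as follows. All parameter values here are invented assumptions for illustration, not estimates from the study:

```python
import math

# Illustrative parameter values (assumptions, not estimates from any study)
theta0, theta1 = 0.2, 1.0              # log-linear: dose -> log concentration
beta0, beta1, beta2 = 1.0, 0.8, -0.05  # quadratic: log conc -> log PD response
gamma0, gamma1 = -4.0, 1.5             # logistic: log conc -> adverse-event risk

def log_concentration(dose):
    """Pharmacokinetic endpoint, linear in log dose after log transformation."""
    return theta0 + theta1 * math.log(dose)

def log_pd_response(log_conc):
    """Pharmacodynamic endpoint, quadratic in the log PK endpoint."""
    return beta0 + beta1 * log_conc + beta2 * log_conc ** 2

def adverse_event_prob(log_conc):
    """Logistic model relating the PK endpoint to adverse-event risk."""
    eta = gamma0 + gamma1 * log_conc
    return 1.0 / (1.0 + math.exp(-eta))

lc = log_concentration(10.0)
print(lc, log_pd_response(lc), adverse_event_prob(lc))
```

In the Bayesian procedure, posterior distributions over such parameters (updated after each cohort) would drive the choice of the next dose while controlling the predicted PD response and adverse-event risk.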

Relevance: 80.00%

Abstract:

This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
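The one-component case conveys the core of the capture-recapture application: fit a zero-truncated Poisson to the observed counts, then scale up by the estimated probability of being observed at all (the Horvitz-Thompson idea). A minimal sketch on hypothetical capture frequencies:

```python
import math

def truncated_poisson_mle(counts, tol=1e-10):
    """MLE of lambda for a zero-truncated Poisson, by bisection on the
    mean equation: truncated mean = lambda / (1 - exp(-lambda))."""
    xbar = sum(counts) / len(counts)
    lo, hi = 1e-8, xbar          # truncated mean exceeds lambda, so lambda <= xbar
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid / (1 - math.exp(-mid)) < xbar:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def horvitz_thompson(counts):
    """Population size estimate: n observed units scaled by P(count >= 1)."""
    lam = truncated_poisson_mle(counts)
    return len(counts) / (1 - math.exp(-lam))

# Hypothetical capture frequencies: 50 animals seen once, 20 twice, 5 three times
counts = [1] * 50 + [2] * 20 + [3] * 5
n_hat = horvitz_thompson(counts)
print(round(n_hat, 1))
```

The article's results concern the mixture generalization of this model, where the NPMLE of the mixing distribution replaces the single lambda; the Horvitz-Thompson step is analogous.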

Relevance: 80.00%

Abstract:

This paper addresses the need for accurate predictions on the fault inflow, i.e. the number of faults found in the consecutive project weeks, in highly iterative processes. In such processes, in contrast to waterfall-like processes, fault repair and development of new features run almost in parallel. Given accurate predictions on fault inflow, managers could dynamically re-allocate resources between these different tasks in a more adequate way. Furthermore, managers could react with process improvements when the expected fault inflow is higher than desired. This study suggests software reliability growth models (SRGMs) for predicting fault inflow. Originally developed for traditional processes, the performance of these models in highly iterative processes is investigated. Additionally, a simple linear model is developed and compared to the SRGMs. The paper provides results from applying these models on fault data from three different industrial projects. One of the key findings of this study is that some SRGMs are applicable for predicting fault inflow in highly iterative processes. Moreover, the results show that the simple linear model represents a valid alternative to the SRGMs, as it provides reasonably accurate predictions and performs better in many cases.
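The simple linear model evaluated above can be sketched as an ordinary least-squares trend on weekly fault counts, extrapolated one week ahead. The data here are invented for illustration:

```python
def linear_trend(y):
    """Ordinary least-squares fit of y ~ a + b * week, for weeks 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxy = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    sxx = sum((i - xbar) ** 2 for i in range(n))
    b = sxy / sxx
    a = ybar - b * xbar
    return a, b

# Hypothetical weekly fault counts from an iterative project
faults = [12, 15, 14, 18, 17, 21, 20, 24]
a, b = linear_trend(faults)
next_week = a + b * len(faults)      # one-step-ahead fault-inflow prediction
print(round(next_week, 2))
```

The SRGMs compared in the paper replace this straight line with parametric reliability growth curves; the evaluation question is whether that added structure pays off in iterative processes.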

Relevance: 80.00%

Abstract:

An input variable selection procedure is introduced for the identification and construction of multi-input multi-output (MIMO) neurofuzzy operating point dependent models. The algorithm is an extension of a forward modified Gram-Schmidt orthogonal least squares procedure for a linear model structure which is modified to accommodate nonlinear system modeling by incorporating piecewise locally linear model fitting. The proposed input nodes selection procedure effectively tackles the problem of the curse of dimensionality associated with lattice-based modeling algorithms such as radial basis function neurofuzzy networks, enabling the resulting neurofuzzy operating point dependent model to be widely applied in control and estimation. Some numerical examples are given to demonstrate the effectiveness of the proposed construction algorithm.
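The forward orthogonal least squares core of such procedures can be sketched with the error-reduction ratio (ERR) criterion and Gram-Schmidt orthogonalization; the piecewise locally linear and neurofuzzy aspects are omitted, and the data are simulated:

```python
import numpy as np

def err_forward_select(X, y, n_terms):
    """Forward selection by error-reduction ratio: at each step, orthogonalize
    each remaining column against the chosen basis and pick the one explaining
    the largest share of the output energy."""
    y = np.asarray(y, dtype=float)
    yy = y @ y
    remaining = list(range(X.shape[1]))
    selected, basis = [], []
    for _ in range(n_terms):
        best, best_err, best_w = None, -1.0, None
        for j in remaining:
            w = X[:, j].astype(float).copy()
            for q in basis:                       # Gram-Schmidt step
                w -= (q @ w) / (q @ q) * q
            if w @ w < 1e-12:
                continue                          # numerically dependent column
            err = (w @ y) ** 2 / ((w @ w) * yy)   # error-reduction ratio
            if err > best_err:
                best, best_err, best_w = j, err, w
        selected.append(best)
        basis.append(best_w)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 2] - 2 * X[:, 4] + 0.1 * rng.normal(size=100)
sel = err_forward_select(X, y, 2)
print(sel)
```

In a lattice-based neurofuzzy network, pruning inputs this way before constructing the lattice is what keeps the curse of dimensionality in check.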

Relevance: 80.00%

Abstract:

Feedback design for a second-order control system leads to an eigenstructure assignment problem for a quadratic matrix polynomial. It is desirable that the feedback controller not only assigns specified eigenvalues to the second-order closed loop system but also that the system is robust, or insensitive to perturbations. We derive here new sensitivity measures, or condition numbers, for the eigenvalues of the quadratic matrix polynomial and define a measure of the robustness of the corresponding system. We then show that the robustness of the quadratic inverse eigenvalue problem can be achieved by solving a generalized linear eigenvalue assignment problem subject to structured perturbations. Numerically reliable methods for solving the structured generalized linear problem are developed that take advantage of the special properties of the system in order to minimize the computational work required. In this part of the work we treat the case where the leading coefficient matrix in the quadratic polynomial is nonsingular, which ensures that the polynomial is regular. In a second part, we will examine the case where the open loop matrix polynomial is not necessarily regular.
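In the regular case treated here (nonsingular leading coefficient), the quadratic matrix polynomial can be reduced to an ordinary linear eigenvalue problem by companion linearization. A minimal sketch, with a scalar sanity check:

```python
import numpy as np

def quadratic_eigenvalues(M, C, K):
    """Eigenvalues of (lam^2 M + lam C + K) x = 0 via the first companion
    linearization, valid when the leading coefficient M is nonsingular
    (which ensures the polynomial is regular)."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])   # 2n x 2n companion matrix
    return np.linalg.eigvals(A)

# Scalar check: lam^2 + 3 lam + 2 = 0 has roots -1 and -2
M = np.array([[1.0]]); C = np.array([[3.0]]); K = np.array([[2.0]])
lams = np.sort(quadratic_eigenvalues(M, C, K).real)
print(lams)   # approximately [-2., -1.]
```

The paper's contribution lies in doing the assignment robustly, i.e. choosing the feedback so that the assigned eigenvalues of this linearized problem are insensitive to structured perturbations; the sketch shows only the linearization itself.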

Relevance: 80.00%

Abstract:

Three emissions inventories have been used with a fully Lagrangian trajectory model to calculate the stratospheric accumulation of water vapour emissions from aircraft, and the resulting radiative forcing. The annual and global mean radiative forcing due to present-day aviation water vapour emissions has been found to be 0.9 [0.3 to 1.4] mW m^-2. This is around a factor of three smaller than the value given in recent assessments, and the upper bound is much lower than a recently suggested 20 mW m^-2 upper bound. This forcing is sensitive to the vertical distribution of emissions, and, to a lesser extent, interannual variability in meteorology. Large differences in the vertical distribution of emissions within the inventories have been identified, which result in the choice of inventory being the largest source of differences in the calculation of the radiative forcing due to the emissions. Analysis of Northern Hemisphere trajectories demonstrates that the assumption of an e-folding time is not always appropriate for stratospheric emissions. A linear model is more representative for emissions that enter the stratosphere far above the tropopause.

Relevance: 80.00%

Abstract:

A method is suggested for the calculation of the friction velocity for stable turbulent boundary-layer flow over hills. The method is tested using a continuous upstream mean velocity profile compatible with the propagation of gravity waves, and is incorporated into the linear model of Hunt, Leibovich and Richards with the modification proposed by Hunt, Richards and Brighton to include the effects of stability, and the reformulated solution of Weng for the near-surface region. Those theoretical results are compared with results from simulations using a non-hydrostatic microscale-mesoscale two-dimensional numerical model, and with field observations for different values of stability. These comparisons show a considerable improvement in the behaviour of the theoretical model when the friction velocity is calculated using the method proposed here, leading to a consistent variation of the boundary-layer structure with stability, and better agreement with observational and numerical data.

Relevance: 80.00%

Abstract:

An important feature of agribusiness promotion programs is their lagged impact on consumption. Efficient investment in advertising requires reliable estimates of these lagged responses and it is desirable from both applied and theoretical standpoints to have a flexible method for estimating them. This note derives an alternative Bayesian methodology for estimating lagged responses when investments occur intermittently within a time series. The method exploits a latent-variable extension of the natural-conjugate, normal-linear model, Gibbs sampling and data augmentation. It is applied to a monthly time series on Turkish pasta consumption (1993:5-1998:3) and three, nonconsecutive promotion campaigns (1996:3, 1997:3, 1997:10). The results suggest that responses were greatest to the second campaign, which allocated its entire budget to television media; that its impact peaked in the sixth month following expenditure; and that the rate of return (measured in metric tons additional consumption per thousand dollars expended) was around a factor of 20.
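At the core of the natural-conjugate, normal-linear approach is a closed-form posterior update for the response coefficient. The sketch below shows only that conjugate update on invented expenditure/consumption data; the full method adds latent variables, Gibbs sampling and data augmentation to handle the intermittent lag structure, all omitted here:

```python
def normal_posterior(prior_mean, prior_var, x, y, noise_var):
    """Conjugate update for y_i = beta * x_i + e_i, e_i ~ N(0, noise_var),
    with prior beta ~ N(prior_mean, prior_var). Returns posterior mean/variance."""
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    post_prec = 1.0 / prior_var + sxx / noise_var   # precisions add
    post_mean = (prior_mean / prior_var + sxy / noise_var) / post_prec
    return post_mean, 1.0 / post_prec

# Hypothetical promotion expenditure (x) and extra consumption (y)
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 8.1]
mean, var = normal_posterior(0.0, 100.0, x, y, noise_var=1.0)
print(round(mean, 3))
```

With a vague prior, as here, the posterior mean essentially reproduces the least-squares estimate; the Bayesian machinery earns its keep when lagged, intermittent responses make the design matrix awkward.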

Relevance: 80.00%

Abstract:

We study a two-way relay network (TWRN), where distributed space-time codes are constructed across multiple relay terminals in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise-error-probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result is in contrast to one-way relay networks, in which case a maximum diversity order of only unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using the Dinkelbach-type procedure. We also prove that, if the sum-power of the relay terminals is constrained, then the OPA will activate at most two relays.
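A Dinkelbach-type procedure solves a fractional program max f(x)/g(x) by repeatedly solving the parametric problem max f(x) - lam * g(x) and updating lam to the achieved ratio. The sketch below applies it to a toy fractional objective over a finite grid; the objective is illustrative, not the conditional-PEP expression from the paper:

```python
def dinkelbach(f, g, candidates, tol=1e-9):
    """Dinkelbach's procedure for max f(x)/g(x), g > 0, over a finite set:
    iterate x = argmax f(x) - lam*g(x); stop when the optimum value is ~0."""
    lam = 0.0
    while True:
        x = max(candidates, key=lambda c: f(c) - lam * g(c))
        if abs(f(x) - lam * g(x)) < tol:
            return x, lam
        lam = f(x) / g(x)          # update to the ratio achieved at x

# Toy fractional objective over a grid of power levels
f = lambda p: 4 * p - p * p      # concave "benefit" term
g = lambda p: 1 + p              # positive "cost" term
cands = [i / 100 for i in range(1, 301)]
p_opt, ratio = dinkelbach(f, g, cands)
print(p_opt, round(ratio, 4))
```

Each parametric subproblem is much easier than the original ratio problem (here a one-line maximization; in the paper, a generalized linear program), which is the point of the reformulation.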

Relevance: 80.00%

Abstract:

The problem of spurious excitation of gravity waves in the context of four-dimensional data assimilation is investigated using a simple model of balanced dynamics. The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode, and can be initialized such that the model evolves on a so-called slow manifold, where the fast motion is suppressed. Identical twin assimilation experiments are performed, comparing the extended and ensemble Kalman filters (EKF and EnKF, respectively). The EKF uses a tangent linear model (TLM) to estimate the evolution of forecast error statistics in time, whereas the EnKF uses the statistics of an ensemble of nonlinear model integrations. Specifically, the case is examined where the true state is balanced, but observation errors project onto all degrees of freedom, including the fast modes. It is shown that the EKF and EnKF will assimilate observations in a balanced way only if certain assumptions hold, and that, outside of ideal cases (i.e., with very frequent observations), dynamical balance can easily be lost in the assimilation. For the EKF, the repeated adjustment of the covariances by the assimilation of observations can easily unbalance the TLM, and destroy the assumptions on which balanced assimilation rests. It is shown that an important factor is the choice of initial forecast error covariance matrix. A balance-constrained EKF is described and compared to the standard EKF, and shown to offer significant improvement for observation frequencies where balance in the standard EKF is lost. The EnKF is advantageous in that balance in the error covariances relies only on a balanced forecast ensemble, and that the analysis step is an ensemble-mean operation. Numerical experiments show that the EnKF may be preferable to the EKF in terms of balance, though its validity is limited by ensemble size. 
It is also found that overobserving can lead to a more unbalanced forecast ensemble and thus to an unbalanced analysis.
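The EnKF analysis step referred to above can be sketched as follows: forecast error covariances come from the ensemble spread, and each member is updated with a perturbed copy of the observation. The toy two-variable state, dimensions and numbers are all illustrative:

```python
import numpy as np

def enkf_analysis(ensemble, H, R, obs, rng):
    """EnKF analysis step with perturbed observations.
    ensemble: (n_state, n_members); H: observation operator; R: obs error cov."""
    m = ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    Pf = A @ A.T / (m - 1)                       # sample forecast error covariance
    S = H @ Pf @ H.T + R                         # innovation covariance
    K = Pf @ H.T @ np.linalg.inv(S)              # Kalman gain
    perturbed = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, size=m).T         # one perturbed obs per member
    return ensemble + K @ (perturbed - H @ ensemble)

rng = np.random.default_rng(0)
ens = rng.normal(loc=1.0, scale=0.5, size=(2, 50))   # 2-variable state, 50 members
H = np.array([[1.0, 0.0]])                            # observe first variable only
R = np.array([[0.01]])
analysis = enkf_analysis(ens, H, R, np.array([2.0]), rng)
print(analysis[0].mean())
```

Because the gain is built from the nonlinear forecast ensemble rather than a tangent linear model, a balanced ensemble tends to yield a balanced analysis, which is the property exploited in the comparison above.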

Relevance: 80.00%

Abstract:

We compare linear autoregressive (AR) models and self-exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two-regime SETAR process is used as the data-generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non-linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data.
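A two-regime SETAR(1) process of the kind used as the data-generating process above can be simulated in a few lines. The coefficients below are illustrative, and the pooled AR(1) estimate at the end shows how a linear fit averages over the two regimes:

```python
import numpy as np

def simulate_setar(n, phi_low, phi_high, threshold=0.0, sigma=1.0, seed=0):
    """Two-regime SETAR(1): the AR coefficient depends on whether the
    previous value lies below or above the threshold (self-exciting)."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + sigma * rng.normal()
    return y

y = simulate_setar(500, phi_low=0.9, phi_high=-0.3)
# A single linear AR(1) fit blends the two regime coefficients:
phi_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
print(round(phi_hat, 2))
```

Forecast-evaluation experiments like those above would then compare interval or density forecasts from this pooled linear fit against those from the correctly specified two-regime model.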