27 results for LOG-LINEAR MODELS

at University of Queensland eSpace - Australia


Relevance:

100.00%

Abstract:

Despite their limitations, linear filter models continue to be used to simulate the receptive field properties of cortical simple cells. For theoreticians interested in large-scale models of visual cortex, a family of self-similar filters represents a convenient way in which to characterise simple cells in one basic model. This paper reviews research on the suitability of such models, and goes on to advance biologically motivated reasons for adopting a particular group of models in preference to all others. In particular, the paper describes why the Gabor model, so often used in network simulations, should be dropped in favour of a Cauchy model, on the grounds of both frequency response and mutual filter orthogonality.
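As background to the frequency-response argument, the sketch below (Python, with illustrative parameter values; the Cauchy alternative is not reproduced here) demonstrates a familiar objection to the Gabor model: a broadband even-symmetric Gabor filter has a non-negligible response at zero spatial frequency.

```python
import numpy as np

# A 1-D even-symmetric Gabor filter: cosine carrier under a Gaussian
# envelope. Parameter values are illustrative only.
x = np.linspace(-3, 3, 601)        # spatial position (arbitrary units)
sigma, f0 = 0.5, 0.3               # broadband case: low f0 * sigma product
gabor = np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * x)

# The even Gabor's DC gain is sqrt(2*pi)*sigma*exp(-2*pi^2*f0^2*sigma^2),
# which is clearly non-zero for broadband filters.
dc_gain = gabor.sum() * (x[1] - x[0])
print(f"DC gain: {dc_gain:.3f}")   # roughly 0.8 here, not 0
```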

Relevance:

100.00%

Abstract:

Standard factorial designs sometimes may be inadequate for experiments that aim to estimate a generalized linear model, for example, for describing a binary response in terms of several variables. A method is proposed for finding exact designs for such experiments, using a criterion that allows for uncertainty in the link function, the linear predictor, or the model parameters, together with a design search. Designs are assessed and compared by simulation of the distribution of efficiencies relative to locally optimal designs over a space of possible models. Exact designs are investigated for two applications, and their advantages over factorial and central composite designs are demonstrated.
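A minimal sketch of the kind of computation involved, under assumed "local" parameter values for a two-variable logistic model; the D-criterion and the crude random search here are simplified stand-ins for the paper's criterion and design search.

```python
import numpy as np

def d_criterion(X, beta):
    """Determinant of the Fisher information for a logistic model at beta."""
    p = 1 / (1 + np.exp(-(X @ beta)))
    M = (X * (p * (1 - p))[:, None]).T @ X
    return np.linalg.det(M)

beta = np.array([0.0, 1.0, -0.5])           # assumed local parameter values
factorial = np.array([[1., s1, s2] for s1 in (-1, 1) for s2 in (-1, 1)])

# Crude random search for a better 4-point exact design on [-1, 1]^2.
rng = np.random.default_rng(0)
best_X, best_d = factorial, d_criterion(factorial, beta)
for _ in range(20000):
    cand = np.column_stack([np.ones(4), rng.uniform(-1, 1, (4, 2))])
    d = d_criterion(cand, beta)
    if d > best_d:
        best_X, best_d = cand, d

# D-efficiency of the 2^2 factorial relative to the best design found.
print((d_criterion(factorial, beta) / best_d) ** (1 / 3))
```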

Relevance:

90.00%

Abstract:

In this paper, we consider testing for additivity in a class of nonparametric stochastic regression models. Two test statistics are constructed and their asymptotic distributions are established. We also conduct a small-sample study for one of the test statistics through a simulated example.
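The paper's statistics are not reproduced here; the sketch below illustrates the generic idea behind such tests on invented data: compare an additive (backfitting) fit with a full bivariate nonparametric fit, where a bootstrap or permutation scheme would then be needed to calibrate the comparison.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = np.sin(x1) + x2**2 + 0.5 * x1 * x2 + rng.normal(0, 0.2, n)  # x1*x2 term breaks additivity

def nw(xq, xs, ys, h=0.3):
    """Nadaraya-Watson kernel smoother evaluated at points xq."""
    k = np.exp(-0.5 * ((xq[:, None] - xs[None, :]) / h) ** 2)
    return (k @ ys) / k.sum(axis=1)

# Additive fit by backfitting.
mu, f1, f2 = y.mean(), np.zeros(n), np.zeros(n)
for _ in range(20):
    f1 = nw(x1, x1, y - mu - f2); f1 -= f1.mean()
    f2 = nw(x2, x2, y - mu - f1); f2 -= f2.mean()
rss_add = np.sum((y - mu - f1 - f2) ** 2)

# Full bivariate kernel fit.
k2 = np.exp(-0.5 * (((x1[:, None] - x1) / 0.3) ** 2
                    + ((x2[:, None] - x2) / 0.3) ** 2))
rss_full = np.sum((y - (k2 @ y) / k2.sum(axis=1)) ** 2)

print((rss_add - rss_full) / rss_full)   # large values hint at non-additivity
```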

Relevance:

90.00%

Abstract:

We compare Bayesian methodology utilizing the freeware BUGS (Bayesian Inference Using Gibbs Sampling) with the traditional structural equation modelling approach based on another freeware package, Mx. Dichotomous and ordinal (three-category) twin data were simulated according to different additive genetic and common environment models for phenotypic variation. Practical issues are discussed in using Gibbs sampling as implemented by BUGS to fit subject-specific Bayesian generalized linear models, where the components of variation may be estimated directly. The simulation study (based on 2000 twin pairs) indicated that there is a consistent advantage in using the Bayesian method to detect a correct model under certain specifications of additive genetic and common environmental effects. For binary data, both methods had difficulty in detecting the correct model when the additive genetic effect was low (between 10 and 20%) or of moderate range (between 20 and 40%). Furthermore, neither method could adequately detect a correct model that included a modest common environmental effect (20%) even when the additive genetic effect was large (50%). Power was significantly improved with ordinal data for most scenarios, except for the case of low heritability under a true ACE model. We illustrate and compare both methods using data from 1239 twin pairs over the age of 50 years who were registered with the Australian National Health and Medical Research Council Twin Registry (ATR) and who presented with symptoms associated with osteoarthritis occurring in joints of the hand.
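For orientation, the standard ACE identity underlying such twin analyses (background theory only, not the BUGS or Mx code): monozygotic pairs share all additive genetic variance, dizygotic pairs half of it, and both share the common environment.

```python
# Expected twin-pair correlations under an ACE variance decomposition,
# with a2 = additive genetic and c2 = common-environment proportions
# of phenotypic variance (the remainder, e2, is unique environment).
def twin_correlations(a2, c2):
    return a2 + c2, 0.5 * a2 + c2           # (r_MZ, r_DZ)

# Scenarios echoing the abstract: low heritability vs. large A with modest C.
for a2, c2 in [(0.15, 0.0), (0.30, 0.0), (0.50, 0.20)]:
    r_mz, r_dz = twin_correlations(a2, c2)
    print(f"a2={a2:.2f} c2={c2:.2f}  r_MZ={r_mz:.2f}  r_DZ={r_dz:.2f}")
```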

Relevance:

90.00%

Abstract:

The MFG test is a family-based association test that detects genetic effects contributing to disease in offspring, including offspring allelic effects, maternal allelic effects and MFG incompatibility effects. Like many other family-based association tests, it assumes that offspring survival and offspring-parent genotypes are conditionally independent, given that the offspring is affected. However, when the putative disease-increasing locus can affect another competing phenotype, for example offspring viability, the conditional independence assumption fails and these tests can lead to incorrect conclusions regarding the role of the gene in disease. We propose the v-MFG test to adjust for the genetic effects on one phenotype, e.g. viability, when testing the effects of that locus on another phenotype, e.g. disease. Using genotype data from nuclear families containing parents and at least one affected offspring, the v-MFG test models the distribution of family genotypes conditional on offspring phenotypes. It simultaneously estimates genetic effects on two phenotypes, viability and disease. Simulations show that the v-MFG test produces accurate estimates of genetic effects on disease as well as on viability under several different scenarios. It generates accurate type-I error rates and provides adequate power with moderate sample sizes to detect genetic effects on disease risk when viability is reduced. We demonstrate the v-MFG test with HLA-DRB1 data from study participants with rheumatoid arthritis (RA) and their parents, showing that it successfully detects an MFG incompatibility effect on RA while simultaneously adjusting for a possible viability loss.
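A sketch of the underlying idea in notation (an illustration of why the adjustment is needed, not the v-MFG likelihood itself; it assumes disease D and viability S act independently given genotype G):

```latex
P(G \mid D = 1,\, S = 1)
  = \frac{P(D = 1 \mid G)\; P(S = 1 \mid G)\; P(G)}
         {\sum_{g} P(D = 1 \mid g)\; P(S = 1 \mid g)\; P(g)}
```

A test that omits the viability term P(S = 1 | G) folds any viability effect into the disease term P(D = 1 | G), which is the distortion the v-MFG test is designed to avoid.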

Relevance:

80.00%

Abstract:

The relative importance of factors that may promote genetic differentiation in marine organisms is largely unknown. Here, contributions to population structure from biogeography, habitat distribution, and isolation by distance were investigated in Axoclinus nigricaudus, a small subtidal rock reef fish, throughout its range in the Gulf of California. A 408-basepair fragment of the mitochondrial control region was sequenced from 105 individuals. Variation was significantly partitioned between many pairs of populations. Phylogenetic analyses, hierarchical analyses of variance, and general linear models substantiated a major break between two putative biogeographic regions. This genetic discontinuity coincides with an abrupt change in ecological characteristics (including temperature and salinity) but does not coincide with known oceanographic circulation patterns. Geographic distance and the nature of habitat separating populations (continuous habitat along a shoreline, discontinuous habitat along a shoreline, and open water) also contributed to population structure in general linear model analyses. To verify that local populations are genetically stable over time, one population was resampled on four occasions over eighteen months; it showed no evidence of a temporal component to diversity. These results indicate that having a planktonic life stage does not preclude geographically partitioned genetic variation over relatively small geographic distances in marine environments. Moreover, levels of genetic differentiation among populations of Axoclinus nigricaudus cannot be explained by a single factor, but are due to the combined influences of a biogeographic boundary, habitat, and geographic distance.
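As a toy illustration of the kind of general linear model analysis described (all numbers invented), pairwise genetic distance can be regressed on geographic distance plus a habitat-discontinuity indicator; note that pairwise distances are not independent, so in practice significance would be assessed by permutation (Mantel-type) procedures.

```python
import numpy as np

# Invented pairwise population comparisons.
geo_km = np.array([12, 40, 85, 150, 220, 300.0])   # geographic separation
habitat_gap = np.array([0, 0, 1, 0, 1, 1.0])       # 1 = discontinuous habitat
gen_dist = np.array([0.01, 0.02, 0.06, 0.04, 0.09, 0.11])

# Ordinary least squares for the two effects plus an intercept.
X = np.column_stack([np.ones_like(geo_km), geo_km, habitat_gap])
coef, *_ = np.linalg.lstsq(X, gen_dist, rcond=None)
print(dict(zip(["intercept", "per_km", "habitat_gap"], coef.round(5))))
```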

Relevance:

80.00%

Abstract:

PURPOSE: Many guidelines advocate measurement of total or low-density lipoprotein cholesterol (LDL), high-density lipoprotein cholesterol (HDL), and triglycerides (TG) to determine treatment recommendations for preventing coronary heart disease (CHD) and cardiovascular disease (CVD). This analysis is a comparison of lipid variables as predictors of cardiovascular disease. METHODS: Hazard ratios for coronary and cardiovascular deaths by fourths of total cholesterol (TC), LDL, HDL, TG, non-HDL, TC/HDL, and TG/HDL values, and for a one standard deviation change in these variables, were derived in an individual participant data meta-analysis of 32 cohort studies conducted in the Asia-Pacific region. The predictive value of each lipid variable was assessed using the likelihood ratio statistic. RESULTS: Adjusting for confounders and regression dilution, each lipid variable had a positive (negative for HDL) log-linear association with fatal CHD and CVD. Individuals in the highest fourth of each lipid variable had approximately twice the risk of CHD compared with those with lowest levels. TG and HDL were each better predictors of CHD and CVD risk compared with TC alone, with test statistics similar to TC/HDL and TG/HDL ratios. Calculated LDL was a relatively poor predictor. CONCLUSIONS: While LDL reduction remains the main target of intervention for lipid-lowering, these data support the potential use of TG or lipid ratios for CHD risk prediction.
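A simplified sketch of comparing predictors by likelihood ratio statistic; for brevity it uses a logistic model on invented data rather than the study's survival models, and the coefficients below are assumptions, not the paper's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
tc = rng.normal(5.5, 1.0, n)             # total cholesterol, mmol/L (invented)
hdl = rng.normal(1.3, 0.3, n)            # HDL cholesterol (invented)
eta = -6.0 + 0.4 * tc - 1.2 * hdl        # assumed true log-odds
event = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(float)

null_llf = sm.Logit(event, np.ones((n, 1))).fit(disp=0).llf

def lr_stat(predictor):
    """2 * (log-likelihood with predictor - null log-likelihood)."""
    X = sm.add_constant(predictor)
    return 2 * (sm.Logit(event, X).fit(disp=0).llf - null_llf)

print("TC alone    :", round(lr_stat(tc), 1))
print("TC/HDL ratio:", round(lr_stat(tc / hdl), 1))
```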

Relevance:

80.00%

Abstract:

1. An isolated perfused rat liver (IPRL) preparation was used to investigate separately the disposition of the non-steroidal anti-inflammatory drug (NSAID) naproxen (NAP), its reactive acyl glucuronide metabolite (NAG) and a mixture of NAG rearrangement isomers (isoNAG), each at 30 µg NAP equivalents/ml perfusate (n = 4 each group). 2. Following administration to the IPRL, NAP was eliminated slowly in a log-linear manner with an apparent elimination half-life (t1/2) of 13.4 ± 4.4 h. No metabolites were detected in perfusate, while NAG was the only metabolite present in bile in measurable amounts (3.9 ± 0.8% of the dose). Following their administration to the IPRL, both NAG and isoNAG were rapidly hydrolysed (t1/2 in perfusate = 57 ± 3 and 75 ± 14 min, respectively). NAG also rearranged to isoNAG in the perfusate. Both NAG and isoNAG were excreted intact in bile (24.6 and 14.8% of the NAG and isoNAG doses, respectively). 3. Covalent NAP-protein adducts in the liver increased as the dose changed from NAP to NAG to isoNAG (0.20, 0.34 and 0.48% of the doses, respectively). Similarly, formation of covalent NAP-protein adducts in perfusate was greater in isoNAG-dosed perfusions. The comparative results suggest that isoNAG is a better substrate for adduct formation with liver proteins than NAG.
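The log-linear elimination quoted in point 2 corresponds to first-order kinetics, C(t) = C0·exp(-kt), so the half-life falls out of a straight-line fit to ln C versus time. A minimal sketch with invented concentrations:

```python
import numpy as np

t_h = np.array([0.5, 1, 2, 4, 6, 8.0])                 # sampling times, hours
conc = np.array([28.0, 27.2, 25.9, 23.1, 20.8, 18.6])  # µg/ml (invented)

# Slope of ln C vs t estimates -k; half-life is ln(2)/k.
k = -np.polyfit(t_h, np.log(conc), 1)[0]
print(f"k = {k:.4f} per h, t1/2 = {np.log(2) / k:.1f} h")
```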

Relevance:

80.00%

Abstract:

In many occupational safety interventions, the objective is to reduce the injury incidence as well as the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components. These two components relate respectively to the effect of covariates on the incidence of claims and the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk assessment team program, trialled within the cleaning services of a Western Australian public hospital.
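A stripped-down sketch of the two-part idea on invented data, using two independent GLMs; the paper's random effects for the longitudinal correlation are omitted here, which is the key simplification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 800
x = rng.random(n)                                   # invented covariate
p_claim = 1 / (1 + np.exp(1.0 - 0.8 * x))           # assumed incidence model
claim = (rng.random(n) < p_claim).astype(float)
mean_cost = np.exp(1.0 + 0.5 * x)                   # assumed severity model
cost = claim * rng.gamma(2.0, mean_cost / 2.0)      # zeros when no claim

X = sm.add_constant(x)
# Component 1: incidence of any claim (binomial GLM, logit link).
incidence = sm.GLM(claim, X, family=sm.families.Binomial()).fit()
# Component 2: claim size given a claim (gamma GLM, log link).
pos = cost > 0
severity = sm.GLM(cost[pos], X[pos],
                  family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(incidence.params.round(2), severity.params.round(2))
```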

Relevance:

80.00%

Abstract:

This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing traditional idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, where the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, these methods can be applied across any field, irrespective of the type of response.
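A compact sketch of the two-stage template on synthetic data (scikit-learn used for illustration; variable names, sizes and the shallow-tree screening choice are invented, not the paper's exact workflow):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n, p = 500, 8
X = rng.normal(size=(n, p))                          # candidate covariates
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, n) > 0).astype(int)

# Stage 1 (exploratory): a shallow tree flags the influential variables.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
keep = np.argsort(tree.feature_importances_)[::-1][:3]

# Stage 2 (predictive): a parsimonious parametric model on that subset.
final = LogisticRegression().fit(X[:, keep], y)
print("kept columns:", keep, "coefficients:", final.coef_.round(2))
```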

Relevance:

80.00%

Abstract:

Measuring perceptions of customers can be a major problem for marketers of tourism and travel services. Much of the problem is to determine which attributes carry most weight in the purchasing decision. Older travellers weigh many travel features before making their travel decisions. This paper presents a descriptive analysis of neural network methodology and provides a research technique that assesses the weighting of different attributes and uses an unsupervised neural network model to describe a consumer-product relationship. The development of this rich class of models was inspired by the neural architecture of the human brain. These models mathematically emulate the neurophysical structure and decision making of the human brain and, from a statistical perspective, are closely related to generalised linear models. Artificial neural networks are, however, nonlinear and do not require the same restrictive assumptions about the relationship between the independent variables and dependent variables. Using neural networks is one way to determine what trade-offs older travellers make as they decide their travel plans. The sample of this study is from a syndicated data source of 200 valid cases from Western Australia. Four senior groups ('active learner', 'relaxed family body', 'careful participants' and 'elementary vacation') were identified and discussed.
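One concrete member of this unsupervised family is the self-organizing map; the sketch below (data, map size and schedules all invented) shows the basic update in which each respondent vector pulls its best-matching prototype, and that prototype's neighbours, towards it.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.random((200, 4))     # 200 respondents x 4 attribute ratings
grid = rng.random((3, 3, 4))    # 3x3 map of prototype vectors

steps = 2000
for t in range(steps):
    x = data[rng.integers(len(data))]
    d2 = ((grid - x) ** 2).sum(axis=2)               # squared distances
    bi, bj = np.unravel_index(d2.argmin(), d2.shape) # best-matching unit
    lr = 0.5 * (1 - t / steps)                       # decaying learning rate
    for i in range(3):
        for j in range(3):
            h = np.exp(-((i - bi) ** 2 + (j - bj) ** 2) / 2.0)  # neighbourhood
            grid[i, j] += lr * h * (x - grid[i, j])  # pull prototype towards x

# Each prototype now summarizes one candidate traveller segment.
print(grid.reshape(9, 4).round(2))
```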

Relevance:

80.00%

Abstract:

Two methods were compared for determining the concentration of penetrative biomass during growth of Rhizopus oligosporus on an artificial solid substrate consisting of an inert gel and starch as the sole source of carbon and energy. The first method was based on the use of a hand microtome to make sections of approximately 0.2- to 0.4-mm thickness parallel to the substrate surface and the determination of the glucosamine content in each slice. Use of glucosamine measurements to estimate biomass concentrations was shown to be problematic due to the large variations in glucosamine content with mycelial age. The second method was a novel method based on the use of confocal scanning laser microscopy to estimate the fractional volume occupied by the biomass. Although it is not simple to translate fractional volumes into dry weights of hyphae due to the lack of experimentally determined conversion factors, measurement of the fractional volumes in themselves is useful for characterizing fungal penetration into the substrate. Growth of penetrative biomass in the artificial model substrate showed two forms of growth: an indistinct mass in the region close to the substrate surface, and a few hyphae penetrating perpendicular to the surface in regions further away. The biomass profiles against depth obtained from the confocal microscopy showed two linear regions on log-linear plots, which are possibly related to different oxygen availability at different depths within the substrate. Confocal microscopy has the potential to be a powerful tool in the investigation of fungal growth mechanisms in solid-state fermentation.
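The "two linear regions on log-linear plots" can be quantified with a simple breakpoint fit: regress ln(fractional volume) on depth separately on each side of a candidate breakpoint and keep the split with the smallest residual sum of squares. A sketch with an invented profile:

```python
import numpy as np

depth = np.arange(0.2, 3.2, 0.2)                   # mm below surface (invented)
true = np.where(depth < 1.2, -0.5 - 1.8 * depth, -2.2 - 0.4 * depth)
lnf = true + np.random.default_rng(6).normal(0, 0.05, depth.size)

def sse(b):
    """Total SSE of separate line fits left and right of breakpoint b."""
    total = 0.0
    for m in (depth < b, depth >= b):
        fit = np.polyval(np.polyfit(depth[m], lnf[m], 1), depth[m])
        total += ((lnf[m] - fit) ** 2).sum()
    return total

# Candidate breakpoints keep at least two points on each side.
best = min(depth[2:-2], key=sse)
print(f"estimated breakpoint ~ {best:.1f} mm")
```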

Relevance:

80.00%

Abstract:

Numerous studies in the last 60 years have investigated the relationship between land slope and soil erosion rates. However, relatively few of these have investigated slope gradient responses: (a) for steep slopes, (b) for specific erosion processes, and (c) as a function of soil properties. Simulated rainfall was applied in the laboratory at 100 mm/h on 16 soils and 16 overburdens, using 3 replicates of unconsolidated flume plots 3 m long by 0.8 m wide and 0.15 m deep at slopes of 20, 5, 10, 15, and 30% (in that order). Sediment delivery at each slope was measured to determine the relationship between slope steepness and erosion rate. Data from this study were evaluated alongside data and existing slope adjustment functions from more than 55 other studies in the literature. The data and the literature strongly support a logistic slope adjustment function of the form S = A + B/[1 + exp(C - D sin(theta))], where S is the slope adjustment factor and A, B, C, and D are coefficients that depend on the dominant detachment and transport processes. Average coefficient values when interrill-only processes are active are A = -1.50, B = 6.51, C = 0.94, and D = 5.30 (r^2 = 0.99). When rill erosion is also potentially active, the average slope response is greater and the coefficient values are A = -1.12, B = 16.05, C = 2.61, and D = 8.32 (r^2 = 0.93). The interrill-only function predicts increases in sediment delivery rates from 5 to 30% slope that are approximately double the predictions based on existing published interrill functions. The rill + interrill function is similar to one previously reported. The above relationships represent a mean slope response for all soils, yet the response of individual soils varied substantially, from a 2.5-fold to a 50-fold increase over the range of slopes studied. The magnitude of the slope response was found to be inversely related (log-log linear) to the dispersed silt and clay content of the soil, and 3 slope adjustment equations are proposed that provide a better estimate of slope response when this soil property is known. Evaluation of the slope adjustment equations proposed in this paper using independent datasets showed that the new equations can improve soil erosion predictions.
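The fitted function is easy to evaluate directly; the sketch below plugs in the two reported coefficient sets (converting percent slope to sin(theta) via arctan is an assumption of this sketch, not a detail given in the abstract).

```python
import numpy as np

def slope_factor(slope_pct, A, B, C, D):
    """Logistic slope adjustment S = A + B / (1 + exp(C - D*sin(theta)))."""
    theta = np.arctan(slope_pct / 100.0)     # percent slope -> slope angle
    return A + B / (1 + np.exp(C - D * np.sin(theta)))

slopes = np.array([5, 10, 15, 20, 30.0])     # % slope, as in the experiment
print("interrill only  :", slope_factor(slopes, -1.50, 6.51, 0.94, 5.30).round(2))
print("rill + interrill:", slope_factor(slopes, -1.12, 16.05, 2.61, 8.32).round(2))
```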