954 results for Deviance information criterion


Relevance: 80.00%

Abstract:

Background: EpHA2 is a 130 kDa transmembrane glycoprotein belonging to the ephrin receptor subfamily and is involved in angiogenesis/tumour neovascularisation. High EpHA2 mRNA levels have recently been implicated in cetuximab resistance. Previously, we found high EpHA2 levels in a panel of invasive colorectal cancer (CRC) cells, which were associated with high levels of the stem-cell marker CD44. Our aim was to investigate the prognostic value of EpHA2 and to correlate expression levels with known clinico-pathological variables in early-stage CRC. Methods: Tissue samples from 509 CRC patients were analysed. EpHA2 expression was measured using IHC, and survival was visualised with Kaplan-Meier curves. Univariate and multivariate analyses used Cox proportional hazards models (hazard ratio, HR); a backward selection method (Akaike's information criterion) was used to determine a refined multivariate model. Results: EpHA2 was highly expressed in CRC adenocarcinoma compared with matched normal colon tissue. Consistent with our preclinical invasive models, a strong correlation was found between EpHA2 expression and CD44 and Lgr5 staining (p<0.001). In addition, high EpHA2 expression correlated significantly with vascular invasion (p=0.03). The HR for OS in stage II/III patients with high EpHA2 expression was 1.69 (95% CI: 1.164-2.439; p=0.003). When stage II/III was broken down into individual stages, there was a significant correlation between high EpHA2 expression and poor 5-year OS in stage II patients (HR: 2.18; 95% CI: 1.28-3.71; p=0.005); the HR in the stage III group showed a trend towards statistical significance (HR: 1.48; 95% CI: 0.87-2.51; p=0.05). In both univariate and multivariate analyses of stage II patients, high EpHA2 expression was the only significant factor and was retained in the final multivariate model. Higher levels of EpHA2 were noted in our RAS- and BRAF-mutant CRC cells, and silencing EpHA2 resulted in significant decreases in migration/invasion in parental and invasive CRC sublines.
Correlation between KRAS/NRAS/BRAF mutational status and EpHA2 expression in clinical samples is ongoing. Conclusions: Taken together, our study is the first to indicate that EpHA2 expression is a predictor of poor clinical outcome and a potential novel target in early-stage CRC.
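The backward-selection step described in the Methods can be sketched in a few lines. This is a minimal illustration, not the study's analysis: a Gaussian linear model stands in for the Cox model, and the covariates x1 and x2 are simulated rather than clinical.

```python
import math
import random

# Simulated data: x1 is strongly prognostic, x2 is pure noise.
random.seed(1)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * a + random.gauss(0, 1) for a in x1]

def solve(A, b):
    """Gaussian elimination with partial pivoting (tiny systems only)."""
    m = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for j in range(c, m + 1):
                M[r][j] -= f * M[c][j]
    out = [0.0] * m
    for r in range(m - 1, -1, -1):
        out[r] = (M[r][m] - sum(M[r][j] * out[j] for j in range(r + 1, m))) / M[r][r]
    return out

def aic(cols, y):
    """AIC of an OLS fit of y on the given covariates plus an intercept."""
    n = len(y)
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    k = len(X[0])
    XtX = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = solve(XtX, Xty)
    rss = sum((y[i] - sum(X[i][a] * beta[a] for a in range(k))) ** 2 for i in range(n))
    return n * math.log(rss / n) + 2 * (k + 1)  # +1 for the error variance

# Backward elimination: drop a covariate whenever doing so lowers the AIC.
model = {"x1": x1, "x2": x2}
dropped = True
while dropped and model:
    dropped = False
    current = aic(list(model.values()), y)
    for name in list(model):
        rest = [v for key, v in model.items() if key != name]
        if aic(rest, y) < current:
            del model[name]
            dropped = True
            break
```

With a strong true effect, x1 is always retained; the noise covariate is usually (though not always) removed, which is exactly the trade-off the 2-point AIC penalty encodes.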

Relevance: 80.00%

Abstract:

Dissertation presented to the Instituto Politécnico do Porto for the degree of Master in Logistics

Relevance: 80.00%

Abstract:

OBJECTIVES: The purpose of this study was to evaluate the association between inflammation and heart failure (HF) risk in older adults. BACKGROUND: Inflammation is associated with HF risk factors and also directly affects myocardial function. METHODS: The association of baseline serum concentrations of interleukin (IL)-6, tumor necrosis factor-alpha, and C-reactive protein (CRP) with incident HF was assessed with Cox models among 2,610 older persons without prevalent HF enrolled in the Health ABC (Health, Aging, and Body Composition) study (age 73.6 +/- 2.9 years; 48.3% men; 59.6% white). RESULTS: During follow-up (median 9.4 years), HF developed in 311 (11.9%) participants. In models controlling for clinical characteristics, ankle-arm index, and incident coronary heart disease, doubling of IL-6, tumor necrosis factor-alpha, and CRP concentrations was associated with a 29% (95% confidence interval: 13% to 47%; p < 0.001), 46% (95% confidence interval: 17% to 84%; p = 0.001), and 9% (95% confidence interval: -1% to 24%; p = 0.087) increase in HF risk, respectively. In models including all 3 markers, IL-6 and tumor necrosis factor-alpha, but not CRP, remained significant. These associations were similar across sex and race and persisted in models accounting for death as a competing event. Post-HF ejection fraction was available in 239 (76.8%) cases; inflammatory markers had a stronger association with HF with preserved ejection fraction. Repeat IL-6 and CRP determinations at 1-year follow-up did not provide incremental information. Addition of IL-6 to the clinical Health ABC HF model improved model discrimination (C index from 0.717 to 0.734; p = 0.001) and fit (decreased Bayes information criterion by 17.8; p < 0.001). CONCLUSIONS: Inflammatory markers are associated with HF risk among older adults and may improve HF risk stratification.
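The discrimination gain quoted above is measured by the C index. For a binary outcome it reduces to the fraction of concordant (risk score, outcome) pairs; a minimal sketch with made-up scores:

```python
def c_index(risk, event):
    """Concordance for a binary outcome: among all (event, non-event) pairs,
    the fraction in which the event subject received the higher risk score;
    tied scores count one half."""
    concordant = ties = pairs = 0
    for ri, ei in zip(risk, event):
        for rj, ej in zip(risk, event):
            if ei == 1 and ej == 0:
                pairs += 1
                if ri > rj:
                    concordant += 1
                elif ri == rj:
                    ties += 1
    return (concordant + 0.5 * ties) / pairs

# Hypothetical risk scores for six subjects, three of whom developed HF:
event = [1, 1, 1, 0, 0, 0]
risk = [0.9, 0.4, 0.3, 0.5, 0.2, 0.1]
print(c_index(risk, event))  # 7 of 9 pairs concordant, ≈ 0.778
```

A model whose scores always rank event subjects above non-event subjects would score 1.0; random scores give about 0.5.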

Relevance: 80.00%

Abstract:

This thesis deals with the Bayesian analysis of functional data in a hydrological context. The main objective is to model streamflow data parsimoniously while adequately reproducing their statistical characteristics. The functional data analysis perspective leads us to treat streamflow time series as functions to be modelled with a nonparametric method. First, the functions are made more homogeneous by synchronising them. Then, with a sample of homogeneous curves in hand, we model their statistical characteristics using Bayesian regression splines in a fairly general probabilistic framework. More specifically, we study a family of continuous distributions, which includes those of the exponential family, from which the observations may arise. In addition, to obtain a flexible nonparametric modelling tool, we treat the interior knots, which define the elements of the regression-spline basis, as random quantities. We then use reversible-jump MCMC to explore the posterior distribution of the interior knots. To simplify this procedure in our general modelling context, we consider approximations of the marginal distribution of the observations, namely an approximation based on Schwarz's information criterion and another based on the Laplace approximation. Beyond modelling the central tendency of a sample of curves, we also propose a methodology for modelling the central tendency and the dispersion of these curves simultaneously, within our general probabilistic framework.
Finally, since we study a variety of statistical distributions at the observation level, we put forward an approach for determining the most adequate distributions for a given sample of curves.
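Of the two marginal-likelihood approximations, the one based on Schwarz's information criterion is easy to state: log m(y) is replaced by the maximized log-likelihood minus (k/2) log n. A toy comparison of two hypothetical knot configurations (all numbers invented for illustration):

```python
import math

def log_marginal_bic(loglik_hat, k, n):
    """Schwarz-criterion approximation to the log marginal likelihood:
    log m(y) ≈ l(theta_hat) - (k / 2) * log(n)."""
    return loglik_hat - 0.5 * k * math.log(n)

# Two hypothetical knot configurations: B gains 3 log-likelihood units
# at the cost of 2 extra parameters, with n = 150 observations.
n = 150
m_a = log_marginal_bic(-412.0, 5, n)
m_b = log_marginal_bic(-409.0, 7, n)

# Posterior probability of configuration A under equal prior odds:
p_a = 1.0 / (1.0 + math.exp(m_b - m_a))
```

Here the penalty of 2 × (log 150)/2 ≈ 5 outweighs the 3-unit likelihood gain, so the simpler configuration is favoured, which is the behaviour a reversible-jump sampler over knots inherits when this approximation is used.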

Relevance: 80.00%

Abstract:

With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are used, this wealth of information comes with the problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse-response analysis? Finally, can factor analysis be applied to random parameters; for instance, is there only a small number of sources of time instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA.
This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Previous studies based on VAR models found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying monetary transmission, and that it helps to correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the dataset, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the last economic crisis, research on the role of the financial sector has regained importance. In the second article, we examine the effects and the propagation of credit shocks on the real economy, using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills and causes a recession. These shocks have an important effect on measures of real activity, price indices, leading indicators and financial variables. Unlike other studies, our identification procedure for the structural shock does not require timing restrictions between financial and macroeconomic factors.
Moreover, it yields an interpretation of the factors without restricting their estimation. In the third article, we study the relationship between the VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, combining these two parameter-reduction methods. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA part helps to better forecast the major macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives coherent and precise results on the effects and transmission of monetary policy in the United States. Unlike the FAVAR model used in the earlier study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters in the dynamic factor process. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment, using the structural FAVARMA model.
Within the financial-accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we proxy the external finance premium with credit spreads. On one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component seems to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance-decomposition analysis reveals that this credit shock has an important effect on different sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected rise in the Canadian external finance premium has no significant effect in Canada: we show that the effects of credit shocks in Canada are essentially driven by global conditions, proxied here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors. The behaviour of agents and of the economic environment may vary over time (e.g. changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In the last article, we show that the number of sources of time variability in the coefficients is probably very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR).
We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis; the procedure now suggests two factors, and the behaviour of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model, where we find that only 5 dynamic factors govern the time instability in almost 700 coefficients.
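A minimal sketch of the factor-extraction step that underlies all five articles, assuming a one-factor data-generating process and using plain power iteration in place of full principal-component analysis (all data simulated, all parameter values hypothetical):

```python
import math
import random

# Simulate a small panel x_it = lam_i * f_t + noise with one common factor.
random.seed(0)
N, T = 10, 300
f = [random.gauss(0, 1) for _ in range(T)]            # common factor
lam = [random.uniform(0.5, 1.5) for _ in range(N)]    # loadings
X = [[lam[i] * f[t] + 0.3 * random.gauss(0, 1) for t in range(T)]
     for i in range(N)]

# Demean each series and form the N x N covariance matrix.
means = [sum(row) / T for row in X]
Xc = [[X[i][t] - means[i] for t in range(T)] for i in range(N)]
S = [[sum(Xc[a][t] * Xc[b][t] for t in range(T)) / T for b in range(N)]
     for a in range(N)]

# Power iteration: converge to the leading eigenvector of S.
v = [1.0] * N
for _ in range(200):
    w = [sum(S[a][b] * v[b] for b in range(N)) for a in range(N)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# Estimated factor: projection of the panel on the leading eigenvector.
f_hat = [sum(v[i] * Xc[i][t] for i in range(N)) for t in range(T)]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    ca = [x - ma for x in a]
    cb = [x - mb for x in b]
    return sum(p * q for p, q in zip(ca, cb)) / math.sqrt(
        sum(p * p for p in ca) * sum(q * q for q in cb))

print(abs(corr(f_hat, f)))  # close to 1: the factor is recovered up to sign
```

In a FAVAR or FAVARMA application, f_hat (with several factors rather than one) would then enter the VAR or VARMA stage alongside the observed policy variables.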

Relevance: 80.00%

Abstract:

The rate at which a given site in a gene sequence alignment evolves over time may vary. This phenomenon, known as heterotachy, can bias or distort phylogenetic trees inferred from models of sequence evolution that assume rates of evolution are constant. Here, we describe a phylogenetic mixture model designed to accommodate heterotachy. The method sums the likelihood of the data at each site over more than one set of branch lengths on the same tree topology. A branch-length set that is best for one site may differ from the branch-length set that is best for some other site, thereby allowing different sites to have different rates of change throughout the tree. Because rate variation may not be present in all branches, we use a reversible-jump Markov chain Monte Carlo algorithm to identify those branches in which reliable amounts of heterotachy occur. We implement the method in combination with our 'pattern-heterogeneity' mixture model, applying it to simulated data and five published datasets. We find that complex evolutionary signals of heterotachy are routinely present over and above variation in the rate or pattern of evolution across sites, that the reversible-jump method requires far fewer parameters than conventional mixture models to describe them, and that it serves to identify the regions of the tree in which heterotachy is most pronounced. The reversible-jump procedure also removes the need for a posteriori tests of 'significance', such as the Akaike or Bayesian information criterion tests or Bayes factors. Heterotachy has important consequences for the correct reconstruction of phylogenies, as well as for tests of hypotheses that rely on accurate branch-length information; these include molecular clocks, analyses of tempo and mode of evolution, comparative studies and ancestral state reconstruction. The model is available from the authors' website and can be used for the analysis of both nucleotide and morphological data.
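The summed-likelihood idea can be made concrete for the simplest possible case: a single pair of sequences under the Jukes-Cantor model, with the site likelihood averaged over two candidate branch lengths. The lengths, weights and site pattern below are hypothetical; the actual model mixes whole branch-length sets on a tree.

```python
import math

def jc69_site_lik(differ, d):
    """Jukes-Cantor likelihood of one aligned site for a pair of sequences
    separated by branch length d (probability the two states differ or match)."""
    p_diff = 0.75 * (1.0 - math.exp(-4.0 * d / 3.0))
    return p_diff if differ else 1.0 - p_diff

def mixture_loglik(sites, lengths, weights):
    """Mixture idea from the abstract: each site's likelihood is summed over
    several candidate branch lengths before taking the log."""
    return sum(math.log(sum(w * jc69_site_lik(s, d)
                            for d, w in zip(lengths, weights)))
               for s in sites)

# 100 sites, 20 of which differ between the two sequences (hypothetical data):
sites = [False] * 80 + [True] * 20
single = mixture_loglik(sites, [0.25], [1.0])          # one branch length
mixed = mixture_loglik(sites, [0.05, 0.60], [0.5, 0.5])  # two-component mixture
```

The reversible-jump step then decides, branch by branch, whether the extra components are worth their parameters, rather than comparing `single` and `mixed` with a post hoc information criterion.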

Relevance: 80.00%

Abstract:

Estimation of a population size by means of capture-recapture techniques is an important problem occurring in many areas of the life and social sciences. We consider the frequencies-of-frequencies situation, in which a count variable summarises how often a unit has been identified in the target population of interest. The distribution of this count variable is zero-truncated, since zero identifications do not occur in the sample. As an application we consider the surveillance of scrapie in Great Britain. In this case study, holdings with scrapie that are not identified (zero counts) do not enter the surveillance database; the count variable of interest is the number of scrapie cases per holding. A common model for count distributions is the Poisson distribution and, to adjust for potential heterogeneity, a discrete mixture of Poisson distributions is used. Mixtures of Poissons usually provide an excellent fit, as will be demonstrated in the application of interest. However, as has recently been demonstrated, mixtures also suffer from the so-called boundary problem, which results in overestimation of population size. It is suggested here to select the mixture model on the basis of the Bayesian information criterion. This strategy is further refined by employing a bagging procedure leading to a series of estimates of population size; using the median of this series, highly influential size estimates are avoided. In limited simulation studies it is shown that the procedure leads to estimates with remarkably small bias.
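The single-component case behind that mixture can be sketched: fit a zero-truncated Poisson to the observed counts, then scale up by the estimated probability of observing a holding at all. This is a deliberately simplified, homogeneous version; the mixture, BIC selection and bagging steps of the paper are omitted, and all data are simulated.

```python
import math
import random

def ztp_lambda(mean_obs, lo=1e-6, hi=50.0):
    """MLE of the Poisson mean from a zero-truncated sample: solve
    lambda / (1 - exp(-lambda)) = observed mean by bisection
    (the left-hand side is increasing in lambda)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid / (1.0 - math.exp(-mid)) < mean_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def poisson(lam):
    """Inverse-CDF Poisson sampler (adequate for small lambda)."""
    u, p, c, k = random.random(), math.exp(-lam), math.exp(-lam), 0
    while u > c:
        k += 1
        p *= lam / k
        c += p
    return k

random.seed(7)
n_true = 1000                                # true number of holdings
counts = [poisson(1.0) for _ in range(n_true)]
observed = [c for c in counts if c > 0]      # zero counts never enter the data
lam_hat = ztp_lambda(sum(observed) / len(observed))
n_hat = len(observed) / (1.0 - math.exp(-lam_hat))   # Horvitz-Thompson-style
```

With a true mean of 1.0, roughly 37% of holdings are never observed, yet `n_hat` recovers the total close to 1000; heterogeneity is what forces the move from this single Poisson to a mixture.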

Relevance: 80.00%

Abstract:

The calculation of interval forecasts for highly persistent autoregressive (AR) time series based on the bootstrap is considered. Three methods are considered for countering the small-sample bias of least-squares estimation for processes with roots close to the unit circle: a bootstrap bias-corrected OLS estimator; the use of the Roy–Fuller estimator in place of OLS; and the use of the Andrews–Chen estimator in place of OLS. All three methods of bias correction yield results superior to the bootstrap without bias correction. Of the three correction methods, the bootstrap prediction intervals based on the Roy–Fuller estimator are generally superior to the other two. The small-sample performance of bootstrap prediction intervals based on the Roy–Fuller estimator is also investigated when the order of the AR model is unknown and has to be determined using an information criterion.

Relevance: 80.00%

Abstract:

Various studies have indicated a relationship between enteric methane (CH4) production and the milk fatty acid (FA) profiles of dairy cattle. However, the number of studies investigating such a relationship is limited, and the direct relationships reported were mainly obtained from variation in CH4 production and milk FA concentration induced by dietary lipid supplements. The aim of this study was to perform a meta-analysis to quantify relationships between CH4 yield (per unit of feed and per unit of milk) and milk FA profile in dairy cattle, and to develop equations to predict CH4 yield based on the milk FA profile of cows fed a wide variety of diets. Data from 8 experiments encompassing 30 different dietary treatments and 146 observations were included. Yield of CH4 measured in these experiments was 21.5 ± 2.46 g/kg of dry matter intake (DMI) and 13.9 ± 2.30 g/kg of fat- and protein-corrected milk (FPCM). Correlation coefficients were chosen as the effect size of the relationship between CH4 yield and individual milk FA concentrations (g/100 g of FA). Average true correlation coefficients were estimated by a random-effects model. Milk FA concentrations of C6:0, C8:0, C10:0, C16:0, and C16:0-iso were significantly, or tended to be, positively related to CH4 yield per unit of feed. Concentrations of trans-6+7+8+9 C18:1, trans-10+11 C18:1, cis-11 C18:1, cis-12 C18:1, cis-13 C18:1, trans-16+cis-14 C18:1, and cis-9,12 C18:2 in milk fat were significantly, or tended to be, negatively related to CH4 yield per unit of feed. Milk FA concentrations of C10:0, C12:0, C14:0-iso, C14:0, cis-9 C14:1, C15:0, and C16:0 were significantly, or tended to be, positively related to CH4 yield per unit of milk. Concentrations of C4:0, C18:0, trans-10+11 C18:1, cis-9 C18:1, cis-11 C18:1, and cis-9,12 C18:2 in milk fat were significantly, or tended to be, negatively related to CH4 yield per unit of milk.
Mixed-model multiple regression and a stepwise selection procedure for milk FA based on the Bayesian information criterion to predict CH4 yield with milk FA as input (g/100 g of FA) resulted in the following prediction equations: CH4 (g/kg of DMI) = 23.39 + 9.74 × C16:0-iso − 1.06 × trans-10+11 C18:1 − 1.75 × cis-9,12 C18:2 (R2 = 0.54), and CH4 (g/kg of FPCM) = 21.13 − 1.38 × C4:0 + 8.53 × C16:0-iso − 0.22 × cis-9 C18:1 − 0.59 × trans-10+11 C18:1 (R2 = 0.47). This indicates that the milk FA profile has a moderate potential for predicting CH4 yield per unit of feed and a slightly lower potential for predicting CH4 yield per unit of milk. Key words: methane, milk fatty acid profile, meta-analysis, dairy cattle
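The two reported equations can be coded directly; the input profile below is hypothetical and only illustrates the units (milk FA concentrations in g/100 g of FA).

```python
def ch4_per_kg_dmi(c16_0_iso, t10_11_c18_1, c9_12_c18_2):
    """CH4 (g/kg of DMI) from the first prediction equation above."""
    return 23.39 + 9.74 * c16_0_iso - 1.06 * t10_11_c18_1 - 1.75 * c9_12_c18_2

def ch4_per_kg_fpcm(c4_0, c16_0_iso, c9_c18_1, t10_11_c18_1):
    """CH4 (g/kg of FPCM) from the second prediction equation above."""
    return (21.13 - 1.38 * c4_0 + 8.53 * c16_0_iso
            - 0.22 * c9_c18_1 - 0.59 * t10_11_c18_1)

# Hypothetical milk FA profile (g/100 g of FA):
print(ch4_per_kg_dmi(0.2, 1.0, 1.2))          # ≈ 22.18 g/kg DMI
print(ch4_per_kg_fpcm(3.0, 0.2, 20.0, 1.0))   # ≈ 13.71 g/kg FPCM
```

Both predictions fall within roughly one standard deviation of the reported means (21.5 ± 2.46 and 13.9 ± 2.30), as expected for an unremarkable profile.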

Relevance: 80.00%

Abstract:

This work is an assessment of the frequency of extreme values (EVs) of daily rainfall in the city of São Paulo, Brazil, over the period 1933-2005, based on the peaks-over-threshold (POT) and generalized Pareto distribution (GPD) approach. Usually, a GPD model is fitted to a sample of POT values selected with a constant threshold. However, in this work we use time-dependent thresholds, composed of relatively large p quantiles (for example, p of 0.97) of daily rainfall amounts computed from all available data. Samples of POT values were extracted with several values of p. Four different GPD models (GPD-1, GPD-2, GPD-3, and GPD-4) were fitted to each of these samples by the maximum likelihood (ML) method. The shape parameter was assumed constant for the four models, but time-varying covariates were incorporated into the scale parameter of GPD-2, GPD-3, and GPD-4, describing an annual cycle in GPD-2, a linear trend in GPD-3, and both annual cycle and linear trend in GPD-4. GPD-1, with constant scale and shape parameters, is the simplest model. For identification of the best model among the four we used the rescaled Akaike information criterion (AIC) with second-order bias correction. This criterion isolates GPD-3 as the best model, i.e. the one with a positive linear trend in the scale parameter. The slope of this trend is significant compared to the null hypothesis of no trend, at about the 98% confidence level. The non-parametric Mann-Kendall test also showed the presence of a positive trend in the annual frequency of excesses over high thresholds, with the p-value being virtually zero. Therefore, there is strong evidence that high quantiles of daily rainfall in the city of São Paulo have been increasing in magnitude and frequency over time. For example, the 0.99 quantile of daily rainfall amount increased by about 40 mm between 1933 and 2005. Copyright (C) 2008 Royal Meteorological Society
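The GPD machinery behind these results is compact: for shape ξ ≠ 0 and scale σ, the exceedance probability of an excess over the threshold has a closed form, and a positive trend in σ (as in GPD-3) pushes high quantiles upward. A sketch with hypothetical parameter values:

```python
def gpd_exceed_prob(x, xi, sigma):
    """P(excess > x) under a GPD with shape xi (xi != 0) and scale sigma."""
    return (1.0 + xi * x / sigma) ** (-1.0 / xi)

def gpd_quantile(p, xi, sigma):
    """Excess exceeded with probability p: inverse of gpd_exceed_prob."""
    return (sigma / xi) * (p ** (-xi) - 1.0)

# A positive linear trend in the scale parameter raises extreme quantiles;
# the shape and the two scale values here are purely illustrative.
xi = 0.1
q_early = gpd_quantile(0.01, xi, 10.0)   # smaller scale (early period)
q_late = gpd_quantile(0.01, xi, 14.0)    # larger scale (late period)
```

With a time-dependent threshold, the fitted quantile of total rainfall is the threshold for that day plus the GPD excess quantile, which is how a trend in σ translates into the roughly 40 mm rise reported for the 0.99 quantile.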

Relevance: 80.00%

Abstract:

São Paulo is the most developed state in Brazil and contains few fragments of native ecosystems, generally surrounded by intensive agricultural land. Despite this, some areas still shelter large native animals. We aimed at understanding how medium and large carnivores use a mosaic landscape of forest/savanna and agroecosystems, and how the species respond to different landscape parameters (percentage of land cover and edge density) in a multi-scale perspective. The response variables were species richness, carnivore frequency, and the frequency of the three most recorded species (Puma concolor, Chrysocyon brachyurus and Leopardus pardalis). We compared 11 competing models using Akaike's information criterion (AIC) and assessed model support using the weight of AIC. Concurrent models were combinations of land-cover type (native vegetation, "cerrado" formations, "cerrado" and eucalypt plantation), landscape feature (percentage of land cover and edge density) and spatial scale. Herein, spatial scale refers to the radius around a sampling point defining a circular landscape; the scales analyzed were 250 m (fine), 1,000 m (medium) and 2,000 m (coarse). The shape of the curves for the response variables (linear, exponential and power) was also assessed. Our results indicate that the species with high mobility, P. concolor and C. brachyurus, were best explained by the edge density of native vegetation at the coarse scale (2,000 m). The frequencies of P. concolor and C. brachyurus had a negative power-shaped response to the explanatory variables; this general trend was also observed for species richness and carnivore frequency. Species richness and P. concolor frequency were also well explained by a second concurrent model: edge density of cerrado at the fine (250 m) scale. A different response was recorded for L. pardalis, whose frequency was best explained by the amount of cerrado at the fine (250 m) scale; the curve of response was linearly positive. The contrasting results (P. concolor and C. brachyurus vs L. pardalis) may be due to the much higher mobility of the first two species in comparison with the third; L. pardalis also requires higher-quality habitat than the other two species. This study highlights the importance of considering multiple spatial scales when evaluating species responses to different habitats. An important and new finding was the prevalence of edge density over habitat extent in explaining overall carnivore distribution, key information for the planning and management of protected areas.
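Model support via the weight of AIC, used above to compare the 11 competing models, is a one-line transformation of AIC differences; the AIC values below are hypothetical.

```python
import math

def akaike_weights(aics):
    """Akaike weights: relative support for each model in a candidate set,
    computed from AIC differences to the best (lowest-AIC) model."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC values for four competing landscape models:
w = akaike_weights([210.4, 212.1, 215.9, 220.0])
```

The weights sum to one and can be read as the probability, within the candidate set, that each model is the best approximating model; closely spaced AIC values (differences below about 2) yield comparable weights, which is why the second concurrent models above retain "a good chance" of selection.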

Relevance: 80.00%

Abstract:

1. Analyses of species association have major implications for selecting indicators for freshwater biomonitoring and conservation, because they allow for the elimination of redundant information and a focus on taxa that can be easily handled and identified. These analyses are particularly relevant in the debate about using speciose groups (such as the Chironomidae) as indicators in the tropics, because such groups require difficult and time-consuming analysis, and their responses to environmental gradients, including anthropogenic stressors, are poorly known. 2. Our objective was to show whether chironomid assemblages in Neotropical streams include clear associations of taxa and, if so, how well these associations can be explained by a set of models containing information from different spatial scales. For this, we formulated a priori models that allowed for the influence of local, landscape and spatial factors on chironomid taxon associations (CTA). These models represented biological hypotheses capable of explaining associations between chironomid taxa. For instance, CTA could be best explained by local variables (e.g. pH, conductivity and water temperature) or by processes acting at wider landscape scales (e.g. percentage of forest cover). 3. Biological data were taken from 61 streams in Southeastern Brazil, 47 of which were in well-preserved regions and 14 of which drained areas severely affected by anthropogenic activities. We adopted a model selection procedure using Akaike's information criterion to determine the most parsimonious models for explaining CTA. 4. Applying Kendall's coefficient of concordance, seven genera (Tanytarsus/Caladomyia, Ablabesmyia, Parametriocnemus, Pentaneura, Nanocladius, Polypedilum and Rheotanytarsus) were identified as associated taxa. The best-supported model explained 42.6% of the total variance in the abundance of the associated taxa.
This model combined local and landscape environmental filters and spatial variables (derived from eigenfunction analysis). However, the model with local filters and spatial variables also had a good chance of being selected as the best model. 5. Standardised partial regression coefficients of local and landscape filters, including spatial variables, derived from model averaging allowed an estimation of which variables were best correlated with the abundance of the associated taxa. In general, the abundance of the associated genera tended to be lower in streams characterised by a high percentage of forest cover (landscape scale), a lower proportion of muddy substrata, and high values of pH and conductivity (local scale). 6. Overall, our main result adds to the increasing number of studies that have indicated the importance of local and landscape variables, as well as the spatial relationships among sampling sites, for explaining aquatic insect community patterns in streams. Furthermore, our findings open new possibilities for the elimination of redundant data in the assessment of anthropogenic impacts on tropical streams.
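Kendall's coefficient of concordance, used in point 4 to identify the associated genera, can be computed directly for complete rankings; the rankings below are illustrative only (the study applies it to taxon abundances across streams).

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m complete rankings
    (no ties) of the same n items: W = 12 S / (m^2 (n^3 - n))."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[j] for r in rankings) for j in range(n)]
    mean_total = m * (n + 1) / 2.0
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical sites ranking four genera identically give W = 1;
# two perfectly opposed rankings give W = 0.
w_agree = kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
w_oppose = kendalls_w([[1, 2, 3, 4], [4, 3, 2, 1]])
```

Taxa whose abundances rank concordantly across sites (W near 1) covary and can be treated as an association, which is the basis for dropping redundant taxa from a monitoring protocol.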

Relevance: 80.00%

Abstract:

In this article, we present the EM algorithm for performing maximum likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study is conducted to evaluate the performance of the calibration estimator in interpolation and extrapolation situations. As an application to a real data set, we fitted the model to a dimensional measurement method in which testicular volume is calculated with a caliper, calibrated against ultrasonography as the standard method. With this methodology, we do not need to transform the variables to obtain symmetrical errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular when the skewness parameter is near zero leaves the parameter of interest unchanged. Model fitting is implemented, and the best choice between the usual calibration model and the model proposed in this article is evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion and the Hannan-Quinn criterion.
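The three criteria used for that final model choice differ only in their complexity penalties. A sketch with hypothetical log-likelihoods, in which the skew-normal model gains enough likelihood over the usual (symmetric) model to be preferred under all three:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: -2 l + 2 k."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Schwarz's Bayesian information criterion: -2 l + k log n."""
    return -2.0 * loglik + k * math.log(n)

def hannan_quinn(loglik, k, n):
    """Hannan-Quinn criterion: -2 l + 2 k log log n."""
    return -2.0 * loglik + 2.0 * k * math.log(math.log(n))

# Hypothetical fits: the skew-normal calibration model adds one parameter
# (skewness) and gains 4 log-likelihood units over the usual model, n = 120.
n = 120
usual = (-310.0, 3)   # (maximized log-likelihood, number of parameters)
skew = (-306.0, 4)
```

Lower is better for all three; because the penalties grow at different rates (2, log n, 2 log log n per parameter), the criteria can disagree in borderline cases, which is why reporting all three is informative.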

Relevance: 80.00%

Abstract:

A correlation between the physicochemical properties of mono- [Li(I), K(I), Na(I)] and divalent [Cd(II), Cu(II), Mn(II), Ni(II), Co(II), Zn(II), Mg(II), Ca(II)] metal cations and their toxicity (evaluated by the free-ion median effective concentration, EC50(F)) to the naturally bioluminescent fungus Gerronema viridilucens has been studied using the quantitative ion character-activity relationship (QICAR) approach. Among the 11 ionic parameters used in the current study, a univariate model based on the covalent index (Xm²r) proved the most adequate for predicting fungal metal toxicity, evaluated by the logarithm of the free-ion median effective concentration (log EC50(F)): log EC50(F) = 4.243 (±0.243) − 1.268 (±0.125)·Xm²r (adj-R² = 0.9113, Akaike information criterion [AIC] = 60.42). Additional two- and three-variable models were also tested and proved less suitable for fitting the experimental data. These results indicate that covalent bonding is a good indicator of the inherent toxicity of metals to bioluminescent fungi. Furthermore, the toxicity of additional metal ions [Ag(I), Cs(I), Sr(II), Ba(II), Fe(II), Hg(II), and Pb(II)] to G. viridilucens was predicted, and Pb was found to be the most toxic metal to this bioluminescent fungus (by EC50(F)): Pb(II) > Ag(I) > Hg(II) > Cd(II) > Cu(II) > Co(II) Ni(II) > Mn(II) > Fe(II) ≈ Zn(II) > Mg(II) ≈ Ba(II) ≈ Cs(I) > Li(I) > K(I) ≈ Na(I) ≈ Sr(II) > Ca(II). Environ. Toxicol. Chem. 2010;29:2177-2181. (C) 2010 SETAC
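Since the selected QICAR model is a single line in the covalent index, the reported fit can be applied directly; the two index values below are hypothetical and only illustrate the direction of the relationship.

```python
def log_ec50_f(covalent_index):
    """Fitted univariate QICAR model from the abstract:
    log EC50(F) = 4.243 - 1.268 * Xm^2 r (covalent index as input)."""
    return 4.243 - 1.268 * covalent_index

# A larger covalent index predicts a smaller EC50(F), i.e. a more toxic ion:
softer = log_ec50_f(2.5)   # hypothetical high covalent index (soft ion)
harder = log_ec50_f(0.5)   # hypothetical low covalent index (hard ion)
```

This is how the toxicities of the seven untested ions were predicted: compute each ion's covalent index, plug it into the line, and rank the resulting EC50(F) values.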

Relevance: 80.00%

Abstract:

Background: Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature), and then called macro-environmental, or unknown, and then called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate the bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of Akaike's information criterion based on h-likelihood to select the best-fitting model. Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in the environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for the residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as a model selection criterion using the approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate the bias and precision of the estimated genetic parameters. Results: Designs with 100 sires, each with at least 100 offspring, are required to obtain standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed in the estimates of any of the parameters.
Using Akaike's information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities exists. Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.