931 results for phylogeographical hypothesis testing
Abstract:
Copepod assemblages from two cascade reservoirs were analyzed during two consecutive years. The upstream reservoir (Chavantes) is a storage system with a long water retention time (WRT of 400 days), while the downstream one (Salto Grande) is a run-of-river system with a WRT of only 1.5 days. Copepod composition, richness, abundance, and diversity were correlated with limnological variables and with hydrological and morphometric features. Standard methods were employed for zooplankton sampling and analysis (vertical 50-μm net hauls and counting under a stereomicroscope). Two hypotheses were postulated and confirmed by the data obtained: (1) compartmentalization is more pronounced in the storage reservoir and determines the differences in copepod assemblage structure; and (2) the assemblages are more homogeneous in the run-of-river reservoir, where abundance decreases because of the predominance of washout effects. In both reservoirs, the upstream zone is the most distinctive. In addition, in the smaller reservoir the influence of tributary inputs is stronger (turbid waters). Richness did not differ significantly among seasons, but abundance was higher in the run-of-river reservoir during summer. © 2012 Springer Science+Business Media Dordrecht.
Abstract:
Predation is a primary driver of tadpole assemblages, and activity rate is a good predictor of tadpoles' tolerance for predation risk. The conflicting demands between activity and exposure to predation can generate suboptimal behaviours. Because morphological components, such as body colouration, may affect the activity of tadpoles, we predicted that environmental features that enhance or match the tadpole colouration should affect their survival or activity rate in the presence of a predator. We tested this prediction experimentally by assessing the mortality rate and active time of tadpoles of Rhinella schneideri and Eupemphix nattereri on two artificial background types: one bright-coloured and one black-coloured. We found no difference in tadpole mortality due to background type. However, R. schneideri tadpoles were more active than E. nattereri tadpoles, and the activity of R. schneideri was reduced less in the presence of the predator than that of E. nattereri. Although the background colouration did not affect the tadpole mortality rate, it was a stimulus that elicited behavioural responses in the tadpoles, leading them to adjust their activity rate to the type of background colour. © 2013 Dipartimento di Biologia, Università degli Studi di Firenze, Italia.
Abstract:
Background: Arboviral diseases are major global public health threats, yet our understanding of infection risk factors is, with a few exceptions, considerably limited. A crucial shortcoming is the widespread use of analytical methods generally not suited for observational data, particularly null hypothesis testing (NHT) and step-wise regression (SWR). Using Mayaro virus (MAYV) as a case study, here we compare information theory-based multimodel inference (MMI) with conventional analyses for arboviral infection risk factor assessment. Methodology/Principal Findings: A cross-sectional survey of anti-MAYV antibodies revealed 44% prevalence (n = 270 subjects) in a central Amazon rural settlement. NHT suggested that residents of village-like household clusters and those using closed toilets/latrines were at higher risk, while living in non-village-like areas, using bednets, and owning fowl, pigs or dogs were protective. The "minimum adequate" SWR model retained only residence area and bednet use. Using MMI, we identified relevant covariates, quantified their relative importance, and estimated effect sizes (β ± SE) on which to base inference. Residence area (β_Village = 2.93 ± 0.41; β_Upland = -0.56 ± 0.33; β_Riverbanks = -2.37 ± 0.55) and bednet use (β = -0.95 ± 0.28) were the most important factors, followed by crop-plot ownership (β = 0.39 ± 0.22) and regular use of a closed toilet/latrine (β = 0.19 ± 0.13); domestic animals had insignificant protective effects and were relatively unimportant. The SWR model ranked fifth among the 128 models in the final MMI set. Conclusions/Significance: Our analyses illustrate how MMI can enhance inference on infection risk factors when compared with NHT or SWR.
MMI indicates that forest crop-plot workers are likely exposed to typical MAYV cycles maintained by diurnal, forest dwelling vectors; however, MAYV might also be circulating in nocturnal, domestic-peridomestic cycles in village-like areas. This suggests either a vector shift (synanthropic mosquitoes vectoring MAYV) or a habitat/habits shift (classical MAYV vectors adapting to densely populated landscapes and nocturnal biting); any such ecological/adaptive novelty could increase the likelihood of MAYV emergence in Amazonia.
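The MMI workflow described above (fit a set of candidate models, convert AIC differences into Akaike weights, and sum the weights per covariate to rank importance) can be illustrated with a toy example. This is a minimal sketch on simulated data, using a deliberately crude gradient-ascent logistic fitter and hypothetical covariate names that merely echo the study's; it is not the authors' analysis.

```python
import itertools
import numpy as np

def fit_logistic(X, y, n_iter=500, lr=0.1):
    """Crude gradient-ascent logistic fit; returns the maximized log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p) / len(y)
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

rng = np.random.default_rng(0)
n = 200
covs = {"village": rng.integers(0, 2, n),     # hypothetical binary covariates
        "bednet": rng.integers(0, 2, n),
        "crop_plot": rng.integers(0, 2, n)}
logit = -0.5 + 2.0 * covs["village"] - 1.0 * covs["bednet"]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Fit every subset of the candidate covariates and record each model's AIC.
names = list(covs)
results = []
for r in range(len(names) + 1):
    for subset in itertools.combinations(names, r):
        X = np.column_stack([np.ones(n)] + [covs[c] for c in subset])
        ll = fit_logistic(X, y)
        results.append((subset, 2 * (1 + len(subset)) - 2 * ll))

# Akaike weights: relative support for each model in the set.
aics = np.array([a for _, a in results])
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()

# Relative importance of a covariate: summed weight of models containing it.
importance = {c: float(w[[i for i, (s, _) in enumerate(results) if c in s]].sum())
              for c in names}
print(importance)
```

Because "village" has a strong simulated effect, virtually all the Akaike weight concentrates on models that include it, so its summed weight approaches 1, while the noise covariate's importance stays low.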
Abstract:
Assessing variance components is essential when deciding on the inclusion of random effects in the context of mixed models. In this work we discuss this problem for nonlinear elliptical models for correlated data, using the score-type test proposed in Silvapulle and Silvapulle (1995). Being asymptotically equivalent to the likelihood ratio test and requiring estimation only under the null hypothesis, this test provides an easily computable alternative for assessing one-sided hypotheses in the context of the marginal model. To account for possible non-normality, we assume that the joint distribution of the response variable and the random effects lies in the elliptical class, which includes light-tailed and heavy-tailed distributions such as the Student-t, power exponential, logistic, generalized Student-t, generalized logistic, and contaminated normal, as well as the normal itself, among others. We compare the sensitivity of the score-type test under normal, Student-t, and power exponential models for the kinetics data set discussed in Vonesh and Carter (1992) and fitted using the model presented in Russo et al. (2009). A simulation study is also performed to analyze the consequences of kurtosis misspecification.
Abstract:
To estimate causal relationships, time series econometricians must be aware of spurious correlation, a problem first noted by Yule (1926). To deal with it, one can work either with differenced series or with multivariate models: VAR and VEC (or VECM) models. These models usually include at least one cointegration relation. Although the Bayesian literature on VAR/VEC models is quite advanced, Bauwens et al. (1999) highlighted that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results". The present article applies the Full Bayesian Significance Test (FBST), especially designed to deal with sharp hypotheses, to cointegration rank selection in VECM time series models. It demonstrates the FBST implementation using both simulated and published data sets. As an illustration, standard non-informative priors are used.
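The FBST evidence value for a sharp hypothesis can be sketched in a few lines: find the supremum of the posterior density over the null set, then compute one minus the posterior probability of the region where the density exceeds that supremum. Below is a minimal Monte Carlo sketch for a toy normal-mean problem with a flat prior and known variance; all data are hypothetical and the setting is far simpler than the paper's VECM cointegration-rank application.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.0, size=50)      # hypothetical observed sample
n, xbar = len(x), x.mean()

# With a flat prior and unit variance, the posterior is mu | x ~ N(xbar, 1/n).
post_pdf = lambda mu: np.sqrt(n / (2 * np.pi)) * np.exp(-0.5 * n * (mu - xbar) ** 2)

# Supremum of the posterior density over H0: mu = 0 (a single point here).
f0 = post_pdf(0.0)

# Tangential set T = {mu : posterior density > f0}; the e-value is
# 1 - P(T | x), estimated by Monte Carlo draws from the posterior.
draws = rng.normal(xbar, 1.0 / np.sqrt(n), size=100_000)
ev = 1.0 - np.mean(post_pdf(draws) > f0)
print(ev)   # small ev is evidence against the sharp hypothesis
```

For this conjugate toy case the e-value has a closed form, so the Monte Carlo step is only illustrative; in the VECM setting the posterior must be simulated.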
Abstract:
In this paper, we present approximate distributions for the ratio of cumulative wavelet periodograms for stationary and non-stationary time series generated from independent Gaussian processes. We also adapt an existing procedure to use this statistic and its approximate distribution to test whether two regularly or irregularly spaced time series are realizations of the same generating process. Simulation studies show good size and power properties for the test statistic. An application to financial microdata illustrates the test's usefulness. We conclude by advocating the use of these approximate distributions instead of those obtained through randomization, mainly in the case of irregular time series. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Aim: to evaluate the association of antenatal depressive symptomatology (AD) with life events and coping styles; the hypothesis was that certain coping strategies are associated with depressive symptomatology. Methods: we performed a cross-sectional study of 312 women attending a private clinic in the city of Osasco, São Paulo, from 27/05/1998 to 13/05/2002. The following instruments were used: the Beck Depression Inventory (BDI), the Holmes and Rahe Schedule of Recent Events (SSRS), the Folkman and Lazarus Ways of Coping Questionnaire, and a questionnaire on socio-demographic and obstetric data. Inclusion criteria: women with no past history of depression, psychiatric treatment, or alcohol or drug abuse, and no clinical-obstetric complications. Odds ratios and 95% CIs were used to examine the association between AD (according to the BDI) and exposure variables. Hypothesis testing was done with chi-squared tests at p < .05. Results: AD occurred in 21.1% of the pregnant women. In the univariate analyses, education, number of pregnancies, previous abortion, husband's income, marital situation, and SSRS score were associated with AD. All coping styles were associated with AD, except seeking support and positive reappraisal. In the multivariate analyses, four coping styles were retained in the final model: confrontation (p = .039), accepting responsibility (p < .001), escape-avoidance (p = .002), and problem-solving (p = .005). Conclusions: AD was highly prevalent and was associated with maladaptive coping styles.
Abstract:
In this paper the authors show that techniques employed in the prediction of chaotic time series can also be applied to the detection of outliers. A definition of outlier is provided and a theorem on hypothesis testing is also proved.
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks, and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that averaged predictions from a set of 10 pre-selected input spaces chosen using the training data, and a "Minimum Variance Committee" technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (the "Best Combination" technique), the Simple Committee technique, and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
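The "Simple Committee" idea (average one model's predictions over several transformed input spaces rather than trusting a single representation) can be sketched without any engine model. Below, three hypothetical transforms stand in for the GT-Power-derived input spaces, and a small k-nearest-neighbour regressor supplies the predictions; the data are simulated and nothing here reproduces the paper's setup.

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k=3):
    """Predict each test point as the mean response of its k nearest neighbours."""
    preds = []
    for x in Xte:
        d = np.linalg.norm(Xtr - x, axis=1)
        preds.append(ytr[np.argsort(d)[:k]].mean())
    return np.array(preds)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.05, 200)
Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]

# Three hypothetical input-space transforms standing in for the
# GT-Power-derived spaces described in the abstract.
spaces = [lambda X: X,
          lambda X: np.column_stack([np.sin(3 * X[:, 0]), X[:, 1] ** 2]),
          lambda X: X ** 3]

# Simple Committee: average the k-NN predictions made in each input space.
committee = np.mean([knn_predict(t(Xtr), ytr, t(Xte)) for t in spaces], axis=0)
single = knn_predict(Xtr, ytr, Xte)

rmse = lambda p: float(np.sqrt(np.mean((p - yte) ** 2)))
print(rmse(single), rmse(committee))
```

The averaging step is the whole technique; the Minimum Variance variant would instead pick, per prediction, the spaces on which the different modeling methods disagree least.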
Abstract:
Estimation of breastmilk infectivity in HIV-1-infected mothers is difficult because transmission can occur in utero, during delivery, or through breastfeeding. Because transmission can only be detected through periodic testing, it may be impossible to determine the actual mode of transmission in any individual child. In this paper we develop a model to estimate breastmilk infectivity as well as the probabilities of in-utero and intrapartum transmission. In addition, the model allows separate estimation of early and late breastmilk infectivity and of individual variation in maternal infectivity. Methods for hypothesis testing of binary risk factors and a method for assessing goodness of fit are also described. Data from a randomized trial of breastfeeding versus formula feeding among HIV-1-infected mothers in Nairobi, Kenya, are used to illustrate the methods.
Abstract:
Bioequivalence trials are abbreviated clinical trials in which a generic drug or new formulation is evaluated to determine whether it is "equivalent" to a corresponding previously approved brand-name drug or formulation. In this manuscript, we survey the process of testing bioequivalence and advocate the likelihood paradigm for representing the resulting data as evidence. We emphasize the unique conflicts between hypothesis testing and confidence intervals in this area, which we believe are indicative of systemic defects in the frequentist approach that the likelihood paradigm avoids. We suggest the direct use of profile likelihoods for evaluating bioequivalence and examine the main properties of profile likelihoods and estimated likelihoods under simulation. This simulation study shows that profile likelihoods are a reasonable alternative to the (unknown) true likelihood for a range of parameters commensurate with bioequivalence research. Our study also shows that the standard methods in the current practice of bioequivalence trials offer only weak evidence from the evidential point of view.
Abstract:
Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to estimate it automatically. We present our validation using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment reconstructing surface models of seven dry cadaver femurs using clinically relevant data, both without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95th-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
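The LTS idea used throughout the three stages (score a candidate fit by the sum of its h smallest squared residuals, where h reflects an assumed outlier rate) can be sketched on a toy line-fitting problem. The random elemental-subset search below is a generic LTS approximation on simulated data, not the surface-reconstruction pipeline itself.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 60)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 60)   # true line: y = 2x + 1
y[:10] += 3.0                                  # 10 gross outliers (~17% rate)

# Assume a 20% outlier rate: keep only the h best-fitting points per candidate.
h = int(len(x) * (1 - 0.2))

best, best_cost = None, np.inf
for _ in range(500):
    # Fit a line through a random pair of points (an "elemental" subset).
    i, j = rng.choice(len(x), 2, replace=False)
    if x[i] == x[j]:
        continue
    slope = (y[j] - y[i]) / (x[j] - x[i])
    inter = y[i] - slope * x[i]
    # LTS cost: sum of the h smallest squared residuals.
    cost = np.sort((y - (slope * x + inter)) ** 2)[:h].sum()
    if cost < best_cost:
        best, best_cost = (slope, inter), cost

slope, inter = best
print(slope, inter)   # close to the true (2.0, 1.0) despite the outliers
```

Ordinary least squares on the same data would be dragged toward the shifted points; trimming the largest residuals is what buys the robustness the abstract relies on.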
Abstract:
The aim of this article is to analyse the use of empirical tests in German-language sport psychology. Comparable analyses, for example in psychology, show that there are discrepancies between the requirements of testing concepts and empirical practice which have so far not been described or evaluated for sport psychology. The 1994–2007 volumes of the Zeitschrift für Sportpsychologie (formerly psychologie und sport) were examined to determine whether research questions were formulated, which type of sample was chosen, which testing concept was applied, which significance level was used, and whether statistical problems were discussed. 83 articles were categorized along these criteria by two independent raters. The results show that sport psychology research predominantly applies a mixture of Fisher's significance testing and Neyman-Pearson hypothesis testing, the so-called "hybrid model" or "null ritual". Statistical power is hardly ever reported. A temporal analysis of the articles shows that the use of effect sizes, in particular, has increased in recent years. Finally, approaches for improving and standardizing the application of empirical tests are proposed and discussed.
Abstract:
Drought perturbation driven by the El Niño Southern Oscillation (ENSO) is a principal stochastic variable determining the dynamics of lowland rain forest in S.E. Asia. Mortality, recruitment and stem growth rates at Danum in Sabah (Malaysian Borneo) were recorded in two 4-ha plots (trees ≥ 10 cm gbh) for two periods, 1986–1996 and 1996–2001. Mortality and growth were also recorded in a sample of subplots for small trees (10 to <50 cm gbh) in two sub-periods, 1996–1999 and 1999–2001. Dynamics variables were employed to build indices of drought response for each of the 34 most abundant plot-level species (22 at the subplot level), these being interval-weighted percentage changes between periods and sub-periods. A significant yet complex effect of the strong 1997/1998 drought at the forest community level was shown by randomization procedures followed by multiple hypothesis testing. Despite a general resistance of the forest to drought, large and significant differences in short-term responses were apparent for several species. Using a diagrammatic form of stability analysis, different species showed immediate or lagged effects, high or low degrees of resilience or even oscillatory dynamics. In the context of the local topographic gradient, species’ responses define the newly termed perturbation response niche. The largest responses, particularly for recruitment and growth, were among the small trees, many of which are members of understorey taxa. The results bring with them a novel approach to understanding community dynamics: the kaleidoscopic complexity of idiosyncratic responses to stochastic perturbations suggests that plurality, rather than neutrality, of responses may be essential to understanding these tropical forests. The basis to the various responses lies with the mechanisms of tree-soil water relations which are physiologically predictable: the timing and intensity of the next drought, however, is not. 
To date, environmental stochasticity has been insufficiently incorporated into models of tropical forest dynamics, a step that might considerably improve the reality of theories about these globally important ecosystems.
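The "randomization procedures followed by multiple hypothesis testing" step described above can be sketched generically: run a permutation test per species, then apply a Holm step-down correction to control the family-wise error rate across species. All data below are simulated stand-ins, not the Danum plot records.

```python
import numpy as np

def perm_test(a, b, n_perm=2000):
    """Two-sided permutation test on the difference in means."""
    rng = np.random.default_rng(0)
    obs = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:len(a)].mean() - pooled[len(a):].mean()) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one correction avoids p = 0

rng = np.random.default_rng(3)
# Pre- vs post-drought rates for a few hypothetical species.
species = {"sp1": (rng.normal(1.0, 0.3, 30), rng.normal(1.6, 0.3, 30)),
           "sp2": (rng.normal(1.0, 0.3, 30), rng.normal(1.0, 0.3, 30)),
           "sp3": (rng.normal(1.0, 0.3, 30), rng.normal(1.3, 0.3, 30))}
pvals = {s: perm_test(a, b) for s, (a, b) in species.items()}

# Holm step-down correction: multiply the i-th smallest p-value by (m - i),
# enforcing monotonicity, to control the family-wise error rate.
order = sorted(pvals, key=pvals.get)
m = len(order)
adj, running = {}, 0.0
for i, s in enumerate(order):
    running = max(running, (m - i) * pvals[s])
    adj[s] = min(1.0, running)
print(adj)
```

Holm is uniformly more powerful than a plain Bonferroni correction while giving the same family-wise guarantee, which matters when, as here, dozens of species-level responses are tested at once.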