967 results for Bayesian hypothesis testing
Abstract:
Previous research has shown that motion imagery draws on the same neural circuits that are involved in the perception of motion, thus producing a motion aftereffect (Winawer et al., 2010): imagined stimuli can shift participants' psychometric functions in the same way as neural adaptation to a perceived stimulus. However, these studies have been criticized on the grounds that they fail to exclude the possibility that the subjects guessed the experimental hypothesis and behaved accordingly (Morgan et al., 2012). In particular, Morgan et al. argue that participants can adopt arbitrary response criteria, which produces changes in the central tendency μ of the psychometric curve similar to those reported by Winawer et al. (2010).
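A minimal sketch of the analysis this debate turns on: fitting a cumulative-Gaussian psychometric function to binary responses by maximum likelihood and reading off the shift in its central tendency μ. All data and parameter values below are illustrative, not from either study.

```python
# Sketch: estimate the shift in the point of subjective equality (mu)
# of a psychometric curve, as used to quantify motion aftereffects.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
levels = np.linspace(-1.0, 1.0, 9)            # signed motion coherence
n_trials = 40                                 # trials per level

def simulate(mu, sigma=0.4):
    """Binary 'rightward' response counts from a cumulative-Gaussian observer."""
    p = norm.cdf(levels, loc=mu, scale=sigma)
    return rng.binomial(n_trials, p)

def fit_mu(k):
    """Maximum-likelihood fit of (mu, sigma) to counts k out of n_trials."""
    def nll(theta):
        mu, log_sigma = theta
        p = norm.cdf(levels, loc=mu, scale=np.exp(log_sigma))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(k * np.log(p) + (n_trials - k) * np.log(1 - p))
    res = minimize(nll, x0=[0.0, np.log(0.5)], method="Nelder-Mead")
    return res.x[0]

baseline = fit_mu(simulate(mu=0.0))
adapted  = fit_mu(simulate(mu=0.15))          # an aftereffect shifts mu
print(f"estimated shift in mu: {adapted - baseline:+.3f}")
```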
Abstract:
Background: For most cytotoxic and biologic anti-cancer agents, the response rate of the drug is commonly assumed to be non-decreasing with an increasing dose. However, an increasing dose does not always result in an appreciable increase in the response rate, especially at high doses of a biologic agent. Therefore, in a phase II trial the investigators may be interested in testing the anti-tumor activity of a drug at more than one (often two) doses, instead of only at the maximum tolerated dose (MTD). This way, when the lower dose appears equally effective, that dose can be recommended for further confirmatory testing in a phase III trial, given potential long-term toxicity and cost considerations. A common approach to designing such a phase II trial has been to use an independent (e.g., Simon's two-stage) design at each dose, ignoring the prior knowledge about the ordering of the response probabilities at the different doses. However, failure to account for this ordering constraint in estimating the response probabilities may result in an inefficient design. In this dissertation, we develop extensions of Simon's optimal and minimax two-stage designs, including both frequentist and Bayesian methods, for two doses that assume ordered response rates between doses. Methods: Optimal and minimax two-stage designs are proposed for phase II clinical trials in settings where the true response rates at two dose levels are ordered. We borrow strength between doses using isotonic regression and control the joint and/or marginal error probabilities. Bayesian two-stage designs are also proposed under a stochastic ordering constraint. Results: Compared to Simon's designs, when controlling the power and type I error at the same levels, the proposed frequentist and Bayesian designs reduce the maximum and expected sample sizes. Most of the proposed designs also increase the probability of early termination when the true response rates are poor. Conclusion: The proposed frequentist and Bayesian designs are superior to Simon's designs in terms of operating characteristics (expected sample size, and probability of early termination when the response rates are poor). Thus, the proposed designs lead to more cost-efficient and ethical trials, and may consequently improve and expedite the drug discovery process. The proposed designs may be extended to designs for multiple-group trials and drug combination trials.
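The isotonic borrowing step described above reduces, for two doses, to the pool-adjacent-violators rule. A minimal sketch with hypothetical stage-1 counts (not values from the dissertation):

```python
# Isotonic (pool-adjacent-violators) step for two ordered doses: if the
# observed response rate at the lower dose exceeds that at the higher
# dose, both are replaced by the pooled rate.

def isotonic_two_dose(x1, n1, x2, n2):
    """Isotonic estimates of (p1, p2) under the constraint p1 <= p2."""
    p1, p2 = x1 / n1, x2 / n2
    if p1 > p2:                       # ordering violated: pool the doses
        pooled = (x1 + x2) / (n1 + n2)
        p1 = p2 = pooled
    return p1, p2

# Hypothetical stage-1 responses at the low and high dose:
print(isotonic_two_dose(x1=7, n1=20, x2=5, n2=20))   # -> (0.3, 0.3)
```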
Abstract:
The main purpose of a gene interaction network is to map the relationships of the genes that are out of sight when a genomic study is tackled. DNA microarrays allow the measurement of gene expression for thousands of genes at the same time, and these data constitute the numeric seed for the induction of gene networks. In this paper, we propose a new approach to building gene networks by means of Bayesian classifiers, variable selection and bootstrap resampling. The interactions induced by the Bayesian classifiers are based both on the expression levels and on the phenotype information of the supervised variable. Feature selection and bootstrap resampling add reliability and robustness to the overall process, removing false positive findings. The consensus among all the induced models produces a hierarchy of dependences and, thus, of variables. Biologists can define the depth level of the model hierarchy, so the set of interactions and genes involved can vary from sparse to dense. Experimental results show that these networks perform well on classification tasks. The biological validation matches previous biological findings and opens new hypotheses for future studies.
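A rough sketch of the overall loop described in the abstract, not the authors' implementation: bootstrap resample, select genes, fit a naive Bayes classifier, and keep a consensus count of how often each gene survives; thresholding the counts yields the sparse-to-dense hierarchy. The data are synthetic and the selection rule (univariate F-scores) is an assumption.

```python
# Bootstrap + feature selection around a (naive) Bayesian classifier,
# with a consensus count per gene. In the full method the induced
# classifier's dependences would be read off; here we only count
# how often each gene is selected.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n_samples, n_genes, k = 60, 200, 10
X = rng.normal(size=(n_samples, n_genes))     # expression matrix
y = rng.integers(0, 2, size=n_samples)        # phenotype labels

counts = np.zeros(n_genes)
for _ in range(200):                          # bootstrap replicates
    idx = rng.integers(0, n_samples, n_samples)
    sel = SelectKBest(f_classif, k=k).fit(X[idx], y[idx])
    chosen = sel.get_support(indices=True)
    GaussianNB().fit(X[idx][:, chosen], y[idx])   # classifier over survivors
    counts[chosen] += 1

# Genes ranked by consensus frequency; a depth threshold picks the
# sparse-to-dense working set described in the abstract.
ranking = np.argsort(counts)[::-1]
print(ranking[:10], counts[ranking[:10]] / 200)
```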
Abstract:
Many multifactorial biologic effects, particularly in the context of complex human diseases, are still poorly understood. At the same time, the systematic acquisition of multivariate data has become increasingly easy. The use of such data to analyze and model complex phenotypes, however, remains a challenge. Here, a new analytic approach, termed coreferentiality, is described together with an appropriate statistical test. Coreferentiality is the indirect relation of two variables of functional interest with respect to whether they parallel each other in their respective relatedness to multivariate reference data, which can be informative for a complex effect or phenotype. It is shown that the power of coreferentiality testing is comparable to that of multiple regression analysis, sufficient even when the reference data are informative only to a relatively small extent of 2.5%, and clearly exceeds the power of simple bivariate correlation testing. Coreferentiality testing thus exploits the increased power of multivariate analysis while addressing a more straightforwardly interpretable bivariate relatedness. Systematic application of this approach could substantially improve the analysis and modeling of complex phenotypes, particularly in human studies, where addressing functional hypotheses by direct experimentation is often difficult.
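One hedged reading of the definition above (a paraphrase, not the paper's published algorithm): correlate each variable of interest with every column of the reference data, then test whether the two correlation profiles parallel each other, with significance assessed by a naive permutation scheme.

```python
# Coreferentiality, loosely interpreted: do x and y show parallel
# profiles of correlation with the multivariate reference matrix R?
import numpy as np

rng = np.random.default_rng(2)

def profile(v, R):
    """Correlations of v with each column of the reference matrix R."""
    return np.array([np.corrcoef(v, R[:, j])[0, 1] for j in range(R.shape[1])])

def coref_test(x, y, R, n_perm=999):
    rx = profile(x, R)
    obs = np.corrcoef(rx, profile(y, R))[0, 1]     # parallelism of profiles
    null = [np.corrcoef(rx, profile(rng.permutation(y), R))[0, 1]
            for _ in range(n_perm)]
    p = (1 + np.sum(np.abs(null) >= abs(obs))) / (n_perm + 1)
    return obs, p

n = 100
R = rng.normal(size=(n, 30))                  # multivariate reference data
x = 0.3 * R[:, 0] + rng.normal(size=n)        # both weakly informed by R
y = 0.3 * R[:, 0] + rng.normal(size=n)
print(coref_test(x, y, R))
```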
Abstract:
This study examined the utility of a stress/coping model in explaining adaptation in two groups of people at risk for Huntington's Disease (HD): those who have not approached genetic testing services (non-testees) and those who have engaged a testing service (testees). The aims were (1) to compare testees and non-testees on stress/coping variables, (2) to examine relations between adjustment and the stress/coping predictors in the two groups, and (3) to examine relations between the stress/coping variables and testees' satisfaction with their first counselling session. Participants were 44 testees and 40 non-testees who completed questionnaires measuring the stress/coping variables: adjustment (global distress, depression, health anxiety, social and dyadic adjustment), genetic testing concerns, testing context (HD contact, experience, knowledge), appraisal (control, threat, self-efficacy), coping strategies (avoidance, self-blame, wishful thinking, seeking support, problem solving), social support and locus of control. Testees also completed a genetic counselling session satisfaction scale. As expected, non-testees reported lower self-efficacy and control appraisals, and higher threat and passive avoidant coping, than testees. Overall, results supported the hypothesis that within each group poorer adjustment would be related to higher genetic testing concerns, contact with HD, threat appraisals, passive avoidant coping and external locus of control, and lower levels of positive experiences with HD, social support, internal locus of control, self-efficacy, control appraisals, problem solving, emotional approach and seeking social support coping. Session satisfaction scores were positively correlated with dyadic adjustment, problem solving and positive experience with HD, and inversely related to testing concerns, and threat and control appraisals. Findings support the utility of the stress/coping model in explaining adaptation in people who have decided not to seek genetic testing for HD and those who have decided to engage a genetic testing service.
Abstract:
We outline and evaluate competing explanations of three relationships that have consistently been found between cannabis use and the use of other illicit drugs, namely: (1) that cannabis use typically precedes the use of other illicit drugs; and that (2) the earlier cannabis is used, and (3) the more regularly it is used, the more likely a young person is to use other illicit drugs. We consider three major competing explanations of these patterns: (1) that the relationship is due to a shared illicit market for cannabis and other drugs, which makes it more likely that other illicit drugs will be used if cannabis is used; (2) that the relationships are explained by the characteristics of those who use cannabis; and (3) that they reflect a causal relationship in which the pharmacological effects of cannabis on brain function increase the likelihood of using other illicit drugs. These explanations are evaluated in the light of evidence from longitudinal epidemiological studies, simulation studies, discordant twin studies and animal studies. The available evidence indicates that the association is explained in part, but not wholly, by: (1) the selective recruitment to heavy cannabis use of persons with pre-existing traits (which may be in part genetic) that predispose to the use of a variety of different drugs; (2) the affiliation of cannabis users with drug-using peers in settings that provide more opportunities to use other illicit drugs at an earlier age; and (3) socialisation into an illicit drug subculture with favourable attitudes towards the use of other illicit drugs. Animal studies have raised the possibility that regular cannabis use may have pharmacological effects on brain function that increase the likelihood of using other drugs. We conclude with suggestions for the type of research studies that will enable a decision to be made about the relative contributions that social context, individual characteristics, and drug effects make to the relationship between cannabis use and the use of other drugs.
Abstract:
This study investigated whether Negative Affectivity (NA) causes bias in self-report measures of activity limitations or whether NA has a real, non-artifactual association with activity limitations. The Symptom Perception Hypothesis (NA negatively biases self-reporting), the Disability Hypothesis (activity limitations cause NA) and the Psychosomatic Hypothesis (NA causes activity limitations) were examined longitudinally using both self-report and objective measures of activity limitations. Participants were 101 stroke patients and their caregivers, interviewed within two weeks of discharge, six weeks later, and six months post-discharge. NA and activity (walking) limitations, measured by self-report, proxy report and observed performance, were assessed at each interview. NA was associated with activity limitations across measures. Both the Disability and Psychosomatic Hypotheses were supported: initial NA predicted objective activity limitations at six weeks but, additionally, activity limitations at six weeks predicted NA at six months. These results suggest that NA both affects and is affected by activity limitations and does not simply influence reporting.
Abstract:
2000 Mathematics Subject Classification: 62E16, 65C05, 65C20.
Abstract:
We develop a new autoregressive conditional process to capture both the changes and the persistency of the intraday seasonal (U-shape) pattern of volatility in essay 1. Unlike other procedures, this approach allows the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests show that the stochastic seasonal variance component is statistically significant. Specification tests using the correlogram and cross-spectral analyses confirm the reliability of the autoregressive conditional filtering process. In essay 2 we develop a new methodology to decompose return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into an information arrival component and a noise factor component. This decomposition differs from previous studies in that both the informational variance and the noise variance are time-varying, and the covariance of the informational and noisy components is no longer restricted to be zero. The resulting measure of price informativeness is defined as the informational variance divided by the total variance of the returns. The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and price changes caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, using the procedure developed in the first essay. The resulting seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.
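For orientation, a sketch of the classical deterministic intraday seasonal filter that the essay's autoregressive conditional filter generalizes: estimate the U-shaped pattern as the average absolute return per intraday bin and divide it out. The data are synthetic; in the essay's model the seasonal pattern would itself evolve over time.

```python
# Deterministic intraday seasonal (U-shape) filter: a fixed per-bin
# volatility profile is estimated and divided out of the returns.
import numpy as np

rng = np.random.default_rng(3)
n_days, n_bins = 250, 78                      # e.g. 5-minute bins per session
t = np.arange(n_bins) / (n_bins - 1)
u_shape = 1.0 + 0.8 * (2 * t - 1) ** 2        # volatility high at open/close

returns = rng.normal(size=(n_days, n_bins)) * u_shape

seasonal = np.abs(returns).mean(axis=0)       # deterministic per-bin estimate
filtered = returns / seasonal                 # deseasonalized series

raw_profile = np.abs(returns).mean(axis=0)
flt_profile = np.abs(filtered).mean(axis=0)
print("dispersion across bins, raw vs filtered:",
      raw_profile.std().round(3), flt_profile.std().round(3))
```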
Abstract:
The Intersensory Redundancy Hypothesis (IRH; Bahrick & Lickliter, 2000, 2002, 2012) predicts that early in development information presented to a single sense modality will selectively recruit attention to modality-specific properties of stimulation and facilitate learning of those properties at the expense of amodal properties (unimodal facilitation). Vaillant (2010) demonstrated that bobwhite quail chicks prenatally exposed to a maternal call alone (unimodal stimulation) are able to detect a pitch change, a modality-specific property, in subsequent postnatal testing between the familiarized call and the same call with altered pitch. In contrast, chicks prenatally exposed to a maternal call paired with a temporally synchronous light (redundant audiovisual stimulation) were unable to detect a pitch change. According to the IRH (Bahrick & Lickliter, 2012), as development proceeds and the individual's perceptual abilities increase, the individual should detect modality-specific properties in both nonredundant, unimodal and redundant, bimodal conditions. However, when the perceiver is presented with a difficult task, relative to their level of expertise, unimodal facilitation should become evident. The first experiment of the present study exposed bobwhite quail chicks 24 hr after hatching to unimodal auditory, nonredundant audiovisual, or redundant audiovisual presentations of a maternal call for 10 min/hr over 24 hr. All chicks were subsequently tested 24 hr after the completion of the stimulation (72 hr following hatching) between the familiarized maternal call and the same call with altered pitch. Chicks from all experimental groups (unimodal, nonredundant audiovisual, and redundant audiovisual exposure) significantly preferred the familiarized call over the pitch-modified call. The second experiment exposed chicks to the same exposure conditions, but created a more difficult task by narrowing the pitch range between the two maternal calls with which they were tested. Chicks in the unimodal and nonredundant audiovisual conditions demonstrated detection of the pitch change, whereas the redundant audiovisual exposure group did not show detection of the pitch change, providing evidence of unimodal facilitation. These results are consistent with predictions of the IRH and provide further support for the effects of unimodal facilitation and the role of task difficulty across early development.
Abstract:
This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident of 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top event probability at each Bayesian updating step by Monte Carlo sampling from the posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models. Specifically, the technique is applied to the hydrocarbon material balance equation. In order to test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured grid and MCMC sampling based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors, but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques as it allows for the incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimating parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
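The conventional conjugate Poisson-Gamma update that the dissertation benchmarks PEWMA against is simple to state: with a Gamma(a, b) prior on a failure rate and k failures observed over exposure time T, the posterior is Gamma(a + k, b + T). The sketch below adds a simple exponential discount of past evidence purely to illustrate the down-weighting idea; it is not the dissertation's exact PEWMA model, and the prior and data are hypothetical.

```python
# Conjugate Poisson-Gamma updating of a failure rate (failures per unit
# time), with an optional discount factor that down-weights old evidence.

def update(a, b, k, T, discount=1.0):
    """One Bayesian updating step; discount < 1 forgets the past faster."""
    return discount * a + k, discount * b + T

a, b = 1.0, 1.0                                   # vague Gamma prior
data = [(0, 1.0), (1, 1.0), (0, 1.0), (2, 1.0)]   # (failures, years)
for k, T in data:
    a, b = update(a, b, k, T, discount=0.95)
    print(f"posterior mean rate: {a / b:.3f} /yr")
```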
Abstract:
Testing for differences within data sets is an important issue across various applications. Our work is primarily motivated by the analysis of microbiomial composition, which has been increasingly relevant and important with the rise of DNA sequencing. We first review classical frequentist tests that are commonly used in tackling such problems. We then propose a Bayesian Dirichlet-multinomial framework for modeling the metagenomic data and for testing underlying differences between the samples. A parametric Dirichlet-multinomial model uses an intuitive hierarchical structure that allows for flexibility in characterizing both the within-group variation and the cross-group difference and provides very interpretable parameters. A computational method for evaluating the marginal likelihoods under the null and alternative hypotheses is also given. Through simulations, we show that our Bayesian model performs competitively against frequentist counterparts. We illustrate the method through analyzing metagenomic applications using the Human Microbiome Project data.
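A simplified, single-level version of the test idea (the paper's model is hierarchical, so this is only a sketch): compare the closed-form Dirichlet-multinomial marginal likelihoods of two count vectors under a shared composition (H0) versus separate compositions (H1). The symmetric Dirichlet parameter and the counts are assumptions for illustration.

```python
# Bayes factor from Dirichlet-multinomial marginal likelihoods; the
# multinomial coefficients cancel in the ratio, so they are omitted.
import numpy as np
from scipy.special import gammaln

def log_dm(counts, alpha):
    """Log marginal likelihood of one count vector, up to the
    multinomial coefficient, under a symmetric Dirichlet(alpha) prior."""
    counts = np.asarray(counts, dtype=float)
    a = np.full_like(counts, alpha)
    return (gammaln(a.sum()) - gammaln(a.sum() + counts.sum())
            + np.sum(gammaln(a + counts) - gammaln(a)))

def log_bf_10(x1, x2, alpha=1.0):
    """log Bayes factor for 'different compositions' over 'shared'."""
    h1 = log_dm(x1, alpha) + log_dm(x2, alpha)
    h0 = log_dm(np.asarray(x1) + np.asarray(x2), alpha)
    return h1 - h0

# Hypothetical taxa counts in two metagenomic samples:
print(log_bf_10([30, 10, 5], [12, 25, 8]))    # > 0 favors a difference
```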
Abstract:
When we study the variables that affect survival time, we usually estimate their effects with the Cox regression model. In biomedical research, effects of the covariates are often modified by a biomarker variable, leading to covariate-biomarker interactions; here the biomarker is an objective measurement of patient characteristics at baseline. Liu et al. (2015) built a local partial likelihood bootstrap model to estimate and test this interaction effect of covariates and biomarker, but the R code developed by Liu et al. (2015) can only handle one variable and one interaction term and cannot fit the model with adjustment for nuisance variables. In this project, we expand the model to allow adjustment for nuisance variables, expand the R code to take any chosen interaction terms, and set up many parameters for users to customize their research. We also build an R package called "lplb" to integrate the complex computations into a simple interface. We conduct numerical simulations to show that the new method has excellent finite sample properties under both the null and alternative hypotheses. We also apply the method to analyze data from a prostate cancer clinical trial with an acid phosphatase (AP) biomarker.
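Since lplb is an R package, the following is only a baseline sketch, in Python, of the model it generalizes: a standard Cox regression with a treatment-by-biomarker interaction term and a nuisance adjustment, fit with the lifelines library on synthetic data. The local partial likelihood approach replaces the single interaction coefficient with a treatment effect that varies smoothly in the biomarker.

```python
# Cox model with a treatment x biomarker interaction (synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 300
biomarker = rng.normal(size=n)                # e.g. baseline AP level
treat = rng.integers(0, 2, size=n)
nuisance = rng.normal(size=n)                 # adjustment covariate

# Hazard depends on treatment through the biomarker (true interaction):
lin = 0.3 * nuisance + treat * (0.2 + 0.8 * biomarker)
T = rng.exponential(np.exp(-lin))
E = (T < np.quantile(T, 0.8)).astype(int)     # ~20% administrative censoring
T = np.minimum(T, np.quantile(T, 0.8))

df = pd.DataFrame({"T": T, "E": E, "treat": treat,
                   "biomarker": biomarker, "nuisance": nuisance,
                   "treat_x_biomarker": treat * biomarker})
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print(cph.summary[["coef", "p"]])
```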
Abstract:
Tests for dependence of continuous, discrete and mixed continuous-discrete variables are ubiquitous in science. The goal of this paper is to derive Bayesian alternatives to frequentist null hypothesis significance tests for dependence. In particular, we will present three Bayesian tests for dependence of binary, continuous and mixed variables. These tests are nonparametric and based on the Dirichlet Process, which allows us to use the same prior model for all of them. Therefore, the tests are “consistent” among each other, in the sense that the probabilities that variables are dependent computed with these tests are commensurable across the different types of variables being tested. By means of simulations with artificial data, we show the effectiveness of the new tests.
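A finite-dimensional analogue of the binary-variable case (a plain Dirichlet standing in for the Dirichlet Process, which is a deliberate simplification, not the paper's construction): a Bayes factor comparing a joint Dirichlet(1,1,1,1) model of the 2x2 contingency table against an independence model with Beta(1,1) priors on the margins.

```python
# Bayes factor for dependence of two binary variables from a 2x2 table;
# multinomial coefficients cancel in the ratio and are omitted.
import numpy as np
from scipy.special import betaln, gammaln

def log_bf_dependence(table):
    """table = [[n11, n10], [n01, n00]]; > 0 favors dependence."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    # Joint model: Dirichlet(1,1,1,1) -> Dirichlet-multinomial marginal.
    dep = gammaln(4) - gammaln(4 + n) + np.sum(gammaln(t + 1))
    # Independence model: Beta(1,1) prior on each margin.
    r, c = t.sum(axis=1), t.sum(axis=0)
    ind = (betaln(r[0] + 1, r[1] + 1) - betaln(1, 1)
           + betaln(c[0] + 1, c[1] + 1) - betaln(1, 1))
    return dep - ind

print(log_bf_dependence([[30, 5], [4, 31]]))   # strongly favors dependence
print(log_bf_dependence([[18, 17], [16, 19]])) # close to independence
```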