9 results for Jeffreys priors
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Historical information is always relevant for clinical trial design. Additionally, if incorporated in the analysis of a new trial, historical data allow the number of subjects to be reduced. This decreases costs and trial duration, facilitates recruitment, and may be more ethical. Yet, under prior-data conflict, an overly optimistic use of historical data may be inappropriate. We address this challenge by deriving a Bayesian meta-analytic-predictive prior from historical data, which is then combined with the new data. This prospective approach is equivalent to a meta-analytic-combined analysis of historical and new data if parameters are exchangeable across trials. The prospective Bayesian version requires a good approximation of the meta-analytic-predictive prior, which is not available analytically. We propose two- or three-component mixtures of standard priors, which allow for good approximations and, for the one-parameter exponential family, straightforward posterior calculations. Moreover, since one of the mixture components is usually vague, mixture priors will often be heavy-tailed and therefore robust. Further robustness and a more rapid reaction to prior-data conflicts can be achieved by adding an extra weakly-informative mixture component. Use of historical prior information is particularly attractive for adaptive trials, as the randomization ratio can then be changed in case of prior-data conflict. Both frequentist operating characteristics and posterior summaries for various data scenarios show that these designs have desirable properties. We illustrate the methodology for a phase II proof-of-concept trial with historical controls from four studies. Robust meta-analytic-predictive priors alleviate prior-data conflicts; they should encourage better and more frequent use of historical data in clinical trials.
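A minimal illustrative sketch (not from the paper) of how a two-component beta mixture prior on a binomial response rate is updated in closed form, which is why mixtures of conjugate priors keep posterior calculations straightforward in the one-parameter exponential family; the prior parameters and data below are hypothetical.

```python
# Illustrative only: conjugate update of a two-component beta mixture prior
# for a binomial response rate. Prior parameters and data are hypothetical.
import numpy as np
from scipy.stats import betabinom

def update_beta_mixture(weights, params, r, n):
    """Update mixture weights and Beta parameters after observing r responders in n subjects."""
    # Each Beta(a, b) component updates conjugately.
    post_params = [(a + r, b + n - r) for (a, b) in params]
    # Mixture weights are rescaled by each component's marginal (beta-binomial) likelihood.
    marg = np.array([betabinom.pmf(r, n, a, b) for (a, b) in params])
    post_weights = weights * marg
    return post_weights / post_weights.sum(), post_params

# Informative component standing in for a meta-analytic-predictive prior,
# plus a vague Beta(1, 1) component for robustness.
weights = np.array([0.8, 0.2])
params = [(15.0, 35.0), (1.0, 1.0)]
w_post, p_post = update_beta_mixture(weights, params, r=12, n=20)
print(w_post, p_post)   # weight shifts toward the vague component under prior-data conflict
```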
Abstract:
In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem with only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, it is non-convex. Therefore, methods that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
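An illustrative sketch (not the authors' implementation) of the two-term energy described above, together with the per-pixel reweighting produced when a majorization-minimization step linearizes the lower-bounded logarithmic gradient prior; `lam` and `eps` are hypothetical settings.

```python
# Illustrative only: data-fidelity + lower-bounded logarithmic gradient prior,
# and the MM reweighting obtained by linearizing the concave logarithm.
import numpy as np
from scipy.signal import convolve2d

def grad_mag(u):
    """Magnitude of forward-difference image gradients."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sqrt(gx**2 + gy**2)

def energy(u, k, f, lam=1e-3, eps=1e-2):
    """Least-squares data fidelity plus lower-bounded log prior on |grad u|."""
    fidelity = 0.5 * np.sum((convolve2d(u, k, mode='same') - f) ** 2)
    prior = np.sum(np.log(eps + grad_mag(u)))
    return fidelity + lam * prior

def mm_weights(u, eps=1e-2):
    """Linearizing log(eps + x) at the current gradient magnitudes gives a convex,
    weighted surrogate; these are the resulting per-pixel weights."""
    return 1.0 / (eps + grad_mag(u))
```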
Abstract:
Farmed and wild salmonids are affected by a variety of skin conditions, some of which have significant economic and welfare implications. In many cases, the causes are not well understood; one example is coldwater strawberry disease of rainbow trout, also called red mark syndrome, which has been recorded in the UK since 2003. To date, there are no internationally agreed methods for describing these conditions, which has caused confusion for farmers and health professionals, who are often unclear as to whether they are dealing with a new or a previously described condition. This has inevitably resulted in delays to both accurate diagnosis and effective treatment regimes. Here, we provide a standardized methodology for the description of skin conditions of rainbow trout of uncertain aetiology. We demonstrate how the approach can be used to develop case definitions, using coldwater strawberry disease as an example.
Abstract:
In this paper we study the problem of blind deconvolution. Our analysis is based on the algorithm of Chan and Wong [2], which popularized the use of sparse gradient priors via total variation. We use this algorithm because many methods in the literature are essentially adaptations of this framework. The algorithm is an iterative alternating energy minimization in which, at each step, either the sharp image or the blur function is reconstructed. Recent work of Levin et al. [14] showed that any algorithm that tries to minimize that same energy would fail, as the desired solution has a higher energy than the no-blur solution, where the sharp image is the blurry input and the blur is a Dirac delta. However, experimentally one can observe that Chan and Wong's algorithm converges to the desired solution even when initialized with the no-blur one. We provide both analysis and experiments to resolve this apparent paradox. We find that both claims are right. The key to understanding how this is possible lies in the details of Chan and Wong's implementation and in how seemingly harmless choices have dramatic effects. Our analysis reveals that the delayed scaling (normalization) in the iterative step for the blur kernel is fundamental to the convergence of the algorithm. This results in a procedure that eludes the no-blur solution, despite it being a global minimum of the original energy. We introduce an adaptation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves performance comparable to the state of the art.
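An illustrative sketch (not Chan and Wong's code) of one outer iteration of the alternating scheme on a small 1-D signal, written to make the delayed scaling explicit: the projection of the kernel onto non-negative, unit-sum vectors is applied only once, after the gradient step, rather than being folded into it. Step sizes and the total-variation weight are hypothetical.

```python
# Illustrative only: alternating gradient steps on the sharp signal u and the
# blur kernel k for 0.5*||k*u - f||^2 + lam*TV(u), with delayed kernel normalization.
import numpy as np

def conv_matrix(v, m):
    """Matrix A such that A @ x == np.convolve(v, x, mode='same') for len(x) == m
    (intended for short signals; builds the operator column by column)."""
    cols = [np.convolve(v, np.eye(m)[:, j], mode='same') for j in range(m)]
    return np.stack(cols, axis=1)

def outer_iteration(u, k, f, lam=1e-3, step_u=1e-2, step_k=1e-2):
    # (1) Gradient step on the sharp signal u.
    A_k = conv_matrix(k, len(u))                     # operator u -> k * u (convolution commutes)
    r = A_k @ u - f
    tv_subgrad = np.sign(np.diff(u, prepend=u[:1])) - np.sign(np.diff(u, append=u[-1:]))
    u = u - step_u * (A_k.T @ r + lam * tv_subgrad)
    # (2) Gradient step on the kernel k, with no scaling inside the step.
    A_u = conv_matrix(u, len(k))                     # operator k -> u * k
    r = A_u @ k - f
    k = k - step_k * (A_u.T @ r)
    # (3) Delayed normalization: project k onto the simplex only at the end.
    k = np.clip(k, 0.0, None)
    k = k / k.sum() if k.sum() > 0 else k
    return u, k
```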
Abstract:
The next generation neutrino observatory proposed by the LBNO collaboration will address fundamental questions in particle and astroparticle physics. The experiment consists of a far detector, in its first stage a 20 kt LAr double-phase TPC and a magnetised iron calorimeter, situated 2300 km from CERN, and a near detector based on a high-pressure argon gas TPC. The long baseline provides a unique opportunity to study neutrino flavour oscillations over their first and second oscillation maxima, exploring the L/E behaviour and distinguishing effects arising from δCP and matter. In this paper we have re-evaluated the physics potential of this setup for determining the mass hierarchy (MH) and discovering CP violation (CPV), using a conventional neutrino beam from the CERN SPS with a power of 750 kW. We use conservative assumptions on the knowledge of oscillation parameter priors and systematic uncertainties. The impact of each systematic error and of the precision of each oscillation prior is shown. We demonstrate that the first stage of LBNO can unambiguously determine the MH at > 5σ C.L. over the whole phase space. We show that the statistical treatment of the experiment is of very high importance, leading to the conclusion that LBNO has ~100% probability to determine the MH in at most 4-5 years of running. Since knowledge of the MH is indispensable for extracting δCP from the data, the first LBNO phase can convincingly give evidence for CPV at the 3σ C.L. using today's knowledge of oscillation parameters and realistic assumptions on the systematic uncertainties.
Abstract:
INTRODUCTION: Despite important advances in psychological and pharmacological treatments of persistent depressive disorders in the past decades, responses to them typically remain slow and poor, and differential responses across treatment modalities and their combinations are not well understood. The Cognitive-Behavioural Analysis System of Psychotherapy (CBASP) is the only psychotherapy specifically designed for chronic depression, and it has been examined in an increasing number of trials against medications, alone or in combination. When several treatment alternatives are available for a condition, network meta-analysis (NMA) provides a powerful tool for examining their relative efficacy by combining all direct and indirect comparisons. Individual participant data (IPD) meta-analysis enables exploration of how individual characteristics affect outcomes, leading to a differentiated approach that matches treatments to specific subgroups of patients.
METHODS AND ANALYSIS: We will search Cochrane CENTRAL, PUBMED, SCOPUS and PsycINFO, supplemented by personal contacts, for all randomised controlled trials that compared CBASP, pharmacotherapy or their combination in the treatment of patients with persistent depressive disorder. Individual participant data will be sought from the principal investigators of all the identified trials. Our primary outcomes are depression severity, measured on a continuous observer-rated scale, and dropouts for any reason as a proxy measure of overall treatment acceptability. We will conduct a one-step IPD-NMA to compare CBASP, medications and their combinations, and also carry out a meta-regression to identify prognostic factors and effect moderators. The model will be fitted in OpenBUGS, using vague priors for all location parameters and a half-normal prior on the heterogeneity SD.
ETHICS AND DISSEMINATION: This study requires no ethical approval. We will publish the findings in a peer-reviewed journal. The study results will contribute to more finely differentiated therapeutics for patients suffering from this chronically disabling disorder.
TRIAL REGISTRATION NUMBER: CRD42016035886.
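For reference, an illustrative sketch of the prior structure mentioned above (vague priors on location parameters and a half-normal prior on the heterogeneity SD), written as a simple Bayesian random-effects model in PyMC rather than the protocol's OpenBUGS network meta-analysis; the effect estimates and standard errors below are placeholders, not trial data.

```python
# Illustrative only: simple random-effects model with vague location priors and
# a half-normal prior on the between-study SD. PyMC stands in for OpenBUGS here,
# and the inputs are placeholder values, not data from the review.
import numpy as np
import pymc as pm

effects = np.array([-0.3, -0.5, -0.1, -0.4])   # placeholder study-level effects
se = np.array([0.15, 0.20, 0.25, 0.18])        # placeholder standard errors

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=100.0)            # vague prior on the mean effect
    tau = pm.HalfNormal("tau", sigma=1.0)                # half-normal prior on heterogeneity SD
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(effects))
    pm.Normal("y", mu=theta, sigma=se, observed=effects)
    idata = pm.sample(1000, tune=1000, chains=2)
```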
Abstract:
Both cointegration methods and non-cointegrated structural VARs, identified on the basis of either long-run restrictions or a combination of long-run and sign restrictions, are used to explore the long-run trade-off between inflation and the unemployment rate in the post-WWII U.S., U.K., Euro area, Canada, and Australia. Overall, neither approach produces clear evidence of a non-vertical trade-off. The extent of uncertainty surrounding the estimates is, however, substantial, implying that a researcher holding alternative priors about what a reasonable slope of the long-run trade-off might be will likely not see her views falsified.
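For the long-run-restriction approach referred to above, a minimal numpy sketch of the standard Blanchard-Quah-type identification step (not necessarily the paper's exact implementation, and omitting the sign-restriction variant); the reduced-form VAR coefficients and residual covariance are assumed to have been estimated elsewhere.

```python
# Illustrative only: long-run identification of a structural VAR from
# reduced-form estimates (Blanchard-Quah-type scheme).
import numpy as np

def long_run_identification(A_list, Sigma_u):
    """Given reduced-form coefficient matrices A_1..A_p and residual covariance
    Sigma_u, return the contemporaneous impact matrix B0 whose long-run impact
    matrix Psi(1) @ B0 is lower triangular, so the second shock has no long-run
    effect on the first variable."""
    n = Sigma_u.shape[0]
    Psi1 = np.linalg.inv(np.eye(n) - sum(A_list))     # long-run multiplier (I - A_1 - ... - A_p)^(-1)
    C = np.linalg.cholesky(Psi1 @ Sigma_u @ Psi1.T)   # lower-triangular long-run impacts
    B0 = np.linalg.solve(Psi1, C)                     # contemporaneous impacts: Psi1 @ B0 = C
    return B0
```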
Abstract:
Blind deconvolution is the estimation of a sharp image and a blur kernel from an observed blurry image. Because the blur model admits several solutions, it is necessary to devise an image prior that favors the true blur kernel and sharp image. Many successful image priors enforce the sparsity of the sharp image gradients. Ideally, the L0 “norm” would be the best choice for promoting sparsity, but because it is computationally intractable, some methods have used a logarithmic approximation. In this work we also study a logarithmic image prior. We show empirically how well this prior suits the blind deconvolution problem. Our analysis confirms experimentally the hypothesis that a prior need not model natural image statistics to correctly estimate the blur kernel. Furthermore, we show that a simple Maximum a Posteriori formulation is enough to achieve state-of-the-art results. To minimize this formulation, we devise two iterative minimization algorithms that cope with the non-convexity of the logarithmic prior: one obtained via the primal-dual approach and one via majorization-minimization.
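An illustrative comparison (not from the paper) of the L0 “norm”, the L1 norm and a lower-bounded logarithmic penalty on gradient magnitudes, showing why the logarithm acts as a tractable surrogate for L0-style gradient sparsity; `eps` is a hypothetical lower bound keeping the logarithm finite.

```python
# Illustrative only: the log penalty, like L0, strongly favors a few strong
# gradients over many weak ones, while remaining amenable to MM/primal-dual schemes.
import numpy as np

def penalties(g, eps=1e-2):
    g = np.abs(g)
    return {
        "L0": int(np.count_nonzero(g > eps)),
        "L1": float(g.sum()),
        "log": float(np.sum(np.log(eps + g))),
    }

sharp_grads = np.array([0.0, 0.0, 0.9, 0.0, 0.0, 0.8])    # few, strong edges
blurry_grads = np.array([0.3, 0.3, 0.3, 0.3, 0.3, 0.3])   # many, weak edges
print(penalties(sharp_grads))    # log penalty is much lower for the sparse case
print(penalties(blurry_grads))   # L1 barely separates the two cases
```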