893 results for Minimal Set
Abstract:
BACKGROUND: A core outcome set (COS) can address problems of outcome heterogeneity and outcome reporting bias in trials and systematic reviews, including Cochrane reviews, helping to reduce waste. One of the aims of the international Core Outcome Measures in Effectiveness Trials (COMET) Initiative is to link the development and use of COS with the outcomes specified and reported in Cochrane reviews, including the outcomes listed in the summary of findings (SoF) tables. As part of this work, an earlier exploratory survey of the outcomes of newly published 2007 and 2011 Cochrane reviews was performed. This survey examined the use of COS, the variety of specified outcomes, and outcome reporting in Cochrane reviews by Cochrane Review Group (CRG). To examine changes over time and to explore outcomes that were repeatedly specified over time in Cochrane reviews by CRG, we conducted a follow-up survey of outcomes in 2013 Cochrane reviews.
METHODS: We conducted a descriptive survey of outcomes in Cochrane reviews first published in 2013. Outcomes specified in the methods section and reported in the results section of the Cochrane reviews were examined by CRG. We also explored the uptake of SoF tables, the number of outcomes included in them, and the quality of the evidence for those outcomes.
RESULTS: Across the 50 CRGs, 375 Cochrane reviews that included at least one study specified a total of 3142 outcomes. Of these outcomes, 32% (1008) were not reported in the results section of these reviews. For 23% (233) of these non-reported outcomes, we did not find any reason in the text of the review for the omission. Fifty-seven percent (216/375) of reviews included a SoF table.
CONCLUSIONS: The proportion of specified outcomes that were reported in Cochrane reviews increased in 2013 (68%) compared to 2007 (61%) and 2011 (65%). Importantly, 2013 Cochrane reviews that did not report specified outcomes were twice as likely to provide an explanation for why the outcome was not reported. There has been an increased uptake of SoF tables in Cochrane reviews. Outcomes that were repeatedly specified in Cochrane reviews by CRG in 2007, 2011, and 2013 may assist COS development.
Abstract:
In this single-centre study of childhood acute lymphoblastic leukaemia (ALL) patients treated on the Medical Research Council UKALL 97/99 protocols, minimal residual disease (MRD) detected by real-time quantitative polymerase chain reaction (RQ-PCR) and by 3-colour flow cytometry (FC) showed a high level of qualitative concordance when evaluated at multiple time-points during treatment (93.38%), and the combined use of both approaches allowed a multi-time-point evaluation of MRD kinetics for 90% (53/59) of the initial cohort. At diagnosis, MRD markers with a sensitivity of at least 0.01% were identified by RQ-PCR detection of fusion gene transcripts, by IGH/TRG rearrangements, and by FC. Using the combined RQ-PCR and FC approach, evaluation of 367 follow-up BM samples revealed that detection of MRD >1% at Day 15 (P = 0.04), >0.01% at the end of induction (P = 0.02), >0.01% at the end of consolidation (P = 0.01), >0.01% prior to the first delayed intensification (P = 0.01), and >0.1% prior to the second delayed intensification and continued maintenance (P = 0.001) was associated with relapse. Based on the early time-points (end of induction and consolidation), a significant log-rank trend (P = 0.0091) was noted between survival curves for patients stratified into high-, intermediate-, and low-risk MRD groups.
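As a worked illustration of the time-point-specific thresholds quoted in this abstract, a minimal Python sketch follows. The threshold values are taken directly from the abstract; the function name and time-point labels are hypothetical names chosen for this example, not the authors' analysis code.

    # Relapse-associated MRD thresholds (in percent) quoted in the abstract,
    # keyed by treatment time-point. Illustrative only.
    RELAPSE_THRESHOLDS = {
        "day_15": 1.0,
        "end_of_induction": 0.01,
        "end_of_consolidation": 0.01,
        "pre_first_delayed_intensification": 0.01,
        "pre_second_delayed_intensification": 0.1,
    }

    def exceeds_relapse_threshold(time_point: str, mrd_percent: float) -> bool:
        """Return True if the measured MRD level exceeds the threshold
        reported to be associated with relapse at the given time-point."""
        return mrd_percent > RELAPSE_THRESHOLDS[time_point]

    # Example: 0.05% MRD at the end of induction exceeds the 0.01% threshold.
    print(exceeds_relapse_threshold("end_of_induction", 0.05))  # True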
Abstract:
Wilms' tumor gene 1 (WT1) is overexpressed in the majority (70-90%) of acute leukemias and has been identified as an independent adverse prognostic factor, a convenient minimal residual disease (MRD) marker, and a potential therapeutic target in acute leukemia. We examined WT1 expression patterns in childhood acute lymphoblastic leukemia (ALL), where its clinical implication remains unclear. Using a real-time quantitative PCR designed according to Europe Against Cancer Program recommendations, we evaluated WT1 expression in 125 consecutively enrolled patients with childhood ALL (106 BCP-ALL, 19 T-ALL) and compared it with physiologic WT1 expression in normal and regenerating bone marrow (BM). In childhood B-cell precursor (BCP)-ALL, we detected a wide range of WT1 levels (5 logs) with a median WT1 expression close to that of normal BM. WT1 expression in childhood T-ALL was significantly higher than in BCP-ALL (P<0.001). Patients with the MLL-AF4 translocation showed marked WT1 overexpression (P<0.01) compared to patients with other or no chromosomal aberrations. Older children (≥10 years) expressed higher WT1 levels than children under 10 years of age (P<0.001), while there was no difference in WT1 expression between patients with a peripheral blood leukocyte count (WBC) ≥50 × 10^9/l and those with lower counts. Analysis of relapsed cases (14/125) indicated that an abnormal increase or decrease in WT1 expression was associated with a significantly increased risk of relapse (P=0.0006), and this prognostic impact of WT1 was independent of other main risk factors (P=0.0012). In summary, our study suggests that WT1 expression in childhood ALL is highly variable and much lower than in AML or adult ALL. WT1 will thus not be a useful marker for MRD detection in childhood ALL; however, it does represent a potential independent risk factor in childhood ALL. Interestingly, a proportion of childhood ALL patients express WT1 at levels below the normal physiologic BM WT1 expression, and this reduced WT1 expression appears to be associated with a higher risk of relapse.
Abstract:
This paper considers a problem of identification for a high-dimensional nonlinear non-parametric system when only a limited data set is available. Algorithms are proposed that exploit the relationship between the input variables and the output, as well as the inter-dependence of the input variables, so that the importance of the input variables can be established. A key to these algorithms is a non-parametric two-stage input selection algorithm.
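The abstract does not give the two-stage algorithm itself. As a generic stand-in, the sketch below ranks the input variables of a nonlinear system by their estimated mutual information with the output using scikit-learn; it illustrates nonparametric input-importance ranking in general, not the paper's method, and all data are synthetic.

    # Generic nonparametric input-importance ranking; a stand-in
    # illustration, NOT the paper's two-stage selection algorithm.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))            # 10 candidate input variables
    y = np.sin(X[:, 2]) + 0.5 * X[:, 7] ** 2  # output depends on inputs 2 and 7

    # Estimate the dependence of the output on each input nonparametrically.
    scores = mutual_info_regression(X, y, random_state=0)
    ranking = np.argsort(scores)[::-1]
    print("inputs ranked by estimated importance:", ranking)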
Abstract:
Although Answer Set Programming (ASP) is a powerful framework for declarative problem solving, it cannot intuitively handle situations in which some rules are uncertain, or in which satisfying some constraints is more important than satisfying others. Possibilistic ASP (PASP) is a natural extension of ASP in which a certainty weight is associated with each rule. In this paper we contrast two different views on interpreting the weights attached to rules. Under the first view, weights reflect the certainty with which we can conclude the head of a rule when its body is satisfied. Under the second view, weights reflect the certainty that a given rule restricts the considered epistemic states of an agent in a valid way, i.e. the certainty that the rule itself is correct. The first view gives rise to a set of weighted answer sets, whereas the second view gives rise to a weighted set of classical answer sets.
Abstract:
Answer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot easily be used for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
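As background for the constraint-based reading used here, standard possibilistic logic interprets a certainty-weighted formula as a lower bound on a necessity measure; the sketch below, in LaTeX notation, states this classical reading only, not the paper's rule-level PASP semantics.

    % A possibility distribution \pi over interpretations \omega induces a
    % necessity measure N; a weighted formula (\varphi, \lambda) is read as
    % a constraint on \pi (classical possibilistic logic):
    N(\varphi) = 1 - \max_{\omega \,\not\models\, \varphi} \pi(\omega),
    \qquad (\varphi, \lambda) \ \text{encodes}\ N(\varphi) \geq \lambda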
Abstract:
Boolean games are a framework for reasoning about the rational behaviour of agents whose goals are formalized using propositional formulas. They offer an attractive alternative to normal-form games because they allow for a more intuitive and more compact encoding. However, there is currently no general, tailor-made method available to compute the equilibria of Boolean games. In this paper, we introduce a method for finding the pure Nash equilibria based on disjunctive answer set programming. Our method is furthermore capable of finding the core elements and the Pareto-optimal equilibria, and can easily be modified to support other forms of optimality, thanks to the declarative nature of disjunctive answer set programming. Experimental results clearly demonstrate the effectiveness of the proposed method.
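As a concrete point of reference for the problem being solved, the Python sketch below finds the pure Nash equilibria of a small Boolean game by brute-force enumeration. The disjunctive ASP encoding the paper proposes is not reproduced here; the game, goal formulas, and variable split are hypothetical examples.

    # Brute-force pure Nash equilibria of a toy Boolean game; an
    # illustrative baseline, not the paper's ASP-based method.
    from itertools import product

    # Each agent controls a set of propositional variables and has a goal,
    # here given as a Python predicate over a truth assignment.
    controls = {"agent1": ["p"], "agent2": ["q"]}
    goals = {
        "agent1": lambda v: v["p"] == v["q"],       # agent1 wants p <-> q
        "agent2": lambda v: v["q"] and not v["p"],  # agent2 wants q and not p
    }
    variables = [x for vs in controls.values() for x in vs]

    def deviations(assignment, agent):
        """All assignments the agent can reach by changing only its own variables."""
        own = controls[agent]
        for bits in product([False, True], repeat=len(own)):
            yield {**assignment, **dict(zip(own, bits))}

    def is_pure_nash(assignment):
        # No agent whose goal is unsatisfied can satisfy it by deviating.
        return all(
            goals[a](assignment)
            or not any(goals[a](d) for d in deviations(assignment, a))
            for a in controls
        )

    for bits in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, bits))
        if is_pure_nash(v):
            print("pure Nash equilibrium:", v)

On this toy game the only pure Nash equilibrium is p=True, q=True: agent1's goal holds there, and agent2 cannot satisfy its goal by changing q alone.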
Abstract:
Possibilistic answer set programming (PASP) extends answer set programming (ASP) by attaching to each rule a degree of certainty. While such an extension is important from an application point of view, existing semantics are not well-motivated, and do not always yield intuitive results. To develop a more suitable semantics, we first introduce a characterization of answer sets of classical ASP programs in terms of possibilistic logic where an ASP program specifies a set of constraints on possibility distributions. This characterization is then naturally generalized to define answer sets of PASP programs. We furthermore provide a syntactic counterpart, leading to a possibilistic generalization of the well-known Gelfond-Lifschitz reduct, and we show how our framework can readily be implemented using standard ASP solvers.
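The classical Gelfond-Lifschitz reduct that this paper generalises can be sketched in a few lines of Python: rules are triples (head, positive body, negative body), and a candidate set of atoms is an answer set iff it equals the least model of its reduct. The rule representation is a hypothetical one chosen for this sketch.

    # Classical Gelfond-Lifschitz reduct and answer-set test for normal
    # logic programs; a minimal sketch of the construct the paper generalises.

    def reduct(program, candidate):
        """Reduct w.r.t. a candidate set of atoms: drop rules whose negative
        body intersects the candidate, then drop all negative literals."""
        return [(head, pos) for head, pos, neg in program
                if not (set(neg) & candidate)]

    def least_model(positive_program):
        """Least model of a negation-free program via fixpoint iteration."""
        model, changed = set(), True
        while changed:
            changed = False
            for head, pos in positive_program:
                if set(pos) <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    def is_answer_set(program, candidate):
        return least_model(reduct(program, candidate)) == candidate

    # Example: { a :- not b.   b :- not a. } has answer sets {a} and {b}.
    prog = [("a", [], ["b"]), ("b", [], ["a"])]
    print(is_answer_set(prog, {"a"}), is_answer_set(prog, {"b"}))  # True True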
Abstract:
Answer set programming is a form of declarative programming that has proven very successful in succinctly formulating and solving complex problems. Although mechanisms for representing and reasoning with the combined answer set programs of multiple agents have already been proposed, the actual gain in expressivity when adding communication has not been thoroughly studied. We show that allowing simple programs to talk to each other results in the same expressivity as adding negation-as-failure. Furthermore, we show that the ability to focus on one program in a network of simple programs results in the same expressivity as adding disjunction in the head of the rules.
Abstract:
Hidden Markov models (HMMs) are widely used probabilistic models of sequential data. As with other probabilistic models, they require the specification of local conditional probability distributions, whose assessment can be too difficult and error-prone, especially when data are scarce or costly to acquire. The imprecise HMM (iHMM) generalizes HMMs by allowing the quantification to be done by sets of, instead of single, probability distributions. iHMMs have the ability to suspend judgment when there is not enough statistical evidence, and can serve as a sensitivity analysis tool for standard non-stationary HMMs. In this paper, we consider iHMMs under the strong independence interpretation, for which we develop efficient inference algorithms to address standard HMM usage such as the computation of likelihoods and most probable explanations, as well as performing filtering and predictive inference. Experiments with real data show that iHMMs produce more reliable inferences without compromising the computational efficiency.
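As a point of comparison for the imprecise variant, the standard (precise) forward algorithm for computing an HMM likelihood is sketched below with NumPy; the iHMM algorithms developed in the paper replace the single transition and emission matrices used here with sets of them. All parameter values are made up for illustration.

    # Standard precise-HMM forward algorithm (likelihood of an observation
    # sequence); the iHMM generalises the single matrices used here to sets.
    import numpy as np

    init = np.array([0.6, 0.4])              # initial state distribution
    trans = np.array([[0.7, 0.3],            # trans[i, j] = P(next=j | cur=i)
                      [0.4, 0.6]])
    emit = np.array([[0.9, 0.1],             # emit[i, o] = P(obs=o | state=i)
                     [0.2, 0.8]])
    obs = [0, 1, 0]                          # observed symbol indices

    # alpha[i] = P(observations so far, current state = i)
    alpha = init * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]

    print("likelihood:", alpha.sum())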
Abstract:
Online forums are becoming a popular way of finding useful information on the web. Search over forums for existing discussion threads has so far been limited to keyword-based search, owing to the minimal effort it requires on the part of the users. However, it is often not possible to capture all the relevant context of a complex query using a small number of keywords. Example-based search, which retrieves similar discussion threads given one exemplary thread, is an alternative approach that can help the user provide richer context and vastly improve forum search results. In this paper, we address the problem of finding threads similar to a given thread. Towards this, we propose a novel methodology to estimate similarity between discussion threads. Our method exploits the thread structure to decompose threads into sets of weighted overlapping components. It then estimates pairwise thread similarities by quantifying how well the information in the threads is mutually contained within each other, using lexical similarities between their underlying components. We compare our proposed methods on real datasets against state-of-the-art thread retrieval mechanisms and illustrate that our techniques outperform the others by large margins on popular retrieval evaluation measures such as NDCG, MAP, Precision@k, and MRR. In particular, consistent improvements of up to 10% are observed on all evaluation measures.
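A minimal sketch of the kind of containment-style lexical similarity described above follows, using bag-of-words cosine similarity between weighted thread components. The paper's actual decomposition, weighting, and combination rule are not reproduced; all names and data here are hypothetical.

    # Toy containment-style similarity between two discussion threads, each
    # given as a list of (weight, text) components; a simplified sketch of
    # the idea in the abstract, not the paper's exact method.
    from collections import Counter
    from math import sqrt

    def cosine(a, b):
        """Cosine similarity between two texts as bags of words."""
        ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(ca[w] * cb[w] for w in ca)
        na = sqrt(sum(v * v for v in ca.values()))
        nb = sqrt(sum(v * v for v in cb.values()))
        return dot / (na * nb) if na and nb else 0.0

    def containment(src, dst):
        """Weighted average of how well each component of src is covered
        by its best-matching component of dst."""
        total = sum(w for w, _ in src)
        return sum(w * max(cosine(t, u) for _, u in dst) for w, t in src) / total

    def thread_similarity(t1, t2):
        # Symmetrise the two directional containment scores.
        return 0.5 * (containment(t1, t2) + containment(t2, t1))

    t1 = [(2.0, "how do I reset my router password"), (1.0, "try the admin page")]
    t2 = [(1.0, "router password reset steps"), (1.0, "use the admin console page")]
    print(round(thread_similarity(t1, t2), 3))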
Abstract:
Many problems in artificial intelligence can be encoded as answer set programs (ASP) in which some rules are uncertain. ASP programs with incorrect rules may have erroneous conclusions, but due to the non-monotonic nature of ASP, omitting a correct rule may also lead to errors. To derive the most certain conclusions from an uncertain ASP program, we thus need to consider all situations in which some, none, or all of the least certain rules are omitted. This corresponds to treating some rules as optional and reasoning about which conclusions remain valid regardless of the inclusion of these optional rules. While a version of possibilistic ASP (PASP) based on this view has recently been introduced, no implementation is currently available. In this paper we propose a simulation of the main reasoning tasks in PASP using (disjunctive) ASP programs, allowing us to take advantage of state-of-the-art ASP solvers. Furthermore, we identify how several interesting AI problems can be naturally seen as special cases of the considered reasoning tasks, including cautious abductive reasoning and conformant planning. As such, the proposed simulation enables us to solve instances of the latter problem types that are more general than what current solvers can handle.