83 results for "Forwards reachable set"
Abstract:
We examine the representation of judgements of stochastic independence in probabilistic logics. We focus on a relational logic where (i) judgements of stochastic independence are encoded by directed acyclic graphs, and (ii) probabilistic assessments are flexible in the sense that they are not required to specify a single probability measure. We discuss issues of knowledge representation and inference that arise from our particular combination of graphs, stochastic independence, logical formulas and probabilistic assessments.
Abstract:
Background: The COMET (Core Outcome Measures in Effectiveness Trials) Initiative is developing a publicly accessible online resource to collate the knowledge base for core outcome set (COS) development and the applied work from different health conditions. Ensuring that the database is as comprehensive as possible and keeping it up to date are key to its value for users. This requires the development and application of an optimal, multi-faceted search strategy to identify relevant material. This paper describes the challenges of designing and implementing such a search, outlining the development of the search strategy for studies of COS development, and, in turn, the process for establishing a database of COS.
Methods: We investigated the performance characteristics of this strategy including sensitivity, precision and numbers needed to read. We compared the contribution of databases towards identifying included studies to identify the best combination of methods to retrieve all included studies.
Results: Recall of the search strategies ranged from 4% to 87%, and precision from 0.77% to 1.13%. MEDLINE performed best in terms of recall, retrieving 216 (87%) of the 250 included records, followed by Scopus (44%). The Cochrane Methodology Register found just 4% of the included records. MEDLINE was also the database with the highest precision. The number needed to read varied between 89 (MEDLINE) and 130 (Scopus).
Conclusions: We found that two databases and hand searching were required to locate all of the studies in this review. MEDLINE alone retrieved 87% of the included studies, although 97% of the included studies were indexed on MEDLINE. The Cochrane Methodology Register did not contribute any records that were not found in the other databases, and will not be included in our future searches to identify studies developing COS. Scopus had the lowest precision (0.77%) and the highest number needed to read (130). In future COMET searches for COS, a balance needs to be struck between the work involved in screening large numbers of records, the frequency of the searching, and the likelihood that eligible studies will be identified by means other than the database searches.
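The metrics quoted above follow from standard definitions (recall = included records retrieved / total included records; number needed to read, NNR = 1 / precision). A minimal arithmetic sketch, assuming those definitions; the function names are ours, and small differences from the reported figures reflect rounding in the abstract:

```python
def recall(retrieved_included, total_included):
    """Fraction of the included records that a database retrieved."""
    return retrieved_included / total_included

def nnr(precision):
    """Number needed to read: records screened per relevant record found."""
    return 1 / precision

# MEDLINE: 216 of the 250 included records retrieved (reported as 87%).
print(recall(216, 250) * 100)   # ≈ 86.4
# Scopus: precision 0.77% gives the reported NNR of 130.
print(round(nnr(0.0077)))       # 130
```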
Abstract:
Background
Among clinical trials of interventions that aim to modify time spent on mechanical ventilation for critically ill patients there is considerable inconsistency in chosen outcomes and how they are measured. The Core Outcomes in Ventilation Trials (COVenT) study aims to develop a set of core outcomes for use in future ventilation trials in mechanically ventilated adults and children.
Methods/design
We will use a mixed methods approach that incorporates a randomised trial nested within a Delphi study and a consensus meeting. Additionally, we will conduct an observational cohort study to evaluate uptake of the core outcome set in published studies at 5 and 10 years following core outcome set publication. The three-round online Delphi study will use a list of outcomes that have been reported previously in a review of ventilation trials. The Delphi panel will include a range of stakeholder groups including patient support groups. The panel will be randomised to one of three feedback methods to assess the impact of the feedback mechanism on subsequent ranking of outcomes. A final consensus meeting will be held with stakeholder representatives to review outcomes.
Discussion
The COVenT study aims to develop a core outcome set for ventilation trials in critical care, explore the best Delphi feedback mechanism for achieving consensus and determine if participation increases use of the core outcome set in the long term.
Abstract:
Several studies in the last decade have pointed out that many devices, such as computers, are often left powered on even when idle, just to keep them available and reachable on the network, leading to large energy waste. The concept of the network connectivity proxy (NCP) has been proposed as an effective means to improve energy efficiency. It impersonates the presence of networked devices that are temporarily unavailable by carrying out background networking routines on their behalf. Hence, idle devices can be put into low-power states to save energy. Several architectural alternatives and the applicability of this concept to different protocols and applications have been investigated. However, there is no clear understanding of the limitations and issues of this approach in current networking scenarios. This paper extends the knowledge about the NCP by defining an extended set of tasks that the NCP can carry out, by introducing a suitable communication interface to control NCP operation, and by designing, implementing, and evaluating a functional prototype.
Abstract:
A new approach to determining the local boundary of the voltage stability region in a cut-set power space (CVSR) is presented. Power flow tracing is first used to determine the generator-load pair most sensitive to each branch in the interface. The generator-load pairs are then used to apply accurate small disturbances, controlling the branch power flow in the increasing and decreasing directions to obtain new equilibrium points around the initial equilibrium point. Continuation power flow is then run from these new points to obtain the corresponding critical points around the initial critical point on the CVSR boundary. A hyperplane crossing the initial critical point can then be calculated by solving a set of linear algebraic equations. Finally, the presented method is validated on several systems, including the New England 39-bus system, the IEEE 118-bus system, and the EPRI 1000-bus system. The results show that the method is computationally efficient and has a small approximation error. It provides a useful approach for online voltage stability monitoring and assessment of power systems. This work is supported by National Natural Science Foundation of China (No. 50707019), Special Fund of the National Basic Research Program of China (No. 2009CB219701), Foundation for the Author of National Excellent Doctoral Dissertation of PR China (No. 200439), Tianjin Municipal Science and Technology Development Program (No. 09JCZDJC25000), National Major Project of Scientific and Technical Supporting Programs of China During the 11th Five-year Plan Period (No. 2006BAJ03A06). ©2009 State Grid Electric Power Research Institute Press.
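The final step described above, fitting a hyperplane through the critical points by solving a set of linear algebraic equations, can be sketched as follows. This is a hypothetical illustration rather than the paper's implementation: the point values are made up, and we normalise the hyperplane to a·x = 1, assuming it does not pass through the origin:

```python
import numpy as np

def fit_hyperplane(points):
    """Solve P a = 1 so that a . x = 1 holds at every critical point.

    With n points in an n-dimensional cut-set power space, this is a
    square linear system; it fails if the points are affinely degenerate.
    """
    P = np.asarray(points, dtype=float)
    return np.linalg.solve(P, np.ones(len(P)))

# Three critical points in a 3-branch cut-set space (illustrative values).
pts = [[1.0, 2.0, 3.0],
       [1.1, 1.9, 3.1],
       [0.9, 2.2, 2.8]]
a = fit_hyperplane(pts)
# Every critical point satisfies a . x = 1 by construction.
print(np.allclose(np.asarray(pts) @ a, 1.0))
```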
Abstract:
Cells experience damage from exogenous and endogenous sources that endangers genome stability. Several cellular pathways have evolved to detect DNA damage and mediate its repair. Although many proteins have been implicated in these processes, only recent studies have revealed how they operate in the context of higher-order chromatin structure. Here, we identify the nuclear oncogene SET (I2PP2A) as a modulator of the DNA damage response (DDR) and repair in chromatin surrounding double-strand breaks (DSBs). We demonstrate that depletion of SET increases DDR and survival in the presence of radiomimetic drugs, while overexpression of SET impairs DDR and homologous recombination (HR)-mediated DNA repair. SET interacts with the Kruppel-associated box (KRAB)-associated co-repressor KAP1, and its overexpression results in the sustained retention of KAP1 and Heterochromatin protein 1 (HP1) on chromatin. Our results are consistent with a model in which SET-mediated chromatin compaction triggers an inhibition of DNA end resection and HR.
Abstract:
BACKGROUND: A core outcome set (COS) can address problems of outcome heterogeneity and outcome reporting bias in trials and systematic reviews, including Cochrane reviews, helping to reduce waste. One of the aims of the international Core Outcome Measures in Effectiveness Trials (COMET) Initiative is to link the development and use of COS with the outcomes specified and reported in Cochrane reviews, including the outcomes listed in the summary of findings (SoF) tables. As part of this work, an earlier exploratory survey of the outcomes of newly published 2007 and 2011 Cochrane reviews was performed. This survey examined the use of COS, the variety of specified outcomes, and outcome reporting in Cochrane reviews by Cochrane Review Group (CRG). To examine changes over time and to explore outcomes that were repeatedly specified over time in Cochrane reviews by CRG, we conducted a follow-up survey of outcomes in 2013 Cochrane reviews.
METHODS: A descriptive survey of outcomes in Cochrane reviews that were first published in 2013. Outcomes specified in the methods sections and reported in the results section of the Cochrane reviews were examined by CRG. We also explored the uptake of SoF tables, the number of outcomes included in these, and the quality of the evidence for the outcomes.
RESULTS: Across the 50 CRGs, 375 Cochrane reviews that included at least one study specified a total of 3142 outcomes. Of these outcomes, 32% (1008) were not reported in the results section of these reviews. For 23% (233) of these non-reported outcomes, we did not find any reason in the text of the review for the non-reporting. Fifty-seven percent (216/375) of reviews included a SoF table.
CONCLUSIONS: The proportion of specified outcomes that were reported in Cochrane reviews had increased in 2013 (68%) compared to 2007 (61%) and 2011 (65%). Importantly, 2013 Cochrane reviews that did not report specified outcomes were twice as likely to provide an explanation for why the outcome was not reported. There has been an increased uptake of SoF tables in Cochrane reviews. Outcomes that were repeatedly specified in Cochrane reviews by CRG in 2007, 2011, and 2013 may assist COS development.
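The quoted proportions can be reproduced directly from the counts in the Results section; a minimal arithmetic check (variable names are ours):

```python
# Counts taken from the Results section of the abstract.
specified_outcomes = 3142   # outcomes specified across 375 reviews
not_reported = 1008         # outcomes not reported in the results sections
no_reason_given = 233       # non-reported outcomes with no reason given

print(round(not_reported / specified_outcomes * 100))    # 32 (% not reported)
print(round(no_reason_given / not_reported * 100))       # 23 (% with no reason)
# The 68% reporting rate in the Conclusions is the complement:
print(round((specified_outcomes - not_reported) / specified_outcomes * 100))  # 68
```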
Abstract:
This paper considers the problem of identifying a high-dimensional nonlinear non-parametric system when only a limited data set is available. Algorithms are proposed that exploit the relationship between the input variables and the output, as well as the inter-dependence among the input variables, so that the importance of each input variable can be established. A key component of these algorithms is a non-parametric two-stage input selection algorithm.
Abstract:
Although Answer Set Programming (ASP) is a powerful framework for declarative problem solving, it cannot intuitively handle situations in which some rules are uncertain, or in which it is more important to satisfy some constraints than others. Possibilistic ASP (PASP) is a natural extension of ASP in which certainty weights are associated with each rule. In this paper we contrast two different views on interpreting the weights attached to rules. Under the first view, weights reflect the certainty with which we can conclude the head of a rule when its body is satisfied. Under the second view, weights reflect the certainty that a given rule restricts the considered epistemic states of an agent in a valid way, i.e. the certainty that the rule itself is correct. The first view gives rise to a set of weighted answer sets, whereas the second view gives rise to a weighted set of classical answer sets.
Abstract:
Answer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot be used easily for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
Abstract:
Boolean games are a framework for reasoning about the rational behaviour of agents whose goals are formalized using propositional formulas. They offer an attractive alternative to normal-form games, because they allow for a more intuitive and more compact encoding. Unfortunately, there is currently no general, tailor-made method available to compute the equilibria of Boolean games. In this paper, we introduce a method for finding the pure Nash equilibria based on disjunctive answer set programming. Our method is furthermore capable of finding the core elements and the Pareto optimal equilibria, and can easily be modified to support other forms of optimality, thanks to the declarative nature of disjunctive answer set programming. Experimental results clearly demonstrate the effectiveness of the proposed method.
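The paper's disjunctive ASP encoding is not reproduced in the abstract, but the notion it computes — a pure Nash equilibrium of a Boolean game, where each agent controls some propositional variables and gets payoff 1 exactly when its goal formula is satisfied — can be illustrated with a naive brute-force sketch. All names and the example game are ours:

```python
from itertools import product

def pure_nash_equilibria(control, goals):
    """control: agent -> tuple of variables it sets.
    goals: agent -> predicate over an assignment (dict var -> bool)."""
    variables = [v for vs in control.values() for v in vs]
    outcomes = [dict(zip(variables, bits))
                for bits in product([False, True], repeat=len(variables))]

    def payoff(agent, outcome):
        return 1 if goals[agent](outcome) else 0

    def deviations(agent, outcome):
        # All outcomes in which only `agent`'s own variables differ.
        for alt in product([False, True], repeat=len(control[agent])):
            yield {**outcome, **dict(zip(control[agent], alt))}

    # An outcome is a pure Nash equilibrium if no agent can improve
    # its payoff by unilaterally changing its own variables.
    return [o for o in outcomes
            if all(payoff(a, o) >= max(payoff(a, d) for d in deviations(a, o))
                   for a in control)]

# Example: agent 1 controls p and wants p <-> q; agent 2 controls q and wants q.
eqs = pure_nash_equilibria(
    {1: ("p",), 2: ("q",)},
    {1: lambda o: o["p"] == o["q"], 2: lambda o: o["q"]},
)
print(eqs)  # [{'p': True, 'q': True}]
```

The enumeration is exponential in the number of variables, which is precisely why a declarative ASP encoding, as proposed in the paper, is attractive for non-trivial games.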
Abstract:
Possibilistic answer set programming (PASP) extends answer set programming (ASP) by attaching to each rule a degree of certainty. While such an extension is important from an application point of view, existing semantics are not well-motivated, and do not always yield intuitive results. To develop a more suitable semantics, we first introduce a characterization of answer sets of classical ASP programs in terms of possibilistic logic where an ASP program specifies a set of constraints on possibility distributions. This characterization is then naturally generalized to define answer sets of PASP programs. We furthermore provide a syntactic counterpart, leading to a possibilistic generalization of the well-known Gelfond-Lifschitz reduct, and we show how our framework can readily be implemented using standard ASP solvers.