31 results for rule-based logic
Abstract:
This paper presents a shallow dialogue analysis model aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and several machine learning techniques are used to extract features from dialogue transcripts: maximum entropy classifiers for dialogue acts, latent semantic analysis for topic segmentation, and decision tree classifiers for discourse markers. A rule-based approach is proposed for resolving cross-modal references to meeting documents. The methods are trained and evaluated on a common data set and annotation format. The integration of the components into an automated shallow dialogue parser opens the way to multimodal meeting processing and retrieval applications.
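As a hedged illustration of the first component, the sketch below trains a maximum entropy dialogue-act classifier; multinomial logistic regression over bag-of-words features is the standard maxent formulation. The utterances, act labels, and feature choices are invented placeholders, not the paper's data or feature set.

```python
# Toy maximum-entropy dialogue-act classifier (multinomial logistic
# regression == maxent over these features). All data are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "could you open the window",
    "i think we should postpone",
    "yes that works for me",
    "what time is the meeting",
]
acts = ["request", "opinion", "agreement", "question"]

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigram + bigram features
    LogisticRegression(max_iter=1000),    # maximum entropy model
)
clf.fit(utterances, acts)
print(clf.predict(["could we postpone the meeting"]))
```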
Abstract:
The Pulmonary Embolism Rule-out Criteria (PERC) rule is a clinical diagnostic rule designed to exclude pulmonary embolism (PE) without further testing. We sought to externally validate the diagnostic performance of the PERC rule alone and combined with clinical probability assessment based on the revised Geneva score.
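Since the PERC rule is itself a piece of rule-based logic, a minimal sketch of its conjunction of criteria may help; the eight criteria below follow the commonly cited Kline formulation and should be checked against the paper before any use.

```python
# PERC as a conjunction of eight rule-out criteria: PE may be excluded
# without further testing only when all are met (in low clinical
# probability patients). Criteria per the commonly cited formulation.
def perc_negative(age, heart_rate, sao2_room_air, hemoptysis, estrogen_use,
                  prior_dvt_or_pe, unilateral_leg_swelling,
                  recent_surgery_or_trauma):
    return (age < 50
            and heart_rate < 100
            and sao2_room_air >= 95
            and not hemoptysis
            and not estrogen_use
            and not prior_dvt_or_pe
            and not unilateral_leg_swelling
            and not recent_surgery_or_trauma)

print(perc_negative(42, 88, 97, False, False, False, False, False))  # True: PE ruled out
```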
Abstract:
This paper aims at the development and evaluation of a personalized insulin infusion advisory system (IIAS), able to provide real-time estimations of the appropriate insulin infusion rate for type 1 diabetes mellitus (T1DM) patients using continuous glucose monitors and insulin pumps. The system is based on a nonlinear model-predictive controller (NMPC) that uses a personalized glucose-insulin metabolism model, consisting of two compartmental models and a recurrent neural network. The model takes as input the patient's information regarding meal intake, glucose measurements, and insulin infusion rates, and provides glucose predictions. The predictions are fed to the NMPC, in order for the latter to estimate the optimum insulin infusion rates. An algorithm based on fuzzy logic has been developed for the on-line adaptation of the NMPC control parameters. The IIAS has been evaluated in silico using an appropriate simulation environment (UVa T1DM simulator). The IIAS was able to handle various meal profiles, fasting conditions, interpatient variability, intraday variation in physiological parameters, and errors in meal amount estimations.
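The fuzzy-logic adaptation step can be sketched generically; the membership functions, glucose ranges, and rule base below are invented for illustration and are not the paper's design.

```python
# Toy fuzzy-logic adaptation of a single controller weight, in the spirit
# of the on-line NMPC parameter adaptation described above.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adapt_weight(glucose_mg_dl):
    low = tri(glucose_mg_dl, 40, 70, 100)     # hypoglycemic range (illustrative)
    normal = tri(glucose_mg_dl, 80, 110, 160)
    high = tri(glucose_mg_dl, 140, 220, 300)  # hyperglycemic range (illustrative)
    # Rule base: low glucose -> conservative weighting (0.2),
    # normal -> moderate (1.0), high -> aggressive (2.0).
    num = 0.2 * low + 1.0 * normal + 2.0 * high
    den = low + normal + high
    return num / den if den else 1.0  # weighted-average (Sugeno-style) defuzzification

print(adapt_weight(180))  # -> 2.0: fully in the "high" rule's support
```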
Abstract:
It is not clear what a system for evidence-based common knowledge should look like if common knowledge is treated as a greatest fixed point. This paper is a preliminary step towards such a system. We argue that the standard induction rule is not well suited to axiomatize evidence-based common knowledge. As an alternative, we study two different deductive systems for the logic of common knowledge. The first system makes use of an induction axiom, whereas the second one is based on co-inductive proof theory. We show soundness and completeness for both systems.
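For orientation, one standard formulation of the greatest-fixed-point axiom and the induction rule for common knowledge (after Fagin et al., Reasoning About Knowledge) is shown below; the paper's notation and exact rules may differ.

```latex
% C = common knowledge, E = "everybody knows"; notation may differ from the paper.
\[
  \text{(fixed point)}\qquad C\varphi \;\leftrightarrow\; E(\varphi \wedge C\varphi)
\]
\[
  \text{(induction rule)}\qquad
  \frac{\varphi \rightarrow E(\psi \wedge \varphi)}
       {\varphi \rightarrow C\psi}
\]
```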
Abstract:
Background Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score) provides information on the duration of infection, which is unaffected by clinical, immunological, and viral variables. In this report we set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection and evaluated the algorithms in annual cohorts of HIV notifications. Methods Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident (≤12 months) or older infection by 26 different algorithms. Incident infection rates (IIR) were calculated based on the diagnostic sensitivity and specificity of each algorithm and the rule that the total number of incident results is the sum of true-incident and false-incident results, both of which can be calculated from the predetermined sensitivity and specificity. Results The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity. In the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline) and of 0.45, 0.42, and 0.35 in 2008, 2009, and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIRs, although the relative changes between the cohorts were identical for all models. Conclusions The method can be used for comparing IIR in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity to detect incident infection, is advisable as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and sampling bias.
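The stated adjustment rule can be made concrete with a short sketch: if the observed incident results are the sum of true-incident and false-incident results, the true incident count follows from the known sensitivity and specificity (a Rogan-Gladen-style correction). The counts below are illustrative, not the study's data.

```python
# Recover the true incident count I from observed test positives:
#   observed = sens * I + (1 - spec) * (N - I)   =>   solve for I.
def adjusted_incident_count(observed_incident, n_total, sensitivity, specificity):
    return (observed_incident - (1 - specificity) * n_total) / (sensitivity + specificity - 1)

n, observed = 600, 250  # illustrative cohort size and raw "incident" results
i = adjusted_incident_count(observed, n, sensitivity=0.594, specificity=0.951)
print(f"estimated incident infections: {i:.0f}, IIR: {i / n:.2f}")
```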
Abstract:
STUDY DESIGN: The biomechanics of vertebral bodies augmented with real distributions of cement were investigated using nonlinear finite element (FE) analysis. OBJECTIVES: To compare stiffness, strength, and stress transfer of augmented versus nonaugmented osteoporotic vertebral bodies under compressive loading. Specifically, to examine how cement distribution, volume, and compliance affect these biomechanical variables. SUMMARY OF BACKGROUND DATA: Previous FE studies suggested that vertebroplasty might alter vertebral stress transfer, leading to adjacent vertebral failure. However, no FE study so far has accounted for real cement distributions and bone damage accumulation. METHODS: Twelve vertebral bodies scanned with high-resolution pQCT and tested in compression were augmented with various volumes of cement and scanned again. Nonaugmented and augmented pQCT datasets were converted to FE models, with bone properties modeled with an elastic, plastic, and damage constitutive law that was previously calibrated for the nonaugmented models. The cement-bone composite was modeled with a rule of mixtures. The nonaugmented and augmented FE models were subjected to compression, and their stiffness, strength, and stress maps were calculated for different cement compliances. RESULTS: Cement distribution dominated the stiffening and strengthening effects of augmentation. Models with cement connecting either the superior or inferior endplate (S/I fillings) were only up to 2 times stiffer than the nonaugmented models, with minimal strengthening, whereas those with cement connecting both endplates (S + I fillings) were 1 to 8 times stiffer and 1 to 12 times stronger. Stresses increased above and below the cement; this increase was higher for the S + I cases and was significantly reduced by increasing cement compliance. CONCLUSION: The developed FE approach, which accounts for real cement distributions and bone damage accumulation, provides a refined insight into the mechanics of augmented vertebral bodies. In particular, augmentation with compliant cement bridging both endplates would reduce stress transfer while providing sufficient strengthening.
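The rule of mixtures used for the cement-bone composite is the standard volume-fraction-weighted average of the constituent properties; a minimal sketch follows, with illustrative moduli rather than values from the study.

```python
# Rule of mixtures (Voigt bound): composite modulus as the volume-fraction-
# weighted average of cement and bone moduli. Values are illustrative only.
def rule_of_mixtures(e_cement, e_bone, cement_volume_fraction):
    f = cement_volume_fraction
    return f * e_cement + (1.0 - f) * e_bone

# e.g. PMMA cement ~3000 MPa, osteoporotic trabecular bone ~100 MPa (assumed)
print(rule_of_mixtures(e_cement=3000.0, e_bone=100.0, cement_volume_fraction=0.4))
```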
Abstract:
Proof nets provide abstract counterparts to sequent proofs modulo rule permutations; the idea being that if two proofs have the same underlying proof net, they are in essence the same proof. Providing a convincing proof-net counterpart to proofs in the classical sequent calculus is thus an important step in understanding classical sequent calculus proofs. By convincing, we mean that (a) there should be a canonical function from sequent proofs to proof nets, (b) it should be possible to check the correctness of a net in polynomial time, (c) every correct net should be obtainable from a sequent calculus proof, and (d) there should be a cut-elimination procedure which preserves correctness. Previous attempts to give proof-net-like objects for propositional classical logic have failed to satisfy at least one of the above conditions. In Richard McKinley (2010) [22], the author presented a calculus of proof nets (expansion nets) satisfying (a) and (b); the paper defined a sequent calculus corresponding to expansion nets but gave no explicit demonstration of (c). That sequent calculus, called LK∗ in this paper, is a novel one-sided sequent calculus with both additively and multiplicatively formulated disjunction rules. In this paper (a self-contained extended version of Richard McKinley (2010) [22]), we give a full proof of (c) for expansion nets with respect to LK∗, and in addition give a cut-elimination procedure internal to expansion nets; this makes expansion nets the first notion of proof net for classical logic satisfying all four criteria.
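The distinction between additively and multiplicatively formulated disjunction rules can be recalled with the standard one-sided sequent rules below; LK∗'s exact rule set is of course the paper's own.

```latex
% Standard one-sided sequent rules for disjunction; LK* combines both styles.
\[
  \text{(multiplicative)}\quad
  \frac{\vdash \Gamma, A, B}{\vdash \Gamma, A \vee B}
  \qquad
  \text{(additive)}\quad
  \frac{\vdash \Gamma, A}{\vdash \Gamma, A \vee B}
  \quad
  \frac{\vdash \Gamma, B}{\vdash \Gamma, A \vee B}
\]
```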
Abstract:
In this paper, we are concerned with the short-term scheduling of industrial make-and-pack production processes. The planning problem consists of minimizing the production makespan while meeting given end-product demands. Sequence-dependent changeover times, multi-purpose storage units with finite capacities, quarantine times, batch splitting, partial equipment connectivity, material transfer times, and a large number of operations contribute to the complexity of the problem. Known MILP formulations cover all technological constraints of such production processes, but only small problem instances can be solved in reasonable CPU times. In this paper, we develop a heuristic in order to tackle large instances. Under this heuristic, groups of batches are scheduled iteratively using a novel MILP formulation; the assignment of the batches to the groups and the scheduling sequence of the groups are determined using a priority rule. We demonstrate the applicability of the heuristic by means of a real-world production process.
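The decomposition scheme can be sketched schematically: batches are ordered by a priority rule, partitioned into groups, and the groups are scheduled one after another. The priority rule, group size, and the greedy stand-in for the paper's MILP subproblem below are all illustrative assumptions.

```python
# Schematic sketch of the iterative group-scheduling heuristic.
batches = [  # (batch id, processing time) - illustrative data
    ("B1", 4.0), ("B2", 2.5), ("B3", 6.0), ("B4", 1.5), ("B5", 3.0), ("B6", 5.0),
]
GROUP_SIZE = 2

# Priority rule: shortest processing time first (one of many possible rules).
ordered = sorted(batches, key=lambda b: b[1])
groups = [ordered[i:i + GROUP_SIZE] for i in range(0, len(ordered), GROUP_SIZE)]

makespan = 0.0
for group in groups:
    # In the paper each group is scheduled with a novel MILP formulation;
    # here a single-machine greedy append stands in for that subproblem.
    for batch_id, proc_time in group:
        start = makespan
        makespan = start + proc_time
        print(f"{batch_id}: start={start:.1f}, end={makespan:.1f}")
print(f"makespan: {makespan:.1f}")
```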
Abstract:
AIMS A non-invasive gene-expression profiling (GEP) test for rejection surveillance of heart transplant recipients originated in the USA. A European-based study, the Cardiac Allograft Rejection Gene Expression Observational II Study (CARGO II), was conducted to further clinically validate the GEP test performance. METHODS AND RESULTS Blood samples for GEP testing (AlloMap®, CareDx, Brisbane, CA, USA) were collected during post-transplant surveillance. The reference standard for rejection status was based on histopathology grading of tissue from endomyocardial biopsy. The area under the receiver operating characteristic curve (AUC-ROC) and the negative (NPV) and positive predictive values (PPV) for the GEP scores (range 0-39) were computed. Considering the GEP score of 34 as a cut-off (>6 months post-transplantation), 95.5% (381/399) of GEP tests were true negatives, 4.5% (18/399) were false negatives, 10.2% (6/59) were true positives, and 89.8% (53/59) were false positives. Based on 938 paired biopsies, the GEP test score AUC-ROC for distinguishing ≥3A rejection was 0.70 and 0.69 for 2-6 and >6 months post-transplantation, respectively. Depending on the chosen threshold score, the NPV and PPV range from 98.1 to 100% and 2.0 to 4.7%, respectively. CONCLUSION For 2-6 and >6 months post-transplantation, the CARGO II GEP score performance (AUC-ROC = 0.70 and 0.69) is similar to the CARGO study results (AUC-ROC = 0.71 and 0.67). The low prevalence of acute cellular rejection (ACR) contributes to the high NPV and limited PPV of GEP testing. The choice of threshold score for practical use of GEP testing should consider overall clinical assessment of the patient's baseline risk for rejection.
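The predictive-value arithmetic behind the conclusion can be sketched from the confusion counts quoted at the cut-off of 34; note that the abstract's NPV and PPV ranges are computed per threshold score, so they need not coincide with this single-threshold example.

```python
# NPV/PPV from the confusion counts reported above at GEP cut-off 34.
tn, fn, tp, fp = 381, 18, 6, 53

npv = tn / (tn + fn)  # share of below-threshold scores that are truly negative
ppv = tp / (tp + fp)  # share of above-threshold scores that are truly positive
prevalence = (tp + fn) / (tn + fn + tp + fp)

print(f"NPV={npv:.1%}, PPV={ppv:.1%}, prevalence={prevalence:.1%}")
# Low prevalence drives the high NPV and limited PPV noted in the conclusion.
```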
Abstract:
OBJECTIVES To investigate the frequency of interim analyses, stopping rules, and data safety and monitoring boards (DSMBs) in protocols of randomized controlled trials (RCTs); to examine these features across different reasons for trial discontinuation; and to identify discrepancies in reporting between protocols and publications. STUDY DESIGN AND SETTING We used data from a cohort of RCT protocols approved between 2000 and 2003 by six research ethics committees in Switzerland, Germany, and Canada. RESULTS Of 894 RCT protocols, 289 (32.3%) prespecified interim analyses, 153 (17.1%) stopping rules, and 257 (28.7%) DSMBs. Overall, 249 of 894 RCTs (27.9%) were prematurely discontinued, mostly for reasons such as poor recruitment, administrative problems, or unexpected harm. Forty-six of 249 RCTs (18.4%) were discontinued for early benefit or futility; of those, 37 (80.4%) were stopped outside a formal interim analysis or stopping rule. Of 515 published RCTs, there were discrepancies between protocols and publications for interim analyses (21.1%), stopping rules (14.4%), and DSMBs (19.6%). CONCLUSION Two-thirds of RCT protocols did not consider interim analyses, stopping rules, or DSMBs. Most RCTs discontinued for early benefit or futility were stopped without a prespecified mechanism. When assessing trial manuscripts, journals should require access to the protocol.