Abstract:
Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples at both admission and a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI. © Springer-Verlag London Limited 2008.
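As a rough illustration of this kind of pipeline (not the authors' implementation), the following Python sketch trains a logistic regression classifier on a synthetic five-marker data set with PCA preprocessing and reports its accuracy and raw (unnormalised) Brier score. The data, labels, threshold and component count are all hypothetical.

# Minimal sketch, not the paper's pipeline: logistic regression on a
# hypothetical cardiac-marker matrix, with PCA preprocessing and a Brier score.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, brier_score_loss

rng = np.random.default_rng(0)
# Synthetic stand-in for the five markers (cTnI, CKMB, myoglobin, FABP, GPBB).
X = rng.lognormal(mean=0.0, sigma=1.0, size=(400, 5))
# Hypothetical AMI label driven mostly by the first and fourth markers.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 400) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), PCA(n_components=3), LogisticRegression())
clf.fit(X_tr, y_tr)

p = clf.predict_proba(X_te)[:, 1]                 # estimated probability of AMI
print("accuracy:", accuracy_score(y_te, p > 0.5))
print("Brier score:", brier_score_loss(y_te, p))  # lower is better; 0 is perfect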
Abstract:
The United States Supreme Court case of 1991, Feist Publications, Inc. v. Rural Tel. Service Co., continues to be highly significant for property in data and databases, but remains poorly understood. The approach taken in this article contrasts with previous studies. It focuses upon the “not original” rather than the original. The delineation of the absence of a modicum of creativity in selection, coordination, and arrangement of data as a component of the not original forms a pivotal point in the Supreme Court decision. The author also aims at elucidation rather than critique, using close textual exegesis of the Supreme Court decision. The results of the exegesis are translated into a more formal logical form to enhance clarity and rigor.
The insufficiently creative is initially characterized as “so mechanical or routine.” Mechanical and routine are understood in their ordinary discourse senses, as a conjunction or as connected by AND, and as the central clause. Subsequent clauses amplify the senses of mechanical and routine without disturbing their conjunction.
The delineation of the absence of a modicum of creativity can be correlated with classic conceptions of computability. The insufficiently creative can then be understood as a routine selection, coordination, or arrangement produced by an automatic mechanical procedure or algorithm. An understanding of a modicum of creativity and of copyright law is also indicated.
The value of the exegesis and interpretation is identified as its final simplicity, clarity, comprehensiveness, and potential practical utility.
Abstract:
Temporal distinctiveness models of memory retrieval claim that memories are organised partly in terms of their positions along a temporal dimension, and suggest that memory retrieval involves temporal discrimination. According to such models the retrievability of memories should be related to the discriminability of their temporal distances at the time of retrieval. This prediction is tested directly in three pairs of experiments that examine (a) memory retrieval and (b) identification of temporal durations that correspond to the temporal distances of the memories. Qualitative similarities between memory retrieval and temporal discrimination are found in probed serial recall (Experiments 1 and 2), immediate and delayed free recall (Experiments 3 and 4) and probed serial recall of grouped lists (Experiments 5 and 6). The results are interpreted as consistent with the suggestion that memory retrieval is indeed akin to temporal discrimination. (C) 2008 Elsevier Inc. All rights reserved.
Abstract:
We apply an autobiographical memory framework to the study of regret. Focusing on the distinction between regrets for specific and general events, we argue that the temporal profile of regret, usually explained in terms of the action-inaction distinction, is predicted by models of autobiographical memory. In two studies involving participants in their sixties we demonstrate a reminiscence bump for general, but not for specific, regrets. Recent regrets were more likely to be specific than general in nature. Coding regrets as actions/inactions revealed that general regrets were significantly more likely to be due to inaction, while specific regrets were as likely to be due to action as to inaction. In Study 2 we also generalised all of these findings to a group of participants in their 40s. We re-interpret existing accounts of the temporal profile of regret within the autobiographical memory framework, and outline the practical and theoretical advantages of our memory-based distinction over traditional decision-making approaches to the study of regret. (C) 2008 Elsevier Inc. All rights reserved.
Abstract:
The decision of the U.S. Supreme Court in 1991 in Feist Publications, Inc. v. Rural Tel. Service Co. affirmed originality as a constitutional requirement for copyright. Originality has a specific sense and is constituted by a minimal degree of creativity and independent creation. The not original is the more developed concept within the decision. It includes the absence of a minimal degree of creativity as a major constituent. Different levels of absence of creativity are also distinguished, from the extreme absence of creativity to insufficient creativity. There is a gestalt effect of analogy between the delineation of the not original and the concept of computability. More specific correlations can be found within the extreme absence of creativity. "[S]o mechanical" in the decision can be correlated with an automatic mechanical procedure, and clauses with a historical resonance with understandings of computability as what would naturally be regarded as computable. The routine within the extreme absence of creativity can be regarded as the product of a computational process. The concern of this article is with rigorously establishing an understanding of the extreme absence of creativity, primarily through the correlations with aspects of computability. The understanding established is consistent with the other elements of the not original. It is also revealed as testable under real-world conditions. The possibilities for understanding insufficient creativity, a minimal degree of creativity, and originality, from the understanding developed of the extreme absence of creativity, are indicated.
Abstract:
With the rapid growth in the quantity and complexity of scientific knowledge available to scientists and allied professionals, the problems associated with harnessing this knowledge are well recognized. Some of these problems are a result of the uncertainties and inconsistencies that arise in this knowledge. Other problems arise from heterogeneous and informal formats for this knowledge. To address these problems, developments in the application of knowledge representation and reasoning technologies can allow scientific knowledge to be captured in logic-based formalisms. Using such formalisms, we can reason with the uncertainty and inconsistency, allowing automated techniques to be used for querying and combining scientific knowledge. Furthermore, by harnessing background knowledge, the querying and combining tasks can be carried out more intelligently. In this paper, we review some of the significant proposals for formalisms for representing and reasoning with scientific knowledge.
Abstract:
The number of clinical trial reports is increasing rapidly as a result of the large number of clinical trials being conducted; this raises an urgent need to utilize the clinical knowledge contained in these reports. In this paper, we focus on qualitative rather than quantitative knowledge. More precisely, we aim to model and reason with qualitative comparison (QC for short) relations, which express qualitatively how strongly one drug/therapy is preferred to another from a clinical point of view. To this end, we first formalize the QC relations and introduce the notions of QC language, QC base, and QC profile; second, we propose a set of induction rules for the QC relations, provide grading interpretations for QC bases, and show how to determine whether a QC base is consistent. Furthermore, when a QC base is inconsistent, we analyze how to measure inconsistencies among QC bases, and we propose different approaches to merging multiple QC bases. Finally, a case study on lowering intraocular pressure is conducted to illustrate our approaches.
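As a toy illustration only (the paper's QC language, grading interpretations, and consistency test are not reproduced here), the sketch below stores hypothetical strict-preference statements between treatments and flags such a base as inconsistent when the statements imply a preference cycle. The drug names are purely illustrative and are not taken from the paper's case study.

# Illustrative toy only: strict-preference statements ("A is preferred to B")
# and a naive consistency check that rejects bases implying a preference cycle.
from itertools import product

def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(closure, repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

def is_consistent(qc_base):
    # A base whose closure makes some x strictly preferred to itself is inconsistent.
    closure = transitive_closure(qc_base)
    return all(a != b for a, b in closure)

base = {("latanoprost", "timolol"), ("timolol", "placebo")}
print(is_consistent(base))                                  # True
print(is_consistent(base | {("placebo", "latanoprost")}))   # False: preference cycle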
Abstract:
Hunter and Konieczny explored the relationships between measures of inconsistency for a belief base and the minimal inconsistent subsets of that belief base in several of their papers. In particular, an inconsistency value termed MIVC, defined from minimal inconsistent subsets, can be considered as a Shapley Inconsistency Value. Moreover, it can be axiomatized completely in terms of five simple axioms. MinInc, one of the five axioms, states that each minimal inconsistent set has the same amount of conflict. However, it conflicts with the intuition illustrated by the lottery paradox, which states that as the size of a minimal inconsistent belief base increases, the degree of inconsistency of that belief base becomes smaller. To address this, we present two kinds of revised inconsistency measures for a belief base from its minimal inconsistent subsets. Each of these measures considers the size of each minimal inconsistent subset as well as the number of minimal inconsistent subsets of a belief base. More specifically, we first present a vectorial measure to capture the inconsistency for a belief base, which is more discriminative than MIVC. Then we present a family of weighted inconsistency measures based on the vectorial inconsistency measure, which allow us to capture the inconsistency for a belief base in terms of a single numerical value as usual. We also show that each of the two kinds of revised inconsistency measures can be considered as a particular Shapley Inconsistency Value, and can be axiomatically characterized by the corresponding revised axioms presented in this paper.
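The following brute-force Python sketch illustrates the general idea behind a measure built from minimal inconsistent subsets, as discussed above: formulas are modelled as predicates over truth assignments, the minimal inconsistent subsets (MIS) are enumerated, and each formula is scored by summing 1/|M| over the MIS M that contain it. The belief base and the encoding are hypothetical simplifications, not the paper's machinery.

# Hedged sketch of an MIS-based inconsistency value on a tiny propositional base.
from itertools import combinations, product

ATOMS = ["a", "b"]
# Belief base K = {a, not a, a and b}; each formula is a (name, evaluator) pair.
K = [
    ("a",       lambda v: v["a"]),
    ("not a",   lambda v: not v["a"]),
    ("a and b", lambda v: v["a"] and v["b"]),
]

def satisfiable(formulas):
    # Brute-force check over all assignments of the atoms.
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(f(v) for _, f in formulas):
            return True
    return False

def minimal_inconsistent_subsets(base):
    mis = []
    for r in range(1, len(base) + 1):          # smaller subsets first
        for subset in combinations(base, r):
            if not satisfiable(list(subset)) and \
               not any(set(m) <= set(subset) for m in mis):
                mis.append(subset)
    return mis

mis = minimal_inconsistent_subsets(K)
for name, _ in K:
    value = sum(1.0 / len(m) for m in mis if any(n == name for n, _ in m))
    print(name, "->", value)
# MIS(K) = {{a, not a}, {not a, a and b}}: "not a" scores 1.0, the other two 0.5.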
Abstract:
This paper investigates the center selection of multi-output radial basis function (RBF) networks, and a multi-output fast recursive algorithm (MFRA) is proposed. This method can not only reveal the significance of each candidate center based on the reduction in the trace of the error covariance matrix, but can also estimate the network weights simultaneously using a back-substitution approach. The main contribution is that the center selection procedure and the weight estimation are performed within a well-defined regression context, leading to a significantly reduced computational complexity. The efficiency of the algorithm is confirmed by a computational complexity analysis, and simulation results demonstrate its effectiveness. (C) 2010 Elsevier B.V. All rights reserved.
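As a hedged, single-output simplification (not the MFRA itself, which works recursively on the trace of the error covariance matrix), the following sketch selects RBF centres greedily by the reduction in residual error and re-estimates the weights by least squares. The data and the basis-function width are hypothetical.

# Simplified greedy centre selection for a single-output Gaussian RBF model.
import numpy as np

def rbf_design(X, centres, width=1.0):
    # Gaussian basis functions evaluated at every (sample, centre) pair.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def select_centres(X, y, n_centres, width=1.0):
    chosen, remaining = [], list(range(len(X)))
    for _ in range(n_centres):
        best, best_err = None, np.inf
        for c in remaining:
            Phi = rbf_design(X, X[chosen + [c]], width)
            w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            err = np.sum((y - Phi @ w) ** 2)
            if err < best_err:                 # candidate giving largest error reduction
                best, best_err = c, err
        chosen.append(best)
        remaining.remove(best)
    Phi = rbf_design(X, X[chosen], width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return X[chosen], w

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)
centres, weights = select_centres(X, y, n_centres=5)
print(centres.ravel(), weights)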
Abstract:
In this paper we investigate the relationship between two prioritized knowledge bases by measuring both the conflict and the agreement between them. First of all, a quantity of conflict and two quantities of agreement are defined. The former is shown to be a generalization of the well-known Dalal distance, which is the Hamming distance between two interpretations. The latter are, respectively, a quantity of strong agreement, which measures the amount of information on which two belief bases “totally” agree, and a quantity of weak agreement, which measures the amount of information that is believed by one source but is unknown to the other. All three quantity measures are based on the weighted prime implicant, which represents beliefs in a prioritized belief base. We then define a degree of conflict and two degrees of agreement based on our quantity of conflict and quantities of agreement. We also consider the impact of these measures on belief merging and information source ordering.
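A small sketch of the Dalal distance mentioned above: the Hamming distance between two propositional interpretations, lifted to the distance from an interpretation to the models of a base. The weighted-prime-implicant machinery of the paper is not reproduced, and the interpretations below are hypothetical.

def dalal(w1, w2):
    # Number of atoms on which two interpretations (dicts atom -> bool) differ.
    return sum(w1[a] != w2[a] for a in w1)

def distance_to_base(w, models_of_base):
    # Distance from an interpretation to a base = minimum distance to any model.
    return min(dalal(w, m) for m in models_of_base)

w1 = {"p": True,  "q": False, "r": True}
w2 = {"p": False, "q": False, "r": False}
print(dalal(w1, w2))                                                   # 2
print(distance_to_base(w1, [w2, {"p": True, "q": True, "r": True}]))   # 1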
Abstract:
The success postulate in belief revision ensures that new evidence (input) is always trusted. However, admitting uncertain input has been questioned by many researchers. Darwiche and Pearl argued that strengths of evidence should be introduced to determine the outcome of belief change, and provided a preliminary definition towards this thought. In this paper, we start with Darwiche and Pearl’s idea, aiming to develop a framework that can capture the influence of the strengths of inputs under some rational assumptions. To achieve this, we first define epistemic states to represent beliefs with attached strengths, then present a set of postulates to describe the change process on epistemic states that is determined by the strengths of the input, and establish representation theorems to characterize these postulates. As a result, we obtain a unique rewarding operator which is proved to be a merging operator in line with many other works. We also investigate existing postulates on belief merging and compare them with our postulates. In addition, we show that from an epistemic state, a corresponding ordinal conditional function in the sense of Spohn can be derived, and the result of combining two epistemic states is thus reduced to the result of combining the two corresponding ordinal conditional functions as proposed by Laverny and Lang. Furthermore, when reduced to the belief revision situation, we prove that our results induce all of Darwiche and Pearl’s postulates as well as the Recalcitrance postulate and the Independence postulate.
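As a hedged sketch of the reduction to ordinal conditional functions mentioned above, the following code represents OCFs as mappings from worlds to ranks (0 = most plausible) and combines two of them by pointwise addition followed by re-normalisation so that the minimum rank is 0. The worlds and rank values are hypothetical, and this illustrates one standard way of combining OCFs rather than the paper's operator.

# Minimal sketch: combining two ordinal conditional functions (OCFs).
def normalise(kappa):
    # Shift ranks so the most plausible worlds get rank 0.
    m = min(kappa.values())
    return {w: r - m for w, r in kappa.items()}

def combine(k1, k2):
    # Pointwise addition of ranks, then re-normalisation.
    return normalise({w: k1[w] + k2[w] for w in k1})

k1 = {"pq": 0, "p~q": 1, "~pq": 2, "~p~q": 2}   # hypothetical source 1
k2 = {"pq": 1, "p~q": 0, "~pq": 1, "~p~q": 3}   # hypothetical source 2
k = combine(k1, k2)
print(k)                                        # {'pq': 0, 'p~q': 0, '~pq': 2, '~p~q': 4}
print([w for w, r in k.items() if r == 0])      # most plausible worlds after combination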