35 results for Linguistic Knowledge Base
Abstract:
Background: The COMET (Core Outcome Measures in Effectiveness Trials) Initiative is developing a publicly accessible online resource to collate the knowledge base for core outcome set (COS) development and the applied work from different health conditions. Ensuring that the database is as comprehensive as possible and keeping it up to date are key to its value for users. This requires the development and application of an optimal, multi-faceted search strategy to identify relevant material. This paper describes the challenges of designing and implementing such a search, outlining the development of the search strategy for studies of COS development and, in turn, the process for establishing a database of COS.
Methods: We investigated the performance characteristics of this strategy, including sensitivity, precision and number needed to read. We compared each database's contribution towards identifying the included studies in order to determine the best combination of methods for retrieving all of them.
Results: Recall of the search strategies ranged from 4% to 87%, and precision from 0.77% to 1.13%. MEDLINE performed best in terms of recall, retrieving 216 (87%) of the 250 included records, followed by Scopus (44%). The Cochrane Methodology Register found just 4% of the included records. MEDLINE was also the database with the highest precision. The number needed to read varied between 89 (MEDLINE) and 130 (Scopus).
Conclusions: We found that two databases and hand searching were required to locate all of the studies in this review. MEDLINE alone retrieved 87% of the included studies, although 97% of the included studies were actually indexed on MEDLINE. The Cochrane Methodology Register did not contribute any records that were not found in the other databases, and will not be included in our future searches to identify studies developing COS. Scopus had the lowest precision (0.77%) and the highest number needed to read (130). In future COMET searches for COS, a balance needs to be struck between the work involved in screening large numbers of records, the frequency of searching and the likelihood that eligible studies will be identified by means other than the database searches.
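The reported figures relate to one another through standard retrieval arithmetic: recall is the share of included records a database retrieved, precision is the share of retrieved records that were included, and number needed to read (NNR) is the reciprocal of precision. A minimal sketch using the MEDLINE figures above; the total-retrieved count (216 * 89) is back-derived from the reported NNR of 89 and is an assumption, not stated in the abstract:

```python
def search_metrics(relevant_retrieved, total_relevant, total_retrieved):
    """Standard metrics used to compare database search performance."""
    recall = relevant_retrieved / total_relevant       # a.k.a. sensitivity
    precision = relevant_retrieved / total_retrieved
    nnr = total_retrieved / relevant_retrieved         # number needed to read
    return recall, precision, nnr

# MEDLINE: 216 of 250 included records retrieved; total retrieved assumed
# from the reported NNR of 89.
recall, precision, nnr = search_metrics(216, 250, 216 * 89)
print(f"recall={recall:.1%}  precision={precision:.2%}  NNR={nnr:.0f}")
```

Note that precision and NNR are reciprocals, which is why Scopus pairs the lowest precision (0.77%) with the highest NNR (130).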
Abstract:
In this paper, we propose an adaptive approach to merging possibilistic knowledge bases that deploys multiple operators, instead of a single operator, in the merging process. The merging approach consists of two steps: a splitting step and a combination step. The splitting step splits each knowledge base into two subbases, and in the second step different classes of subbases are combined using different operators. Our approach is applied to knowledge bases which are individually consistent, and the result of merging is also a consistent knowledge base. Two operators are proposed, based on two different splitting methods. Both operators result in a possibilistic knowledge base which contains more information than that obtained by t-conorm (such as the maximum) based merging methods. In the flat case, one of the operators provides a good alternative to syntax-based merging operators in classical logic.
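The t-conorm baseline mentioned above can be pictured syntactically: maximum-based merging pairs every formula of one base with every formula of the other, keeping the disjunction weighted by the minimum of the two weights, which is why its output is comparatively uninformative. A toy sketch with invented formula strings and weights, representing a base as a dict of necessity weights:

```python
# Syntactic counterpart of maximum-t-conorm merging of two possibilistic
# bases: pairwise disjunctions, each weighted by the min of the two weights.
def tconorm_max_merge(b1, b2):
    return {f"({f1}) | ({f2})": min(w1, w2)
            for f1, w1 in b1.items() for f2, w2 in b2.items()}

b1 = {"p": 0.8, "q": 0.5}
b2 = {"r": 0.6}
print(tconorm_max_merge(b1, b2))  # only disjunctions survive
```

Because every surviving formula is a disjunction, the merged base is logically weaker than either input, which is the information loss the paper's split-then-combine operators aim to avoid.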
Abstract:
Ligand prediction has been driven by a fundamental desire to understand more about how biomolecules recognize their ligands and by the commercial imperative to develop new drugs. Most of the current available software systems are very complex and time-consuming to use. Therefore, developing simple and efficient tools to perform initial screening of interesting compounds is an appealing idea. In this paper, we introduce our tool for very rapid screening for likely ligands (either substrates or inhibitors) based on reasoning with imprecise probabilistic knowledge elicited from past experiments. Probabilistic knowledge is input to the system via a user-friendly interface showing a base compound structure. A prediction of whether a particular compound is a substrate is queried against the acquired probabilistic knowledge base and a probability is returned as an indication of the prediction. This tool will be particularly useful in situations where a number of similar compounds have been screened experimentally, but information is not available for all possible members of that group of compounds. We use two case studies to demonstrate how to use the tool.
Abstract:
It is increasingly recognized that identifying the degree of blame or responsibility of each formula for the inconsistency of a knowledge base (i.e. a set of formulas) is useful for making rational decisions to resolve inconsistency in that knowledge base. Most current techniques for measuring the blame of each formula with regard to an inconsistent knowledge base focus on classical knowledge bases only. Proposals for measuring the blame of formulas with regard to an inconsistent prioritized knowledge base have not yet been given much consideration. However, the notion of priority is important in inconsistency-tolerant reasoning. This article investigates this issue and presents a family of measurements for the degree of blame of each formula in an inconsistent prioritized knowledge base, using the minimal inconsistent subsets of that knowledge base. First, we present a set of intuitive postulates as general criteria to characterize rational measurements of the blame of formulas in an inconsistent prioritized knowledge base. Then we present a family of measurements for the blame of each formula in an inconsistent prioritized knowledge base, guided by the principle of proportionality, one of the intuitive postulates. We also demonstrate that each of these measurements possesses the properties it ought to have. Finally, we use a simple but explanatory example in requirements engineering to illustrate the application of these measurements. Compared with related work, the postulates presented in this article consider the special characteristics of minimal inconsistent subsets as well as the priority levels of formulas. This makes them more appropriate for characterizing inconsistency measures defined from minimal inconsistent subsets, for prioritized as well as classical knowledge bases. Correspondingly, the measures guided by these postulates can intuitively capture the inconsistency of prioritized knowledge bases.
Abstract:
Belief merging is an important but difficult problem in Artificial Intelligence, especially when sources of information are pervaded with uncertainty. Many merging operators have been proposed to deal with this problem in possibilistic logic, a weighted logic which is powerful for handling inconsistency and dealing with uncertainty. They often result in a possibilistic knowledge base, which is a set of weighted formulas. Although possibilistic logic is inconsistency tolerant, it suffers from the well-known "drowning effect". Therefore, we may still want to obtain a consistent possibilistic knowledge base as the result of merging. In such a case, we argue that it is not always necessary to keep weighted information after merging. In this paper, we define a merging operator that maps a set of possibilistic knowledge bases and a formula representing the integrity constraints to a classical knowledge base, using lexicographic ordering. We show that it satisfies nine postulates that generalize the basic postulates for propositional merging given in [11]. These postulates capture the principle of minimal change in some sense. We then provide an algorithm for generating the resulting knowledge base of our merging operator. Finally, we discuss the compatibility of our merging operator with propositional merging and establish the advantage of our merging operator over existing semantic merging operators in the propositional case.
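The lexicographic ordering can be pictured with a leximax-style comparison: score each candidate model by the weights of the formulas it falsifies, sorted in decreasing order, and compare the resulting vectors lexicographically, so that a smaller vector means lighter worst-case violations. This is an illustrative reading rather than the paper's exact definition; the candidate names and weight vectors are invented:

```python
# Leximax-style key: sort a candidate's violated weights in decreasing
# order; Python then compares the lists lexicographically.
def leximax_key(violated_weights):
    return sorted(violated_weights, reverse=True)

# Each candidate model mapped to the weights of the formulas it falsifies.
candidates = {"m1": [0.9, 0.2], "m2": [0.9, 0.1], "m3": [0.8, 0.7]}
best = min(candidates, key=lambda m: leximax_key(candidates[m]))
print(best)  # m3: its worst violation (0.8) beats the others' 0.9
```

Note how the ordering looks past the worst violation only to break ties, which is what lets such operators escape the drowning effect of pure minimum-based comparison.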
Abstract:
Knowledge is an important component in many intelligent systems. Since items of knowledge in a knowledge base can be conflicting, especially if there are multiple sources contributing to the knowledge in this base, significant research efforts have been made on developing inconsistency measures for knowledge bases and on developing merging approaches. Most of these efforts start with flat knowledge bases. However, in many real-world applications, items of knowledge are not perceived with equal importance; rather, weights (which can be used to indicate importance or priority) are associated with items of knowledge. Therefore, measuring the inconsistency of a knowledge base with weighted formulae, as well as merging such bases, is an important but difficult task. In this paper, we derive a numerical characteristic function from each knowledge base with weighted formulae, based on the Dempster-Shafer theory of evidence. Using these functions, we are able to measure the inconsistency of a knowledge base in a convenient and rational way, and to merge multiple knowledge bases with weighted formulae, even if the knowledge in these bases is inconsistent. Furthermore, by examining whether multiple knowledge bases are dependent or independent, they can be combined in different ways using their characteristic functions, which cannot be handled (or at least has never been considered) in classic knowledge-base merging approaches in the literature.
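The characteristic-function approach builds on Dempster-Shafer evidence combination. As background, a minimal sketch of Dempster's classic rule for combining two independent mass functions over a frame of discernment; the frame elements and mass values are invented for illustration:

```python
# Dempster's rule of combination.  Focal elements are frozensets; mass
# assigned to contradictory (empty) intersections is treated as conflict
# and normalised away.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    k = 1.0 - conflict  # normalisation constant
    return {s: w / k for s, w in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.3, frozenset({"a", "b"}): 0.7}
print(dempster_combine(m1, m2))
```

The conflict mass (here 0.6 * 0.3 = 0.18) is exactly the quantity that the dependent/independent distinction above has to treat with care, since Dempster's rule assumes independent sources.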
Abstract:
Summary: This article argues that the notion of the knowledge base as a central aspect of professional activity is flawed, and that it is more useful to see social work as being in a continuous process of constructing and reconstructing professional knowledge. Findings: Culture is an area that has attracted widespread attention in academia and the social professions. However, there has been little examination of culturally sensitive social work practice from a realist perspective, or from one that starts from the view that oppressive structures, as encoded within social class, are essential determinants of cultural experience. Following a critique of postmodern perspectives on culture, the work of Pierre Bourdieu on culture and power is explored. Applications: Three of Bourdieu's key constructs (habitus, field and capital) are utilized to develop a model for culturally sensitive social work practice that attends to the interplay of agency and structure in reproducing inequalities within the social world.
Abstract:
The purpose of this study is to develop a decision-making system to evaluate the risks in E-Commerce (EC) projects. Competitive software businesses face the critical task of assessing risk in the software system development life cycle. This can be conducted on the basis of conventional probabilities, but limited appropriate information is available, so a complete set of probabilities cannot be obtained. In such problems, where the analysis is highly subjective and related to vague, incomplete, uncertain or inexact information, the Dempster-Shafer (DS) theory of evidence offers a potential advantage. We use a direct way of reasoning in a single step (i.e., extended DS theory) to develop a decision-making system to evaluate risk in EC projects. This consists of five stages: 1) establishing the knowledge base and setting rule strengths; 2) collecting evidence and data; 3) converting evidence and rule strength into a mass distribution for each rule, i.e., the first half of the single-step reasoning process; 4) combining prior mass and different rules, i.e., the second half of the single-step reasoning process; and 5) evaluating the belief interval for the best-supported decision for the EC project. We test the system using potential risk factors associated with EC development, and the results indicate that the system is a promising way of assisting an EC project manager in identifying potential risk factors and the corresponding project risks.
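The belief interval evaluated in stage 5 is, in DS theory, the pair [Bel(A), Pl(A)] computed from a mass function: belief sums the mass of all focal elements contained in the hypothesis, plausibility sums the mass of all focal elements overlapping it. A hedged sketch; the risk hypotheses and mass values are invented, not taken from the paper:

```python
# Belief and plausibility of a hypothesis A under a mass function.
def belief(mass, a):
    return sum(w for b, w in mass.items() if b <= a)   # focal elements inside A

def plausibility(mass, a):
    return sum(w for b, w in mass.items() if b & a)    # focal elements meeting A

mass = {frozenset({"high"}): 0.5,
        frozenset({"low"}): 0.25,
        frozenset({"high", "low"}): 0.25}  # mass left on the whole frame

high = frozenset({"high"})
print(belief(mass, high), plausibility(mass, high))  # 0.5 0.75
```

The gap between the two values (here 0.25) is the uncommitted mass, which is what makes the interval more informative than a single point probability when evidence is incomplete.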
Abstract:
The tailpipe emissions from automotive engines have been subject to steadily reducing legislative limits. This reduction has been achieved through the addition of sub-systems to the basic four-stroke engine, which thereby increases its complexity. To ensure the entire system functions correctly, each system and/or sub-system needs to be continuously monitored for the presence of any faults or malfunctions. This is a requirement detailed within the On-Board Diagnostic (OBD) legislation. To date, a physical-model approach has been adopted by the automotive industry for the monitoring requirement of OBD legislation. However, this approach has restrictions arising from the available knowledge base and the computational load required. A neural network technique incorporating Multivariate Statistical Process Control (MSPC) has been proposed as an alternative method of building interrelationships between the measured variables and monitoring the correct operation of the engine. Building upon earlier work on steady-state fault detection, this paper details the use of non-linear models based on an Auto-associative Neural Network (ANN) for fault detection under transient engine operation. The theory and use of the technique are shown in this paper, with application to the detection of air leaks within the inlet manifold system of a modern gasoline engine whilst operated on a pseudo-drive cycle. Copyright © 2007 by ASME.
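The monitoring step common to MSPC-style schemes can be sketched as a residual test: compute the squared prediction error (SPE) between the measured variables and the model's reconstruction, and flag a fault when it exceeds a control limit. In the paper the reconstruction would come from the trained auto-associative network; here it is simply passed in, and the signal values and control limit are invented placeholders:

```python
# SPE-based fault flag: sum of squared residuals between measured signals
# and their model reconstruction, compared against a control limit.
def spe_fault_flag(measured, reconstructed, control_limit):
    spe = sum((m - r) ** 2 for m, r in zip(measured, reconstructed))
    return spe, spe > control_limit

# Healthy operation: reconstruction matches the measurement.
print(spe_fault_flag([1.0, 2.0], [1.0, 2.0], control_limit=0.1))  # (0.0, False)
# An inlet-manifold leak would distort one signal relative to the model.
print(spe_fault_flag([1.0, 2.0], [1.0, 2.5], control_limit=0.1))  # (0.25, True)
```

In practice the control limit is derived from fault-free training data rather than chosen by hand, so that the flag fires only when residuals exceed normal process variation.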
Abstract:
Background: Despite its prevalence and prognostic impact, primary cachexia is not well understood. Its potential to cause considerable psychological stress indicates the need for qualitative research to help understand the perspectives of those affected.
Objective: The aims of this study were to describe the perspectives of patients with primary cachexia, of their relatives, and of the healthcare professionals involved in their care and to demonstrate how this evidence can be applied in practice at 4 different levels of application ranging from empathy to coaching.
Methods: A review of the qualitative literature and an empirical qualitative investigation were used to understand the experiences of patients and relatives and the perspectives of professionals.
Results: The main worries expressed by patients and relatives concerned appetite loss, changing appearance, prognosis, and social interaction. We also describe their coping responses and their views of professionals’ responses. The main concerns of professionals related to poor communication, lack of clinical guidance, and lack of professional education.
Conclusions: Understanding patients’, families’, and professionals’ perspectives, and mapping that understanding onto what we know about the trajectory and prognosis of the condition, provides the evidence base for good practice. Qualitative research has a central role to play in providing the knowledge base for the nursing care of patients with cachexia.
Implications for Practice: The evidence provided can improve nurses’ insight and assist them in assessment of status, the provision of guidance, and coaching. There is a need for the development of a holistic, information-based integrated care pathway for those with cancer cachexia and their families.
Abstract:
There is a growing body of research regarding children and young people in state care that is organised around the concept of transition. Focusing mainly on young people leaving care, the research highlights their experiences of multiple transitions that can contribute to poor long-term outcomes in terms of emotional and psychological well-being, educational attainment and employment prospects. The smaller body of research that focuses on young children shows that their journeys before and when in state care are also marked by multiple and fragmented transitions. Despite the growing knowledge base, two areas remain under-developed: research that draws attention to the lived experiences of young children regarding their transitions into state care, and the development of conceptual frameworks that centralise young children's perspectives to support the development of practice. This article begins to address these gaps by applying Schlossberg's transition framework to a case study of a young child regarding their transition into state care. The article highlights, through the child's perspectives, the multiple impacts of the transition and considers the implications for the development of better child-centred practice.
Abstract:
Context: The development of a consolidated knowledge base for social work requires rigorous approaches to identifying relevant research. Method: The quality of 10 databases and a web search engine was appraised by systematically searching for research articles on resilience and burnout in child protection social workers. Results: Applied Social Sciences Index and Abstracts, Social Services Abstracts and Social Sciences Citation Index (SSCI) had the greatest sensitivity, each retrieving more than double the number of articles retrieved by any other database. PsycINFO and Cumulative Index to Nursing and Allied Health (CINAHL) had the highest precision. Google Scholar had modest sensitivity and good precision in relation to the first 100 items. SSCI, Google Scholar, Medline and CINAHL retrieved the highest numbers of hits not retrieved by any other database. Conclusion: A range of databases is required for even modestly comprehensive searching. Advanced database searching methods are being developed, but the profession requires greater standardization of terminology to assist in information retrieval.