909 results for knowledge representation
Abstract:
This study aims to analyze the possibilities for integrating new information and communication technologies (ICT) into teaching and learning at a municipal public elementary school in Araraquara. The research is based on a qualitative approach and adopts a case study strategy. We analyzed the Curriculum Guidelines and the Political-Pedagogical Project (PPP) of the selected school in order to examine how the school addresses the use of new technologies in the educational process. The analysis of the Curriculum Guidelines and the Political-Pedagogical Project showed that the school is concerned with making use of ICT in teaching and learning, revealing an awareness of the importance of technology in education, but it does not describe how to integrate ICT into the curriculum. Integrating new technologies into the curriculum requires systematic reflection on their goals, the techniques involved, and the content chosen. Working with new technologies is not only a matter of the digital inclusion of students: the school needs to integrate them into its curriculum and into the process of teaching and learning. Today, ICT constitute a new form of language, essential for knowledge representation, and for that reason their presence in the school curriculum is crucial.
Abstract:
Graduate Program in Digital Television: Information and Knowledge - FAAC
Abstract:
Different vocabularies and contexts are barriers to communication between people or software systems. A common understanding of the domain under discussion is necessary so that the information can be interpreted correctly. An ontology formally models the structure of a domain and makes explicit the shared understanding of it, in the form of concepts and relations that emerge from its observation. It constitutes a kind of framework used for mapping the meaning of the information being exchanged. The formal precision with which ontologies are defined, by means of axioms, allows machine processing and thus enables systems interoperability. Structured this way, knowledge is easily transferred between people or systems from different contexts. Ontologies have several applications nowadays. They are considered the infrastructure of the Semantic Web, which is composed of Web resources with embedded meaning; this allows the automatic execution of complex tasks, benefiting from effective communication between Web software agents. Among other applications, they have also been used to structure the knowledge generated in several areas, such as Biology and Software Engineering.
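A minimal sketch of this idea (not taken from the abstract; the concept names and the single subsumption axiom are invented for illustration) is the following Python fragment, which makes a tiny concept hierarchy explicit and machine-processable:

```python
# Minimal sketch (illustrative only): a toy "ontology" as explicit concepts,
# an is-a hierarchy, and one axiom-like inference rule, showing how shared,
# formal structure enables machine processing.

SUBCLASS_OF = {            # concept -> direct superconcepts
    "Gene": ["BiologicalEntity"],
    "Protein": ["BiologicalEntity"],
    "BiologicalEntity": ["Thing"],
}

INSTANCE_OF = {"TP53": "Gene"}   # individual -> asserted concept


def superconcepts(concept):
    """Transitively close the is-a hierarchy (a simple subsumption axiom)."""
    result, stack = set(), [concept]
    while stack:
        for parent in SUBCLASS_OF.get(stack.pop(), []):
            if parent not in result:
                result.add(parent)
                stack.append(parent)
    return result


def classify(individual):
    """Infer every concept an individual belongs to."""
    asserted = INSTANCE_OF[individual]
    return {asserted} | superconcepts(asserted)


if __name__ == "__main__":
    # Two systems agreeing on this explicit structure can both conclude that
    # TP53 is a BiologicalEntity, even if they never exchanged that fact.
    print(classify("TP53"))  # {'Gene', 'BiologicalEntity', 'Thing'}
```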
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Reasoning and change over inconsistent knowledge bases (KBs) is of utmost relevance in areas like medicine and law. Argumentation may bring the possibility to cope with both problems. Firstly, by constructing an argumentation framework (AF) from the inconsistent KB, we can decide whether to accept or reject a certain claim through the interplay among arguments and counterarguments. Secondly, by handling the dynamics of arguments of the AF, we might deal with the dynamics of knowledge of the underlying inconsistent KB. The dynamics of arguments has recently attracted attention and, although some approaches have been proposed, a full axiomatization within the theory of belief revision was still missing. A revision arises when we want the argumentation semantics to accept an argument. Argument Theory Change (ATC) encompasses the revision operators that modify the AF by analyzing dialectical trees (arguments as nodes and attacks as edges) as the adopted argumentation semantics. In this article, we present a simple approach to ATC based on propositional KBs. This makes it possible to manage change of inconsistent KBs by relying upon classical belief revision, although, unlike classical revision, consistency restoration of the KB is avoided. Subsequently, a set of rationality postulates adapted to argumentation is given, and finally the proposed model of change is related to the postulates through the corresponding representation theorem. Though we focus on propositional logic, the results can be easily extended to more expressive formalisms such as first-order logic and description logics, in order to handle the evolution of ontologies.
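As an illustration of the kind of dialectical analysis ATC builds on, the following Python sketch marks a dialectical tree in the usual way (an argument is undefeated exactly when all of its attackers are defeated); the arguments and attacks are invented, and this is not the article's formalism:

```python
# Minimal sketch (illustrative): the usual marking of a dialectical tree,
# where a node is Undefeated ("U") iff all of its attacking children are
# Defeated ("D").

ATTACKERS = {            # argument -> arguments attacking it (tree edges)
    "A": ["B", "C"],     # A is the root claim under evaluation
    "B": ["D"],
    "C": [],
    "D": [],
}


def mark(arg):
    """Return 'U' if arg is undefeated, 'D' otherwise."""
    children = ATTACKERS.get(arg, [])
    # arg survives only if every attacker is itself defeated
    return "U" if all(mark(child) == "D" for child in children) else "D"


if __name__ == "__main__":
    # C is unattacked, so it defeats A regardless of the B/D branch.
    print({a: mark(a) for a in ATTACKERS})  # {'A': 'D', 'B': 'D', 'C': 'U', 'D': 'U'}
```

An ATC-style revision would then alter the tree, for instance by removing or defeating attackers, until the root argument becomes undefeated, which matches the abstract's description of a revision as wanting the semantics to accept an argument.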
Abstract:
Background: The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make this feasible, we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is the integration of clinical, socio-demographic, and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information.

Results: We have implemented an extension of Chado, the Clinical Module, to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: the data level, to store the data; the semantic level, to integrate and standardize the data through the use of ontologies; the application level, to manage clinical databases, ontologies, and the data integration process; and the web interface level, to allow interaction between the user and the system. The Clinical Module was built on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented, and the framework was loaded using data from a real clinical research database. Clinical and demographic data, as well as biomaterial data, were obtained from patients with head and neck tumors. We implemented the IPTrans tool, a complete environment for data migration which comprises: the construction of a model to describe the legacy clinical data, based on an ontology; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications.

Conclusions: Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are also not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different “omics” technologies with patients' clinical and socio-demographic data. This framework should present certain features: flexibility, compression, and robustness. The experiments carried out on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
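As a rough illustration of the Entity-Attribute-Value idea underlying the Clinical Module, here is a deliberately simplified Python/sqlite3 sketch; the table layout is an assumption made for the example, far leaner than the actual module, and the attribute identifiers are placeholders rather than real ontology terms:

```python
# Minimal EAV sketch (assumed, simplified schema): each clinical fact is one
# row, and the attribute column points to an ontology term, so new kinds of
# observations need no schema change.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observation (
        entity    TEXT,   -- e.g. a patient identifier
        attribute TEXT,   -- ontology term naming the attribute (placeholder ids below)
        value     TEXT    -- the recorded value
    )
""")
rows = [
    ("patient:001", "ont:age",        "58"),
    ("patient:001", "ont:tumor_site", "larynx"),
    ("patient:002", "ont:age",        "64"),
]
conn.executemany("INSERT INTO observation VALUES (?, ?, ?)", rows)

# All values of one attribute across patients, without a dedicated column:
for entity, value in conn.execute(
        "SELECT entity, value FROM observation WHERE attribute = ?",
        ("ont:age",)):
    print(entity, value)
```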
Abstract:
This paper describes a logic-based formalism for qualitative spatial reasoning with cast shadows (Perceptual Qualitative Relations on Shadows, or PQRS) and presents results of a mobile robot qualitative self-localisation experiment using this formalism. Shadow detection was accomplished by mapping the images from the robot’s monocular colour camera into HSV colour space and then thresholding on the V dimension. We present results of self-localisation using two methods for obtaining the threshold automatically: in one method the images are segmented according to their grey-scale histograms; in the other, the threshold is set according to a prediction about the robot’s location, based upon a qualitative spatial reasoning theory about shadows. This theory-driven threshold search and the qualitative self-localisation procedure are the main contributions of the present research. To the best of our knowledge, this is the first work that uses qualitative spatial representations both to perform robot self-localisation and to calibrate a robot’s interpretation of its perceptual input.
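A hedged sketch of the histogram-style variant of this pipeline is shown below; it assumes OpenCV and NumPy are available and uses Otsu's method as a stand-in for the paper's grey-scale-histogram thresholding, so it should be read as an approximation rather than the authors' exact procedure:

```python
# Minimal sketch (assumes OpenCV and NumPy; Otsu is a stand-in for the
# paper's histogram-based threshold): dark pixels in the V channel of HSV
# are kept as shadow candidates.
import cv2
import numpy as np

def shadow_mask(bgr_image: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]                       # brightness channel
    # Automatic threshold on V; THRESH_BINARY_INV marks *dark* pixels (shadows).
    _, mask = cv2.threshold(v, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask

if __name__ == "__main__":
    frame = cv2.imread("frame.png")        # hypothetical input image
    if frame is not None:
        cv2.imwrite("shadow_mask.png", shadow_mask(frame))
```

The paper's second, theory-driven method instead selects the threshold that best matches a qualitative prediction of the robot's location; that search is not reproduced here.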
Abstract:
The dynamicity and heterogeneity that characterize pervasive environments raise new challenges in the design of mobile middleware. Pervasive environments exhibit a significant degree of heterogeneity, variability, and dynamicity that conventional middleware solutions are not able to manage adequately. Originally designed for use in a relatively static context, such middleware systems tend to hide low-level details in order to provide applications with a transparent view of the underlying execution platform. In mobile environments, however, the context is extremely dynamic and cannot be managed by a priori assumptions. Novel middleware should therefore support mobile computing applications in the task of adapting their behavior to frequent changes in the execution context, that is, it should become context-aware. In particular, this thesis has identified the following key requirements for novel context-aware middleware that existing solutions do not yet fulfil. (i) Middleware solutions should support interoperability between possibly unknown entities by providing expressive representation models that make it possible to describe interacting entities, their operating conditions, and the surrounding world, i.e., their context, according to an unambiguous semantics. (ii) Middleware solutions should support distributed applications in the task of reconfiguring and adapting their behavior/results to ongoing context changes. (iii) Context-aware middleware support should be deployable on heterogeneous devices under variable operating conditions, such as different user needs, application requirements, available connectivity and device computational capabilities, as well as changing environmental conditions. Our main claim is that the adoption of semantic metadata to represent context information and context-dependent adaptation strategies makes it possible to build context-aware middleware suitable for all dynamically available portable devices. Semantic metadata provide powerful knowledge representation means to model even complex context information, and they enable automated reasoning to infer additional and/or more complex knowledge from the available context data. In addition, we suggest that, by adopting proper configuration and deployment strategies, semantic support features can be provided to differentiated users and devices according to their specific needs and current context. This thesis has investigated novel design guidelines and implementation options for semantic-based context-aware middleware solutions targeted at pervasive environments. These guidelines have been applied to different application areas within pervasive computing that would particularly benefit from the exploitation of context. Common to all applications is the key role of context in enabling mobile users to personalize applications based on their needs and current situation. The main contributions of this thesis are (i) the definition of a metadata model to represent and reason about context, (ii) the definition of a model for the design and development of context-aware middleware based on semantic metadata, (iii) the design of three novel middleware architectures and the development of a prototype implementation for each of them, and (iv) the proposal of a viable approach to the portability issues raised by the adoption of semantic support services in pervasive applications.
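To make the claim about semantic metadata and automated reasoning concrete, here is a deliberately toy Python sketch; the triples, rules, and adaptation decisions are invented, and real middleware of this kind would use a proper ontology language and reasoner rather than hand-written rules:

```python
# Toy sketch (names and rules are made up, not from the thesis): context
# represented as metadata triples plus forward rules that derive adaptation
# decisions from the current context.
context = {
    ("device:phone", "hasBandwidth", "low"),
    ("user:alice", "locatedIn", "museum"),
    ("museum", "isA", "QuietPlace"),
}

def infer(ctx):
    """Apply simple context rules until no new facts are produced."""
    derived = set(ctx)
    changed = True
    while changed:
        changed = False
        for (s, p, o) in list(derived):
            # Rule 1: being in any QuietPlace implies silent notification mode.
            if p == "locatedIn" and (o, "isA", "QuietPlace") in derived:
                fact = (s, "notificationMode", "silent")
                if fact not in derived:
                    derived.add(fact)
                    changed = True
            # Rule 2: low bandwidth implies delivering text-only content.
            if p == "hasBandwidth" and o == "low":
                fact = (s, "contentProfile", "text-only")
                if fact not in derived:
                    derived.add(fact)
                    changed = True
    return derived - ctx

# Prints the two derived adaptation facts for the toy context above.
print(infer(context))
```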
Abstract:
Traditionally, ontologies describe knowledge representation in a denotational, formalized, and deductive way. In addition, in this paper we propose a semiotic, inductive, and approximate approach to ontology creation. We define a conceptual framework, a semantics extraction algorithm, and a first proof of concept applying the algorithm to a small set of Wikipedia documents. Intended as an extension to the prevailing top-down ontologies, we introduce an inductive fuzzy grassroots ontology, which organizes itself organically from existing natural-language Web content. Using inductive and approximate reasoning to reflect the natural way in which knowledge is processed, the ontology's bottom-up build process creates emergent semantics learned from the Web. By this means, the ontology acts as a hub for computing with words described in natural language. For Web users, the structural semantics are visualized as inductive fuzzy cognitive maps, allowing an initial form of intelligence amplification. Finally, we present an implementation of our inductive fuzzy grassroots ontology. Thus, this paper contributes an algorithm for the extraction of fuzzy grassroots ontologies from Web data by inductive fuzzy classification.
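The following toy Python sketch conveys the inductive, approximate flavour of such an approach (it is not the paper's semantics extraction algorithm): graded associations between terms are induced from their co-occurrence in a handful of documents, instead of being asserted top-down:

```python
# Minimal sketch (illustrative, not the paper's algorithm): derive graded,
# bottom-up relations between terms from how often they co-occur in documents.
from collections import Counter
from itertools import combinations

docs = [                                    # toy "Web content"
    {"jaguar", "car", "engine"},
    {"jaguar", "cat", "jungle"},
    {"car", "engine", "road"},
    {"jaguar", "car", "road"},
]

term_count = Counter(t for d in docs for t in d)
pair_count = Counter(frozenset(p) for d in docs for p in combinations(sorted(d), 2))

def membership(a, b):
    """Degree to which term a is associated with term b (conditional co-occurrence)."""
    return pair_count[frozenset((a, b))] / term_count[a]

# Graded, non-crisp edges of the emerging structure:
print(round(membership("jaguar", "car"), 2))   # 0.67
print(round(membership("jaguar", "cat"), 2))   # 0.33
```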
Abstract:
Online reputation management deals with monitoring and influencing the online record of a person, an organization, or a product. The Social Web offers increasingly simple ways to publish and disseminate personal or opinionated information, which can rapidly have a disastrous influence on the online reputation of some of the entities involved. This dissertation can be split into three parts: in the first part, possible fuzzy clustering applications for the Social Semantic Web are investigated; the second part explores promising Social Semantic Web elements for organizational applications; in the third part the former two parts are brought together and a fuzzy online reputation analysis framework is introduced and evaluated. The entire PhD thesis is based on literature reviews as well as on argumentative-deductive analyses. The possible applications of Social Semantic Web elements within organizations have been researched using a scenario and an additional case study, together with two ancillary case studies based on qualitative interviews. For the conception and implementation of the online reputation analysis application, a conceptual framework was developed. Employing test installations and prototyping, the essential parts of the framework have been implemented. By following a design science research approach, this PhD thesis has created two artifacts: a framework and a prototype as proof of concept. Both artifacts hinge on two core elements: a (cluster-analysis-based) translation of tags used in the Social Web into a computer-understandable fuzzy grassroots ontology for the Semantic Web, and a (Topic Maps-based) knowledge representation system, which facilitates natural interaction with the fuzzy grassroots ontology. This is beneficial for the identification of unknown but essential Web data that could not be realized through conventional online reputation analysis. The inherent structure of natural language supports humans not only in communication but also in the perception of the world. Fuzziness is a promising tool for transforming those human perceptions into computer artifacts. Through fuzzy grassroots ontologies, the Social Semantic Web becomes more natural and can thus streamline online reputation management.
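As a rough illustration of the first core element (the cluster-analysis-based translation of tags into graded, fuzzy structures), the sketch below implements a bare-bones fuzzy c-means in NumPy over invented tag feature vectors; the thesis' actual cluster analysis and ontology construction are considerably richer:

```python
# Minimal fuzzy c-means sketch (illustrative only): each tag is a small
# feature vector; the result is a graded membership of every tag in every
# cluster rather than a crisp assignment.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per tag
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / (dist ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Toy tag vectors (e.g. co-occurrence features); names are made up.
tags = ["cheap", "bargain", "broken", "faulty"]
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
U, _ = fuzzy_cmeans(X, c=2)
for tag, row in zip(tags, U):
    print(tag, np.round(row, 2))
```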
Abstract:
Manuscript 1: “Conceptual Analysis: Externalizing Nursing Knowledge”. We use concept analysis to establish that the report tool nurses prepare, carry, reference, amend, and use as a temporary data repository is an example of a cognitive artifact. This tool, integrally woven throughout the work and practice of nurses, is important to cognition and clinical decision-making. Establishing the tool as a cognitive artifact will support new dimensions of study. Such studies can characterize how this report tool supports cognition, the internal representation of knowledge and skills, and the external representation of the knowledge of the nurse.

Manuscript 2: “Research Methods: Exploring Cognitive Work”. The purpose of this paper is to describe a complex, cross-sectional, multi-method approach to the study of personal cognitive artifacts in the clinical environment. The complex data arrays present in these cognitive artifacts warrant the use of multiple methods of data collection. Use of a less robust research design may result in an incomplete understanding of the meaning, value, content, and relationships between personal cognitive artifacts in the clinical environment and the cognitive work of the user.

Manuscript 3: “Making the Cognitive Work of Registered Nurses Visible”. Purpose: Knowledge representations and structures are created and used by registered nurses to guide patient care. Understanding is limited regarding how these knowledge representations, or cognitive artifacts, contribute to working memory, prioritization, organization, cognition, and decision-making. The purpose of this study was to identify and characterize the role of a specific cognitive artifact, its knowledge representation and structure, as it contributed to the cognitive work of the registered nurse. Methods: Data collection was completed, using qualitative research methods, by shadowing and interviewing 25 registered nurses. Data analysis employed triangulation and iterative analytic processes. Results: Nurse cognitive artifacts support recall, data evaluation, decision-making, organization, and prioritization. These cognitive artifacts demonstrated spatial, longitudinal, chronologic, visual, and personal cues to support the cognitive work of nurses. Conclusions: Nurse cognitive artifacts are an important adjunct to the cognitive work of nurses and directly support patient care. Nurses need to be able to configure their cognitive artifacts in ways that are meaningful and support their internal knowledge representations.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
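One simple instance of such combination at a common level is per-token majority voting over the outputs of several POS taggers, sketched below in Python; the taggers, tagset, and outputs are invented for illustration and do not correspond to any tool used in this work:

```python
# Simple sketch of combining annotations at a common level: majority voting
# over the outputs of several POS taggers for the same token sequence.
from collections import Counter

tagger_outputs = [
    ["DET", "NOUN", "VERB", "ADJ"],    # tagger A (invented output)
    ["DET", "NOUN", "VERB", "NOUN"],   # tagger B
    ["DET", "VERB", "VERB", "ADJ"],    # tagger C
]

def combine(outputs):
    """Per-token majority vote; ties fall back to the first tagger."""
    combined = []
    for token_tags in zip(*outputs):
        tag, count = Counter(token_tags).most_common(1)[0]
        combined.append(tag if count > 1 else token_tags[0])
    return combined

print(combine(tagger_outputs))   # ['DET', 'NOUN', 'VERB', 'ADJ']
```

Such a combiner only works once the tools' annotations share a common schema, which is precisely the interoperability problem stated in (3).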
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance; otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the problems and limitations of linguistic annotation tools mentioned above.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Enabling Subject Matter Experts (SMEs) to formulate knowledge without the intervention of Knowledge Engineers (KEs) requires providing SMEs with methods and tools that abstract away the underlying knowledge representation and allow them to focus on modeling activities. Bridging the gap between SME-authored models and their representation is challenging, especially in the case of complex knowledge types like processes, where aspects like frame management, data flow, and control flow need to be addressed. In this paper, we describe how SME-authored process models can be given an operational semantics and grounded in a knowledge representation language like F-logic in order to support process-related reasoning. The main results of this work include a formalism for process representation and a mechanism for automatically translating process diagrams into executable code following that formalism. Of all the process models authored by SMEs during evaluation, 82% were well-formed, and all of these executed correctly. Additionally, the two optimizations applied to the code generation mechanism produced performance improvements at reasoning time of 25% and 30%, respectively, with respect to the base case.
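The flavour of such a translation can be conveyed with a small sketch; the paper grounds process models in F-logic, whereas this illustration uses plain Python, and all step names, guards, and data are invented. A declarative list of steps with conditions and outputs is interpreted over a shared data context:

```python
# Minimal sketch (not the paper's formalism): a declarative, SME-style
# process description with simple control flow (guards) and data flow
# (each step writes outputs into a shared context), executed in order.
process = [
    {"step": "collect_sample",   "set": {"sample": "S-42"}},
    {"step": "run_assay",        "when": lambda d: "sample" in d,
     "set": {"assay": "positive"}},
    {"step": "notify_clinician", "when": lambda d: d.get("assay") == "positive",
     "set": {"notified": True}},
]

def execute(process, data=None):
    data = dict(data or {})
    for step in process:
        guard = step.get("when", lambda d: True)   # control flow: conditional steps
        if guard(data):
            data.update(step["set"])               # data flow: step outputs
            print("executed", step["step"])
    return data

print(execute(process))
```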
Abstract:
This paper describes ExperNet, an intelligent multi-agent system that was developed under an EU-funded project to assist in the management of a large-scale data network. ExperNet assists network operators at various nodes of a WAN to detect and diagnose hardware failures and network traffic problems, and suggests the most feasible solution through a web-based interface. ExperNet is composed of intelligent agents capable of both local problem solving and social interaction with one another for coordinating problem diagnosis and repair. The current network state is captured and maintained by conventional network management and monitoring software components, which have been smoothly integrated into the system through sophisticated information exchange interfaces. For the implementation of the agents, a distributed Prolog system enhanced with networking facilities was developed. The agents' knowledge base is developed in an extensible and reactive knowledge base system capable of handling multiple types of knowledge representation. ExperNet has been developed, installed, and tested successfully in an experimental network zone of Ukraine.
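As a loose illustration of the local problem solving performed by such agents (ExperNet itself uses distributed Prolog agents, so the Python below is only a toy stand-in, and the rules and observations are invented):

```python
# Toy sketch (illustrative only): a local agent matches the observed node
# state against diagnosis rules and proposes a repair action.
RULES = [
    # (required observations, diagnosis, suggested repair): all invented
    ({"link_down", "high_packet_loss"}, "faulty uplink interface", "swap interface card"),
    ({"high_latency", "cpu_overload"},  "router overload",         "reroute traffic"),
]

def diagnose(observations):
    """Return every (diagnosis, repair) whose conditions hold in the observations."""
    return [(diag, fix) for cond, diag, fix in RULES if cond <= observations]

# In ExperNet, observations would come from the network monitoring components.
state = {"link_down", "high_packet_loss", "fan_failure"}
for diagnosis, repair in diagnose(state):
    print(f"diagnosis: {diagnosis}; suggested repair: {repair}")
```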