862 results for Knowledge Representation Formalisms and Methods
Abstract:
The present investigation aims to analyse the relationship between knowledge sharing behaviours and performance. These behaviours were studied using Social Network Analysis in order to characterise knowledge sharing networks. After identifying central individuals in these networks, we analysed the association between this centrality and individual performance. A questionnaire was developed and applied to a sample of workers in a Portuguese organisation (N=244). The final conclusions point to a positive association between these behaviours and individual performance.
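As an illustration of the network-analytic step described above, the sketch below (Python with networkx and scipy; the graph and performance scores are invented, not the study's questionnaire data) computes centrality in a knowledge-sharing network and correlates it with individual performance:

```python
# Sketch: centrality in a knowledge-sharing network correlated with
# individual performance. Illustrative data only, not the study's.
import networkx as nx
from scipy.stats import spearmanr

# Directed graph: an edge u -> v means "u shares knowledge with v".
G = nx.DiGraph()
G.add_edges_from([
    ("ana", "bruno"), ("ana", "carla"), ("bruno", "carla"),
    ("bruno", "ana"), ("carla", "ana"), ("diogo", "ana"),
    ("diogo", "bruno"),
])

# In-degree centrality: how often a person is a recipient/target of sharing.
centrality = nx.in_degree_centrality(G)

# Hypothetical individual performance scores (e.g., appraisal ratings).
performance = {"ana": 4.5, "bruno": 3.8, "carla": 4.2, "diogo": 3.1}

people = sorted(performance)
rho, p_value = spearmanr(
    [centrality[p] for p in people],
    [performance[p] for p in people],
)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```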
Abstract:
Knowledge organization in the networked environment is guided by standards. Standards in knowledge organization are built on principles. For example, NISO Z39.19-1993 Guide to the Construction of Monolingual Thesauri (now undergoing revision) and NISO Z39.85-2001 Dublin Core Metadata Element Set are two standards used in many implementations. Both of these standards were crafted with knowledge organization principles in mind. Standards work guided by knowledge organization principles can therefore affect the design of information services and technologies. This poster outlines five threads of thought that inform knowledge organization principles in the networked environment. An understanding of each of these five threads informs system evaluation. The evaluation of knowledge organization systems should be tightly linked to a rigorous understanding of the principles of construction. Thus some foundational evaluation questions grow from an understanding of standards and principles: on what principles is this knowledge organization system built? How well does this implementation meet the ideal conceptualization of those principles? How does this tool compare to others built on the same principles?
Abstract:
The diagnosis of mixed genotype hepatitis C virus (HCV) infection is rare, and information on incidence in the UK, where genotypes 1a and 3 are the most prevalent, is sparse. Considerable variations in the efficacies of direct-acting antivirals (DAAs) for the HCV genotypes have been documented, and the ability of DAAs to treat mixed genotype HCV infections remains unclear, with the possibility that genotype switching may occur. In order to estimate the prevalence of mixed genotype 1a/3 infections in Scotland, a cohort of 512 samples was compiled and then screened using a genotype-specific nested PCR assay. Mixed genotype 1a/3 infections were found in 3.8% of samples tested, with a significantly higher prevalence of 6.7% (p<0.05) observed in individuals diagnosed with genotype 3 infections than in those with genotype 1a infections (0.8%). An analysis of the samples using genotype-specific qPCR assays found that in two-thirds of samples tested, the minor strain contributed <1% of the total viral load. The potential of deep sequencing methods for the diagnosis of mixed genotype infections was assessed using two pan-genotypic PCR assays, compatible with the Illumina MiSeq platform, that were developed targeting the E1-E2 and NS5B regions of the virus. The E1-E2 assay detected 75% of the mixed genotype infections, proving to be more sensitive than the NS5B assay, which identified only 25% of the mixed infections. Studies of sequence data and linked patient records also identified significantly more neurological disorders in genotype 3 patients. Evidence of distinctive dinucleotide expression within the genotypes was also uncovered. Taken together, these findings raise interesting questions about the evolutionary history of the virus and indicate that there is still more to understand about the different genotypes. In an era where clinical medicine is ever more personalised, the development of diagnostic methods for HCV providing increased patient stratification is increasingly important. This project has shown that sequence-based genotyping methods can be highly discriminatory and informative, and their use should be encouraged in diagnostic laboratories. Mixed genotype infections were challenging to identify, and current deep sequencing methods were not as sensitive or cost-effective as Sanger-based approaches in this study. More research is needed to evaluate the clinical prognosis of patients with mixed genotype infection and to develop clinical guidelines on their treatment.
Abstract:
Natural events are a widely recognized hazard for industrial sites where relevant quantities of hazardous substances are handled, due to the possible generation of cascading events resulting in severe technological accidents (Natech scenarios). Natural events may damage storage and process equipment containing hazardous substances, which may then be released, leading to major accident scenarios. The need to assess the risk associated with Natech scenarios is growing, and methodologies have been developed to quantify Natech risk, considering both point sources and linear sources such as pipelines. A key element of these procedures is the use of vulnerability models providing an estimate of the damage probability of equipment or pipeline segments as a result of the impact of the natural event. Therefore, the first aim of the PhD project was to outline the state of the art of vulnerability models for equipment and pipelines subject to natural events such as floods, earthquakes, and wind. Moreover, the project also aimed at the development of new vulnerability models in order to fill some gaps in the literature. In particular, vulnerability models for vertical equipment subject to wind and to flood were developed. Finally, in order to improve the calculation of Natech risk for linear sources, an original methodology was developed for the quantitative risk assessment of pipelines subject to earthquakes. Overall, the results obtained are a step forward in the quantitative risk assessment of Natech accidents. The tools developed open the way to the inclusion of new equipment in the analysis of Natech events, and the methodology for the assessment of linear risk sources such as pipelines provides an important tool for a more accurate and comprehensive assessment of Natech risk.
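The vulnerability models mentioned above map natural-event intensity onto a damage probability; a common functional form in the Natech literature is the lognormal fragility curve. The sketch below (Python; the parameters are illustrative, not those developed in the thesis) shows the idea:

```python
# Sketch of a lognormal fragility (vulnerability) model: probability of
# equipment damage as a function of natural-event intensity, e.g. peak
# ground acceleration (PGA) for earthquakes or water depth for floods.
# Parameters are illustrative, not those fitted in the thesis.
from math import log
from statistics import NormalDist

def damage_probability(intensity: float, median: float, beta: float) -> float:
    """P(damage | intensity) = Phi(ln(intensity / median) / beta)."""
    if intensity <= 0:
        return 0.0
    return NormalDist().cdf(log(intensity / median) / beta)

# Example: vertical storage tank under seismic load (hypothetical
# parameters: median capacity 0.5 g PGA, lognormal dispersion 0.4).
for pga in (0.1, 0.3, 0.5, 0.8):
    print(f"PGA = {pga:.1f} g -> P(damage) = {damage_probability(pga, 0.5, 0.4):.2f}")
```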
Abstract:
Drawing on theories of technical communication, rhetoric, literacy, language and culture, and medical anthropology, this dissertation explores how local culture and traditions can be incorporated into health-risk-communication-program design and implementation, including the design and dissemination of health-risk messages. In a modern world with increasing global economic partnerships, mounting health and environmental risks, and cross-cultural collaborations, those who interact with people of different cultures have “a moral obligation to take those cultures seriously, including their social organization and values” (Hahn and Inhorn 10). Paradoxically, at the same time as we must carefully adapt health, safety, and environmental-risk messages to diverse cultures and populations, we must also recognize the increasing extent to which we are all becoming part of one vast, interrelated global village. This, too, has a significant impact on the ways in which healthcare plans should be designed, communicated, and implemented. Because communicating across diverse cultures requires a system for “bridging the gap between individual differences and negotiating individual realities” (Kim and Gudykunst 50), both administrators and beneficiaries of malaria-treatment-and-control programs (MTCPs) in Liberia were targeted to participate in this study. A total of 105 people participated: 21 MTCP administrators (including designers and implementers) completed survey questionnaires on program design, implementation, and outcomes; and 84 MTCP beneficiaries (e.g., traditional leaders and young adults) were interviewed about their knowledge of malaria and methods for communicating health risks in their tribe or culture. All participants showed a tremendous sense of courage, commitment, resilience, and pragmatism, especially in light of the fact that many of them live and work under dire socioeconomic conditions (e.g., no electricity and poor communication networks). Although many MTCP beneficiaries interviewed for this study had bed nets in their homes, the largest share (46.34 percent) used a combination of traditional herbal medicine and Western medicine to treat malaria. MTCP administrators who participated in this study rated the impacts of their programs on reducing malaria in Liberia as moderately successful (61.90 percent) or greatly successful (38.10 percent), and they offered a variety of insights on what they might do differently in the future to incorporate local culture and traditions into program design and implementation. Participating MTCP administrators and beneficiaries differed in their understanding of what “cultural incorporation” meant, but they agreed that using local indigenous languages to communicate health-risk messages was essential for effective health-risk communication. They also suggested that understanding the literacy practices and linguistic cultures of the local people is essential to communicating health risks across diverse cultures and populations.
Abstract:
* This research is supported in part by the INTAS project 04-77-7173 (http://www.intas.be).
Abstract:
The objective of this study was to analyse the effect of two health education approaches on the knowledge of schistosomiasis transmission and prevention among school children living in a rural endemic area in the state of Minas Gerais, Brazil. The 87 children participating in the study were divided into three groups based on gender, age, and the presence or absence of Schistosoma mansoni infection. In the first group, a model based on social representations and the illness experience was used. In the second group, we used the cognitive model based on the transmission of information. The third group, the control group, did not receive any information related to schistosomiasis. Ten meetings were held with each group; all three groups received a pre-test prior to the beginning of the educational intervention and a post-test after the completion of the program. The results showed that knowledge levels in Group 1 increased significantly during the program in regard to transmission (p = 0.038) and prevention (p = 0.001) of schistosomiasis. Groups 2 and 3 did not show a significant increase in knowledge between the two tests. These results indicate that health education models need to consider social representations and the illness experience, besides scientific knowledge, in order to increase knowledge of schistosomiasis transmission and prevention.
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management involves fundamental processes to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched in the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is then clear that access to and representation of knowledge will happen more and more in a multilingual setting, which implies overcoming the difficulties inherent in the presence of multiple languages, through the use of processes like the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop joined researchers interested in multilingual knowledge representation in a multidisciplinary environment to debate the possibilities of cross-fertilization between these fields, applied to contexts where multilingualism continuously creates new and demanding challenges to current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in the development of ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims at optimally traversing Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically-underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that purpose, datasets based on standardized pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis of the similarity measures; the comparison thus verifies the similarity measures against objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

In the third paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies through the access and harvesting of multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves.

In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues, the authors present a collaborative platform - conceptME - where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification.

In the fifth paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims at developing an advanced tool that includes expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database including Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors are using User Experience (UX) Analysis to provide subject librarians with visual support, by means of “ontology tables” depicting conceptual links and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
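As a concrete illustration of the feature-based similarity measures compared in Kano's paper above, the sketch below implements Tversky's ratio model over feature sets (Python; a generic formulation from the cognitive-science literature with invented feature sets, not the author's UIS datasets or the Bayesian Model of Generalization):

```python
# Sketch of a feature-based similarity measure: Tversky's ratio model.
# sim(A, B) = |A ∩ B| / (|A ∩ B| + alpha*|A - B| + beta*|B - A|)
# With alpha = beta = 1 this reduces to the Jaccard index.
# Feature sets below are illustrative, not the UIS datasets.

def tversky(a: set, b: set, alpha: float = 1.0, beta: float = 1.0) -> float:
    common = len(a & b)
    denom = common + alpha * len(a - b) + beta * len(b - a)
    return common / denom if denom else 0.0

concept_x = {"tertiary", "public", "degree-granting"}
concept_y = {"tertiary", "private", "degree-granting", "vocational"}

print(tversky(concept_x, concept_y))            # symmetric (Jaccard): 0.4
print(tversky(concept_x, concept_y, 0.8, 0.2))  # asymmetric variant
```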
Abstract:
In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems, which has been applied, using the management system TOSCANA, in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps, we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin.
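To make the Formal Concept Analysis machinery concrete, the sketch below (Python; a minimal generic implementation with an invented context, not TOSCANA or the database-marketing application) enumerates all formal concepts of a small binary context as (extent, intent) pairs:

```python
# Sketch: naive Formal Concept Analysis over a small binary context.
# A formal concept is a pair (extent, intent): the intent is the set of
# attributes shared by all objects in the extent, and the extent is the
# set of all objects having that intent. Example context is invented.
from itertools import combinations

context = {
    "duck":  {"swims", "flies", "feathers"},
    "goose": {"swims", "flies", "feathers"},
    "owl":   {"flies", "feathers"},
    "carp":  {"swims"},
}

def intent(objects):
    """Attributes common to all given objects (all attributes if none)."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set.union(*context.values())

def extent(attributes):
    """Objects possessing all given attributes."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# Closing every subset of objects (extent of its intent) yields all concepts.
concepts = set()
for r in range(len(context) + 1):
    for objs in combinations(context, r):
        i = frozenset(intent(set(objs)))
        concepts.add((frozenset(extent(i)), i))

for ext, inten in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(ext), "<->", sorted(inten))
```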
Abstract:
The international perspectives on these issues are especially valuable in an increasingly connected, but still institutionally and administratively diverse, world. The research addressed in several chapters in this volume includes issues around technical standards bodies like EpiDoc and the TEI, engaging with the ways these standards are implemented, documented, taught, used in the process of transcribing and annotating texts, and used to generate publications and as the basis for advanced textual or corpus research. Other chapters focus on various aspects of philological research and content creation, including collaborative or community-driven efforts, and the issues surrounding editorial oversight, curation, maintenance and sustainability of these resources. Research into ancient languages and linguistics, in particular Greek, and the language teaching that is a staple of our discipline, are also discussed in several chapters, in particular the ways in which advanced research methods can lead into language technologies and vice versa, and the ways in which the skills around teaching can be used for public engagement, and vice versa. A common thread through much of the volume is the importance of open access publication or open source development and distribution of texts, materials, tools and standards, both because of the public good provided by such models (circulating materials often already paid for out of the public purse) and the ability to reach non-standard audiences: those who cannot access rich university libraries or afford expensive print volumes. Linked Open Data is another technology that results in wide and free distribution of structured information both within and outside academic circles, and several chapters present academic work that includes ontologies and RDF, either as a direct research output or as an essential part of the communication and knowledge representation. Several chapters focus not on the literary and philological side of classics but on the study of cultural heritage, archaeology, and the material supports on which original textual and artistic material are engraved or otherwise inscribed, addressing the capture and analysis of artefacts in both 2D and 3D, the representation of data through archaeological standards, and the importance of sharing information and expertise between the several domains, both within and outside academia, that study, record and conserve ancient objects. Almost without exception, the authors reflect on the issues of interdisciplinarity and collaboration, the relationship between their research practice and teaching and/or communication with a wider public, and the importance of the role of the academic researcher in contemporary society and in the context of cutting-edge technologies. How research is communicated in a world of instant-access blogging and 140-character micromessaging, and how our expectations of the media affect not only how we publish but how we conduct our research, are questions of which all scholars need to be aware and self-critical.
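Where the chapters mentioned above publish ontologies and RDF as Linked Open Data, the underlying representation can be sketched as follows (Python with rdflib; the namespace, identifier, and properties are invented for illustration, and real projects would use established vocabularies such as CIDOC CRM and stable, dereferenceable identifiers):

```python
# Sketch: representing an inscription as Linked Open Data with rdflib.
# URIs and properties below are invented for illustration only.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import DCTERMS

EX = Namespace("http://example.org/inscriptions/")  # hypothetical namespace

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("ex", EX)

stele = URIRef(EX["IG-II2-1234"])                   # hypothetical identifier
g.add((stele, RDF.type, EX.Inscription))
g.add((stele, DCTERMS.language, Literal("grc")))    # ancient Greek
g.add((stele, DCTERMS.medium, Literal("marble")))
g.add((stele, EX.findspot, Literal("Athens, Agora")))

print(g.serialize(format="turtle"))
```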
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Knowledge graphs and ontologies are closely related concepts in the field of knowledge representation. In recent years, knowledge graphs have gained increasing popularity and serve as essential components in many knowledge engineering projects that view them as crucial to their success. The conceptual foundation of the knowledge graph is provided by ontologies. Ontology modeling is an iterative engineering process that consists of steps such as the elicitation and formalization of requirements and the development, testing, refactoring, and release of the ontology. The testing of the ontology is a crucial, and occasionally overlooked, step of the process due to the lack of integrated tools to support it. As a result of this gap in the state of the art, ontology testing is carried out manually, which requires a considerable amount of time and effort from the ontology engineers. The lack of tool support is evident in the requirement elicitation process as well. Here, the rise in the adoption and accessibility of knowledge graphs allows for the development and use of automated tools to assist with the elicitation of requirements from such a complementary source of data. Therefore, this doctoral research is focused on developing methods and tools that support the requirement elicitation and testing steps of an ontology engineering process. To support the testing of the ontology, we have developed XDTesting, a web application integrated with the GitHub platform that serves as an ontology testing manager. Concurrently, to support the elicitation and documentation of competency questions, we have defined and implemented RevOnt, a method to extract competency questions from knowledge graphs. Both methods are evaluated through their implementation, and the results are promising.
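A common way to make the ontology-testing step above concrete is to turn a competency question into an executable check. The sketch below (Python with rdflib; the ontology snippet and query are invented, and this is not the XDTesting or RevOnt implementation, only the general pattern such tools automate) runs a competency question as a test:

```python
# Sketch: a competency question ("Which projects have a responsible
# person?") turned into an executable ontology test with rdflib.
# Ontology snippet and query are illustrative only.
from rdflib import Graph

ontology_snippet = """
@prefix ex: <http://example.org/onto#> .
ex:ProjectX ex:hasResponsible ex:Alice .
ex:ProjectY ex:hasBudget "10000" .
"""

g = Graph()
g.parse(data=ontology_snippet, format="turtle")

# CQ1: Which projects have a responsible person?
cq1 = """
PREFIX ex: <http://example.org/onto#>
SELECT ?project WHERE { ?project ex:hasResponsible ?person . }
"""

results = {str(row.project) for row in g.query(cq1)}

# The test asserts that the ontology can answer the competency question.
assert results == {"http://example.org/onto#ProjectX"}, results
print("CQ1 answerable:", results)
```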
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. This syntax enables us to represent modelling assumptions as transformations acting on the set of model equations. The notions of syntactical correctness and semantical consistency of sets of modelling assumptions are defined, and methods for checking them are described. A simple example shows how different modelling assumptions act on the model equations, and their effect on the differential index of the resulting model is also indicated.
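As a concrete illustration of assumptions-as-transformations, the sketch below (Python with sympy; a toy tank mass balance, not the paper's example) applies a steady-state assumption as a substitution over the model equations, turning a differential equation into an algebraic constraint:

```python
# Sketch: a modelling assumption represented as a transformation acting
# on the set of model equations (a toy mass balance). The "steady
# state" assumption replaces the time derivative with zero, turning a
# differential equation into an algebraic one.
import sympy as sp

t = sp.symbols("t")
V = sp.Function("V")(t)                 # liquid volume in a tank
F_in, F_out = sp.symbols("F_in F_out", positive=True)

# Dynamic model equation: dV/dt = F_in - F_out
model = [sp.Eq(sp.Derivative(V, t), F_in - F_out)]

def assume_steady_state(equations):
    """Transformation: substitute every time derivative by zero."""
    return [eq.subs(sp.Derivative(V, t), 0) for eq in equations]

steady = assume_steady_state(model)
print(model)    # [Eq(Derivative(V(t), t), F_in - F_out)]
print(steady)   # [Eq(0, F_in - F_out)] -> algebraic constraint F_in = F_out
```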
Abstract:
In the design of lattice domes, design engineers need expertise in areas such as configuration processing, nonlinear analysis, and optimization. These are extensive numerical, iterative, and time-consuming processes that are prone to error without an integrated design tool. This article presents the application of a knowledge-based system in solving lattice-dome design problems. An operational prototype knowledge-based system, LADOME, has been developed by employing a combined knowledge representation approach, which uses rules, procedural methods, and an object-oriented blackboard concept. The system's objective is to assist engineers in lattice-dome design by integrating all design tasks into a single computer-aided environment with implementation of the knowledge-based system approach. For system verification, results from design examples are presented.
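The combined knowledge representation approach described above (rules, procedural methods, and a blackboard) can be sketched generically as follows (Python; a toy illustration of the architectural pattern, not the LADOME system itself, with invented rules and heuristics):

```python
# Sketch of the combined knowledge-representation pattern: rules and
# procedural methods cooperating over a shared blackboard. A toy
# illustration of the architecture, not LADOME.
blackboard = {"span_m": 30.0, "rise_m": 6.0}

def rule_configuration(bb):
    """Rule: if span and rise are known, classify the dome (invented thresholds)."""
    if "span_m" in bb and "rise_m" in bb and "dome_type" not in bb:
        ratio = bb["rise_m"] / bb["span_m"]
        bb["dome_type"] = "shallow lattice dome" if ratio < 0.25 else "deep lattice dome"
        return True
    return False

def rule_member_estimate(bb):
    """Procedural method: rough member-length estimate once the type is set."""
    if "dome_type" in bb and "member_length_m" not in bb:
        bb["member_length_m"] = round(bb["span_m"] / 10, 2)  # crude heuristic
        return True
    return False

knowledge_sources = [rule_configuration, rule_member_estimate]

# Control loop: fire knowledge sources until none changes the blackboard.
while any(rule(blackboard) for rule in knowledge_sources):
    pass

print(blackboard)
```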