838 results for Representation and information retrieval technologies


Relevance:

100.00%

Publisher:

Abstract:

The goal of the present research is to define a Semantic Web framework for precedent modelling, using knowledge extracted from text, metadata, and rules, while maintaining a strong text-to-knowledge morphism between legal text and legal concepts, in order to close the gap between a legal document and its semantics. The framework is composed of four models that use standard languages from the Semantic Web stack of technologies: a document metadata structure, modelling the main parts of a judgement and creating a bridge between a text and its semantic annotations of legal concepts; a legal core ontology, modelling abstract legal concepts and institutions contained in a rule of law; a legal domain ontology, modelling the main legal concepts in a specific domain addressed by the case-law; and an argumentation system, modelling the structure of argumentation. The input to the framework includes metadata associated with judicial concepts and an ontology library representing the structure of case-law. The research builds on previous efforts of the community in legal knowledge representation and rule interchange for applications in the legal domain, applying the theory to a set of real legal documents and stressing OWL axiom definitions as far as possible, so that they provide a semantically powerful representation of the legal document and a solid ground for an argumentation system using a defeasible subset of predicate logic. It appears that some new features of OWL 2 unlock useful reasoning capabilities for legal knowledge, especially when combined with defeasible rules and argumentation schemes. The main task is thus to formalize the legal concepts and argumentation patterns contained in a judgement, with the following requirement: to check, validate, and reuse the discourse of a judge, and the argumentation it contains, as expressed by the judicial text.
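The idea of defeasible rules over formalized legal concepts can be sketched in a few lines. The following is a minimal illustration of priority-based defeasible reasoning, not the framework's actual rule language: a rule fires when its premises hold, and its conclusion is defeated if a conflicting rule fires with strictly higher priority. All rule names and predicates are invented for the example.

```python
# Minimal sketch of defeasible reasoning with priorities (illustrative only).
# A conclusion 'x' conflicts with its negation '-x'; the higher-priority
# firing rule wins.

def defeasible_conclusions(facts, rules):
    """rules: list of (premises, conclusion, priority) triples."""
    fired = [(concl, prio) for prems, concl, prio in rules
             if all(p in facts for p in prems)]
    conclusions = set()
    for concl, prio in fired:
        neg = concl[1:] if concl.startswith("-") else "-" + concl
        # keep the conclusion unless a conflicting rule fired at higher priority
        if not any(c == neg and p > prio for c, p in fired):
            conclusions.add(concl)
    return conclusions

# Hypothetical example: a general rule holds a clause valid; a more specific
# rule, given evidence of coercion, defeats it.
rules = [
    (["contract_signed"], "clause_valid", 1),
    (["contract_signed", "coercion_shown"], "-clause_valid", 2),
]
```

Adding `"coercion_shown"` to the facts flips the outcome, which is the behaviour an argumentation system built on defeasible logic must capture.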

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, the author presents a query language for an RDF (Resource Description Framework) database and discusses its applications in the context of the HELM project (the Hypertextual Electronic Library of Mathematics). The language aims to meet the main requirements coming from the RDF community. In particular, it includes: a human-readable textual syntax and a machine-processable XML (Extensible Markup Language) syntax, both for queries and for query results; a rigorously exposed formal semantics; a graph-oriented RDF data access model capable of exploring an entire RDF graph (including both RDF models and RDF schemata); a full set of Boolean operators to compose query constraints; fully customizable and highly structured query results with a 4-dimensional geometry; and some constructions taken from ordinary programming languages that simplify the formulation of complex queries. The HELM project aims to integrate modern tools for the automation of formal reasoning with the most recent electronic publishing technologies, in order to create and maintain a hypertextual, distributed virtual library of formal mathematical knowledge. In the spirit of the Semantic Web, the documents of this library include RDF metadata describing their structure and content in a machine-understandable form. Using the author's query engine, HELM exploits this information to implement functionalities that allow the interactive and automatic retrieval of documents on the basis of content-aware requests that take into account the mathematical nature of these documents.
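Graph-oriented triple matching with Boolean composition, the core of such a query language, can be sketched compactly. The snippet below is an illustration of the general mechanism, not the thesis' actual syntax; the predicate names and the `?`-variable convention are assumptions for the example.

```python
# Sketch of triple-pattern matching over an RDF-like graph, with an AND
# combinator that joins variable bindings (illustrative, not the real engine).

def match(graph, pattern):
    """Return variable bindings ('?'-prefixed terms) for each matching triple."""
    results = []
    for triple in graph:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                if binding.get(pat, val) != val:   # same var must bind once
                    break
                binding[pat] = val
            elif pat != val:                       # constant must match exactly
                break
        else:
            results.append(binding)
    return results

def conj(graph, p1, p2):
    """Boolean AND: join bindings of two patterns on shared variables."""
    return [{**b1, **b2}
            for b1 in match(graph, p1)
            for b2 in match(graph, p2)
            if all(b1.get(k, v) == v for k, v in b2.items())]

graph = [
    ("doc1", "rdf:type", "Theorem"),
    ("doc1", "dc:title", "Fermat"),
    ("doc2", "rdf:type", "Lemma"),
]
```

A query like "documents of type Theorem, with their titles" then becomes `conj(graph, ("?d", "rdf:type", "Theorem"), ("?d", "dc:title", "?t"))`.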

Relevance:

100.00%

Publisher:

Abstract:

Web-scale knowledge retrieval can be enabled by distributed information retrieval, which clusters Web clients into a large-scale computing infrastructure for knowledge discovery from Web documents. Based on this infrastructure, we propose to apply semiotic (i.e., sub-syntactical) and inductive (i.e., probabilistic) methods for inferring concept associations in human knowledge. These associations can be combined to form a fuzzy (i.e., gradual) semantic net representing a map of the knowledge in the Web. We then propose to provide interactive visualizations of these cognitive concept maps to end users, who can browse and search the Web through a human-oriented, visual, and associative interface.
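One probabilistic reading of such gradual concept associations (the weighting scheme below is an assumption for illustration, not the authors' exact method) scores concept pairs by the overlap of the document sets they appear in, yielding an association strength in [0, 1] that can serve as a fuzzy edge weight:

```python
# Sketch: build a fuzzy semantic net from concept co-occurrence in documents.
from collections import Counter
from itertools import combinations

def fuzzy_semantic_net(documents):
    """documents: iterable of sets of concept terms.
    Returns {frozenset({a, b}): weight} with Jaccard-style weights in [0, 1]."""
    df = Counter()   # document frequency per concept
    co = Counter()   # co-occurrence count per unordered concept pair
    for doc in documents:
        df.update(doc)
        co.update(frozenset(p) for p in combinations(sorted(doc), 2))
    # weight = |docs with both a and b| / |docs with a or b|
    return {pair: co[pair] / (sum(df[c] for c in pair) - co[pair])
            for pair in co}

docs = [{"web", "search"}, {"web", "search", "index"}, {"web", "ontology"}]
net = fuzzy_semantic_net(docs)
```

The resulting weighted pairs are exactly the gradual edges a visualization layer would render as a browsable concept map.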

Relevance:

100.00%

Publisher:

Abstract:

Nurses prepare knowledge representations, or summaries of patient clinical data, each shift. These knowledge representations serve multiple purposes, including support of working memory, workload organization and prioritization, critical thinking, and reflection. The summary is integral to internal knowledge representations, working memory, and decision-making. Study of this nurse knowledge representation resulted in the development of a taxonomy of the knowledge representations necessary to nursing practice. This paper describes the methods used to elicit the knowledge representations and structures necessary for the work of clinical nurses, describes the development of a taxonomy of this knowledge representation, and discusses translation of this methodology to the cognitive artifacts of other disciplines. Understanding the development and purpose of practitioners' knowledge representations provides important direction to informaticists seeking to create information technology alternatives. The outcome of this paper is a suggested process template for the transition of cognitive artifacts to an information system.

Relevance:

100.00%

Publisher:

Abstract:

Digital technologies have profoundly changed not only the ways we create, distribute, access, use, and re-use information but also many of the governance structures we had in place. Overall, "older" institutions at all governance levels have grappled with, and often failed to master, the multi-faceted and multi-directional issues of the Internet. Regulatory entrepreneurs have yet to discover and fully mobilize the potential of digital technologies, both as an influential factor impacting the regulability of the environment and as a potential regulatory tool in themselves. At the same time, we have seen a deterioration of some public spaces and a lower prioritization of public objectives where strong private commercial interests are at play, most tellingly in the field of copyright. Less tangibly, private ordering has taken hold and captured, through contracts, spaces previously regulated by public law. Code embedded in technology often replaces law. Non-state action has in general proliferated and put serious pressure on conventional state-centered, command-and-control models. Under the conditions of this "messy" governance, the provision of key public goods, such as freedom of information, has been made difficult or is indeed jeopardized. The grand question is how we can navigate this complex multi-actor, multi-issue space and secure the attainment of fundamental public interest objectives. This is also the question that Ian Brown and Chris Marsden seek to answer with their book, Regulating Code, recently published in the "Information Revolution and Global Politics" series of MIT Press. This book review critically assesses the bold effort by Brown and Marsden.

Relevance:

100.00%

Publisher:

Abstract:

Soils are fundamental to ensuring water, energy and food security. Within the context of sustainable food production, it is important to share knowledge on existing and emerging technologies that support land and soil monitoring. Technologies such as remote sensing, mobile soil testing, and digital soil mapping have the potential to identify degraded and non-/little-responsive soils, and may also provide a basis for programmes targeting the protection and rehabilitation of soils. In the absence of such information, crop production assessments are often not based on the spatio-temporal variability in soil characteristics. In addition, uncertainties in soil information systems are notable and build up when predictions are used for monitoring soil properties or biophysical modelling. Consequently, interpretations of model-based results have to be done cautiously. As such, they provide a scientific, but not always manageable, basis for farmers and/or policymakers. In general, the key incentives for stakeholders to aim for sustainable management of soils and more resilient food systems are complex at farm as well as higher levels. The same is true of the drivers of soil degradation. The decision-making process aimed at sustainable soil management, be that at farm or higher level, also involves other goals and objectives valued by stakeholders, e.g. land governance, improved environmental quality, and climate change adaptation and mitigation. In this dialogue session we will share ideas on recent developments in the discourse on soils, their functions, and the role of soil and land information in enhancing food system resilience.

Relevance:

100.00%

Publisher:

Abstract:

The metabolic network of a cell represents the catabolic and anabolic reactions that interconvert small molecules (metabolites) through the activity of enzymes, transporters and non-catalyzed chemical reactions. Our understanding of individual metabolic networks is increasing as we learn more about the enzymes that are active in particular cells under particular conditions and as technologies advance to allow detailed measurements of the cellular metabolome. Metabolic network databases are of increasing importance in allowing us to contextualise data sets emerging from transcriptomic, proteomic and metabolomic experiments. Here we present a dynamic database, TrypanoCyc (http://www.metexplore.fr/trypanocyc/), which describes the generic and condition-specific metabolic network of Trypanosoma brucei, a parasitic protozoan responsible for human and animal African trypanosomiasis. In addition to enabling navigation through the BioCyc-based TrypanoCyc interface, we have also implemented a network-based representation of the information through MetExplore, yielding a novel environment in which to visualise the metabolism of this important parasite.
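The core data structure of such a database can be sketched as a bipartite mapping from reactions to the metabolites they consume and produce. The snippet below is illustrative only, not TrypanoCyc's actual schema; the two reactions are textbook glycolytic steps chosen for the example.

```python
# Sketch: a metabolic network as reaction -> (substrates, products), with a
# query for the reactions able to produce a given metabolite (illustrative).

reactions = {
    "hexokinase": ({"glucose", "ATP"}, {"glucose-6-phosphate", "ADP"}),
    "phosphofructokinase": ({"fructose-6-phosphate", "ATP"},
                            {"fructose-1,6-bisphosphate", "ADP"}),
}

def producers(network, metabolite):
    """Names of reactions whose products include the given metabolite."""
    return {name for name, (_, products) in network.items()
            if metabolite in products}
```

A network-based viewer like the one mentioned above is essentially a rendering of this bipartite graph, with metabolites and reactions as the two node types.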

Relevance:

100.00%

Publisher:

Abstract:

Early Employee Assistance Programs (EAPs) had their origin in humanitarian motives, and there was little concern for their cost/benefit ratios; however, as some programs began accumulating data and analyzing them over time, even with single variables such as absenteeism, it became apparent that the humanitarian reasons for a program could be reinforced by cost savings, particularly when the existence of the program was subject to justification. Today there is general agreement that cost/benefit analyses of EAPs are desirable, but the specific models for such analyses, particularly those making use of sophisticated but simple computer-based data management systems, are few. The purpose of this research and development project was to develop a method, a design, and a prototype for gathering, managing, and presenting information about EAPs. The scheme provides information retrieval and analyses relevant to such aspects of EAP operations as: (1) EAP personnel activities; (2) supervisory training effectiveness; (3) client population demographics; (4) assessment and referral effectiveness; (5) treatment network efficacy; and (6) the economic worth of the EAP. The scheme has been implemented and operational at The University of Texas Employee Assistance Programs for more than three years. Application of the scheme in the various programs has identified certain variables that remain necessary in all programs; depending on how aggressively program personnel pursue data acquisition, other program-specific variables are also defined.
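A single-variable analysis of the kind mentioned above, absenteeism, reduces to simple arithmetic. The sketch below illustrates the calculation; all figures and function names are invented for the example and are not taken from the project's actual system.

```python
# Sketch: value the change in absenteeism across matched employee records
# and compare the savings to program cost (illustrative figures only).

def absenteeism_savings(days_before, days_after, daily_cost):
    """Savings from reduced absence, summed over matched before/after records."""
    return sum(b - a for b, a in zip(days_before, days_after)) * daily_cost

def benefit_cost_ratio(savings, program_cost):
    """A ratio above 1 means measured savings exceed the program's cost."""
    return savings / program_cost
```

With invented inputs, `absenteeism_savings([12, 8, 15], [6, 5, 9], 200.0)` values 15 fewer absence days at 200 per day, and dividing by a program cost yields the ratio used to justify the program.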

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates the current relationship between information management and information mediation, along with the digital reference service, through a case study conducted in an academic library. The concept of information mediation is analyzed, since a conceptual examination provides elements that help people comprehend and evaluate the service concerned. The information professional plays a very important role in this mediation, which may be direct or indirect, conscious or unconscious, individual or plural, or embedded in a group; in all such forms the mediator facilitates the acquisition of information, fully or partially satisfying a user's need for all sorts of knowledge. We also approach information management from a perspective that describes the activities performed, from the policies and procedures put into effect through to the evaluation of the service, for which we propose a set of criteria. Finally, we outline a few actions to be implemented in a long-term perspective, whose goal is to continually improve the service, taking into account the human factor.


Relevance:

100.00%

Publisher:

Abstract:

Nowadays, many researchers focus their efforts on studies and applications in the learning area. However, there is a lack of a reference system that makes it possible to know the positioning of, and the existing links between, Learning and Information Technologies. This paper proposes a cartography that explains the relationships between the elements that compose Learning Theories and Information Technologies, considering the learner's own features and the properties of Information Technologies. This intersection will allow us to know which Information Technologies properties promote learning futures.

Relevance:

100.00%

Publisher:

Abstract:

In the beginning of the 1990s, ontology development was similar to an art: ontology developers had no clear guidelines on how to build ontologies, only some design criteria to be followed. Work on principles, methods and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science; (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bio-informatics, and education; and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, covering the most outstanding and widely used methodologies, languages, and tools for building ontologies. In addition, we include some words on how all these elements can be used in the Linked Data initiative.

Relevance:

100.00%

Publisher:

Abstract:

The basal forebrain complex, which includes the nucleus basalis magnocellularis (NBM), provides widespread cholinergic and γ-aminobutyric acid-containing projections throughout the brain, including the insular and pyriform cortices. A number of studies have implicated the cholinergic neurons in the mediation of learning and memory processes. However, the role of basal forebrain activity in information retrieval mechanisms is less well understood. The aim of the present study is to evaluate the effects of reversible inactivation of the NBM by tetrodotoxin (TTX, a voltage-sensitive sodium channel blocker) during the acquisition and retrieval of conditioned taste aversion (CTA), and to measure acetylcholine (ACh) release in the insular cortex during TTX inactivation by means of the microdialysis technique in freely moving rats. Bilateral infusion of TTX into the NBM was performed 30 min before the presentation of the gustatory stimuli, in either the CTA acquisition trial or the retrieval trial. At the same time, levels of extracellular ACh release were measured in the insular cortex. The behavioral results showed significant impairment of CTA acquisition when TTX was infused into the NBM, whereas retrieval was not affected when the treatment was given during the test trial. The biochemical results showed that TTX infusion into the NBM produced a marked decrease in cortical ACh release, compared with controls, during consumption of saccharin in the acquisition trial. Depleted ACh levels were found during the test trial in all groups except the group that received TTX during acquisition. These results suggest a cholinergic-dependent process during acquisition, but not during memory retrieval, and that NBM-mediated cortical cholinergic release may play an important role in the early stages of learning, but not during recall of aversive memories.

Relevance:

100.00%

Publisher:

Abstract:

Remembering an event involves not only what happened, but also where and when it occurred. We measured regional cerebral blood flow by positron emission tomography during initial encoding and subsequent retrieval of item, location, and time information. Multivariate image analysis showed that left frontal brain regions were always activated during encoding, and right superior frontal regions were always activated at retrieval. Pairwise image subtraction analyses revealed information-specific activations at (i) encoding, item information in left hippocampal, location information in right parietal, and time information in left fusiform regions; and (ii) retrieval, item in right inferior frontal and temporal, location in left frontal, and time in anterior cingulate cortices. These results point to the existence of general encoding and retrieval networks of episodic memory whose operations are augmented by unique brain areas recruited for processing specific aspects of remembered events.