847 results for information, knowledge
Abstract:
This paper investigates the current relationship between information management and information mediation in the digital reference service through a case study carried out in an academic library. The concept of information mediation is analyzed here, since a conceptual examination provides elements that help readers to understand and evaluate the service concerned. The information professional plays a very important role in this mediation, which may be direct or indirect, conscious or unconscious, performed alone or plurally, individually or as part of a group; in all such cases the mediator facilitates the acquisition of information, fully or partially satisfying a user's need for knowledge of all sorts. We approach information management from a scope that covers the activities performed, from the policies and procedures put into effect up to the evaluation of the service, for which an evaluation criterion is proposed. Finally, we outline a few actions to be implemented over the long term, whose goal is to continually improve this assistance, taking the human factor into account.
Abstract:
This paper presents a micro-model of knowledge creation and transfer in a small group of people. Our model incorporates two key aspects of the cooperative process of knowledge creation: (i) heterogeneity of people in their state of knowledge is essential for successful cooperation in the joint creation of new ideas, while (ii) the very process of cooperative knowledge creation affects the heterogeneity of people through the accumulation of knowledge in common. The model features myopic agents in a pure externality model of interaction. In the two-person case, we show that the equilibrium process tends to result in the accumulation of too much knowledge in common compared to the most productive state. Unlike the two-person case, in the four-person case we show that the equilibrium process of knowledge creation may converge to the most productive state. Equilibrium paths are found analytically, and they are a discontinuous function of initial heterogeneity.
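The paper's formal model is not reproduced in the abstract; the fragment below is only a toy simulation meant to illustrate the dynamic it describes. The productivity function, the myopic cooperation rule and all parameter values are assumptions chosen for illustration (a Cobb-Douglas-style trade-off between knowledge in common and differential knowledge), not the paper's exact specification.

```python
"""Toy two-person knowledge-creation dynamic (illustrative assumption,
not the paper's actual model). Knowledge is held in three stocks:
knowledge in common (c) and knowledge exclusive to each agent (e1, e2).
Joint work produces new common knowledge at a rate that needs both
commonality and heterogeneity; working alone only grows exclusive
knowledge."""

def joint_productivity(c, e1, e2, theta=0.5):
    # Cobb-Douglas-style: both common and differential knowledge are needed.
    return (c ** theta) * ((e1 * e2) ** ((1 - theta) / 2))

def simulate(steps=50, c=1.0, e1=4.0, e2=1.0, solo_rate=0.05):
    history = []
    for _ in range(steps):
        # Myopic choice: cooperate if joint output beats working alone.
        joint = joint_productivity(c, e1, e2)
        solo = solo_rate * (e1 + e2)
        if joint >= solo:
            c += joint               # cooperation accumulates knowledge in common
        else:
            e1 += solo_rate * e1     # otherwise each agent grows exclusive knowledge
            e2 += solo_rate * e2
        history.append((c, e1, e2))
    return history

if __name__ == "__main__":
    for t, (c, e1, e2) in enumerate(simulate(10)):
        print(f"t={t:2d}  common={c:7.2f}  exclusive={e1:6.2f}/{e2:6.2f}")
```

Even this crude sketch shows the tension the abstract points to: once cooperation starts, common knowledge grows quickly while the heterogeneity that made cooperation productive is no longer replenished.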
Abstract:
Enriching knowledge bases with multimedia information makes it possible to complement textual descriptions with visual and audio information. Such complementary information can help users to understand the meaning of assertions, and in general improve the user experience with the knowledge base. In this paper we address the problem of how to enrich ontology instances with candidate images retrieved from existing Web search engines. DBpedia has evolved into a major hub in the Linked Data cloud, interconnecting millions of entities organized under a consistent ontology. Our approach taps into the Wikipedia corpus to gather context information for DBpedia instances and takes advantage of image tagging information, when this is available, to calculate semantic relatedness between instances and candidate images. We performed experiments with a focus on the particularly challenging problem of highly ambiguous names. Both methods presented in this work outperformed the baseline. Our best method leveraged context words from Wikipedia, tags from Flickr and type information from DBpedia to achieve an average precision of 80%.
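The exact relatedness measure is not given in the abstract; the sketch below shows one plausible reading of the idea, scoring candidate images by the overlap between their tags (e.g. from Flickr) and context words gathered for a DBpedia instance from its Wikipedia article. Function names and the cosine measure are illustrative choices, not the authors' implementation.

```python
"""Illustrative ranking of candidate images for a DBpedia instance by
bag-of-words overlap between instance context words and image tags.
A sketch of the general idea only, not the paper's actual method."""

import math
from collections import Counter

def cosine_overlap(context_words, image_tags):
    """Cosine similarity between two bags of words."""
    a, b = Counter(context_words), Counter(image_tags)
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_candidates(context_words, candidates):
    """candidates: list of (image_id, tags). Returns best-first ranking."""
    scored = [(cosine_overlap(context_words, tags), image_id)
              for image_id, tags in candidates]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    # Ambiguous name "jaguar": context words favour the animal reading.
    context = ["jaguar", "cat", "feline", "panthera", "rainforest"]
    candidates = [
        ("img1.jpg", ["jaguar", "car", "engine", "speed"]),
        ("img2.jpg", ["jaguar", "cat", "wildlife", "rainforest"]),
    ]
    print(rank_candidates(context, candidates))
```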
Abstract:
Presenting relevant information via web-based, user-friendly interfaces makes the information more accessible to the general public. This is especially useful for sensor networks that monitor natural environments. Adequately communicating this type of information helps increase awareness about the limited availability of natural resources and promotes their better use with sustainable practices. In this paper, I suggest an approach to communicating this information to wide audiences based on simulating data journalism using artificial intelligence techniques. I analyze this approach by describing a pioneer knowledge-based system called VSAIH, which looks for news in hydrological data from a national sensor network in Spain and creates news stories that general users can understand. VSAIH integrates artificial intelligence techniques, including a model-based data analyzer and a presentation planner. In the paper, I also describe characteristics of the national hydrological sensor network and the technical solutions applied by VSAIH to simulate data journalism.
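VSAIH's model-based analyzer and presentation planner are far richer than can be reproduced from the abstract; the fragment below only sketches the overall idea of turning sensor readings into short news-style sentences. The detection rule (threshold exceedance), the station names and the sentence template are invented for illustration.

```python
"""Sketch of turning hydrological sensor readings into news-style
sentences. The rule and wording are illustrative stand-ins for VSAIH's
model-based analysis and presentation planning."""

def detect_news(readings, thresholds):
    """readings/thresholds: dicts keyed by station name (values in m3/s)."""
    stories = []
    for station, flow in readings.items():
        limit = thresholds.get(station)
        if limit is not None and flow > limit:
            pct = 100.0 * (flow - limit) / limit
            stories.append(
                f"Flow at {station} reached {flow:.0f} m3/s, "
                f"{pct:.0f}% above its alert threshold."
            )
    return stories

if __name__ == "__main__":
    readings = {"Ebro-Zaragoza": 1850.0, "Tajo-Toledo": 120.0}   # hypothetical stations
    thresholds = {"Ebro-Zaragoza": 1500.0, "Tajo-Toledo": 400.0}
    for story in detect_news(readings, thresholds):
        print(story)
```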
Abstract:
Logic programming systems which exploit and-parallelism among non-deterministic goals rely on notions of independence among those goals in order to ensure certain efficiency properties. "Non-strict" independence (NSI) is a more relaxed notion than the traditional notion of "strict" independence (SI) which still ensures the relevant efficiency properties and can allow considerably more parallelism than SI. However, all compilation technology developed to date has been based on SI, because of the intrinsic complexity of exploiting NSI. This is related to the fact that NSI cannot be determined "a priori" as SI can. This paper fills this gap by developing a technique for compile-time detection and annotation of NSI. It also proposes algorithms for combined compile-time/run-time detection, presenting novel run-time checks for this type of parallelism. Also, a transformation procedure to eliminate shared variables among parallel goals is presented, aimed at performing as much work as possible at compile-time. The approach is based on the knowledge of certain properties regarding the run-time instantiations of program variables —sharing and freeness— for which compile-time technology is available, with new approaches being currently proposed. Thus, the paper does not deal with the analysis itself, but rather with how the analysis results can be used to parallelize programs.
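The compile-time analysis itself is beyond the scope of the abstract, but the flavour of the two independence conditions can be sketched as below: goals that share no variables are strictly independent, while goals that do share variables may still be non-strictly independent when sharing and freeness information guarantees that at most one of them binds the shared variables. The variable representation and the exact condition are simplified assumptions, not the paper's algorithm or run-time checks.

```python
"""Simplified sketch of strict vs. non-strict independence between two
goals, given (assumed) sharing and freeness information. A toy
illustration of the conditions involved, not the paper's analysis."""

def strictly_independent(vars_g1, vars_g2):
    """Strict independence: the goals share no variables."""
    return not (set(vars_g1) & set(vars_g2))

def non_strictly_independent(vars_g1, vars_g2, free_after_g1, bound_by_g1):
    """Non-strict independence (simplified): shared variables are allowed
    provided the left goal g1 leaves them free, so only g2 can bind them."""
    shared = set(vars_g1) & set(vars_g2)
    return all(v in free_after_g1 and v not in bound_by_g1 for v in shared)

if __name__ == "__main__":
    g1, g2 = ["X", "Y"], ["Y", "Z"]
    print(strictly_independent(g1, g2))          # False: Y is shared
    print(non_strictly_independent(g1, g2,
                                   free_after_g1={"Y"},
                                   bound_by_g1=set()))  # True under these assumptions
```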
Abstract:
The Linked Data initiative offers a straightforward method to publish structured data on the World Wide Web and link it to other data, resulting in a worldwide network of semantically codified data known as the Linked Open Data cloud. The size of the Linked Open Data cloud, i.e. the amount of data published using Linked Data principles, is growing exponentially, including life sciences data. However, key information for biological research is still missing from the Linked Open Data cloud. For example, the relation between ortholog genes and genetic diseases is absent, even though such information can be used for hypothesis generation regarding human diseases. The OGOLOD system, an extension of the OGO Knowledge Base, publishes ortholog/disease information using Linked Data. This gives scientists the ability to query the structured information in connection with other Linked Data and to discover new information related to orthologs and human diseases in the cloud.
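Since OGOLOD publishes its data as Linked Data, it can in principle be queried with SPARQL. The snippet below is only a sketch of what such a query might look like: the endpoint URL and the property names are placeholders, not the actual OGOLOD vocabulary, which should be taken from the system's own documentation.

```python
"""Sketch of querying a Linked Data endpoint for ortholog/disease links.
The endpoint URL and property names are PLACEHOLDERS, not OGOLOD's real
vocabulary."""

from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

ENDPOINT = "http://example.org/ogolod/sparql"  # placeholder endpoint

QUERY = """
PREFIX ex: <http://example.org/vocab#>        # placeholder vocabulary
SELECT ?gene ?disease WHERE {
  ?gene      ex:orthologOf     ?humanGene .
  ?humanGene ex:associatedWith ?disease .
}
LIMIT 20
"""

def fetch_ortholog_diseases():
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [(b["gene"]["value"], b["disease"]["value"])
            for b in results["results"]["bindings"]]

if __name__ == "__main__":
    for gene, disease in fetch_ortholog_diseases():
        print(gene, "->", disease)
```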
Abstract:
Logic programming systems which exploit and-parallelism among non-deterministic goals rely on notions of independence among those goals in order to ensure certain efficiency properties. "Non-strict" independence (NSI) is a more relaxed notion than the traditional notion of "strict" independence (SI) which still ensures the relevant efficiency properties and can allow considerably more parallelism than SI. However, all compilation technology developed to date has been based on SI, presumably because of the intrinsic complexity of exploiting NSI. This is related to the fact that NSI cannot be determined "a priori" as SI can. This paper fills this gap by developing a technique for compile-time detection and annotation of NSI. It also proposes algorithms for combined compile-time/run-time detection, presenting novel run-time checks for this type of parallelism. Also, a transformation procedure to eliminate shared variables among parallel goals is presented, attempting to perform as much work as possible at compile-time. The approach is based on the knowledge of certain properties about run-time instantiations of program variables —sharing and freeness— for which compile-time technology is available, with new approaches being currently proposed.
Abstract:
The conceptual design phase is partially supported by product lifecycle management/computer-aided design (PLM/CAD) systems, causing discontinuity of the design information flow: customer needs — functional requirements — key characteristics — design parameters (DPs) — geometric DPs. To address this issue, a knowledge-based approach is proposed to integrate quality function deployment, failure mode and effects analysis, and axiomatic design into a commercial PLM/CAD system. A case study, the main subject of this article, was carried out to validate the proposed process, to evaluate through a pilot development how the commercial PLM/CAD modules and application programming interface could support the information flow, and, based on the pilot scheme results, to propose a full development framework.
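The integration itself lives inside a commercial PLM/CAD system and its API, which is not shown here; the sketch below only illustrates the design information flow the abstract lists as a linked, traceable data structure. All class and field names are invented for illustration.

```python
"""Illustrative data structure for the chain
customer needs -> functional requirements -> key characteristics
-> design parameters -> geometric DPs. Names are invented; the paper
implements this flow inside a PLM/CAD system."""

from dataclasses import dataclass, field
from typing import List

@dataclass
class GeometricDP:
    name: str
    value: float
    unit: str

@dataclass
class DesignParameter:
    name: str
    geometric_dps: List[GeometricDP] = field(default_factory=list)

@dataclass
class KeyCharacteristic:
    name: str
    design_parameters: List[DesignParameter] = field(default_factory=list)

@dataclass
class FunctionalRequirement:
    statement: str
    key_characteristics: List[KeyCharacteristic] = field(default_factory=list)

@dataclass
class CustomerNeed:
    statement: str
    functional_requirements: List[FunctionalRequirement] = field(default_factory=list)

if __name__ == "__main__":
    need = CustomerNeed(
        "Easy to carry",
        [FunctionalRequirement(
            "Minimize mass",
            [KeyCharacteristic(
                "Housing mass",
                [DesignParameter("Wall thickness",
                                 [GeometricDP("t_wall", 1.5, "mm")])])])])
    print(need)
```

Keeping the whole chain in one structure is what makes the traceability continuous: a change in a geometric DP can be walked back to the customer need it ultimately serves.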
Abstract:
Folksonomies emerge as the result of the free tagging activity of a large number of users over a variety of resources. They can be considered as valuable sources from which it is possible to obtain emerging vocabularies that can be leveraged in knowledge extraction tasks. However, when it comes to understanding the meaning of tags in folksonomies, several problems mainly related to the appearance of synonymous and ambiguous tags arise, specifically in the context of multilinguality. The authors aim to turn folksonomies into knowledge structures where tag meanings are identified, and relations between them are asserted. For such purpose, they use DBpedia as a general knowledge base from which they leverage its multilingual capabilities.
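One basic building block of such an approach is resolving a raw tag to candidate DBpedia resources; the sketch below does this by matching multilingual labels against the public DBpedia SPARQL endpoint. It is deliberately minimal: real disambiguation would also score the candidates against co-occurring tags, which is not shown here.

```python
"""Sketch of resolving a folksonomy tag to candidate DBpedia resources
by multilingual label matching. Minimal illustration only; it does not
perform the disambiguation step itself."""

from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

DBPEDIA = "https://dbpedia.org/sparql"

def candidates_for_tag(tag, lang="en"):
    """Return DBpedia resources whose label matches the tag in a language."""
    sparql = SPARQLWrapper(DBPEDIA)
    sparql.setQuery(f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT DISTINCT ?resource WHERE {{
          ?resource rdfs:label "{tag}"@{lang} .
        }} LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["resource"]["value"] for b in results["results"]["bindings"]]

if __name__ == "__main__":
    # An ambiguous tag may map to several resources (island, language, coffee, ...).
    print(candidates_for_tag("Java"))
```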
Abstract:
In the last decade, complex networks have widely been applied to the study of many natural and man-made systems, and to the extraction of meaningful information from the interaction structures created by genes and proteins. Nevertheless, less attention has been devoted to metabonomics, due to the lack of a natural network representation of spectral data. Here we define a technique for reconstructing networks from spectral data sets, where nodes represent spectral bins, and pairs of them are connected when their intensities follow a pattern associated with a disease. The structural analysis of the resulting network can then be used to feed standard data-mining algorithms, for instance for the classification of new (unlabeled) subjects. Furthermore, we show how the structure of the network is resilient to the presence of external additive noise, and how it can be used to extract relevant knowledge about the development of the disease.
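The reconstruction step described in the abstract can be read as follows: treat each spectral bin as a node and connect two bins when their joint intensity pattern discriminates patients from controls. The sketch below uses the absolute difference of Pearson correlations between the two groups as the linking criterion, which is an assumption chosen for illustration rather than the paper's exact rule.

```python
"""Sketch of reconstructing a network from spectral data: nodes are
spectral bins; two bins are linked when their intensity correlation
differs clearly between the disease and control groups. The linking
rule is an assumed simplification of the paper's criterion."""

import numpy as np
import networkx as nx  # pip install networkx

def build_network(disease, control, threshold=0.6):
    """disease/control: arrays of shape (n_subjects, n_bins)."""
    corr_d = np.corrcoef(disease, rowvar=False)
    corr_c = np.corrcoef(control, rowvar=False)
    diff = np.abs(corr_d - corr_c)
    n_bins = disease.shape[1]
    g = nx.Graph()
    g.add_nodes_from(range(n_bins))
    for i in range(n_bins):
        for j in range(i + 1, n_bins):
            if diff[i, j] > threshold:
                g.add_edge(i, j, weight=float(diff[i, j]))
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    disease = rng.normal(size=(40, 12))
    disease[:, 1] = disease[:, 0] + 0.1 * rng.normal(size=40)  # disease-linked pattern
    control = rng.normal(size=(40, 12))
    g = build_network(disease, control)
    print(g.number_of_nodes(), "bins,", g.number_of_edges(), "edges")
```

Structural features of the resulting graph (degrees, components, motifs) can then be fed to standard data-mining algorithms, as the abstract suggests, e.g. to classify new subjects.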
Abstract:
Information integration is a very important topic. Reusing knowledge and having common representations has been (and still is) an active research topic in the process systems community. However, only conventional (structural) models have been dealt with so far. In this paper the issue of integration is related to two different types of knowledge, functional and structural. Functional representation and analysis have proved very useful, but they are still developed and presented in a way that is completely isolated from the classic structural description of the process. This paper presents an architecture to integrate both representations.
Abstract:
Information integration is a very important topic. Reusing knowledge and having common and exchangeable representations have been an active research topic in process systems engineering. In this paper we deal with information integration in two different ways: the first is sharing knowledge between different heterogeneous applications, and the second is integrating two different (but complementary) types of knowledge, functional and structural. A new architecture to integrate these representations and use them for several purposes is presented in this paper.
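The architecture itself is not detailed in either abstract; the sketch below only shows the basic idea of keeping a structural model (equipment and connections) and a functional model (functions/goals) of the same process in one container, with explicit cross-links between functions and the equipment that realises them. All class and attribute names are illustrative.

```python
"""Illustrative container linking a structural model with a functional
model of the same process. Names are invented for this sketch."""

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Equipment:
    name: str
    connected_to: List[str] = field(default_factory=list)  # structural links

@dataclass
class Function:
    description: str
    realised_by: List[str] = field(default_factory=list)   # names of equipment

@dataclass
class IntegratedModel:
    structure: Dict[str, Equipment] = field(default_factory=dict)
    functions: Dict[str, Function] = field(default_factory=dict)

    def equipment_for_function(self, fname):
        """Cross the functional/structural boundary for a given function."""
        return [self.structure[e] for e in self.functions[fname].realised_by]

if __name__ == "__main__":
    model = IntegratedModel(
        structure={"P-101": Equipment("P-101", ["V-201"]),
                   "V-201": Equipment("V-201", ["P-101"])},
        functions={"transfer": Function("Transfer feed to buffer tank",
                                        realised_by=["P-101"])})
    print([e.name for e in model.equipment_for_function("transfer")])
```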
Abstract:
The definition of an agent architecture at the knowledge level places the emphasis on the knowledge role played by the data interchanged between the agent components and makes this data interchange explicit; this makes it easier to reuse these knowledge structures independently of the implementation. This article defines a generic task model of an agent architecture and refines some of these tasks using inference diagrams. Finally, an operationalisation of this conceptual model using the rule-oriented language Jess is shown.
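The article operationalises its conceptual model in Jess; the fragment below is only a Python stand-in showing the flavour of a forward-chaining, rule-oriented operationalisation of a tiny perceive/decide/act task chain. The working-memory facts and rules are invented for illustration and bear no relation to the article's actual rule base.

```python
"""Python stand-in for a rule-oriented (Jess-style) operationalisation of
a minimal perceive/decide/act task model. Facts are tuples in a working
memory; rules fire when their condition matches. Entirely illustrative."""

def run(facts, rules, max_cycles=10):
    """Naive forward chaining: apply rules until no new facts are derived."""
    facts = set(facts)
    for _ in range(max_cycles):
        new = set()
        for condition, conclusion in rules:
            if condition(facts):
                new |= conclusion(facts) - facts
        if not new:
            break
        facts |= new
    return facts

if __name__ == "__main__":
    rules = [
        # perceive: a high raw reading becomes an alarm fact
        (lambda f: ("reading", "high") in f,
         lambda f: {("alarm", "temperature")}),
        # decide/act: an alarm triggers an action fact
        (lambda f: ("alarm", "temperature") in f,
         lambda f: {("action", "open-valve")}),
    ]
    print(run({("reading", "high")}, rules))
```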