69 results for Informational Retrieval


Relevance: 10.00%

Abstract:

Item noise models of recognition assert that interference at retrieval is generated by the words from the study list. Context noise models of recognition assert that interference at retrieval is generated by the contexts in which the test word has appeared. The authors introduce the bind cue decide model of episodic memory, a Bayesian context noise model, and demonstrate how it can account for data from the item noise and dual-processing approaches to recognition memory. From the item noise perspective, list strength and list length effects, the mirror effect for word frequency and concreteness, and the effects of the similarity of other words in a list are considered. From the dual-processing perspective, process dissociation data on the effects of length, temporal separation of lists, strength, and diagnosticity of context are examined. The authors conclude that the context noise approach to recognition is a viable alternative to existing approaches.

Relevance: 10.00%

Abstract:

The World Wide Web (WWW) is useful for distributing scientific data. Most existing web data resources organize their information either in structured flat files or relational databases with basic retrieval capabilities. For databases with one or a few simple relations, these approaches are successful, but they can be cumbersome when there is a data model involving multiple relations between complex data. We believe that knowledge-based resources offer a solution in these cases. Knowledge bases have explicit declarations of the concepts in the domain, along with the relations between them. They are usually organized hierarchically, and provide a global data model with a controlled vocabulary. We have created the OWEB architecture for building online scientific data resources using knowledge bases. OWEB provides a shell for structuring data, providing secure and shared access, and creating computational modules for processing and displaying data. In this paper, we describe the translation of the online immunological database MHCPEP into an OWEB system called MHCWeb. This effort involved building a conceptual model for the data, creating a controlled terminology for the legal values for different types of data, and then translating the original data into the new structure. The OWEB environment allows for flexible access to the data by both users and computer programs.

Relevance: 10.00%

Abstract:

While multimedia data, image data in particular, is an integral part of most websites and web documents, our quest for information so far is still restricted to text-based search. To explore the World Wide Web more effectively, especially its rich repository of truly multimedia information, we are facing a number of challenging problems. Firstly, we face the ambiguous and highly subjective nature of defining image semantics and similarity. Secondly, multimedia data could come from highly diversified sources, as a result of automatic image capturing and generation processes. Finally, multimedia information exists in decentralised sources over the Web, making it difficult to use conventional content-based image retrieval (CBIR) techniques for effective and efficient search. In this special issue, we present a collection of five papers on visual and multimedia information management and retrieval topics, addressing some aspects of these challenges. These papers have been selected from the conference proceedings (Kluwer Academic Publishers, ISBN 1-4020-7060-8) of the Sixth IFIP 2.6 Working Conference on Visual Database Systems (VDB6), held in Brisbane, Australia, on 29–31 May 2002.

Relevance: 10.00%

Abstract:

Allergy is a major cause of morbidity worldwide. The number of characterized allergens and related information is increasing rapidly creating demands for advanced information storage, retrieval and analysis. Bioinformatics provides useful tools for analysing allergens and these are complementary to traditional laboratory techniques for the study of allergens. Specific applications include structural analysis of allergens, identification of B- and T-cell epitopes, assessment of allergenicity and cross-reactivity, and genome analysis. In this paper, the most important bioinformatic tools and methods with relevance to the study of allergy have been reviewed.

Relevance: 10.00%

Abstract:

Prospective memory (ProM) is the memory for future actions. It requires retrieving the content of an action in response to an ambiguous cue. Currently, it is unclear if ProM is a distinct form of memory, or merely a variant of retrospective memory (RetM). While content retrieval in ProM appears analogous to conventional RetM, less is known about the process of cue detection. Using a modified version of the standard ProM paradigm, three experiments manipulated stimulus characteristics known to influence RetM, in order to examine their effects on ProM performance. Experiment 1 (N = 80) demonstrated that low frequency stimuli elicited significantly higher hit rates and lower false alarm rates than high frequency stimuli, comparable to the mirror effect in RetM. Experiment 2 (N = 80) replicated these results, and showed that repetition of distracters during the test phase significantly increased false alarm rates to second and subsequent presentations of low frequency distracters. Building on these results, Experiment 3 (N = 40) showed that when the study list was strengthened, the repeated presentation of targets and distracters did not significantly affect response rates. These experiments demonstrate more overlap between ProM and RetM than has previously been acknowledged. The implications for theories of ProM are considered.

Relevance: 10.00%

Abstract:

The effect of sheep digestion and mastication on Malva parviflora L. seed transmission, viability and germination was investigated. Mature M. parviflora seeds were subjected to 2 seed treatments: 'scarified', where the hard seed coat was manually cut to allow imbibition, and 'unscarified', where the hard seed coat was not cut. Seeds were placed directly into the rumen of fistulated sheep and removed at 0, 12, 24, 36 and 48 h of rumen digestion. After 12 h of in sacco exposure to digestion in the rumen, the germination of seeds that were initially scarified dropped from 99.2 to 1.4% and longer exposure periods produced no germinable seeds. In contrast, seeds that were unscarified when placed in the rumen produced over 92% germination regardless of in sacco digestion time, although manual scarification after retrieval was essential to elicit germination. In a second experiment, unscarified seeds (29000) were fed in a single meal to fistulated sheep and feces were collected at regular intervals between 6 and 120 h after feeding. Fecal subsamples were taken to determine number of seeds excreted, seed germination on agar and seed germination from feces. Major seed excretion in the feces commenced after 12 h and continued until 144 h, with peaks between 36 and 72 h after consumption. Although mastication and gut passage killed the majority of unscarified seeds, about 20% were recovered intact and over 90% of these recovered seeds were viable and could, thus, potentially form an extensive seed bank. A few excreted seeds (1%) were able to germinate directly from feces, which increased to a maximum of 10% after subsequent dry summer storage (3 months). Through information gained in this study, there is a potential to utilise livestock in an integrated weed management program for the control of M. parviflora, provided additional measures of weed control are in place such as holding periods (> 7 days) for movement of livestock from weed infested areas.

Relevance: 10.00%

Abstract:

Minimal perfect hash functions are used for memory efficient storage and fast retrieval of items from static sets. We present an infinite family of efficient and practical algorithms for generating order preserving minimal perfect hash functions. We show that almost all members of the family construct space and time optimal order preserving minimal perfect hash functions, and we identify the one with minimum constants. Members of the family generate a hash function in two steps. First a special kind of function into an r-graph is computed probabilistically. Then this function is refined deterministically to a minimal perfect hash function. We give strong theoretical evidence that the first step uses linear random time. The second step runs in linear deterministic time. The family not only has theoretical importance, but also offers the fastest known method for generating perfect hash functions.
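The two-step construction described above can be sketched in Python. This is an illustrative toy under our own assumptions, not the paper's implementation: each key becomes an edge of a random 2-graph (step one, probabilistic, retried until the graph is acyclic), and vertex labels g are then assigned deterministically by traversal so that each key's two labels sum, modulo n, to its position in the input order.

```python
import random
from collections import defaultdict, deque

def build_opmphf(keys, ratio=2.1, max_tries=50):
    """Order-preserving minimal perfect hash via the two-step method:
    key i becomes edge (h1(key), h2(key)); if the graph is acyclic,
    labels g exist with (g[h1(key)] + g[h2(key)]) mod n == i."""
    n = len(keys)
    m = max(int(ratio * n) + 1, 3)   # vertex count; ratio > 2 keeps cycles rare
    for _ in range(max_tries):
        s1, s2 = random.random(), random.random()
        h1 = lambda k: hash((s1, k)) % m
        h2 = lambda k: hash((s2, k)) % m
        if any(h1(k) == h2(k) for k in keys):
            continue                 # self-loop: retry with new hash functions
        adj = defaultdict(list)      # vertex -> [(neighbour, key index)]
        for i, k in enumerate(keys):
            adj[h1(k)].append((h2(k), i))
            adj[h2(k)].append((h1(k), i))
        # Step 2: deterministic labelling by BFS over each component.
        g, seen, acyclic = {}, set(), True
        for start in list(adj):
            if start in g:
                continue
            g[start] = 0
            queue = deque([start])
            while queue and acyclic:
                u = queue.popleft()
                for v, i in adj[u]:
                    if i in seen:
                        continue
                    seen.add(i)
                    if v in g:       # edge closes a cycle; reject this graph
                        acyclic = False
                        break
                    g[v] = (i - g[u]) % n
                    queue.append(v)
            if not acyclic:
                break
        if acyclic:
            return lambda k: (g[h1(k)] + g[h2(k)]) % n
    raise RuntimeError("no acyclic graph found; increase ratio or max_tries")

words = ["minimal", "perfect", "order", "preserving", "hash"]
h = build_opmphf(words)
print([h(w) for w in words])   # [0, 1, 2, 3, 4]
```

Order preservation falls out of the labelling rule: for the edge of key i, g[v] is set to (i - g[u]) mod n, so the two labels always sum to i.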

Relevance: 10.00%

Abstract:

Evidence for expectancy-based priming in the pronunciation task was provided in three experiments. In Experiments 1 and 2, a high proportion of associatively related trials produced greater associative priming and superior retrieval of primes in a subsequent test of memory for primes, whereas high- and low-proportion groups showed comparable repetition benefits in perceptual identification of previously presented primes. In Experiment 2, the low-proportion condition had few associatively related pairs but many identity pairs. In Experiment 3, identity priming was greater in a high- than a low-identity proportion group, with similar repetition benefits and prime retrieval responses for the two groups. These results indicate that when the prime-target relationship is salient, subjects strategically vary their processing of the prime according to the nature of the prime-target relationship.

Relevance: 10.00%

Abstract:

Like previous volumes in the Educational Innovation in Economics and Business Series, this book is genuinely international in terms of its coverage. With contributions from nine different countries and three continents, it reflects a global interest in, and commitment to, innovation in business education, with a view to enhancing the learning experience of both undergraduates and postgraduates. It should prove of value to anyone engaged directly in business education, defined broadly to embrace management, finance, marketing, economics, information studies, and ethics, or who has responsibility for fostering the professional development of business educators. The contributions have been selected with the objective of encouraging and inspiring others as well as illustrating developments in the sphere of business education. This volume brings together a collection of articles describing different aspects of the developments taking place in today’s workplace and how they affect business education. It describes strategies for breaking boundaries for global learning. These target specific techniques regarding teams and collaborative learning, transitions from academic settings to the workplace, the role of IT in the learning process, and program-level innovation strategies. This volume addresses issues faced by professionals in higher and further education and also those involved in corporate training centers and industry.

Relevance: 10.00%

Abstract:

Formal Concept Analysis is an unsupervised machine learning technique that has successfully been applied to document organisation by considering documents as objects and keywords as attributes. The basic algorithms of Formal Concept Analysis then allow an intelligent information retrieval system to cluster documents according to keyword views. This paper investigates the scalability of this idea. In particular, we present the results of applying spatial data structures to large datasets in Formal Concept Analysis. Our experiments are motivated by the application of the Formal Concept Analysis idea to a virtual filesystem [11,17,15], in particular the libferris [1] Semantic File System. This paper presents customizations to an RD-Tree Generalized Index Search Tree based index structure to better support the application of Formal Concept Analysis to large data sources.
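The documents-as-objects, keywords-as-attributes idea can be made concrete with a brute-force sketch (the toy context and names below are our own, not from the paper or libferris): a formal concept is a pair (document set, keyword set) in which each side exactly determines the other under the derivation operators.

```python
from itertools import combinations

# Toy formal context: documents (objects) x keywords (attributes).
context = {
    "doc1": {"retrieval", "index"},
    "doc2": {"retrieval", "ranking"},
    "doc3": {"index", "compression"},
}

def extent(attrs):
    """Documents containing every keyword in attrs (the ' operator)."""
    return {d for d, kws in context.items() if attrs <= kws}

def intent(docs):
    """Keywords shared by every document in docs (all keywords if empty)."""
    if not docs:
        return set().union(*context.values())
    return set.intersection(*(context[d] for d in docs))

def concepts():
    """Enumerate all formal concepts (A, B) with A' = B and B' = A
    by brute force over document subsets -- fine for toy contexts,
    exactly the part the paper's index structures are meant to scale."""
    found = []
    for r in range(len(context) + 1):
        for combo in combinations(sorted(context), r):
            a = set(combo)
            b = intent(a)
            if extent(b) == a:          # closure check: a is an extent
                found.append((frozenset(a), frozenset(b)))
    return found

for a, b in concepts():
    print(sorted(a), sorted(b))
```

The closed pairs form the concept lattice used for clustering: for this context, {doc1, doc3} with intent {index} is a concept, while {doc2, doc3} is not, because the only keywords they share (none) are shared by all three documents.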

Relevance: 10.00%

Abstract:

This paper examines the effects of information request ambiguity and construct incongruence on end users' ability to develop SQL queries with an interactive relational database query language. In this experiment, ambiguity in information requests adversely affected accuracy and efficiency. Incongruities among the information request, the query syntax, and the data representation adversely affected accuracy, efficiency, and confidence. The results for ambiguity suggest that organizations might elicit better query development if end users were sensitized to the nature of ambiguities that could arise in their business contexts. End users could translate natural language queries into pseudo-SQL that could be examined for precision before the queries were developed. The results for incongruence suggest that better query development might ensue if semantic distances could be reduced by giving users data representations and database views that maximize construct congruence for the kinds of queries in typical domains.
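The kind of request ambiguity studied here can be illustrated with a small sketch (the schema and data are hypothetical, ours rather than the paper's materials): the request "which customers ordered more than 100 units?" admits two SQL readings that return different answers.

```python
import sqlite3

# Hypothetical orders table for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (customer TEXT, qty INTEGER);
INSERT INTO orders VALUES ('Ann', 60), ('Ann', 70), ('Bob', 120), ('Cal', 40);
""")

# Reading 1: some single order exceeds 100 units.
single = [r[0] for r in con.execute(
    "SELECT DISTINCT customer FROM orders "
    "WHERE qty > 100 ORDER BY customer")]

# Reading 2: the customer's combined orders exceed 100 units.
total = [r[0] for r in con.execute(
    "SELECT customer FROM orders GROUP BY customer "
    "HAVING SUM(qty) > 100 ORDER BY customer")]

print(single)  # ['Bob']
print(total)   # ['Ann', 'Bob']
```

Writing both candidate queries in pseudo-SQL before committing to one is exactly the precision check the abstract recommends for end users.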

Relevance: 10.00%

Abstract:

With the proliferation of relational database programs for PC's and other platforms, many business end-users are creating, maintaining, and querying their own databases. More importantly, business end-users use the output of these queries as the basis for operational, tactical, and strategic decisions. Inaccurate data reduce the expected quality of these decisions. Implementing various input validation controls, including higher levels of normalisation, can reduce the number of data anomalies entering the databases. Even in well-maintained databases, however, data anomalies will still accumulate. To improve the quality of data, databases can be queried periodically to locate and correct anomalies. This paper reports the results of two experiments that investigated the effects of different data structures on business end-users' abilities to detect data anomalies in a relational database. The results demonstrate that both unnormalised and higher levels of normalisation lower the effectiveness and efficiency of queries relative to the first normal form. First normal form databases appear to provide the most effective and efficient data structure for business end-users formulating queries to detect data anomalies.
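A minimal sketch of the kind of anomaly-detection query at issue (the schema and data are hypothetical illustrations, not the experiments' materials): in a first-normal-form table that repeats a customer's city on every order row, violations of the functional dependency customer → city can be located with a GROUP BY query.

```python
import sqlite3

# Hypothetical 1NF table that redundantly stores city on each order row.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (order_id INTEGER, customer TEXT, city TEXT);
INSERT INTO orders VALUES
  (1, 'Ann', 'Perth'),
  (2, 'Ann', 'Perth'),
  (3, 'Bob', 'Hobart'),
  (4, 'Bob', 'Darwin');   -- anomaly: Bob appears with two cities
""")

# Customers whose rows disagree on city violate customer -> city.
anomalies = [r[0] for r in con.execute(
    "SELECT customer FROM orders "
    "GROUP BY customer HAVING COUNT(DISTINCT city) > 1")]
print(anomalies)  # ['Bob']
```

In a higher normal form the dependency would be enforced structurally (one city per customer row); in 1NF it must be audited with queries like this one, which is why query formulation effort differs across data structures.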