955 results for Information Retrieval, Document Databases, Digital Libraries


Relevance: 100.00%

Abstract:

DSpace is an open source software platform that enables organizations to:

- Capture and describe digital material using a submission workflow module, or a variety of programmatic ingest options
- Distribute an organization's digital assets over the web through a search and retrieval system
- Preserve digital assets over the long term

This system documentation includes a functional overview of the system, which is a good introduction to the capabilities of the system and should be readable by non-technical personnel. Everyone should read this section first because it introduces some terminology used throughout the rest of the documentation. For people actually running a DSpace service, there is an installation guide, and sections on configuration and the directory structure. Note that as of DSpace 1.2, the administration user interface guide is now on-line help available from within the DSpace system. Finally, for those interested in the details of how DSpace works, and those potentially interested in modifying the code for their own purposes, there is a detailed architecture and design section.

Relevance: 100.00%

Abstract:

The proliferation of mobile computers and wireless networks requires the design of future distributed real-time applications to recognize and deal with the significant asymmetry between downstream and upstream communication capacities, and the significant disparity between server and client storage capacities. Recent research work proposed the use of Broadcast Disks as a scalable mechanism to deal with this problem. In this paper, we propose a new broadcast disks protocol, based on our Adaptive Information Dispersal Algorithm (AIDA). Our protocol differs from previous broadcast disk protocols in that it improves communication timeliness, fault-tolerance, and security, while allowing finer control over the multiplexing of prioritized data (broadcast frequencies). We start with a general introduction to broadcast disks. Next, we propose broadcast disk organizations that are suitable for real-time applications. We then present AIDA and show its fault-tolerance and security properties. We conclude the paper with the description and analysis of AIDA-based broadcast disk organizations that achieve both timeliness and fault-tolerance, while preserving downstream communication capacity.
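The AIDA-based organizations themselves are not detailed in this abstract; as a point of reference for the multiplexing idea they build on, a minimal sketch of classic broadcast-disk program generation (hotter disks repeated more often within one broadcast cycle) might look like the following, where the page contents and relative frequencies are purely illustrative placeholders:

```python
from functools import reduce
from math import lcm

def broadcast_program(disks):
    """disks: list of (pages, relative_frequency), ordered hottest first.
    Returns one major broadcast cycle as a flat list of pages, in which
    pages of hotter disks recur more often (higher broadcast frequency)."""
    max_freq = max(f for _, f in disks)
    chunks_per_disk = []
    for pages, f in disks:
        n = max_freq // f                          # slower disks are cut into more chunks
        size = -(-len(pages) // n)                 # ceiling division
        chunks_per_disk.append([pages[k * size:(k + 1) * size] for k in range(n)])
    minor_cycles = reduce(lcm, (len(c) for c in chunks_per_disk))
    program = []
    for i in range(minor_cycles):                  # one chunk from every disk per minor cycle
        for chunks in chunks_per_disk:
            program.extend(chunks[i % len(chunks)])
    return program

# A "hot" disk broadcast twice as often as a "cold" one (illustrative data).
print(broadcast_program([(["h1", "h2"], 2), (["c1", "c2", "c3", "c4"], 1)]))
```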

Relevance: 100.00%

Abstract:

HYPERJOSEPH combines hypertext, information retrieval, literary studies, Biblical scholarship, and linguistics. Dialectically, this paper contrasts hypertextual form (the extant tool) with AI-captured content (a desideratum) in the HYPERJOSEPH project. The discussion is more general and oriented towards epistemology.

Relevance: 100.00%

Abstract:

The creation of my hypermedia work Index of Love, which narrates a love story as an archive of moments, images and objects recollected, also articulated for me the potential of the book as electronic text. The book has always existed as both narrative and archive. Tables of contents and indexes allow the book to function simultaneously as linear narrative and non-linear, searchable database. The book therefore has more in common with the so-called 'new media' of the 21st century than it does with the dominant 20th-century media of film, video and audiotape, whose logic and mode of distribution are resolutely linear. My thesis is that the non-linear logic of new media brings to the fore an aspect of the book - the index - whose potential for the production of narrative is only just beginning to be explored. When readers/users access an electronic work, such as a website, via its menu, they simultaneously experience it as narrative and archive. The narrative journey taken is created through the menu choices made. Within the electronic book, therefore, the index (or menu) has the potential to function as more than just an analytical or navigational tool. It has the potential to become a creative, structuring device. This opens up new possibilities for the book, particularly as, in its paper-based form, the book indexes factual work but not fiction. In the electronic book, however, the index offers as rich a potential for fictional narratives as it does for factual volumes.

Relevance: 100.00%

Abstract:

Following the integration of nurse and midwifery education into institutions of higher education in the United Kingdom, a number of studies have shown that a defined clinical framework for nursing and midwifery lecturers in practice areas is lacking. The aim of this study was to explore strategies that nurse and midwifery lecturers from one higher education institution in south east England can use to work collaboratively with nurses and midwives to promote the utilization of research findings in practice. A cross-sectional survey using a structured questionnaire was sent to a sample of 60 nurse and midwifery lecturers and 90 clinical managers. Response rates of 67% (40) and 69% (62) respectively were obtained. The main strategies suggested were to make clinical staff more aware of what research exists in their specialties; to help them access research information from research databases; and to critically appraise this information. Other strategies were for teachers to run research workshops on site; to undertake joint research projects with clinical staff; to set up journal clubs or research interest groups; and to help formulate clinical guidelines and protocols that are explicitly research-based.

Relevance: 100.00%

Abstract:

Executive Summary

1. The Marine Life Information Network (MarLIN) has been developed since 1998. Defra funding has supported a core part of its work, the Biology and Sensitivity Key Information Sub-programme. This report relates to Biology and Sensitivity work for the period 2001-2004.

2. MarLIN Biology and Sensitivity research takes information on the biology of species to identify the likely effects of changing environmental conditions linked to human activities on those species. In turn, species that are key functional, key structural, dominant, or characteristic in a biotope (the habitat and its associated species) are used to identify biotope sensitivity. Results are displayed over the World Wide Web and can be accessed via a range of search tools that make the information of relevance to environmental management.

3. The first Defra contract enabled the development of criteria and methods of research, database storage methods and the research of a wide range of species. A contract from English Nature and Scottish Natural Heritage enabled biotopes relevant to marine SACs to be researched.

4. Defra funding in 2001-2004 has especially enabled recent developments to be targeted for research. Those developments included the identification of threatened and declining species by the OSPAR Biodiversity Committee, the development of a new approach to defining sensitivity (part of the Review of Marine Nature Conservation), and the opportunity to use Geographical Information Systems (GIS) more effectively to link survey data to MarLIN assessments of sensitivity.

5. The MarLIN database has been developed to provide a resource to 'pick-and-mix' information depending on the questions being asked. Using GIS, survey data that provides locations for species and biotopes has been linked to information researched by MarLIN to map the likely sensitivity of an area to a specified factor. Projects undertaken for the Irish Sea pilot (marine landscapes), in collaboration with CEFAS (fishing impacts) and with the Countryside Council for Wales (oil spill response) have demonstrated the application of MarLIN information linked to survey data in answering, through maps, questions about likely impacts of human activities on seabed ecosystems.

6. GIS applications that use MarLIN sensitivity information give meaningful results when linked to localized and detailed survey information (lists of species and biotopes as point source or mapped extents). However, broad landscape units require further interpretation.

7. A new mapping tool (SEABED map) has been developed to display data on species distributions and survey data according to search terms that might be used by an environmental manager.

8. MarLIN outputs are best viewed on the Web site, where the most up-to-date information from live databases is available. The MarLIN Web site receives about 1600 visits a day.

9. The MarLIN approach to assessing sensitivity and its application to environmental management were presented in papers at three international conferences during the current contract, and a 'touchstone' paper is to be published in the peer-reviewed journal Hydrobiologia. The utility of MarLIN information for environmental managers, amongst other sorts of information, has been described in an article in Marine Pollution Bulletin.

10. MarLIN information is being used to inform the identification of potential indicator species for implementation of the Water Framework Directive, including initiatives by ICES.

11. Non-Defra funding streams are supporting the updating of reviews and increasing the amount of peer review undertaken, both of which are important to the maintenance of the resource. However, whilst MarLIN information is sufficiently wide-ranging to be used in an 'operational' way for marine environmental protection and management, new initiatives and the new biotopes classification have introduced additional species and biotopes that will need to be researched in the future.

12. By the end of the contract, the Biology and Sensitivity Key Information database contained full Key Information reviews on 152 priority species and 117 priority biotopes, together with basic information on 412 species; a total of 564 marine benthic species.

Relevance: 100.00%

Abstract:

Latent semantic indexing (LSI) is a popular technique used in information retrieval (IR) applications. This paper presents a novel evaluation strategy based on the use of image processing tools. The authors evaluate the use of the discrete cosine transform (DCT) and Cohen Daubechies Feauveau 9/7 (CDF 9/7) wavelet transform as a pre-processing step for the singular value decomposition (SVD) step of the LSI system. In addition, the effect of different threshold types on the search results is examined. The results show that accuracy can be increased by applying both transforms as a pre-processing step, with better performance for the hard-threshold function. The choice of the best threshold value is a key factor in the transform process. This paper also describes the most effective structure for the database to facilitate efficient searching in the LSI system.
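The abstract does not spell out exactly how the transform and threshold are wired into the LSI pipeline, so the following is only a rough sketch under stated assumptions: a TF-IDF term-document matrix is DCT-transformed along the term axis, hard-thresholded (small coefficients zeroed, large ones kept unchanged), inverted, and then fed to the usual rank-k SVD. The tiny corpus, the threshold value and the choice to invert the transform before the SVD are all illustrative, not taken from the paper.

```python
import numpy as np
from scipy.fft import dct, idct
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["digital library search", "image retrieval with wavelet transforms",
        "latent semantic indexing of document collections"]

# Term-document matrix: terms as rows, documents as columns.
vec = TfidfVectorizer()
A = vec.fit_transform(docs).T.toarray()

# Pre-processing: DCT along the term axis, then a hard threshold
# (coefficients below tau are zeroed, the rest are kept unchanged).
C = dct(A, axis=0, norm="ortho")
tau = 0.1                                    # illustrative threshold; its choice is the key factor
C[np.abs(C) < tau] = 0.0
A_hat = idct(C, axis=0, norm="ortho")

# Standard LSI: rank-k SVD of the pre-processed matrix.
k = 2
U, s, Vt = np.linalg.svd(A_hat, full_matrices=False)
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T       # documents in the k-dimensional latent space

# Fold a query into the latent space and rank documents by cosine similarity.
q = vec.transform(["semantic document retrieval"]).toarray().ravel()
q_k = np.linalg.inv(np.diag(s[:k])) @ U[:, :k].T @ q
scores = doc_vecs @ q_k / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_k) + 1e-12)
print(scores)
```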

Relevance: 100.00%

Abstract:

Textual problem-solution repositories are available today in various forms, most commonly as problem-solution pairs from community question answering systems. Modern search engines that operate on the web can suggest possible completions in real time as users type in queries. We study the problem of generating intelligent query suggestions for users of customized search systems that enable querying over problem-solution repositories. Due to the small scale and specialized nature of such systems, we often do not have the luxury of depending on query logs for finding query suggestions. We propose a retrieval model for generating query suggestions for search on a set of problem-solution pairs. We harness the problem-solution partition inherent in such repositories to improve upon traditional query suggestion mechanisms designed for systems that search over general textual corpora. We evaluate our technique over real problem-solution datasets and illustrate that it provides large and statistically significant improvements.
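The retrieval model itself is only summarized above; purely as an illustration of how the problem-solution partition could be exploited when no query log is available, one might mine expansion terms from matching pairs and weight terms from the problem side more heavily than terms from the solution side. Everything below (the matching rule, the weights and the example repository) is an assumption, not the paper's method:

```python
import re
from collections import Counter

def suggestions(partial_query, pairs, w_problem=0.7, w_solution=0.3, top_k=5):
    """pairs: list of (problem_text, solution_text).  Suggestion terms are
    mined from pairs whose problem side overlaps the partial query, with
    problem-side terms weighted more heavily than solution-side terms."""
    q_terms = set(re.findall(r"\w+", partial_query.lower()))
    scores = Counter()
    for problem, solution in pairs:
        p_terms = re.findall(r"\w+", problem.lower())
        if not q_terms & set(p_terms):
            continue                              # this pair does not match the partial query
        for t in p_terms:
            if t not in q_terms:
                scores[t] += w_problem
        for t in re.findall(r"\w+", solution.lower()):
            if t not in q_terms:
                scores[t] += w_solution
    return [partial_query + " " + t for t, _ in scores.most_common(top_k)]

# Illustrative problem-solution repository.
pairs = [("laptop battery drains fast", "disable background apps and dim the screen"),
         ("laptop overheats under load", "clean the fan and reapply thermal paste")]
print(suggestions("laptop", pairs))
```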

Relevance: 100.00%

Abstract:

Online forums are becoming a popular way of finding useful information on the web. Search over forums for existing discussion threads has so far been limited to keyword-based search, due to the minimal effort required on the part of the users. However, it is often not possible to capture all the relevant context in a complex query using a small number of keywords. Example-based search, which retrieves similar discussion threads given one exemplary thread, is an alternative approach that can help the user provide richer context and vastly improve forum search results. In this paper, we address the problem of finding threads similar to a given thread. Towards this, we propose a novel methodology to estimate similarity between discussion threads. Our method exploits the thread structure to decompose threads into a set of weighted overlapping components. It then estimates pairwise thread similarities by quantifying how well the information in the threads is mutually contained within each other, using lexical similarities between their underlying components. We compare our proposed methods on real datasets against state-of-the-art thread retrieval mechanisms and illustrate that our techniques outperform the others by large margins on popular retrieval evaluation measures such as NDCG, MAP, Precision@k and MRR. In particular, consistent improvements of up to 10% are observed on all evaluation measures.
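The particular decomposition and weighting scheme are not given in the abstract; a minimal sketch of the general idea (overlapping components derived from the thread structure, directional lexical containment scores that are then symmetrised) could look like the following, where the decomposition rule, the component weights and the toy threads are all illustrative assumptions:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def components(thread):
    """One possible decomposition: the opening post on its own, plus each
    reply paired with the opening post (overlapping, weighted components)."""
    first, replies = thread[0], thread[1:]
    return [(first, 2.0)] + [(first + " " + r, 1.0) for r in replies]

def containment(comps_a, comps_b, vectorizer):
    """How well thread A's information is covered by thread B: the weighted
    average of each A-component's best cosine match among B's components."""
    ta = vectorizer.transform([c for c, _ in comps_a]).toarray()
    tb = vectorizer.transform([c for c, _ in comps_b]).toarray()
    ta /= np.linalg.norm(ta, axis=1, keepdims=True) + 1e-12
    tb /= np.linalg.norm(tb, axis=1, keepdims=True) + 1e-12
    best = (ta @ tb.T).max(axis=1)
    weights = np.array([w for _, w in comps_a])
    return float((best * weights).sum() / weights.sum())

def thread_similarity(thread_a, thread_b):
    comps_a, comps_b = components(thread_a), components(thread_b)
    vec = TfidfVectorizer().fit([c for c, _ in comps_a + comps_b])
    # Symmetrise the two directional containment scores.
    return 0.5 * (containment(comps_a, comps_b, vec) + containment(comps_b, comps_a, vec))

a = ["How do I reset my router?", "Hold the reset button for ten seconds."]
b = ["Router stuck after a firmware update", "Try a factory reset using the reset button."]
print(thread_similarity(a, b))
```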

Relevance: 100.00%

Abstract:

We consider the problem of linking web search queries to entities from a knowledge base such as Wikipedia. Such linking enables converting a user’s web search session to a footprint in the knowledge base that could be used to enrich the user profile. Traditional methods for entity linking have been directed towards finding entity mentions in text documents such as news reports, each of which is possibly linked to multiple entities, enabling the use of measures like entity set coherence. Since web search queries are very small text fragments, such criteria, which rely on the existence of a multitude of mentions, do not work well on them. We propose a three-phase method for linking web search queries to Wikipedia entities. The first phase does IR-style scoring of entities against the search query to narrow down to a subset of entities, which are expanded using hyperlink information in the second phase to a larger set. Lastly, we use a graph traversal approach to identify the top entities to link the query to. Through an empirical evaluation on real-world web search queries, we illustrate that our methods significantly enhance the linking accuracy over state-of-the-art methods.
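As a toy illustration of the three-phase structure described above (not the paper's actual scoring or traversal), the sketch below uses plain term overlap as a stand-in for the first phase's IR-style scoring, pulls in hyperlink neighbours in the second phase, and propagates seed scores one hop along the hyperlink graph as a stand-in for the third phase's graph traversal; the entities, descriptions and links are made-up data:

```python
import re
from collections import Counter

def tokens(text):
    return re.findall(r"\w+", text.lower())

def link_query(query, entities, links, n_seed=3, top_k=2):
    """entities: {name: description}; links: {name: set of hyperlinked names}."""
    q = set(tokens(query))
    # Phase 1: IR-style scoring (here: term overlap with name + description).
    scored = Counter({e: len(q & set(tokens(e + " " + d))) for e, d in entities.items()})
    seed = {e: s for e, s in scored.most_common(n_seed) if s > 0}
    # Phase 2: expand the seed set along hyperlinks to a larger candidate set.
    candidates = set(seed) | {t for e in seed for t in links.get(e, set())}
    # Phase 3: propagate seed scores one hop over the hyperlink graph and rank.
    final = Counter({e: float(seed.get(e, 0)) for e in candidates})
    for e, s in seed.items():
        for t in links.get(e, set()):
            if t in candidates:
                final[t] += 0.5 * s
    return [e for e, _ in final.most_common(top_k)]

# Made-up knowledge-base fragment.
entities = {"Python (language)": "programming language created by Guido van Rossum",
            "Python (snake)": "large non-venomous snake found in Africa and Asia",
            "Guido van Rossum": "Dutch programmer and author of the Python language"}
links = {"Python (language)": {"Guido van Rossum"},
         "Guido van Rossum": {"Python (language)"}}
print(link_query("python programming tutorial", entities, links))
```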

Relevance: 100.00%

Abstract:

In modern society, communications and digital transactions are becoming the norm rather than the exception. As we allow networked computing devices into our everyday actions, we build a digital lifestyle where networks and devices enrich our interactions. However, as we move our information towards a connected digital environment, privacy becomes extremely important, as most of our personal information can be found in the network. This is especially relevant as we design and adopt next-generation networks that provide ubiquitous access to services and content, increasing the impact and pervasiveness of existing networks. The environments that provide widespread connectivity and services usually rely on network protocols that have few privacy considerations, compromising user privacy. The presented work focuses on the network aspects of privacy, considering how network protocols threaten user privacy, especially in next-generation network scenarios. We target the identifiers that are present in each network protocol and support its designed function. By studying how network identifiers can compromise user privacy, we explore how these threats can stem from the identifier itself and from relationships established between several protocol identifiers. Following the study focused on identifiers, we show that privacy in the network can be explored along two dimensions: a vertical dimension that establishes privacy relationships across several layers and protocols, reaching the user, and a horizontal dimension that highlights the threats exposed by individual protocols, usually confined to a single layer. With these concepts, we outline an integrated perspective on privacy in the network, embracing both vertical and horizontal interactions of privacy. This approach enables the discussion of several mechanisms to address privacy threats on individual layers, leading to architectural instantiations focused on user privacy. We also show how the different dimensions of privacy can provide insight into the relationships that exist in a layered network stack, providing a potential path towards designing and implementing future privacy-aware network architectures.

Relevance: 100.00%

Abstract:

The artefact- and techno-centricity of research into the architecture process needs to be counterbalanced by other approaches. An increasing amount of information is collected and used in the process, resulting in challenges related to information and knowledge management, as this research evidences through interviews with practicing architects. However, emerging technologies are expected to resolve many of the traditional challenges, opening up new avenues for research. This research suggests that, among them, novel techniques addressing how architects interact with project information, especially information indirectly related to the artefacts, and tools that better address the social nature of work, notably communication between participants, become a higher priority. In the fields associated with Human-Computer Interaction, generic solutions still frequently prevail, whereas it appears that specific alternative approaches would be particularly in demand for the dynamic and context-dependent design process. This research identifies an opportunity for a process-centric and integrative approach to architectural practice and proposes an information management and communication software application, developed for the needs discovered in close collaboration with architects. Starting from the architects’ challenges, an information management software application, Mneme, was designed and developed to the point of a working prototype. It proposes the use of visualizations as an interface to provide an overview of the process, facilitate project information retrieval and access, and visualize relationships between pieces of information. Challenges with communication about visual content, such as images and 3D files, led to the development of a communication feature allowing discussions to be attached to any file format and searched from a database. Based on the architects testing the prototype and on literature recognizing the subjective side of usability, this thesis argues that visualizations, even 3D visualizations, present potential as an interface for information management in the architecture process. The architects confirmed that Mneme allowed them to get a better project overview, to locate heterogeneous content more easily, and provided context for the project information. The communication feature in Mneme was seen to offer a lot of potential in design projects where diverse file formats are typically used. Through empirical understanding of the challenges in the architecture process, and through testing the resulting software proposal, this thesis suggests promising directions for future research into the architecture and design process.