995 results for EBSCO Discovery Service


Relevance:

100.00%

Publisher:

Abstract:

In 2011, researchers at Bucknell University and Illinois Wesleyan University compared the search efficacy of Serial Solutions Summon, EBSCO Discovery Service, Google Scholar and conventional library databases. Using a mixed-methods approach, qualitative and quantitative data were gathered on students’ usage of these tools. Regardless of the search system, students exhibited a marked inability to effectively evaluate sources and a heavy reliance on default search settings. On the quantitative benchmarks measured by this study, the EBSCO Discovery Service tool outperformed the other search systems in almost every category. This article describes these results and makes recommendations for libraries considering these tools.

Relevance:

100.00%

Publisher:

Abstract:

Deakin University Library offers a number of search and discovery tools to its user communities: a web-scale discovery product, a faceted display catalogue and a traditional catalogue. The presentation provides an overview of the challenges the Library has faced in its attempt to offer a seamless, comprehensive search and discovery service that facilitates the finding of information resources. The information literacy and research skill levels of the University’s various cohort groups are considered, as well as the important role metadata plays in leading users to the resources they want.

Relevance:

100.00%

Publisher:

Abstract:

The presentation describes the technical aspects of the customisation of the Library's discovery interface, which uses EBSCO Discovery Service software. The customisation required close collaboration with EBSCO software engineers and involved a high level of technical expertise.

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Cardiovascular disease is the leading cause of death worldwide, affecting above all the public health of poor countries with emerging economies. The epidemiological transition in Colombia has increased the proportion of elderly patients with cardiovascular disease who require cardiac surgery. However, there is no consensus on how to select elderly patients for this type of intervention. The objective of this study was to define the mortality risk associated with cardiac surgery in this group of patients, based on a systematic review of the literature. Materials and Methods: A systematic review was designed using the PubMed (Medline), EBSCO Discovery Service, Ovid SP-EBMR, Sciverse and MDConsult platforms. The search terms were “Aged”, “Cardiac surgery” and “Mortality”, combined according to the syntax of each search engine. Publications were selected by consensus. The results were analysed with a Mantel-Haenszel model. Results: The search yielded a total of 8,565 publications. The data analysed in the model included 81,547 patients (7,855 octogenarians and 73,692 younger patients). The mortality risk associated with cardiac surgery in octogenarians was 125% (OR=2.35, 95% CI [2.15-2.57]). Discussion: Subjecting octogenarian patients to major cardiac surgery is a decision that requires careful clinical judgement, in which it is important to stress that the probability of a frankly unfavourable outcome is high. Further well-designed studies are needed to strengthen the current evidence on the risk found here.
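The pooling step mentioned above can be sketched with the Mantel-Haenszel estimator, which combines stratified 2×2 tables into a common odds ratio. The strata below are invented for illustration; they are not the review's data.

```python
def mantel_haenszel_or(strata):
    """Pool 2x2 tables (a, b, c, d) = (exposed events, exposed non-events,
    unexposed events, unexposed non-events) into a common odds ratio:
    OR_MH = sum(a*d/n) / sum(b*c/n) over all strata."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical strata, one tuple per study (NOT the review's data)
studies = [(30, 170, 15, 185), (45, 255, 20, 280), (12, 88, 6, 94)]
print(round(mantel_haenszel_or(studies), 2))  # → 2.31
```

A pooled OR above 1 indicates higher odds of the outcome in the exposed stratum, which is how the review expresses the excess mortality risk in octogenarians.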

Relevance:

100.00%

Publisher:

Abstract:

Electronic information is becoming increasingly rich in content and varied in format and style, while at the same time client devices are becoming increasingly varied in their capabilities. This mismatch between rich content and end-device capabilities presents a challenge in providing seamless and ubiquitous access to electronic documents for interested users. Service-oriented content adaptation has emerged as a potential solution to the content-device mismatch problem. Since an adaptation task can potentially be performed by multiple content adaptation services (CAS), an approach for CAS discovery is a fundamental component of a service-oriented content adaptation environment. In this paper, we propose a service discovery approach that considers the client device capability and the service’s attributes to discover appropriate CAS while optimizing performance and functionality. The efficiency of the proposed CAS discovery protocol is studied experimentally. The results show that the proposed discovery approach is effective in terms of discovering appropriate content adaptation services.
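As a hedged sketch of the idea only (the service attributes, names and ranking rule here are hypothetical, not the paper's actual protocol), a discovery step can filter candidate CAS by device capability and rank the survivors on an advertised performance attribute:

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    formats: set          # output formats the CAS can produce
    max_resolution: int   # largest image width it can target
    latency_ms: int       # advertised processing latency

def discover(services, device_formats, device_width):
    """Return candidate CAS ranked by fit to the client device.
    Filters out services that cannot serve the device at all,
    then ranks the rest by advertised latency (lower is better)."""
    candidates = [s for s in services
                  if s.formats & device_formats and s.max_resolution >= device_width]
    return sorted(candidates, key=lambda s: s.latency_ms)

svcs = [Service("transcode-a", {"jpeg", "png"}, 1920, 120),
        Service("transcode-b", {"webp"}, 800, 40),
        Service("transcode-c", {"jpeg"}, 1280, 60)]
# A phone that accepts JPEG at 1080 px width
ranked = discover(svcs, {"jpeg"}, 1080)
print([s.name for s in ranked])  # → ['transcode-c', 'transcode-a']
```

The filter step encodes the hard capability match (format, resolution); the sort encodes the soft performance preference, mirroring the "performance and functionality" trade-off the abstract describes.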

Relevance:

100.00%

Publisher:

Abstract:

Arguably, the world has become one large pervasive computing environment. Our planet is growing a digital skin of a wide array of sensors, hand-held computers, mobile phones, laptops, web services and publicly accessible web-cams. Often, these devices and services are deployed in groups, forming small communities of interacting devices. Service discovery protocols allow processes executing on each device to discover services offered by other devices within the community. These communities can be linked together to form a wide-area pervasive environment, allowing processes in one group to interact with services in another. However, the costs of communication, and the protocols by which this communication is mediated, differ in the wide-area from those of intra-group, or local-area, communication. Communication is an expensive operation for small, battery-powered devices, but it is less expensive for servers and workstations, which have a constant power supply and are connected to high-bandwidth networks. This paper introduces Superstring, a peer-to-peer service discovery protocol optimised for use in the wide-area. Its goals are to minimise computation and memory overhead in the face of large numbers of resources. It achieves this memory and computation scalability by distributing the storage cost of service descriptions and the computation cost of queries over multiple resolvers.
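The resolver-distribution idea can be illustrated with a minimal hash-based placement scheme. This is a generic sketch, not Superstring's actual protocol, and the resolver names and description keys are invented:

```python
import hashlib
from collections import defaultdict

def resolver_for(key, resolvers):
    """Map a service-description key to one resolver by hashing, so
    storage and query load spread across the resolver set and the
    same key always routes to the same resolver."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return resolvers[digest % len(resolvers)]

resolvers = ["resolver-0", "resolver-1", "resolver-2"]
placement = defaultdict(list)
for desc in ["printer:floor1", "camera:lobby", "display:room42", "sensor:roof"]:
    placement[resolver_for(desc, resolvers)].append(desc)

# Each resolver stores only its share of descriptions; a query for a key
# is routed to the same resolver, so no single node holds the full registry.
print(dict(placement))
```

Deterministic routing is what makes both costs scale: each description is stored once, and each query touches one resolver rather than the whole community.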

Relevance:

90.00%

Publisher:

Abstract:

RFID, in its different forms, but especially following EPCglobal standards, has become a key enabling technology for many applications. An essential component for developing track and trace applications in a complex multi-vendor scenario is the Discovery Service. Although they are already envisaged as part of the EPCglobal network architecture, the functional definition and standardization of Discovery Services is still at a very early stage. Within the scope of the BRIDGE project, a specification for the interfaces of Discovery Services has been developed, together with a prototype to validate the design and different models to enhance supply-chain control through track and trace applications. © 2008 IEEE.

Relevance:

90.00%

Publisher:

Abstract:

There is growing interest in Discovery Services for locating RFID and supply chain data between companies globally, in order to obtain product lifecycle information for individual objects. Discovery Services are heralded as a means to find serial-level data from previously unknown parties; more realistically, however, they provide a means to reduce the communications load on the information services, the network and the requesting client application. Attempts to design a standardised Discovery Service will not succeed unless security is considered in every aspect of the design. In this paper we clearly show that security cannot be bolted on in the form of access control, although access control is also required. The basic communication model of the Discovery Service critically affects who shares what data with whom, and what level of trust is required between the interacting parties. © 2009 IEEE.

Relevance:

90.00%

Publisher:

Abstract:

RFID is a technology that enables the automated capture of observations of uniquely identified physical objects as they move through supply chains. Discovery Services provide links to repositories that have traceability information about specific physical objects. Each supply chain party publishes records to a Discovery Service to create such links and also specifies access control policies to restrict who has visibility of link information, since it is commercially sensitive and could reveal inventory levels, flow patterns, trading relationships, etc. The requirement of being able to share information on a need-to-know basis, e.g. within the specific chain of custody of an individual object, poses a particular challenge for authorization and access control, because in many supply chain situations the information owner might not have sufficient knowledge about all the companies who should be authorized to view the information, because the path taken by an individual physical object only emerges over time, rather than being fully pre-determined at the time of manufacture. This led us to consider novel approaches to delegate trust and to control access to information. This paper presents an assessment of visibility restriction mechanisms for Discovery Services capable of handling emergent object paths. We compare three approaches: enumerated access control (EAC), chain-of-communication tokens (CCT), and chain-of-trust assertions (CTA). A cost model was developed to estimate the additional cost of restricting visibility in a baseline traceability system and the estimates were used to compare the approaches and to discuss the trade-offs. © 2012 IEEE.
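As a toy illustration of the chain-of-communication-token (CCT) idea only (this is not the paper's actual scheme; the key handling and identifiers are invented), each custodian can derive the next token from the previous one with a keyed hash, and a verifier can replay the custody path as it emerges:

```python
import hmac
import hashlib

def next_token(prev_token, custodian_id, secret):
    """Derive the token handed to the next custodian from the previous
    token, binding it to the receiving party's identity."""
    return hmac.new(secret, prev_token + custodian_id.encode(), hashlib.sha256).digest()

def verify_chain(seed, custodians, presented, secret):
    """Recompute the chain from the object's seed token; access is granted
    only if the presented token matches some point along the custody path."""
    token = seed
    for cid in custodians:
        token = next_token(token, cid, secret)
        if hmac.compare_digest(token, presented):
            return True
    return False

secret = b"object-epc-key"   # hypothetical per-object secret held by the publisher
seed = b"seed-token"
path = ["manufacturer", "distributor", "retailer"]
retailer_token = next_token(next_token(next_token(seed, "manufacturer", secret),
                                       "distributor", secret), "retailer", secret)
print(verify_chain(seed, path, retailer_token, secret))  # → True
```

The point of the sketch is the emergent-path property the abstract highlights: the publisher never needs an up-front list of authorized parties, because each custodian extends the chain when it hands the object on.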

Relevance:

90.00%

Publisher:

Abstract:

Sharing of information with those in need of it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for retrieval and also for discovery. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron.

The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the required insights. This is manifested in the Election Counting and Reporting Software (ECRS) system: a distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports.

Most distributed systems of the nature of ECRS possess a "fragile architecture" that makes them liable to collapse when minor faults occur. This is resolved with the help of the proposed penta-tier architecture, which places five different technologies at different tiers. The results of the experiments conducted, and their analysis, show that such an architecture helps keep the different components of the software intact and insulated from internal or external faults.

The architecture thus evolved needed a mechanism to support information processing and discovery, which necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. A second empirical study examined which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons; a comparative study of 200 documents in HTML and XML came out in favour of XML.

The concepts of the infotron and the infotron dictionary were applied to implement an Information Discovery System (IDS). IDS is, essentially, a system that starts with the infotron(s) supplied as clue(s) and distils the information required to satisfy the information discoverer's need from the documents at its disposal (the information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interact with the multiple infotron dictionaries maintained in the system.

To demonstrate IDS in action, and to discover information without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system can be enhanced by augmenting it with IDS, leading to an information discovery service. IDLIS proves that any legacy system can be augmented with IDS effectively to provide the additional functionality of an information discovery service. Possible applications of IDS and scope for further research in the field are also covered.

Relevance:

90.00%

Publisher:

Abstract:

Nondedicated clusters are currently at the forefront of the development of high performance computing systems. These clusters are relatively intolerant of hardware failures and cannot manage dynamic cluster membership efficiently. This report presents the logical design of an innovative self discovery service that provides for automated cluster management and resource discovery. The proposed service has an ability to share or recover unused computing resources, and to adapt to transient conditions autonomically, as well as the capability of providing dynamically scalable virtual computers on demand.

Relevance:

90.00%

Publisher:

Abstract:

While the emergence of clouds has led to a significant paradigm shift in business and research, cloud computing is still in its infancy. Specifically, there is no effective publication and discovery service, nor are cloud services easy to use. This paper presents a new technology offering ease of discovery, selection and use of clusters hosted within clouds. By improving these services, cloud clusters become easily accessible to all clients, from software services to non-computing human users.

Relevance:

90.00%

Publisher:

Abstract:

Many academic libraries are implementing discovery services as a way of giving their users a single comprehensive search option for all library resources. These tools are designed to change the research experience, yet very few studies have investigated the impact of discovery service implementation. This study examines one aspect of that impact by asking whether usage of publisher-hosted journal content changes after implementation of a discovery tool. Libraries that have begun using the four major discovery services have seen an increase in usage of this content, suggesting that for this particular type of material, discovery services have a positive impact on use. Though all discovery services significantly increased usage relative to a no discovery service control group, some had a greater impact than others, and there was extensive variation in usage change among libraries using the same service. Future phases of this study will look at other types of content.

Relevance:

80.00%

Publisher:

Abstract:

Queensland University of Technology’s Institutional Repository, QUT ePrints (http://eprints.qut.edu.au/), was established in 2003. With the help of an institutional mandate (endorsed in 2004), the repository now holds over 11,000 open access publications. The repository’s success is celebrated within the University and acknowledged nationally and internationally. QUT ePrints was built on GNU EPrints open source repository software (currently running v.3.1.3) and was originally configured to accommodate open access versions of the traditional range of research publications (journal articles, conference papers, books, book chapters and working papers). However, in 2009, the repository’s scope, content and systems were broadened, and the ‘QUT Digital Repository’ is now a service encompassing a range of digital collections, services and systems. For a work to be accepted into the institutional repository, at least one of the authors/creators must have a current affiliation with QUT. However, the success of QUT ePrints in terms of its capacity to increase the visibility and accessibility of our researchers' scholarly works resulted in requests to accept digital collections of works which were out of scope. To address this need, a number of parallel digital collections have been developed. These collections include OZcase, a collection of legal research materials, and ‘The Sugar Industry Collection’, a digitised collection of books and articles on sugar cane production and processing. Additionally, the Library has responded to requests from academics for a service to support the publication of new, and existing, peer reviewed open access journals. A project is currently underway to help a group of senior QUT academics publish a new international peer reviewed journal. The QUT Digital Repository website will be a portal for access to a range of resources to support copyright management. It is likely that it will provide an access point for the institution’s data repository.
The data repository, provisionally named the ‘QUT Data Commons’, is currently a work-in-progress. The metadata for some QUT datasets will also be harvested by, and discoverable via, ‘Research Data Australia’, the dataset discovery service managed by the Australian National Data Service (ANDS). The QUT Digital Repository will integrate a range of technologies and services related to scholarly communication. This paper will discuss the development of the QUT Digital Repository, its strategic functions, the stakeholders involved and lessons learned.