936 results for Audio-visual library service


Relevance:

30.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To help overcome this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service, and agent.

The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them.

The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web.

The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, as well as the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. By defining plans, an agent can be configured to target specific resources.

The discovery framework has been evaluated in different scenarios, each covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web. Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web.

The main contributions of the thesis are the unified framework for discovery, which allows configuring agents to perform automated tasks; a scraping ontology, defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings out of the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and the construction of a base of discovery rules.
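To make the content level concrete, the sketch below illustrates the idea of discovery rules mapping pieces of an HTML representation onto semantic entities. It is a minimal illustration only: the rule format, property names, and sample markup are hypothetical and not taken from the thesis (which defines rules over a scraping ontology), and it assumes the BeautifulSoup library for HTML parsing.

```python
# Minimal sketch of content-level discovery rules: each rule maps a
# CSS selector over an HTML representation onto a semantic property.
# Rule names, property URIs, and the sample HTML are illustrative.
from bs4 import BeautifulSoup

# Hypothetical discovery rules: semantic property -> CSS selector.
NEWS_ITEM_RULES = {
    "dc:title": "article h1.headline",
    "dc:creator": "article span.byline",
    "dc:date": "article time.published",
}

def apply_discovery_rules(html: str, rules: dict) -> dict:
    """Apply the rules to an HTML representation and return the
    semantically labelled contents that were discovered."""
    soup = BeautifulSoup(html, "html.parser")
    discovered = {}
    for prop, selector in rules.items():
        node = soup.select_one(selector)
        if node is not None:
            discovered[prop] = node.get_text(strip=True)
    return discovered

sample = """
<article>
  <h1 class="headline">Example headline</h1>
  <span class="byline">A. Author</span>
  <time class="published">2012-05-01</time>
</article>
"""
print(apply_discovery_rules(sample, NEWS_ITEM_RULES))
# {'dc:title': 'Example headline', 'dc:creator': 'A. Author',
#  'dc:date': '2012-05-01'}
```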

Relevance:

30.00%

Publisher:

Relevance:

30.00%

Publisher:

Abstract:

HTTP adaptive streaming (HAS) has become widespread in multimedia services because it allows service providers to improve network resource utilization and users' Quality of Experience (QoE). With this technology, video playback interruptions are reduced, since the HAS client adapts the quality to the current conditions, taking into account the network and server status as well as the capabilities of the user's device. Adaptation can be done using different strategies. In order to provide optimal QoE, the perceptual impact of adaptation strategies from the user's point of view should be studied. However, the time-varying video quality caused by adaptation, which usually takes place over a long interval, introduces a new type of impairment that makes the subjective evaluation of adaptive streaming systems challenging. The contribution of this paper is twofold. First, it investigates the testing methodology for evaluating HAS QoE by comparing the subjective experimental outcomes obtained with the standardized ACR method and with a semi-continuous method developed to evaluate long sequences; the influence of using audiovisual stimuli to evaluate video-related impairments is also examined. Second, the impact of some of the technical factors of adaptation, including the quality switching amplitude and the chunk size, is investigated in combination with a wide range of commercial content types. The results of this study provide good insight towards an appropriate testing method for evaluating HAS QoE, as well as towards designing switching strategies with optimal visual quality.
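As an illustration of the adaptation factors studied (in particular the quality switching amplitude), the following sketch shows one simple client-side adaptation strategy: pick the highest representation the measured throughput can sustain, but cap the switching amplitude at one quality level per chunk. The bitrate ladder, safety margin, and function names are hypothetical, not taken from the paper.

```python
# Illustrative HAS adaptation sketch: select the next chunk's quality
# from measured throughput, limiting the switching amplitude to one
# level per chunk. All values below are assumptions for illustration.
BITRATES_KBPS = [300, 750, 1500, 3000, 6000]  # available representations

def next_quality(current_level: int, throughput_kbps: float,
                 margin: float = 0.8) -> int:
    """Return the quality level to request for the next chunk."""
    # Highest representation sustainable at the measured throughput,
    # with a safety margin against throughput fluctuations.
    target = 0
    for i, rate in enumerate(BITRATES_KBPS):
        if rate <= throughput_kbps * margin:
            target = i
    # Limit the switching amplitude to a single level per chunk.
    if target > current_level:
        return current_level + 1
    if target < current_level:
        return current_level - 1
    return current_level

# Example: at 2 Mbps measured throughput, a client starting at the
# lowest level steps up gradually instead of jumping to 1500 kbps.
level = 0
for measured in (2000, 2000, 2000):
    level = next_quality(level, measured)
    print(level, BITRATES_KBPS[level])  # 1 750, then 2 1500, then 2 1500
```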

Relevance:

30.00%

Publisher:

Abstract:

https://bluetigercommons.lincolnu.edu/pli/1010/thumbnail.jpg

Relevance:

30.00%

Publisher:

Abstract:

https://bluetigercommons.lincolnu.edu/pli/1009/thumbnail.jpg

Relevance:

30.00%

Publisher:

Abstract:

Objectives: In a pilot study, the library had good results using SERVQUAL, a respected and often-used instrument for measuring customer satisfaction. The SERVQUAL instrument itself, however, received some serious and well-founded criticism from the respondents to our survey. The purpose of this study was to test the comparability of the results of SERVQUAL with a revised and shortened instrument modeled on SERVQUAL. The revised instrument, the Assessment of Customer Service in Academic Health Care Libraries (ACSAHL), was designed to better assess customer service in academic health care libraries.

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To examine the types of questions received by Clinical Informatics Consult Service (CICS) librarians from clinicians on rounds and to analyze the number of clearly differentiated viewpoints provided in response.

Relevance:

30.00%

Publisher:

Abstract:

The Patient Informatics Consult Service (PICS) at the Eskind Biomedical Library at Vanderbilt University Medical Center (VUMC) provides patients with consumer-friendly information through an information prescription mechanism. Clinicians refer patients to the PICS by completing the prescription and noting the patient's condition and any relevant factors. In response, PICS librarians critically appraise and summarize consumer-friendly materials into a targeted information report. Copies of the report are given to both patient and clinician, thus facilitating doctor-patient communication and closing the clinician-librarian feedback loop. The prescription form also circumvents many of the usual barriers patients face in locating information, namely unfamiliarity with medical terminology and lack of knowledge of authoritative sources. PICS librarians capture the time and expertise put into these reports by creating Web-based pathfinders on prescription topics. Pathfinders contain librarian-created disease overviews and links to authoritative resources, and seek to minimize the consumer's exposure to unreliable information. Pathfinders also adhere to strict guidelines that act as a model for locating, appraising, and summarizing information for consumers. These mechanisms (the information prescription, research reports, and pathfinders) serve as steps toward the long-term goal of fully integrating consumer health information into patient care at VUMC.

Relevance:

30.00%

Publisher:

Abstract:

Outreach is now a prevailing activity in health sciences libraries. As an introduction to a series of papers on current library outreach to rural communities, this paper traces the evolution of such activities by proponents in health sciences libraries from 1924 to 1992. Definitions of rural and outreach are followed by a consideration of the expanding audience groups. The evolution in approaches covers the package library and enhancements in extension service, library development, circuit librarianship, and self-service arrangements made possible by such programs as the Georgia Interactive Network (GaIN) and Grateful Med.

Relevance:

30.00%

Publisher:

Abstract:

Many hearing problems go unnoticed by parents and teachers, which harms children's learning, especially in the school environment. Hearing screening programs can therefore be used to detect and subsequently diagnose schoolchildren, so that the impact of possible auditory sequelae on a child's school performance can be prevented or minimized. Today there are programs that allow better monitoring of populations in need of preventive and curative care, and hearing is a very important aspect that can be assessed when these programs are put into practice. The National Program for the Reorientation of Professional Health Training (Pró-Saúde), aimed at reorienting professional training, sought to integrate teaching and service and to promote primary care through a comprehensive approach to the health-disease process. External settings can be used by university students and professors to put into practice actions that humanize health care practices and make them comprehensive, through the articulation of preventive and curative, individual and collective health actions and services. The school is considered one of the settings in which this work can be carried out. The School Health Program (PSE) opens up the school environment with the purpose of contributing to the comprehensive education of students in the public basic education network through prevention, health promotion, and health care actions. This retrospective cross-sectional study had as its main objective to characterize the audiological profile of students at a public school in the municipality of Bauru, SP, relying on the integration of health and education professionals in the school environment, on the basis of the programs cited above. The hearing screening consisted of the following procedures: immittance testing, visual inspection of the external auditory canal, distortion-product otoacoustic emissions, and pure-tone threshold audiometry. Of the 652 students, aged between 10 and 18 years, the great majority (97.1%) presented normal hearing. In 2.9% of this population some hearing alteration was found; all of these were temporary, with the exception of a single participant, who had a sensorineural hearing loss. Although many children and adolescents were found to have normal hearing, what most underscores the importance of this work is the need for hearing screening in school settings and, essentially, for follow-up in this age group, since studies on it are scarce. Although the few hearing alterations found were temporary, it is precisely these that interfere with good school performance, among other factors.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study is multifaceted: 1) to describe eScience research in a comprehensive way; 2) to help library and information specialists understand the realm of eScience research and the information needs of the community, and to demonstrate the importance of LIS professionals within the eScience domain; and 3) to explore the current state of the curricular content of ALA-accredited MLS/MLIS programs to understand the extent to which they prepare new professionals for eScience librarianship. The literature review focuses heavily on the information service needs of eScientists and other data-driven researchers, in addition to demonstrating how and why librarians and information specialists can and should fill these service gaps and meet these information needs within eScience research. By looking at the current curricula of American Library Association (ALA) accredited MLS/MLIS programs, we can identify potential gaps in knowledge and where to improve in order to prepare and train new MLS/MLIS graduates to fulfill the needs of eScientists. This investigation is meant to be informative and can be used as a tool for LIS programs to assess their curricula against the needs of eScience and other data-driven and networked research. Finally, this investigation will provide the LIS profession with awareness of and insight into the services needed to support a thriving eScience and data-driven research community.