958 results for ontologies, digital library, personalization
Abstract:
Once regarded as the public’s center of knowledge and information, public libraries today are challenged by the rise of mobile technology and the Internet. The information behavior of everyday library patrons has shifted toward instant access to information through Google search instead of the resources housed in their local libraries. The focus of public library design is shifting from storing and protecting valuable resources (books) to the experience of an active public space for learning, engaging and reading. This thesis reimagines a public library branch in East Baltimore City by evaluating the architecture of public library examples of the past and of today. By understanding the user experience of the three key elements of public library design – procession, services and flexible space – a new public library design that engages and responds to the local community can be proposed.
Abstract:
The poster was presented at the 2016 Tri-Chapter Meeting (MACMLA, NY-NJ and PHIL Chapters), The 3Ls - Librarians, Leadership and Learning on September 25, 2016 in Philadelphia, PA (http://macmla.libguides.com/tri-chapter2016-posters).
Abstract:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of the to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in their diversity, but they are integral to the establishment of classes of risk exposure and to the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, to develop an appropriate methodology for risk management, to evaluate existing preservation evaluation approaches and metrics, to structure best-practice knowledge and, lastly, to demonstrate a range of tools that utilise our findings. We describe a mixed methodology that uses interviews and surveys, extensive content analysis, practical case studies and iterative software and ontology development. We build on a robust foundation: the development of the Digital Repository Audit Method Based on Risk Assessment.
We summarise the extent of the challenge facing the digital preservation community (and, by extension, users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, increasing complexity, and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. Collectively they demand an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is a particular priority. We introduce our approach, based on risk management. Risk is an expression both of the likelihood of a negative outcome and of the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity: a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies in terms of both their manifestation and mitigation. They can be deconstructed into their atomic units, and responsibility for their resolution delegated appropriately. We go on to describe how the manifestation of risks typically spans an entire organisational environment; as the focus of our analysis, risk safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk factors, whether through the risks themselves or through associated system elements. Doing so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community.
We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore, we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally, we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are drawn by revisiting legacy studies and exposing the resource and its associated applications to evaluation by the digital preservation community.
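To make the linked-data idea concrete, the sketch below shows how an ontology in the spirit of PORRO might relate repository, object and risk characteristics as subject-predicate-object triples, and how a simple query can follow those links. The class and property names (`ex:holds`, `ex:hasFormat`, `ex:exposedTo`, `ex:mitigatedBy`) are illustrative assumptions, not the actual PORRO vocabulary.

```python
# Illustrative triples relating a repository, an object, its format,
# a risk to which that format is exposed, and a mitigating policy.
# All names are invented for this sketch; PORRO's real terms differ.
triples = {
    ("ex:Repository1", "ex:holds", "ex:Object42"),
    ("ex:Object42", "ex:hasFormat", "ex:TIFF"),
    ("ex:TIFF", "ex:exposedTo", "ex:FormatObsolescence"),
    ("ex:FormatObsolescence", "ex:mitigatedBy", "ex:MigrationPolicy"),
}

def risks_for(obj, triples):
    """Follow hasFormat then exposedTo links to find risks for an object."""
    formats = {o for s, p, o in triples if s == obj and p == "ex:hasFormat"}
    return {o for s, p, o in triples
            if s in formats and p == "ex:exposedTo"}

print(risks_for("ex:Object42", triples))  # {'ex:FormatObsolescence'}
```

In a real linked-data deployment these triples would live in an RDF store and the traversal would be a SPARQL query, but the structural point is the same: once risks, objects and policies are related explicitly, risk exposure becomes a graph query rather than a manual audit.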
Abstract:
Because the public library is free and open to all, it constitutes one of the means of information and knowledge that offers opportunities to every member of the community, and through which the rights to information, education, culture and recreation can be made effective. According to UNESCO, it is the public library's responsibility to facilitate access to knowledge and information through the Internet, and thereby to achieve its own development and technological evolution. At present, the Costa Rican public library is gradually venturing into the world of ICT and trying to join the digital revolution, in order to offer novel, high-quality services and remain useful to citizens, while also working to narrow the digital divide for those with fewer opportunities.
Abstract:
This presentation, given at the 2015 USETDA (United States Electronic Theses and Dissertations Association) conference in Austin, Texas, explores the history of the Digital Collections Center at Florida International University and where and how it functions in the process of publishing, archiving, and promoting the university's electronic theses and dissertations. Additionally, the functionality of Digital Commons is discussed, along with the use of Adobe Acrobat for creating archival-quality PDFs. The final section discusses promotion techniques used via social media to increase the discoverability of ETDs.
A Digital Collection Center's Experience: ETD Discovery, Promotion, and Workflows in Digital Commons
Abstract:
This presentation was given at the Digital Commons Southeastern User Group conference at Winthrop University, South Carolina on June 5, 2015. The presentation discusses how the digital collections center (DCC) at Florida International University uses Digital Commons as their tool for ingesting, editing, tracking, and publishing university theses and dissertations. The basic DCC workflow is covered as well as institutional repository promotion.
Abstract:
This presentation was given at the Panhandle Library Access Network's (PLAN) Innovation Conference, Digitization: Preserving the Past for the Future, on August 14, 2015. The presentation uses a specific collection of directories as a case study of the complications librarians and archivists face in digitizing older materials that may also be quite large, such as a directory. Prime OCR and Abbyy Fine Reader are discussed and their pros and cons covered. Troubleshooting and editing with Adobe Photoshop is also discussed.
Abstract:
This presentation was given at the FLVC regional conference at Broward College on May 7, 2015 and introduced scanning, processing, record creation, dissemination, and preservation in FIU Libraries' Digital Collections Center. The main focus was on processing, specifically employing OCR technology with difficult sources.
Abstract:
New possibilities for communication also offer new opportunities for the training, analysis and evaluation of research. Scientists and researchers frequently use web-based applications in their research. In practically every research area, digital tools have become indispensable; the emergence of new paradigms such as open access, alternative metrics and social networks is an important example of how these changes have affected the way scholars think about the future of academic publishing. These developments have created new possibilities and new challenges in evaluating the quality of research, at the level both of individual researchers and of professional development. It is at this level that the library plays an indispensable role in building information skills and competencies, which will be reflected in the professional's social standing, in their professional satisfaction and, ultimately, in the quality of the institution itself. The most relevant aspects of the new paradigms of scientific communication and dissemination are highlighted, and the most appropriate actions in this regard are recommended.
Abstract:
Thesis (Master, Education) -- Queen's University, 2016-08-29
Abstract:
Many years have passed since Berners-Lee envisioned the Web as it should be (1999), but still many information professionals do not know their precise role in its development, especially concerning ontologies, considered one of its main elements. Why? May it still be a lack of understanding between the different academic communities involved (namely, Computer Science, Linguistics and Library and Information Science), as reported by Soergel (1999)? The idea behind the Semantic Web is that of several technologies working together to achieve optimum information retrieval performance, based on proper resource description in a machine-understandable way, by means of metadata and vocabularies (Greenberg, Sutton and Campbell, 2003). This is obviously something that Library and Information Science professionals can do very well, but are we doing enough? When computer scientists put the ontology paradigm on stage, they were asking for semantically richer vocabularies that could support logical inferences in artificial intelligence, as a way to improve information retrieval systems. Which direction should vocabulary development take to contribute better to that common goal? The main objective of this paper is twofold: 1) to identify the main trends, issues and problems concerning ontology research, and 2) to identify possible contributions from the Library and Information Science area to the development of ontologies for the Semantic Web. To do so, the paper is structured as follows. First, the methodology followed in the paper is reported, which is based on a thorough literature review in which the main contributions are analysed. Then, the paper presents a discussion of the main trends, issues and problems concerning ontology research identified in the literature review. Recommendations of possible contributions from the Library and Information Science area to the development of ontologies for the Semantic Web are finally presented.
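The "semantically richer vocabularies that could support logical inferences" the abstract mentions can be illustrated with a minimal sketch: a thesaurus-style "broader than" hierarchy over which a reasoner can compute transitive closure, the simplest inference a Semantic Web vocabulary enables. The terms and hierarchy below are invented for the example.

```python
# A tiny "broader than" hierarchy, as a SKOS-style thesaurus might encode
# it (each term maps to its immediately broader term). Terms are invented.
broader = {
    "ontologies": "knowledge organization systems",
    "thesauri": "knowledge organization systems",
    "knowledge organization systems": "information retrieval",
}

def ancestors(term, broader):
    """Return all transitively broader terms of `term`, nearest first."""
    out = []
    while term in broader:
        term = broader[term]
        out.append(term)
    return out

print(ancestors("ontologies", broader))
# → ['knowledge organization systems', 'information retrieval']
```

A query for "information retrieval" resources could thus also match items indexed under "ontologies", which is precisely the retrieval gain that richer, formally structured vocabularies are meant to deliver.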
Abstract:
Harnessing the potential of semantic web technologies to support and diversify scholarship is gaining popularity in the digital humanities. This talk describes a number of projects utilising Linked Data ranging from musicology and library metadata, to the representation of the narrative structure, philological, bibliographical, and museological data of ancient Mesopotamian literary compositions.
Abstract:
CERN - the European Organization for Nuclear Research - is one of the largest research centres worldwide, responsible for several discoveries in physics as well as in computer science. The CERN Document Server, also known as CDS Invenio, is software developed at CERN that aims to provide a set of tools for managing digital libraries. In order to improve the functionality of CDS Invenio, a new module called BibCirculation was developed to manage books (and other items) from the CERN library, working as an integrated library system. This thesis describes the steps taken to achieve the project's several goals, explaining, among other aspects, the process of integration with the other existing modules as well as the approach used to associate book information with CDS Invenio metadata. A detailed account of the entire implementation process and of the testing performed is also provided. Finally, the conclusions of the project and ideas for future development are presented.
Abstract:
Pain is a highly complex phenomenon involving intricate neural systems, whose interactions with other physiological mechanisms are not fully understood. Standard pain assessment methods, relying on verbal communication, often fail to provide reliable and accurate information, which poses a critical challenge in the clinical context. In the era of ubiquitous and inexpensive physiological monitoring, coupled with the advancement of artificial intelligence, these new tools appear as the natural candidates to be tested to address such a challenge. This thesis aims to conduct experimental research to develop digital biomarkers for pain assessment. After providing an overview of the state of the art regarding pain neurophysiology and assessment tools, methods for appropriately conditioning physiological signals and controlling confounding factors are presented. The thesis focuses on three different pain conditions: cancer pain, chronic low back pain, and pain experienced by patients undergoing neurorehabilitation. The approach presented in this thesis has shown promise, but further studies are needed to confirm and strengthen these results. Prior to developing any models, a preliminary signal quality check is essential, along with the inclusion of personal and health information in the models to limit their confounding effects. A multimodal approach is preferred for better performance, although unimodal analysis has revealed interesting aspects of the pain experience. This approach can enrich the routine clinical pain assessment procedure by enabling pain to be monitored when and where it is actually experienced, without the need for explicit communication. This would improve the characterization of the pain experience, aid in antalgic therapy personalization, and bring timely relief, with the ultimate goal of improving the quality of life of patients suffering from pain.