69 results for Digitisation
Abstract:
Publishing is no doubt one of the oldest and most diverse sectors in the creative economy. While publishing was originally associated with print and paper, the term is nowadays also commonly used for organizations that control, administer and license intellectual properties in other sectors of the creative economy, such as videogames and music. While the title of this chapter is ‘Publishing’, we have no intention of covering all publishing-related activities, but will focus on the economic consequences of digitization in two traditional and important print media sectors, namely books and magazines. Within these sectors we will focus specifically on consumer magazines and trade books, in other words books and magazines that are sold via commercial retailers to consumers. It is relevant to study these two publishing industries, since they share a number of fundamental characteristics and have experienced similar economic consequences caused by the digitization of the creative economy. Both industries have undergone a gradual shift from print to digital and increasingly rely on revenues based on digital content carriers such as e-books, tablet magazine applications, special interest websites, blogs and so on.
Abstract:
Accurate three-dimensional representations of cultural heritage sites are highly valuable for scientific study, conservation, and educational purposes. In addition to their use for archival purposes, 3D models enable efficient and precise measurement of relevant natural and architectural features. Many cultural heritage sites are large and complex, consisting of multiple structures spatially distributed over tens of thousands of square metres. Effectively digitising such geometrically complex locations requires measurements to be acquired from a variety of viewpoints. While several technologies exist for capturing the 3D structure of objects and environments, none are ideally suited to complex, large-scale sites, mainly due to their limited coverage or acquisition efficiency. We explore the use of a recently developed handheld mobile mapping system called Zebedee in cultural heritage applications. The Zebedee system is capable of efficiently mapping an environment in three dimensions by continually acquiring data as an operator holding the device traverses the site. The system was deployed at the former Peel Island Lazaret, a culturally significant site in Queensland, Australia, consisting of dozens of buildings of various sizes spread across an area of approximately 400 × 250 m. With the Zebedee system, the site was scanned in half a day, and a detailed 3D point cloud model (with over 520 million points) was generated from the 3.6 hours of acquired data in 2.6 hours of processing. We present results demonstrating that Zebedee was able to capture both site context and building detail with accuracy comparable to manual measurement techniques, and at a greatly increased level of efficiency and scope. The scan allowed us to record derelict buildings that previously could not be measured because of the scale and complexity of the site. The resulting 3D model captures both interior and exterior features of buildings, including structure, materials, and the contents of rooms.
Abstract:
In 2012 the Australian Commonwealth government was scheduled to release the first dedicated policy for culture and the arts since the Keating government's Creative Nation (1994). Investing in a Creative Australia was to appear after a lengthy period of consultation between the Commonwealth government and all interested cultural sectors and organisations. When it eventuates, the policy will be of particular interest to those information professionals working in the GLAM (galleries, libraries, archives and museums) environment. GLAM is a cross-institutional field which seeks to find points of commonality among various cultural-heritage institutions, while still recognising their points of difference. Digitisation, collaboration and convergence are key themes and characteristics of the GLAM sector and its associated theoretical discipline. The GLAM movement has seen many institutions seeking to work together to create networks of practice that are beneficial to the cultural-heritage industry and sector. With a new Australian cultural policy imminent, it is timely to reflect on the issues and challenges that GLAM principles present to national cultural-heritage institutions by discussing their current practices. In doing so, it is possible to suggest productive ways forward for these institutions which could then be supported at a policy level by the Commonwealth government. Specifically, this paper examines four institutions: the National Gallery of Australia, the National Library of Australia, the National Archives of Australia and the National Museum of Australia. The paper reflects on their responses to the Commonwealth's 2011 Cultural Policy Discussion Paper. It argues that by encouraging and supporting collecting institutions to participate more fully in GLAM practices the Commonwealth government's cultural policy would enable far greater public access to, and participation in, Australia's cultural heritage. Furthermore, by considering these four institutions, the paper presents a discussion of the challenges and the opportunities that GLAM theoretical and disciplinary principles present to the cultural-heritage sector.
Implications for Best Practice
* GLAM is a developing field of theory and practice that encompasses many issues and challenges for practitioners in this area.
* GLAM principles and practices are increasingly influencing the cultural-heritage sector.
* Cultural policy is a key element in shaping the future of Australia's cultural-heritage sector and needs to incorporate GLAM principles.
Abstract:
This project constructed virtual plant leaf surfaces from digitised data sets for use in droplet spray models. Digitisation techniques for obtaining data sets for cotton, chenopodium and wheat leaves are discussed and novel algorithms for the reconstruction of the leaves from these three plant species are developed. The reconstructed leaf surfaces are included into agricultural droplet spray models to investigate the effect of the nozzle and spray formulation combination on the proportion of spray retained by the plant. A numerical study of the post-impaction motion of large droplets that have formed on the leaf surface is also considered.
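The abstract does not spell out the reconstruction algorithms themselves, so the following is a purely illustrative sketch of the general idea: fitting a smooth surface to digitised (x, y, z) leaf points with an off-the-shelf radial basis function interpolator and sampling it on a regular grid of the kind a droplet spray model could take as input geometry. The file name, units and smoothing value are assumptions, not details from the project.

```python
# Illustrative sketch only (not the project's reconstruction algorithms):
# fit a smooth surface to digitised (x, y, z) leaf points and sample it on
# a regular grid for use as input geometry in a spray retention model.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical input: one digitised point per row (x, y, z), e.g. in mm.
points = np.loadtxt("cotton_leaf_scan.txt")        # assumed file name
xy, z = points[:, :2], points[:, 2]

# Thin-plate-spline surface z = f(x, y); 'smoothing' trades fidelity
# against digitisation noise (value chosen arbitrarily here).
surface = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1e-3)

# Evaluate the reconstructed surface on a regular grid; in practice points
# falling outside the leaf outline would be masked out.
xg, yg = np.meshgrid(
    np.linspace(xy[:, 0].min(), xy[:, 0].max(), 200),
    np.linspace(xy[:, 1].min(), xy[:, 1].max(), 200),
)
zg = surface(np.column_stack([xg.ravel(), yg.ravel()])).reshape(xg.shape)
```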
Abstract:
This paper examines key issues emerging from the July 2014 Where We Are Heading sessions conducted between The National Film and Sound Archive (NFSA) CEO Michael Loebenstein, industry stakeholders and members of the public seeking to engage with the future direction of the NFSA. Analysis of transcripts from these public meetings reveals that significant conceptual and programmatic gaps exist between what the NFSA has done in the past, how it “self-actualises” in terms of a national collection and what it can practically and effectively achieve in the near future. These significant challenges to the historical function of the Archive occur at a time of pronounced economic austerity for public cultural institutions and expanding, digitally driven curatorial responsibilities. Tensions exist between the need for the NFSA to increase revenue while preserving the function of an open and accessible Archive. Three key areas of challenge are addressed - digitisation, funding and the need for the NFSA to connect more broadly and more deeply with Australian society. The latter area is identified as crucial as the NFSA continues to articulate and actively promote the public value of the Archive through renewed program and outreach efforts.
Abstract:
While people in Catholic parishes in Ireland appear keenly aware of parish boundaries, these understandings are more often oral than cartographic. There is no digital map of all of the Catholic parishes in Ireland. However, the institutional Catholic Church insists that no square kilometre can exist outside of a parish boundary. In this paper, I explain a process whereby the Catholic parishes of Ireland were mapped digitally. I will outline some of the technical challenges of digitising such boundaries. In making these maps, it is not only a question of drawing lines but also of mapping people’s understanding of their locality. Through an example of one part of the digitisation project, I want to show how verifying maps with local people often complicates something which may at first sight have seemed simple. The paper ends with a comparison of how other communities of interest are territorialised in Ireland and elsewhere, to draw out some broader theoretical and theological issues of concern.
Abstract:
This paper is concerned with the development of digital humanities infrastructure – tools and resources which make existing e-content easier to discover, utilise and embed in teaching and research. The past development of digital content in the humanities (in the United Kingdom) is considered with its resource-focused approach, as are current barriers facing digital humanities as a discipline. Existing impacts from e-infrastructure are discussed, based largely on the authors’ own discrete or collaborative projects. This paper argues that we need to consider further how digital resources are actually used, and the ways in which future digital resources might enable new types of research questions to be asked. It considers the potential for such enabling resources to advance digital humanities significantly in the near future.
Abstract:
Recent debates about media literacy and the internet have begun to acknowledge the importance of active user-engagement and interaction. It is not enough simply to access material online; users also expect to comment upon it and re-use it. Yet how do these new user expectations fit within digital initiatives which increase access to audio-visual content but which prioritise access and preservation of archives and online research rather than active user-engagement? This article will address these issues of media literacy in relation to audio-visual content. It will consider how these issues are currently being addressed, focusing particularly on the high-profile European initiative EUscreen. EUscreen brings together 20 European television archives into a single searchable database of over 40,000 digital items. Yet creative re-use restrictions and copyright issues prevent users from re-working the material they find on the site. Instead of re-use, EUscreen offers access to and detailed contextualisation of its collection of material. But if the emphasis for resources within an online environment rests no longer upon access but on user-engagement, what do EUscreen and similar sites offer to different users?
Abstract:
SPHERE (Stormont Parliamentary Hansards: Embedded in Research and Education) was a JISC-funded project based at King’s College, London and Queen’s University, Belfast, working in partnership with the Northern Ireland Assembly Library and the NIA Official Report (Hansard). Its purpose was to assess the use, value and impact of The Stormont Papers digital resource, and to use the results of this assessment to make recommendations for a series of practical approaches to embed the resource within teaching, learning and research among the wider user community. The project began in November 2010 and was concluded in April 2010.
A series of formal reports on the project are published by JISC online at http://www.jisc.ac.uk/whatwedo/programmes/digitisation/impactembedding/sphere.aspx
SPHERE Impact analysis summary
SPHERE interviews report
SPHERE Outreach use case
SPHERE research use case
SPHERE teaching use case
SPHERE web survey report
SPHERE web analysis
Abstract:
Copyright & Risk: Scoping the Wellcome Digital Library is a comprehensive case study which assesses the merits of the risk-managed approach to copyright clearance adopted by the Wellcome Library in the course of their pilot digitisation project Codebreakers: Makers of Modern Genetics (http://wellcomelibrary.org/collections/digital-collections/makers-of-modern-genetics/#).
Abstract:
The study focuses on five lower secondary school pupils’ daily use of their one-to-one computers, the overall aim being to investigate literacy in this form of computing. Theoretically, the study is rooted in the New Literacy tradition with an ecological perspective, in combination with socio-semiotic theory in a multimodal perspective. New Literacy in the ecological perspective focuses on literacy practices and place/space and on the links between them. Literacy is viewed as socially based, in specific situations and in recurring social practices. Socio-semiotic theory embodying the multimodal perspective is used for the text analysis. The methodology is known as socio-semiotic ethnography. The ethnographic methods encompass just over two years of fieldwork with participating observations of the five participants’ computing activities at home, at school and elsewhere. The participants, one boy and two girls from the Blue (Anemone) School and two girls from the White (Anemone) School, were chosen to reflect a broad spectrum in terms of sociocultural and socioeconomic background.
The study shows the existence of both broad and deep variation in the way digital literacy features in the participants’ one-to-one computing. These variations are associated with experience in relation to the home, the living environment, place, personal qualities and school. The more varied computer usage of the Blue School participants is connected with the interests they developed in their homes and living environments and in the computing practices undertaken in school. Their more varied usage of the computer is reflected in their broader digital literacy repertoires and their greater number and variety of digital literacy abilities. The Blue School participants’ text production is more multifaceted, covers a wider range of subjects and displays a broader palette of semiotic resources. It also combines more text types, and the texts are generally longer than those of the White School participants. The Blue School girls have developed a text culture that is close to that of the school. In their case, there is clear linkage between school-initiated and self-initiated computing activities, while other participants do not have the same opportunities to link and integrate self-initiated computing activities into the school context. It also becomes clear that the Blue School girls can relate and adapt their texts to different communicative practices and recipients. In addition, the study shows that the Blue School girls have some degree of scope in their school practice as a result of incorporating into it certain communicative practices that they have developed in non-school contexts.
Quite contrary to the hopes expressed that one-to-one computing would reduce digital inequality, it has increased between these participants. Whether the same or similar results apply in a larger perspective, on a more structural level, is a question that this study cannot answer; it can only draw attention to the need to investigate the matter. The study shows in a variety of ways that the White School participants do not have the same opportunity to develop their digital literacy as the Blue School participants. In an equivalence perspective, schools have a compensational task to perform. It is abundantly clear from the study that investing in one-to-one projects is not enough to combat digital inequality and achieve the digitisation goals established for school education. Alongside their investments in technology, schools need to develop a didactic approach that legitimises and compensates for the different circumstances of different pupils. The compensational role of schools in this connection is important not only for the present participants but also for the community at large, in that it can help to secure a cohesive, open and democratic society.
Abstract:
Work carried out at EBSI, Université de Montréal, under the supervision of Yvon Lemay as part of the course SCI6111 - Politique de gestion des archives, in the autumn of 2012.
Abstract:
Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed.
The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model’s finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model’s finite element mesh to reflect floodplain features such as hedges and trees having different frictional properties from their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods by fusing the LiDAR data with digital map data.
The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, then a high-level processing stage improves the network using domain knowledge. The approach adopted at the low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher-level processing includes a channel repair mechanism.
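As a rough illustration of the friction parameterisation step described in the first project (not the ESSC segmenter itself), the sketch below derives a vegetation-height raster from a first-return LiDAR surface and a bare-earth surface, then maps it to Manning's n values that a flood model could sample at each finite element node. The file names, height classes and friction values are assumptions made for the example only.

```python
# Minimal sketch, assuming pre-gridded LiDAR rasters: derive vegetation
# height as (first-return surface - bare-earth surface) and convert it to
# a Manning's n friction raster for floodplain parameterisation.
import numpy as np

dsm = np.load("lidar_first_return.npy")    # assumed: surface heights (m)
dtm = np.load("lidar_ground.npy")          # assumed: bare-earth heights (m)

veg_height = np.clip(dsm - dtm, 0.0, None)  # vegetation/obstacle height (m)

# Crude, hypothetical height-to-friction lookup:
# short grass, crops/scrub, then hedges and trees.
manning_n = np.select(
    [veg_height < 0.2, veg_height < 2.0],
    [0.03, 0.06],
    default=0.10,
)

# A 2-D flood flow model would then sample manning_n at each node of its
# finite element mesh to set the local friction factor.
```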