868 results for Recording and registration
Abstract:
Preserving the cultural heritage of the performing arts raises difficult and sensitive issues, as each performance is unique by nature and the interplay between performers and audience cannot easily be recorded. In this paper, we report on an experimental research project to preserve another aspect of the performing arts—the history of their rehearsals. We have specifically designed non-intrusive video recording and on-site documentation techniques to make this process transparent to the creative crew, and have developed a complete workflow to publish the recorded video data and their corresponding metadata online as Open Data, using state-of-the-art audio and video processing to maximize non-linear navigation and hypervideo linking. The resulting open archive is made publicly available to researchers and amateurs alike and offers a unique account of the inner workings of the worlds of theater and opera.
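The publication workflow is only summarised above; as a purely illustrative sketch, one rehearsal session's video and metadata might be exposed as an Open Data record along the following lines (the field names, identifier scheme and URLs are assumptions for illustration, not the project's actual schema):

    import json

    # Hypothetical Open Data record for one recorded rehearsal session.
    # Timed, annotated segments with links to related segments are what
    # make non-linear navigation and hypervideo linking possible.
    record = {
        "rehearsal_id": "opera-2014-03-12-act1",  # assumed identifier scheme
        "video_url": "https://example.org/video/opera-2014-03-12-act1.mp4",
        "license": "CC-BY-4.0",  # an open licence, as Open Data publication requires
        "segments": [
            {
                "start_sec": 0.0,
                "end_sec": 312.5,
                "annotation": "blocking notes for scene 1",
                "links": ["opera-2014-03-10-act1#scene1"],  # hypervideo link to an earlier take
            },
        ],
    }

    print(json.dumps(record, indent=2))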
Abstract:
Pseudoneglect is the tendency of healthy individuals to show a slight but consistent bias in favour of stimuli appearing in the left visual field. The bias is often measured using variants of the line bisection task. An accurate model of the functional architecture of the visuospatial attention system must account for this widely observed phenomenon, as well as for modulation of the direction and magnitude of the bias within individuals by a variety of factors relating to the state of the participant and/or stimulus characteristics. To date, the neural correlates of pseudoneglect remain relatively unmapped. In the current thesis, I employed a combination of psychophysical measurements, electroencephalography (EEG) recording and transcranial direct current stimulation (tDCS) in an attempt to probe the neural generator(s) of pseudoneglect. In particular, I wished to utilise and investigate some of the factors known to modulate the bias (including age, time-on-task and the length of the to-be-bisected line) in order to identify neural processes and activity that are necessary and sufficient for the lateralised bias to arise. Across four experiments utilising a computerised version of a perceptual line bisection task, pseudoneglect was consistently observed at baseline in healthy young participants. However, decreased line length (experiments 1, 2 and 3), time-on-task (experiment 1) and healthy aging (experiment 3) were all found to modulate the bias. Specifically, all three modulations induced a rightward shift in subjective midpoint estimation. Additionally, the line length and time-on-task effects (experiment 1) and the line length and aging effects (experiment 3) were found to have additive relationships. In experiment 2, EEG measurements revealed the line length effect to be reflected in neural activity 100–200 ms post-stimulus onset over source-estimated posterior regions of the right hemisphere (RH; temporo-parietal junction, TPJ). Long lines induced a hemispheric asymmetry in processing (in favour of the RH) during this period that was absent for short lines. In experiment 4, bi-parietal tDCS (left anodal/right cathodal) induced a polarity-specific rightward shift in bias, highlighting the crucial role played by parietal cortex in the genesis of pseudoneglect; the opposite polarity (left cathodal/right anodal) did not induce a change in bias. The combined results from the four experiments of the current thesis provide converging evidence as to the crucial role played by the RH in the genesis of pseudoneglect and in the processing of visual input more generally. The reduction in pseudoneglect with decreased line length, increased time-on-task and healthy aging may be explained by a reduction in RH function, and hence in its contribution to task processing, induced by each of these modulations. I discuss how behavioural and neuroimaging studies of pseudoneglect (and its various modulators) can provide empirical data upon which accurate formal models of visuospatial attention networks may be based and further tested.
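For readers unfamiliar with how the bias is quantified, a minimal sketch of the standard directional error measure in a line bisection task follows (the sign convention and percentage normalisation are conventional, but the function itself is illustrative and not taken from the thesis):

    def bisection_bias(marked_x: float, left_x: float, right_x: float) -> float:
        """Directional bisection error as a percentage of line length.

        Negative values indicate a leftward (pseudoneglect) bias,
        positive values a rightward bias.
        """
        true_midpoint = (left_x + right_x) / 2.0
        line_length = right_x - left_x
        return 100.0 * (marked_x - true_midpoint) / line_length

    # Example: a mark 4 px left of centre on a 400 px line gives a -1% (leftward) bias.
    print(bisection_bias(marked_x=196.0, left_x=0.0, right_x=400.0))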
Abstract:
Following its inception in 1994, the certification of European Registered Toxicologists (ERT) by EUROTOX has been recognized as ensuring professional competence as well as scientific integrity and credibility. Criteria and procedures for registration are contained in the ERT "Guidelines for Registration 2012". The register of ERT currently has over 1900 members. In order to continue the harmonisation of requirements and processes between national registering bodies as a prerequisite for official recognition of the ERT title as a standard, and to take account of recent developments in toxicology, an update of the ERT Guidelines has been prepared in a series of workshops by the EUROTOX subcommittees for education and registration, in consultation with representatives of national toxicology societies and registers. The update includes details of topics and learning outcomes for theoretical training, and how these can be assessed. The importance of continuing professional development as the cornerstone of re-registration is emphasised. To help with the process of harmonisation, it is necessary to collate and share best practices of registration conditions and procedures across Europe. Importantly, this information can also be used to audit compliance with the EUROTOX standards. As recognition of professionals in toxicology, including specialist qualifications, is becoming more important than ever, we believe that this can best be achieved based on the steps for harmonisation outlined here together with the proposed new Guidelines.
Abstract:
The Exhibitium Project, funded by the BBVA Foundation, is a data-driven project developed by an international consortium of research groups. One of its main objectives is to build a prototype that will serve as a base for a platform for the recording and exploitation of data about art exhibitions available on the Internet. This proposal therefore aims to set out the methods, procedures and decision-making processes that have governed the technological implementation of the prototype, especially with regard to the reuse of WordPress (WP) as a development framework.
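The abstract does not detail how WordPress is reused; as a hedged sketch, exhibition records stored as a WP custom post type could be read through the standard WordPress REST API (the site URL and the exhibition post type name are assumptions; the prototype's actual data model may differ):

    import requests

    # Hypothetical endpoint: a custom post type "exhibition" exposed through the
    # standard WordPress REST API (/wp-json/wp/v2/<type>).
    BASE_URL = "https://example.org/wp-json/wp/v2"

    def fetch_exhibitions(page: int = 1, per_page: int = 20) -> list:
        """Fetch one page of exhibition records from the WP REST API."""
        response = requests.get(
            f"{BASE_URL}/exhibition",
            params={"page": page, "per_page": per_page},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()

    for exhibition in fetch_exhibitions():
        print(exhibition["id"], exhibition["title"]["rendered"])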
Abstract:
Research Background - Young people with negative experiences of mainstream education often display low levels of traditional academic achievement. These young people tend to display considerable cultural and social resources developed through their repeated experiences of adversity. Education research has a duty to provide these young people with opportunities to showcase, assess and translate their social and cultural resources into symbolic forms of capital. This creative work addresses the following research question: how can educators develop disengaged teenagers' social and cultural capital through live music performances? Research Contribution - These live music performances afford the young participants opportunities to display their artistic, technical, social and cultural resources through a popular cultural format. In doing so they require education institutions to provide venues that demonstrate the skills these young people acquire through flexible learning environments. The new knowledge derived from this research focuses on the academic and self-confidence benefits for disengaged young people of using festival performances as authentic learning activities. Research Significance - This research is significant because it aims to maximise the number of tangible outcomes related to a school-based arts project. The young participants gained technical, artistic, social and commercial skills during this project. The performance led to further recording and to opportunities to perform at other youth festivals in SE QLD. Individual performances were distributed and downloaded under Creative Commons licences at the Australian Creative Resource Archive. The project also contributed to the participants' certified qualifications and acted as pilot research data for two competitively funded ARC grants (DP0209421 & LP0883643).
Abstract:
Digital Songlines (DSL) is an Australasian CRC for Interaction Design (ACID) project that is developing protocols, methodologies and toolkits to facilitate the collection, education and sharing of Indigenous cultural heritage knowledge. This paper outlines the goals achieved over the last three years in the development of the Digital Songlines game engine (DSE) toolkit that is used for Australian Indigenous storytelling. The project explores the sharing of Indigenous Australian Aboriginal storytelling in a sensitive manner using a game engine. The use of game engines in the field of Cultural Heritage (CH) is expanding: they are an important tool for the recording and re-presentation of historically, culturally, and sociologically significant places, infrastructure, and artefacts, as well as the stories that are associated with them. The DSL implementation of a game engine to share storytelling provides an educational interface. Where the DSL implementation of a game engine in a CH application differs from others is in the nature of the game environment itself: it is modelled on 'country' (the 'place' of their heritage, which is so important to the clients' collective identity), with authentic fauna and flora providing a highly contextualised setting for the stories to be told. This paper provides an overview of the development of the DSL game engine.
Abstract:
An interpretative methodology for understanding meaning in cinema since the 1950s, auteur analysis is an approach to film studies in which an individual, usually the director, is studied as the author of her or his films. The principal argument of this thesis is that proponents of auteurism have privileged examination of the visual components in a film-maker’s body of work, neglecting the potentially significant role played by sound. The thesis seeks to address this problematic imbalance by interrogating the creative use of sound in the films written and directed by Rolf de Heer, asking the question, “Does his use of sound make Rolf de Heer an aural auteur?” In so far as the term ‘aural’ encompasses everything in the film that is heard by the audience, the analysis seeks to discover if de Heer has, as Peter Wollen suggests of the auteur and her or his directing of the visual components (1968, 1972 and 1998), unconsciously left a detectable aural signature on his films. The thesis delivers an innovative outcome by demonstrating that auteur analysis that goes beyond the mise-en-scène (i.e. visuals) is productive and worthwhile as an interpretive response to film. De Heer’s use of the aural point of view and binaural sound recording, his interest in providing a ‘voice’ for marginalised people, his self-penned song lyrics, his close and early collaboration with composer Graham Tardif and sound designer Jim Currie, his ‘hands-on’ approach to sound recording and sound editing and his predilection for making films about sound are all shown to be examples of de Heer’s aural auteurism. As well as the three published (or accepted for publication) interviews with de Heer, Tardif and Currie, the dissertation consists of seven papers refereed and published (or accepted for publication) in journals and international conference proceedings, a literature review and a unifying essay. The papers presented are close textual analyses of de Heer’s films which, when considered as a whole, support the thesis’ overall argument and serve as a comprehensive auteur analysis, the first such sustained study of his work, and the first with an emphasis on the aural.
Abstract:
This thesis presents an original approach to parametric speech coding at rates below 1 kbit/sec, primarily for speech storage applications. Essential processes considered in this research encompass efficient characterization of the evolving configuration of the vocal tract to follow phonemic features with high fidelity, representation of speech excitation using minimal parameters with minor degradation in the naturalness of synthesized speech, and finally, quantization of the resulting parameters at the nominated rates. For encoding speech spectral features, a new method relying on Temporal Decomposition (TD) is developed which efficiently compresses spectral information through interpolation between the most steady points over the time trajectories of spectral parameters, using a new basis function. The compression ratio provided by the method is independent of the updating rate of the feature vectors, and hence allows high resolution in tracking significant temporal variations of speech formants with no effect on the spectral data rate. Accordingly, regardless of the quantization technique employed, the method yields a high compression ratio without sacrificing speech intelligibility. Several new techniques for improving the performance of the interpolation of spectral parameters through phonetically-based analysis are proposed and implemented in this research, comprising event-approximated TD, near-optimal shaping of event-approximating functions, efficient speech parametrization for TD on the basis of an extensive investigation originally reported in this thesis, and a hierarchical error minimization algorithm for decomposition of feature parameters which significantly reduces the complexity of the interpolation process. Speech excitation in this work is characterized based on a novel Multi-Band Excitation paradigm which accurately determines the harmonic structure in the LPC (linear predictive coding) residual spectra, within individual bands, using the concept of Instantaneous Frequency (IF) estimation in the frequency domain. The model yields an effective two-band approximation to excitation and computes pitch and voicing with high accuracy as well. New methods for interpolative coding of pitch and gain contours are also developed in this thesis. For pitch, relying on the correlation between phonetic evolution and pitch variations during voiced speech segments, TD is employed to interpolate the pitch contour between critical points introduced by event centroids. This compresses the pitch contour by a ratio of about 1/10 with negligible error. To approximate the gain contour, a set of uniformly-distributed Gaussian event-like functions is used, which reduces the amount of gain information to about 1/6 with acceptable accuracy. The thesis also addresses a new quantization method applied to spectral features on the basis of the statistical properties and spectral sensitivity of the spectral parameters extracted from TD-based analysis. The experimental results show that good quality speech, comparable to that of conventional coders at rates over 2 kbit/sec, can be achieved at rates of 650-990 bits/sec.
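Temporal Decomposition itself is not spelled out in the abstract; the sketch below illustrates only the general idea of approximating a spectral parameter trajectory as a weighted sum of overlapping, temporally localised event functions (the raised-cosine event shape and the fixed event centroids are assumptions for illustration, not the thesis's new basis function):

    import numpy as np

    def event_function(t: np.ndarray, centre: float, width: float) -> np.ndarray:
        """A raised-cosine 'event' localised around an event centroid (illustrative shape)."""
        phase = np.clip((t - centre) / width, -1.0, 1.0)
        return 0.5 * (1.0 + np.cos(np.pi * phase))

    def td_approximate(trajectory: np.ndarray, centres: list, width: float) -> np.ndarray:
        """Least-squares fit of a parameter trajectory as a sum of event functions."""
        t = np.arange(len(trajectory), dtype=float)
        basis = np.stack([event_function(t, c, width) for c in centres], axis=1)
        weights, *_ = np.linalg.lstsq(basis, trajectory, rcond=None)
        return basis @ weights

    # Example: reconstruct a slowly varying trajectory from four events.
    t = np.arange(100)
    trajectory = np.sin(2 * np.pi * t / 100.0)
    approximation = td_approximate(trajectory, centres=[10.0, 37.0, 63.0, 90.0], width=30.0)
    print(f"max abs error: {np.abs(trajectory - approximation).max():.3f}")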
Abstract:
Embedded generalized markup, as applied by digital humanists to the recording and studying of our textual cultural heritage, suffers from a number of serious technical drawbacks. As a result of its evolution from early printer control languages, generalized markup can only express a document’s ‘logical’ structure via a repertoire of permissible printed format structures. In addition to the well-researched overlap problem, the embedding of markup codes into texts that never had them when written leads to a number of further difficulties: the inclusion of potentially obsolescent technical and subjective information into texts that are supposed to be archivable for the long term, the manual encoding of information that could be better computed automatically, and the obscuring of the text by highly complex technical data. Many of these problems can be alleviated by asserting a separation between the versions of which many cultural heritage texts are composed, and their content. In this way the complex inter-connections between versions can be handled automatically, leaving only simple markup for individual versions to be handled by the user.
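As a toy illustration of the separation argued for above, two versions of the same passage can be stored as plain text, with the inter-version structure computed automatically rather than hand-encoded as embedded markup (the word-level alignment below is a deliberately naive use of Python's difflib, not the authors' actual system):

    from difflib import SequenceMatcher

    # Two plain-text versions of the same sentence, free of embedded markup.
    version_a = "The quick brown fox jumps over the lazy dog".split()
    version_b = "The quick red fox leaps over the lazy dog".split()

    # The connections between the versions are computed, not manually encoded.
    matcher = SequenceMatcher(a=version_a, b=version_b)
    for tag, a1, a2, b1, b2 in matcher.get_opcodes():
        if tag == "equal":
            print("shared :", " ".join(version_a[a1:a2]))
        else:
            print("A only :", " ".join(version_a[a1:a2]) or "-")
            print("B only :", " ".join(version_b[b1:b2]) or "-")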
Abstract:
This thesis explores the business environment for self-publishing musicians at the end of the 20th century and the start of the 21st century from theoretical and empirical standpoints. The exploration begins by asking three research questions: what are the factors affecting the sustainability of an Independent music business; how many of those factors can be directly influenced by an Independent musician in the day-to-day operations of their musical enterprise; and how can those factors be best manipulated to maximise the benefit generated from digital music assets? It answers these questions by considering the nature of value in the music business in light of theories of political economy, then quantitative and qualitative examinations of the nature of participation in the music business, and then auto-ethnographic approaches to the application of two technologically enabled tools available to Independent musicians. By analysing the results of five different examinations of the topic, it answers each research question with reference to four sets of recurring issues that affect the operations of a 21st century music business: the musicians' personal characteristics; their ability to address their business's informational needs; their ability to manage the relationships upon which their business depends; and their ability to resolve the remaining technological problems that confront them. It discusses ways in which Independent self-publishing musicians can and cannot deal with these four issues on a day-to-day basis and highlights aspects for which technological solutions do not exist, as well as ways in which technology is not as effective as has been claimed. It then presents a self-critique and proposes some directions for further study before concluding by suggesting some common features of 21st century Independent music businesses. This thesis makes three contributions to knowledge. First, it provides a new understanding of the sources of musical value, shows how this explains changes in the music industries over the past 30 years, and provides a framework for predicting future developments in those industries. Second, it shows how the technological discontinuity that has occurred around the start of the 21st century has and has not affected the production and distribution of digital cultural artefacts, and thus the attitudes, approaches and business prospects of Independent musicians. Third, it argues for new understandings of two methods by which self-publishing musicians can grow a business using production methods that are only beginning to be more broadly understood: home studio recording and fan-sourced production. Developed from the perspective of working musicians themselves, this thesis identifies four sets of issues that determine the probable success of musicians' efforts to adopt new technologies to capture the value of the musicians' creativity and thereby foster growth that will sustain an Independent music business in the 21st century.
Abstract:
A broad range of positions is articulated in the academic literature around the relationship between recordings and live performance. Auslander (2008) argues that "live performance ceased long ago to be the primary experience of popular music, with the result that most live performances of popular music now seek to replicate the music on the recording". Elliott (1995) suggests that "hit songs are often conceived and produced as unambiguous and meticulously recorded performances that their originators often duplicate exactly in live performances". Wurtzler (1992) argues that "as socially and historically produced, the categories of the live and the recorded are defined in a mutually exclusive relationship, in that the notion of the live is premised on the absence of recording and the defining fact of the recorded is the absence of the live". Yet many artists perform in ways that fundamentally challenge such positions. Whilst it is common practice for musicians across many musical genres to compose and construct their musical works in the studio such that the recording is, in Auslander's words, the 'original performance', the live version is not simply an attempt to replicate the recorded version. Indeed, in some cases such replication is impossible. There are well known historical examples: Queen, for example, never performed the a cappella sections of Bohemian Rhapsody because they were too complex to perform live, and a 1966 recording of the Beach Boys' studio creation Good Vibrations shows them struggling through the song prior to its release. This paper argues that as technology develops, the lines between the recording studio and live performance shift and become more blurred, and new models for performance emerge. In a 2010 live performance in New York, Grammy Award-winning artist Imogen Heap undertakes a live, improvised construction of a piece as a performative act. She invites the audience to choose the key for the track and proceeds to layer up the various parts in front of the audience as a live performance act. Her recording process is thus revealed on stage in real time: she performs a process that would once have been confined to the recording studio. So how do artists bring studio production processes into the live context? What aspects of studio production are now performable, and what consistent models can be identified amongst the various approaches now seen? This paper will present an overview of approaches to performative realisations of studio-produced tracks and will illuminate some emerging relationships between recorded music and performance across a range of contexts.
Abstract:
Previous research has indicated that road crashes are the most common form of work-related fatality (Haworth et al., 2000). Historically, industry has often taken a "silver bullet" approach, developing and implementing a single countermeasure to address all of its work-related road safety issues, despite legislative requirements to discharge obligations through minimising risk and enhancing safety. This paper describes the results and implications of a series of work-related road safety audits that were undertaken across five organisations to determine deficiencies in each organisation's safe driving management and practice. Researchers conducted a series of structured interviews, reviewed documentation relating to work-related driving, and analysed vehicle-related crash and incident records to determine each organisation's current situation in the management of work-related road safety and driver behaviour. A number of consistent themes and issues were identified across the organisations relating to managing driver behaviour, organisational policies, incident recording and reporting, communication and education, and the formalisation of key work-related road safety strategies. Although organisations are required to undertake risk reduction strategies for all work-related driving, the results of the research suggest that many organisations fail to systematically manage driver behaviour and mitigate work-related road safety risk. Future improvements in work-related road safety will require organisations to firstly acknowledge the high risk associated with driving for work and secondly to adopt comprehensive risk mitigation strategies in a similar manner to managing other workplace hazards.