29 results for Musical pitch
in Queensland University of Technology - ePrints Archive
Abstract:
Pitch discrimination skills are important for general musicianship. The ability to name musical notes, or to vocally produce any named note without the benefit of a known reference, is called Absolute Pitch (AP) and is comparatively rare. Relative Pitch (RP) is the ability to name notes when a known reference is available. AP has historically been regarded as innate. This paper will examine the notion that pitch discrimination skill is based on knowledge constructed through a suite of experiences; that is, it is learnt. In particular, it will be argued that early experiences promote the development of AP. Second, it will be argued that AP and RP represent different types of knowledge, and that this knowledge emerges from different experiences. AP is a unique research phenomenon because it spans the fields of cognition and perception, in that it links verbal labels with physiological sensations, and because of its rarity. It may provide a vantage point for investigating the nature/nurture of musicianship; expertise; knowledge structure development; and the role of knowledge in perception. The study of AP may inform educational practice and curriculum design, both in music and across the curriculum. This paper will report an initial investigation into the similarities and differences between the musical experiences of AP possessors and the manifestation of their AP skill. Interview and questionnaire data will be used to develop and propose a preliminary model of AP development.
Abstract:
Most advanced musicians are able to identify and label a heard pitch if given an opportunity to compare it to a known reference note. This is called ‘relative pitch’ (RP). A much rarer skill is the ability to identify and label a heard pitch without the need for a reference. This is colloquially referred to as ‘perfect pitch’, but appears in the academic literature as ‘absolute pitch’ (AP). AP is considered by many as a remarkable skill. As people do not seem able to develop it intentionally, it is generally regarded as innate. It is often seen as a unitary skill, one for which a set of identifiable criteria can distinguish those who possess it from those who do not. However, few studies have interrogated these notions. The present study developed and applied an interactive computer program to map pitch-labelling responses to various tonal stimuli without a known reference tone available to participants. This approach enabled the identification of the elements of sound that impacted on AP. The pitch-labelling responses of 14 participants with AP were recorded and assessed for accuracy. Each participant’s response to the stimuli was unique. Their labelling accuracy varied across dimensions such as timbre, range and tonality. The diversity of performance between individuals appeared to reflect their personal musical experience histories.
Abstract:
When communicating emotion in music, composers and performers encode their expressive intentions through the control of basic musical features such as pitch, loudness, timbre, mode, and articulation. The extent to which emotion can be controlled through the systematic manipulation of these features has not been fully examined. In this paper we present CMERS, a Computational Music Emotion Rule System for the control of perceived musical emotion, which modifies features at the levels of both score and performance in real time. CMERS’s performance was evaluated in two rounds of perceptual testing. In Experiment I, 20 participants continuously rated the perceived emotion of 15 music samples generated by CMERS. Three musical works were used, each with five emotional variations (normal, happy, sad, angry, and tender). The emotion intended by CMERS was correctly identified 78% of the time, with significant shifts in valence and arousal also recorded, regardless of each work’s original emotion.
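The abstract does not specify CMERS's actual rules, so the following is only an illustrative sketch of the general idea of a rule system that maps a target emotion to adjustments of score-level (mode) and performance-level (tempo, loudness, articulation) features. All rule values and names here are assumptions, not those of CMERS itself.

```python
# Hypothetical emotion-to-feature rules, in the spirit of a rule system
# like CMERS. The specific values below are illustrative assumptions.
RULES = {
    "happy":  {"mode": "major", "tempo": 1.15, "loudness_db": +6, "articulation": "staccato"},
    "sad":    {"mode": "minor", "tempo": 0.80, "loudness_db": -6, "articulation": "legato"},
    "angry":  {"mode": "minor", "tempo": 1.20, "loudness_db": +9, "articulation": "staccato"},
    "tender": {"mode": "major", "tempo": 0.85, "loudness_db": -3, "articulation": "legato"},
}

def apply_rules(emotion, base_tempo_bpm):
    """Return the feature settings a rule system might output for one emotion."""
    r = RULES[emotion]
    return {
        "mode": r["mode"],                                # score-level modification
        "tempo_bpm": round(base_tempo_bpm * r["tempo"]),  # performance-level: tempo scaling
        "loudness_db": r["loudness_db"],                  # performance-level: dynamic offset
        "articulation": r["articulation"],
    }
```

In a real-time system such rules would be applied continuously to the note stream rather than once per piece; the point of the sketch is only the emotion-to-feature mapping.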
Abstract:
Drawing from ethnographic, empirical, and historical / cultural perspectives, we examine the extent to which visual aspects of music contribute to the communication that takes place between performers and their listeners. First, we introduce a framework for understanding how media and genres shape aural and visual experiences of music. Second, we present case studies of two performances, and describe the relation between visual and aural aspects of performance. Third, we report empirical evidence that visual aspects of performance reliably influence perceptions of musical structure (pitch related features) and affective interpretations of music. Finally, we trace new and old media trajectories of aural and visual dimensions of music, and highlight how our conceptions, perceptions and appreciation of music are intertwined with technological innovation and media deployment strategies.
Abstract:
Sleeper is an 18'00" musical work for live performer and laptop computer which exists as both a live performance work and a recorded work for audio CD. The work has been presented at a range of international performance events and survey exhibitions. These include the 2003 International Computer Music Conference (Singapore) where it was selected for CD publication, Variable Resistance (San Francisco Museum of Modern Art, USA), and i.audio, a survey of experimental sound at the Performance Space, Sydney. The source sound materials are drawn from field recordings made in acoustically resonant spaces in the Australian urban environment, amplified and acoustic instruments, radio signals, and sound synthesis procedures. The processing techniques blur the boundaries between, and exploit, the perceptual ambiguities of de-contextualised and processed sound. The work thus challenges the arbitrary distinctions between sound, noise and music and attempts to reveal the inherent musicality in so-called non-musical materials via digitally re-processed location audio. Thematically the work investigates Paul Virilio’s theory that technology ‘collapses space’ via the relationship of technology to speed. Technically this is explored through the design of a music composition process that draws upon spatially and temporally dispersed sound materials treated using digital audio processing technologies. One of the contributions to knowledge in this work is a demonstration of how disparate materials may be employed within a compositional process to produce music through the establishment of musically meaningful morphological, spectral and pitch relationships. This is achieved through the design of novel digital audio processing networks and a software performance interface. 
The work explores, tests and extends the music perception theories of ‘reduced listening’ (Schaeffer, 1967) and ‘surrogacy’ (Smalley, 1997) by demonstrating how, through specific audio processing techniques, sounds may be shifted away from ‘causal’ listening contexts towards abstract aesthetic listening contexts. In doing so, it demonstrates how various time- and frequency-domain processing techniques may be used to achieve this shift.
Abstract:
This paper discusses a method, Generation in Context, for interrogating theories of music analysis and music perception. Given an analytic theory, the method consists of creating a generative process that implements the theory in reverse. Instead of using the theory to create analyses from scores, the theory is used to generate scores from analyses. Subjective evaluation of the quality of the musical output provides a mechanism for testing the theory in a contextually robust fashion. The method is exploratory, meaning that in addition to testing extant theories it provides a general mechanism for generating new theoretical insights. We outline our initial explorations in the use of generative processes for music research, and we discuss how generative processes provide evidence as to the veracity of theories about how music is experienced, with insights into how these theories may be improved and, concurrently, provide new techniques for music creation. We conclude that Generation in Context will help reveal new perspectives on our understanding of music.
Abstract:
This paper explores a method of comparative analysis and classification of data through perceived design affordances. Included is a discussion of the musical potential of data forms that are derived through eco-structural analysis of musical features inherent in audio recordings of natural sounds. A system of classification of these forms is proposed based on their structural contours. The classifications include four primitive types: steady, iterative, unstable and impulse. The classification extends previous taxonomies used to describe the gestural morphology of sound. The methods presented are used to provide compositional support for eco-structuralism.
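The abstract names the four primitive contour types but not the classification procedure. As a rough illustration of how such a classifier might work on a structural contour (here, a normalised amplitude envelope), the sketch below uses simple heuristic statistics; all thresholds are assumptions, not the paper's method.

```python
# A hedged sketch of classifying a structural contour into the four
# primitive types named in the abstract. Thresholds are illustrative.
def classify_contour(env):
    """env: list of non-negative envelope samples, normalised to 0..1."""
    n = len(env)
    peak = max(env)
    peak_pos = env.index(peak) / max(n - 1, 1)   # where the peak falls, 0..1
    mean = sum(env) / n
    var = sum((x - mean) ** 2 for x in env) / n
    # count direction changes as a crude measure of repeated fluctuation
    changes = sum(1 for i in range(1, n - 1)
                  if (env[i] - env[i - 1]) * (env[i + 1] - env[i]) < 0)
    if peak_pos < 0.15 and mean < 0.3 * peak:
        return "impulse"      # energy concentrated at the onset
    if var < 0.01:
        return "steady"       # little variation over time
    if changes >= n // 4:
        return "iterative"    # regular repeated fluctuation
    return "unstable"         # irregular movement not fitting the above
```

A real eco-structural analysis would operate on extracted audio features over time; this sketch only shows how contour shape could map to the four categories.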
Abstract:
The attention paid by the British music press in 1976 to the release of The Saints’ first single “I’m Stranded” was the trigger for a commercial and academic interest in the Brisbane music scene which still has significant energy. In 2007, Brisbane was identified by Billboard Magazine as a “hot spot” of independent music. A place to watch. Someone turned a torch on this town, had a quick look, moved on. But this town has always had music in it. Some of it made by me. So, I’m taking this connection of mine, and working it into a contextual historical analysis of the creative lives of Brisbane musicians. I will be interviewing a number of Brisbane musicians. These interviews have begun, and will continue to be conducted in 2011/2012. I will ask questions and pursue memories that will encompass family, teenage years, siblings, the suburbs, the city, venues, television and radio; but then widen to welcome the river, the hills and mountains, foes and friends, beliefs and death. The wider research will be a contextual historical analysis of the creative lives of Brisbane musicians. It will explore the changing nature of their work practices over time and will consider, among other factors, the notion of ‘place’ in both their creative practice and their creative output. It will also examine how the presence of the practitioners and their work is seen to contribute to the cultural life of the city and the creative lives of its citizens into the future. This paper offers an analysis of this last notion: how does this city see its music-makers? In addition to the interviews, over 300 Brisbane musicians were surveyed in September 2009 as part of a QUT-initiated recorded music event (BIGJAM). Their responses will inform the production of this paper.
Abstract:
Drawn from a larger mixed methods study, this case study provides an account of aspects of the music education programme that occurred with one teacher and a kindergarten class of children aged three and four years. Contrary to transmission approaches that are often used in Hong Kong, the case depicts how musical creativity was encouraged by the teacher in response to children’s participation during the time for musical free play. It shows how the teacher scaffolded the attempts of George, a child aged 3.6 years to use musical notation. The findings are instructive for kindergarten teachers in Hong Kong and suggest ways in which teachers might begin to incorporate more creative approaches to musical education. They are also applicable to other kindergarten settings where transmission approaches tend to dominate and teachers want to encourage children’s musical creativity.
Abstract:
Live coding performances provide a context with particular demands and limitations for music making. In this paper we discuss how, as the live coding duo aa-cell, we have responded to these challenges, and what this experience has revealed about the computational representation of music and about approaches to interactive computer music performance. In particular, we have identified several effective and efficient processes that underpin our practice, including probability, linearity, periodicity, set theory, and recursion, and we describe how these are applied and combined to build sophisticated musical structures. In addition, we outline aspects of our performance practice that respond to the improvisational, collaborative and communicative requirements of musical live coding.
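As a rough illustration of how two of the processes named above, probability and periodicity, can combine into musical structure, the sketch below generates a note stream with a periodic metric accent and a weighted-probability pitch choice. The scale, weights, and event format are illustrative assumptions, not drawn from aa-cell's actual practice or tools.

```python
# A minimal sketch combining probability (weighted pitch choice) and
# periodicity (a recurring downbeat accent) to build a note stream.
# All musical parameters here are illustrative assumptions.
import random

SCALE = [60, 62, 64, 67, 69]   # C major pentatonic as MIDI note numbers (assumption)
WEIGHTS = [4, 2, 3, 2, 1]      # probability weights favouring the tonic (assumption)

def note_stream(bars, beats_per_bar=4, seed=1):
    """Return (tick, pitch, velocity) events for the given number of bars."""
    rng = random.Random(seed)  # seeded for repeatable output
    events = []
    for bar in range(bars):
        for beat in range(beats_per_bar):
            pitch = rng.choices(SCALE, weights=WEIGHTS)[0]  # probability
            velocity = 100 if beat == 0 else 70             # periodicity: accent beat 1
            events.append((bar * beats_per_bar + beat, pitch, velocity))
    return events

stream = note_stream(2)
```

In a live-coding setting such a generator would be redefined on the fly while it plays; the sketch only shows the structural principle, not the performance environment.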
Abstract:
To date, the majority of films that utilise or feature hip hop music and culture have either been in the realm of documentary, or in ‘show musicals’ (where the film musical’s device of characters bursting into song is justified by the narrative of a pursuit of a career in the entertainment industry). Thus, most films that feature hip hop expression have in some way been tied to the subject of hip hop. A research interest and enthusiasm was developed for utilising hip hop expression in film in a new way, which would extend the narrative possibilities of hip hop film to wider topics and themes. The creation of the thesis film Out of My Cloud, and the writing of this accompanying exegesis, investigates the potential for the use of hip hop expression in an ‘integrated musical’ film (where characters break into song without conceit or explanation). Context and rationale for Out of My Cloud (an Australian hip hop ‘integrated musical’ film) are provided in this writing. It is argued that hip hop is particularly suitable for use in a modern narrative film, and particularly in an ‘integrated musical’ film, due to its current vibrancy and popularity, the focus of rap (the vocal element of hip hop) on lyrical message and meaning, and rap’s use as an everyday, non-performative method of communication. It is also argued that Australian hip hop deserves greater representation in film and literature due to its current popularity and its nature as a unique and distinct form of hip hop. To date, representation of Australian hip hop in film and television has almost solely been restricted to the documentary form.
Out of My Cloud borrows from elements of social realist cinema such as: contrasts with mainstream cinema, an exploration/recognition of the relationship between environment and development of character, use of non-actors, location shooting, a political intent of the filmmaker, sympathy for an underclass, representation of underrepresented character types and topics, and a loose narrative structure that does not offer solid resolution. A case is made that it may be appropriate to marry elements of social realist film with hip hop expression due to common characteristics, such as: representation of marginalised or underrepresented groups and issues in society, political objectives of the artist/s, and sympathy for an underclass. In developing and producing Out of My Cloud, a specific method of working with, and filming, actor improvisation was developed. This method was informed by improvisation and associated camera techniques of filmmakers such as Charlie Chaplin, Mike Leigh, Khoa Do, Dogme 95 filmmakers, and Lars von Trier (post-Dogme 95). A review of the techniques used by these filmmakers is provided in this writing, as well as their impact on my approach. The method utilised in Out of My Cloud was most influenced by Khoa Do’s technique of guiding actors to improvise fairly loosely, but with a predetermined endpoint in mind. A variation of this technique was developed for use in Out of My Cloud, which involved filming with two cameras to allow edits from multiple angles. Specific processes for creating Out of My Cloud are described and explained in this writing. Particular attention is given to the approaches regarding the story elements and the music elements.
Various significant aspects of the process are referred to, including the filming and recording of live musical performances, the recording of ‘freestyle’ performances (lyrics composed and performed spontaneously) and the creation of a scored musical scene involving a vocal performance without regular timing or rhythm. The documentation of these processes serves to make the successful elements of this film transferable and replicable for other practitioners in the field, whilst flagging missteps so that fellow practitioners can avoid them in future projects. While Out of My Cloud is not without its shortcomings as a short film work (for example in the areas of story and camerawork), it provides a significant contribution to the field as a working example of how hip hop may be utilised in an ‘integrated musical’ film, as well as being a rare example of a narrative film that features Australian hip hop. This film and the accompanying exegesis provide insights that contribute to an understanding of techniques, theories and knowledge in the field of filmmaking practice.
Abstract:
Gesture in performance is widely acknowledged in the literature as an important element in making a performance expressive and meaningful. The body has been shown to play an important role in the production and perception of vocal performance in particular. This paper is interested in the role of gesture in creative works that seek to extend vocal performance via technology. A creative work for vocal performer, laptop computer and a Human-Computer Interface called the eMic (Extended Microphone Stand Interface controller) is presented as a case study to explore the relationships between movement, voice production, and musical expression. The eMic is an interface for live vocal performance that allows the singer’s gestures and interactions with a sensor-based microphone stand to be captured and mapped to musical parameters. The creative work discussed in this paper presents a new compositional approach for the eMic: working with movement as a starting point for the composition, and thus using choreographed gesture as the basis for musical structures. By foregrounding the body and movement in the creative process, the aim is to create a more visually engaging performance in which the performer is able to use the body more effectively to express their musical objectives.
Abstract:
The use of Cellular Automata (CA) for musical purposes has a rich history. In general, the mapping of CA states to note-level music representations has focused on pitch mapping and downplayed rhythm. This paper reports experiments in the application of one-dimensional cellular automata to the generation and evolution of rhythmic patterns. A selection of CA tendencies is identified that can be used as compositional tools to control the rhythmic coherence of monophonic passages and the polyphonic texture of musical works in broad-brush, rather than precisely deterministic, ways. This will provide the composer and researcher with a clearer understanding of the useful application of CAs for generative music.
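The abstract does not specify which CA rules or state-to-rhythm mappings were used, so the sketch below only illustrates the general technique: a one-dimensional elementary CA evolves a row of binary cells, and each generation is read as one bar in which a live cell marks a note onset and a dead cell a rest. Rule 90 and the onset mapping are illustrative assumptions.

```python
# A minimal sketch of generating rhythmic patterns from a one-dimensional
# elementary cellular automaton. Rule choice and mapping are assumptions.

def step(cells, rule=90):
    """Advance a 1-D binary CA one generation under an elementary rule (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right  # neighbourhood as a 3-bit index
        out.append((rule >> idx) & 1)              # look up the new state in the rule byte
    return out

def ca_rhythm(seed, generations, rule=90):
    """Read each generation as one bar: cell value 1 = note onset, 0 = rest."""
    bars, cells = [], list(seed)
    for _ in range(generations):
        bars.append(cells)
        cells = step(cells, rule)
    return bars

# A single seeded onset in an eight-step bar, evolved over four bars.
pattern = ca_rhythm([0, 0, 0, 1, 0, 0, 0, 0], generations=4)
```

Different rule numbers give different rhythmic tendencies (Rule 90 spreads onsets outward from the seed; other rules stabilise, oscillate, or behave chaotically), which is the kind of broad-brush compositional control the paper describes.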