984 results for Computer music
Abstract:
The need to digitise music scores has led to the development of Optical Music Recognition (OMR) tools. Unfortunately, the performance of these systems is still far from acceptable. This situation forces the user to be involved in the process, since the mistakes made during recognition must be corrected. However, this correction is performed over the output of the system, so these interventions are not exploited to improve recognition performance. This work sets out a scenario in which human and machine interact to complete the OMR task accurately, with the least possible effort for the user.
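The interaction loop the abstract describes, in which each correction feeds the recogniser rather than only patching its output, can be sketched as follows. This is a minimal illustration under my own assumptions (a toy lookup-table "recogniser"), not the paper's system:

```python
# Minimal sketch of human-in-the-loop OMR: user corrections are stored
# and reused, so repeated symbols stop needing correction. A real system
# would retrain a model; here a lookup table stands in for it.

class InteractiveOMR:
    def __init__(self):
        self.memory = {}                           # corrected glyph -> symbol

    def predict(self, glyph):
        # Deliberately weak default guess; a real system would run a model.
        return self.memory.get(glyph, "unknown")

    def transcribe(self, glyphs, ask_user):
        score = []
        for glyph in glyphs:
            hypothesis = self.predict(glyph)
            symbol = ask_user(glyph, hypothesis)   # user confirms or fixes
            if symbol != hypothesis:
                self.memory[glyph] = symbol        # exploit the correction
            score.append(symbol)
        return score

# Usage: the second occurrence of glyph "g2" no longer needs correcting.
omr = InteractiveOMR()
truth = {"g1": "quarter note C4", "g2": "half note E4"}
print(omr.transcribe(["g1", "g2", "g2"], lambda g, h: truth[g]))
```

The point of the loop is the economy the abstract aims for: every intervention reduces the expected cost of the interventions still to come.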
Abstract:
An issue on generative music in Contemporary Music Review allows space to explore many of these controversies, the rich algorithmic scene in contemporary practice, and the diverse origins and manifestations of such a culture. A roster of interesting exponents from both academic and arts practice backgrounds is involved, matching the broad spectrum of current work. Contributed articles range from generative algorithms in live systems (from live coding to interactive music systems to computer games), through algorithmic modelling of longer-term form and evolutionary algorithms, to interfaces between modalities and media in algorithmic choreography. A retrospective on the intensive experimentation into algorithmic music and sound synthesis at the Institute of Sonology in the 1960s and 70s creates a complementary strand, as does an open paper on the issues raised by open-source, as opposed to proprietary, software and operating systems, with consequences for the creation and archiving of algorithmic work.
Abstract:
This paper examines three functions of music technology in the study of music: first as a tool, second as an instrument and, finally, as a medium for thinking. As our societies become increasingly embroiled in digital media for representation and communication, our philosophies of music education need to adapt to integrate these developments while maintaining the essence of music. The foundation of music technology in the 1990s is the digital representation of sound. It is this fundamental shift to a new medium for representing sound that carries with it the challenge to address digital technology and its multiple effects on music creation and presentation. In this paper I suggest that music institutions should take a broad and integrated approach to the place of music technology in their courses, based on an understanding of the digital representation of sound and the three functions it can serve. Educators should reconsider digital technologies such as synthesizers and computers as musical instruments and cognitive amplifiers, not simply as efficient tools.
Abstract:
Generative music systems can be performed by manipulating the values of their algorithmic parameters, and their semi-autonomous nature provides an opportunity for coordinated interaction amongst a network of systems, a practice we call Network Jamming. This paper outlines the characteristics of this networked performance practice and discusses the types of mediated musical relationships and ensemble configurations that can arise. We have developed and tested the jam2jam network jamming software over recent years. We describe this system, draw from our experiences with it, and use it to illustrate some characteristics of Network Jamming.
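Network Jamming of the kind described relies on each node sharing parameter changes rather than audio: every system re-renders locally from a shared algorithmic state. A minimal sketch of that idea, with all class and parameter names hypothetical (the abstract does not specify jam2jam's actual protocol):

```python
# Minimal sketch of network jamming: performers broadcast gestures as
# parameter changes, and every node applies them to its own copy of the
# generative engine, so the ensemble stays coordinated without streaming audio.

class GenerativeEngine:
    def __init__(self):
        self.params = {"tempo": 120, "density": 0.5, "key": "C"}

    def set_param(self, name, value):
        self.params[name] = value        # the engine re-renders from these

class JamNode:
    def __init__(self, name, bus):
        self.name, self.bus, self.engine = name, bus, GenerativeEngine()
        bus.append(self)

    def play(self, param, value):
        for peer in self.bus:            # broadcast the gesture, not the audio
            peer.engine.set_param(param, value)

bus = []
alice, bob = JamNode("alice", bus), JamNode("bob", bus)
alice.play("density", 0.9)               # both engines now agree
assert bob.engine.params["density"] == 0.9
```

Sending only small parameter messages is what makes the semi-autonomous generative engines suitable for networked play: the bandwidth cost of a musical gesture is a few bytes, not an audio stream.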
Abstract:
Using information and communication technology devices in public urban places can help to create a personalised space. Looking at a mobile phone screen or listening to music on an MP3 player is a common practice for avoiding direct contact with others, e.g. whilst using public transport. However, such devices can also be utilised to explore how to build new meaningful connections with the urban space and the collocated people within it. We present findings of work-in-progress on Capital Music, a mobile application enabling urban dwellers to listen to songs as usual, but also allowing them to announce song titles and discover songs currently being listened to by other people in the vicinity. We study the ways in which this tool can change or even enhance people's experience of public urban spaces. Our first user study also found that the application changed how users chose songs. Anonymous social interactions based on users' music selections are implemented in the first iteration of the prototype that we studied.
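The core exchange can be pictured as anonymous announcements filtered by proximity. A minimal sketch under my own assumptions (Capital Music's real implementation is not described in the abstract):

```python
# Minimal sketch of anonymous song-title sharing between collocated users:
# each device announces only its current track, with no identity attached,
# and discovery lists announcements from devices within range.

import math

class Device:
    def __init__(self, pos, song=None):
        self.pos, self.song = pos, song        # no user identity is stored

def nearby_songs(me, devices, radius_m=50):
    return [d.song for d in devices
            if d is not me and d.song
            and math.dist(me.pos, d.pos) <= radius_m]

bus_stop = [Device((0, 0), "Song A"), Device((10, 5), "Song B"),
            Device((900, 0), "Far away track")]
print(nearby_songs(bus_stop[0], bus_stop))     # ['Song B']
```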
Abstract:
Music making affects relationships with self and others by generating a sense of belonging to a culture or ideology (Bamford, 2006; Barovick, 2001; Dillon & Stewart, 2006; Fiske, 2000; Hallam, 2001). Whilst studies from arts education research present compelling examples of these relationships, others argue that they do not present sufficiently validated evidence of a causal link between music making experiences and cognitive or social change (Winner & Cooper, 2000; Winner & Hetland, 2000a, 2000b, 2001). I have suggested elsewhere that this disconnection between compelling evidence and observations of the effects of music making is in part due to the lack of rigor in research and the incapacity of many methods to capture these experiences in meaningful ways (Dillon, 2006). Part of the answer to these questions about rigor and causality lies in the creative use of new media technologies that capture the results of relationships in music artefacts. Crucially, it is the effective management of these artefacts within computer systems that allows researchers and practitioners to collect, organise, analyse and then theorise such music making experiences.
Abstract:
Computers are now widely used in the making of music, and the extent of computer use in the music that we hear is far larger than most people realise. This paper discusses how computer technology is used in music, with particular reference to music education.
Abstract:
Gesture in performance is widely acknowledged in the literature as an important element in making a performance expressive and meaningful. The body has been shown to play an important role in the production and perception of vocal performance in particular. This paper examines the role of gesture in creative works that seek to extend vocal performance via technology. A creative work for vocal performer, laptop computer and a human-computer interface called the eMic (Extended Microphone Stand Interface controller) is presented as a case study to explore the relationships between movement, voice production and musical expression. The eMic is an interface for live vocal performance that allows a singer's gestures and interactions with a sensor-based microphone stand to be captured and mapped to musical parameters. The creative work discussed in this paper presents a new compositional approach for the eMic: working with movement as a starting point for the composition, and thus using choreographed gesture as the basis for musical structures. By foregrounding the body and movement in the creative process, the aim is to create a more visually engaging performance in which the performer can use the body more effectively to express their musical objectives.
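The mapping layer such an interface needs, from sensor readings to musical parameters, can be sketched briefly. All sensor and parameter names below are hypothetical illustrations, not the eMic's actual controls:

```python
# Minimal sketch of a gesture-to-parameter mapping layer: normalised
# sensor readings are scaled into musically useful ranges and routed
# to named synthesis parameters.

def scale(value, lo, hi):
    """Clamp a 0..1 sensor value and scale it into [lo, hi]."""
    return lo + max(0.0, min(1.0, value)) * (hi - lo)

MAPPING = {
    "stand_tilt":   lambda v: ("filter_cutoff_hz", scale(v, 200, 8000)),
    "slider":       lambda v: ("reverb_mix",       scale(v, 0.0, 1.0)),
    "grip_squeeze": lambda v: ("vibrato_depth",    scale(v, 0.0, 0.3)),
}

def on_sensor(name, value, synth):
    param, scaled = MAPPING[name](value)
    synth[param] = scaled                  # stand-in for a synth/OSC call

synth = {}
on_sensor("stand_tilt", 0.75, synth)
print(synth)                               # {'filter_cutoff_hz': 6050.0}
```

Choreographing gesture as compositional material, as the abstract describes, amounts to designing this mapping so that planned movement sequences produce intended musical structures.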
Abstract:
This research explores music in space, as experienced through performing and music-making with interactive systems. It explores how musical parameters may be presented spatially and displayed visually with a view to their exploration by a musician during performance. Spatial arrangements of musical components, especially pitches and harmonies, have been widely studied in the literature, but the current capabilities of interactive systems allow the improvisational exploration of these musical spaces as part of a performance practice. This research focuses on quantised spatial organisation of musical parameters that can be categorised as grid music systems (GMSs), and on interactive music systems based on them. It explores and surveys existing and historical uses of GMSs, and develops and demonstrates a novel grid music system designed for whole-body interaction.

Grid music systems plot spatialised input onto a two-dimensional grid layout to construct patterned music. A GMS is navigated to construct a sequence of parametric steps: for example, a series of pitches, rhythmic values, a chord sequence, or terraced dynamic steps. While conceptually simple when controlling only one musical dimension, grid systems may be layered to enable complex and satisfying musical results. These systems have proved a viable, effective, accessible and engaging means of music-making for the general user as well as the musician. GMSs have been widely used in electronic and digital music technologies, where they have generally been applied to small portable devices and software systems such as step sequencers and drum machines.

This research shows that by scaling up a grid music system, music-making and musical improvisation are enhanced, gaining several advantages: (1) full-body location becomes the spatial input to the grid, and the system becomes a partially immersive one in four related ways: spatially, graphically, sonically and musically; (2) detection of body location by tracking enables hands-free operation, thereby allowing the playing of a musical instrument in addition to "playing" the grid system; (3) visual information regarding musical parameters may be enhanced so that the performer may fully engage with existing spatial knowledge of musical materials, which is thereby overlaid on, and combined with, music-space.

Music-space is a new concept produced by the research. It is similar to notions of other musical spaces, including soundscape, acoustic space, Smalley's "circumspace" and "immersive space" (2007, 48-52), and Lotis's "ambiophony" (2003), but is rather more textural and "alive", and therefore very conducive to interaction. Music-space is the space occupied by music, set within normal space, which may be perceived by a person located within, or moving around in, that space. Music-space has a perceivable "texture" made of tensions and relaxations, containing spatial patterns formed by musical elements such as notes, harmonies and sounds, changing over time. The music may be performed by live musicians, created electronically, or be prerecorded. Large-scale GMSs have the capability not only to interactively display musical information as music-representative space, but to allow music-space to co-exist with it. Moving around the grid, the performer may interact in real time with musical materials in music-space as they form over squares or move in paths. Additionally, the performer may sense the textural matrix of the music-space while immersed in surround sound covering the grid.

The HarmonyGrid is a new computer-based interactive performance system developed during this research that provides a generative music-making system intended to accompany, or play along with, an improvising musician. This large-scale GMS employs full-body motion tracking over a projected grid. Playing with the system creates an enhanced performance employing live interactive music along with graphical and spatial activity. Although one other experimental system provides certain aspects of immersive music-making, currently only the HarmonyGrid provides an environment in which to explore and experience music-space in a GMS.
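The core of any GMS, quantising a tracked position to a cell and indexing musical parameters by that cell, can be sketched in a few lines. This is a minimal illustration under assumed room dimensions and an assumed scale, not the HarmonyGrid implementation:

```python
# Minimal sketch of a grid music system: a continuous tracked body
# position is quantised to a cell, and the cell indexes musical
# parameters - here the x axis chooses a scale degree and the y axis
# an octave.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71]       # MIDI pitches, one octave

def cell_for(x, y, width=7, height=4, room=(4.0, 3.0)):
    """Quantise a position in metres to integer grid coordinates."""
    col = min(int(x / room[0] * width), width - 1)
    row = min(int(y / room[1] * height), height - 1)
    return col, row

def pitch_for(col, row):
    return C_MAJOR[col] + 12 * row            # step across, octave up

col, row = cell_for(1.9, 0.4)                 # performer near centre-left
print(pitch_for(col, row))                    # -> a single MIDI note number
```

Layering, as the abstract notes, means running several such grids at once over the same floor space, each mapping position to a different musical dimension (pitch, rhythm, harmony, dynamics).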
Abstract:
From one view of composition—let us call it the inspired or “Mozartian” view—musical compositions arrive fully formed in the mind of the composer and simply require transcription. In reality, however, it seems that very few people are so inspired, and composition is often more akin to a gradual clarification and refinement of partially formed ideas on the musical landscape. Particular landmarks in the compositional landscape tend to become clear before others, such that the incomplete piece is a patchwork of disconnected musical islands. An interactive evolutionary morphing system may provide some assistance for composers, to help build bridges between musical islands by generating hybrid musical transitions.
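One simple way to realise such a hybrid transition is to draw each step of the bridge from either fragment with a probability that shifts from the first island to the second. The sketch below is my own illustration, not the paper's evolutionary morphing system; in an interactive evolutionary setting the composer would audition many such candidate bridges and keep the fittest:

```python
# Minimal sketch of a morphing bridge between two musical "islands":
# each step is drawn from fragment A or B with a probability that
# shifts toward B, so the passage gradually becomes the second idea.

import random

def morph_bridge(frag_a, frag_b, length, seed=0):
    rng = random.Random(seed)
    bridge = []
    for i in range(length):
        weight_b = (i + 1) / (length + 1)     # 0 -> 1 across the bridge
        source = frag_b if rng.random() < weight_b else frag_a
        bridge.append(rng.choice(source))
    return bridge

island_a = [60, 64, 67]                        # C major material
island_b = [62, 66, 69]                        # D major material
print(morph_bridge(island_a, island_b, 8))     # one candidate transition
```

Varying the seed yields a population of candidate bridges, which is where the interactive evolutionary step comes in: the composer's preferences act as the fitness function.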
Abstract:
This project investigates machine listening and improvisation in interactive music systems, with the goal of improvising musically appropriate accompaniment to an audio stream in real time. The input audio may be from a live musical ensemble, or playback of a recording for use by a DJ. I present a collection of robust techniques for machine listening in the context of Western popular dance music genres, and strategies of improvisation that allow for intuitive and musically salient interaction in live performance. The findings are embodied in a computational agent – the Jambot – capable of real-time musical improvisation in an ensemble setting. Conceptually the agent's functionality is split into three domains: reception, analysis and generation. The project has resulted in novel techniques for addressing a range of issues in each of these domains.

In the reception domain I present a novel suite of onset detection algorithms for real-time detection and classification of percussive onsets. This suite achieves reasonable discrimination between the kick, snare and hi-hat attacks of a standard drum-kit, with sufficiently low latency to allow perceptually simultaneous triggering of accompaniment notes. The onset detection algorithms are designed to operate in the context of complex polyphonic audio.

In the analysis domain I present novel beat-tracking and metre-induction algorithms that operate in real time and are responsive to change in a live setting. I also present a novel analytic model of rhythm, based on musically salient features. This model informs the generation process, affording intuitive parametric control and allowing for the creation of a broad range of interesting rhythms.

In the generation domain I present a novel improvisatory architecture drawing on theories of music perception, which provides a mechanism for the real-time generation of complementary accompaniment in an ensemble setting.

All of these innovations have been combined into a computational agent – the Jambot – which is capable of producing improvised percussive musical accompaniment to an audio stream in real time. I situate the architectural philosophy of the Jambot within contemporary debate regarding the nature of cognition and artificial intelligence, and argue for an approach to algorithmic improvisation that privileges the minimisation of cognitive dissonance in human-computer interaction. This thesis contains extensive written discussions of the Jambot and its component algorithms, along with comparative analyses of aspects of its operation and aesthetic evaluations of its output. The accompanying CD contains the Jambot software, along with video documentation of experiments and performances conducted during the project.
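The abstract does not give the algorithms themselves; for a flavour of the reception domain, here is a minimal spectral-flux onset detector, a standard technique on which such suites commonly build. The frame size, hop and threshold below are arbitrary choices of mine, not the Jambot's:

```python
# Minimal spectral-flux onset detector: rising spectral energy between
# consecutive STFT frames, gated by a crude adaptive threshold, is
# reported as a percussive onset.

import numpy as np

def onsets(samples, sr=44100, frame=1024, hop=512):
    window = np.hanning(frame)
    frames = [samples[i:i + frame] * window
              for i in range(0, len(samples) - frame, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    flux = np.maximum(mags[1:] - mags[:-1], 0.0).sum(axis=1)  # rises only
    threshold = flux.mean() + 1.5 * flux.std()                # crude gate
    hits = np.flatnonzero(flux > threshold) + 1
    return hits * hop / sr                                    # times in seconds

# Usage: a click at 0.5 s in one second of silence is detected near 0.5.
sr = 44100
audio = np.zeros(sr)
audio[sr // 2:sr // 2 + 64] = 1.0
print(onsets(audio, sr))
```

Classifying detected onsets into kick, snare and hi-hat, as the Jambot does, would additionally require comparing the spectral shape of each attack, e.g. its energy distribution across low, mid and high bands.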
Abstract:
In order to create music, the student must establish a relationship with the musical materials. In this thesis, I examine the capacity of a generative music system called jam2jam to offer individuals a virtual musical play-space to explore. I outline the development of an iteration of the software named jam2jam blue and the evolution of a games-like user interface in the research design, which jointly revealed the nature of this musical exploration. The findings suggest that the jam2jam blue interface provided an expressive gestural instrument with which to jam and experience music-making. By using the computer as an instrument, participants in this study were given access to meaningful musical experiences in both solo and ensemble situations, and the researcher was afforded a view of how each participant developed a relationship with the musical materials. Through an iterative software development methodology, pedagogy and experience design were created simultaneously. The research reveals the potential for the jam2jam software to be used as a reflective tool for feedback and assessment purposes. The power of access to ensemble music-making is realised through the participants' virtual experiences, which are brought into their physical space by sharing their experience with others. It is suggested that this interaction creates an environment conducive to self-initiated learning in which music is the language of interaction. The research concludes that the development of a relationship between the explorer and the musical materials is subject to the collaborative nature of the interaction through which the music is experienced.
Abstract:
This paper presents Capital Music, a mobile application enabling real-time sharing of song choices with collocated urban dwellers. Due to the real-time, location-based, peer-to-peer approach of the application, a user experience study was performed utilising the Wizard of Oz method. The study provides insight into how sharing non-privacy-sensitive but personal data in an anonymous way can influence the user experience of people in public urban places. We discuss the findings in relation to how Capital Music influences the process of “cocooning” in public urban places, the practice of designing anonymous interactions between collocated strangers, and how the sharing of song choices can create a sense of commonality between anonymous users in the urban space. The outcomes of this study are relevant for future location-based social networking applications that aim to create interactions between collocated strangers.
Abstract:
This article examines the philosophy and practice of open-source technology in the development of the jam2jam XO software for the One Laptop Per Child (OLPC) computer. It explores how open-source software principles, pragmatist philosophy, improvisation and constructionist epistemologies are operationalized in the design and development of music software, and how such reflection reveals both the strengths and weaknesses of the open-source software development paradigm. An overview of the jam2jam XO platform, its development processes and its music educational uses is provided, and resulting reflections on the strengths and weaknesses of open-source development for music education are discussed. From an educational and software development perspective, the act of creating open-source software is shown to be a valuable enterprise; however, the fact that the source code, creative content and experience design are accessible and 'open' to be changed does not guarantee that educational practices in the use of that software will change. Research around the development and use of jam2jam XO suggests that open-source software development principles can have an impact beyond software development, onto aspects of experience design and learning relationships.
Abstract:
Sound tagging has been studied for years. Among all sound types, music, speech and environmental sound are the three most active research areas. This survey provides an overview of the state-of-the-art development in these areas. We begin by discussing the meaning of tagging in different sound areas, and introduce some examples of sound tagging applications to illustrate the significance of this research. Typical tagging techniques include manual, automatic and semi-automatic approaches. After reviewing work in music, speech and environmental sound tagging, we compare the three areas and state the research progress to date. Research gaps are identified for each area, and the common features of and distinctions between the three areas are discussed as well. Published datasets, tools used by researchers, and evaluation measures frequently applied in the analysis are listed. Finally, we summarise the worldwide distribution of countries dedicated to sound tagging research over the years.
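As a concrete, if toy, instance of automatic tagging: reduce each clip to a small feature vector and assign the tag of the nearest labelled centroid. The features and data below are my own illustration, not a method from the survey:

```python
# Minimal sketch of automatic sound tagging: each clip is reduced to two
# crude features - RMS energy and zero-crossing rate - and tagged by the
# nearest centroid of labelled training clips.

import numpy as np

def features(clip):
    rms = np.sqrt(np.mean(clip ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(clip)))) / 2
    return np.array([rms, zcr])

def tag(clip, labelled):
    centroids = {t: np.mean([features(c) for c in clips], axis=0)
                 for t, clips in labelled.items()}
    feat = features(clip)
    return min(centroids, key=lambda t: np.linalg.norm(centroids[t] - feat))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
labelled = {
    "speech": [0.3 * np.sin(2 * np.pi * 200 * t)],   # low-ZCR stand-in
    "noise":  [0.3 * rng.standard_normal(8000)],     # high-ZCR stand-in
}
print(tag(0.3 * rng.standard_normal(8000), labelled))  # -> "noise"
```

Semi-automatic tagging, in the survey's terms, would insert a human check on low-confidence assignments from such a classifier.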