650 results for Sound-music sensibility
Abstract:
High-span individuals (as measured by the operation span [OSPAN] technique) are less likely than low-span individuals to notice their own names in an unattended auditory stream (A. R. A. Conway, N. Cowan, & M. F. Bunting, 2001). The possibility that OSPAN accounts for individual differences in auditory distraction on an immediate recall test was examined. There was no evidence that high-OSPAN participants were more resistant to the disruption caused by irrelevant speech in serial or in free recall. Low-OSPAN participants did, however, make more semantically related intrusion errors from the irrelevant sound stream in a free recall test (Experiment 4). Results suggest that OSPAN mediates semantic components of auditory distraction dissociable from other aspects of the irrelevant sound effect.
Abstract:
Computer music usually sounds mechanical; hence, if the musicality and musical expression of virtual actors could be enhanced according to the user's mood, the quality of experience would be amplified. We present a solution based on improvisation using cognitive models, case-based reasoning (CBR), and fuzzy values acting on close-to-affect-target musical notes retrieved from CBR per context. It modifies music pieces according to the interpretation of the user's emotive state as computed by the emotive input acquisition component of the CALLAS framework. The CALLAS framework incorporates the Pleasure-Arousal-Dominance (PAD) model, which reflects the emotive state of the user and represents the criteria for the music affectivisation process. Using combinations of positive and negative states for affective dynamics, the octants of temperament space specified by this model are stored as base reference emotive states in the case repository, each case including a configurable mapping of affectivisation parameters. Suitable previous cases are selected and retrieved by the CBR subsystem to compute solutions for new cases; the resulting affect values control the music synthesis process, allowing a level of interactivity that makes for an engaging environment in which to experiment with and learn about expression in music.
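The retrieve-and-adapt loop described in this abstract can be illustrated with a minimal sketch. The Python snippet below is an assumption for illustration, not code from the CALLAS framework: the case labels follow Mehrabian's temperament octants, and the parameters tempo_scale, mode, and articulation are hypothetical stand-ins for the "configurable mapping of affectivisation parameters". It stores one base case per PAD octant, retrieves the nearest case for a new PAD reading, and applies a simple intensity-based adaptation in place of the fuzzy step.

```python
# Minimal sketch (assumed, not the CALLAS implementation) of case-based
# retrieval over the eight PAD octants described in the abstract.
from dataclasses import dataclass
from math import dist

@dataclass
class Case:
    name: str      # temperament label for the octant
    pad: tuple     # (pleasure, arousal, dominance) centroid, each in [-1, 1]
    params: dict   # illustrative affectivisation parameters (hypothetical)

# One base reference case per octant of PAD temperament space.
CASE_REPOSITORY = [
    Case("exuberant",  (+.5, +.5, +.5), {"tempo_scale": 1.2, "mode": "major", "articulation": "staccato"}),
    Case("dependent",  (+.5, +.5, -.5), {"tempo_scale": 1.1, "mode": "major", "articulation": "legato"}),
    Case("relaxed",    (+.5, -.5, +.5), {"tempo_scale": 0.9, "mode": "major", "articulation": "legato"}),
    Case("docile",     (+.5, -.5, -.5), {"tempo_scale": 0.8, "mode": "major", "articulation": "legato"}),
    Case("hostile",    (-.5, +.5, +.5), {"tempo_scale": 1.3, "mode": "minor", "articulation": "staccato"}),
    Case("anxious",    (-.5, +.5, -.5), {"tempo_scale": 1.2, "mode": "minor", "articulation": "staccato"}),
    Case("disdainful", (-.5, -.5, +.5), {"tempo_scale": 0.9, "mode": "minor", "articulation": "legato"}),
    Case("bored",      (-.5, -.5, -.5), {"tempo_scale": 0.7, "mode": "minor", "articulation": "legato"}),
]

def retrieve(pad_reading):
    """Return the stored case closest to the user's current PAD estimate."""
    return min(CASE_REPOSITORY, key=lambda c: dist(c.pad, pad_reading))

def adapt(case, pad_reading):
    """Scale the retrieved parameters by emotional intensity - a simple
    stand-in for the fuzzy adaptation step described in the abstract."""
    intensity = sum(abs(v) for v in pad_reading) / 3.0
    params = dict(case.params)
    params["tempo_scale"] = 1.0 + (params["tempo_scale"] - 1.0) * intensity
    return params

if __name__ == "__main__":
    reading = (0.3, 0.7, -0.2)   # e.g. output of an emotive-input component
    case = retrieve(reading)
    print(case.name, adapt(case, reading))
```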
Using simulation to determine the sensibility of error sources for software effort estimation models
Abstract:
Inspired by a type of synesthesia in which colour typically induces musical notes, the MusiCam project investigates this unusual condition, particularly the transition from colour to sound. MusiCam explores the potential benefits of this idiosyncrasy as a mode of human-computer interaction (1-10), providing a host of meaningful applications spanning control, communication, and composition. Colour data is interpreted by means of an off-the-shelf webcam, and music is generated in real time through regular speakers. By making colour-based gestures, users can actively control the parameters of sounds, compose melodies and motifs, or mix multiple tracks on the fly. The system shows great potential as an interactive medium and as a musical controller. The trials conducted to date have produced encouraging results and only hint at the new possibilities achievable with such a device.
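To make the colour-to-sound transition concrete, the following Python sketch shows one plausible mapping from a sampled colour to a note. It is an illustrative assumption rather than MusiCam's published mapping: the pentatonic scale, note range, and parameter choices are hypothetical, and the webcam-capture step is omitted.

```python
# Minimal sketch (assumed mapping, not MusiCam's) turning an RGB colour
# sample into a (MIDI note, velocity, duration) triple.
import colorsys

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees (semitones) within an octave

def colour_to_note(r, g, b, base_midi=60):
    """Map an RGB colour (0-255 channels) to a hypothetical musical event."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    degree = PENTATONIC[int(h * len(PENTATONIC)) % len(PENTATONIC)]  # hue picks the pitch class
    octave = int(s * 3)                  # saturation raises the octave
    midi_note = base_midi + degree + 12 * octave
    velocity = int(30 + v * 97)          # brightness drives loudness
    duration = 0.2 + (1.0 - s) * 0.8     # washed-out colours sustain longer
    return midi_note, velocity, duration

if __name__ == "__main__":
    # e.g. the mean colour of a webcam frame region
    print(colour_to_note(200, 40, 160))
```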
Abstract:
An animated film commissioned and screened by Art Review Magazine on their website (Oct-Dec 2010), and a double-page comic strip (Art Review, Oct 2010). The project addresses a key problem with contemporary debates regarding ideas of ‘performativity’ and ‘fictioning’ (Foucault/Deleuze/Butler), whereby the structural requirement for an ‘End’ pre-determines or back-codes the ‘story’ or progression of events leading up to this ‘End’ and therefore cuts against the potentials claimed for ‘performance’ and ‘performativity’. Film credits. Primary soundtrack: music by Rose Kallal; spoken word by Mark Beasley. Voices: Katie Barrington, Marnie Watts, Maria Deegan & John Russell. Sound engineer: Bob Geal. Plus special bonus track (after 'The End'): 'Strychnine Motive' (2011) by Gum Takes Tooth.
Abstract:
In a workshop setting, two pieces of recorded music were presented to a group of adult non-specialists; a key feature was to set up structured discussion within which the respondents considered each piece of music as a whole rather than in its constituent parts. There were two areas of interest: to explore whether the respondents were more likely to identify musical features or to make extra-musical associations, and to establish the extent to which there would be commonality and difference in their approaches to formulating verbal responses. An inductive approach was used in the analysis of the data to reveal some of the working theories underpinning the intuitive musicianship of the adult non-specialist listener. Findings showed that, when unprompted by forced-choice responses, the listeners generated responses that could be said to be information-poor in terms of musical features but rich in terms of the level of personal investment made in formulating them. This is evidenced in the connections they made between the discursive and the non-discursive, including those that are relational and mediated by their experiences. Implications for music education are considered.