7 results for sound processing

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

100.00%

Publisher:

Abstract:

premiered by Mel Puga Iglesias

Relevance:

70.00%

Publisher:

Abstract:

Composers of digital music today have a bewildering variety of sound-processing tools and techniques at their disposal. At their best, these tools allow composers to hone a sound to perfection. However, they can also lead us into a routine which bypasses avenues of experimentation, simply because the known tools work so well and their sonic output is so attractive. An alternative strategy is oracular sound processing. An oracular sound processor creates a derived version of its input whose characteristics could not have been fully predicted, while affording the user little or no parametric control over the process.
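As a purely illustrative sketch (not taken from the paper), an "oracular" processor in the sense described above could be written as a routine that draws its own internal parameters at random and exposes none of them to the user, so the derived output cannot be fully predicted. All names and transformations below are hypothetical choices:

```python
import numpy as np

def oracular_process(signal, seed=None):
    """Return a derived version of `signal` whose character cannot be
    fully predicted: internal parameters are drawn at random and are
    not exposed to the caller (illustrative sketch only)."""
    rng = np.random.default_rng(seed)

    # Randomly chosen spectral tilt: emphasise or attenuate high frequencies.
    spectrum = np.fft.rfft(signal)
    tilt = rng.uniform(-1.0, 1.0)
    freqs = np.linspace(0.0, 1.0, spectrum.size)
    spectrum *= (1.0 + freqs) ** tilt

    # Scramble the phase of a randomly selected subset of bins.
    mask = rng.random(spectrum.size) < rng.uniform(0.1, 0.9)
    phases = rng.uniform(-np.pi, np.pi, spectrum.size)
    spectrum[mask] = np.abs(spectrum[mask]) * np.exp(1j * phases[mask])

    out = np.fft.irfft(spectrum, n=signal.size)
    return out / (np.max(np.abs(out)) + 1e-12)  # normalise to avoid clipping
```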

Relevance:

30.00%

Publisher:

Abstract:

Explicit finite difference (FD) schemes can realise highly realistic physical models of musical instruments but are computationally complex. A design methodology is presented for the creation of FPGA-based micro-architectures for FD schemes which can be applied to a range of applications with varying computational requirements, excitation and output patterns, and boundary conditions. It has been applied to membrane- and plate-based sound-producing models, resulting in faster than real-time performance on a Xilinx XC2VP50 device which is 10 to 35 times faster than general-purpose and DSP processors. The models have been developed in such a way as to allow a wide range of interaction by a musician, thereby leading to the possibility of creating a highly realistic digital musical instrument.
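For context (this is not the FPGA design from the abstract), an explicit FD update for a membrane modelled by the 2-D wave equation can be sketched as follows, assuming a lossless scheme with fixed boundaries and a Courant number at the 2-D stability limit:

```python
import numpy as np

def simulate_membrane(n=64, steps=1000, excite=(32, 32)):
    """Explicit finite-difference scheme for the 2-D wave equation
    (lossless, fixed boundaries); illustrative only."""
    lam2 = 0.5                       # squared Courant number (stability requires <= 0.5 in 2-D)
    u_prev = np.zeros((n, n))
    u_curr = np.zeros((n, n))
    u_curr[excite] = 1.0             # simple impulse excitation
    output = []

    for _ in range(steps):
        # Discrete Laplacian on interior points.
        lap = (u_curr[:-2, 1:-1] + u_curr[2:, 1:-1] +
               u_curr[1:-1, :-2] + u_curr[1:-1, 2:] -
               4.0 * u_curr[1:-1, 1:-1])
        u_next = np.zeros_like(u_curr)
        u_next[1:-1, 1:-1] = (2.0 * u_curr[1:-1, 1:-1]
                              - u_prev[1:-1, 1:-1] + lam2 * lap)
        u_prev, u_curr = u_curr, u_next
        output.append(u_curr[excite])   # read output at the excitation point

    return np.array(output)
```

An FPGA micro-architecture would pipeline exactly this kind of stencil update; the Python version only shows the arithmetic of one time step.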

Relevance:

30.00%

Publisher:

Abstract:

Human listeners seem to be remarkably able to recognise acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues: both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. Then, we constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid sequential visual presentation" paradigm (RSVP), which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds: identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
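As background on the performance measure (not part of the study's own analysis code), sensitivity d' in a yes/no task is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch, with a standard correction to avoid infinite z-scores when a rate is 0 or 1:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a 1/(2N) correction
    applied to rates of exactly 0 or 1."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = min(max(hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
    fa_rate = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical example: 45 hits out of 50 voice trials,
# 10 false alarms out of 50 distractor trials.
print(d_prime(45, 5, 10, 40))   # roughly 2.1
```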

Relevance:

30.00%

Publisher:

Abstract:

Due to its efficiency and simplicity, the finite-difference time-domain method is becoming a popular choice for solving wideband, transient problems in various fields of acoustics. So far, the issue of extracting a binaural response from finite-difference simulations has only been discussed in the context of embedding a listener geometry in the grid. In this paper, we propose and study a method for binaural response rendering based on a spatial decomposition of the sound field. The finite-difference grid is locally sampled using a volumetric array of receivers, from which a plane wave density function is computed and integrated with free-field head-related transfer functions in the spherical harmonics domain. The volumetric array is studied in terms of numerical robustness and spatial aliasing. Analytic formulas that predict the performance of the array are developed, facilitating spatial resolution analysis and numerical binaural response analysis for a number of finite difference schemes. Particular emphasis is placed on the effects of numerical dispersion on array processing and on the resulting binaural responses. Our method is compared to a binaural simulation based on the image method. Results indicate good spatial and temporal agreement between the two methods.
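As a rough illustration of the final integration step only (not the authors' implementation), once the plane wave density has been expanded into spherical-harmonic coefficients, an ear signal can be formed per frequency bin by combining those coefficients with the spherical-harmonic coefficients of the free-field HRTFs. The weighting convention depends on the SH normalisation in use; the sketch below simply assumes a conjugate inner product over SH indices, and all array shapes are hypothetical:

```python
import numpy as np

def sh_binaural(pwd_sh, hrtf_sh):
    """Combine plane-wave-density SH coefficients with the SH coefficients
    of one ear's HRTF to obtain that ear's spectrum.

    pwd_sh  : complex array, shape (n_bins, n_sh) - sound-field coefficients
    hrtf_sh : complex array, shape (n_bins, n_sh) - HRTF coefficients (one ear)
    Returns a complex spectrum of shape (n_bins,). Assumption: a plain
    conjugate inner product over SH indices stands in for the exact
    normalisation-dependent weighting.
    """
    return np.sum(pwd_sh * np.conj(hrtf_sh), axis=1)

# Hypothetical usage: 257 frequency bins, SH order 3 -> (3 + 1)**2 = 16 terms.
n_bins, n_sh = 257, 16
rng = np.random.default_rng(0)
pwd = rng.standard_normal((n_bins, n_sh)) + 1j * rng.standard_normal((n_bins, n_sh))
hrtf_left = rng.standard_normal((n_bins, n_sh)) + 1j * rng.standard_normal((n_bins, n_sh))
left_spectrum = sh_binaural(pwd, hrtf_left)
ir_left = np.fft.irfft(left_spectrum)   # back to a time-domain binaural impulse response
```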