997 results for Musical notes
Abstract:
This paper discusses two pitch detection algorithms (PDAs) for simple audio signals, based on the zero-crossing rate (ZCR) and the autocorrelation function (ACF). As is well known, pitch detection methods based on ZCR and ACF are widely used in signal processing. This work shows some features of and problems with these methods, as well as some improvements developed to increase their performance. © 2008 IEEE.
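As a rough illustration (not the paper's own implementation), the following Python sketch shows the two baseline ideas the abstract refers to: estimating pitch from the zero-crossing rate and from the highest autocorrelation peak of a clean, monophonic frame. Function names and parameter choices are illustrative.

    import numpy as np

    def zcr_pitch(signal, sample_rate):
        """Estimate pitch from the zero-crossing rate of a clean, monophonic signal."""
        # Each period of a pure tone contains two zero crossings.
        crossings = np.count_nonzero(np.diff(np.signbit(signal)))
        duration = len(signal) / sample_rate
        return crossings / (2.0 * duration)

    def acf_pitch(frame, sample_rate, fmin=50.0, fmax=2000.0):
        """Estimate pitch from the highest autocorrelation peak within a lag range."""
        acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lag_min = int(sample_rate / fmax)          # shortest period considered
        lag_max = int(sample_rate / fmin)          # longest period considered
        best_lag = lag_min + int(np.argmax(acf[lag_min:lag_max]))
        return sample_rate / best_lag

    # Example: one second of a 440 Hz sine sampled at 16 kHz.
    sr = 16000
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 440.0 * t)
    print(zcr_pitch(tone, sr))           # ~440 Hz
    print(acf_pitch(tone[:2048], sr))    # ~440 Hz (limited by integer-lag resolution)

Both estimators are known to degrade on noisy or harmonically rich signals, which is the kind of limitation improvements of this sort typically address.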
Abstract:
"Imprinted by Richard Grafton ... 1550"--Colophon.
Abstract:
Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
The music education class: do different pedagogies lead to different learning outcomes?
Abstract:
As a teacher of music education at the basic education level, I decided to carry out the present work with the goal of determining whether, for the group of students in question, it is more advantageous to work from the pedagogical proposals of Edwin Gordon, which are based on the concept of audiation as a way of leading the student to understand music (audiation is the ability to hear and understand sounds that may or may not be physically present), or from the teachings of Jos Wuytack, who advocates the use of imitation techniques in the early stages of teaching music to young people. Since this investigation was carried out over a single school semester, it would be neither appropriate nor possible to apply all the proposals of these pedagogues extensively. Accordingly, the work presented here was limited to the concepts I considered most suitable for the time available and for the objectives defined for the level of schooling under study. Audiation types one, two and four were worked on, on the one hand, and melodic and rhythmic imitation techniques on the other. Each student's progress was assessed continuously, as a way of establishing a pattern of development that would make it possible to conclude which of the two methodologies for teaching music to young people proved more adequate overall, and which produced better results with regard to improving vocal intonation, knowledge of musical notes, rhythmic accuracy and fingering on the recorder. The results obtained did not allow us to draw any definitive conclusion.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Unlike humans, who communicate in frequency bands between 250 Hz and 6 kHz, rats can communicate at frequencies above 18 kHz. Their vocalization types depend on the context and are normally associated with subjective or emotional states. Significant vocal changes were reported following the administration of replacement testosterone in a trained tenor singer with hypogonadism. Speech-Language Pathology clinical practices are being sought by singers who sporadically use anabolic steroids in association with physical exercise. They report difficulties in reaching and sustaining high notes, "breakage" in the passage between musical notes, and vocal fatigue after singing. Those abnormalities could be caused by the combination of anabolic steroids and physical exercise. Thus, in order to verify whether this combination could promote vocal changes, the maximum, minimum and fundamental frequencies and the call duration of rats treated with anabolic steroids and physically trained (for 10 weeks) were evaluated. The vocalizations were obtained by handling the animals. At the end of that period, treated and trained rats showed a significant decrease in call duration, but not in the other parameters. The decrease in call duration could be associated with functional alterations in the vocal folds of treated and trained animals due to a synergism between anabolic steroids and physical training. (C) 2010 Acoustical Society of America. [DOI: 10.1121/1.3488350]
Abstract:
The motivation for this work comes from the author's need to record the notes played on the guitar during improvisation. When improvising on the guitar, the musician often does not remember the notes played at the time. This work addresses the development of an application for guitarists that records the notes played on an electric or classical guitar. The signal is acquired from the guitar and processed with real-time requirements for signal capture. The notes produced by the electric guitar, connected to the computer, are represented as tablature and/or a score. To this end, the application captures the signal from the electric guitar through the computer's sound card and uses frequency detection algorithms and per-signal duration estimation algorithms to build the record of the notes played. The application is developed from a multi-platform perspective and can run on different Windows and Linux operating systems, using public domain tools and libraries. The results obtained show that it is possible to tune the guitar with errors on the order of 2 Hz relative to the standard tuning frequencies. The tablature output presents satisfactory results, but it can be improved. To do so, it will be necessary to improve the implementation of the signal processing techniques as well as the inter-process communication, in order to solve the problems found in the tests performed.
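The abstract does not include the application's code, but a minimal sketch of one of its building blocks, mapping a detected frequency to the nearest equal-tempered note and reporting the tuning error in Hz (as in the ~2 Hz figure above), could look like the following. The function name is illustrative only.

    import numpy as np

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def frequency_to_note(freq, a4=440.0):
        """Map a detected frequency to the nearest equal-tempered note and report
        the deviation in Hz from the standard tuning of that note."""
        semitones = 12.0 * np.log2(freq / a4)        # distance from A4 in semitones
        midi = int(round(semitones)) + 69            # A4 is MIDI note 69
        name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
        ideal = a4 * 2.0 ** ((midi - 69) / 12.0)     # standard frequency of that note
        return name, freq - ideal

    # Example: an open low E string detected slightly sharp.
    print(frequency_to_note(83.5))   # -> ('E2', ~1.1), i.e. about 1.1 Hz above 82.41 Hz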
Abstract:
Computer music usually sounds mechanical; hence, if the musicality and musical expression of virtual actors could be enhanced according to the user's mood, the quality of experience would be amplified. We present a solution based on improvisation using cognitive models, case-based reasoning (CBR) and fuzzy values acting on close-to-affect-target musical notes retrieved from CBR per context. It modifies music pieces according to the interpretation of the user's emotive state as computed by the emotive input acquisition component of the CALLAS framework. The CALLAS framework incorporates the Pleasure-Arousal-Dominance (PAD) model, which reflects the emotive state of the user and represents the criteria for the music affectivisation process. Using combinations of positive and negative states for affective dynamics, the octants of temperament space specified by this model are stored as base reference emotive states in the case repository, each case including a configurable mapping of affectivisation parameters. Suitable previous cases are selected and retrieved by the CBR subsystem to compute solutions for new cases; the resulting affect values control the music synthesis process, allowing for a level of interactivity that creates an interesting environment in which to experiment with and learn about expression in music.
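The abstract describes the pipeline only at a high level. As a purely hypothetical sketch of how PAD octant cases with attached affectivisation parameters might be stored and retrieved by nearest-neighbour matching, consider the following; all names and parameter values are invented for illustration and do not come from CALLAS.

    from dataclasses import dataclass

    @dataclass
    class Case:
        pad: tuple            # (pleasure, arousal, dominance), each in [-1, 1]
        tempo_scale: float    # example affectivisation parameters stored with the case
        mode: str
        velocity_scale: float

    # One base case per octant of PAD temperament space (only two shown here).
    CASES = [
        Case((+1, +1, +1), tempo_scale=1.2, mode="major", velocity_scale=1.1),
        Case((-1, -1, -1), tempo_scale=0.8, mode="minor", velocity_scale=0.8),
    ]

    def retrieve(pad):
        """Nearest-neighbour retrieval of the closest stored emotive case."""
        return min(CASES, key=lambda c: sum((a - b) ** 2 for a, b in zip(c.pad, pad)))

    print(retrieve((0.6, 0.4, 0.2)).mode)   # -> "major" (closest to the all-positive octant)

The retrieved parameters would then drive the music synthesis step described above.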
Abstract:
Inspired by a type of synesthesia in which colour typically induces musical notes, the MusiCam project investigates this unusual condition, particularly the transition from colour to sound. MusiCam explores the potential benefits of this idiosyncrasy as a mode of human-computer interaction (1-10), providing a host of meaningful applications spanning control, communication and composition. Colour data is interpreted by means of an off-the-shelf webcam, and music is generated in real time through regular speakers. By making colour-based gestures, users can actively control the parameters of sounds, compose melodies and motifs, or mix multiple tracks on the fly. The system shows great potential as an interactive medium and as a musical controller. The trials conducted to date have produced encouraging results, and only hint at the new possibilities achievable by such a device.
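As a purely illustrative sketch (the abstract does not specify MusiCam's actual mapping), one simple way to turn a colour reading into a note is to let hue select a pitch class and lightness select an octave; the scale choice below is an assumption made for the example.

    import colorsys

    # C major pentatonic degrees as semitone offsets from C; an illustrative choice.
    PENTATONIC = [0, 2, 4, 7, 9]

    def colour_to_midi(r, g, b):
        """Map an RGB pixel (0-255 per channel) to a MIDI note: hue picks the
        pitch class, lightness picks the octave above middle C."""
        h, l, _ = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
        degree = PENTATONIC[int(h * len(PENTATONIC)) % len(PENTATONIC)]
        octave = min(int(l * 3), 2)        # 0..2 octaves above middle C
        return 60 + degree + 12 * octave   # 60 = middle C (C4)

    print(colour_to_midi(255, 0, 0))       # pure red -> 72 (C5: hue 0, mid lightness)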
Abstract:
Includes index.
Abstract:
Pitch Estimation, also known as Fundamental Frequency (F0) estimation, has been a popular research topic for many years and is still investigated today. The goal of Pitch Estimation is to find the pitch, or fundamental frequency, of a digital recording of speech or musical notes. It plays an important role because it is the key to identifying which notes are being played and at what time. Pitch Estimation for real instruments is a very hard task to address. Each instrument has its own physical characteristics, which are reflected in different spectral characteristics. Furthermore, recording conditions can vary from studio to studio, and background noise must be considered. This dissertation presents a novel approach to the problem of Pitch Estimation, using Cartesian Genetic Programming (CGP). We take advantage of evolutionary algorithms, in particular CGP, to explore and evolve complex mathematical functions that act as classifiers. These classifiers are used to identify the pitches of piano notes in an audio signal. To help with the codification of the problem, we built a highly flexible CGP Toolbox, generic enough to encode different kinds of programs. The encoded evolutionary algorithm is the one known as 1 + λ, and the value of λ can be chosen. The toolbox is very simple to use. Settings such as the mutation probability and the number of runs and generations are configurable. The Cartesian representation of CGP can take multiple forms and is able to encode function parameters. It is prepared to handle different types of fitness functions, minimization and maximization of f(x), and has a useful system of callbacks. We trained 61 classifiers corresponding to 61 piano notes. A training set of audio signals was used for each of the classifiers: half were signals with the same pitch as the classifier (true positive signals) and the other half were signals with different pitches (true negative signals). F-measure was used as the fitness function. Signals with the same pitch as the classifier that were correctly identified by the classifier count as true positives. Signals with the same pitch as the classifier that were not identified by the classifier count as false negatives. Signals with a different pitch from the classifier that were not identified by the classifier count as true negatives. Signals with a different pitch from the classifier that were identified by the classifier count as false positives. Our first approach was to evolve classifiers for identifying artificial signals created by mathematical functions: sine, sawtooth and square waves. Our function set is basically composed of filtering operations on vectors and arithmetic operations with constants and vectors. All the classifiers correctly identified the true positive signals and did not identify the true negative signals. We then moved to real audio recordings. For testing the classifiers, we picked audio signals different from the ones used during the training phase. For a first approach, the obtained results were very promising, but could be improved. We made slight changes to our approach and the number of false positives was reduced by 33% compared to the first approach. We then applied the evolved classifiers to polyphonic audio signals, and the results indicate that our approach is a good starting point for addressing the problem of Pitch Estimation.
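A minimal sketch of the F-measure fitness described above, assuming the evolved CGP program is exposed as a Boolean classifier over audio signals (the function names are illustrative, not the toolbox's actual API):

    def f_measure(tp, fp, fn):
        """F-measure: harmonic mean of precision and recall."""
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision + recall == 0.0:
            return 0.0
        return 2.0 * precision * recall / (precision + recall)

    def fitness(classifier, positives, negatives):
        """Score one per-note classifier on its training signals, using the
        true-positive / false-negative / true-negative / false-positive terms
        defined above. `classifier` is any callable returning True when it claims
        a signal has the target pitch (a stand-in for the evolved CGP program)."""
        tp = sum(1 for s in positives if classifier(s))   # same pitch, detected
        fn = len(positives) - tp                          # same pitch, missed
        fp = sum(1 for s in negatives if classifier(s))   # other pitch, wrongly detected
        return f_measure(tp, fp, fn)

    # Toy usage with a dummy classifier that flags signals longer than 3 samples.
    print(fitness(lambda s: len(s) > 3, positives=[[1, 2, 3, 4]], negatives=[[1, 2]]))  # -> 1.0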
Abstract:
No more published.