791 results for audio recording


Relevance: 20.00%

Publisher:

Abstract:

In this study, the formation of stripe domains in permalloy (Ni80Fe20) thin films was investigated, mainly using magnetic force microscopy. Stripe domains are a known phenomenon that reduces the "softness" of a magnetic material and introduces a significant source of noise when the material is used in perpendicular magnetic media. For the particular setup described in this report, the critical thickness for stripe domain initiation depended on the sputtering rate, the substrate temperature, and the film thickness. Beyond the onset of stripe domain formation, the periodicity of the highly ordered stripe domains increased with increasing film thickness. Above a particular thickness, the stripe domain periodicity decreased and the magnetic domains became randomized. The results led to the inference that the perpendicular anisotropy responsible for the formation of stripe domains originated mainly from magnetostriction.
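For context, a standard textbook relation (not taken from the abstract itself) links a stress-induced perpendicular anisotropy to magnetostriction, and the quality factor Q determines whether stripe domains appear only above a critical thickness:

\[ K_\perp \approx \tfrac{3}{2}\,\lambda_s\,\sigma, \qquad Q = \frac{K_\perp}{2\pi M_s^2} \]

Here \lambda_s is the saturation magnetostriction, \sigma the in-plane film stress, and M_s the saturation magnetization (CGS units); for Q < 1, as is typical of permalloy, stripe domains nucleate only once the film exceeds a critical thickness, consistent with the thickness dependence reported above.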

Relevance: 20.00%

Publisher:

Abstract:

Reverberation is caused by the reflection of sound from surfaces near the sound source as it propagates to the listener. The impulse response of an environment represents its reverberation characteristics. Because it depends on the environment, reverberation conveys to the listener the character of the space where the sound originated, and its absence usually does not sound "natural". When recording sounds, it is not always possible to obtain the desired reverberation characteristics of an environment, so methods for artificial reverberation have been developed, always seeking implementations that are more efficient and more faithful to real environments. This work presents an implementation in FPGAs (Field Programmable Gate Arrays) of a classic digital audio reverberation structure, based on a proposal by Manfred Schroeder, using sets of all-pass and comb filters. The developed system exploits reconfigurable hardware as a platform for the development and implementation of digital audio effects, focusing on modularity and reuse.
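As a minimal software sketch of the Schroeder structure mentioned above (a Python model, not the FPGA implementation described in the work; the delay lengths and gains are illustrative assumptions):

```python
import numpy as np

def feedback_comb(x, delay, g):
    # Feedback comb filter: y[n] = x[n] + g * y[n - delay]
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g):
    # Schroeder all-pass: y[n] = -g*x[n] + x[n - delay] + g*y[n - delay]
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def schroeder_reverb(x):
    # Four parallel feedback combs with mutually detuned delays, summed,
    # then two all-pass filters in series; all values are illustrative.
    combs = [(1557, 0.84), (1617, 0.83), (1491, 0.82), (1422, 0.81)]
    wet = sum(feedback_comb(x, d, g) for d, g in combs)
    for d, g in [(225, 0.7), (556, 0.7)]:
        wet = allpass(wet, d, g)
    return wet
```

On an FPGA, each comb or all-pass section maps naturally to a small block of RAM used as a circular delay line plus a multiply-accumulate stage, which is what makes this structure attractive for a modular, reusable hardware implementation.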

Relevance: 20.00%

Publisher:

Abstract:

In an audio cueing system, a teacher is presented with randomly spaced auditory signals via tape recorder or intercom. The teacher is instructed to praise a child who is on-task each time the cue is presented. In this study, a baseline was obtained on the teacher's praise rate and the children's on-task behaviour in a Grade 5 class of 37 students. Children were then divided into high, medium and low on-task groups. Following baseline, the teacher's praise rate and the children's on-task behaviour were observed under the following successively implemented conditions: (1) Audio Cueing 1: audio cueing at a rate of 30 cues per hour was introduced into the classroom and remained in effect during subsequent conditions; a group of consistently low on-task children was delineated. (2) Audio Cueing Plus 'focus praise package': instructions to direct two-thirds of the praise to children identified by the experimenter (the consistently low on-task children), together with feedback and experimenter praise for meeting or surpassing the criterion distribution of praise (the 'focus praise package'), were introduced. (3) Audio Cueing 2: the 'focus praise package' was removed. (4) Audio Cueing Plus 'increase praise package': instructions to increase the rate of praise, together with feedback and experimenter praise for improved praise rates (the 'increase praise package'), were introduced. The primary aims of the study were to determine the distribution of praise among high, medium and low on-task children when audio cueing was first introduced and to investigate the effect of the 'focus praise package' on the distribution of teacher praise. The teacher distributed her praise evenly among the high, medium and low on-task groups during Audio Cueing 1. The effect of the 'focus praise package' was to increase the percentage of praise received by the consistently low on-task children. Other findings tended to suggest that audio cueing increased the teacher's praise rate. However, the teacher's praise rate unexpectedly decreased to a level considerably below the cued rate during Audio Cueing 2. The 'increase praise package' appeared to increase the teacher's praise rate above the Audio Cueing 2 level. The effects of an increased praise rate and of two distributions of praise on on-task behaviour were considered. Significant increases in on-task behaviour were found in Audio Cueing 1 for the low on-task group, in the Audio Cueing Plus 'focus praise package' condition for the entire class and the consistently low on-task group, and in Audio Cueing 2 for the medium on-task group. Except for the high on-task children, who did not change, the effects of the experimental manipulations on on-task behaviour were equivocal. However, there were some indications that directing 67% of the praise to the consistently low on-task children was more effective for increasing this group's on-task behaviour than distributing praise equally among on-task groups.

Relevance: 20.00%

Publisher:

Abstract:

Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may use non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and the middle ear muscles (MEMs), can modulate the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Because the OHCs and MEMs are indirectly coupled mechanically to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we use this recording technique in three experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. (1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response (OSEAR), depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. (2) In the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change in pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in power and variability from 0.1 to 4 kHz around the time the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies. (3) In the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds, from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.

Relevance: 20.00%

Publisher:

Abstract:

HomeBank is introduced here. It is a public, permanent, extensible, online database of daylong audio recorded in naturalistic environments. HomeBank serves two primary purposes. First, it is a repository for raw audio and associated files: one database requires special permissions, and another redacted database allows unrestricted public access. Associated files include metadata such as participant demographics and clinical diagnostics, automated annotations, and human-generated transcriptions and annotations. Many recordings use the child-perspective LENA recorders (LENA Research Foundation, Boulder, Colorado, United States), but various recordings and metadata can be accommodated. The HomeBank database can have both vetted and unvetted recordings, with different levels of accessibility. Additionally, HomeBank is an open repository for processing and analysis tools for HomeBank or similar data sets. HomeBank is flexible for users and contributors, making primary data available to researchers, especially those in child development, linguistics, and audio engineering. HomeBank facilitates researchers' access to large-scale data and tools, linking the acoustic, auditory, and linguistic characteristics of children's environments with a variety of variables including socioeconomic status, family characteristics, language trajectories, and disorders. Automated processing applied to daylong home audio recordings is now becoming widely used in early intervention initiatives, helping parents to provide richer speech input to at-risk children.

Relevance: 20.00%

Publisher:

Abstract:

There is an increasing need for 3D recording of archaeological sites and digital preservation of their artifacts. Digital photogrammetry with prosumer DSLR cameras is a particularly suitable tool for recording epigraphy, as it allows inscribed surfaces to be recorded with very high accuracy, often better than 2 mm, with only a short time spent in the field. When photogrammetry is combined with other computational photography techniques such as panoramic tours and Reflectance Transformation Imaging (RTI), the resulting workflow can rival traditional LiDAR-based methods. The difficulty, however, arises in the presentation of 3D data, which requires an enormous amount of storage and end-user sophistication. The proposed solution is to use game-engine technology and high-definition virtual tours to provide not only scholars but also the general public with an uncomplicated interface for interacting with the detailed 3D epigraphic data. The site of Stobi, located near Gradsko in the Former Yugoslav Republic of Macedonia (FYROM), was used as a case study to demonstrate the effectiveness of RTI, photogrammetry and virtual tour imaging working in combination. Nine sets of inscriptions from the archaeological site were chosen to demonstrate the range of application of the techniques. The chosen marble, sandstone and breccia inscriptions are representative of the varying levels of deterioration and degradation of the epigraphy at Stobi, with varying rates of decay and resulting legibility. The selection includes both treated and untreated stones, as well as stones in situ and in storage, and comprises both Latin and Greek inscriptions with content ranging from temple dedications to statue dedications. This combination of 3D modeling techniques presents a cost- and time-efficient solution both to increase the legibility of severely damaged stones and to digitally preserve the current state of the inscriptions.

Relevance: 20.00%

Publisher:

Abstract:

Situational awareness is achieved naturally by the human senses of sight and hearing working in combination. Automatic scene understanding aims to replicate this human ability using microphones and cameras in cooperation. In this paper, audio and video signals are fused and integrated at different levels of semantic abstraction. We detect and track a speaker who is relatively unconstrained, i.e., free to move indoors within an area larger than in comparable reported work, which is usually limited to round-table meetings. The system is relatively simple, consisting of just four microphone pairs and a single camera. Results show that the overall multimodal tracker is more reliable than single-modality systems, tolerating large occlusions and cross-talk. System evaluation is performed on both single- and multi-modality tracking. The performance improvement given by the audio–video integration and fusion is quantified in terms of tracking precision and accuracy as well as speaker diarisation error rate and precision–recall (recognition). Improvements over the closest works are evaluated: 56% in sound source localisation computational cost relative to an audio-only system, 8% in speaker diarisation error rate relative to an audio-only speaker recognition unit, and 36% on the precision–recall metric relative to an audio–video dominant-speaker recognition method.
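As a purely illustrative aside (not the tracker described in this paper), one common way to combine position estimates from two modalities is inverse-variance weighting; a minimal sketch, with hypothetical inputs, is:

```python
import numpy as np

def fuse_positions(audio_xy, audio_var, video_xy, video_var):
    # Inverse-variance weighted fusion of two 2-D position estimates.
    # Each modality supplies a position and a variance reflecting its
    # confidence (audio localisation is typically the noisier of the two).
    w_a, w_v = 1.0 / audio_var, 1.0 / video_var
    fused = (w_a * np.asarray(audio_xy) + w_v * np.asarray(video_xy)) / (w_a + w_v)
    fused_var = 1.0 / (w_a + w_v)
    return fused, fused_var

# Example: a noisy audio estimate pulled toward a more confident video estimate.
pos, var = fuse_positions((2.0, 1.5), 0.5, (2.3, 1.4), 0.1)
```

A full audio–visual tracker would embed such a fusion step inside a temporal filter (for example a Kalman or particle filter), so that occlusion or cross-talk in one modality is bridged by the other, which is the kind of robustness reported above.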

Relevance: 20.00%

Publisher: