934 results for Sounds(waterways)
Abstract:
Limited data exist on cervical auscultation (CA) sounds in normal swallows of various food and fluid textures during the transitional feeding period of 4–36 months. This study documents the acoustic and perceptual parameters of swallowing sounds in healthy children aged 4–36 months across a range of food and fluid consistencies.
Abstract:
A rain forest dusk chorus consists of a large number of individuals of acoustically communicating species signaling at the same time. How different species achieve effective intra-specific communication in this complex and noisy acoustic environment is not well understood. In this study we examined acoustic masking interference in an assemblage of rain forest crickets and katydids. We used signal structures and spacing of signalers to estimate temporal, spectral and active space overlap between species. We then examined these overlaps for evidence of strategies of masking avoidance in the assemblage: we asked whether species whose signals have high temporal or spectral overlap avoid calling together. Whereas we found evidence that species with high temporal overlap may avoid calling together, there was no relation between spectral overlap and calling activity. There was also no correlation between the spectral and temporal overlaps of the signals of different species. In addition, we found little evidence that species calling in the understorey actively use spacing to minimize acoustic overlap. Increasing call intensity and tuning receivers, however, emerged as powerful strategies to minimize acoustic overlap. Effective acoustic overlaps were on average close to zero for most individuals in natural, multispecies choruses, even in the absence of behavioral avoidance mechanisms such as inhibition of calling or active spacing. Thus, call temporal structure, intensity and frequency together provide sufficient parameter space for several species to call together yet communicate effectively with little interference in the apparent cacophony of a rain forest dusk chorus.
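The pairwise overlap estimates described in this abstract can be illustrated with a minimal sketch. The frequency bands below are hypothetical and only show how a spectral-overlap fraction between two signalers' call bands might be computed; they are not the study's measurements or its actual method.

```python
def band_overlap(band_a, band_b):
    """Fraction of band_a (low, high in kHz) that is covered by band_b."""
    lo = max(band_a[0], band_b[0])
    hi = min(band_a[1], band_b[1])
    return max(0.0, hi - lo) / (band_a[1] - band_a[0])

# Hypothetical carrier-frequency bands (kHz) for two co-calling species
species_a = (4.0, 6.0)
species_b = (5.5, 9.0)

print(f"Spectral overlap of A by B: {band_overlap(species_a, species_b):.2f}")  # 0.25
```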
Abstract:
Giant salvinia (Salvinia molesta Mitchell) is an invasive aquatic fern that has been discovered at several locations in southeast Texas. Field reflectance measurements were made on two classes of giant salvinia [green giant salvinia (green foliage) and senesced giant salvinia (mixture of green and brown foliage)] and several associated species. Reflectance measurements showed that green giant salvinia could be best distinguished at the visible green wavelength, whereas senesced giant salvinia could generally be best separated at the near-infrared (NIR) wavelength. Green giant salvinia and senesced giant salvinia could be detected on color-infrared (CIR) aerial photographs, where they had pink and grayish-pink or olive-green image responses, respectively. Both classes of giant salvinia could be distinguished in reflectance measurements made on multiple dates and at several locations in southeast Texas. Likewise, they could be detected in CIR photographs obtained on several dates and at widely separated locations. Computer analysis of a CIR photographic transparency showed that green giant salvinia and senesced giant salvinia populations could be quantified. An accuracy assessment performed on the classified image showed an overall accuracy of 87.0%.
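As a brief illustration of the accuracy assessment mentioned above: overall accuracy is the fraction of correctly classified samples, i.e. the trace of the confusion matrix divided by its total count. The matrix below is invented for illustration (it is not the study's data) and simply yields a figure close to the reported 87.0%.

```python
import numpy as np

# Hypothetical confusion matrix (rows = reference classes, columns = classified classes)
# for green giant salvinia, senesced giant salvinia, and other cover; counts are invented.
confusion = np.array([
    [45,  3,  2],
    [ 4, 40,  6],
    [ 1,  4, 45],
])

overall_accuracy = np.trace(confusion) / confusion.sum()
print(f"Overall accuracy: {overall_accuracy:.1%}")  # 86.7% with these invented counts
```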
Abstract:
Congress established a legal imperative to restore the quality of our surface waters when it enacted the Clean Water Act in 1972. The act requires that existing uses of coastal waters such as swimming and shellfishing be protected and restored. Enforcement of this mandate is frequently measured in terms of the ability to swim and harvest shellfish in tidal creeks, rivers, sounds, bays, and ocean beaches. Public-health agencies carry out comprehensive water-quality sampling programs to check for bacterial contamination in coastal areas where swimming and shellfishing occur. Advisories that restrict swimming and shellfishing are issued when sampling indicates that bacteria concentrations exceed federal health standards. These actions place these coastal waters on the U.S. Environmental Protection Agency's (EPA) list of impaired waters, an action that triggers a federal mandate to prepare a Total Maximum Daily Load (TMDL) analysis that should result in management plans to restore degraded waters to their designated uses. When coastal waters become polluted, most people think that improper sewage treatment is to blame. Water-quality studies conducted over the past several decades, however, have shown that improper sewage treatment is a relatively minor source of this impairment. In states like North Carolina, it is estimated that about 80 percent of the pollution flowing into coastal waters is carried there by contaminated surface runoff. Studies show this runoff is the result of significant hydrologic modifications of the natural coastal landscape. When the coastal landscape was in its natural state in places such as North Carolina, there was virtually no surface runoff: most rainfall soaked into the ground, evaporated, or was used by vegetation. Surface runoff is largely an artificial condition created when land uses harden and drain the landscape surfaces. Roofs, parking lots, roads, fields, and even yards all result in dramatic changes in the natural hydrology of these coastal lands and generate huge amounts of runoff that flow over the land's surface into nearby waterways. (PDF contains 3 pages)
Abstract:
Natural sounds are structured on many time-scales. A typical segment of speech, for example, contains features that span four orders of magnitude: Sentences ($\sim 1$ s); phonemes ($\sim 10^{-1}$ s); glottal pulses ($\sim 10^{-2}$ s); and formants ($\sim 10^{-3}$ s). The auditory system uses information from each of these time-scales to solve complicated tasks such as auditory scene analysis [1]. One route toward understanding how auditory processing accomplishes this analysis is to build neuroscience-inspired algorithms which solve similar tasks and to compare the properties of these algorithms with properties of auditory processing. There is however a discord: Current machine-audition algorithms largely concentrate on the shorter time-scale structures in sounds, and the longer structures are ignored. The reason for this is two-fold. Firstly, it is a difficult technical problem to construct an algorithm that utilises both sorts of information. Secondly, it is computationally demanding to simultaneously process data both at high resolution (to extract short temporal information) and for long duration (to extract long temporal information). The contribution of this work is to develop a new statistical model for natural sounds that captures structure across a wide range of time-scales, and to provide efficient learning and inference algorithms. We demonstrate the success of this approach on a missing data task.
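To make the "four orders of magnitude" point concrete, here is a back-of-the-envelope sketch of how many samples each time-scale spans; the 16 kHz sampling rate is an assumption chosen for illustration and is not taken from the abstract.

```python
# Assumed sampling rate (illustrative only)
sample_rate_hz = 16_000

time_scales_s = {
    "sentence (~1 s)": 1.0,
    "phoneme (~1e-1 s)": 1e-1,
    "glottal pulse (~1e-2 s)": 1e-2,
    "formant (~1e-3 s)": 1e-3,
}

# A sentence-length window holds ~16,000 samples while a formant-scale feature
# spans only ~16, which is why jointly modelling both is computationally demanding.
for name, duration in time_scales_s.items():
    print(f"{name}: ~{int(sample_rate_hz * duration)} samples")
```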
Abstract:
Phocoenids are generally considered to be nonwhistling species that produce only high-frequency pulsed sounds. Here our results show that neonatal finless porpoises (Neophocaena phocaenoides) frequently produce clear low-frequency (2-3 kHz) pulsed signals, without distinct high-frequency energy, just after birth and can produce both low- (2-3 kHz) and high-frequency (>100 kHz) pulsed signals simultaneously until about 20 days postnatal. The results indicate that low-frequency signals of neonatal finless porpoises are not an early form of high-frequency signals and suggest that low- and high-frequency signals may be produced by different sound production mechanisms. (C) 2008 Acoustical Society of America.
Abstract:
Acoustic signals from wild Neophocaena phocaenoides sunameri were recorded in the waters off Liao-dong-wan Bay in the Bohai Sea, China. Signal analysis shows that N. p. sunameri produced "typical" phocoenid clicks. The peak frequencies f_p of the clicks ranged from 113 to 131 kHz, with an average of 121 ± 3.78 kHz (n = 71). The 3 dB bandwidths Δf ranged from 10.9 to 25.0 kHz, with an average of 17.5 ± 3.30 kHz. The signal durations Δt ranged from 56 to 109 μs, with an average of 80 ± 11.49 μs. The number of cycles N ranged from 7 to 13, with an average of 9 ± 1.48. With increasing peak frequency there was a slight tendency for bandwidth to decrease, which implies a nonconstant value of f_p/Δf. On occasion some click trains showed faint click energy below 70 kHz; however, this was possibly introduced by interference effects from multiple-pulse structures. The acoustic parameters of the clicks were compared between the investigated population and a riverine population of finless porpoise. (c) 2007 Acoustical Society of America.
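A quick arithmetic check of the f_p/Δf ratio discussed above, using only the mean values reported in the abstract:

```python
# Mean values reported in the abstract above
peak_frequency_khz = 121.0   # mean peak frequency f_p
bandwidth_3db_khz = 17.5     # mean 3 dB bandwidth Δf

ratio = peak_frequency_khz / bandwidth_3db_khz
print(f"Mean f_p / Δf ≈ {ratio:.1f}")  # ≈ 6.9; the abstract notes this ratio is not constant across clicks
```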
Abstract:
This article reflects on how the environment changes with the passage of time and how certain technologies, applied within a creative proposal, allow a significant part of that ephemeral heritage to be preserved and transmitted to future generations. The general purpose of this project is to achieve the sound synthesis of a specific and representative cityscape, the old train station in Cuenca (located in the heart of the city), so that it can be preserved and reproduced as a unique document of the present, verifiable in the future: a memory that treats sound as a time capsule. This soundscape was made to mark the arrival of the high-speed train in 2010 at a brand-new station on the outskirts of the city. The goal of this research was therefore to produce a synthetic document providing a sound memory capable of reflecting the significant social, cultural and logistical features of what was, until then, the only railway communication symbol of the city of Cuenca, from 1883 to the first decade of the 21st century.
Abstract:
Across languages, children with developmental dyslexia have a specific difficulty with the neural representation of the sound structure (phonological structure) of speech. One likely cause of their difficulties with phonology is a perceptual difficulty in auditory temporal processing (Tallal, 1980). Tallal (1980) proposed that basic auditory processing of brief, rapidly successive acoustic changes is compromised in dyslexia, thereby affecting phonetic discrimination (e.g. discriminating /b/ from /d/) via impaired discrimination of formant transitions (rapid acoustic changes in frequency and intensity). However, an alternative auditory temporal hypothesis is that the basic auditory processing of the slower amplitude modulation cues in speech is compromised (Goswami, 2002). Here, we contrast children's perception of a synthetic speech contrast (ba/wa) when it is based on the rate of change of frequency information (formant transition duration) versus the rate of change of amplitude modulation (rise time). We show that children with dyslexia have excellent phonetic discrimination based on formant transition duration, but poor phonetic discrimination based on envelope cues. The results explain why phonetic discrimination may be allophonic in developmental dyslexia (Serniclaes, 2004), and suggest new avenues for the remediation of developmental dyslexia. © 2010 Blackwell Publishing Ltd.
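A minimal sketch of the rise-time (amplitude-envelope) cue contrasted with formant-transition cues above. The overall duration and the 15 ms versus 50 ms rise times are illustrative assumptions, not the study's actual stimulus parameters.

```python
import numpy as np

SAMPLE_RATE = 16_000   # Hz, assumed for illustration
DURATION_S = 0.3       # total envelope duration, assumed

def linear_rise_envelope(rise_time_s):
    """Amplitude envelope that ramps linearly to full amplitude over rise_time_s."""
    t = np.arange(int(DURATION_S * SAMPLE_RATE)) / SAMPLE_RATE
    return np.clip(t / rise_time_s, 0.0, 1.0)

fast_rise = linear_rise_envelope(0.015)   # abrupt onset, /ba/-like rise-time cue
slow_rise = linear_rise_envelope(0.050)   # gradual onset, /wa/-like rise-time cue

# The two envelopes differ only in rise time, the envelope cue at issue in the abstract.
print(fast_rise[:4], slow_rise[:4])
```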
Abstract:
Understanding how the timing of motor output is coupled to sensory temporal information is largely based on synchronisation of movements through small motion gaps (finger taps) to mostly empty sensory intervals (discrete beats). This study investigated synchronisation of movements between target barriers over larger motion gaps when closing time gaps of intervals were presented as either continuous, dynamic sounds, or discrete beats. Results showed that although synchronisation errors were smaller for discrete sounds, the variability of errors was lower for continuous sounds. Furthermore, finger movement between targets was found to be more sinusoidal when continuous sensory information was presented during intervals compared to discrete. When movements were made over larger amplitudes, synchronisation errors tended to be more positive and movements between barriers more sinusoidal, than for movements over shorter amplitudes. These results show that the temporal control of movement is not independent from the form of the sensory information that specifies time gaps or the magnitude of the movement required for synchronisation.
Abstract:
Many studies have examined the processes involved in recognizing types of human action through sound, but little is known about whether the physical characteristics of an action (such as kinetic and kinematic parameters) can be perceived and imitated from sound. Twelve young healthy adults listened to recordings of footsteps on a gravel path taken from walks of different stride lengths (SL) and cadences. In one protocol, participants performed a real-time reenactment of the walking action depicted in a sound sample; in a second, participants listened to 2 different sound samples and discriminated differences in SL. In a 2nd experiment, these procedures were repeated using synthesized sounds derived from the kinetic interactions between the foot and walking surface. A 3rd experiment examined the influence of altered cadence on participants' ability to discriminate changes in SL. Participants significantly adapted their own SL and cadence according to those depicted in both real and synthesized sounds (p < .01). However, although participants accurately discriminated between large changes in SL, these perceptions were heavily influenced by temporal factors, that is, when cadence changed between samples. These findings show that spatial attributes of action sounds can be both mimicked and discriminated, even when only basic kinetic interactions present within the action are specified. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Abstract:
Human listeners seem to have an impressive ability to recognize a wide variety of natural sounds. However, there is surprisingly little quantitative evidence to characterize this fundamental ability. Here the speed and accuracy of musical-sound recognition were measured psychophysically with a rich but acoustically balanced stimulus set. The set comprised recordings of notes from musical instruments and sung vowels. In a first experiment, reaction times were collected for three target categories: voice, percussion, and strings. In a go/no-go task, listeners reacted as quickly as possible to members of a target category while withholding responses to distractors (a diverse set of musical instruments). Results showed near-perfect accuracy and fast reaction times, particularly for voices. In a second experiment, voices were recognized among strings and vice versa. Again, reaction times to voices were faster. In a third experiment, auditory chimeras were created to retain only spectral or temporal features of the voice. Chimeras were recognized accurately, but not as quickly as natural voices. Altogether, the data suggest rapid and accurate neural mechanisms for musical-sound recognition based on selectivity to complex spectro-temporal signatures of sound sources.