159 results for Vocalization
Abstract:
Unlike humans, who communicate in frequency bands between 250 Hz and 6 kHz, rats can communicate at frequencies above 18 kHz. Their vocalization types depend on the context and are normally associated with subjective or emotional states. Significant vocal changes due to administration of replacement testosterone have been reported in a trained tenor singer with hypogonadism. Speech-Language Pathology clinical practices are being sought by singers who sporadically use anabolic steroids in association with physical exercise. They report difficulties in reaching and keeping high notes, "breakage" in the passage of musical notes, and post-singing vocal fatigue. Those abnormalities could arise from the association of anabolic steroids and physical exercise. Thus, in order to verify whether this association could promote vocal changes, the maximum, minimum, and fundamental frequencies and the call duration of rats treated with anabolic steroids and physically trained (10 weeks duration) were evaluated. The vocalizations were obtained by handling the animals. At the end of that period, treated and trained rats showed a significant decrease in call duration, but not in the other parameters. The decrease in call duration could be associated with functional alterations in the vocal folds of treated and trained animals due to a synergism between anabolic steroids and physical training. (C) 2010 Acoustical Society of America. [DOI: 10.1121/1.3488350]
Abstract:
Screaming and other types of disruptive vocalization are commonly observed among nursing home residents. Depressive symptoms are also frequently seen in this group, although the relationship between disruptive vocalization and depressive symptoms is unclear. Accordingly, we sought to examine this relationship in older nursing home residents. We undertook a controlled comparison of 41 vocally disruptive nursing home residents and 43 non-vocally-disruptive nursing home residents. All participants were selected to have Mini-Mental State Examination (MMSE) scores of at least 10. Participants had a mean age of 81.0 years (range 63-97 years) and had a mean MMSE score of 17.8 (range 10-29). Nurse ratings of disruptive vocalization according to a semioperationalized definition were validated against the noisy behavior subscale of the Cohen-Mansfield Agitation Inventory. Subjects were independently rated for depressive symptoms by a psychiatrist using the Dementia Mood Assessment Scale, the Cornell Scale for Depression in Dementia, and the Depressive Signs Scale. Vocally disruptive nursing home residents scored significantly higher than controls on each of these three depression-in-dementia scales. These differences remained significant when the effects of possible confounding variables of cognitive impairment, age, and sex were removed. We conclude that depressive symptoms are associated with disruptive vocalization and may have an etiological role in the generation of disruptive vocalization behaviors in elderly nursing home residents.
Abstract:
Vocalization generated by the application of a noxious stimulus is an integrative response related to the affective-motivational component of pain. The rostral ventromedial medulla (RVM) plays an important role in descending pain modulation, and opiates play a major role in modulation of the antinociception mediated by the RVM. Further, it has been suggested that morphine mediates antinociception indirectly, by inhibition of tonically active GABAergic neurons. The current study evaluated the effects of opioid and GABA agonists and antagonists in the RVM on an affective-motivational pain model. Additionally, we investigated the opioidergic-GABAergic interaction in the RVM in the vocalization response to noxious stimulation. Microinjection of either morphine (4.4 nmol/0.2 µl) or bicuculline (0.4 nmol/0.2 µl) into the RVM decreased the vocalization index, whereas application of the GABA(A) receptor agonist muscimol (0.5 nmol/0.2 µl) increased the vocalization index during noxious stimulation. Furthermore, prior microinjection of either the opioid antagonist naloxone (2.7 nmol/0.2 µl) or muscimol (0.25 nmol/0.2 µl) into the RVM blocked the reduction in vocalization index induced by morphine. These observations suggest antinociceptive and pro-nociceptive roles of the opioidergic and GABAergic neurotransmitters in the RVM, respectively. Our data show that opioids have an antinociceptive effect in the RVM, while GABAergic neurotransmission is related to the facilitation of nociceptive responses. Additionally, our results indicate that the antinociceptive effect of the opioids in the RVM could be mediated by a disinhibition of tonically active GABAergic interneurons acting on the downstream projection neurons of the descending pain control system, indicating an interaction between the opioidergic and GABAergic pathways of pain modulation. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Introduction: Discrimination of species-specific vocalizations is fundamental for survival and social interactions. Its unique behavioral relevance has encouraged the identification of circumscribed brain regions exhibiting selective responses (Belin et al., 2004), while the role of network dynamics has received less attention. Those studies that have examined the brain dynamics of vocalization discrimination leave unresolved the timing and the inter-relationship between general categorization, attention, and speech-related processes (Levy et al., 2001, 2003; Charest et al., 2009). Given these discrepancies and the presence of several confounding factors, electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) to acoustically and psychophysically controlled non-verbal human and animal vocalizations. This revealed which region(s) exhibit voice-sensitive responses and in which sequence. Methods: Subjects (N=10) performed a living vs. man-made 'oddball' auditory discrimination task, such that on a given block of trials 'target' stimuli occurred 10% of the time. Stimuli were complex, meaningful sounds of 500 ms duration. There were 120 different sound files in total, 60 of which represented sounds of living objects and 60 man-made objects. The stimuli that were the focus of the present investigation were restricted to those of living objects within blocks where no response was required. These stimuli were further sorted into human non-verbal vocalizations and animal vocalizations. They were also controlled in terms of their spectrograms and formant distributions. Continuous 64-channel EEG was acquired through Neuroscan Synamps referenced to the nose, band-pass filtered 0.05-200 Hz, and digitized at 1000 Hz. Peri-stimulus epochs of continuous EEG (-100 ms to 900 ms) were visually inspected for artifacts, low-pass filtered at 40 Hz, and baseline corrected using the pre-stimulus period. Averages were computed for each subject separately. AEPs in response to animal and human vocalizations were analyzed with respect to differences in Global Field Power (GFP) and with respect to changes of the voltage configurations at the scalp (reviewed in Murray et al., 2008). The former provides a measure of the strength of the electric field irrespective of topographic differences; the latter identifies changes in spatial configurations of the underlying sources independently of the response strength. In addition, we utilized the local auto-regressive average distributed linear inverse solution (LAURA; Grave de Peralta Menendez et al., 2001) to visualize and statistically contrast the likely underlying sources of effects identified in the preceding analysis steps. Results: We found differential activity in response to human vocalizations over three periods in the post-stimulus interval, and this response was always stronger than that to animal vocalizations. The first differential response (169-219 ms) was a consequence of a modulation in strength of a common brain network localized to the right superior temporal sulcus (STS; Brodmann's Area (BA) 22) and extending into the superior temporal gyrus (STG; BA 41). A second difference (291-357 ms) also followed from strength modulations of a common network, with statistical differences localized to the left inferior precentral and prefrontal gyri (BA 6/45). These first two strength modulations correlated (Spearman's rho(8)=0.770; p=0.009), indicative of functional coupling between temporally segregated stages of vocalization discrimination.
A third difference (389-667 ms) followed from strength and topographic modulations and was localized to the left superior frontal gyrus (BA 10), although this third difference did not reach our spatial criterion of 12 contiguous voxels. Conclusions: We show that voice discrimination unfolds over multiple temporal stages, involving a wide network of brain regions. The initial stages of vocalization discrimination are based on modulations in response strength within a common brain network, with no evidence for a voice-selective module. The latency of this effect parallels that of face discrimination (Bentin et al., 2007), supporting the possibility that voice and face processes can mutually inform one another. Putative underlying sources (localized in the right STS; BA 22) are consistent with prior hemodynamic imaging evidence in humans (Belin et al., 2004). Our effect over the 291-357 ms post-stimulus period overlaps the 'voice-specific response' reported by Levy et al. (2001), and the estimated underlying sources (left BA 6/45) agree with previous findings in humans (Fecteau et al., 2005). These results challenge the idea that circumscribed and selective areas subserve conspecific vocalization processing.
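Global Field Power, the strength measure used in these analyses, is simply the spatial standard deviation of the potential across electrodes at each time point. A minimal NumPy sketch of the GFP contrast, assuming the epoched, averaged AEPs are already available as arrays (the array names, shapes, and the uncorrected paired t-test are illustrative assumptions, not the study's exact statistical procedure):

    import numpy as np
    from scipy import stats

    # Illustrative shapes: subjects x electrodes x time samples
    # (e.g., 10 subjects, 64 channels, 1000 samples for a -100 to 900 ms epoch at 1000 Hz)
    aep_human = np.random.randn(10, 64, 1000)   # placeholder AEPs to human vocalizations
    aep_animal = np.random.randn(10, 64, 1000)  # placeholder AEPs to animal vocalizations

    def gfp(aep):
        """Global Field Power: standard deviation across electrodes at each
        time point, computed separately for every subject."""
        return aep.std(axis=1)  # -> subjects x time

    gfp_human = gfp(aep_human)
    gfp_animal = gfp(aep_animal)

    # Paired comparison of response strength between conditions at each time point
    t_vals, p_vals = stats.ttest_rel(gfp_human, gfp_animal, axis=0)

    # Time points with a strength difference (uncorrected; the study additionally
    # required effects to persist over extended periods)
    significant_samples = np.where(p_vals < 0.05)[0]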
Abstract:
The vocalization of the shrews Suncus etruscus and Crocidura russula during normothermia and torpor is investigated. While frequency and call duration are independent of body temperature, the tremolo structure shows a spreading that correlates with falling body temperature. The particular calls emitted during torpor are defence calls, modified merely by physiological factors. Their main function might be of an intraspecific nature.
Abstract:
The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations are suggestive of statistically indistinguishable brain networks. First, responses were significantly stronger, but topographically indistinguishable to human versus animal vocalizations starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination transpires at times synchronous with that of face discrimination but is not functionally specialized.
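For reference, the strength and topography measures contrasted here have standard definitions (e.g., Murray et al., 2008), which the abstract does not spell out. With u_i(t) denoting the average-referenced potential at electrode i of N electrodes for one condition and v_i(t) for the other:

    \mathrm{GFP}_u(t) = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N} u_i(t)^2}

    \mathrm{DISS}(t) = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N}\left(\tfrac{u_i(t)}{\mathrm{GFP}_u(t)} - \tfrac{v_i(t)}{\mathrm{GFP}_v(t)}\right)^2}

A difference in GFP with negligible DISS is the signature of a strength modulation within a topographically, and hence statistically, indistinguishable network, which is the pattern reported for the first two effects above.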
Abstract:
Among the challenges of pig farming in today's competitive market is product traceability, which ensures, among many points, animal welfare. Vocalization is a valuable tool to identify situations of stress in pigs, and it can be used in welfare records for traceability. The objective of this work was to identify stress in piglets using vocalization, classifying this stress into three levels: no stress, moderate stress, and acute stress. An experiment was conducted on a commercial farm in the municipality of Holambra, São Paulo State, where vocalizations of twenty piglets were recorded during the castration procedure; the piglets were separated into two groups: without anesthesia and with local anesthesia with lidocaine base. For the recording of acoustic signals, a unidirectional microphone was connected to a digital recorder, in which signals were digitized at a sampling rate of 44,100 Hz. For evaluation of the sound signals, Praat® software was used, and different data mining algorithms were applied using Weka® software. The selection of attributes improved model accuracy; the best attribute selection was obtained by applying the Wrapper method, while the best classification algorithms were k-NN and Naive Bayes. According to the results, it was possible to classify the level of stress in pigs through their vocalization.
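The classification step described above used Weka®; a comparable pipeline can be sketched in Python with scikit-learn (the file name, attribute columns, and parameter choices below are illustrative assumptions, not the study's actual dataset or settings):

    import pandas as pd
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical table of acoustic attributes exported from Praat, one row per
    # vocalization, labeled no_stress / moderate_stress / acute_stress
    data = pd.read_csv("piglet_vocalizations.csv")
    X = data[["intensity_db", "pitch_hz", "formant1_hz", "formant2_hz"]]
    y = data["stress_level"]

    # The two classifiers reported as the best performers
    knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
    nb = GaussianNB()

    for name, model in [("k-NN", knn), ("Naive Bayes", nb)]:
        scores = cross_val_score(model, X, y, cv=10)
        print(f"{name}: mean accuracy = {scores.mean():.2f}")

    # Wrapper-style attribute selection (similar in spirit to Weka's Wrapper method):
    # greedily keep the attribute subset that maximizes cross-validated accuracy
    selector = SequentialFeatureSelector(knn, n_features_to_select=2, cv=10)
    selector.fit(X, y)
    print("selected attributes:", list(X.columns[selector.get_support()]))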
Abstract:
This study aimed to identify differences in the swine vocalization pattern according to animal gender and different stress conditions. A total of 150 barrows (castrated males) and 150 females (Dalland® genetic strain), aged 100 days, were used in the experiment. Pigs were exposed to different stressful situations: thirst (no access to water), hunger (no access to food), and thermal stress (THI exceeding 74). For the control treatment, animals were kept under a comfort situation (full access to food and water, with environmental THI lower than 70). Acoustic signals were recorded every 30 minutes, totaling six samples for each stress situation. Afterwards, the audio recordings were analyzed with Praat® 5.1.19 software, generating a sound spectrum. For determination of the stress conditions, data were processed with WEKA® 3.5 software, using the decision tree algorithm C4.5, known as J48 in the software environment, and 10-fold cross-validation (each fold holding out 10% of the samples). According to the decision tree, the most important acoustic attribute for the classification of stress conditions was sound intensity (root node). It was not possible to identify the animal gender from the vocal records using the tested attributes. A decision tree was generated for recognition of situations of swine hunger, thirst, and heat stress from records of sound intensity, pitch frequency, and the first formant.
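Weka's J48 implements C4.5; a rough analogue in Python is scikit-learn's DecisionTreeClassifier (which implements CART rather than C4.5, so this is only an approximation of the study's algorithm). A sketch with hypothetical column names, assuming the acoustic attributes have been exported to a table:

    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical table: one row per 30-minute sound sample with its condition label
    # (comfort, thirst, hunger, or heat_stress)
    data = pd.read_csv("swine_vocalizations.csv")
    X = data[["intensity_db", "pitch_hz", "formant1_hz"]]
    y = data["condition"]

    tree = DecisionTreeClassifier(random_state=0)

    # 10-fold cross-validation, as in the study
    scores = cross_val_score(tree, X, y, cv=10)
    print(f"mean accuracy: {scores.mean():.2f}")

    # Fit on all data and print the tree; the attribute at the root plays the role
    # the study attributes to sound intensity
    tree.fit(X, y)
    print(export_text(tree, feature_names=list(X.columns)))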
Abstract:
In order to reach higher broiler performance, farmers aim to reduce losses. One way to make this possible is by rearing sexed broilers, as males and females present different performance due to their physiological differences. Birds from different genetic strains also have distinct performance at the same age. Considering that sexed flocks may present higher performance, this study aimed to identify the sex of one-day-old chicks through their vocalization. This research also investigated the possibility of identifying the genetic strain from their vocalization attributes. A total of 120 chicks were used; half of them were from the Cobb® genetic strain and the other half from the Ross® genetic strain. From each group, 30 were males and 30 were females, previously separated by sex at the hatchery using their secondary physiological characteristics. Audio recording of the vocalizations was done inside a semi-anechoic chamber using a unidirectional microphone connected to the audio input of a digital recorder. Vocalizations were recorded for two minutes. The acoustic characteristics of the sounds were analyzed, calculating the fundamental frequency (pitch), the sound intensity, the first formant, and the second formant. Results indicated that the vocalizations of both sexes could be distinguished by the second formant, and the genetic strain was detected by both the second formant and the pitch.
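The acoustic attributes listed above (pitch, intensity, first and second formants) can be extracted programmatically; a sketch assuming the praat-parselmouth package, which exposes Praat's analyses in Python (the file handling, averaging choices, and default analysis settings are illustrative and would need tuning for chick calls, whose pitch lies well above Praat's default search range):

    import numpy as np
    import parselmouth

    def chick_call_features(wav_path):
        """Mean pitch, intensity, and first two formants of one recording."""
        snd = parselmouth.Sound(wav_path)

        # Pitch: the default 75-600 Hz search range would need raising for chicks
        pitch = snd.to_pitch()
        f0 = pitch.selected_array["frequency"]
        mean_pitch = np.nanmean(np.where(f0 > 0, f0, np.nan))  # skip unvoiced frames

        intensity = snd.to_intensity()
        mean_intensity = intensity.values.mean()

        # Formants via Burg's method, sampled at 50 points over the recording
        formants = snd.to_formant_burg()
        times = np.linspace(snd.xmin, snd.xmax, 50)
        f1 = np.nanmean([formants.get_value_at_time(1, t) for t in times])
        f2 = np.nanmean([formants.get_value_at_time(2, t) for t in times])

        return {"pitch": mean_pitch, "intensity": mean_intensity,
                "formant1": f1, "formant2": f2}

    # e.g. features = [chick_call_features(p) for p in recorded_files]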
Abstract:
Adult rats emit 22 kHz ultrasonic alarm calls in aversive situations. This type of call is a component of defensive behaviour and it functions predominantly to warn conspecifics about predators. Production of these calls is dependent on the central cholinergic system. The laterodorsal tegmental nucleus (LDT) and pedunculopontine tegmental nucleus (PPT) contain largely cholinergic neurons, which create a continuous column in the brainstem. The LDT projects to structures in the forebrain, and it has been implicated in the initiation of 22 kHz alarm calls. It was hypothesized that release of acetylcholine from the ascending LDT terminals in mesencephalic and diencephalic areas initiates 22 kHz alarm vocalization. Therefore, the tegmental cholinergic neurons should be more active during emission of alarm calls. The aim of this study was to demonstrate increased activity of LDT cholinergic neurons during emission of 22 kHz calls induced by air puff stimuli. Immunohistochemical staining of the enzyme choline acetyltransferase identified cell bodies of cholinergic neurons, and c-Fos immunolabeling identified active cells. Double-labeled cells were regarded as active cholinergic cells. There were significantly more (p
Abstract:
An ascending cholinergic projection, which originates in the laterodorsal tegmental nucleus (LDT), has been implicated in the initiation of ultrasonic vocalization. The goal of this study was to histochemically examine the activity of the LDT following ultrasonic calls induced by two methods. It was hypothesized that cholinergic LDT cells would be more active during air puff-induced vocalization than during carbachol-induced vocalization. Choline acetyltransferase (ChAT) and c-Fos protein were visualized histochemically as markers of cholinergic cells and cellular activity, respectively. Results indicated that animals vocalizing after carbachol, but not after air puff, had a significantly higher number of Fos-labeled nuclei within the LDT than non-vocalizing controls. A significantly higher number of double-labeled neurons was found in the LDT of vocalizing animals (in both groups) as compared to control conditions. Thus, there were significantly more active cholinergic cells in the LDT of vocalizing than non-vocalizing rats for both methods of call induction.
Abstract:
Ultrasonic vocalization plays an important role in intraspecies communication in rats. It has been well demonstrated that rats will emit 22 kHz vocalization in stressful or threatening situations. Although the neural mechanism underlying vocalization is not well understood, it is known that cholinergic input to the basal forebrain induces such alarm calls. A number of experiments have found that intracerebral injection of carbachol, a predominantly muscarinic agonist, into the anterior hypothalamic/preoptic area (AH/POA) reliably induces vocalization similar to naturally emitted ultrasonic calls. It has also been shown that carbachol has extensive inhibitory effects on neuronal firing in the same area. This result implies that the inhibitory effects of carbachol in the AH/POA could trigger vocalization, and that the GABAergic system could be involved. The purpose of this study was to investigate the effects of GABA agonists and antagonists on the production of carbachol-induced 22 kHz vocalization. The following hypotheses were examined: 1) application of GABA (a naturally occurring inhibitory neurotransmitter) will have a synergistic effect with carbachol, increasing vocalization; and 2) application of GABA antagonists (picrotoxin or bicuculline) will reduce carbachol-induced vocalization. A total of sixty rats were implanted with stainless steel guide cannulae in the AH/POA area. After recovery, animals were locally pretreated with 1) GABA (1-40 ng), 2) picrotoxin (1.5 µg) or bicuculline (0.03 ng), or 3) saline, before injection with carbachol (1.5 µg). The resulting vocalization was measured and quantified. The results indicate that pretreatment with GABA or GABA antagonists had no significant effect on vocalization. Local pretreatment with GABA did not potentiate the vocal response as measured by its duration, latency, and total number of calls. Similarly, pretreatment with picrotoxin or bicuculline had no effects on the same measures of vocalization. The results suggest that cholinoceptive neurons involved in the production of alarm calls are not under direct GABAergic control.
Abstract:
The article discusses the vocalization of cattle in six slaughter plants and the results indicate that "vocalization scoring could be used as a simple method for detecting welfare problems that need to be corrected".
Abstract:
We redescribe Hyla pulchella joaquini and describe its tadpole and vocalization. The taxonomic status of this subspecies is reevaluated, and on the basis of morphology, geographic distribution, and vocalization, we propose the elevation of this subspecies to the specific level under the name Hyla joaquini B. Lutz 1968. We also discuss the relationship of H. joaquini within the species groups of H. pulchella Dumeril and Bibron 1841 and H. circumdata (Cope 1871).
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)