Abstract:
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.
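The segmentation problem in the audio-alone condition can be illustrated with a small sketch. The abstract does not give the study's syllable inventory or statistics, so the syllables, "words", and the boundary threshold below are invented for illustration; transitional probabilities between syllables are the cue classically used in artificial-language segmentation work.

```python
# Sketch: segmenting a continuous syllable stream by transitional
# probabilities (TPs). Lexicon, stream, and threshold are hypothetical.
from collections import Counter
import random

def transitional_probabilities(stream):
    """P(next syllable | current syllable) for each adjacent pair."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(stream, tps, threshold=0.5):
    """Posit a word boundary wherever the pair TP dips below threshold."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Hypothetical three-syllable words concatenated in random order:
# within-word TPs are 1.0, across-boundary TPs are about 1/3.
random.seed(0)
lexicon = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]
stream = [s for w in random.choices(lexicon, k=200) for s in w]
tps = transitional_probabilities(stream)
print(segment(stream, tps)[:3])
```

Because syllables are unique to their word here, within-word TPs stay at 1.0 and the dips at word boundaries recover the lexicon exactly; real streams are noisier.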
Abstract:
The aim was to investigate the effect of different speech tasks, i.e. recitation of prose (PR), alliteration (AR) and hexameter (HR) verses, and a control task (mental arithmetic (MA) with voicing of the result), on end-tidal CO2 (PETCO2), cerebral hemodynamics and oxygenation. CO2 levels in the blood are known to strongly affect cerebral blood flow. Speech changes the breathing pattern and may affect CO2 levels. Measurements were performed on 24 healthy adult volunteers during the performance of the 4 tasks. Tissue oxygen saturation (StO2) and absolute concentrations of oxyhemoglobin ([O2Hb]), deoxyhemoglobin ([HHb]) and total hemoglobin ([tHb]) were measured by functional near-infrared spectroscopy (fNIRS), and PETCO2 by a gas analyzer. Statistical analysis was applied to the differences between the baseline period before the task, the 2 recitation periods, and the 5 baseline periods after the task. The 2 brain hemispheres and 4 tasks were tested separately. A significant decrease in PETCO2 was found during all 4 tasks, with the smallest decrease during the MA task. During the recitation tasks (PR, AR and HR), a statistically significant (p < 0.05) decrease occurred for StO2 during PR and AR in the right prefrontal cortex (PFC) and during AR and HR in the left PFC. [O2Hb] decreased significantly during PR, AR and HR in both hemispheres. [HHb] increased significantly during the AR task in the right PFC. [tHb] decreased significantly during HR in the right PFC and during PR, AR and HR in the left PFC. During the MA task, StO2 increased and [HHb] decreased significantly. We conclude that changes in breathing (hyperventilation) during the tasks led to lower CO2 pressure in the blood (hypocapnia), which was predominantly responsible for the measured changes in cerebral hemodynamics and oxygenation.
In conclusion, our findings demonstrate that PETCO2 should be monitored during functional brain studies investigating speech with neuroimaging modalities such as fNIRS and fMRI, to ensure a correct interpretation of changes in hemodynamics and oxygenation.
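The baseline-versus-task comparisons described above reduce to paired within-subject contrasts. As a minimal sketch (with invented PETCO2 values, not the study's data), a paired t statistic on task-minus-baseline changes looks like this:

```python
# Sketch: paired t statistic for a within-subject baseline-vs-task
# contrast. PETCO2 values (mmHg) are invented for illustration.
import math

def paired_t(before, after):
    """Paired t statistic for the mean of (after - before)."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-subject PETCO2: baseline vs. during a recitation task.
baseline = [38.2, 39.5, 37.8, 40.1, 38.9, 39.0, 37.5, 38.4]
task = [33.1, 34.8, 32.9, 35.6, 33.5, 34.2, 32.8, 33.7]
t = paired_t(baseline, task)
print(round(t, 2))  # strongly negative: PETCO2 drops during speech
```

With degrees of freedom n - 1, the statistic would then be compared against a t distribution for the significance tests the abstract reports.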
Abstract:
From the moment of their birth, a person's life is determined by their sex. Ms. Goroshko wants to know why this difference is so striking, why society is so concerned to sustain it, and how it is able to persist even when certain national or behavioural stereotypes are erased between people. She is convinced of the existence of not only social but also biological differences between men and women, and set herself the task, in a manuscript totalling 126 pages, written in Ukrainian and including extensive illustrations, of analysing these distinctions as they are manifested in language. She points out that, even before 1900, certain stylistic differences between the ways that men and women speak had been noted. Since then it has become possible, for instance in the case of Japanese, to point to examples of male and female sub-languages. In general, one can single out the following characteristics. Males tend to write with less fluency, to refer to events in a verb-phrase, to be time-oriented, to involve themselves more in their references to events, to locate events in their personal sphere of activity, and to refer less to others. Therefore, concludes Ms. Goroshko, the male is shown to be more active, more ego-involved in what he does, and less concerned about others. Women, in contrast, tend to be more fluent, to refer to events in a noun-phrase, to be less time-oriented, to be less involved in their event-references, to locate events within their interactive community, and to refer more to others. They spend much more time discussing personal and domestic subjects, relationship problems, family, health and reproductive matters, weight, food and clothing, men, and other women. As regards discourse strategies, Ms. Goroshko notes the following.
Men more often begin a conversation; they make more utterances, and these utterances are longer; they make more assertions, speak less carefully, generally determine the topic of conversation, speak more impersonally, use more vulgar expressions, and use fewer diminutives and more imperatives. Women's speech strategies, apart from being the opposite of those enumerated above, also contain more euphemisms, polite forms, apologies, laughter and crying. All of the above leads Ms. Goroshko to conclude that the differences between male and female speech forms are more striking than the similarities. Furthermore, she is convinced that the biological divergence between the sexes is what generates the verbal divergence, and that social factors can only intensify or diminish the differentiation in verbal behaviour established by the sex of a person. Bearing all this in mind, Ms. Goroshko set out to construct a grammar of male and female styles of speaking within Russian. One of her most important research tools was a certain type of free association test. She took a list comprising twelve stimuli (to love, to have, to speak, to fuck, a man, a woman, a child, the sky, a prayer, green, beautiful) and gave it to a group of participants specially selected, according to preliminary psychological testing, for the high levels of masculinity or femininity they displayed. Preliminary responses revealed that the female reactions were more diverse than the male ones; there were more sentences and word combinations in the female reactions; men gave more negative responses to the stimuli and sometimes did not want to react at all; women reacted more to adjectives and men to nouns; and that, surprisingly, women's reactions to the words a man, to love and a child were coloured more negatively (Ms. Goroshko is inclined to attribute this to the present economic situation in Russia). Another test performed by Ms. Goroshko was the so-called "defective text" test developed by A.A. Brudny.
All participants were given packets of complete sentences, which had been taken from a text and then mixed at random. The task was to reconstruct the original text. There were three types of test: the first descriptive, the second narrative, and the third logical. Ms. Goroshko created computer programmes to analyse the results. She found that none of the reconstructed texts coincided with the original; they differed both from the original text and amongst themselves, and there were many more disparities in the male than in the female texts. In the descriptive and logical texts the differences manifested themselves more clearly in the male texts, and in the narrative texts in the female texts. The widest dispersal of values was observed at the outset, while the female text ending was practically coincident with the original (in contrast to the male ending). The greatest differences in text reconstruction, for both males and females, were registered in the middle of the texts. Women, Ms. Goroshko claims, were more sensitive to the semantic structure of the texts, since they assembled the narrative text much more accurately than the other two, while the men assembled the logical text more accurately. Texts written by women were assembled more accurately by women, and texts by men by men. On the basis of computer analysis, Ms. Goroshko found that female speech was substantially more emotional. This emotionality was expressed by various means: hyperbole, metaphor, comparison, epithets, enumeration, and with the aid of interjections, rhetorical questions and exclamations. The level of literacy was higher for female speech, and there were fewer mistakes in grammar and spelling in the female texts. The last stage of Ms. Goroshko's research concerned the social stereotypes of beliefs about men and women in Russian society today. A large number of respondents were asked questions such as "What merits must a woman possess?", "What are male vices and virtues?", etc.
After statistical processing, an image of the modern man and woman, as it exists in the minds of modern Russian men and women, emerged. Ms. Goroshko believes that her findings are significant not only within the field of linguistics. She has already worked successfully on anonymous texts, determining the sex of the author, and consequently believes that in the future her research may even be of benefit to forensic science.
Abstract:
Users of cochlear implant systems, that is, of auditory aids which stimulate the auditory nerve at the cochlea electrically, often complain about poor speech understanding in noisy environments. Despite the proven advantages of multimicrophone directional noise reduction systems for conventional hearing aids, only one major manufacturer has so far implemented such a system in a product, presumably because of the added power consumption and size. We present a physically small (intermicrophone distance 7 mm) and computationally inexpensive adaptive noise reduction system suitable for behind-the-ear cochlear implant speech processors. Supporting algorithms, which allow the adjustment of the opening angle and the maximum noise suppression, are proposed and evaluated. A portable real-time device for testing in real acoustic environments is presented.
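The abstract does not specify the adaptive algorithm, so as a hedged illustration of the general technique only, here is a generic normalized-LMS noise canceller: one microphone signal serves as the noisy primary, the other as a noise reference that is adaptively filtered and subtracted. All signals, filter lengths, and step sizes are invented; this is not the authors' system.

```python
# Sketch: generic normalized-LMS adaptive noise cancellation, the kind of
# computationally cheap adaptive filtering a small two-microphone device
# can run. Hypothetical signals and parameters throughout.
import math
import random

def nlms_cancel(primary, reference, taps=4, mu=0.1, eps=1e-6):
    """Adaptively subtract a filtered noise reference from the primary mic."""
    w = [0.0] * taps
    buf = [0.0] * taps
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                    # newest reference sample first
        y = sum(wi * xi for wi, xi in zip(w, buf))
        e = d - y                               # error = noise-reduced output
        norm = eps + sum(xi * xi for xi in buf)
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]
        out.append(e)
    return out

random.seed(1)
n = 2000
speech = [math.sin(2 * math.pi * 0.01 * i) for i in range(n)]
noise = [random.gauss(0, 1) for _ in range(n)]
# The noise reaches the primary mic through a short hypothetical filter.
noise_at_primary = [0.8 * noise[i] + 0.3 * noise[i - 1] for i in range(1, n)]
target = speech[1:]
primary = [s + v for s, v in zip(target, noise_at_primary)]
cleaned = nlms_cancel(primary, noise[1:])
```

After the filter converges, the residual tracks the speech component because the reference is correlated with the interfering noise but not with the speech.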
Abstract:
The fields of Rhetoric and Communication usually assume a competent speaker who is able to speak well with conscious intent; however, what happens when intent and comprehension are intact but communicative facilities are impaired (e.g., by stroke or traumatic brain injury)? What might a focus on communicative success be able to tell us in those instances? This project considers this question by examining communication disorders through identifying and analyzing patterns of (dis)fluent speech between 10 aphasic and 10 non-aphasic adults. The analysis in this report is centered on a collection of data provided by the AphasiaBank database. The database's collection protocol guides aphasic and non-aphasic participants through a series of language assessments, and for my re-analysis of the database's transcripts I consider what communicative success is and how it is demonstrated during a re-telling of the Cinderella narrative. I conducted a thorough examination of a set of participant transcripts to understand the contexts in which speech errors occur, and how (dis)fluencies may follow from aphasic and non-aphasic participants' speech patterns. An inductive mixed-methods approach, informed by grounded theory, qualitative, and linguistic analyses of the transcripts, functioned as a means to balance the classification of data, providing a foundation for all sampling decisions. A close examination of the transcripts and the codes of the AphasiaBank database suggests that while the coding is abundant and detailed, further levels of coding and analysis may be needed to reveal underlying similarities and differences in aphasic vs. non-aphasic linguistic behavior. Through four successive levels of increasingly detailed analysis, I found that patterns of repair by aphasics and non-aphasics differed primarily in degree rather than kind. This finding may have therapeutic impact, in reassuring aphasics that they are on the right track to achieving communicative fluency.
Abstract:
This article examines social network users' legal defences against content removal under the EU and ECHR frameworks, and their implications for the effective exercise of free speech online. A review of the Terms of Use and content moderation policies of two major social network services, Facebook and Twitter, shows that end users are unlikely to have a contractual defence against content removal. Under the EU and ECHR frameworks, they may demand the observance of free speech principles in state-issued blocking orders and their implementation by intermediaries, but cannot invoke this 'fair balance' test against the voluntary removal decisions by the social network service. Drawing on practical examples, this article explores the threat to free speech created by this lack of accountability. Firstly, a shift from legislative regulation and formal injunctions to public-private collaborations allows state authorities to influence these ostensibly voluntary policies, thereby circumventing constitutional safeguards. Secondly, even absent state interference, the commercial incentives of social media cannot be guaranteed to coincide with democratic ideals. In light of the blurring of public and private functions in the regulation of social media expression, this article calls for the increased accountability of social media services towards end users regarding the observance of free speech principles.
Abstract:
Facebook is a medium of social interaction producing its own style. I study how users from Malaga create this style through phonic features of the local variety and how they reflect on the use of these features. I then analyse the use of non-standard features by users from Malaga and compare them to an oral corpus. Results demonstrate that social factors work differently in real and virtual speech. Facebook communication is seen as a style serving to create social meaning and to express linguistic identity.
Abstract:
Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities in a process known as multisensory integration. This allows speech perception to be accurate even in environments in which one modality or the other is ambiguous in the context of noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While prior imaging studies have found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, they do not provide a clear mechanism as to how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether the activity within the STS is directly correlated with the strength of audiovisual perception. In order to better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception: when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information.
We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine if activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.
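The reliability-dependent weighting described above is often formalized with the standard maximum-likelihood model of cue combination, in which each modality's estimate is weighted by its inverse variance. This model is a textbook formalism, not taken from the study, and the numbers are purely illustrative:

```python
# Sketch: inverse-variance (maximum-likelihood) combination of an
# auditory and a visual estimate. All values are hypothetical.
def fuse(mu_a, var_a, mu_v, var_v):
    """Weight each cue by its reliability (inverse variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    mu = w_a * mu_a + w_v * mu_v
    var = 1 / (1 / var_a + 1 / var_v)   # fused variance beats either cue
    return mu, var, w_a

# Noisy (unreliable) audio, clear video: the visual estimate dominates.
mu, var, w_a = fuse(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=0.25)
print(round(mu, 3), round(var, 3), round(w_a, 3))
```

The fused estimate sits near the reliable visual cue and its variance is lower than either unimodal variance, mirroring the connectivity pattern the study reports.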
Abstract:
Introduction In several studies, we found that during guided rhythmic speech exercises, a decrease in cerebral hemodynamics and oxygenation occurred as the result of a decrease in the partial pressure of carbon dioxide in the arterial blood (PaCO2) during speaking. To further explore the effect of PaCO2 variations on cerebral hemodynamics and oxygenation, the aim of the present study was to investigate the impact of spoken, inner and heard speech tasks on these parameters. Material and Methods Speech tasks included recitation of, inner recitation of, or listening to hexameter verses, alliteration verses or prose, or performing mental arithmetic. The following physiological parameters were measured: tissue oxygen saturation (StO2) and absolute concentrations of oxyhemoglobin, deoxyhemoglobin and total hemoglobin (over the left and right anterior prefrontal cortex, using an ISS OxiplexTS frequency-domain near-infrared spectrometer), and end-tidal CO2 (PETCO2; using Nellcor N1000 and Datex NORMOCAP capnographs). Statistical analysis was applied to the differences between the baseline, 2 task, and 3 post-baseline periods. Data of 3 studies with 24, 7 and 29 healthy subjects, respectively, were combined, and linear regression analyses were calculated. Results Linear regression analyses revealed significant relations between changes in oxyhemoglobin, deoxyhemoglobin, total hemoglobin or StO2 and the participants' age, the baseline PETCO2 or certain speech tasks. While hexameter verses affected changes during the tasks, alliteration verses only affected changes during the recovery phase.
Discussion and Conclusion The observed effects in hemodynamics and oxygenation indicate a combination of neurovascular coupling (increased neuronal activity leading to an increase in the cerebral metabolic rate of oxygen, resulting in an increase in cerebral blood flow/volume) and CO2 reactivity (increased breathing during speech tasks causing a decrease in PaCO2, leading to vasoconstriction and a decrease in cerebral blood flow). The neurovascular coupling characteristics are task-dependent. References Scholkmann F, Gerber U, Wolf M, Wolf U. End-tidal CO2: An important parameter for a correct interpretation in functional brain studies using speech tasks. Neuroimage 2013;66:71-79. Scholkmann F, Wolf M, Wolf U. The effect of inner speech on arterial CO2, cerebral hemodynamics and oxygenation – A functional NIRS study. Adv Exp Med Biol 2013;789:81-87.
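The linear regression analyses reported above can be sketched in miniature. The data below are synthetic, generated from a known slope so the ordinary-least-squares fit can be checked against the generating model; nothing here is study data, and the predictor/response pairing (change in [O2Hb] against baseline PETCO2) is only one of the relations the abstract mentions.

```python
# Sketch: simple ordinary-least-squares regression of a hemodynamic
# change on baseline PETCO2. Synthetic data with a known slope.
import random

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

random.seed(2)
baseline_petco2 = [random.uniform(34, 42) for _ in range(60)]
# Hypothetical generating model: higher baseline PETCO2, larger [O2Hb] drop.
d_o2hb = [1.5 - 0.12 * x + random.gauss(0, 0.2) for x in baseline_petco2]
a, b = fit_line(baseline_petco2, d_o2hb)
print(round(b, 3))  # negative slope, near the generating value
```

With several predictors (age, baseline PETCO2, task indicators), the same idea extends to the multiple regressions the study combined across its 3 datasets.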