936 results for SPEECH


Relevance: 20.00%

Abstract:

The fields of Rhetoric and Communication usually assume a competent speaker who is able to speak well with conscious intent; however, what happens when intent and comprehension are intact but communicative faculties are impaired (e.g., by stroke or traumatic brain injury)? What might a focus on communicative success be able to tell us in those instances? This project considers this question by examining communication disorders through identifying and analyzing patterns of (dis)fluent speech between 10 aphasic and 10 non-aphasic adults. The analysis in this report is centered on a collection of data provided by the AphasiaBank database. The database’s collection protocol guides aphasic and non-aphasic participants through a series of language assessments, and for my re-analysis of the database’s transcripts I consider what communicative success is and how it is demonstrated during a re-telling of the Cinderella narrative. I conducted a thorough examination of a set of participant transcripts to understand the contexts in which speech errors occur and how (dis)fluencies may follow from aphasic and non-aphasic participants’ speech patterns. An inductive mixed-methods approach, informed by grounded theory and by qualitative and linguistic analyses of the transcripts, served to balance the classification of data and provided a foundation for all sampling decisions. A close examination of the transcripts and the codes of the AphasiaBank database suggests that while the coding is abundant and detailed, further levels of coding and analysis may be needed to reveal underlying similarities and differences in aphasic vs. non-aphasic linguistic behavior. Through four successive levels of increasingly detailed analysis, I found that patterns of repair by aphasics and non-aphasics differed primarily in degree rather than kind. This finding may have therapeutic impact in reassuring aphasics that they are on the right track to achieving communicative fluency.
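The kind of transcript-level coding described above can be illustrated with a minimal sketch. The marker symbols below follow common CHAT transcription conventions (a small subset of the full code set used in AphasiaBank transcripts); the function name and the sample utterances are hypothetical, invented for illustration only.

```python
import re
from collections import Counter

# A small subset of CHAT transcription markers for (dis)fluency phenomena.
DISFLUENCY_MARKERS = {
    "retracing": r"\[//\]",       # self-correction / retracing
    "repetition": r"\[/\]",       # verbatim repetition
    "filled_pause": r"&-u[hm]+",  # fillers such as &-uh, &-um
}

def count_disfluencies(utterances):
    """Count disfluency markers across a list of CHAT-coded utterances."""
    counts = Counter()
    for line in utterances:
        for label, pattern in DISFLUENCY_MARKERS.items():
            counts[label] += len(re.findall(pattern, line))
    return counts

sample = [
    "*PAR: the prince [/] the prince found [//] fit the slipper .",
    "*PAR: &-uh Cinderella went to the &-um ball .",
]
print(count_disfluencies(sample))
```

A first pass like this yields raw frequency counts; the successive, increasingly detailed levels of analysis described in the abstract would then examine the contexts in which those markers occur.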

Relevance: 20.00%

Abstract:

Humankind today is challenged by numerous threats brought about by global change. Climate has been and is being modified by human activities, which calls for mitigation and adaptation measures at an unprecedented scale. Natural resources have been degraded by human development through land cover and land use changes, for which protective and restoration measures have to be taken by land users and governments in most countries of the North and the South. Low levels of economic development and insufficient policies in most developing countries have led to widespread poverty, which affects nearly half of the world’s population and directly threatens almost one billion people. Finally, uncontrolled economic growth has increased disparities between and within populations and has led to widespread environmental problems in many nations. Generating and sharing knowledge is key to addressing such global challenges. Knowledge can be used to develop the best solutions and to avert threats or repair damage. Research partnerships have proven to be suitable means of bridging the divides between knowledge societies and developing countries, thereby reducing disparities. Research partnerships are also tools for further capacity development and thereby lead to societal empowerment. Institutional settings allowing for research partnerships are needed both in the North and the South, so that the different networks can work together in a long-term enabling environment.

Relevance: 20.00%

Abstract:

This article examines social network users’ legal defences against content removal under the EU and ECHR frameworks, and the implications for the effective exercise of free speech online. A review of the Terms of Use and content moderation policies of two major social network services, Facebook and Twitter, shows that end users are unlikely to have a contractual defence against content removal. Under the EU and ECHR frameworks, they may demand the observance of free speech principles in state-issued blocking orders and their implementation by intermediaries, but cannot invoke this ‘fair balance’ test against voluntary removal decisions by the social network service. Drawing on practical examples, this article explores the threat to free speech created by this lack of accountability. Firstly, a shift from legislative regulation and formal injunctions to public-private collaborations allows state authorities to influence these ostensibly voluntary policies, thereby circumventing constitutional safeguards. Secondly, even absent state interference, the commercial incentives of social media cannot be guaranteed to coincide with democratic ideals. In light of the blurring of public and private functions in the regulation of social media expression, this article calls for increased accountability of social media services towards end users regarding the observance of free speech principles.

Relevance: 20.00%

Abstract:

Bone-anchored hearing implants (BAHI) are routinely used to alleviate the effects of the acoustic head shadow in single-sided sensorineural deafness (SSD). In this study, the influence of the directional microphone setting and of the maximum power output of the BAHI sound processor on speech understanding in noise was investigated in a laboratory setting. Eight adult BAHI users with SSD participated in this pilot study. Speech understanding in noise was measured using a new Slovak speech-in-noise test in two different spatial settings, either with speech coming from the side of the BAHI and noise from the front (S90N0) or vice versa (S0N90). In both spatial settings, speech understanding was measured without a BAHI, with a Baha BP100 in omnidirectional mode, with a BP100 in directional mode, with a BP110 power in omnidirectional mode and with a BP110 power in directional mode. In spatial setting S90N0, speech understanding in noise with either sound processor and in either directional mode was improved by 2.2-2.8 dB (p = 0.004-0.016). In spatial setting S0N90, speech understanding in noise was reduced by either BAHI, but was significantly better by 1.0-1.8 dB if the directional microphone system was activated (p = 0.046), when compared with the omnidirectional setting. With the limited number of subjects in this study, no statistically significant differences were found between the two sound processors.

Relevance: 20.00%

Abstract:

OBJECTIVE To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair-Schulz-Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speaking rates (three different speakers), webcams (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed. RESULTS Higher frame rates (>7 fps), higher camera resolutions (>640 × 480 px) and shorter picture/sound delays (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or by full-screen mode. There was a significant median gain of +8.5 percentage points (p = 0.009) in speech perception for all 21 CI users when visual cues were additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8 percentage points, p = 0.032). CONCLUSION Webcams have the potential to improve telecommunication for hearing-impaired individuals.
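The three thresholds reported in the results can be summarized in a small helper that flags whether a given video-call configuration meets the conditions associated with better speech reading. This is an illustrative sketch, not part of the study: the function and parameter names are invented, and reading the resolution threshold as "strictly above 640 × 480" is an assumption.

```python
# Thresholds the study associated with higher speech perception scores:
# frame rate > 7 fps, resolution > 640 x 480 px, picture/sound delay < 100 ms.
MIN_FPS = 7             # frames per second
MIN_PIXELS = 640 * 480  # total pixel count of the 640 x 480 reference (assumed strict)
MAX_DELAY_MS = 100      # picture/sound delay in milliseconds

def meets_speechreading_thresholds(fps, width, height, delay_ms):
    """Return True if a video-call setup satisfies all three reported thresholds."""
    return (fps > MIN_FPS
            and width * height > MIN_PIXELS
            and delay_ms < MAX_DELAY_MS)

print(meets_speechreading_thresholds(30, 1280, 720, 50))  # True
print(meets_speechreading_thresholds(5, 320, 240, 250))   # False
```

All three conditions are combined conjunctively here because the study reports each factor as independently associated with higher scores.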

Relevance: 20.00%

Abstract:

Facebook is a medium of social interaction producing its own style. I study how users from Malaga create this style through phonic features of the local variety and how they reflect on the use of these features. I then analyse the use of non-standard features by users from Malaga and compare them to an oral corpus. Results demonstrate that social factors work differently in real and virtual speech. Facebook communication is seen as a style serving to create social meaning and to express linguistic identity.

Relevance: 20.00%

Abstract:

The Internet has affected our lives and society in manifold, and partly fundamental, ways. It is therefore no surprise that one of the affected areas is language and communication itself. Over the last few years, online social networks have become a widespread and continuously expanding medium of communication. Being a new medium of social interaction, online social networks produce their own communication style, which in many cases differs considerably from real speech and is also perceived differently. The focus of analysis of my PhD thesis is how social network users from the city of Malaga create this virtual style by means of phonic features typical of the Andalusian variety of Spanish, and how the users’ language attitudes influence the use of these phonic features. The data collection was fourfold: 1) a main corpus was compiled from 240 informants’ utterances on Facebook and Tuenti; 2) a corpus of broad transcriptions of recordings with 120 people from Malaga served as a comparison; 3) a survey was carried out in which 240 participants rated the use of the phonetic variants in question on the axes “good–bad”, “correct–incorrect” and “beautiful–ugly”; 4) a survey was conducted with 240 participants who estimated how frequently the analysed features are used in Malaga. For the analysis, which is both quantitative and qualitative, ten variables were chosen. Results show that the studied variants are employed differently in virtual and real speech, depending on how people perceive these variants. In addition, the use of the features is constrained by social factors. In general, people from Malaga have a more positive attitude towards non-standard features when they are used in virtual speech rather than in real speech. Thus, virtual communication is seen as a style serving to create social meaning and to express linguistic identity.
These stylistic practices reflect an amalgam of social presuppositions about usage conventions and individual strategies for handling a new medium. In sum, the virtual style is an initiative deliberately taken by the users to create their real and virtual identities and to define their language attitudes towards the features of their variety of speech.
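A comparison of the kind described, of how often a phonic feature is realized in the virtual corpus versus the oral comparison corpus, can be sketched as a two-proportion z-test. The counts below are invented for illustration and do not come from the thesis data; the function name is hypothetical.

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-proportion z-test: is a feature used at different rates in two corpora?"""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)  # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented example: elision of intervocalic /d/ ("cansao" for "cansado"),
# a typical Andalusian feature, in 240 virtual vs. 120 oral utterance samples.
z = two_proportion_z(hits_a=90, n_a=240, hits_b=30, n_b=120)
print(round(z, 2))  # 2.37 -> rate differs between the two corpora at alpha = 0.05
```

A |z| above roughly 1.96 would indicate a significant difference at the 5% level; in a full analysis, social factors would additionally be modelled as covariates.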

Relevance: 20.00%

Abstract:

Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities in a process known as multisensory integration. This allows speech perception to be accurate even in environments in which one modality or the other is ambiguous in the context of noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While prior imaging studies have found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, they do not provide a clear mechanism for how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether activity within the STS is directly correlated with the strength of audiovisual perception. To better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception: when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information.
We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.