1000 results for Authoritarian speech


Relevance: 20.00%

Publisher:

Abstract:

This article examines social network users’ legal defences against content removal under the EU and ECHR frameworks, and their implications for the effective exercise of free speech online. A review of the Terms of Use and content moderation policies of two major social network services, Facebook and Twitter, shows that end users are unlikely to have a contractual defence against content removal. Under the EU and ECHR frameworks, they may demand the observance of free speech principles in state-issued blocking orders and their implementation by intermediaries, but cannot invoke this ‘fair balance’ test against voluntary removal decisions by the social network service. Drawing on practical examples, this article explores the threat to free speech created by this lack of accountability. First, a shift from legislative regulation and formal injunctions to public-private collaborations allows state authorities to influence these ostensibly voluntary policies, thereby circumventing constitutional safeguards. Second, even absent state interference, the commercial incentives of social media cannot be guaranteed to coincide with democratic ideals. In light of the blurring of public and private functions in the regulation of social media expression, this article calls for increased accountability of social media services towards end users regarding the observance of free speech principles.

Bone-anchored hearing implants (BAHI) are routinely used to alleviate the effects of the acoustic head shadow in single-sided sensorineural deafness (SSD). In this study, the influence of the directional microphone setting and of the maximum power output of the BAHI sound processor on speech understanding in noise was investigated in a laboratory setting. Eight adult BAHI users with SSD participated in this pilot study. Speech understanding in noise was measured using a new Slovak speech-in-noise test in two different spatial settings, either with speech presented from the side of the BAHI and noise from the front (S90N0) or vice versa (S0N90). In both spatial settings, speech understanding was measured without a BAHI, with a Baha BP100 in omnidirectional mode, with a BP100 in directional mode, with a BP110 power in omnidirectional mode and with a BP110 power in directional mode. In spatial setting S90N0, speech understanding in noise with either sound processor and in either microphone mode was improved by 2.2-2.8 dB (p = 0.004-0.016). In spatial setting S0N90, speech understanding in noise was reduced by either BAHI, but was significantly better by 1.0-1.8 dB if the directional microphone system was activated (p = 0.046), when compared to the omnidirectional setting. With the limited number of subjects in this study, no statistically significant differences were found between the two sound processors.
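
The reported group-level benefit can be illustrated with a small numerical sketch. The per-subject speech reception thresholds (SRTs) below are hypothetical, since the abstract reports only the 2.2-2.8 dB range of improvements for n = 8 listeners, and the median-of-paired-differences summary is our own illustration rather than the study's actual analysis.

```python
# Illustrative sketch only: per-subject SRTs (speech reception thresholds,
# in dB SNR) are hypothetical; the abstract reports a 2.2-2.8 dB group
# improvement in the S90N0 setting for n = 8 listeners.
from statistics import median

srt_unaided = [-2.0, -1.5, -3.1, -0.8, -2.4, -1.9, -2.7, -1.2]  # no BAHI
srt_aided   = [-4.5, -4.0, -5.5, -3.2, -4.9, -4.1, -5.0, -3.8]  # with BAHI

# A lower SRT means speech is understood at a worse signal-to-noise ratio,
# so the benefit is the drop in SRT when the implant is worn.
benefit = [u - a for u, a in zip(srt_unaided, srt_aided)]
print(f"median benefit: {median(benefit):.2f} dB")
```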

OBJECTIVE To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcams (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed. RESULTS A higher frame rate (>7 fps), a higher camera resolution (>640 × 480 px) and a shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or the full-screen mode. There was a significant median gain of +8.5 percentage points (p = 0.009) in speech perception for all 21 CI users when visual cues were additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8 percentage points, p = 0.032). CONCLUSION Webcams have the potential to improve telecommunication of hearing-impaired individuals.
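
The quality thresholds in the results lend themselves to a small configuration check. The helper function below is hypothetical, and reading the reported cut-offs as at least 640 × 480 px, more than 7 fps and under 100 ms audio/video delay is our interpretation of the abstract.

```python
# Hypothetical helper: checks a video-call configuration against the
# thresholds the study associates with better speech reading. Interpreting
# the cut-offs as >= 640x480 px, > 7 fps and < 100 ms delay is an assumption.
def supports_speech_reading(width_px, height_px, fps, av_delay_ms):
    """True if all three reported quality thresholds are met."""
    return (width_px >= 640 and height_px >= 480
            and fps > 7
            and av_delay_ms < 100)

print(supports_speech_reading(1280, 720, 30, 50))   # high-quality call -> True
print(supports_speech_reading(320, 240, 5, 300))    # low-quality call -> False
```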

Facebook is a medium of social interaction producing its own style. I study how users from Malaga create this style through phonic features of the local variety and how they reflect on the use of these features. I then analyse the use of non-standard features by users from Malaga and compare them to an oral corpus. Results demonstrate that social factors work differently in real and virtual speech. Facebook communication is seen as a style serving to create social meaning and to express linguistic identity.

The Internet has affected our lives and society in manifold, and partly fundamental, ways. Therefore, it is no surprise that one of the affected areas is language and communication itself. Over the last few years, online social networks have become a widespread and continuously expanding medium of communication. Being a new medium of social interaction, online social networks produce their own communication style, which in many cases differs considerably from real speech and is also perceived differently. The focus of analysis of my PhD thesis is how social network users from the city of Malaga create this virtual style by means of phonic features typical of the Andalusian variety of Spanish, and how the users’ language attitude influences the use of these phonic features. The data collection was fourfold: 1) a main corpus was compiled from 240 informants’ utterances on Facebook and Tuenti; 2) a corpus of broad transcriptions of recordings with 120 people from Malaga served as a comparison; 3) a survey was carried out in which 240 participants rated the use of said phonetic variants on the axes “good–bad”, “correct–incorrect” and “beautiful–ugly”; 4) a survey was conducted in which 240 participants estimated how frequently the analysed features are used in Malaga. For the analysis, which is quantitative and qualitative, ten variables were chosen. Results show that the studied variants are employed differently in virtual and real speech, depending on how people perceive these variants. In addition, the use of the features is constrained by social factors. In general, people from Malaga have a more positive attitude towards non-standard features when these are used in virtual rather than real speech. Thus, virtual communication is seen as a style serving to create social meaning and to express linguistic identity.
These stylistic practices reflect an amalgam of social presuppositions about usage conventions and individual strategies for handling a new medium. In sum, the virtual style is an initiative deliberately taken by the users to create their real and virtual identities and to define their language attitudes towards the features of their variety of speech.
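
A minimal sketch of the kind of frequency comparison such a corpus study involves: counting how often a non-standard variant occurs in the virtual versus the oral corpus and testing the contrast with a 2 × 2 chi-square statistic. The counts are invented and the chi-square test is an assumption; the thesis states only that the analysis was quantitative and qualitative.

```python
# Invented counts: occurrences of one non-standard phonic variant vs. its
# standard form, in the virtual (Facebook/Tuenti) and the oral corpus.
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic (1 df) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

virtual_used, virtual_std = 180, 60   # variant in 75% of virtual tokens
oral_used, oral_std = 120, 120        # variant in 50% of oral tokens

stat = chi_square_2x2(virtual_used, virtual_std, oral_used, oral_std)
print(f"chi-square = {stat:.1f}")     # a large value: usage differs by medium
```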

Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities in a process known as multisensory integration. This allows speech perception to remain accurate even in environments in which one modality or the other is rendered ambiguous by noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While prior imaging studies have found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, they do not provide a clear mechanism for how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether activity within the STS is directly correlated with the strength of audiovisual perception. In order to better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception; when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information.
We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.
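
The reliability-weighting pattern described above can be illustrated with the standard maximum-likelihood cue-combination model, in which each modality is weighted by its inverse variance. This is a textbook illustration of the idea, not the model used in the fMRI analysis.

```python
# Textbook maximum-likelihood cue combination: each modality is weighted by
# its inverse variance (reliability), so a noisier channel contributes less
# to the fused estimate -- mirroring the connectivity pattern in the study.
def fuse(x_aud, var_aud, x_vis, var_vis):
    w_aud, w_vis = 1.0 / var_aud, 1.0 / var_vis
    return (w_aud * x_aud + w_vis * x_vis) / (w_aud + w_vis)

print(fuse(0.0, 1.0, 1.0, 1.0))  # equally reliable cues -> 0.5
print(fuse(0.0, 4.0, 1.0, 1.0))  # noisy auditory cue -> 0.8, toward vision
```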

The aim of this study was to investigate the effects of inner and heard speech on cerebral hemodynamics and oxygenation in the anterior prefrontal cortex (PFC) using functional near-infrared spectroscopy, and to test whether potential effects were caused by alterations in the arterial carbon dioxide pressure (PaCO2). Twenty-nine healthy adult volunteers performed six different tasks of inner and heard speech according to a randomized crossover design. During the tasks, we generally found decreases in PaCO2 (only for inner speech), tissue oxygen saturation (StO2), oxyhemoglobin concentration ([O2Hb]) and total hemoglobin concentration ([tHb]), and an increase in deoxyhemoglobin concentration ([HHb]). Furthermore, we found significant relations between changes in [O2Hb], [HHb], [tHb], or StO2 and the participants’ age, the baseline PETCO2, or certain speech tasks. We conclude that changes in breathing during the tasks led to lower PaCO2 (hypocapnia) for inner speech. During heard speech, no significant changes in PaCO2 occurred, but the decreases in StO2, [O2Hb], and [tHb] suggest that changes in PaCO2 were also involved here. Different verse types (hexameter and alliteration) led to different changes in [tHb], implying different brain activations. In conclusion, StO2, [O2Hb], [HHb], and [tHb] are affected by an interplay of both PaCO2 reactivity and functional brain activity.
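
The NIRS quantities in this abstract are linked by two standard identities, [tHb] = [O2Hb] + [HHb] and StO2 = [O2Hb]/[tHb] × 100%, so the reported pattern (lower [O2Hb] and [tHb], higher [HHb]) necessarily lowers StO2. A small sketch with hypothetical concentration values:

```python
# Standard NIRS identities; the concentration values (in micromolar) are
# hypothetical and chosen only to match the reported direction of change.
def tHb(o2hb, hhb):
    """Total hemoglobin is the sum of oxy- and deoxyhemoglobin."""
    return o2hb + hhb

def StO2(o2hb, hhb):
    """Tissue oxygen saturation in percent: oxygenated fraction of tHb."""
    return 100.0 * o2hb / tHb(o2hb, hhb)

baseline    = StO2(45.0, 18.0)  # [O2Hb] = 45 uM, [HHb] = 18 uM
during_task = StO2(43.5, 19.0)  # [O2Hb] down, [HHb] up, [tHb] down
print(f"StO2 drop: {baseline - during_task:.2f} %")
```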

The aim of the present study was (i) to investigate the effect of inner speech on cerebral hemodynamics and oxygenation, and (ii) to analyze whether these changes could be the result of alterations in the arterial carbon dioxide pressure (PaCO2). To this end, in seven adult volunteers, we measured changes in cerebral absolute [O2Hb], [HHb] and [tHb] concentrations and tissue oxygen saturation (StO2) over the left and right anterior prefrontal cortex (PFC), as well as changes in end-tidal CO2 (PETCO2), a reliable and accurate estimate of PaCO2. Each subject performed three different tasks, the inner recitation of hexameter verses (IRH), the inner recitation of prose (IRP), and a control task (mental arithmetic, MA), on different days according to a randomized crossover design. Statistical analysis was applied to the differences between the pre-baseline, the two task, and the four post-baseline periods. The two brain hemispheres and three tasks were tested separately. During the tasks, we found that (i) PETCO2 decreased significantly (p < 0.05) during the IRH (~3 mmHg) and MA (~0.5 mmHg) tasks, and (ii) [O2Hb] and StO2 decreased significantly during the IRH (~1.5 μM; ~2%), IRP (~1 μM; ~1.5%), and MA (~1 μM; ~1.5%) tasks. During the post-baseline period, [O2Hb] and [tHb] of the left PFC decreased significantly after the IRP and MA tasks (~1 μM and ~2 μM, respectively). In conclusion, the study showed that inner speech affects PaCO2, probably due to changes in respiration. Although a decrease in PaCO2 causes cerebral vasoconstriction and could potentially explain the decreases in [O2Hb] and StO2 during inner speech, the changes in PaCO2 were significantly different between the three tasks (no change in PaCO2 for MA) but led to very similar changes in [O2Hb] and StO2. Thus, the cerebral changes cannot be explained by PaCO2 alone.