44 results for Audio amplifiers
Abstract:
Recent debates about media literacy and the internet have begun to acknowledge the importance of active user engagement and interaction. It is not enough simply to access material online; users are also expected to comment upon it and re-use it. Yet how do these new user expectations fit within digital initiatives which increase access to audio-visual content but which prioritise access to and preservation of archives and online research rather than active user engagement? This article will address these issues of media literacy in relation to audio-visual content. It will consider how these issues are currently being addressed, focusing particularly on the high-profile European initiative EUscreen, which brings together 20 European television archives into a single searchable database of over 40,000 digital items. Yet creative re-use restrictions and copyright issues prevent users from re-working the material they find on the site. Instead of re-use, EUscreen offers access to and detailed contextualisation of its collection of material. But if the emphasis for resources within an online environment rests no longer upon access but on user engagement, what do EUscreen and similar sites offer to different users?
Abstract:
This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements, and can be used alongside, many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances, with corruption added to either or both of the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach, and also compared to any fixed-weighted integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams, and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
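The frame-by-frame weighted integration described above can be illustrated with a minimal sketch. This is not the authors' implementation: the two-class set, the log-likelihood values, and the grid search over weights are illustrative assumptions. Per frame, the audio and video log-likelihoods are combined with an exponent weight λ, and the λ that maximises the posterior of the best-scoring class is kept.

```python
import math

def combine(log_pa, log_pv, lam):
    """Weighted log-likelihood combination for one frame:
    lam * audio log-likelihood + (1 - lam) * video log-likelihood."""
    return {c: lam * log_pa[c] + (1.0 - lam) * log_pv[c] for c in log_pa}

def posterior_of_best(scores):
    """Normalise the combined scores into posteriors; return the maximum."""
    m = max(scores.values())
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    return max(exps.values()) / sum(exps.values())

def mwsp_weight(log_pa, log_pv, grid=None):
    """Illustrative grid search: pick the stream weight that maximises
    the best-class posterior for this frame."""
    grid = grid or [i / 10.0 for i in range(11)]
    return max(grid, key=lambda lam: posterior_of_best(combine(log_pa, log_pv, lam)))

# Example frame: the audio stream is noisy (nearly flat likelihoods),
# the video stream is discriminative, so a low audio weight is chosen.
log_pa = {"yes": -4.0, "no": -4.1}   # nearly uninformative audio
log_pv = {"yes": -1.0, "no": -6.0}   # confident video
lam = mwsp_weight(log_pa, log_pv)
print(lam)  # 0.0: all weight on the video stream for this frame
```

When the reliabilities are reversed, the same search drives the weight back towards the audio stream, which is the frame-by-frame adaptivity the abstract describes.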
Abstract:
Human listeners seem to be remarkably able to recognise acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues: both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. Then, we constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid sequential visual presentation" paradigm (RSVP), which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds: identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
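The sensitivity index d' reported above is the difference between the z-transformed hit and false-alarm rates in the yes/no task. A minimal computation follows; the hit and false-alarm values are made up for illustration and are not taken from the study.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate): the equal-variance
    signal-detection sensitivity index for a yes/no task."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical session: 84% hits on voice tokens, 20% false alarms
# on distractors.
print(round(d_prime(0.84, 0.20), 2))  # 1.84
```

A d' of 0 corresponds to chance performance (equal hit and false-alarm rates), which is why the abstract treats "above chance" and "d' above 1" as distinct thresholds.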
Abstract:
This paper presents the design of a novel 8-way power-combining transformer for use in a mm-wave power amplifier (PA). The combiner exhibits a record low insertion loss of 1.25 dB at 83.5 GHz. A complete circuit comprising a power splitter, a two-stage cascode PA array, a power combiner, and input/output matching elements was designed and realized in SiGe technology. Measured gain of at least 16.8 dB was obtained from 76.4 GHz to 85.3 GHz, with a peak of 19.5 dB at 83 GHz. The prototype delivered 12.5 dBm OP and 14 dBm saturated output power when operated from a 3.2 V DC supply voltage at 78 GHz. © 2013 IEEE.
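As a back-of-the-envelope check on what N-way combining buys: ideal power combining adds 10·log10(N) dB to the per-amplifier output, minus the combiner's insertion loss. The per-PA output power in the sketch below is a made-up illustration, not a figure from the paper; only the 8-way topology and the 1.25 dB insertion loss come from the abstract.

```python
import math

def combined_power_dbm(p_per_pa_dbm, n_ways, insertion_loss_db):
    """Output power (dBm) of an ideal N-way combiner fed by N identical
    PAs, reduced by the combiner's insertion loss (dB)."""
    return p_per_pa_dbm + 10.0 * math.log10(n_ways) - insertion_loss_db

# 8-way combining ideally adds 10*log10(8) ≈ 9.03 dB; with the 1.25 dB
# insertion loss reported above, the net combining gain is ≈ 7.78 dB.
print(round(combined_power_dbm(6.2, 8, 1.25), 2))  # hypothetical 6.2 dBm per PA
```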
High-Efficiency Harmonic-Peaking Class-EF Power Amplifiers with Enhanced Maximum Operating Frequency
Abstract:
The recently introduced Class-EF power amplifier (PA) has a peak switch voltage lower than that of the Class-E PA. However, the value of the transistor output capacitance at high frequencies is typically larger than the required Class-EF optimum shunt capacitance. Consequently, soft-switching operation that minimizes power dissipation during the off-to-on transition cannot be achieved at high frequencies. Two new Class-EF PA variants with transmission-line load networks, namely, third-harmonic-peaking (THP) and fifth-harmonic-peaking (FHP) Class-EF PAs, are proposed in this paper. These permit operation at higher frequencies at no expense to other PA figures of merit. Analytical expressions are derived in order to obtain circuit component values which satisfy the required Class-EF impedances at the fundamental frequency, all even harmonics, and the first few odd harmonics, while simultaneously providing impedance matching to a 50-Ω load. Furthermore, a novel open-circuit and shorted stub arrangement, which has substantial practical benefits, is proposed to replace the normal quarter-wave line connected at the transistor's drain. Using GaN HEMTs, two PA prototypes were built. Measured peak drain efficiency of 91% and output power of 39.5 dBm were obtained at 2.22 GHz for the THP Class-EF PA. The FHP Class-EF PA delivered output power of 41.9 dBm with 85% drain efficiency at 1.52 GHz.
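Harmonic terminations in transmission-line load networks like these are typically realized with stubs a quarter wavelength long at the harmonic to be shorted or peaked. A rough length estimate can be sketched as below; the effective permittivity is an assumed placeholder for a typical substrate, not a value from the paper.

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_mm(freq_hz, eps_eff):
    """Physical length (mm) of a quarter-wave transmission line,
    given the effective relative permittivity of the medium."""
    wavelength_m = C0 / (freq_hz * eps_eff ** 0.5)
    return 1e3 * wavelength_m / 4.0

# Third harmonic of the 2.22 GHz THP design point, assuming
# an effective permittivity of 2.2 (hypothetical substrate).
print(round(quarter_wave_mm(3 * 2.22e9, 2.2), 2))  # ≈ 7.59 mm
```

The scaling is the practical point: stub length shrinks linearly with harmonic order and with the square root of the effective permittivity, which is why higher-harmonic terminations stay physically compact.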
Abstract:
The development of 5G enabling technologies brings new challenges to the design of power amplifiers (PAs). In particular, there is a strong demand for low-cost, nonlinear PAs, which, however, introduce nonlinear distortions. On the other hand, contemporary expensive PAs show great power efficiency in their nonlinear region. Inspired by this trade-off between nonlinear distortion and efficiency, finding an optimal operating point is highly desirable. Hence, it is first necessary to fully understand how, and by how much, the performance of multiple-input multiple-output (MIMO) systems deteriorates with PA nonlinearities. In this paper, we first reduce the ergodic achievable rate (EAR) optimization from a power allocation problem to a power control problem with only one optimization variable, i.e., the total input power. Then, we develop a closed-form expression for the EAR where this variable is fixed. Since this expression is intractable for further analysis, two simple lower bounds and one upper bound are proposed. These bounds enable us to find the best input power and approach the channel capacity. Finally, our simulation results evaluate the EAR of MIMO channels in the presence of nonlinearities. An important observation is that the MIMO performance can be significantly degraded if we utilize the whole power budget.
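The observation that spending the whole power budget can hurt can be illustrated with a toy single-antenna model. Here the effective SNR is taken as gP/(1 + kP²), where the kP² term is an assumed stand-in for power-dependent nonlinear distortion; this model and its constants are illustrative and are not the paper's EAR expressions.

```python
import math

def rate_bits(p, gain=1.0, k=0.1):
    """Toy achievable rate log2(1 + SNR_eff) with a distortion term
    that grows with input power: SNR_eff = gain*P / (1 + k*P**2)."""
    return math.log2(1.0 + gain * p / (1.0 + k * p * p))

# Sweep the input power: the rate peaks at an interior point
# (P = 1/sqrt(k) for this model) and collapses at full budget.
for p in [0.5, 1.0, math.sqrt(10), 10.0, 100.0]:
    print(f"P = {p:6.2f}  rate = {rate_bits(p):.3f} bit/s/Hz")
```

For this toy model, P/(1 + kP²) is maximised at P = 1/√k, so the optimal operating point is strictly inside the power budget whenever the budget exceeds 1/√k; that is the qualitative behaviour the abstract reports for MIMO.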
Abstract:
Direct experience of social work in another country is making an increasingly important contribution to internationalising the social work academic curriculum, together with the cultural competency of students. However, at present this opportunity is still restricted to a limited number of students. The aim of this paper is to describe and reflect on the production of an audio-visual presentation representing the experience of three students who participated in an exchange with a social work programme in Pune, India. It describes and assesses the rationale, production and use of video to capture student learning from the Belfast/Pune exchange. We also describe the use of the video in a classroom setting with a year group of 53 students from a younger cohort. This exercise aimed to stimulate students' curiosity about international dimensions of social work and add to their awareness of poverty, social justice, cultural competence and community social work as global issues. Written classroom feedback informs our discussion of the technical as well as the pedagogical benefits and challenges of this approach. We conclude that there is some benefit to audio-visual presentation in helping students connect with diverse cultural contexts, but that a complementary discussion challenging stereotyped viewpoints and unconscious professional imperialism is also crucial.
Abstract:
Experience obtained in supporting mobile learning using podcast audio is reported. The paper outlines the design, storage and distribution of the material via a web site. An initial evaluation of the uptake of the approach in a final-year computing module was undertaken. Audio objects were tailored to meet different pedagogical needs, resulting in a repository of persistent glossary terms and disposable audio lectures distributed by podcasting. An aim of our approach is to document the interest from the students and to evaluate the potential of mobile learning for supplementing revision.
Abstract:
Situational awareness is achieved naturally by the human senses of sight and hearing in combination. Automatic scene understanding aims at replicating this human ability using microphones and cameras in cooperation. In this paper, audio and video signals are fused and integrated at different levels of semantic abstraction. We detect and track a speaker who is relatively unconstrained, i.e., free to move indoors within an area larger than in comparable reported work, which is usually limited to round-table meetings. The system is relatively simple, consisting of just 4 microphone pairs and a single camera. Results show that the overall multimodal tracker is more reliable than single-modality systems, tolerating large occlusions and cross-talk. System evaluation is performed on both single- and multi-modality tracking. The performance improvement given by the audio–video integration and fusion is quantified in terms of tracking precision and accuracy as well as speaker diarisation error rate and precision–recall (recognition). Improvements over the closest works are evaluated: 56% in sound source localisation computational cost over an audio-only system, 8% in speaker diarisation error rate over an audio-only speaker recognition unit, and 36% on the precision–recall metric over an audio–video dominant-speaker recognition method.
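One standard way to fuse location estimates from two modalities, as in the audio-video tracker above, is inverse-variance weighting: each stream's estimate is weighted by its reliability. The 1-D sketch below uses made-up positions and variances and is not the paper's tracker; it only illustrates why the fused estimate can stay reliable when one modality degrades.

```python
def fuse(est_audio, var_audio, est_video, var_video):
    """Inverse-variance (minimum-variance) fusion of two 1-D estimates.
    Returns the fused estimate and its (smaller) variance."""
    w_a = 1.0 / var_audio
    w_v = 1.0 / var_video
    fused = (w_a * est_audio + w_v * est_video) / (w_a + w_v)
    fused_var = 1.0 / (w_a + w_v)
    return fused, fused_var

# Audio localisation is coarse (high variance), video is precise: the
# fused position lands close to the video estimate. During a visual
# occlusion the video variance would grow, shifting weight back to audio.
pos, var = fuse(est_audio=1.8, var_audio=0.50, est_video=1.2, var_video=0.05)
print(round(pos, 3), round(var, 3))
```

The fused variance is always below either input variance, which is the formal sense in which the multimodal tracker can outperform both single-modality systems.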