133 results for Vehicular video streaming
Abstract:
This paper presents an evaluation of the use of videoconferencing for learning and teaching in a United Kingdom higher education institution involved in initial teacher education. Students had the opportunity to observe naturalistic teaching practice without being physically present in the classroom. The study consisted of semi-structured interviews with the co-ordinator of the link, the head of ICT services in Stranmillis University College and the teacher of the classroom being observed; students were also invited to complete an online questionnaire. The views of the students and of these three interviewees were then triangulated to gain an overall view of the effectiveness of the videoconferencing link. The interviews suggested that students benefited in pedagogical terms. In the early stages of the project, the teacher thought it acted as a form of classroom control. Technical problems were encountered initially, and camera control was modified in the light of these. The online questionnaire suggested that students viewed the experience positively and were impressed with the content, technical quality and potential benefits of this application of new technologies.
Abstract:
With the significant increase in the number of digital cameras used for various purposes, there is a pressing demand for advanced video analysis techniques that can systematically interpret and understand the semantics of video content recorded for security surveillance, intelligent transportation, health care, and video retrieval and summarization. Understanding and interpreting human behaviour from video analysis faces substantial challenges due to non-rigid human motion, self- and mutual occlusions, and changes in lighting conditions. To address these problems, advanced image and signal processing technologies such as neural networks, fuzzy logic, probabilistic estimation theory and statistical learning have been extensively investigated.
Abstract:
Massively parallel networks of highly efficient, high performance Single Instruction Multiple Data (SIMD) processors have been shown to enable FPGA-based implementation of real-time signal processing applications with performance and cost comparable to dedicated hardware architectures. This is achieved by exploiting simple datapath units with deep processing pipelines. However, these architectures are highly susceptible to pipeline bubbles resulting from data and control hazards; the only way to mitigate against these is manual interleaving of application tasks on each datapath, since no suitable automated interleaving approach exists. In this paper we describe a new automated integrated mapping/scheduling approach to map algorithm tasks to processors and a new low-complexity list scheduling technique to generate the interleaved schedules. When applied to a spatial Fixed-Complexity Sphere Decoding (FSD) detector for next-generation Multiple-Input Multiple-Output (MIMO) systems, the resulting schedules achieve real-time performance for IEEE 802.11n systems on a network of 16-way SIMD processors on FPGA, enable better performance/complexity balance than current approaches and produce results comparable to handcrafted implementations.
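The mapping/scheduling flow itself is not detailed in the abstract; as a rough illustration of the general idea only, a greedy list scheduler that assigns priority-ordered tasks to the earliest-available datapath (so that independent tasks interleave and fill pipeline bubbles) could be sketched as follows. All task data, structure names and processor counts below are hypothetical and not taken from the paper.

```c
/* Illustrative greedy list scheduler: tasks are sorted by priority and each
 * is assigned to the SIMD datapath that becomes free earliest, so that
 * independent tasks interleave and help hide pipeline latency. */
#include <stdio.h>
#include <stdlib.h>

#define NUM_PROCS 4
#define NUM_TASKS 8

typedef struct {
    int id;
    int priority;   /* e.g. critical-path length of the task */
    int latency;    /* pipeline cycles the task occupies on a datapath */
} Task;

static int by_priority_desc(const void *a, const void *b)
{
    return ((const Task *)b)->priority - ((const Task *)a)->priority;
}

int main(void)
{
    Task tasks[NUM_TASKS] = {
        {0, 5, 3}, {1, 7, 2}, {2, 3, 4}, {3, 9, 3},
        {4, 4, 2}, {5, 8, 3}, {6, 2, 2}, {7, 6, 4}
    };
    int ready_time[NUM_PROCS] = {0};

    /* list scheduling: highest-priority ready task first */
    qsort(tasks, NUM_TASKS, sizeof(Task), by_priority_desc);

    for (int t = 0; t < NUM_TASKS; t++) {
        /* pick the processor that is free earliest */
        int best = 0;
        for (int p = 1; p < NUM_PROCS; p++)
            if (ready_time[p] < ready_time[best])
                best = p;

        printf("task %d -> proc %d at cycle %d\n",
               tasks[t].id, best, ready_time[best]);
        ready_time[best] += tasks[t].latency;
    }
    return 0;
}
```

A real scheduler for this problem would additionally respect data dependences between tasks and model per-datapath pipeline depth; the sketch only shows the priority-ordered, earliest-free assignment that list scheduling is built on.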
Abstract:
Realising high performance image and signal processing applications on modern FPGA presents a challenging implementation problem due to the large data frames streaming through these systems. Specifically, to meet the high bandwidth and data storage demands of these applications, complex hierarchical memory architectures must be manually specified at the Register Transfer Level (RTL). Automated approaches which convert high-level operation descriptions, for instance in the form of C programs, to an FPGA architecture are unable to realise such architectures automatically. This paper presents a solution to this problem: a compiler that automatically derives such memory architectures from a C program. By transforming the input C program to a unique dataflow modelling dialect, known as Valved Dataflow (VDF), a mapping and synthesis approach developed for this dialect can be exploited to automatically create high performance image and video processing architectures. Memory-intensive C kernels for Motion Estimation (CIF frames at 30 fps), Matrix Multiplication (128x128 @ 500 iter/sec) and Sobel Edge Detection (720p @ 30 fps), which are unrealisable by current state-of-the-art C-based synthesis tools, are automatically derived from a C description of the algorithm.
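For context, a Sobel edge detection kernel of the kind listed above is typically expressed in C as nested loops over the frame; a minimal sketch (frame dimensions, buffer names and the saturation step are illustrative, not taken from the paper) is:

```c
/* Minimal 3x3 Sobel edge detection kernel of the kind a C-to-FPGA
 * compiler would analyse; dimensions and buffers are illustrative. */
#include <stdlib.h>

#define WIDTH  1280   /* 720p frame */
#define HEIGHT 720

void sobel(const unsigned char in[HEIGHT][WIDTH],
           unsigned char out[HEIGHT][WIDTH])
{
    static const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    for (int y = 1; y < HEIGHT - 1; y++) {
        for (int x = 1; x < WIDTH - 1; x++) {
            int sx = 0, sy = 0;
            for (int i = -1; i <= 1; i++)
                for (int j = -1; j <= 1; j++) {
                    sx += gx[i + 1][j + 1] * in[y + i][x + j];
                    sy += gy[i + 1][j + 1] * in[y + i][x + j];
                }
            int mag = abs(sx) + abs(sy);      /* |Gx| + |Gy| approximation */
            out[y][x] = mag > 255 ? 255 : (unsigned char)mag;
        }
    }
}
```

The repeated reads of overlapping 3x3 windows in a kernel like this are exactly the access pattern that, on FPGA, is usually served by line-buffer style on-chip memory hierarchies rather than by external frame memory, which is the kind of structure a memory-aware compiler must infer.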
Abstract:
The growth and saturation of Buneman-type instabilities is examined with a particle-in-cell (PIC) simulation for parameters that are representative for the foreshock region of fast supernova remnant shocks. A dense ion beam and the electrons correspond to the upstream plasma and a fast ion beam to the shock-reflected ions. The purpose of the 2D simulation is to identify the nonlinear saturation mechanisms, the electron heating and potential secondary instabilities that arise from anisotropic electron heating and result in the growth of magnetic fields. We confirm that the instabilities between both ion beams and the electrons saturate by the formation of phase space holes by the beam-aligned modes. The slower oblique modes accelerate some electrons, but they cannot heat up the electrons significantly before they are trapped by the faster beam-aligned modes. Two circular electron velocity distributions develop, which are centred around the velocity of each ion beam. They develop due to the scattering of the electrons by the electrostatic wave potentials. The growth of magnetic fields is observed, but their amplitude remains low.
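For context (not derived in the abstract), the textbook cold-plasma estimate for the fastest-growing Buneman mode driven by an electron-ion drift is

```latex
\gamma_{\max} \;\simeq\; \frac{\sqrt{3}}{2}\left(\frac{m_e}{2\,m_i}\right)^{1/3}\omega_{pe},
\qquad
k_{\max} \;\simeq\; \frac{\omega_{pe}}{v_b},
```

where \(\omega_{pe}\) is the electron plasma frequency, \(m_e/m_i\) the electron-to-ion mass ratio and \(v_b\) the relative drift speed between the electrons and the ion beam. The PIC simulation described above follows the instability beyond this linear stage into saturation by phase space hole formation.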
Abstract:
Details of a new low power fast Fourier transform (FFT) processor for use in digital television applications are presented. This has been fabricated using a 0.6-µm CMOS technology and can perform a 64 point complex forward or inverse FFT on real-time video at up to 18 Megasamples per second. It comprises 0.5 million transistors in a die area of 7.8 × 8 mm and dissipates 1 W. The chip design is based on a novel VLSI architecture which has been derived from a first principles factorization of the discrete Fourier transform (DFT) matrix and tailored to a direct silicon implementation.
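The specific factorization used by the chip is not reproduced in the abstract; as general background, the 64-point DFT it implements is

```latex
X[k] \;=\; \sum_{n=0}^{N-1} x[n]\, W_N^{nk},
\qquad W_N = e^{-j 2\pi/N}, \quad N = 64,
```

and factorizing the \(N \times N\) matrix of twiddle factors \(W_N^{nk}\) into a product of sparse stage matrices is what reduces the \(O(N^2)\) matrix-vector product to the \(O(N \log N)\) butterfly structure that can be mapped directly onto silicon.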
Abstract:
In this paper, a new reconfigurable multi-standard architecture is introduced for integer-pixel motion estimation, and a standard-cell based chip design study is presented. The architecture has been designed to cover most of the common block-based video compression standards, including MPEG-2, MPEG-4, H.263, H.264, AVS and WMV-9. It exhibits simpler control, high throughput and relatively low hardware cost, and is highly competitive when compared with existing designs for specific video standards. It can also, through the use of control signals, be dynamically reconfigured at run-time to accommodate different system constraints, such as the trade-off between power dissipation and video quality. The computational rates achieved make the circuit suitable for high-end video processing applications. Silicon design studies indicate that circuits based on this approach incur only a relatively small penalty in terms of power dissipation and silicon area when compared with implementations for specific standards.
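The abstract gives no implementation detail, but the block-matching core that integer-pixel motion estimation in all of these standards accelerates is the sum of absolute differences (SAD) between a current block and a candidate reference block; a minimal C sketch (block size, strides and search range are illustrative, and the caller must keep the reference pointer inside a padded frame) is:

```c
/* Sum of absolute differences (SAD) between a current 16x16 block and a
 * candidate reference block: the core operation of integer-pixel motion
 * estimation in MPEG-2/4, H.263, H.264, AVS and WMV-9. */
#include <limits.h>
#include <stdlib.h>

#define BLOCK 16

static int sad_16x16(const unsigned char *cur, const unsigned char *ref,
                     int stride)
{
    int sad = 0;
    for (int y = 0; y < BLOCK; y++)
        for (int x = 0; x < BLOCK; x++)
            sad += abs((int)cur[y * stride + x] - (int)ref[y * stride + x]);
    return sad;
}

/* Exhaustive full search over a +/-range window; returns the best SAD and
 * writes the winning motion vector to (*mvx, *mvy). */
int full_search(const unsigned char *cur, const unsigned char *ref,
                int stride, int range, int *mvx, int *mvy)
{
    int best = INT_MAX;
    for (int dy = -range; dy <= range; dy++)
        for (int dx = -range; dx <= range; dx++) {
            int s = sad_16x16(cur, ref + dy * stride + dx, stride);
            if (s < best) {
                best = s;
                *mvx = dx;
                *mvy = dy;
            }
        }
    return best;
}
```

A hardware architecture of the kind described would implement many such SAD computations in parallel; only the block size, search strategy and candidate pattern change between the standards listed.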
Abstract:
This paper describes how worst-case error analysis can be applied to solve some of the practical issues in the development and implementation of a low power, high performance radix-4 FFT chip for digital video applications. The chip has been fabricated using a 0.6 µm CMOS technology and can perform a 64 point complex forward or inverse FFT on real-time video at up to 18 Megasamples per second. It comprises 0.5 million transistors in a die area of 7.8 × 8 mm and dissipates 1 W, leading to a cost-effective silicon solution for high quality video processing applications. The analysis focuses on the effect that different radix-4 architectural configurations and finite wordlengths have on the FFT output dynamic range. These issues are addressed using both mathematical error models and extensive simulation.
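The worst-case dynamic range growth such an analysis must bound follows directly from the DFT sum: for an N-point transform,

```latex
|X[k]| \;\le\; \sum_{n=0}^{N-1} |x[n]| \;\le\; N \max_{n} |x[n]|,
```

so a 64-point FFT can grow the signal magnitude by up to a factor of 64 (i.e. \(\log_2 64 = 6\) additional bits), and the internal wordlengths and scaling at the radix-4 stages must be chosen so that this growth neither overflows nor unnecessarily inflates silicon area. The detailed error models are the subject of the paper and are not reproduced here.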
Abstract:
Objective: The aim of this paper is to bridge the gap between the corpus of imitation research and video-based intervention (VBI) research, to consider the impact imitation skills may have on VBI outcomes, and to highlight potential areas for improving efficacy.
Method: A review of the imitation literature was conducted, focusing on imitation skill deficits in children with autism, followed by a critical review of the video modelling literature focusing on pre-intervention assessment of imitation skills and the impact imitation deficits may have on VBI outcomes.
Results: Children with autism have specific imitation deficits, which may affect VBI outcomes. Imitation training or procedural modifications made to videos may compensate for these deficits.
Conclusions: Only six studies were found in which VBI researchers took pre-intervention imitation assessments, using an assortment of imitation measures. More research is required to develop a standardised multi-dimensional imitation assessment battery that can better inform VBI.
Abstract:
This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements, and can be used alongside, many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances, with corruption added to either or both of the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach, and also compared to any fixed-weight integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams, and according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
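The exact MWSP criterion is not reproduced in the abstract; the weighted stream posterior that integration schemes of this kind are commonly built on is

```latex
P_{\lambda}(q \mid o^{a}_{t}, o^{v}_{t}) \;\propto\;
p(o^{a}_{t} \mid q)^{\lambda}\; p(o^{v}_{t} \mid q)^{\,1-\lambda}\; P(q),
\qquad 0 \le \lambda \le 1,
```

where \(q\) is a recogniser state, \(o^{a}_{t}\) and \(o^{v}_{t}\) are the audio and video observations at frame \(t\), and \(\lambda\) is the stream weight. As its name suggests, the MWSP approach selects the weight frame by frame so as to maximise this weighted posterior, rather than fixing it in advance or estimating it from explicit noise measurements.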