100 results for Video recordings - Production and direction
Abstract:
Do patterns in the YouTube viewing analytics of lecture-capture videos point to areas where teaching and learning performance could be enhanced? The goal of this action-based research project was to capture and quantitatively analyse the viewing behaviours and patterns for a series of video lecture captures across several computing modules at Queen’s University Belfast, Northern Ireland. The research sought to establish whether a quantitative analysis of viewing behaviours, correlated with a qualitative evaluation of the material provided by the students, could yield generalised patterns. Such patterns could then be used to understand the learning experience of students during face-to-face lectures and thereby present opportunities to reflectively enhance lecturer performance, the students’ overall learning experience and, ultimately, their level of academic attainment.
Abstract:
Video capture of university lectures enables learners to be more flexible in their learning behaviour, for instance choosing to attend lectures in person or to watch the recordings later. However, attendance at lectures has been linked to academic success and is a concern for faculty staff contemplating the introduction of video lecture capture. This research study was devised to assess the impact of recording lectures on learning in computer programming courses. The study also considered the behavioural trends and attitudes of students watching recorded lectures: when, where, how often, for how long and on which devices. The findings suggest there is no detrimental effect on lecture attendance, with the video materials being used to support continual and reinforced learning and most access occurring around assessment periods. The analysis of viewing behaviours provides a rich and accessible data source that could potentially be leveraged to improve lecture quality and enhance lecturer and learner performance.
Abstract:
Wang, Y., O’Neill, M., Kurugollu, F., “Partial Encryption by Randomized Zig-Zag Scanning for Video Encoding”, IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, May 2013.
Abstract:
This paper presents a new framework for multi-subject event inference in surveillance video, where the measurements produced by low-level vision analytics are usually noisy, incomplete or incorrect. Our goal is to infer the composite events undertaken by each subject from noisy observations. To achieve this, we consider the temporal characteristics of event relations and propose a method to correctly associate detected events with individual subjects. The Dempster–Shafer (DS) theory of belief functions is used to infer events of interest from the results of our vision analytics and to measure the conflicts occurring during event association. Our system is evaluated on a number of videos, at different levels of complexity, showing passenger behaviours on a public transport platform, namely buses. The experimental results demonstrate that, by reasoning with spatio-temporal correlations, the proposed method achieves satisfactory performance when associating atomic events and recognising composite events involving multiple subjects in dynamic environments.
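The Dempster–Shafer combination step the abstract relies on can be sketched in a few lines. The hypothesis names and mass values below are illustrative, not outputs of the paper's vision analytics; what the sketch shows is Dempster's rule itself, which fuses two mass functions and exposes the conflict mass K that can be used to flag disputed event associations.

```python
def ds_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to
    mass) with Dempster's rule.  Returns the normalized combined masses
    and the conflict K (total mass on empty intersections)."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc          # contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources fully disagree")
    norm = 1.0 - conflict
    return {h: m / norm for h, m in combined.items()}, conflict

# Two (hypothetical) analytics modules report evidence about which
# subject, A or B, performed a detected atomic event.
m_detector = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
m_tracker  = {frozenset({"A"}): 0.5, frozenset({"B"}): 0.3,
              frozenset({"A", "B"}): 0.2}

fused, k = ds_combine(m_detector, m_tracker)  # k measures the conflict
```

A high `k` signals that the low-level observations disagree about the association, which is exactly the situation the paper's conflict measurement is meant to detect.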
Abstract:
High Efficiency Video Coding (HEVC) is the most recent video codec, following the currently most popular H.264/MPEG-4 codecs, and has promising compression capabilities. It is conjectured that it will be a substitute for current video compression standards. However, to the best of the authors’ knowledge, none of the current video steganalysis methods has been designed for or tested on HEVC video. In this paper, pixel-domain steganography applied to HEVC video is targeted for the first time. This is also the first paper to employ the accordion unfolding transformation, which merges temporal and spatial correlation, in pixel-domain video steganalysis. With the help of this transformation, temporal correlation is incorporated into the system. It is demonstrated for three different feature sets that integrating temporal dependency substantially increases the detection accuracy.
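As a rough illustration of the unfolding idea, a video can be rearranged into a single 2D image so that temporally adjacent pixels become spatially adjacent, letting spatial steganalysis features pick up temporal dependencies. The layout below is one plausible reading of an accordion-style fold, not the paper's exact definition.

```python
import numpy as np

def accordion_unfold(video):
    """Map a video of shape (T, H, W) to a 2D image of shape (H, W*T):
    for each column index c, the T time-versions of that column are laid
    side by side; time order is reversed on every other group (the
    'fold'), so neighbouring groups remain temporally continuous."""
    t, h, w = video.shape
    out = np.empty((h, w * t), dtype=video.dtype)
    for c in range(w):
        group = video[:, :, c]               # (T, H): column c over time
        if c % 2 == 1:                       # fold back on odd columns
            group = group[::-1]
        out[:, c * t:(c + 1) * t] = group.T  # T columns side by side
    return out

frames = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # tiny 2-frame, 3x4 clip
img = accordion_unfold(frames)                  # shape (3, 8)
```

After unfolding, the same pixel in consecutive frames sits in adjacent image columns, so any spatial residual filter applied to `img` implicitly measures temporal correlation as well.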
Abstract:
Summary
Background
The ability to carry out a neurological examination and make an appropriate differential diagnosis is one of the mainstays of our final Bachelor of Medicine (MB) exam; however, with the introduction of objective structured clinical examinations (OSCEs) it has become impossible to arrange for adequate numbers of suitable real patients to participate in the exam.
Context
It is vital that newly qualified doctors can perform a basic neurological examination, interpret the physical signs and formulate a differential diagnosis.
Innovation
Since 2010 we have introduced an objective structured video examination (OSVE) of a neurological examination of a real patient as part of our final MB OSCE exam. The students view clips of parts of the examination process. They answer questions on the signs that are demonstrated and formulate a differential diagnosis.
Implications
This type of station is logistically much easier to organise than a large number of real patients at different examination sites. The featured patients have clearly demonstrated signs and, because every student sees the same patient, the station is perfectly standardised. It is highly acceptable to examiners and performs well as an assessment tool. There are, however, certain drawbacks: we are not examining the student's examination technique or their interaction with the patient, and certain signs, in particular muscle tone and power, are more difficult for a student to assess in this situation.
Abstract:
In this paper we propose a novel recurrent neural network architecture for video-based person re-identification. Given the video sequence of a person, features are extracted from each frame using a convolutional neural network that incorporates a recurrent final layer, which allows information to flow between time steps. The features from all time steps are then combined using temporal pooling to give an overall appearance feature for the complete sequence. The convolutional network, recurrent layer and temporal pooling layer are jointly trained to act as a feature extractor for video-based re-identification using a Siamese network architecture. Our approach makes use of colour and optical-flow information in order to capture the appearance and motion information that is useful for video re-identification. Experiments are conducted on the iLIDS-VID and PRID-2011 datasets to show that this approach outperforms existing methods of video-based re-identification.
Project Source Code: https://github.com/niallmcl/Recurrent-Convolutional-Video-ReID
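The pipeline the abstract describes — per-frame features, a recurrent layer, temporal pooling and a Siamese comparison — can be sketched structurally as follows. The CNN stage is stubbed with precomputed frame vectors and the weights are random, so this is a sketch of the architecture's shape, not the authors' trained model (their implementation is in the repository above).

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_outputs(frame_feats, w_in, w_rec):
    """Simple recurrent layer over per-frame features (stand-ins for CNN
    outputs): s_t = tanh(W_in x_t + W_rec s_{t-1})."""
    s = np.zeros(w_rec.shape[0])
    outputs = []
    for x in frame_feats:
        s = np.tanh(w_in @ x + w_rec @ s)
        outputs.append(s)
    return np.stack(outputs)

def sequence_feature(frame_feats, w_in, w_rec):
    """Temporal mean pooling over the recurrent outputs gives a single
    appearance vector for the whole sequence."""
    return rnn_outputs(frame_feats, w_in, w_rec).mean(axis=0)

def siamese_distance(seq_a, seq_b, w_in, w_rec):
    """Euclidean distance between pooled features; after Siamese training,
    a small distance would indicate the same identity."""
    fa = sequence_feature(seq_a, w_in, w_rec)
    fb = sequence_feature(seq_b, w_in, w_rec)
    return np.linalg.norm(fa - fb)

d_feat, d_hid, t = 16, 8, 10
w_in  = rng.normal(scale=0.1, size=(d_hid, d_feat))
w_rec = rng.normal(scale=0.1, size=(d_hid, d_hid))
seq1 = rng.normal(size=(t, d_feat))
seq2 = rng.normal(size=(t, d_feat))

d_self  = siamese_distance(seq1, seq1, w_in, w_rec)  # identical clips
d_other = siamese_distance(seq1, seq2, w_in, w_rec)  # different clips
```

The key design point the abstract highlights is that pooling over the recurrent outputs, rather than over raw frame features, lets information flow across time steps before it is averaged into one vector.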
Abstract:
A rich-model-based motion vector steganalysis benefiting from both the temporal and spatial correlations of motion vectors is proposed in this work. The proposed steganalysis method has substantially superior detection accuracy to the previous methods, even the targeted ones. The improvement in detection accuracy lies in several novel approaches introduced in this work. Firstly, it is shown that there is a strong correlation, not only spatially but also temporally, among neighbouring motion vectors over longer distances. Therefore, temporal motion vector dependency alongside spatial dependency is utilised for rigorous motion vector steganalysis. Secondly, unlike the filters previously used, which were heuristically designed against a specific motion vector steganography, a diverse set of filters that can capture the aberrations introduced by various motion vector steganography methods is used. The variety and number of the filter kernels are substantially greater than in previous work. In addition, filters up to fifth order are employed, whereas previous methods used at most second-order filters. As a result, the proposed system captures various decorrelations over a wide spatio-temporal range and provides a better cover model. The proposed method is tested against the most prominent motion vector steganalysis and steganography methods. To the best of the authors’ knowledge, the experiments section contains the most comprehensive tests in the motion vector steganalysis field, covering five stego and seven steganalysis methods. Test results show that the proposed method yields around a 20% increase in detection accuracy at low payloads and 5% at higher payloads.
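A minimal sketch of the residual-filter idea described above: finite-difference kernels of increasing order (here first through fifth) applied to a motion-vector component field along both spatial axes and the temporal axis, with quantized residual histograms as features. The kernel set, the quantization step and the use of plain histograms rather than co-occurrence matrices are simplifications of the rich-model construction, chosen for illustration.

```python
import numpy as np

# Finite-difference kernels of order 1..5 (signed binomial coefficients).
# Higher-order kernels probe motion-vector correlation over longer ranges.
KERNELS = [np.array([1, -1]),
           np.array([1, -2, 1]),
           np.array([1, -3, 3, -1]),
           np.array([1, -4, 6, -4, 1]),
           np.array([1, -5, 10, -10, 5, -1])]

def mv_residual_features(mv, q=2, t_max=3):
    """Features from one motion-vector component field mv of shape
    (T, H, W): filter along the temporal and the two spatial axes with
    each kernel, quantize and truncate the residual, then histogram it."""
    feats = []
    for kern in KERNELS:
        for axis in (0, 1, 2):               # temporal, vertical, horizontal
            r = np.apply_along_axis(
                lambda v: np.convolve(v, kern, mode="valid"), axis, mv)
            r = np.clip(np.round(r / q), -t_max, t_max).astype(int)
            hist = np.bincount((r + t_max).ravel(),
                               minlength=2 * t_max + 1)
            feats.append(hist / hist.sum())  # normalized residual histogram
    return np.concatenate(feats)

rng = np.random.default_rng(1)
mv_x = rng.integers(-4, 5, size=(8, 6, 6)).astype(float)  # toy MV field
f = mv_residual_features(mv_x)  # 5 kernels x 3 axes x 7 bins = 105 values
```

Filtering the same field along the temporal axis as well as the spatial ones is what the abstract means by exploiting temporal alongside spatial dependency; a classifier trained on such features then separates cover from stego videos.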