813 results for video-otoscopy


Relevance: 20.00%

Abstract:

Introduction
The use of video capture of lectures in Higher Education is not a recent development, with web-based learning technologies, including digital recording of live lectures, becoming increasingly common offerings at universities throughout the world (Holliman and Scanlon, 2004). However, in the past decade, improvements in technical infrastructure, including the availability of high-speed broadband, have increased both the potential and the use of video lecture capture. This has led to a variety of lecture capture formats, including podcasting, live streaming and delayed broadcasting of whole or partial lectures.
Additionally, in the past five years there has been a significant increase in the popularity of online learning, specifically via Massive Open Online Courses (MOOCs) (Vardi, 2014). One of the key aspects of MOOCs is the recorded simulation of lecture-like activities. There has been, and continues to be, much debate on the consequences of the popularity of MOOCs, especially in relation to their potential uses within established university programmes.
There have been a number of studies dedicated to the effects of recording lectures.
Research on video lecture capture clusters around the following main themes:
• Staff perceptions including attendance, performance of students and staff workload
• Reinforcement versus replacement of lectures
• Improved flexibility of learning
• Facilitating engaging and effective learning experiences
• Student usage, perception and satisfaction
• Facilitating students learning at their own pace
Most of the body of research has concentrated on student and faculty perceptions, including academic achievement, student attendance and engagement (Johnston et al., 2012).
Generally, the research has viewed the benefits of lecture capture positively for both students and faculty. This perception, coupled with improvements in technical infrastructure and student demand, may well mean that the use of video lecture capture in tertiary education will continue to increase over the coming years. However, there is relatively little research on the effects of lecture capture specifically in the area of computer programming, with Watkins et al. (2007) being one of the few studies. Video delivery of programming solutions is particularly useful for enabling a lecturer to illustrate the complex decision-making processes and the iterative nature of the actual code development process (Watkins et al., 2007). As such, research in this area would appear to be particularly appropriate to help inform debate and future decisions made by policy makers.
Research questions and objectives
The purpose of the research was to investigate how a series of lecture captures (in which the audio of lectures and the video of on-screen projected content were recorded) impacted the delivery and learning of a programme of study on an MSc Software Development course at Queen’s University Belfast, Northern Ireland. The MSc is a conversion programme, intended to take graduates from non-computing primary degrees and upskill them in this area. The research specifically targeted the Java programming module within the course. The study also analyses and reports on the empirical data from attendance records and various video viewing statistics. In addition, qualitative data were collected from staff and student feedback to help contextualise the quantitative results.
Methodology, Methods and Research Instruments Used
The study was conducted with a cohort of 85 postgraduate students taking a compulsory module in Java programming in the first semester of a one-year MSc in Software Development. A pre-course survey found that 58% of students preferred to have videos of “key moments” of lectures available rather than whole lectures. A large-scale study carried out by Guo concluded that “shorter videos are much more engaging” (Guo, 2013). Of concern was the potential for low audience retention for videos of whole lectures.
The lecturers recorded snippets of the lecture directly before or after the actual physical delivery of the lecture, in a quiet environment, and then uploaded the video directly to a closed YouTube channel. These snippets generally concentrated on significant parts of the theory, followed by theory-related coding demonstration activities, and faithfully replicated the face-to-face lecture. Generally, each lecture was supported by two to three videos of 20–30 minutes each.
Attendance
The MSc programme has several attendance-based modules, of which Java Programming was one. To assess the effect on attendance for the programming module, a control was established: a Database module taken by the same students in the same semester.
Access engagement
The videos were hosted on a closed YouTube channel made available only to the students in the class. The channel had analytics enabled, which reported, for all videos and for each individual video, on views (hits), audience retention, viewing devices and operating systems used, and minutes watched.
Student attitudes
Three surveys were conducted to investigate student attitudes towards the recording of lectures: the first before the start of the programming module, the second at the mid-point, and the third after the programme was complete.
The questions in the first survey were targeted at eliciting student attitudes towards lecture capture before they had experienced it in the programme. The midpoint survey gathered data on how the students were individually using the system up to that point, including how many videos an individual had watched, viewing duration, the primary reasons for watching and the effect on attendance, in addition to probing for comments or suggestions. The final survey, on course completion, contained questions similar to the midpoint survey but took a summative view of the whole video programme.
Conclusions and Outcomes
The study confirmed the findings of other such investigations, showing little or no effect on attendance at lectures. The use of the videos appears to help promote continual learning, but they are accessed particularly heavily by students at assessment periods. Students respond positively to the ability to access lectures digitally as a means of reinforcing learning experiences rather than replacing them. Feedback from students was overwhelmingly positive, indicating that the videos benefited their learning. There are also significant benefits to recording parts of lectures rather than whole lectures. The viewing-behaviour analytics suggest that, despite the increase in the popularity of online learning via MOOCs and the promotion of video learning on mobile devices, in this study the vast majority of students accessed the online videos at home on laptops or desktops. However, this is likely due in part to the nature of the taught subject, namely programming.
The research involved prerecording the lecture in smaller timed units and then uploading them for distribution, to counteract existing quality issues with recording entire live lectures. However, the advancement and consequent improvement in quality of in situ lecture capture equipment may well negate the need to record elsewhere. The research has also highlighted an area of potentially significant use for performance analysis and improvement that could have major implications for the quality of teaching: a study of the analytics of video viewings could provide a rapid, formative feedback mechanism for the lecturer. If a videoed lecture, whether recorded live or afterwards, is a true reflection of the face-to-face lecture, an analysis of the viewing patterns for the video may well reveal trends that correspond with the live delivery.
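
As an illustration of how such a viewing-pattern feedback mechanism might work in practice, the following minimal sketch aggregates a per-video audience-retention export and flags the points where viewers most often stop watching. The file name, column names and drop-off threshold are assumptions for illustration, not artefacts of the study.

```python
import csv
from collections import defaultdict

def load_retention(path):
    """Read a per-video retention export with columns:
    video_id, position_seconds, fraction_still_watching."""
    curves = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            curves[row["video_id"]].append(
                (float(row["position_seconds"]),
                 float(row["fraction_still_watching"])))
    for points in curves.values():
        points.sort()
    return curves

def drop_off_points(curve, threshold=0.10):
    """Return positions where retention falls by more than `threshold`
    between consecutive samples - candidate spots where the corresponding
    live explanation may need revisiting."""
    flagged = []
    for (t0, r0), (t1, r1) in zip(curve, curve[1:]):
        if r0 - r1 > threshold:
            flagged.append((t1, r0 - r1))
    return flagged

if __name__ == "__main__":
    curves = load_retention("retention_export.csv")  # assumed file name
    for video_id, curve in curves.items():
        for position, drop in drop_off_points(curve):
            print(f"{video_id}: retention drops {drop:.0%} at {position:.0f}s")
```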

Relevance: 20.00%

Abstract:

Do patterns in the YouTube viewing analytics of lecture capture videos point to areas of potential teaching and learning performance enhancement? The goal of this action-based research project was to capture and quantitatively analyse the viewing behaviours and patterns of a series of video lecture captures across several computing modules at Queen’s University Belfast, Northern Ireland. The research sought to establish whether a quantitative analysis of viewing behaviours, coupled with a qualitative evaluation of material provided by the students, could be correlated to yield generalised patterns. Such patterns could then be used to understand the learning experience of students during face-to-face lectures and thereby present opportunities to reflectively enhance lecturer performance, the students’ overall learning experience and, ultimately, their level of academic attainment.

Relevance: 20.00%

Abstract:

Video capture of university lectures enables learners to be more flexible in their learning behaviour, for instance choosing to attend lectures in person or to watch them later. However, attendance at lectures has been linked to academic success and is a concern for faculty contemplating the introduction of video lecture capture. This research study was devised to assess the impact on learning of recording lectures in computer programming courses. The study also considered the behavioural trends and attitudes of students watching recorded lectures, such as when, where, how frequently, for how long and on which devices they viewed them. The findings suggest there is no detrimental effect on attendance at lectures, with the video materials being used to support continual and reinforced learning and most access occurring at assessment periods. The analysis of viewing behaviours provides a rich and accessible data source that could potentially be leveraged to improve lecture quality and enhance lecturer and learning performance.

Relevance: 20.00%

Abstract:

Wang, Y., O’Neill, M., Kurugollu, F., ‘Partial Encryption by Randomized Zig-Zag Scanning for Video Encoding’, IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, May 2013.

Relevance: 20.00%

Abstract:

This paper presents a new framework for multi-subject event inference in surveillance video, where the measurements produced by low-level vision analytics are usually noisy, incomplete or incorrect. Our goal is to infer the composite events undertaken by each subject from these noisy observations. To achieve this, we consider the temporal characteristics of event relations and propose a method to correctly associate the detected events with individual subjects. The Dempster–Shafer (DS) theory of belief functions is used to infer events of interest from the results of our vision analytics and to measure the conflicts occurring during event association. Our system is evaluated against a number of videos, of differing levels of complexity, that present passenger behaviours on a public transport platform, namely buses. The experimental results demonstrate that, by reasoning with spatio-temporal correlations, the proposed method achieves satisfactory performance when associating atomic events and recognising composite events involving multiple subjects in dynamic environments.
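
The abstract does not reproduce the combination rule itself; purely as a generic sketch, Dempster's rule for fusing two mass functions over a small frame of discernment can be written as below. The event labels and mass values are invented for illustration, and the normalisation constant exposes the conflict measure mentioned above.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule; returns (combined masses, conflict K)."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("Sources are completely conflicting")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}, conflict

# Illustrative example: two noisy sources giving evidence about one subject's event.
m_detector = {frozenset({"board"}): 0.6, frozenset({"board", "alight"}): 0.4}
m_tracker = {frozenset({"board"}): 0.5, frozenset({"alight"}): 0.3,
             frozenset({"board", "alight"}): 0.2}
fused, k = dempster_combine(m_detector, m_tracker)
print(fused, "conflict =", round(k, 3))
```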

Relevance: 20.00%

Abstract:

High Efficiency Video Coding (HEVC) is the most recent video codec, succeeding the currently most popular H.264/MPEG-4 codecs, and has promising compression capabilities. It is conjectured that it will become a substitute for current video compression standards. However, to the best knowledge of the authors, none of the current video steganalysis methods has been designed for or tested on HEVC video. In this paper, pixel-domain steganography applied to HEVC video is targeted for the first time. This is also the first paper to employ the accordion unfolding transformation, which merges temporal and spatial correlation, in pixel-domain video steganalysis. With the help of this transformation, temporal correlation is incorporated into the system. It is demonstrated for three different feature sets that integrating temporal dependency substantially increases the detection accuracy.
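
The paper itself defines the accordion transformation; as a rough illustration only, the sketch below shows one simple way such a spatio-temporal unfolding can be realised: columns from successive frames are interleaved so that temporally neighbouring pixels become spatially adjacent in a single 2D array. The axis choices are assumptions, and this simplified version omits the column-direction alternation that gives the accordion representation its name.

```python
import numpy as np

def accordion_unfold(frames):
    """Interleave the columns of a (T, H, W) grayscale video so that
    column w of every frame sits next to column w of the next frame.
    Result is a single (H, W*T) image mixing spatial and temporal axes."""
    t, h, w = frames.shape
    # Reorder to (H, W, T), then flatten the last two axes: for each original
    # column index, its T temporal versions are laid down side by side.
    return frames.transpose(1, 2, 0).reshape(h, w * t)

# Tiny synthetic example: 3 frames of a 2x4 video.
video = np.arange(3 * 2 * 4).reshape(3, 2, 4)
unfolded = accordion_unfold(video)
print(unfolded.shape)  # (2, 12)
```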

Relevance: 20.00%

Abstract:

Summary
Background
The ability to carry out a neurological examination and make an appropriate differential diagnosis is one of the mainstays of our final Bachelor of Medicine (MB) exam; however, with the introduction of objective structured clinical examinations (OSCEs) it has become impossible to arrange for adequate numbers of suitable real patients to participate in the exam.

Context
It is vital that newly qualified doctors can perform a basic neurological examination, interpret the physical signs and formulate a differential diagnosis.

Innovation
Since 2010 we have included an objective structured video examination (OSVE) of a neurological examination of a real patient as part of our final MB OSCE exam. The students view clips of parts of the examination process, answer questions on the signs that are demonstrated and formulate a differential diagnosis.

Implications
This type of station is logistically much easier to organise than arranging large numbers of real patients at different examination sites. The featured patients have clearly demonstrable signs and, as every student sees the same patient, the station is perfectly standardised. It is highly acceptable to examiners and performs well as an assessment tool. There are, however, certain drawbacks, in that we are not examining the student's examination technique or their interaction with the patient. Also, certain signs, in particular muscle tone and power, are more difficult for a student to assess in this situation.

Relevance: 20.00%

Abstract:

In this paper we propose a novel recurrent neural network architecture for video-based person re-identification. Given the video sequence of a person, features are extracted from each frame using a convolutional neural network that incorporates a recurrent final layer, which allows information to flow between time-steps. The features from all time-steps are then combined using temporal pooling to give an overall appearance feature for the complete sequence. The convolutional network, recurrent layer and temporal pooling layer are jointly trained to act as a feature extractor for video-based re-identification using a Siamese network architecture. Our approach makes use of colour and optical flow information in order to capture appearance and motion information, which is useful for video re-identification. Experiments are conducted on the iLIDS-VID and PRID-2011 datasets to show that this approach outperforms existing methods of video-based re-identification.

Project source code: https://github.com/niallmcl/Recurrent-Convolutional-Video-ReID
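
The linked repository contains the authors' implementation; purely as an illustrative sketch of the pipeline described above, a PyTorch-style version of the frame-level CNN, recurrent layer and temporal pooling might look as follows. The layer sizes, channel counts and names are assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class RecurrentReIDNet(nn.Module):
    """Frame-level CNN, recurrent layer and temporal average pooling,
    producing one appearance descriptor per video sequence."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(            # per-frame feature extractor
            nn.Conv2d(5, 16, 5, padding=2),  # 3 colour + 2 optical-flow channels
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2),
            nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim))
        self.rnn = nn.RNN(feat_dim, feat_dim, batch_first=True)

    def forward(self, clip):                 # clip: (batch, time, 5, H, W)
        b, t = clip.shape[:2]
        frame_feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        recurrent_feats, _ = self.rnn(frame_feats)
        return recurrent_feats.mean(dim=1)   # temporal pooling over time-steps

# Siamese use: embed two clips and compare their descriptors.
net = RecurrentReIDNet()
clip_a = torch.randn(2, 8, 5, 64, 32)        # two sequences of 8 frames each
clip_b = torch.randn(2, 8, 5, 64, 32)
distance = torch.norm(net(clip_a) - net(clip_b), dim=1)
print(distance.shape)  # torch.Size([2])
```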

Relevance: 20.00%

Abstract:

A rich-model-based motion vector steganalysis benefiting from both the temporal and spatial correlations of motion vectors is proposed in this work. The proposed steganalysis method has substantially higher detection accuracy than previous methods, even the targeted ones. The improvement in detection accuracy lies in several novel approaches introduced in this work. Firstly, it is shown that there is a strong correlation among neighbouring motion vectors over longer distances, not only spatially but also temporally. Therefore, temporal motion vector dependency is utilised alongside spatial dependency for rigorous motion vector steganalysis. Secondly, unlike the filters previously used, which were heuristically designed against a specific motion vector steganography, a diverse set of filters is used which can capture aberrations introduced by various motion vector steganography methods. The variety and number of the filter kernels are substantially greater than in previous work. In addition, filters up to fifth order are employed, whereas previous methods use at most second-order filters. As a result, the proposed system captures various decorrelations over a wide spatio-temporal range and provides a better cover model. The proposed method is tested against the most prominent motion vector steganalysis and steganography methods. To the best knowledge of the authors, the experiments section contains the most comprehensive tests in the motion vector steganalysis field, including five stego and seven steganalysis methods. Test results show that the proposed method yields around a 20% increase in detection accuracy at low payloads and 5% at higher payloads.
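
As a rough, generic illustration of the kind of spatio-temporal residual features a rich model of this sort is built from, the sketch below applies difference filters of increasing order to a motion-vector component field, both spatially and temporally, and summarises the truncated residuals as histograms. The specific kernels, truncation value and toy data are assumptions, not the paper's filter set.

```python
import numpy as np

# Example 1-D difference kernels of increasing order (illustrative only;
# the paper uses a much larger and more diverse set, up to fifth order).
KERNELS = {
    "first":  np.array([-1, 1]),
    "second": np.array([1, -2, 1]),
    "third":  np.array([-1, 3, -3, 1]),
}

def residual_features(mv_frames, truncation=3):
    """mv_frames: (T, H, W) array of one motion-vector component.
    Filters are applied spatially (along rows) and temporally (across
    frames); truncated residuals are summarised as histograms."""
    feats = []
    for name, k in KERNELS.items():
        spatial = np.apply_along_axis(np.convolve, 2, mv_frames, k, mode="valid")
        temporal = np.apply_along_axis(np.convolve, 0, mv_frames, k, mode="valid")
        for res in (spatial, temporal):
            clipped = np.clip(np.round(res), -truncation, truncation)
            hist, _ = np.histogram(clipped, bins=2 * truncation + 1,
                                   range=(-truncation - 0.5, truncation + 0.5))
            feats.append(hist / hist.sum())
    return np.concatenate(feats)

mv = np.random.randint(-8, 9, size=(10, 18, 22)).astype(float)  # toy MV field
print(residual_features(mv).shape)  # (42,) = 3 kernels x 2 directions x 7 bins
```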

Relevance: 20.00%

Abstract:

Among the many discussions and studies related to video games, one of the most recurrent, widely debated and important concerns the experience of playing them. The gameplay experience, as appropriated in this study, is the result of the interplay between two essential elements: a video game and a player. Existing studies have explored the experience of video game playing from the perspective of either the video game or the player, but none appear to balance both elements equally. The study presented here contributes to the ongoing debate with a gameplay experience model. The proposed model, which seeks to balance the video game and player elements equally, considers the gameplay experience to be both an interactive experience (related to the process of playing the video game) and an emotional experience (related to the outcome of playing the video game). The mutual influence of these two experiences during play ultimately defines the gameplay experience. Several dimensions, relating to both the video game and the player, contribute to it: the video game contributes mechanics, interface and narrative dimensions; the player contributes motivations, expectations and background dimensions. The gameplay experience is also initially framed by a gameplay situation, conditioned by the environment in which gameplay takes place and the platform on which the video game is played. In order to provide an initial validation of the proposed model and show relationships among its multiple dimensions, a multi-case study was carried out using two different video games and player samples. In one study, the results show significant correlations between multiple model dimensions, and evidence that changes related to the video game influence player motivations as well as player visual behaviour. In player-specific analyses, the results show that while players may differ in background and expectations regarding the game, their motivations to play are not necessarily different, even if their in-game performance is weak. While further validation is necessary, this model not only contributes to the gameplay experience debate but also demonstrates, in a given context, how player and video game dimensions evolve during video game play.

Relevance: 20.00%

Abstract:

Abstract of paper delivered at the 17th International Reversal Theory Conference, Day 3, session 4, 01.07.15

Relevance: 20.00%

Abstract:

This paper reports on the first known empirical use of the Reversal Theory State Measure (RTSM) since its publication by Desselles et al. (2014). The RTSM was employed to track responses to three purposely selected video commercials in a between-subjects design. The results of the study provide empirical support for the central conceptual premise of reversal theory, the experience of metamotivational reversals, and for the ability of the RTSM to capture them. The RTSM was also found to be psychometrically sound after adjustments were made to two of its three component subscales. A detailed account and rationale are provided for the analytical process of assessing the psychometric robustness of the RTSM, with a number of techniques and interpretations relating to component structure and reliability discussed. The merits and critiques of the two available versions of the RTSM – the bundled and the branched – are also examined. Researchers are encouraged to assist the development of the RTSM through further use, taking into account the analysis and recommendations presented.
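
The abstract does not specify which reliability statistics were used; purely as an illustration of the kind of internal-consistency check involved in this sort of psychometric assessment, Cronbach's alpha for a subscale can be computed as follows. The item scores below are invented.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of subscale item scores.
    Returns Cronbach's alpha, a common internal-consistency estimate."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 6 respondents answering a 4-item subscale on a 1-6 scale.
scores = np.array([[5, 4, 5, 4],
                   [2, 3, 2, 2],
                   [6, 5, 6, 5],
                   [3, 3, 4, 3],
                   [4, 4, 5, 4],
                   [1, 2, 1, 2]])
print(round(cronbach_alpha(scores), 2))
```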

Relevance: 20.00%

Abstract:

This paper presents a new rate-control algorithm for live video streaming over wireless IP networks, based on selective frame discarding. In the proposed mechanism, excess 'P' frames are dropped from the output queue at the sender using a congestion estimate based on packet loss statistics obtained from RTCP feedback and from the Data Link (DL) layer. The performance of the algorithm is evaluated through computer simulation. The paper also presents a characterisation of packet losses due to transmission errors and congestion, which can help in choosing appropriate strategies to maximise the video quality experienced by the end user. Copyright © 2007 Inderscience Enterprises Ltd.
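
As a hedged illustration of the selective-discarding idea only, a sender-side controller might shrink its per-burst byte budget as the congestion estimate grows and discard the excess P-frames that no longer fit, while always keeping I-frames. The loss-weighting constant, budget value and queue representation below are assumptions, not the published algorithm.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str   # "I", "P" or "B"
    size: int   # bytes

def congestion_estimate(rtcp_loss, dl_loss, alpha=0.7):
    """Blend end-to-end loss (RTCP receiver reports) with link-layer loss.
    Losses seen only end-to-end are attributed to congestion rather than
    wireless transmission errors."""
    congestion_loss = max(rtcp_loss - dl_loss, 0.0)
    return alpha * congestion_loss + (1 - alpha) * rtcp_loss

def drain_queue(queue, rtcp_loss, dl_loss, base_budget=20000):
    """Send frames from the output queue; when the congestion estimate is
    high, the byte budget shrinks and excess P-frames are discarded.
    I-frames are always kept, since later frames are predicted from them."""
    budget = base_budget * (1.0 - congestion_estimate(rtcp_loss, dl_loss))
    sent, used = [], 0
    for frame in queue:
        if frame.kind == "P" and used + frame.size > budget:
            continue  # discard this excess P-frame to relieve congestion
        sent.append(frame)
        used += frame.size
    return sent

queue = deque([Frame("I", 12000), Frame("P", 4000), Frame("P", 3800), Frame("B", 1500)])
print([f.kind for f in drain_queue(queue, rtcp_loss=0.12, dl_loss=0.03)])
# ['I', 'P', 'B'] - one excess P-frame is discarded under this congestion estimate
```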