213 results for video capture
Abstract:
Automatic detection of anomalous human behaviour is one of the goals of research into smart surveillance systems, as it addresses several human-factor issues underlying existing surveillance systems. To create such a detection system, contextual information needs to be considered, because context is required in order to correctly understand human behaviour. Unfortunately, the use of contextual information is still limited in existing approaches to automatic anomalous human behaviour detection. This paper proposes a context space model which has two benefits: (a) it provides guidelines for system designers to select information which can be used to describe context; (b) it enables a system to distinguish between different contexts. A comparative analysis is conducted between a context-based system which employs the proposed context space model and a system implemented based on one of the existing approaches. The comparison is applied to a scenario constructed using video clips from the CAVIAR dataset. The results show that the context-based system outperforms the other system, because the context space model allows the system to consider knowledge learned from the relevant context only.
Abstract:
Modelling activities in crowded scenes is very challenging, as object tracking is not robust in complicated scenes and optical flow does not capture long range motion. We propose a novel approach to analysing activities in crowded scenes using a “bag of particle trajectories”. Particle trajectories are extracted from foreground regions within short video clips using particle video, which estimates long range motion, in contrast to optical flow, which only captures inter-frame motion. Our applications include temporal video segmentation and anomaly detection, and we perform our evaluation on several real-world datasets containing complicated scenes. We show that our approaches achieve state-of-the-art performance for both tasks.
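To make the idea concrete, the sketch below approximates a “bag of particle trajectories” with KLT point tracking seeded inside foreground masks. This is a simplified stand-in for particle video (which the paper uses precisely because it recovers longer-range motion than frame-to-frame tracking), and the net-displacement descriptor and codebook are illustrative assumptions.

```python
import cv2
import numpy as np

def track_trajectories(frames, fg_masks, max_corners=200):
    """Track points through a short clip with KLT; a simplified stand-in
    for the particle-video trajectories used in the paper."""
    first = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(first, max_corners, 0.01, 5,
                                  mask=fg_masks[0])  # seed in foreground only
    trajs = [[p] for p in pts.reshape(-1, 2)]
    prev = first
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        for traj, p, ok in zip(trajs, nxt.reshape(-1, 2), status.ravel()):
            if ok:
                traj.append(p)
        pts, prev = nxt, gray
    return [np.array(t) for t in trajs]

def bag_of_trajectories(trajs, codebook):
    """Quantise each trajectory's net displacement (a deliberately crude
    shape descriptor) against a learned codebook, e.g. k-means centres,
    and return a normalised bag-of-trajectories histogram."""
    descs = np.array([t[-1] - t[0] for t in trajs if len(t) > 1])
    dists = np.linalg.norm(descs[:, None] - codebook[None], axis=2)
    hist = np.bincount(dists.argmin(1), minlength=len(codebook))
    return hist / max(hist.sum(), 1)
```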
Abstract:
Background: When large-scale trials are investigating the effects of interventions on appetite, it is paramount to efficiently monitor large amounts of human data. The original hand-held Electronic Appetite Ratings System (EARS) was designed to facilitate the administration and data management of visual analogue scales (VAS) of subjective appetite sensations. The purpose of this study was to validate a novel hand-held method (EARS II (HP® iPAQ)) against the standard pen-and-paper (P&P) method and the previously validated EARS. Methods: Twelve participants (5 male, 7 female, aged 18-40) were involved in a fully repeated measures design. Participants were randomly assigned, in a crossover design, to either high fat (>48% fat) or low fat (<28% fat) meal days, one week apart, and completed ratings using the three data capture methods ordered according to a Latin square. The first set of appetite ratings was completed in a fasted state, immediately before a fixed breakfast. Thereafter, appetite ratings were completed every thirty minutes for 4 h. An ad libitum lunch was provided immediately before completing a final set of appetite ratings. Results: Repeated measures ANOVAs were conducted for ratings of hunger, fullness and desire to eat. There were no significant differences between P&P and either EARS or EARS II (p > 0.05). Correlation coefficients between P&P and EARS II, controlling for age and gender, were computed on area under the curve (AUC) ratings. R² values for hunger (0.89), fullness (0.96) and desire to eat (0.95) were statistically significant (p < 0.05). Conclusions: EARS II was sensitive to the impact of a meal and the recovery of appetite during the postprandial period, and is therefore an effective device for monitoring appetite sensations. This study provides evidence and support for further validation of the novel EARS II method for monitoring appetite sensations during large-scale studies. The added versatility means that future uses of the system provide the potential to monitor a range of other behavioural and physiological measures often important in clinical and free-living trials.
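The area under the curve summary used in the correlation analysis above is a simple trapezoidal-rule computation over the time-stamped VAS scores. A minimal sketch follows, in which the rating values are invented illustrations of the 30-minute protocol, not data from the study.

```python
import numpy as np

# Hypothetical VAS hunger ratings (0-100 mm) sampled every 30 min over 4 h,
# matching the protocol described above; the values themselves are invented.
times_min = np.arange(0, 241, 30)  # 0, 30, ..., 240 minutes
hunger_mm = np.array([78, 42, 35, 40, 48, 55, 63, 70, 74], dtype=float)

# Trapezoidal-rule area under the curve, the usual summary statistic for
# postprandial appetite ratings (equivalent to numpy's trapz).
auc = np.sum((hunger_mm[1:] + hunger_mm[:-1]) / 2 * np.diff(times_min))
print(f"Hunger AUC over 4 h: {auc:.0f} mm*min")
```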
Abstract:
Effective streaming of video can be achieved by providing more bits to the most important region in the frame at the cost of reduced bits in the less important regions. This strategy can be beneficial for delivering high-quality video to mobile devices, where available bandwidth is usually low and limited. While state-of-the-art video codecs such as H.264 may have been optimised for perceived quality, it is hypothesised that users will give more attention to an interesting region or object when watching videos. Therefore, giving a higher quality to the region of interest (ROI) while reducing the quality of other areas may improve the overall perceived quality without necessarily increasing the bitrate. In this paper, the impact of ROI-based encoded video on perceived quality is investigated through a user study covering various target bitrates. The results from the user study demonstrate that ROI-based video coding has superior perceived quality compared to normally encoded video at the same bitrate in the lower bitrate range.
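One practical way to reproduce this ROI-versus-background quality split is a quantiser offset applied inside the region of interest. The sketch below does this via FFmpeg's addroi filter with libx264; the paper does not state its encoder configuration, so the filter choice, file names, ROI geometry and target bitrate here are all assumptions.

```python
import subprocess

# Sketch: re-encode a clip so that a centred region of interest receives a
# lower quantiser (higher quality) than the background at a fixed bitrate.
# Assumes an FFmpeg build with the addroi filter and libx264 on PATH; the
# file names, ROI geometry and bitrate are illustrative, not from the study.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    # negative qoffset means better quality inside the ROI (centre half here)
    "-vf", "addroi=x=iw/4:y=ih/4:w=iw/2:h=ih/2:qoffset=-0.3",
    "-c:v", "libx264", "-b:v", "500k",
    "roi_encoded.mp4",
], check=True)
```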
Abstract:
Facial expression is an important channel of human social communication. Facial expression recognition (FER) aims to perceive and understand the emotional states of humans based on information in the face. Building robust, high-performance FER systems that can work on real-world video is still a challenging task, due to the various unpredictable facial variations and complicated exterior environmental conditions, as well as the difficulty of choosing a suitable type of feature descriptor for extracting discriminative facial information. Facial variations caused by factors such as pose, age, gender, race and occlusion can exert a profound influence on robustness, while a suitable feature descriptor largely determines performance. Most attention in FER has been paid to addressing variations in pose and illumination. No approach has been reported on handling face localization errors, and relatively few on overcoming facial occlusions, although the significant impact of these two variations on performance has been demonstrated and highlighted in many previous studies. Many texture and geometric features have been previously proposed for FER. However, few comparison studies have been conducted to explore the performance differences between different features and to examine the performance improvement arising from the fusion of texture and geometry, especially on data with spontaneous emotions. The majority of existing approaches are evaluated on databases with posed or induced facial expressions collected in laboratory environments, whereas little attention has been paid to recognizing naturalistic facial expressions on real-world data. This thesis investigates techniques for building robust, high-performance FER systems based on a number of established feature sets. It comprises contributions towards three main objectives: (1) Robustness to face localization errors and facial occlusions. An approach is proposed to handle face localization errors and facial occlusions using Gabor-based templates. Template extraction algorithms are designed to collect a pool of local template features, and template matching is then performed to convert these templates into distances, which are robust to localization errors and occlusions. (2) Improvement of performance through feature comparison, selection and fusion. A comparative framework is presented to compare the performance of different features and different feature selection algorithms, and to examine the performance improvement arising from the fusion of texture and geometry. The framework is evaluated for both discrete and dimensional expression recognition on spontaneous data. (3) Evaluation of performance in the context of real-world applications. A system is selected and applied to discriminating posed versus spontaneous expressions and to recognizing naturalistic facial expressions. A database is collected from real-world recordings and is used to explore feature differences between standard database images and real-world images, as well as between real-world images and real-world video frames. The performance evaluations are based on the JAFFE, CK, Feedtum, NVIE, Semaine and self-collected QUT databases. The results demonstrate high robustness of the proposed approach to the simulated localization errors and occlusions. Texture and geometry make different contributions to the performance of discrete and dimensional expression recognition, as well as to posed versus spontaneous emotion discrimination. These investigations provide useful insights into enhancing the robustness and performance of FER systems and putting them into real-world applications.
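A minimal sketch of the Gabor-based template idea in contribution (1) follows: local templates are matched against the Gabor responses of a probe face, and the best match scores become distance features. The kernel parameters and the distance definition are assumptions, not the thesis's exact algorithm.

```python
import cv2
import numpy as np

def gabor_bank(ksize=21, sigmas=(4.0,), thetas=4, lambd=10.0, gamma=0.5):
    """A small bank of Gabor kernels at several orientations; parameter
    values here are illustrative assumptions."""
    return [cv2.getGaborKernel((ksize, ksize), s, t * np.pi / thetas,
                               lambd, gamma)
            for s in sigmas for t in range(thetas)]

def template_distances(face, templates, bank):
    """Convert stored local templates into distance features: filter the
    probe face with each Gabor kernel, then take the best normalised
    correlation per template anywhere in the response map.  Searching the
    whole map, rather than one fixed location, is what buys tolerance to
    localization errors and occlusions in the approach described above."""
    feats = []
    for kernel in bank:
        resp = cv2.filter2D(face, cv2.CV_32F, kernel)
        for tmpl in templates:  # templates are small local patches
            tmpl_resp = cv2.filter2D(tmpl, cv2.CV_32F, kernel)
            score = cv2.matchTemplate(resp, tmpl_resp,
                                      cv2.TM_CCORR_NORMED).max()
            feats.append(1.0 - score)  # distance = 1 - best correlation
    return np.array(feats)
```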
Abstract:
From a law enforcement standpoint, the ability to search for a person matching a semantic description (e.g. 1.8 m tall, red shirt, jeans) is highly desirable. While a significant research effort has focused on person re-detection (the task of identifying a previously observed individual in surveillance video), these techniques require descriptors to be built from existing image or video observations. As such, person re-detection techniques are not suited to situations where footage of the person of interest is not readily available, such as a witness reporting a recent crime. In this paper, we present a novel framework that is able to search for a person based on a semantic description. The proposed approach uses size and colour cues, and does not require a person detection routine to locate people in the scene, improving utility in crowded conditions. The proposed approach is demonstrated on a new database that will be made available to the research community, and we show that the proposed technique is able to correctly localise a person in a video based on a simple semantic description.
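As a toy illustration of searching by size and colour cues alone, the sketch below masks pixels matching a described clothing colour ("red shirt") and keeps blobs of plausible person size. The HSV ranges and size bounds are assumptions, and the actual framework is considerably more sophisticated.

```python
import cv2
import numpy as np

def find_red_shirt_candidates(frame, min_h=60, max_h=300):
    """Toy version of the semantic search: mask pixels matching the
    described clothing colour ('red shirt'), then keep connected blobs
    whose pixel height is plausible for a torso at this camera geometry.
    The HSV ranges and size bounds are illustrative assumptions."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # red wraps around the hue axis, so combine two hue ranges
    mask = (cv2.inRange(hsv, (0, 90, 60), (10, 255, 255)) |
            cv2.inRange(hsv, (170, 90, 60), (180, 255, 255)))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [stats[i, :4] for i in range(1, n)          # (x, y, w, h) boxes
            if min_h <= stats[i, cv2.CC_STAT_HEIGHT] <= max_h]
```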
Abstract:
Student voice is a powerful signifier for sharing the institutional habitus of a campus. With our new Caboolture Campus Community Stories initiative, we place students in the role of vloggers (video bloggers) to capture and distribute the stories, activities and events of the QUT environment. These stories present visual narratives, through the eyes of students, about university experience, academic practice and the transition from high school to first year, all intended to promote a sense of community and belonging, normalize academic practices and build an inclusive institutional habitus. These stories are placed on community websites and digital signage around campus as resources for first year students and prospective students.
Abstract:
In this paper, a real-time vision-based power line extraction solution is investigated for active UAV guidance. The line extraction algorithm starts from ridge points detected by steerable filters. A collinear line-segment fitting algorithm then follows, considering global and local information together with multiple collinear measurements. A GPU-accelerated implementation of the algorithm is also investigated in the experiments. The experimental results show that the proposed algorithm outperforms two baseline line detection algorithms and is able to fit long collinear line segments. The low computational cost of the algorithm makes it suitable for real-time applications.
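The collinear fitting step can be illustrated by grouping raw segments that agree in orientation and perpendicular offset, then fusing each group into one long segment. The sketch below uses a probabilistic Hough transform to obtain the raw segments in place of the paper's steerable-filter ridge detection; the tolerances and input image name are assumptions.

```python
import cv2
import numpy as np

def merge_collinear(segments, ang_tol=np.deg2rad(2.0), off_tol=3.0):
    """Group raw line segments that share (approximately) the same
    orientation and perpendicular offset from the origin, and fuse each
    group into one long segment: a crude version of the collinear
    fitting step described above."""
    groups = []
    for x1, y1, x2, y2 in segments:
        theta = np.arctan2(y2 - y1, x2 - x1) % np.pi
        off = x1 * np.sin(theta) - y1 * np.cos(theta)  # signed offset
        for g in groups:
            if (abs(g["theta"] - theta) < ang_tol and
                    abs(g["off"] - off) < off_tol):
                g["pts"].extend([(x1, y1), (x2, y2)])
                break
        else:
            groups.append({"theta": theta, "off": off,
                           "pts": [(x1, y1), (x2, y2)]})
    merged = []
    for g in groups:
        pts = np.array(g["pts"])
        # project endpoints onto the line direction; keep the extremes
        d = pts @ np.array([np.cos(g["theta"]), np.sin(g["theta"])])
        merged.append((tuple(pts[d.argmin()]), tuple(pts[d.argmax()])))
    return merged

# "powerline.png" is a hypothetical input image for this sketch
edges = cv2.Canny(cv2.imread("powerline.png", cv2.IMREAD_GRAYSCALE), 50, 150)
raw = cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                      minLineLength=30, maxLineGap=5)
lines = merge_collinear(raw.reshape(-1, 4)) if raw is not None else []
```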
Abstract:
Quality-based frame selection is a crucial task in video face recognition, both to improve the recognition rate and to reduce the computational cost. In this paper we present a framework that uses a variety of cues (face symmetry, sharpness, contrast, closeness of mouth, brightness and openness of the eye) to select the highest-quality facial images available in a video sequence for recognition. Normalized feature scores are fused using a neural network, and frames with high quality scores are used in a Local Gabor Binary Pattern Histogram Sequence based face recognition system. Experiments on the Honda/UCSD database show that the proposed method selects the best-quality face images in the video sequence, resulting in improved recognition performance.
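A simplified sketch of the frame-scoring idea follows, fusing three of the listed cues (sharpness, contrast and brightness) with fixed weights; the paper fuses normalised cue scores with a neural network, so the weights and normalisation constants here are stand-in assumptions.

```python
import cv2
import numpy as np

def frame_quality(gray, weights=(0.5, 0.3, 0.2)):
    """Fuse three of the cues listed above into a single quality score.
    The paper fuses normalised cue scores with a neural network; the
    fixed weights and normalisation constants here are stand-ins."""
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()     # focus measure
    contrast = gray.std()
    brightness = 1.0 - abs(gray.mean() - 128.0) / 128.0   # prefer mid-range
    cues = np.array([min(sharpness / 1000.0, 1.0),
                     min(contrast / 64.0, 1.0),
                     brightness])
    return float(np.dot(weights, cues))

def best_frames(gray_frames, k=5):
    """Pick the k highest-quality frames of a sequence for recognition."""
    scores = [frame_quality(f) for f in gray_frames]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```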
Abstract:
Studies dedicated to understanding the relationship between gaming and mental health have traditionally focused on effects such as depression, anxiety, obsessive usage, aggression, obesity, and faltering ‘real life’ relationships. Setting aside the complexity of game genre and personality, this review aims to define a space for a positive relationship between videogame play and wellbeing by applying current videogame research to the criteria that define the wellbeing construct ‘flourishing’. Self-determination theory (SDT) and flow provide context, and areas of overlap are explored.
Abstract:
While a rich body of literature in television and film studies and media policy studies has tended to focus on media activities in the formal sector, we know much less about informal media activities, their influence on state policies, and the dynamics between the formal and informal sectors. This article examines these issues with reference to a particularly revealing period following a large-scale government crackdown on peer-to-peer video sharing sites in China in 2008. By analyzing the aims and consequences of the state action, I point to its counter-productive effects in terms of cultural loss and the resurgence of offline piracy, and show its positive impact in forcing the informal into the formal sector and pressuring the formal to innovate. Meanwhile, an increasing rapprochement between professional and user-created content leads to a new relationship between the formal and informal sectors. This case demonstrates the importance of considering the dynamics between the two sectors. It also offers compelling evidence of the role of the informal sector in engendering state action, which in turn impacted the co-evolution of the formal and the informal sectors.
The backfilled GEI: a cross-capture modality gait feature for frontal and side-view gait recognition
Abstract:
In this paper, we propose a novel direction for gait recognition research by introducing a new capture-modality-independent, appearance-based feature which we call the Back-filled Gait Energy Image (BGEI). It can be constructed from frontal depth images as well as the more commonly used side-view silhouettes, allowing the feature to be applied across these two differing capture systems using the same enrolled database. To evaluate this new feature, a frontally captured depth-based gait dataset was created containing 37 unique subjects, a subset of whom were also captured from the side. The results demonstrate that the BGEI can effectively be used to identify subjects through their gait across these two differing input devices, achieving a rank-1 match rate of 100% in our experiments. We also compare the BGEI against the GEI and GEV in their respective domains, using the CASIA dataset and our depth dataset, and show that it compares favourably against them. The experiments were performed using a sparse-representation-based classifier with a locally discriminating input feature space, which shows a significant improvement in performance over other classifiers used in the gait recognition literature, achieving state-of-the-art results with the GEI on the CASIA dataset.
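For reference, the standard GEI that the BGEI extends is simply the per-pixel mean of aligned binary silhouettes over a gait cycle; a minimal sketch follows. The back-filling step that makes the feature work from frontal depth input is specific to the paper and is not reproduced here.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Compute a standard Gait Energy Image: the per-pixel mean of
    size-normalised, centre-aligned binary silhouettes over one gait
    cycle.  The BGEI's back-filling step for frontal depth input is
    specific to the paper and not reproduced in this sketch."""
    stack = np.stack([(s > 0).astype(np.float32) for s in silhouettes])
    return stack.mean(axis=0)  # values in [0, 1]; bright = always body

# Usage sketch: silhouettes would come from background subtraction,
# cropped and resized to a common frame (e.g. 128 x 88) beforehand.
```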