819 results for Visual surveillance, Human activity recognition, Video annotation
Abstract:
Modelling activities in crowded scenes is very challenging, as object tracking is not robust in complicated scenes and optical flow does not capture long-range motion. We propose a novel approach to analyse activities in crowded scenes using a "bag of particle trajectories". Particle trajectories are extracted from foreground regions within short video clips using particle video, which estimates long-range motion, in contrast to optical flow, which is only concerned with inter-frame motion. Our applications include temporal video segmentation and anomaly detection, and we perform our evaluation on several real-world datasets containing complicated scenes. We show that our approaches achieve state-of-the-art performance for both tasks.
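The clip representation described above can be sketched compactly. Below is a minimal, illustrative reconstruction (not the authors' code), assuming trajectories have already been extracted by a particle-video method as fixed-length (L, 2) arrays of (x, y) points; the displacement descriptor and codebook size are assumptions made for the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def trajectory_descriptor(traj):
    """Describe one trajectory by its normalised frame-to-frame displacements."""
    d = np.diff(traj, axis=0)                       # (L-1, 2) displacement vectors
    return (d / (np.linalg.norm(d) + 1e-8)).ravel()

def bag_of_trajectories(clips, n_words=50, seed=0):
    """Quantise trajectory descriptors; return one normalised histogram per clip."""
    descs = [np.array([trajectory_descriptor(t) for t in clip]) for clip in clips]
    codebook = KMeans(n_clusters=n_words, n_init=10, random_state=seed)
    codebook.fit(np.vstack(descs))                  # learn the trajectory vocabulary
    hists = []
    for d in descs:
        words = codebook.predict(d)
        h = np.bincount(words, minlength=n_words).astype(float)
        hists.append(h / h.sum())
    return np.array(hists)
```

Clips whose histograms lie far from all training histograms (e.g. under a nearest-neighbour distance threshold) could then be flagged as anomalous.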
Abstract:
In this paper we use a sequence-based visual localization algorithm to reveal surprising answers to the question of how much visual information is actually needed to conduct effective navigation. The algorithm actively searches for the best local image matches within a sliding window of short route segments or 'sub-routes', and matches sub-routes by searching for coherent sequences of local image matches. In contrast to many existing techniques, the technique requires no pre-training or camera parameter calibration. We compare the algorithm's performance to the state-of-the-art FAB-MAP 2.0 algorithm on a 70 km benchmark dataset. Performance matches or exceeds the state-of-the-art feature-based localization technique using images as small as 4 pixels, fields of view reduced by a factor of 250, and pixel bit depths reduced to 2 bits. We present further results demonstrating the system localizing in an office environment with near 100% precision using two 7-bit Lego light sensors, as well as using 16- and 32-pixel images from a motorbike race and a mountain rally car stage. By demonstrating how little image information is required to achieve localization along a route, we hope to stimulate future 'low fidelity' approaches to visual navigation that complement probabilistic feature-based techniques.
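The sub-route matching idea lends itself to a short sketch. The following is a rough reconstruction of the general scheme, assuming patch-normalised thumbnail images stored as NumPy arrays and equal query/map frame rates; it is not the authors' implementation.

```python
import numpy as np

def difference_matrix(query_imgs, map_imgs):
    """Sum of absolute differences between every query and map thumbnail."""
    q = query_imgs.reshape(len(query_imgs), -1).astype(float)
    m = map_imgs.reshape(len(map_imgs), -1).astype(float)
    return np.abs(q[:, None, :] - m[None, :, :]).sum(axis=2)

def best_subroute(diff, seq_len=10):
    """Find the map index whose straight (constant-velocity) trajectory through
    the difference matrix best explains the last seq_len query images."""
    n_q, n_m = diff.shape
    rows = np.arange(n_q - seq_len, n_q)
    best_score, best_idx = np.inf, -1
    for start in range(n_m - seq_len + 1):
        cols = start + np.arange(seq_len)
        score = diff[rows, cols].sum()
        if score < best_score:
            best_score, best_idx = score, start + seq_len - 1
    return best_idx, best_score
```

Because a whole coherent sequence of matches must agree rather than any single image, the individual images can be extremely small before localisation degrades.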
Abstract:
Audio-visual speech recognition, or the combination of visual lip-reading with traditional acoustic speech recognition, has previously been shown to provide a considerable improvement over acoustic-only approaches in noisy environments, such as an automotive cabin. The research presented in this paper extends the established audio-visual speech recognition literature to show that further improvements in speech recognition accuracy can be obtained when multiple frontal or near-frontal views of a speaker's face are available. A series of visual speech recognition experiments using a four-stream visual synchronous hidden Markov model (SHMM) is conducted on the four-camera AVICAR automotive audio-visual speech database. We study the relative contributions of the side and centrally oriented cameras in improving visual speech recognition accuracy. Finally, the combination of the four visual streams with a single audio stream in a five-stream SHMM demonstrates a relative improvement of over 56% in word recognition accuracy compared to the acoustic-only approach in the noisiest conditions of the AVICAR database.
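Synchronous multi-stream HMMs typically combine streams by raising each stream's emission likelihood to an exponent and taking the product, i.e. a weighted sum in the log domain. A minimal sketch of that fusion step follows; the weights are placeholders (in practice they are tuned, e.g. per noise condition), and this is not the paper's exact configuration.

```python
import numpy as np

def fuse_stream_loglikelihoods(stream_ll, weights):
    """stream_ll: (n_streams, n_states) emission log-likelihoods for one frame.
    Returns the fused (n_states,) log-likelihood."""
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0), "stream exponents usually constrained to sum to 1"
    return (w[:, None] * stream_ll).sum(axis=0)   # weighted geometric mean in prob space

# Example: one audio stream plus four visual streams for a 3-state model,
# with the audio stream weighted most heavily (illustrative values only).
ll = np.log(np.random.rand(5, 3))
fused = fuse_stream_loglikelihoods(ll, [0.6, 0.1, 0.1, 0.1, 0.1])
```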
Abstract:
Quality-based frame selection is a crucial task in video face recognition, both to improve the recognition rate and to reduce the computational cost. In this paper we present a framework that uses a variety of cues (face symmetry, sharpness, contrast, closeness of the mouth, brightness and openness of the eyes) to select the highest-quality facial images available in a video sequence for recognition. Normalized feature scores are fused using a neural network, and frames with high quality scores are used in a Local Gabor Binary Pattern Histogram Sequence based face recognition system. Experiments on the Honda/UCSD database show that the proposed method selects the best-quality face images in the video sequence, resulting in improved recognition performance.
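The selection logic can be sketched in a few lines. The paper fuses normalized cue scores with a trained neural network; in the illustrative sketch below, a fixed weighted sum stands in for the learned fusion so that the frame-selection step itself is clear.

```python
import numpy as np

def select_frames(cue_scores, weights, k=10):
    """cue_scores: (n_frames, n_cues) raw quality scores.
    Returns the indices of the k highest-quality frames."""
    s = cue_scores - cue_scores.min(axis=0)
    s = s / (s.max(axis=0) + 1e-8)          # min-max normalise each cue
    fused = s @ np.asarray(weights, float)  # placeholder for the learned fusion
    return np.argsort(fused)[::-1][:k]

# Example: 100 frames scored on 6 cues, equal placeholder weights.
scores = np.random.rand(100, 6)
best = select_frames(scores, weights=[1 / 6] * 6, k=10)
```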
Abstract:
Introduction: Delirium is a serious issue associated with high morbidity and mortality in older hospitalised people. Early recognition enables diagnosis and treatment of underlying causes, which can lead to improved patient outcomes. However, research shows that nurse knowledge and accurate recognition of delirium are poor, and a lack of education appears to be a key issue underlying this problem. Thus, the purpose of this randomised controlled trial (RCT) was to evaluate, in a sample of registered nurses, the usability and effectiveness of a web-based learning site, designed using constructivist learning principles, to improve acute care nurses' knowledge and recognition of delirium. Prior to undertaking the RCT, preliminary phases were completed involving validation of vignettes, video-taping five of the validated vignettes, website development and pilot testing.

Methods: The cluster RCT involved consenting registered nurse participants (N = 175) from twelve clinical areas within three acute health care facilities in Queensland, Australia. Data were collected through a variety of measures and instruments. Primary outcomes were improved ability of nurses to recognise delirium, assessed using written validated vignettes, and improved knowledge of delirium, assessed using a delirium knowledge questionnaire. The secondary outcomes were aimed at determining nurse satisfaction and usability of the website. Primary outcome measures were taken at baseline (T1), directly after the intervention (T2) and two months later (T3). The secondary outcomes were measured at T2 by participants in the intervention group. Following baseline data collection, remaining participants were assigned to either the intervention (n = 75) or control (n = 72) group. Participants in the intervention group were given access to the learning intervention, while the control group continued to work in their clinical areas and at that time did not receive access to the learning intervention. Data from the primary outcome measures were examined in mixed model analyses.

Results: Overall, the effect of the online learning intervention over time, comparing the intervention and control groups, was positive. The intervention group's scores were higher, and the change over time was statistically significant [T1 to T3 (t = 3.78, p < 0.001) and T1 to T2 (t = 5.83, p < 0.001)]. Statistically significant improvements were also seen for delirium recognition between the two groups when comparing T2 with T1 (t = 2.58, p = 0.012), but not when comparing T3 with T1 (t = 1.80, p = 0.074). The majority of the participants rated the website highly on its visual, functional and content elements. Additionally, nearly 80% of the participants liked the overall website features, and registered nurses in the intervention group self-reported improvements in delirium knowledge and recognition.

Discussion: Findings from this study support the concept that online learning is an effective and satisfying method of information delivery. Embedded within a constructivist learning environment, the site produced a high level of satisfaction and usability for the registered nurse end-users. Additionally, the results showed that the website significantly improved delirium knowledge and recognition scores, and that the improvement in delirium knowledge was retained at a two-month follow-up.
Given the strong effect of the intervention, the online delirium intervention should be utilised as a way of providing information to registered nurses. It is envisaged that this knowledge would lead to improved recognition of delirium as well as improvements in patient outcomes; however, translation of this knowledge attainment into clinical practice was outside the scope of this study. A critical next step is demonstrating the effect of the intervention in changing clinical behaviour and improving patient health outcomes.
Abstract:
The aim of this study was to examine older adults' understanding and interpretation of a validated questionnaire for physical activity (PA) surveillance, the Active Australia Survey (AAS). To address this aim, cognitive interviewing techniques were used during face-to-face semi-structured interviews with 44 adults aged 65-89 years. Qualitative data analysis revealed that participants were confused by questionnaire phrasing, misunderstood the scope of activities to include in answers, and misunderstood the time frame of activities to report. They also struggled to accurately estimate the frequency and duration of their activities. Our findings suggest that AAS questions may be interpreted differently by older adults than intended by the survey developers. Findings also suggest that older adults use a range of methods for calculating PA frequency and duration. The issues revealed in this study may be useful for adapting the AAS for use with older community-dwelling adults.
Abstract:
Spatio-temporal interest points are the most popular feature representation in the field of action recognition. A variety of methods have been proposed to detect and describe local patches in video, with several techniques reporting state-of-the-art performance for action recognition. However, the reported results are obtained under different experimental settings with different datasets, making it difficult to compare the various approaches. We therefore seek to comprehensively evaluate state-of-the-art spatio-temporal features under a common evaluation framework, with popular benchmark datasets (KTH, Weizmann) and more challenging datasets such as Hollywood2. The purpose of this work is to provide guidance for researchers when selecting features for different applications with different environmental conditions. In this work we evaluate four popular descriptors (HOG, HOF, HOG/HOF, HOG3D) using a popular bag-of-visual-features representation and Support Vector Machines (SVMs) for classification. Moreover, we provide an in-depth analysis of local feature descriptors and optimize the codebook sizes for different datasets with different descriptors. We demonstrate that motion-based features offer better performance than those that rely solely on spatial information, while features that combine both types of data are more consistent across a variety of conditions, but typically require a larger codebook for optimal performance.
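The evaluated pipeline is the standard bag-of-visual-features recipe; a condensed sketch follows. Descriptor extraction (HOG/HOF/HOG3D) is assumed done elsewhere, the RBF kernel is a stand-in rather than the paper's exact kernel, and n_words is the codebook size the paper optimizes per dataset.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def video_histograms(videos_desc, codebook):
    """videos_desc: list of (n_patches, dim) descriptor arrays, one per video."""
    n_words = codebook.n_clusters
    hists = []
    for desc in videos_desc:
        words = codebook.predict(desc)
        h = np.bincount(words, minlength=n_words).astype(float)
        hists.append(h / h.sum())
    return np.array(hists)

def train_bovw_svm(train_desc, train_labels, n_words=1000, seed=0):
    codebook = KMeans(n_clusters=n_words, n_init=4, random_state=seed)
    codebook.fit(np.vstack(train_desc))          # visual vocabulary
    clf = SVC(kernel="rbf")
    clf.fit(video_histograms(train_desc, codebook), train_labels)
    return codebook, clf
```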
Abstract:
Evaluating physical activity is important for public health population research and for evaluating lifestyle interventions among targeted groups. Self-reported questionnaires are frequently used to evaluate physical activity in a variety of contexts where resource or pragmatic limitations prohibit the use of more sophisticated approaches. However, prior research on the use of other patient-reported outcomes in healthcare settings has highlighted that simply completing a questionnaire may change a patient's behaviour or responses to subsequent questions. This methodology study aimed to examine whether completing a standard physical activity questionnaire altered patients' responses to two related questions: (a) whether they are 'sufficiently physically active' and (b) whether they desire 'to be more physically active'.
Abstract:
Modelling video sequences by subspaces has recently shown promise for recognising human actions. Subspaces are able to accommodate the effects of various image variations and can capture the dynamic properties of actions. Subspaces form a non-Euclidean, curved Riemannian manifold known as a Grassmann manifold. Inference on manifold spaces is usually achieved by embedding the manifolds in higher-dimensional Euclidean spaces. In this paper, we instead propose to embed the Grassmann manifolds into reproducing kernel Hilbert spaces and then tackle the problem of discriminant analysis on such manifolds. To achieve efficient machinery, we propose graph-based local discriminant analysis that utilises within-class and between-class similarity graphs to characterise intra-class compactness and inter-class separability, respectively. Experiments on the KTH, UCF Sports, and Ballet datasets show that the proposed approach obtains marked improvements in discrimination accuracy in comparison to several state-of-the-art methods, such as the kernel version of the affine hull image-set distance, tensor canonical correlation analysis, spatial-temporal words, and the hierarchy of discriminative space-time neighbourhood features.
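One common way to realise a Grassmann-to-RKHS embedding is the projection kernel k(X, Y) = ||XᵀY||²_F between orthonormal subspace bases; whether this is the paper's exact kernel is an assumption, but it is the usual choice and illustrates the machinery.

```python
import numpy as np

def orthonormal_basis(video_feats, dim=5):
    """Represent a video (n_frames, feat_dim) by a dim-dimensional subspace,
    i.e. a point on the Grassmann manifold, via SVD."""
    _, _, vt = np.linalg.svd(video_feats, full_matrices=False)
    return vt[:dim].T                           # (feat_dim, dim), orthonormal columns

def projection_kernel(X, Y):
    """Grassmann projection kernel between two orthonormal bases."""
    return np.linalg.norm(X.T @ Y, "fro") ** 2
```

The resulting kernel matrix can then feed any kernel method, including discriminant analysis built on within-class and between-class similarity graphs as described above.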
Abstract:
Purpose: Intrinsically photosensitive retinal ganglion cells (ipRGCs) mediate non-image-forming functions including photoentrainment and the pupil light reflex (PLR). Temporal summation increases visual sensitivity and decreases temporal resolution for image-forming vision, but the summation properties of non-image-forming vision are unknown. We investigated the temporal summation of inner (ipRGC) and outer (rod/cone) retinal inputs to the PLR.

Method: The consensual PLR of the left eye was measured in six participants with normal vision using a Maxwellian-view infrared pupillometer. Temporal summation was investigated using a double-pulse protocol (100 ms stimulus pairs; 0-1024 ms inter-stimulus interval, ISI) presented to the dilated fellow right eye (Tropicamide 1%). Stimulus lights (blue λmax = 460 nm; red λmax = 638 nm) biased activity to inner or outer retinal inputs to non-image-forming vision. Temporal summation was measured suprathreshold (15.2 log photons·cm⁻²·s⁻¹ at the cornea) and subthreshold (11.4 log photons·cm⁻²·s⁻¹ at the cornea).

Results: RM-ANOVAs showed that the suprathreshold and subthreshold 6 s post-illumination pupil response (PIPR; expressed as a percentage of baseline diameter) did not significantly vary for red or blue stimuli (p > .05). The PIPR for a subthreshold red 16 ms double-pulse control condition did not significantly differ with ISI (p > .05). The maximum constriction amplitude for red and blue 100 ms double-pulse stimuli did not significantly vary with ISI (p > .05).

Conclusion: The non-significant changes in suprathreshold PIPR and subthreshold maximum pupil constriction indicate that inner-retinal ipRGC inputs and outer-retinal photoreceptor inputs to the PLR do not show temporal summation. The results suggest a fundamental difference between the temporal summation characteristics of image-forming and non-image-forming vision.
Abstract:
To recognize faces in video, face appearances have been widely modeled as piece-wise local linear models, which linearly approximate the smooth yet non-linear low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions which are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With the PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately. Instead of assuming a single global within-class covariance, the heteroscedastic PLDA learns different within-class covariances specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets show the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
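The matching step can be sketched as follows: each gallery local model is a Gaussian N(μₖ, Σₖ) with its own covariance (the heteroscedastic part), and a probe frame is scored by its point-to-model Mahalanobis distance. Averaging the per-frame minima, as done below, is an illustrative fusion rule, not necessarily the paper's.

```python
import numpy as np

def point_to_model(x, mu, cov_inv):
    """Squared Mahalanobis distance from point x to Gaussian (mu, cov_inv)."""
    d = x - mu
    return float(d @ cov_inv @ d)

def probe_to_gallery(probe_frames, models):
    """models: list of (mu, cov_inv) local Gaussians for one gallery video.
    Returns a fused distance; lower means a better match."""
    per_frame = [min(point_to_model(x, mu, ci) for mu, ci in models)
                 for x in probe_frames]
    return float(np.mean(per_frame))
```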
Abstract:
The rapid growth of visual information on the Web has led to immense interest in multimedia information retrieval (MIR). While advances in MIR systems have achieved some success in specific domains, particularly through content-based approaches, general Web users still struggle to find the images they want. Despite successes in content-based object recognition and concept extraction, the major problem in current Web image searching remains the querying process. Since most online users express their needs only in semantic terms or objects, systems that rely on visual features (e.g., color or texture) to search images create a semantic gap which hinders general users from fully expressing their needs. In addition, query-by-example (QBE) retrieval imposes extra obstacles for exploratory search, because users may not always have a representative image at hand or in mind when starting a search (i.e., the 'page zero' problem). As a result, the majority of current online image search engines (e.g., Google, Yahoo, and Flickr) still primarily use textual queries.

The problem with query-based retrieval systems is that they only capture users' information needs in terms of formal queries; the implicit and abstract parts of users' information needs are inevitably overlooked. Hence, users often struggle to formulate queries that best represent their needs, and some compromises have to be made. Studies of Web search logs suggest that multimedia searches are more difficult than textual Web searches, and that Web image searching is the most difficult compared with video or audio searches. Online users therefore need to put in more effort when searching multimedia content, especially for image searches. Most interactions in Web image searching occur during query reformulation. While log analysis provides intriguing views of how the majority of users search, their search needs and motivations are ultimately neglected. User studies on image searching have attempted to understand users' search contexts in terms of users' background (e.g., knowledge, profession, motivation for search and task types) and the search outcomes (e.g., use of retrieved images, search performance). However, these studies have typically focused on particular domains with a selective group of professional users. General users' Web image searching contexts and behaviors are little understood, although they represent the majority of online image searching activity today. We argue that only by understanding Web image users' contexts can current Web search engines further improve their usefulness and provide more efficient searches.

In order to understand users' search contexts, a user study was conducted based on university students' Web image searching in the News, Travel, and commercial Product domains. The three search domains were deliberately chosen to reflect image users' interests in people, time, events, locations, and objects. We investigated participants' Web image searching behavior, with a focus on query reformulation and search strategies. Participants' search contexts, such as their search background, motivation for search, and search outcomes, were gathered by questionnaires. The searching activity was recorded along with participants' think-aloud data for analysing significant search patterns. The relationships between participants' search contexts and corresponding search strategies were discovered using a Grounded Theory approach.
Our key findings include the following aspects:
- Effects of users' interactive intents on query reformulation patterns and search strategies
- Effects of task domain on task specificity and task difficulty, as well as on some specific searching behaviors
- Effects of searching experience on result expansion strategies

A contextual image searching model was constructed based on these findings. The model helped us understand Web image searching from the user's perspective, and introduced a context-aware searching paradigm for current retrieval systems. A query recommendation tool was also developed to demonstrate how users' query reformulation contexts can potentially contribute to more efficient searching.
Abstract:
The performance of visual speech recognition (VSR) systems is significantly influenced by the accuracy of the visual front-end. Current state-of-the-art VSR systems use off-the-shelf face detectors such as Viola-Jones (VJ), which have limited reliability under changes in illumination and head pose. For a VSR system to perform well under these conditions, an accurate visual front-end is required. This is an important problem to be solved in many practical implementations of audio-visual speech recognition systems, for example in automotive environments for an efficient human-vehicle computer interface. In this paper, we re-examine the current state of the art in VSR by comparing off-the-shelf face detectors with the recently developed Fourier Lucas-Kanade (FLK) image alignment technique. A variety of image alignment and visual speech recognition experiments are performed on a clean dataset as well as on a challenging automotive audio-visual speech dataset. Our results indicate that the FLK image alignment technique can significantly outperform off-the-shelf face detectors, but requires frequent fine-tuning.
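Implementing FLK itself is beyond a short sketch, but the kind of intensity-based iterative alignment it builds on can be illustrated with OpenCV's ECC alignment, a Lucas-Kanade-style method used here purely as a stand-in, not the paper's FLK:

```python
import cv2
import numpy as np

def align_to_template(template_gray, frame_gray, n_iter=50, eps=1e-6):
    """Estimate an affine warp aligning frame_gray to template_gray and
    return the warped frame (e.g. a stabilised mouth region)."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, n_iter, eps)
    _, warp = cv2.findTransformECC(template_gray, frame_gray, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = template_gray.shape
    return cv2.warpAffine(frame_gray, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```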
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs shows great potential for growth, with applications spanning a vast practical domain such as security surveillance and health care, it suffers from severe constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of optimal camera configuration determination.

Addressing the first problem of multi-object tracking and localisation requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform.

A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images from the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image.

For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground-plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network.

In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions.

The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
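The ground-plane calibration idea in the tracking-and-localisation work above reduces to fitting a homography per camera from (image point, global ground point) correspondences collected as the robot drives through that camera's view. A compact sketch, with illustrative function names:

```python
import numpy as np
import cv2

def calibrate_ground_homography(image_pts, ground_pts):
    """image_pts, ground_pts: (N, 2) corresponding points, N >= 4.
    Returns the 3x3 image-plane-to-ground-plane homography."""
    H, _ = cv2.findHomography(np.float32(image_pts), np.float32(ground_pts),
                              cv2.RANSAC, 3.0)
    return H

def localise(H, image_pt):
    """Map an image point into the global ground-plane coordinate frame."""
    p = H @ np.array([image_pt[0], image_pt[1], 1.0])
    return p[:2] / p[2]
```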