6 results for image feature extraction
Abstract:
Person re-identification involves recognizing a person across non-overlapping camera views, with different pose, illumination, and camera characteristics. We propose to tackle this problem by training a deep convolutional network to represent a person’s appearance as a low-dimensional feature vector that is invariant to the common appearance variations encountered in re-identification. Specifically, a Siamese-network architecture is used to train a feature extraction network using pairs of similar and dissimilar images. We show that the use of a novel multi-task learning objective is crucial for regularizing the network parameters in order to prevent over-fitting due to the small size of the training dataset. We complement the verification task, which is at the heart of re-identification, by training the network to jointly perform verification and identification, and to recognize attributes related to the clothing and pose of the person in each image. Additionally, we show that our proposed approach performs well even in the challenging cross-dataset scenario, which may better reflect real-world expected performance.
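As a concrete illustration of the training setup described in this abstract, the sketch below passes a pair of images through a shared feature-extraction network and scores it with a joint verification, identification, and attribute objective. This is a minimal sketch in PyTorch, not the authors' implementation; the backbone layers, feature dimension, number of identities and attributes, contrastive margin, and loss weights are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureNet(nn.Module):
    # Shared convolutional network mapping an image to a low-dimensional
    # appearance vector, with auxiliary identification and attribute heads.
    def __init__(self, feat_dim=128, num_ids=100, num_attrs=20):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.embed = nn.Linear(64, feat_dim)             # appearance descriptor
        self.id_head = nn.Linear(feat_dim, num_ids)      # identification task
        self.attr_head = nn.Linear(feat_dim, num_attrs)  # attribute task

    def forward(self, x):
        f = self.embed(self.backbone(x))
        return f, self.id_head(f), self.attr_head(f)

def multitask_loss(net, img_a, img_b, same, id_a, id_b, attr_a, attr_b,
                   margin=1.0, w_ver=1.0, w_id=1.0, w_attr=0.5):
    # same: 1.0 for pairs showing the same person, 0.0 otherwise.
    fa, ida, aa = net(img_a)
    fb, idb, ab = net(img_b)
    # Verification: contrastive loss on the distance between the two embeddings.
    d = F.pairwise_distance(fa, fb)
    ver = (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()
    # Identification: cross-entropy on each image's identity label.
    ident = F.cross_entropy(ida, id_a) + F.cross_entropy(idb, id_b)
    # Attributes: multi-label binary cross-entropy (clothing/pose attributes).
    attr = (F.binary_cross_entropy_with_logits(aa, attr_a)
            + F.binary_cross_entropy_with_logits(ab, attr_b))
    return w_ver * ver + w_id * ident + w_attr * attr

The single weighted sum mirrors the idea that the auxiliary identification and attribute tasks act as regularizers for the verification objective when training data are scarce.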
Abstract:
Poor sleep is increasingly being recognised as an important prognostic parameter of health. Patients with suspected sleep disorders are referred to sleep clinics, which guide treatment. However, sleep clinics are not always a viable option due to their high cost, a lack of experienced practitioners, lengthy waiting lists, and an unrepresentative sleeping environment. A home-based non-contact sleep/wake monitoring system may be used as a guide for treatment, potentially stratifying patients by clinical need or highlighting longitudinal changes in sleep and nocturnal patterns. This paper presents the evaluation of an under-mattress sleep monitoring system for non-contact sleep/wake discrimination. A large dataset of sensor data with concomitant sleep/wake state was collected from both younger and older adults participating in a circadian sleep study. A thorough training/testing/validation procedure was configured, and optimised feature extraction and sleep/wake discrimination algorithms were evaluated both within and across the two cohorts. An accuracy, sensitivity, and specificity of 74.3%, 95.5%, and 53.2%, respectively, are reported over all subjects using an external validation dataset (71.9%, 87.9%, and 56%, and 77.5%, 98%, and 57% are reported for younger and older subjects, respectively). These results compare favourably with similar research; however, this system provides an ambient alternative suitable for long-term continuous sleep monitoring, particularly amongst vulnerable populations.
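For reference, the reported evaluation metrics can be computed from per-epoch sleep/wake predictions as in the short sketch below. This is not code from the paper; the labelling convention (1 = sleep as the positive class, 0 = wake) and the function name sleep_wake_metrics are assumptions made for illustration.

import numpy as np

def sleep_wake_metrics(y_true, y_pred):
    # y_true, y_pred: per-epoch labels, 1 = sleep, 0 = wake (assumed convention).
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)     # sleep scored as sleep
    tn = np.sum(~y_true & ~y_pred)   # wake scored as wake
    fp = np.sum(~y_true & y_pred)    # wake scored as sleep
    fn = np.sum(y_true & ~y_pred)    # sleep scored as wake
    accuracy = (tp + tn) / y_true.size
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return accuracy, sensitivity, specificity

# Dummy usage with made-up epoch labels:
acc, sens, spec = sleep_wake_metrics([1, 1, 0, 1, 0, 0], [1, 1, 1, 1, 0, 1])
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")

Under this convention, the high sensitivity and lower specificity reported above would indicate that sleep epochs are detected reliably while wake epochs are harder to discriminate.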
Abstract:
Displays are a feature of animal contest behaviour and have been interpreted as a means of gathering information on opponent fighting ability, as well as of signalling aggressive motivation. In fish, contest displays often include frontal and lateral elements, the latter involving contestants showing their flanks to an opponent. Previous work in a range of fish species has demonstrated population-level lateralization of these displays, with individuals preferentially showing one side to their opponent. Mirrors are commonly used in place of a real opponent to study aggression in fish, yet they may disrupt the normal pattern of display behaviour. Here, using Siamese fighting fish, Betta splendens, we compare the aggressive behaviour of males towards a mirror image and towards a real opponent behind a transparent barrier. As this species is a facultative air-breather, we also quantify surface breathing, providing insights into underlying fight motivation. Consistent with previous work, we found evidence of population-level lateralization, with a bias to present the left side and use the left eye when facing a real opponent. Contrary to expectations, there were no differences in the aggressive displays towards a mirror and a real opponent, with positive correlations between the behaviour in the two scenarios. However, there were important differences in surface breathing, which was more frequent and of longer duration in the mirror treatment. The reasons for these differences are discussed in relation to the repertoire of contest behaviour and motivation when facing a real opponent.
Abstract:
Objective
Pedestrian detection in video surveillance systems has long been a central topic in computer vision research. Such systems are widely used in train stations, airports, large commercial plazas, and other public places. However, pedestrian detection remains difficult because of complex backgrounds. The visual attention mechanism has attracted increasing interest in object detection and tracking research in recent years, and previous studies have achieved substantial progress. We propose a novel pedestrian detection method based on semantic features under the visual attention mechanism.
Method
The proposed semantic feature-based visual attention model is a spatial-temporal model that consists of two parts: a static visual attention model and a motion visual attention model. The static visual attention model in the spatial domain is constructed by combining bottom-up with top-down attention guidance. Based on the characteristics of pedestrians, the bottom-up visual attention model of Itti is improved by intensifying the orientation vectors of elementary visual features so that the visual saliency map is better suited to pedestrian detection. In terms of pedestrian attributes, skin color is selected as a semantic feature for pedestrian detection. Regional and Gaussian models are adopted to construct the skin color model. Skin feature-based visual attention guidance is then proposed to complete the top-down process. The bottom-up and top-down visual attentions are linearly combined, using weights obtained experimentally, to construct the static visual attention model in the spatial domain. The spatial-temporal visual attention model is then constructed via motion features in the temporal domain. Based on the static visual attention model in the spatial domain, the frame difference method is combined with optical flow to detect motion vectors, and the resulting motion-vector field is filtered. The saliency of the motion vectors is evaluated via motion entropy to make the selected motion feature more suitable for the spatial-temporal visual attention model.
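The combination steps described in the Method can be pictured as weighted sums of normalised saliency maps plus an entropy-based score for motion vectors. The sketch below is a minimal illustration under assumed array shapes and placeholder weights; normalise, static_saliency, motion_entropy, and spatio_temporal_saliency are hypothetical helper names, not functions from the paper, and the weights stand in for the experimentally determined values mentioned above.

import numpy as np

def normalise(m):
    # Rescale a saliency map to [0, 1] so that maps can be combined.
    m = m.astype(np.float64)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def static_saliency(bottom_up, skin_top_down, w_bu=0.6, w_td=0.4):
    # Linear combination of the bottom-up (Itti-style) map and the
    # top-down skin-color guidance map in the spatial domain.
    return w_bu * normalise(bottom_up) + w_td * normalise(skin_top_down)

def motion_entropy(flow_magnitude, bins=16):
    # Entropy of the motion-magnitude histogram for a region, used to
    # score how salient the detected motion vectors are.
    hist, _ = np.histogram(flow_magnitude, bins=bins)
    p = hist / hist.sum() if hist.sum() > 0 else hist.astype(np.float64)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def spatio_temporal_saliency(static_map, motion_map, w_motion=0.5):
    # Fuse the static (spatial) saliency with a motion-saliency map to
    # obtain the spatial-temporal attention map.
    return (1 - w_motion) * normalise(static_map) + w_motion * normalise(motion_map)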
Result
Standard datasets and practical surveillance videos are selected for the experiments, which are performed on the MATLAB R2012a platform. The experimental results show that our spatial-temporal visual attention model is robust across a variety of scenes, including indoor train station surveillance videos and outdoor scenes with swaying leaves. The proposed model outperforms the visual attention model of Itti, the graph-based visual saliency model, the phase spectrum of quaternion Fourier transform model, and the motion channel model of Liu in terms of pedestrian detection, achieving a 93% accuracy rate on the test video.
Conclusion
This paper proposes a novel pedestrian detection method based on the visual attention mechanism. A spatial-temporal visual attention model that uses both low-level and semantic features is proposed to calculate the saliency map. Based on this model, pedestrian targets can be detected through shifts in the focus of attention. The experimental results verify the effectiveness of the proposed attention model for detecting pedestrians.