3 results for Video Surveillance
at Universidad de Alicante
Abstract:
In many classification problems, it is necessary to consider the specific location in an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve the understanding of a scene for a video surveillance system. In the same way, identical features extracted from different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of the locations of the n-dimensional space from which the feature vectors have been extracted. The main contribution is the implicit preservation of the topology of the original space, because considering the locations of the extracted features and their topology can ease the solution to certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT). Features in adjacent areas of the n-dimensional space used to extract the feature vectors are explicitly mapped to adjacent areas of the nD-SOM-PINT, constraining the neural network structure and learning. As a case study, the neural network has been instantiated to represent and classify features, such as trajectories extracted from a sequence of images, at a high level of semantic understanding. Experiments have been thoroughly carried out on the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, in order to validate that preserving the topology of the two-dimensional space yields high-performance trajectory classification compared with ignoring the location of the features. Moreover, a brief example has been included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories. Results confirm the high accuracy of the nD-SOM-PINT, which outperforms previous methods aimed at classifying the same datasets.
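The abstract does not give the update rule of the nD-SOM-PINT, so the sketch below is only a generic illustration of the underlying idea: a self-organizing map whose units are anchored to fixed regions of a 2D image, so that the winner for each feature is determined by the feature's image location and neighbouring map units correspond to neighbouring image areas. The class name, grid layout, parameters and usage values are all hypothetical, not taken from the paper.

```python
import numpy as np

class LocationConstrainedSOM:
    """Toy 2D SOM whose units are anchored to fixed image regions.

    Each unit (i, j) covers one cell of a grid over the image, so
    adjacency in the map mirrors spatial adjacency in the image.
    """

    def __init__(self, grid_h, grid_w, feat_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.grid_h, self.grid_w = grid_h, grid_w
        self.weights = rng.normal(size=(grid_h, grid_w, feat_dim))

    def _cell(self, loc, img_shape):
        # Map a pixel location (row, col) to its grid cell.
        r = int(loc[0] * self.grid_h / img_shape[0])
        c = int(loc[1] * self.grid_w / img_shape[1])
        return min(r, self.grid_h - 1), min(c, self.grid_w - 1)

    def train_step(self, feat, loc, img_shape, lr=0.1, radius=1.0):
        # The winning unit is fixed by the feature's image location
        # (the "constrained" part of the idea).
        bi, bj = self._cell(loc, img_shape)
        # Update the winner and its map neighbours with a Gaussian kernel,
        # which keeps nearby image regions represented by nearby units.
        for i in range(self.grid_h):
            for j in range(self.grid_w):
                d2 = (i - bi) ** 2 + (j - bj) ** 2
                h = np.exp(-d2 / (2.0 * radius ** 2))
                self.weights[i, j] += lr * h * (feat - self.weights[i, j])

# Usage: feature vectors sampled from a 240x320 image with known locations.
som = LocationConstrainedSOM(grid_h=6, grid_w=8, feat_dim=4)
rng = np.random.default_rng(1)
for _ in range(1000):
    loc = (rng.integers(0, 240), rng.integers(0, 320))  # (row, col) of the feature
    feat = rng.normal(size=4)                           # the extracted feature vector
    som.train_step(feat, loc, img_shape=(240, 320))
```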
Abstract:
Objectives: In Europe, 25% of workers use video display terminals (VDTs). Occupational health surveillance has been considered a key element in the protection of these workers. Nevertheless, it is unclear whether the guidelines available for this purpose, based on EU standards and available evidence, meet currently accepted quality criteria. The aim of this study was to appraise three sets of European VDT guidelines (UK, France, Spain) in which regulatory and evidence-based approaches to visual health have been formulated and recommendations for practice made. Methods: Three independent appraisers used an adapted AGREE instrument with seven domains to appraise the guidelines. A modified nominal group technique was used in two consecutive phases: first, individual evaluation of the three guidelines simultaneously, and second, a face-to-face meeting of appraisers to discuss scoring. The ratings obtained in each domain and the variability among appraisers were analysed (correlation and kappa coefficients). Results: All guidelines had low domain scores. The domain rated most highly was Scope and purpose, while Applicability received the lowest scores. The UK guidelines had the highest overall score, and the Spanish ones the lowest. The analysis of reliability and of the differences between scores in each domain showed a high level of agreement. Conclusions: These results suggest that the current guidelines used in these countries need updating. The formulation of evidence-based European guidelines on VDT could help to reduce the significant variation among national guidelines, which may have an impact on practical application.
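The abstract reports agreement between appraisers via correlation and kappa coefficients but gives no formulas or data. Purely as a generic illustration, the sketch below computes Cohen's kappa for two appraisers' ratings over seven domains; the rating scale, values and function name are hypothetical and not taken from the study.

```python
import numpy as np

def cohens_kappa(r1, r2, categories):
    """Cohen's kappa for two raters scoring the same items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_observed = np.mean(r1 == r2)
    # Expected agreement under independent marginal rating distributions.
    p_expected = sum(
        np.mean(r1 == c) * np.mean(r2 == c) for c in categories
    )
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical 1-4 ratings of seven appraisal domains by two appraisers.
appraiser_a = [4, 3, 2, 2, 1, 3, 4]
appraiser_b = [4, 3, 2, 1, 1, 3, 3]
print(cohens_kappa(appraiser_a, appraiser_b, categories=[1, 2, 3, 4]))
```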
Abstract:
In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events in video. The system executes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and motion analysis and monitoring. These capabilities allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. The vision module must also be integrated into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video frequency. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must meet a set of requirements, such as time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
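The abstract describes the architecture only at a high level, so the following is a minimal sketch, not the authors' implementation: each camera stream is handled by its own worker process (standing in for a per-GPU processing pipeline), and a global monitor consumes the resulting events in real time. All names, the number of cameras, the round-robin device assignment and the simulated per-frame work are assumptions for illustration.

```python
import multiprocessing as mp
import queue
import time

NUM_CAMERAS = 4         # hypothetical number of acquisition devices
FRAMES_PER_CAMERA = 25  # hypothetical length of each simulated stream

def camera_worker(cam_id, gpu_id, results):
    """Process one camera stream; gpu_id stands in for a device binding."""
    for frame_idx in range(FRAMES_PER_CAMERA):
        # Placeholder for GPU-side segmentation / representation / analysis.
        event = {"camera": cam_id, "gpu": gpu_id, "frame": frame_idx}
        results.put(event)
        time.sleep(0.01)  # simulate per-frame processing latency

def monitor(results, expected):
    """Global module that consumes per-camera events as they arrive."""
    received = 0
    while received < expected:
        try:
            results.get(timeout=1.0)
        except queue.Empty:
            continue
        received += 1
        # Higher-level monitoring and decision making would be driven here.
    print(f"processed {received} events from {NUM_CAMERAS} cameras")

if __name__ == "__main__":
    results = mp.Queue()
    workers = [
        mp.Process(target=camera_worker, args=(cam, cam % 2, results))
        for cam in range(NUM_CAMERAS)   # round-robin over two "GPUs"
    ]
    for w in workers:
        w.start()
    monitor(results, expected=NUM_CAMERAS * FRAMES_PER_CAMERA)
    for w in workers:
        w.join()
```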