964 results for Video Surveillance
Abstract:
Free-roaming dogs (FRD) represent a potential threat to the quality of life in cities from an ecological, social and public health point of view. One of the most urgent concerns is the role of uncontrolled dogs as reservoirs of infectious diseases transmittable to humans and, above all, rabies. An estimate of the FRD population size and characteristics in a given area is the first step for any relevant intervention programme. Direct count methods remain prominent because of their non-invasive approach, and information technologies can support such methods by facilitating data collection and allowing for more efficient data handling. This paper presents a new framework for data collection using a topological algorithm implemented as an ArcScript in ESRI® ArcGIS software, which allows for a random selection of the sampling areas. It also supplies a mobile phone application for Android® operating system devices which integrates the Global Positioning System (GPS) and Google Maps™. The potential of this framework was tested in two Italian regions. Coupling innovative technological solutions with common counting methods facilitates data collection and transcription. It also paves the way for future applications, which could support dog population management systems.
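As an illustration of the random selection step, here is a minimal sketch assuming the study area has already been divided into a grid of candidate cells; the function and cell names are hypothetical, not the ArcScript implementation:

```python
import random

def select_sampling_areas(candidate_cells, n_areas, seed=None):
    """Randomly select sampling areas from a list of candidate grid cells.

    A simplified stand-in for the topological selection described above;
    `candidate_cells` and `n_areas` are hypothetical inputs.
    """
    rng = random.Random(seed)                # seeded for reproducible surveys
    return rng.sample(candidate_cells, n_areas)

# Example: pick 5 of 100 candidate cells reproducibly.
cells = [f"cell_{i}" for i in range(100)]
print(select_sampling_areas(cells, 5, seed=42))
```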
Abstract:
In this paper we propose a novel recurrent neural network architecture for video-based person re-identification. Given the video sequence of a person, features are extracted from each frame using a convolutional neural network that incorporates a recurrent final layer, which allows information to flow between time-steps. The features from all time-steps are then combined using temporal pooling to give an overall appearance feature for the complete sequence. The convolutional network, recurrent layer and temporal pooling layer are jointly trained to act as a feature extractor for video-based re-identification using a Siamese network architecture. Our approach makes use of colour and optical flow information in order to capture appearance and motion information which is useful for video re-identification. Experiments are conducted on the iLIDS-VID and PRID-2011 datasets to show that this approach outperforms existing methods of video-based re-identification.
https://github.com/niallmcl/Recurrent-Convolutional-Video-ReID
Project Source Code
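The released code implements the full model; as a rough illustration of the architecture described above, here is a minimal PyTorch-style sketch. The layer sizes, the plain RNN cell and mean pooling are illustrative assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class RecurrentCNN(nn.Module):
    """Sketch of a CNN + recurrent layer + temporal pooling feature extractor."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(            # per-frame appearance features
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.RNN(32, feat_dim, batch_first=True)  # flow between time-steps

    def forward(self, clip):                 # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        f = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(f)
        return out.mean(dim=1)               # temporal (mean) pooling

# Siamese use: embed two sequences and compare their distance.
net = RecurrentCNN()
a = net(torch.randn(2, 8, 3, 64, 32))
b = net(torch.randn(2, 8, 3, 64, 32))
dist = torch.nn.functional.pairwise_distance(a, b)
```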
Abstract:
Publicly available, outdoor webcams continuously view the world and share images. These cameras include traffic cams, campus cams, ski-resort cams, etc. The Archive of Many Outdoor Scenes (AMOS) is a project aiming to geolocate, annotate, archive, and visualize these cameras and images to serve as a resource for a wide variety of scientific applications. The AMOS dataset has archived over 750 million images of outdoor environments from 27,000 webcams since 2006. Our goal is to utilize the AMOS image dataset and crowdsourcing to develop reliable and valid tools to improve physical activity assessment via online, outdoor webcam capture of global physical activity patterns and urban built environment characteristics.
This project's grand scale-up of capturing physical activity patterns and built environments is a methodological step forward in advancing real-time, non-labor-intensive assessment using webcams, crowdsourcing, and eventually machine learning. The combined use of webcams capturing outdoor scenes every 30 min and crowdsourced workers providing the labor of annotating the scenes allows for accelerated public health surveillance related to physical activity across numerous built environments. The ultimate goal of this public health and computer vision collaboration is to develop machine learning algorithms that will automatically identify and calculate physical activity patterns.
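As a rough illustration of the crowdsourced annotation step, here is a minimal sketch of majority-vote label aggregation; the label names and three-annotator setup are hypothetical, not the project's actual protocol:

```python
from collections import Counter

def majority_label(annotations):
    """Return the most common crowd label for one image and its vote share."""
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(annotations)

# Three hypothetical annotators label one webcam frame.
print(majority_label(["walking", "walking", "cycling"]))  # 'walking', 2/3 of votes
```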
Abstract:
Over the past few decades, there has been an increased frequency and duration of cyanobacterial Harmful Algal Blooms (HABs) in freshwater systems globally. These can produce secondary metabolites called cyanotoxins, many of which are hepatotoxins, raising concerns about repeated exposure through ingestion of contaminated drinking water or food or through recreational activities such as bathing/swimming. An ultra-performance liquid chromatography tandem mass spectrometry (UPLC–MS/MS) multi-toxin method has been developed and validated for freshwater cyanotoxins: microcystins-LR, -YR, -RR, -LA, -LY and -LF, nodularin, cylindrospermopsin, anatoxin-a and the marine diatom toxin domoic acid. Separation was achieved in around 9 min, and dual SPE was incorporated, providing detection limits of between 0.3 and 5.6 ng/L of original sample. Intra- and inter-day precision analysis showed relative standard deviations (RSD) of 1.2–9.6% and 1.3–12.0% respectively. The method was applied to the analysis of aquatic samples (n = 206) from six European countries. The main class detected was the hepatotoxins: microcystin-YR (n = 22), cylindrospermopsin (n = 25), microcystin-RR (n = 17), microcystin-LR (n = 12), microcystin-LY (n = 1), microcystin-LF (n = 1) and nodularin (n = 5). For microcystins, the levels detected ranged from 0.001 to 1.51 µg/L, with two samples showing combined levels above the WHO guideline of 1 µg/L for microcystin-LR. Several samples presented with multiple toxins, indicating the potential for synergistic effects and possibly enhanced toxicity. This is the first published pan-European survey of freshwater bodies for multiple biotoxins, including two toxins identified for the first time: cylindrospermopsin in Ireland and nodularin in Germany, presenting further incentives for improved monitoring and development of strategies to mitigate human exposure.
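For reference, the precision figures above are percent relative standard deviations; here is a minimal sketch of that computation, using hypothetical replicate peak areas rather than study data:

```python
import numpy as np

def relative_std(replicates):
    """Percent relative standard deviation (RSD) of replicate measurements,
    as used for the intra-/inter-day precision figures quoted above."""
    r = np.asarray(replicates, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()   # sample standard deviation / mean

# Hypothetical peak areas from six intra-day injections of one toxin.
print(f"RSD = {relative_std([10510, 10100, 10680, 9950, 10820, 10220]):.1f}%")
```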
Abstract:
Background and Aims: To compare endoscopy and pathology sizing in a large population-based series of colorectal adenomas and to evaluate the implications for patient stratification into surveillance colonoscopy. Methods: Endoscopy and pathology sizes available from intact adenomas removed at colonoscopies performed as part of the Northern Ireland Bowel Cancer Screening Programme, from 2010 to 2015, were included in this study. Chi-squared tests were applied to compare size categories in relation to clinicopathological parameters and colonoscopy surveillance strata according to current American Gastroenterological Association and British Society of Gastroenterology guidelines. Results: A total of 2521 adenomas from 1467 individuals were included. There was a trend toward larger endoscopy than pathology sizing in 4 of the 5 study centers, but overall sizing concordance was good. Significantly greater clustering with sizing to the nearest 5 mm was evident in endoscopy versus pathology sizing (30% vs 19%, p<0.001), which may result in lower accuracy. Applying a 10-mm cut-off relevant to guidelines on risk stratification, 7.3% of all adenomas and 28.3% of those 8 to 12 mm in size had discordant endoscopy and pathology size categorization. Depending upon which guidelines are applied, 4.8% to 9.1% of individuals had differing risk stratification for surveillance recommendations, with the use of pathology sizing resulting in marginally fewer recommended surveillance colonoscopies. Conclusions: Choice of pathology or endoscopy approaches to determine adenoma size will potentially influence surveillance colonoscopy follow-up in 4.8% to 9.1% of individuals. Pathology sizing appears more accurate than endoscopy sizing, and preferential use of pathology size would result in a small, but clinically important, decreased burden on surveillance colonoscopy demand. Careful endoscopy sizing is required for adenomas removed piecemeal.
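As an illustration of the 10-mm risk-stratification discordance described above, here is a minimal sketch using hypothetical paired sizes, not study data:

```python
import numpy as np

# Hypothetical paired sizes (mm): endoscopy vs pathology for five adenomas.
endo = np.array([9, 10, 12, 8, 11])
path = np.array([10, 8, 12, 7, 9])

# Risk category at the 10-mm guideline cut-off used above (>= 10 mm = higher risk).
discordant = (endo >= 10) != (path >= 10)
print(f"discordant categorisation: {100 * discordant.mean():.1f}% of adenomas")
```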
Abstract:
A rich-model-based motion vector steganalysis benefiting from both temporal and spatial correlations of motion vectors is proposed in this work. The proposed steganalysis method has substantially superior detection accuracy to previous methods, even targeted ones. The improvement in detection accuracy lies in several novel approaches introduced in this work. Firstly, it is shown that there is a strong correlation, not only spatially but also temporally, among neighbouring motion vectors over longer distances. Therefore, temporal motion vector dependency, alongside spatial dependency, is utilized for rigorous motion vector steganalysis. Secondly, unlike the filters previously used, which were heuristically designed against a specific motion vector steganography, a diverse set of filters which can capture aberrations introduced by various motion vector steganography methods is used. Both the variety and the number of filter kernels are substantially greater than in previous work. Moreover, filters up to fifth order are employed, whereas the previous methods use at most second-order filters. As a result, the proposed system captures various decorrelations in a wide spatio-temporal range and provides a better cover model. The proposed method is tested against the most prominent motion vector steganalysis and steganography methods. To the best of the authors' knowledge, the experiments section presents the most comprehensive tests in the motion vector steganalysis field, including five stego and seven steganalysis methods. Test results show that the proposed method yields around a 20% detection accuracy increase at low payloads and 5% at higher payloads.
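As a rough illustration of the filtering stage, here is a minimal sketch of spatial and temporal residuals over a motion-vector field; the kernels and quantization bounds are generic examples, not the paper's filter bank:

```python
import numpy as np
from scipy.signal import convolve2d

mv_x = np.random.randn(36, 44)          # horizontal MV components of one frame
mv_x_next = np.random.randn(36, 44)     # same block grid, next frame

kernels = [
    np.array([[1, -1]]),                # 1st-order horizontal difference
    np.array([[1, -2, 1]]),             # 2nd-order horizontal difference
    np.array([[1, -4, 6, -4, 1]]),      # 4th-order horizontal difference
]
spatial_residuals = [convolve2d(mv_x, k, mode="valid") for k in kernels]
temporal_residual = mv_x_next - mv_x    # temporal dependency alongside spatial

# Quantize and truncate residuals before forming co-occurrence features.
quantized = [np.clip(np.round(r), -2, 2) for r in spatial_residuals]
```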
Abstract:
Among the many discussions and studies related to video games, one of the most recurrent, widely debated and important relates to the experience of playing video games. The gameplay experience – as appropriated in this study – is the result of the interplay between two essential elements: a video game and a player. Existing studies have explored the resulting experience of video game playing from the perspective of the video game or the player, but none appear to balance both of these elements equally. The study presented here contributes to the ongoing debate with a gameplay experience model. The proposed model, which seeks to balance the video game and player elements equally, considers the gameplay experience to be both an interactive experience (related to the process of playing the video game) and an emotional experience (related to the outcome of playing the video game). The mutual influence of these two experiences during video game play ultimately defines the gameplay experience. Several dimensions contribute to this gameplay experience, related to both the video game and the player: the video game includes mechanics, interface and narrative dimensions; the player includes motivations, expectations and background dimensions. In addition, the gameplay experience is initially defined by a gameplay situation, conditioned by the environment in which gameplay takes place and the platform on which the video game is played. In order to initially validate the proposed model and attempt to show a relationship among the multiple model dimensions, a multi-case study was carried out using two different video games and player samples. In one study, results show significant correlations between multiple model dimensions, and evidence that video game related changes influence player motivations as well as player visual behavior. In player-specific analysis, results show that while players may differ in terms of background and expectations regarding the game, their motivation to play is not necessarily different, even if their performance in the game is weak. While further validation is necessary, this model not only contributes to the gameplay experience debate, but also demonstrates in a given context how player and video game dimensions evolve during video game play.
Abstract:
Abstract of paper delivered at the 17th International Reversal Theory Conference, Day 3, session 4, 01.07.15
Abstract:
This paper reports on the first known empirical use of the Reversal Theory State Measure (RTSM) since its publication by Desselles et al. (2014). The RTSM was employed to track responses to three purposely selected video commercials in a between-subjects design. Results of the study provide empirical support for the central conceptual premise of reversal theory, the experience of metamotivational reversals, and the ability of the RTSM to capture them. The RTSM was also found to be psychometrically sound after adjustments were made to two of its three component subscales. A detailed account and rationale are provided for the analytical process of assessing the psychometric robustness of the RTSM, with a number of techniques and interpretations relating to component structure and reliability discussed. The two available versions of the RTSM – the bundled and the branched – are also examined and critiqued. Researchers are encouraged to assist development of the RTSM through further use, taking into account the analysis and recommendations presented.
Abstract:
This paper presents a new rate-control algorithm for live video streaming over wireless IP networks, based on selective frame discarding. In the proposed mechanism, excess 'P' frames are dropped from the output queue at the sender using a congestion estimate based on packet-loss statistics obtained from RTCP feedback and from the Data Link (DL) layer. The performance of the algorithm is evaluated through computer simulation. This paper also presents a characterisation of packet losses owing to transmission errors and congestion, which can help in choosing appropriate strategies to maximise the video quality experienced by the end user. Copyright © 2007 Inderscience Enterprises Ltd.
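As a rough illustration of the mechanism, here is a minimal sketch of loss-driven P-frame discarding at the sender; the threshold and the loss model are assumptions, not the paper's tuned parameters:

```python
from collections import deque

DROP_THRESHOLD = 0.05          # assumed congestion-loss fraction that triggers drops

def congestion_loss(rtcp_loss: float, link_loss: float) -> float:
    """Estimate congestion loss as RTCP-reported loss minus wireless link loss."""
    return max(0.0, rtcp_loss - link_loss)

def enqueue(queue: deque, frame: dict, rtcp_loss: float, link_loss: float) -> None:
    """Drop excess P-frames when congestion is detected; always keep I-frames."""
    if frame["type"] == "P" and congestion_loss(rtcp_loss, link_loss) > DROP_THRESHOLD:
        return                 # discard at the sender instead of congesting the path
    queue.append(frame)

q = deque()
enqueue(q, {"type": "P", "seq": 7}, rtcp_loss=0.12, link_loss=0.03)  # dropped
enqueue(q, {"type": "I", "seq": 8}, rtcp_loss=0.12, link_loss=0.03)  # kept
```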
Abstract:
The Internet as a video distribution medium has seen tremendous growth in recent years. Currently, the transmission of major live events and TV channels over the Internet can easily reach millions of users trying to receive the same content using very distinct receiver terminals, posing both scalability and heterogeneity challenges to content and network providers. In private and well-managed Internet Protocol (IP) networks these types of distributions are supported by specially designed architectures, complemented with IP Multicast protocols and Quality of Service (QoS) solutions. However, the Best-Effort and Unicast nature of the Internet requires the introduction of a new set of protocols and related architectures to support the distribution of these contents. In the field of file and non-real-time content distribution, this has led to the creation and development of several Peer-to-Peer protocols that have experienced great success in recent years. This chapter presents the current research and developments in Peer-to-Peer video streaming over the Internet. Special focus is placed on peer protocols, associated architectures and video coding techniques. The authors also review and describe current Peer-to-Peer streaming solutions. © 2013, IGI Global.
Abstract:
The Joint Video Team, composed of the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG), has standardized a scalable extension of the H.264/AVC video coding standard called Scalable Video Coding (SVC). H.264/SVC provides scalable video streams which are composed of a base layer and one or more enhancement layers. Enhancement layers may improve the temporal, spatial or signal-to-noise ratio resolution of the content represented by the lower layers. One of the applications of this standard is video transmission in both wired and wireless communication systems, and it is therefore important to analyze how packet losses contribute to the degradation of quality, and which mechanisms could be used to improve that quality. This paper provides an analysis and evaluation of H.264/SVC in error-prone environments, quantifying the degradation caused by packet losses in the decoded video. It also proposes and analyzes the consequences of QoS-based discarding of packets through different marking solutions.
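As an illustration of layer-aware discarding, here is a minimal sketch that drops enhancement-layer packets before base-layer ones under a byte budget; the packet fields and budget model are assumptions, not the paper's marking scheme:

```python
def discard_for_budget(packets, budget_bytes):
    """Keep packets lowest-layer-first until the byte budget is spent.

    A real implementation would preserve transmission order; sorting here
    just makes the layer priority explicit.
    """
    kept, used = [], 0
    for p in sorted(packets, key=lambda p: p["layer"]):  # layer 0 = base
        if used + p["size"] <= budget_bytes:
            kept.append(p)
            used += p["size"]
    return kept

stream = [{"layer": 0, "size": 800}, {"layer": 1, "size": 600}, {"layer": 2, "size": 600}]
print(discard_for_budget(stream, budget_bytes=1500))  # base + first enhancement kept
```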
Abstract:
The number of software applications available on the Internet for distributing video streams in real time over P2P networks has grown quickly in the last two years. Typically, this kind of distribution is carried out by television channel broadcasters that try to make their content globally available, using viewers' resources to support a large-scale distribution of video without incurring incremental costs. However, the lack of adaptation in video quality, combined with the lack of a standard protocol for this kind of multimedia distribution, has driven content providers to largely ignore it as a solution for video delivery over the Internet. While the scalable extension of the H.264 encoding (H.264/SVC) can be used to support terminal and network heterogeneity, it is not clear how it can be integrated into a P2P overlay to form a large-scale, real-time distribution. In this paper, we start by defining a solution that combines the most popular P2P file-sharing protocol, BitTorrent, with the H.264/SVC encoding for real-time video content delivery. Using this solution we then evaluate the effect of several parameters on the quality received by peers.
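As a rough illustration of combining BitTorrent-style piece selection with SVC layering, here is a minimal sketch of a layer- and deadline-aware picker; the data structures and window size are assumptions, not the paper's design:

```python
def next_piece(missing, playhead, window=16):
    """Pick the next piece to request: urgent pieces first, base layer first."""
    urgent = [p for p in missing if playhead <= p["index"] < playhead + window]
    pool = urgent or missing           # fall back to rarest/any outside the window
    return min(pool, key=lambda p: (p["layer"], p["index"]))

missing = [
    {"index": 5, "layer": 1},          # enhancement piece near the playhead
    {"index": 6, "layer": 0},          # base-layer piece near the playhead
    {"index": 40, "layer": 0},         # base-layer piece far in the future
]
print(next_piece(missing, playhead=4)) # -> base-layer piece 6 inside the window
```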