964 results for Background Subtraction
Abstract:
Object detection in video is a highly demanding area of research, and background subtraction algorithms can yield strong results for foreground object detection. This work presents a hybrid codebook-based background subtraction method to extract the foreground ROI from the background. Codebooks store compressed background information, requiring less memory and enabling fast processing. The hybrid method combines block-based and pixel-based codebooks: the high processing speed of block-based background subtraction and the high precision of pixel-based background subtraction are exploited to yield an efficient background subtraction system. The block stage produces a coarse foreground area, which is then refined by the pixel stage. The system's performance is evaluated with different block sizes and with different block descriptors such as the 2D-DCT and FFT. Experimental analysis based on statistical measurements yields precision, recall, similarity, and F-measure of 88.74%, 91.09%, 81.66%, and 89.90% respectively for the hybrid system, demonstrating its efficiency.
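As an illustration of the codebook idea this abstract builds on, below is a minimal per-pixel grayscale codebook sketch in Python; the paper's hybrid block/pixel design and its 2D-DCT/FFT block descriptors are not reproduced here, and all names are ours.

```python
import numpy as np

class PixelCodebook:
    """Per-pixel codebook background model (grayscale sketch).

    Each pixel keeps a list of [lo, hi] brightness codewords; a pixel whose
    value matches no codeword (within tol) is labeled foreground.
    """
    def __init__(self, shape, tol=10):
        self.tol = tol
        self.cb = [[[] for _ in range(shape[1])] for _ in range(shape[0])]

    def train(self, frame):
        # Grow codewords from background-only training frames.
        for y in range(frame.shape[0]):
            for x in range(frame.shape[1]):
                v = int(frame[y, x])
                for cw in self.cb[y][x]:
                    if cw[0] - self.tol <= v <= cw[1] + self.tol:
                        cw[0], cw[1] = min(cw[0], v), max(cw[1], v)
                        break
                else:
                    self.cb[y][x].append([v, v])

    def foreground(self, frame):
        # Label pixels that match no stored codeword as foreground (255).
        mask = np.zeros(frame.shape[:2], np.uint8)
        for y in range(frame.shape[0]):
            for x in range(frame.shape[1]):
                v = int(frame[y, x])
                if not any(cw[0] - self.tol <= v <= cw[1] + self.tol
                           for cw in self.cb[y][x]):
                    mask[y, x] = 255
        return mask
```

A block-based stage would apply the same matching to per-block descriptors (e.g. DCT coefficients) and pass only the coarse foreground blocks on to a pixel stage like this one.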
Abstract:
In recent years, the computer vision community has shown great interest in depth-based applications, thanks to the performance and flexibility of the new generation of RGB-D imagery. In this paper, we present an efficient background subtraction algorithm based on the fusion of multiple region-based classifiers that processes depth and color data provided by RGB-D cameras. Foreground objects are detected by combining a region-based foreground prediction (based on depth data) with different background models (based on a Mixture of Gaussians algorithm) providing color and depth descriptions of the scene at pixel and region level. The information given by these modules is fused in a mixture-of-experts fashion to improve foreground detection accuracy. The main contributions of the paper are the region-based models of both background and foreground, built from the depth and color data. The results obtained on different database sequences demonstrate that the proposed approach achieves higher detection accuracy than existing state-of-the-art techniques.
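A minimal sketch of this kind of color/depth Mixture-of-Gaussians combination, using OpenCV's MOG2 subtractor: the paper's region-based prediction and mixture-of-experts fusion are replaced by a simple per-pixel mask union, and the depth stream is assumed to be already scaled to 8 bits.

```python
import cv2
import numpy as np

# One MoG background model per modality.
bg_color = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
bg_depth = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def foreground_mask(color_frame, depth_frame_8u):
    """Union of color and depth foreground masks (stand-in for the
    paper's mixture-of-experts fusion)."""
    m_color = bg_color.apply(color_frame)
    m_depth = bg_depth.apply(depth_frame_8u)
    # Drop the shadow label (127), keep only confident foreground (255).
    m_color = (m_color == 255).astype(np.uint8) * 255
    # Union keeps detections that either modality supports.
    return cv2.bitwise_or(m_color, m_depth)
```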
Abstract:
A method for estimating the dimensions of non-delimited free parking areas using a static surveillance camera is proposed. The method is specifically designed to tackle the main challenges of urban scenarios (multiple moving objects, outdoor illumination conditions, and occlusions between vehicles) with no training. The core of this work is the temporal analysis of video frames to detect occupancy variation in the parking areas. Two techniques are combined: background subtraction using a mixture of Gaussians to detect and track vehicles, and the creation of a transience map to detect the parking and leaving of vehicles. The authors demonstrate that the proposed method yields satisfactory estimates in three real scenarios while remaining a low-computational-cost solution that can be applied to any kind of parking area covered by a single camera.
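The transience-map idea can be sketched as a per-pixel counter of consecutive foreground frames on top of an OpenCV mixture-of-Gaussians subtractor; the threshold and names below are illustrative, not the authors' values.

```python
import cv2
import numpy as np

class TransienceMap:
    """Count consecutive foreground frames per pixel; pixels that stay
    foreground long enough suggest a parking (or leaving) event."""
    def __init__(self, park_thresh=150):
        self.mog = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
        self.count = None
        self.park_thresh = park_thresh

    def update(self, frame):
        fg = self.mog.apply(frame) == 255   # 255 = foreground, 127 = shadow
        if self.count is None:
            self.count = np.zeros(fg.shape, np.int32)
        # Increment where still foreground, reset where background again.
        self.count = np.where(fg, self.count + 1, 0)
        return (self.count >= self.park_thresh).astype(np.uint8) * 255
```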
Abstract:
In the field of detection and monitoring of dynamic objects in quasi-static scenes, background subtraction techniques in which the background is modeled at pixel level are extensively used, despite their significant limitations. In this work we propose a novel approach to background modeling that operates at region level in a wavelet-based multi-resolution framework. Based on a segmentation of the background, each region is characterized independently as a mixture of K Gaussian modes, modeling the approximation and detail coefficients at the different wavelet decomposition levels. The background region characterization is updated over time, and elements of interest are detected by computing the distance between the background region models and those of each incoming image in the sequence. Including context in the modeling scheme through per-region characterization makes the model robust, able to handle not only gradual illumination and long-term changes, but also sudden illumination changes and the presence of strong shadows in the scene.
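A crude sketch of comparing wavelet-band statistics between a background model and an incoming frame, using PyWavelets: the paper's per-region K-Gaussian mixtures are replaced here by single whole-band Gaussians, so this only illustrates the multi-resolution comparison step.

```python
import numpy as np
import pywt

def band_stats(img, wavelet='haar', level=2):
    """Mean/std of the approximation and detail coefficients of a 2-D DWT."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    bands = [coeffs[0]] + [b for lvl in coeffs[1:] for b in lvl]
    return [(b.mean(), b.std()) for b in bands]

def region_changed(bg_stats, frame_stats, k=3.0):
    """Flag a change when any band's mean drifts beyond k background stds."""
    return any(abs(fm - bm) > k * max(bs, 1e-6)
               for (bm, bs), (fm, _fs) in zip(bg_stats, frame_stats))
```

In the paper, this comparison is made per background region against mixture models that are updated over time, rather than over whole bands as above.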
Abstract:
Low-cost RGB-D cameras such as the Microsoft Kinect or the Asus Xtion Pro are completely changing the computer vision world, as they are being successfully used in several applications and research areas. Depth data are particularly attractive and suitable for applications based on moving object detection through foreground/background segmentation approaches; the RGB-D applications proposed in the literature generally employ state-of-the-art foreground/background segmentation techniques based on depth information alone, without taking the color information into account. The novel approach we propose is based on a combination of classifiers that improves background subtraction accuracy with respect to state-of-the-art algorithms by jointly considering color and depth data. In particular, the combination of classifiers is based on a weighted average that adaptively modifies the support of each classifier in the ensemble by considering foreground detections in the previous frames and the depth and color edges. In this way, it is possible to reduce false detections due to critical issues that cannot be tackled by the individual classifiers, such as shadows and illumination changes, color and depth camouflage, moved background objects, and noisy depth measurements. Moreover, we propose, to the best of the authors' knowledge, the first publicly available RGB-D benchmark dataset with hand-labeled ground truth of several challenging scenarios to test background/foreground segmentation algorithms.
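A minimal sketch of the weighted-average fusion step, assuming each classifier already outputs a per-pixel foreground probability map; the paper's edge- and history-driven weight adaptation is only stubbed with a hypothetical rule.

```python
import numpy as np

def fuse(prob_color, prob_depth, w_color, w_depth, thresh=0.5):
    """Weighted-average fusion of two per-pixel foreground probability maps.

    w_color/w_depth are per-pixel weights in [0, 1] that sum to 1; in the
    paper they adapt using previous detections and color/depth edges,
    which is not reproduced here.
    """
    p = w_color * prob_color + w_depth * prob_depth
    return p > thresh

def update_weights(w_color, agree_color, agree_depth, lr=0.05):
    """Hypothetical adaptation rule: nudge the color weight toward
    whichever classifier agreed more with the final mask recently."""
    w = np.clip(w_color + lr * (agree_color - agree_depth), 0.1, 0.9)
    return w, 1.0 - w
```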
Abstract:
Local parity-odd domains are theorized to form inside the quark-gluon plasma produced in high-energy heavy-ion collisions. These domains manifest themselves as charge separation along the magnetic field axis via the chiral magnetic effect. The experimental observation of charge separation has previously been reported for heavy-ion collisions at the top RHIC energies. In this Letter, we present results on the beam-energy dependence of the charge correlations in Au+Au collisions at midrapidity for center-of-mass energies of 7.7, 11.5, 19.6, 27, 39, and 62.4 GeV from the STAR experiment. After background subtraction, the signal gradually decreases with decreasing beam energy and tends to vanish by 7.7 GeV. This implies the dominance of hadronic interactions over partonic ones at lower collision energies.
Abstract:
An investigation was carried out to study the potential use of the angular distribution of photons scattered by human breast samples for rapid identification of neoplasias in breast tissue. This technique has possible applications as a diagnostic aid for breast cancer. In this work, a commercial powder diffractometer was used to obtain the scattering profiles of breast tissues histopathologically classified as normal breast tissues, fibroadenomas (benign breast diseases), and carcinomas (malignant breast diseases), in the interval 0.02 Å⁻¹ < x < 0.62 Å⁻¹. The experimental methods and data corrections are discussed in detail; they included background subtraction, polarization, self-attenuation, and geometric effects. The experimental procedure was validated through the analysis of a water sample. The results showed that the scattering profile is a unique signature of each type of tissue, correlated with its microscopic morphological features. Multivariate analysis was applied to these profiles in order to verify whether the information they carry allows differentiation between normal, benign, and malignant breast tissues. The statistical analysis showed that 75% of the analyzed samples were correctly identified. The sensitivity and specificity of the method in differentiating between normal and neoplastic samples were 95.6% and 82.3%, respectively, while the values for differentiation between benign and malignant neoplasias were 78.6% and 62.5%. These initial results indicate the feasibility of using a commercial powder diffractometer to provide a rapid diagnosis with high sensitivity.
Abstract:
Radiation dose calculations in nuclear medicine depend on quantification of activity via planar and/or tomographic imaging methods. However, both methods have inherent limitations, and the accuracy of activity estimates varies with object size, background levels, and other variables. The goal of this study was to evaluate the limitations of quantitative imaging with planar and single photon emission computed tomography (SPECT) approaches, with a focus on activity quantification for use in calculating absorbed dose estimates for normal organs and tumors. To do this we studied a series of phantoms of varying geometric complexity, with three radionuclides whose decay schemes varied from simple to complex. Four aqueous concentrations of 99mTc, 131I, and 111In (74, 185, 370, and 740 kBq/mL) were placed in spheres of four different sizes in a water-filled phantom, with three different levels of activity in the surrounding water. Planar and SPECT images of the phantoms were obtained on a modern SPECT/computed tomography (CT) system. These radionuclide and concentration/background studies were repeated using a cardiac phantom and a modified torso phantom with liver and "tumor" regions containing the radionuclide concentrations and the same varying background levels. Planar quantification was performed using the geometric mean approach, with attenuation correction (AC), and with and without scatter correction (SC and NSC). SPECT images were reconstructed using attenuation maps (AM) for AC; scatter windows were used to perform SC during image reconstruction. For spherical sources with corrected data, good accuracy was observed (generally within +/-10% of known values) for the largest sphere (11.5 mL) with both planar and SPECT methods for 99mTc and 131I; accuracy was poorest, deviating from known values, for smaller objects, most notably with 111In. SPECT quantification was affected by the partial volume effect in smaller objects and generally showed larger errors than the planar results in these cases for all radionuclides. For the cardiac phantom, results were the most accurate of all the experiments for all radionuclides. Background subtraction was an important factor influencing these results. The contribution of scattered photons was important in quantification with 131I; if scatter was not accounted for, activity tended to be overestimated using planar quantification methods. For the torso phantom experiments, results show a clear underestimation of activity compared to the earlier experiments with spherical sources for all radionuclides. Despite some variations observed as the level of background increased, the SPECT results were more consistent across different activity concentrations. Planar or SPECT quantification on state-of-the-art gamma cameras with appropriate quantitative processing can provide accuracies of better than 10% for large objects and modest target-to-background concentrations; however, with smaller objects, higher background, and nuclides with more complex decay schemes, SPECT quantification methods generally produce better results. Health Phys. 99(5):688-701; 2010.
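For reference, the geometric-mean (conjugate-view) planar estimate used in such studies can be sketched as below, following the standard MIRD-16-style formula; parameter names and defaults are ours, and the study's specific scatter-correction details are omitted.

```python
import math

def conjugate_view_activity(I_ant, I_post, mu_e, t, C, mu_s=0.0, d=0.0):
    """Geometric-mean (conjugate-view) activity estimate from anterior
    and posterior count rates (after background subtraction).

    I_ant, I_post : background-subtracted count rates (counts/s)
    mu_e : effective linear attenuation coefficient (1/cm)
    t    : patient thickness along the view axis (cm)
    C    : system calibration factor (counts/s per MBq)
    d    : source thickness (cm); f corrects for source self-attenuation
    """
    f = 1.0
    if mu_s > 0 and d > 0:
        f = (mu_s * d / 2.0) / math.sinh(mu_s * d / 2.0)
    return math.sqrt(I_ant * I_post / math.exp(-mu_e * t)) * f / C
```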
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
In this paper, we present an integrated system for real-time automatic detection of human actions from video. The proposed approach uses the boundary of humans as the main feature for recognizing actions. Background subtraction is performed using a Gaussian mixture model. Features are then extracted from silhouettes, and vector quantization is used to map features to symbols (a bag-of-words approach). Finally, actions are detected using a Hidden Markov Model. The proposed system was validated using a newly collected real-world dataset. The results show that the system achieves robust human detection in both indoor and outdoor environments. Moreover, promising classification results were achieved when detecting two basic human actions: walking and sitting.
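The detection front end of such a pipeline can be sketched with OpenCV: MOG2 background subtraction, largest-silhouette extraction, and a simple boundary descriptor. The descriptor below is a generic stand-in for the paper's boundary feature, and the vector-quantization and HMM stages are not shown.

```python
import cv2
import numpy as np

mog = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def silhouette_features(frame):
    """Boundary descriptor of the largest silhouette: centroid-normalized
    distances of contour points (scale-invariant profile)."""
    mask = mog.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    centroid = c.mean(axis=0)
    dists = np.linalg.norm(c - centroid, axis=1)
    return dists / (dists.max() + 1e-9)
```

The resulting per-frame feature vectors would then be quantized into symbols (e.g. via a k-means codebook) and the symbol sequences scored against one HMM per action class.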
Abstract:
The tt̄ production cross-section dependence on jet multiplicity and jet transverse momentum is reported for proton-proton collisions at a centre-of-mass energy of 7 TeV in the single-lepton channel. The data were collected with the ATLAS detector at the CERN Large Hadron Collider and comprise the full 2011 data sample, corresponding to an integrated luminosity of 4.6 fb⁻¹. Differential cross-sections are presented as a function of jet multiplicity for up to eight jets using jet transverse momentum thresholds of 25, 40, 60, and 80 GeV, and as a function of jet transverse momentum up to the fifth jet. The results are shown after background subtraction and corrections for all detector effects, within a kinematic range closely matched to the experimental acceptance. Several QCD-based Monte Carlo models are compared with the results. Sensitivity to the parton shower modelling is found at the higher jet multiplicities, at high transverse momentum of the leading jet, and in the transverse momentum spectrum of the fifth leading jet. The MC@NLO+HERWIG MC is found to predict too few events at higher jet multiplicities.
Abstract:
This project uses the Axis Communications 242S IV video server, based on the Texas Instruments TMS320DM642 DSP, as a platform for implementing a background subtraction algorithm and for developing a complete people-counting solution for an overhead camera. In the first case, the algorithm was optimized and its performance compared with that of a PC version in order to evaluate the DSP as a processor for the migration of a complete video surveillance application. In the second case, all the server's components were integrated into the development of the counter in order to evaluate the platform as a basis for complete solutions.
Abstract:
The aim of the present study was to retrospectively estimate the absorbed dose to the kidneys in 17 patients treated in clinical practice with 90Y-ibritumomab tiuxetan for non-Hodgkin's lymphoma, using the appropriate dosimetric approaches available. METHODS: The single-view effective point source method, including background subtraction, is used for planar quantification of renal activity. Since the high uptake in the liver affects the activity estimate for the right kidney, the dose to the left kidney serves as a surrogate for the dose to both kidneys. Calculation of absorbed dose is based on the Medical Internal Radiation Dose methodology, with adjustment for patient kidney mass. RESULTS: The median dose to the kidneys, based on the left kidney only, is 2.1 mGy/MBq (range, 0.92-4.4), whereas a value of 2.5 mGy/MBq (range, 1.5-4.7) is obtained when considering the activity in both kidneys. CONCLUSIONS: Irrespective of the method, the kidney doses obtained in the present study were about 10 times higher than the median dose of 0.22 mGy/MBq (range, 0.00-0.95) originally reported from the study leading to Food and Drug Administration approval. Our results are in good agreement with kidney-dose estimates recently reported from high-dose myeloablative therapy with 90Y-ibritumomab tiuxetan.
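The kidney-mass adjustment mentioned here rests on the fact that self-dose S values scale roughly inversely with target mass. A toy calculation, with illustrative numbers rather than the study's data:

```python
def mass_adjusted_dose(dose_factor_phantom, m_phantom_g, m_patient_g):
    """Scale a phantom kidney self-dose factor (mGy/MBq) by patient kidney
    mass: D_patient ~ D_phantom * m_phantom / m_patient (self-dose only)."""
    return dose_factor_phantom * m_phantom_g / m_patient_g

# Illustrative only: a phantom factor of 2.0 mGy/MBq for a ~299 g reference
# kidney pair, rescaled for a patient with an estimated 250 g kidney mass.
print(mass_adjusted_dose(2.0, 299.0, 250.0))  # ~2.39 mGy/MBq
```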
Abstract:
The problem of automatic recognition of fish from video sequences is discussed in this Master's thesis. This is a pressing issue for many organizations engaged in fish farming in Finland and Russia, because automating the monitoring and counting of individual fish is a turning point for the industry. The difficulties and specific features of the problem were identified in order to find a solution and propose recommendations for the components of an automated fish recognition system. Methods such as background subtraction, Kalman filtering, and the Viola-Jones method were implemented in this work for detection, tracking, and estimation of fish parameters. Both the results of the experiments and the choice of appropriate methods strongly depend on the quality and type of video used as input data. Practical experiments demonstrated that not all methods produce good results on real data, whereas on synthetic data they operate satisfactorily.
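As a sketch of the tracking stage, a constant-velocity Kalman filter over a fish centroid can be set up with OpenCV as below; this is a generic configuration, not the thesis implementation, and the noise covariances are placeholders.

```python
import cv2
import numpy as np

def make_tracker(dt=1.0):
    """Constant-velocity Kalman filter for a 2-D centroid:
    state [x, y, vx, vy], measurement [x, y]."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

# Per frame: prediction = kf.predict(); then, if background subtraction
# yields a detection centroid (cx, cy):
#     kf.correct(np.array([[cx], [cy]], np.float32))
```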
Abstract:
La vidéosurveillance a pour objectif principal de protéger les personnes et les biens en détectant tout comportement anormal. Ceci ne serait possible sans la détection de mouvement dans l’image. Ce processus complexe se base le plus souvent sur une opération de soustraction de l’arrière-plan statique d’une scène sur l’image. Mais il se trouve qu’en vidéosurveillance, des caméras sont souvent en mouvement, engendrant ainsi, un changement significatif de l’arrière-plan; la soustraction de l’arrière-plan devient alors problématique. Nous proposons dans ce travail, une méthode de détection de mouvement et particulièrement de chutes qui s’affranchit de la soustraction de l’arrière-plan et exploite la rotation de la caméra dans la détection du mouvement en utilisant le calcul homographique. Nos résultats sur des données synthétiques et réelles démontrent la faisabilité de cette approche.
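A minimal sketch of homography-based camera-motion compensation in OpenCV: match features between consecutive frames, estimate the homography, warp, and difference. This is a generic pipeline assuming grayscale input, not the paper's exact fall-detection method.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def motion_mask(prev, curr, diff_thresh=30):
    """Compensate camera rotation with a frame-to-frame homography, then
    difference the aligned frames; the residual is the moving object."""
    k1, d1 = orb.detectAndCompute(prev, None)
    k2, d2 = orb.detectAndCompute(curr, None)
    if d1 is None or d2 is None:
        return np.zeros_like(curr)
    matches = matcher.match(d1, d2)
    if len(matches) < 4:
        return np.zeros_like(curr)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(prev, H, (curr.shape[1], curr.shape[0]))
    diff = cv2.absdiff(warped, curr)
    return (diff > diff_thresh).astype(np.uint8) * 255
```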