996 results for Multiple sparse cameras


Relevance:

30.00%

Publisher:

Abstract:

A human-computer interface (HCI) system designed for use by people with severe disabilities is presented. People who are severely paralyzed or afflicted with diseases such as ALS (Lou Gehrig's disease) or multiple sclerosis are unable to move or control any parts of their bodies except for their eyes. The system presented here detects the user's eye blinks and analyzes the pattern and duration of the blinks, using them to provide input to the computer in the form of a mouse click. After the system initializes itself automatically by processing the user's involuntary eye blinks in the first few seconds of use, the eye is tracked in real time using correlation with an online template. If the user's depth changes significantly or rapid head movement occurs, the system is automatically reinitialized. No special lighting and no offline templates are needed for the system to function properly. The system works with inexpensive USB cameras and runs at a frame rate of 30 frames per second. Extensive experiments were conducted to determine both the system's accuracy in classifying voluntary and involuntary blinks and its fitness under varying environmental conditions, such as alternative camera placements and different lighting conditions. These experiments on eight test subjects yielded an overall detection accuracy of 95.3%.
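A minimal Python sketch of the online-template correlation idea described above, assuming OpenCV; the thresholds, the capture source, and the template stand-in are illustrative assumptions, not the paper's actual initialization or parameters:

```python
import cv2

# Sketch of blink classification by correlation with an online template.
# BLINK_CORR_THRESHOLD and VOLUNTARY_MIN_FRAMES are assumed values; at
# 30 fps, 10 frames (~1/3 s) stands in for a "long" voluntary blink.
BLINK_CORR_THRESHOLD = 0.55
VOLUNTARY_MIN_FRAMES = 10

cap = cv2.VideoCapture(0)          # inexpensive USB camera
template = None
closed_frames = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if template is None:
        # stand-in for the paper's automatic initialization from
        # involuntary blinks: crop an assumed open-eye region once
        template = gray[100:140, 200:260].copy()
        continue
    # track the eye with normalized cross-correlation
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    if max_val < BLINK_CORR_THRESHOLD:
        closed_frames += 1         # low correlation: eye likely closed
    else:
        if closed_frames >= VOLUNTARY_MIN_FRAMES:
            print("voluntary blink -> mouse click")  # would trigger a click
        closed_frames = 0
```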

Relevance:

30.00%

Publisher:

Abstract:

Utilising cameras as a means to survey the surrounding environment is becoming increasingly popular in a number of different research areas and applications. Central to using camera sensors as input to a vision system is the need to manipulate and process the information captured in these images. One such application is the use of cameras to monitor the quality of airport landing lighting at aerodromes, where a camera is placed inside an aircraft and used to record images of the lighting pattern during the landing phase of a flight. The images are processed to determine a performance metric. This requires the development of custom software for the localisation and identification of luminaires within the image data. However, because of the necessity to keep airport operations functioning as efficiently as possible, it is difficult to collect enough image data to develop, test and validate the software. In this paper, we present a technique to model a virtual landing lighting pattern. A mathematical model is postulated which represents the glide path of the aircraft, including random deviations from the expected path. A morphological method has been developed to localise and track the luminaires under different operating conditions. © 2011 IEEE.
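A rough Python sketch of a morphological localisation step of the kind described, using OpenCV; the brightness threshold, kernel size and input filename are assumptions, not the paper's method in detail:

```python
import cv2

# Localise candidate luminaires as bright, speckle-free blobs.
def localise_luminaires(image_gray, bright_thresh=200):
    # isolate bright spots, then clean speckle with morphological opening
    _, binary = cv2.threshold(image_gray, bright_thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # each remaining connected component is a candidate luminaire
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
    return centroids[1:]  # skip background component 0

frame = cv2.imread("landing_lights.png", cv2.IMREAD_GRAYSCALE)  # assumed input
print(localise_luminaires(frame))
```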

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we introduce an application of matrix factorization to produce corpus-derived, distributional models of semantics that demonstrate cognitive plausibility. We find that word representations learned by Non-Negative Sparse Embedding (NNSE), a variant of matrix factorization, are sparse, effective, and highly interpretable. To the best of our knowledge, this is the first approach which yields semantic representations of words satisfying all three of these desirable properties. Through extensive experimental evaluations on multiple real-world tasks and datasets, we demonstrate the superiority of semantic models learned by NNSE over other state-of-the-art baselines.
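NNSE solves a particular sparse-coding objective; as a rough stand-in, scikit-learn's NMF with an ℓ1 penalty conveys the sparse non-negative factorization idea. The toy co-occurrence matrix, rank and penalty values below are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.decomposition import NMF

# Sparse non-negative factorization of a word-by-context co-occurrence
# matrix. NNSE's exact objective differs (it constrains the dictionary
# and solves a sparse-coding problem), but l1-regularised NMF is close
# in spirit: rows of W become sparse, non-negative word embeddings.
rng = np.random.default_rng(0)
X = rng.poisson(0.3, size=(1000, 500)).astype(float)  # toy counts

model = NMF(n_components=50, l1_ratio=1.0, alpha_W=0.1, alpha_H=0.0,
            init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)        # sparse word embeddings
H = model.components_             # latent dimensions over contexts

# interpretability check: the top contexts of a latent dimension
print("mean nonzeros per word:", (W > 1e-8).sum(axis=1).mean())
print("top contexts of dim 0:", np.argsort(H[0])[::-1][:10])
```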

Relevance:

30.00%

Publisher:

Abstract:

The development of a compact gamma camera with high spatial resolution is of great interest in Nuclear Medicine as a means to increase the sensitivity of scintigraphy exams and thus allow the early detection of small tumours. Following the introduction of the wavelength-shifting fibre (WSF) gamma camera by Soares et al. and the evolution of photodiodes into highly sensitive silicon photomultipliers (SiPMs), this thesis explores the development of a WSF gamma camera using SiPMs to obtain the position information of scintillation events in a continuous CsI(Na) crystal. The design is highly flexible, allowing the coverage of different areas and the development of compact cameras, with very small dead areas at the edges. After initial studies which confirmed the feasibility of applying SiPMs, a prototype with 5 × 5 cm² was assembled and tested at room temperature, in an active field-of-view of 10 × 10 mm². Calibration and characterisation of intrinsic properties of this prototype were done using 57Co, while extrinsic measurements were performed using a high-resolution parallel-hole collimator and 99mTc. In addition, a small mouse injected with a radiopharmaceutical was imaged with the developed prototype. Results confirm the great potential of SiPMs when applied in a WSF gamma camera, achieving spatial resolution performance superior to the traditional Anger camera. Furthermore, performance can be improved by an optimisation of experimental conditions, in order to minimise and control the undesirable effects of thermal noise and non-uniformity of response of multiple SiPMs. The development and partial characterisation of a larger SiPM WSF gamma camera with 10 × 10 cm² for clinical application are also presented.
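The readout lends itself to a simple centroid (Anger-logic-style) sketch: the event position is estimated as the signal-weighted centroid over the orthogonal fibre arrays. The fibre pitch and the toy Gaussian light profile below are illustrative assumptions, not the thesis's calibration:

```python
import numpy as np

FIBRE_PITCH_MM = 2.0  # assumed spacing between adjacent fibres

def event_position(x_signals, y_signals, pitch=FIBRE_PITCH_MM):
    """Estimate (x, y) of a scintillation event as the
    signal-weighted centroid of the X and Y fibre readouts."""
    idx_x = np.arange(len(x_signals))
    idx_y = np.arange(len(y_signals))
    x = pitch * np.sum(idx_x * x_signals) / np.sum(x_signals)
    y = pitch * np.sum(idx_y * y_signals) / np.sum(y_signals)
    return x, y

# toy event: Gaussian light spread centred between fibres 11 and 12
fibres = np.arange(24)
x_sig = np.exp(-0.5 * ((fibres - 11.5) / 1.5) ** 2)
y_sig = np.exp(-0.5 * ((fibres - 6.0) / 1.5) ** 2)
print(event_position(x_sig, y_sig))  # ~ (23.0 mm, 12.0 mm)
```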

Relevance:

30.00%

Publisher:

Abstract:

This master's thesis addresses the reconstruction of a 3D model from multiple images. The 3D model is built with a hierarchical voxel representation in the form of an octree. A cube enclosing the 3D model is computed from the camera positions. This cube contains the voxels and defines the positions of virtual cameras. The 3D model is initialized by a convex hull based on the uniform background colour of the images. This hull allows the periphery of the 3D model to be carved away. A weighted cost is then computed to evaluate how well each voxel qualifies as part of the object's surface. This cost takes into account the similarity of the pixels coming from each image associated with the virtual camera. Finally, for each virtual camera, a surface is computed from the cost using the SGM method. SGM takes the neighbourhood into account when computing depth, and this thesis presents a variation of the method that accounts for voxels previously excluded from the model by the initialization step or by carving with another surface. The computed surfaces are then used to carve and finalize the 3D model. This thesis presents an innovative combination of steps for creating a 3D model from an existing set of images, or from a sequence of images captured in series, potentially leading to real-time 3D model creation.
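A minimal Python sketch of the initialization/carving idea, heavily simplified: voxels are kept only when every view sees their projection inside the foreground silhouette (the complement of the uniform background colour). The pinhole cameras and masks are toy placeholders, and the octree hierarchy and SGM refinement of the thesis are omitted:

```python
import numpy as np

def project(P, point):
    """Project a 3D point with a 3x4 camera matrix; return pixel (u, v)."""
    h = P @ np.append(point, 1.0)
    return int(h[0] / h[2]), int(h[1] / h[2])

def carve(voxel_centres, masks, cameras):
    """Keep a voxel only if every view sees its projection as foreground."""
    kept = []
    for v in voxel_centres:
        votes = 0
        for P, mask in zip(cameras, masks):
            u, w = project(P, v)
            if 0 <= w < mask.shape[0] and 0 <= u < mask.shape[1] and mask[w, u]:
                votes += 1
        if votes == len(cameras):      # inside the silhouette everywhere
            kept.append(v)
    return np.array(kept)

# toy check: one voxel seen by two identical cameras
K = np.diag([10.0, 10.0, 1.0])                        # assumed intrinsics
P = K @ np.hstack([np.eye(3), [[0.0], [0.0], [4.0]]])
mask = np.zeros((10, 10), bool)
mask[2:8, 2:8] = True                                  # synthetic silhouette
print(carve(np.array([[2.0, 2.0, 1.0]]), [mask, mask], [P, P]))
```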

Relevance:

30.00%

Publisher:

Abstract:

The biggest challenge in conservation biology is bridging the gap between research and practical management. A major obstacle is the fact that many researchers are unwilling to tackle projects likely to produce sparse or messy data because the results would be difficult to publish in refereed journals. The obvious solution to sparse data is to build up results from multiple studies. Consequently, we suggest that there needs to be greater emphasis in conservation biology on publishing papers that can be built on by subsequent research rather than on papers that produce clear results individually. This building approach requires: (1) a stronger theoretical framework, in which researchers attempt to anticipate models that will be relevant in future studies and incorporate expected differences among studies into those models; (2) use of modern methods for model selection and multi-model inference, and publication of parameter estimates under a range of plausible models; (3) explicit incorporation of prior information into each case study; and (4) planning management treatments in an adaptive framework that considers treatments applied in other studies. We encourage journals to publish papers that promote this building approach rather than expecting papers to conform to traditional standards of rigor as stand-alone papers, and believe that this shift in publishing philosophy would better encourage researchers to tackle the most urgent conservation problems.
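As a small illustration of point (2), a Python sketch of multi-model inference via Akaike weights with a model-averaged parameter estimate; the AIC values and effect estimates below are invented toy numbers, not data from any study:

```python
import numpy as np

def aic_weights(aics):
    """Akaike weights: relative support for each candidate model."""
    d = np.asarray(aics) - np.min(aics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# suppose three candidate models yielded these AICs and estimates
# of the same treatment-effect parameter (toy values)
aics = [212.4, 214.1, 219.8]
estimates = [0.31, 0.27, 0.45]

w = aic_weights(aics)
print("weights:", np.round(w, 3))
print("model-averaged estimate:", np.round(np.dot(w, estimates), 3))
```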

Relevance:

30.00%

Publisher:

Abstract:

Optical data are compared with EISCAT radar observations of multiple Naturally Enhanced Ion-Acoustic Line (NEIAL) events in the dayside cusp. This study uses narrow field-of-view cameras to observe small-scale, short-lived auroral features. Using multiple-wavelength optical observations, a direct link between NEIAL occurrences and low-energy (about 100 eV) optical emissions is shown. This is consistent with the Langmuir wave decay interpretation of NEIALs being driven by streams of low-energy electrons. Modelling work connected with this study shows that, for the measured ionospheric conditions and precipitation characteristics, growth of unstable Langmuir (electron plasma) waves can occur, which then decay into ion-acoustic wave modes. The link with low-energy optical emissions shown here will enable future studies of the shape, extent, lifetime, grouping and motions of NEIALs.

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a method to locate and track people by combining evidence from multiple cameras using the homography constraint. The proposed method uses foreground pixels from simple background subtraction to compute evidence of the location of people on a reference ground plane. The algorithm computes the amount of support that corresponds, in essence, to the "foreground mass" above each pixel. Therefore, pixels that correspond to ground points have more support. The support is normalized to compensate for perspective effects and accumulated on the reference plane for all camera views. The detection of people on the reference plane then becomes a search for regions of local maxima in the accumulator. Many false positives are filtered out by checking the visibility consistency of the detected candidates against all camera views. The remaining candidates are tracked using Kalman filters and appearance models. Experimental results using challenging data from PETS'06 show good performance of the method in the presence of severe occlusion. Ground truth data also confirms the robustness of the method. (C) 2010 Elsevier B.V. All rights reserved.
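A minimal Python sketch of the accumulation step, assuming OpenCV: each camera's foreground mask is warped onto the reference ground plane with its homography, summed, and thresholded local maxima are taken as candidates. The perspective normalization and visibility-consistency filtering described above are omitted, and the plane size, kernel and threshold are assumptions:

```python
import cv2
import numpy as np

def ground_plane_accumulator(fg_masks, homographies, plane_size=(500, 500)):
    acc = np.zeros(plane_size, np.float32)
    for mask, H in zip(fg_masks, homographies):
        # map this view's foreground evidence onto the reference plane
        warped = cv2.warpPerspective(mask.astype(np.float32), H, plane_size)
        acc += warped
    return acc

def detect_people(acc, min_support):
    # local maxima with enough accumulated support across views
    dilated = cv2.dilate(acc, np.ones((15, 15), np.uint8))
    peaks = (acc == dilated) & (acc >= min_support)
    return np.argwhere(peaks)  # (row, col) candidates on the plane
```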

Relevance:

30.00%

Publisher:

Abstract:

Automated tracking of objects through a sequence of images has remained one of the difficult problems in computer vision. Numerous algorithms and techniques have been proposed for this task. Some algorithms perform well in restricted environments, such as tracking with stationary cameras, but a general solution is not currently available. A frequent problem is that an algorithm refined for one application becomes unsuitable for others. This paper proposes a general tracking system based on a different approach. Rather than refining one algorithm for a specific task, two tracking algorithms are employed and used to correct each other during tracking. By choosing the two algorithms such that they have complementary failure modes, a robust algorithm is created without increased specialisation.
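A schematic Python sketch of the dual-tracker idea; the tracker interface (update/reset returning a box and a confidence) and the confidence test are assumptions, not the paper's concrete algorithms:

```python
class DualTracker:
    """Two trackers with complementary failure modes run in parallel;
    each frame, a confident tracker can re-seed a failed one."""

    def __init__(self, tracker_a, tracker_b, min_conf=0.5):
        self.a, self.b = tracker_a, tracker_b
        self.min_conf = min_conf

    def update(self, frame):
        box_a, conf_a = self.a.update(frame)   # e.g. a template tracker
        box_b, conf_b = self.b.update(frame)   # e.g. a motion-based tracker
        # whichever tracker is confident corrects the one that has failed
        if conf_a < self.min_conf <= conf_b:
            self.a.reset(frame, box_b)
            return box_b
        if conf_b < self.min_conf <= conf_a:
            self.b.reset(frame, box_a)
            return box_a
        # both healthy: return the more confident estimate
        return box_a if conf_a >= conf_b else box_b
```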

Relevance:

30.00%

Publisher:

Abstract:

We present a distributed surveillance system that uses multiple cheap static cameras to track multiple people in indoor environments. The system has a set of Camera Processing Modules and a Central Module that coordinates the tracking tasks among the cameras. Since each object in the scene can be tracked by a number of cameras, the problem is how to choose the most appropriate camera for each object. This is important given the need to deal with limited resources (CPU, power, etc.). We propose a novel algorithm to allocate objects to cameras using the object-to-camera distance while taking occlusion into account. The algorithm attempts to assign objects in overlapping fields of view to the nearest camera that can see the object without occlusion. Experimental results show that the system can coordinate cameras to track people and deals well with occlusion.
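The allocation rule reduces to a few lines; a Python sketch, with `distance` and `occluded` left as placeholder callables standing in for the system's actual geometry:

```python
def allocate(objects, cameras, distance, occluded):
    """Assign each object to the nearest camera that sees it without
    occlusion. distance(obj, cam) -> float; occluded(obj, cam) -> bool.
    Returns a dict mapping object id -> camera id."""
    assignment = {}
    for obj in objects:
        visible = [c for c in cameras if not occluded(obj, c)]
        pool = visible or cameras      # fall back if every view is occluded
        assignment[obj] = min(pool, key=lambda c: distance(obj, c))
    return assignment
```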

Relevance:

30.00%

Publisher:

Abstract:

Super-resolution is a method of post-processing image enhancement that increases the spatial resolution of video or images. Existing super-resolution techniques apply only to images captured of a planar scene. This paper aims to extend super-resolution concepts from the 2D domain to the 3D domain, drawing on ideas from both super-resolution and multi-view geometry, two fields of research that until now have predominantly been studied in isolation. 2D super-resolution methods are not without their complexities and limitations. However, once multiple views of a scene are considered within a super-resolution framework, a new range of issues arises that must also be resolved. For example, when input images of a scene with variation in depth are considered, it is no longer clear how and where the images should be registered. This paper describes the use of sparse 3D reconstruction to 'register' the input images, which are then transferred to a novel image plane and combined to increase the perceived detail in the scene. Experimental results using real images captured from generally positioned input cameras are presented.
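A much-simplified Python sketch of the transfer step, assuming OpenCV: project the sparse 3D points into each input view and into an upsampled novel view, fit a homography per view from those correspondences, then warp and average. This flattens each view onto a single plane, whereas the paper registers with the full sparse structure; the cameras, points and scale factor are placeholders:

```python
import cv2
import numpy as np

def transfer_and_fuse(images, cams, pts3d, novel_cam, out_size, scale=2.0):
    """images: grayscale arrays; cams, novel_cam: 3x4 matrices;
    pts3d: (N, 3) sparse reconstruction; out_size: (width, height)."""
    def proj(P, X):
        x = (P @ np.c_[X, np.ones(len(X))].T).T
        return x[:, :2] / x[:, 2:]

    target = proj(novel_cam, pts3d) * scale   # upsampled novel-view positions
    fused = np.zeros((out_size[1], out_size[0]), np.float32)
    for img, P in zip(images, cams):
        src = proj(P, pts3d)
        # homography induced by the sparse correspondences, robust to outliers
        H, _ = cv2.findHomography(src, target, cv2.RANSAC)
        fused += cv2.warpPerspective(img.astype(np.float32), H, out_size)
    return fused / len(images)
```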

Relevance:

30.00%

Publisher:

Abstract:

We present a novel method for document clustering using sparse representation of documents in conjunction with spectral clustering. An ℓ1-norm optimization formulation is posed to learn the sparse representation of each document, allowing us to characterize the affinity between documents by considering the overall information instead of traditional pairwise similarities. This document affinity is encoded through a graph on which spectral clustering is performed. The decomposition into multiple subspaces allows documents to be part of a sub-group that shares a smaller set of similar vocabulary, thus allowing for cleaner clusters. Extensive experimental evaluations on two real-world datasets from the Reuters-21578 and 20 Newsgroups corpora show that our proposed method consistently outperforms state-of-the-art algorithms. Significantly, the performance improvement over other methods is prominent for these datasets.
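A small Python sketch of this pipeline, assuming scikit-learn: each document is ℓ1-coded over all other documents, the code magnitudes are symmetrised into an affinity matrix, and spectral clustering runs on that graph. The Lasso penalty and the random stand-in for TF-IDF features are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_affinity(X, alpha=0.01):
    """Affinity from l1 sparse codes: document i is regressed on all
    other documents; nonzero coefficients link similar documents."""
    n = X.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        others = np.delete(X, i, axis=0)
        coder = Lasso(alpha=alpha, max_iter=5000)
        coder.fit(others.T, X[i])           # X[i] ~ others.T @ coef
        A[i, np.arange(n) != i] = np.abs(coder.coef_)
    return 0.5 * (A + A.T)                  # symmetrise for spectral clustering

rng = np.random.default_rng(0)
X = rng.random((40, 200))                   # stand-in for TF-IDF features
labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(sparse_affinity(X))
print(labels)
```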

Relevance:

30.00%

Publisher:

Abstract:

Geometric object detection has many applications, such as in tracking. Particle tracking microrheology is a technique for studying mechanical properties by accurately tracking the motion of immersed particles undergoing Brownian motion. Since particles are carried along by these random undulations of the medium, they can move in and out of the microscope's depth of focus, which results in halos (lower intensity). Two-point particle tracking microrheology (TPM) uses a threshold to find those particles with intensity peaks, which leads to broken particle trajectories. The halos of particles that are out of focus are circles, and their centres can be accurately tracked in most cases. When the particles are sparse, TPM loses useful information and may therefore yield inaccurate microrheology. An efficient algorithm to detect the centres of those particles will increase the accuracy of the measured Brownian motion. In this paper, a hybrid approach is proposed which combines the steps of TPM for particles in focus with a circle detection step using the circular Hough transform for particles with halos. As a consequence, it not only detects more particles in each frame but also dramatically extends the trajectories with satisfactory accuracy. Experiments over a video microscope data set of polystyrene spheres suspended in water undergoing Brownian motion confirmed the efficiency of the algorithm.
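A minimal Python sketch of the hybrid detection, assuming OpenCV; the peak threshold and Hough parameters are illustrative, not the tuned values from the paper:

```python
import cv2
import numpy as np

def detect_particles(frame_gray):
    centres = []
    # in-focus particles: bright local maxima above a threshold
    # (a stand-in for the TPM threshold/peak step)
    blur = cv2.GaussianBlur(frame_gray, (5, 5), 0)
    dil = cv2.dilate(blur, np.ones((7, 7), np.uint8))
    peaks = np.argwhere((blur == dil) & (blur > 180))
    centres += [(int(x), int(y)) for y, x in peaks]
    # out-of-focus halos: ring-like structures via the circular
    # Hough transform
    circles = cv2.HoughCircles(frame_gray, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=10, param1=100, param2=25,
                               minRadius=3, maxRadius=25)
    if circles is not None:
        centres += [(int(c[0]), int(c[1])) for c in circles[0]]
    return centres
```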

Relevance:

30.00%

Publisher:

Abstract:

The performance of image retrieval depends critically on the semantic representation and the distance function used to estimate the similarity of two images. A good representation should integrate multiple visual and textual (e.g., tag) features and offer a step closer to the true semantics of interest (e.g., concepts). As the distance function operates on the representation, the two are interdependent and thus should be addressed at the same time. We propose a probabilistic solution to learn both the representation, from multiple feature types and modalities, and the distance metric from data. The learning is regularised so that the learned representation and information-theoretic metric will (i) preserve the regularities of the visual/textual spaces, (ii) enhance structured sparsity, (iii) encourage small intra-concept distances, and (iv) keep inter-concept images separated. We demonstrate the capacity of our method on the NUS-WIDE data. For the well-studied 13-animal subset, our method outperforms state-of-the-art rivals. On the subset of single-concept images, we gain a 79.5% improvement over the standard nearest-neighbour approach on the MAP score, and 45.7% on the NDCG.
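As a loose illustration of goals (iii) and (iv) only, a tiny Mahalanobis-metric update in Python that pulls intra-concept pairs together and pushes inter-concept pairs apart; the paper's probabilistic model, structured sparsity and joint representation learning are omitted, and this step rule is a generic swap-in, not the authors' method:

```python
import numpy as np

def metric_step(M, x, y, same_concept, lr=0.01, margin=1.0):
    """One subgradient step on d(x,y) = (x-y)^T M (x-y): shrink
    intra-concept distances, enforce a margin between concepts."""
    d = x - y
    dist = d @ M @ d
    if same_concept:
        grad = np.outer(d, d)          # pull the pair together
    elif dist < margin:
        grad = -np.outer(d, d)         # push apart while inside the margin
    else:
        return M                       # already separated enough
    M = M - lr * grad
    # project back to the positive semi-definite cone
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0, None)) @ V.T
```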

Relevance:

30.00%

Publisher:

Abstract:

Electronic Medical Records (EMR) are increasingly used for risk prediction. EMR analysis is complicated by missing entries, for two reasons: the "primary reason for admission" is included in the EMR, but the comorbidities (other chronic diseases) are left uncoded; and many zero values in the data are accurate, reflecting that a patient has not accessed medical facilities. A key challenge is to deal with the peculiarities of this data: unlike many other datasets, EMR is sparse, reflecting the fact that patients have some, but not all, diseases. We propose a novel model to fill in these missing values, and use the new representation for prediction of key hospital events. To fill in the missing values, we represent the feature-patient matrix as a product of two low-rank factors, preserving the sparsity property in the product. Intuitively, the product regularization allows sparse imputation of patient conditions, reflecting common comorbidities across patients. We develop a scalable optimization algorithm based on a block coordinate descent method to find an optimal solution. We evaluate the proposed framework on two real-world EMR cohorts: Cancer (7,000 admissions) and Acute Myocardial Infarction (2,652 admissions). Our results show that the AUC for 3-month admission prediction improves significantly, from 0.741 to 0.786 for the Cancer data and from 0.678 to 0.724 for the AMI data. We also extend the proposed method to a supervised model for predicting multiple related risk outcomes (e.g., emergency presentations and admissions in hospital over 3-, 6- and 12-month periods) in an integrated framework. For this model, the AUC averaged over outcomes improves significantly, from 0.768 to 0.806 for the Cancer data and from 0.685 to 0.748 for the AMI data.
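A toy Python sketch of the imputation idea: factor the observed entries of the patient-by-feature matrix into two non-negative low-rank factors with an ℓ1 penalty so the completed matrix stays sparse, alternating the factor updates as a simple stand-in for the paper's block coordinate descent (sizes, penalties and learning rate are invented):

```python
import numpy as np

def sparse_complete(X, observed, rank=10, l1=0.01, lr=0.01, iters=500):
    """X: patient-by-feature matrix; observed: 0/1 mask of known entries.
    Returns the completed (imputed) matrix U @ V."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    U = rng.random((n, rank)) * 0.1
    V = rng.random((rank, d)) * 0.1
    for _ in range(iters):
        R = observed * (U @ V - X)          # residual on observed entries only
        U = np.maximum(U - lr * (R @ V.T + l1), 0.0)   # block update for U
        R = observed * (U @ V - X)
        V = np.maximum(V - lr * (U.T @ R + l1), 0.0)   # block update for V
    return U @ V

# toy data: sparse "diagnosis" matrix with 70% of entries observed
X = np.random.default_rng(1).random((100, 50)) * \
    (np.random.default_rng(2).random((100, 50)) < 0.1)
mask = (np.random.default_rng(3).random((100, 50)) < 0.7).astype(float)
print(np.round(sparse_complete(X, mask)[:2, :5], 3))
```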