915 results for Processing image


Relevance:

20.00%

Publisher:

Abstract:

In the multi-view approach to semisupervised learning, we choose one predictor from each of multiple hypothesis classes, and we co-regularize our choices by penalizing disagreement among the predictors on the unlabeled data. We examine the co-regularization method used in the co-regularized least squares (CoRLS) algorithm, in which the views are reproducing kernel Hilbert spaces (RKHS's), and the disagreement penalty is the average squared difference in predictions. The final predictor is the pointwise average of the predictors from each view. We call the set of predictors that can result from this procedure the co-regularized hypothesis class. Our main result is a tight bound on the Rademacher complexity of the co-regularized hypothesis class in terms of the kernel matrices of each RKHS. We find that the co-regularization reduces the Rademacher complexity by an amount that depends on the distance between the two views, as measured by a data-dependent metric. We then use standard techniques to bound the gap between training error and test error for the CoRLS algorithm. Experimentally, we find that the amount of reduction in complexity introduced by co-regularization correlates with the amount of improvement that co-regularization gives in the CoRLS algorithm.
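
As a rough illustration of the co-regularization objective described above, the following numpy sketch uses two linear views in place of general RKHS's; the data, regularization weights and variable names are assumptions made for the example, not the paper's experimental setup.

```python
import numpy as np

# Two "views" of the data: labelled rows (X1_l, X2_l, y) and unlabelled rows
# (X1_u, X2_u). Linear feature maps stand in for general RKHS's here.
rng = np.random.default_rng(0)
X1_l, X2_l, y = rng.normal(size=(20, 5)), rng.normal(size=(20, 4)), rng.normal(size=20)
X1_u, X2_u = rng.normal(size=(100, 5)), rng.normal(size=(100, 4))
g1, g2, mu = 0.1, 0.1, 1.0   # per-view regularization and disagreement weights

# Minimising  |y - X1_l w1|^2 + |y - X2_l w2|^2 + g1|w1|^2 + g2|w2|^2
#           + mu |X1_u w1 - X2_u w2|^2  is a quadratic problem, so the
# stacked weight vector [w1; w2] solves a single linear system.
A11 = X1_l.T @ X1_l + g1 * np.eye(5) + mu * X1_u.T @ X1_u
A22 = X2_l.T @ X2_l + g2 * np.eye(4) + mu * X2_u.T @ X2_u
A12 = -mu * (X1_u.T @ X2_u)
A = np.block([[A11, A12], [A12.T, A22]])
b = np.concatenate([X1_l.T @ y, X2_l.T @ y])
w = np.linalg.solve(A, b)
w1, w2 = w[:5], w[5:]

# The final predictor is the pointwise average of the two views' predictions.
predict = lambda x1, x2: 0.5 * (x1 @ w1 + x2 @ w2)
print(predict(X1_u[:3], X2_u[:3]))
```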

Relevance:

20.00%

Publisher:

Abstract:

We provide an algorithm that achieves the optimal regret rate in an unknown weakly communicating Markov Decision Process (MDP). The algorithm proceeds in episodes where, in each episode, it picks a policy using regularization based on the span of the optimal bias vector. For an MDP with S states and A actions whose optimal bias vector has span bounded by H, we show a regret bound of $\tilde{O}(HS\sqrt{AT})$. We also relate the span to various diameter-like quantities associated with the MDP, demonstrating how our results improve on previous regret bounds.
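
For reference, the claimed bound can be written out as follows, with T denoting the number of interaction steps (a notational assumption) and h* the optimal bias vector named in the abstract:

```latex
\[
  \mathrm{Regret}(T) \;=\; \tilde{O}\!\bigl(H S \sqrt{A T}\bigr),
  \qquad
  \operatorname{span}(h^{*}) \;=\; \max_{s} h^{*}(s) \;-\; \min_{s} h^{*}(s) \;\le\; H .
\]
```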

Relevance:

20.00%

Publisher:

Abstract:

We study the problem of allocating stocks to dark pools. We propose and analyze an optimal approach for allocation when continuous-valued allocations are allowed. We also propose a modification for the case when only integer-valued allocations are possible. We extend previous work on this problem to adversarial scenarios, while also improving on its results in the i.i.d. setup. The resulting algorithms are efficient and perform well in simulations under stochastic and adversarial inputs.
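
As a toy illustration of the continuous-versus-integer distinction (not the paper's allocation algorithm), the sketch below splits an order across venues in proportion to assumed weights and, in the integer case, rounds with a largest-remainder rule; all names and numbers are hypothetical.

```python
import numpy as np

def allocate(total_shares, weights, integer=False):
    """Split an order across venues in proportion to `weights`.

    Illustrative only: the continuous allocation is a proportional split;
    the integer variant rounds down and hands the leftover shares to the
    venues with the largest remainders.
    """
    w = np.asarray(weights, dtype=float)
    alloc = total_shares * w / w.sum()
    if not integer:
        return alloc
    floored = np.floor(alloc).astype(int)
    leftover = int(total_shares - floored.sum())
    order = np.argsort(alloc - floored)[::-1]      # largest remainders first
    floored[order[:leftover]] += 1
    return floored

print(allocate(1000, [0.5, 0.3, 0.2]))          # continuous-valued allocation
print(allocate(1000, [0.5, 0.3, 0.2], True))    # integer-valued allocation
```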

Relevance:

20.00%

Publisher:

Abstract:

Texture analysis and textural cues have been applied for image classification, segmentation and pattern recognition. Dominant texture descriptors include directionality, coarseness, line-likeness, etc. In this dissertation a class of textures known as particulate textures is defined: textures that are predominantly coarse or blob-like. The set of features that characterise particulate textures is different from those that characterise classical textures. These features are micro-texture, macro-texture, size, shape and compaction. Classical texture analysis techniques do not adequately capture particulate texture features. This gap is identified and new methods for analysing particulate textures are proposed. The levels of complexity in particulate textures are also presented, ranging from the simplest images, where blob-like particles are easily isolated from their background, to the more complex images, where the particles and the background are not easily separable or the particles are occluded. Simple particulate images can be analysed for particle shapes and sizes. Complex particulate texture images, on the other hand, often permit only the estimation of particle dimensions. Real-life applications of particulate textures are reviewed, including applications to sedimentology, granulometry and road surface texture analysis. A new framework for computation of particulate shape is proposed. A granulometric approach for particle size estimation based on edge detection is developed, which can be adapted to the gray level of the images by varying its parameters. This study binds visual texture analysis and road surface macrotexture in a theoretical framework, making it possible to apply monocular imaging techniques to road surface texture analysis. Results from the application of the developed algorithm to road surface macro-texture are compared with results based on Fourier spectra, the autocorrelation function and wavelet decomposition, indicating the superior performance of the proposed technique. The influence of image acquisition conditions such as illumination and camera angle on the results was systematically analysed. Experimental data were collected from over 5 km of road in Brisbane, and the estimated coarseness along the road was compared with laser profilometer measurements. A coefficient of determination R² exceeding 0.9 was obtained when correlating the proposed imaging technique with the state-of-the-art Sensor Measured Texture Depth (SMTD) obtained using laser profilometers.
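
A minimal sketch of an edge-based particle-sizing step in the spirit described above, using scipy; the smoothing scale and threshold are the kind of parameters that would be tuned to the gray level of the images, and none of this is the dissertation's exact algorithm.

```python
import numpy as np
from scipy import ndimage

def particle_size_histogram(gray, edge_sigma=2.0, edge_thresh=0.1):
    """Illustrative sketch: detect edges, close them into blob regions,
    and measure the size of each region.

    gray: 2-D float array; edge_sigma and edge_thresh are the tunable
    parameters that adapt the method to the gray level of the image.
    """
    # Gradient-magnitude edge map at scale edge_sigma.
    gx = ndimage.gaussian_filter(gray, edge_sigma, order=[0, 1])
    gy = ndimage.gaussian_filter(gray, edge_sigma, order=[1, 0])
    edges = np.hypot(gx, gy) > edge_thresh
    # Regions enclosed by edges approximate individual particles.
    filled = ndimage.binary_fill_holes(edges)
    labels, n = ndimage.label(filled)
    sizes = ndimage.sum(np.ones_like(gray), labels, index=range(1, n + 1))
    return np.asarray(sizes)   # pixel area of each detected blob
```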

Relevance:

20.00%

Publisher:

Abstract:

As the graphics race subsides and gamers grow weary of predictable and deterministic game characters, game developers must put aside their “old faithful” finite state machines and look to more advanced techniques that give users the gaming experience they crave. The next industry breakthrough will come from characters that behave realistically and that can learn and adapt, rather than from more polygons, higher-resolution textures and more frames per second. This paper explores the various artificial intelligence techniques that are currently being used by game developers, as well as techniques that are new to the industry. The techniques covered in this paper are finite state machines, scripting, agents, flocking, fuzzy logic and fuzzy state machines, decision trees, neural networks, genetic algorithms and extensible AI. This paper introduces each of these techniques, explains how they can be applied to games and describes how commercial games are currently making use of them. Finally, the effectiveness of these techniques and their future role in the industry are evaluated.
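
For concreteness, a minimal finite state machine of the kind the paper calls developers' "old faithful"; the states and events are invented for illustration.

```python
# Transition table for a toy guard character (state, event) -> next state.
TRANSITIONS = {
    ("patrol", "see_player"): "chase",
    ("chase", "in_range"): "attack",
    ("chase", "lost_player"): "patrol",
    ("attack", "player_fled"): "chase",
}

class GuardFSM:
    def __init__(self, state="patrol"):
        self.state = state

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged --
        # exactly the predictable, deterministic behaviour the paper critiques.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

guard = GuardFSM()
print(guard.handle("see_player"))   # chase
print(guard.handle("in_range"))     # attack
```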

Relevance:

20.00%

Publisher:

Abstract:

Autonomous development of sensorimotor coordination enables a robot to adapt and change its action choices as it interacts with the world throughout its lifetime. The Experience Network is a structure that rapidly learns coordination between visual and haptic inputs and motor action. This paper presents methods that handle the high dimensionality of the network state-space arising from the simultaneous detection of multiple sensory features. The methods introduce no significant increase in the complexity of the underlying representations and also allow emergent, task-specific, semantic information to inform action selection. Experimental results show rapid learning on a real robot: starting with no sensorimotor mappings, the mobile robot becomes capable of wall avoidance and target acquisition.

Relevance:

20.00%

Publisher:

Abstract:

CCTV and surveillance networks are increasingly being used for operational as well as security tasks. One emerging area of technology that lends itself to operational analytics is soft biometrics. Soft biometrics can be used to describe a person and detect them throughout a sparse multi-camera network. This enables tasks such as determining the time taken to get from point to point, and the paths taken through an environment, by detecting and matching people across disjoint views. However, in a busy environment such as an airport, where there are hundreds if not thousands of people, attempting to monitor everyone is highly unrealistic. In this paper we propose an average soft biometric that can be used to identify people who look distinct and are thus suitable for monitoring through a large, sparse camera network. We demonstrate how an average soft biometric can be used to identify unique people and calculate operational measures such as the time taken to travel from point to point.
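
A small sketch of the underlying idea, assuming soft-biometric measurements are already available as a feature matrix: a subject's distinctiveness is measured as distance from the population average, and only the most distinct subjects are selected for monitoring. The traits and metric are assumptions, not the paper's specification.

```python
import numpy as np

def distinctive_subjects(features, top_k=5):
    """Rank people by distance from the 'average person'.

    features: (n_people, n_traits) array of soft-biometric measurements
    (e.g. height, clothing colour descriptors). Returns the indices of the
    top_k most distinct people, i.e. the best candidates for tracking
    through a sparse camera network.
    """
    feats = np.asarray(features, dtype=float)
    # Normalise each trait so no single measurement dominates the distance.
    z = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)
    distinctiveness = np.linalg.norm(z, axis=1)   # distance from the average
    return np.argsort(distinctiveness)[::-1][:top_k]
```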

Relevance:

20.00%

Publisher:

Abstract:

Unusual event detection in crowded scenes remains challenging because of the diversity of events and noise. In this paper, we present a novel approach for unusual event detection via sparse reconstruction of dynamic textures over an overcomplete basis set, with the dynamic texture described by local binary patterns from three orthogonal planes (LBP-TOP). The overcomplete basis set is learnt from training data in which only normal items are observed. In the detection process, given a new observation, we compute the sparse coefficients using the Dantzig Selector algorithm, which was proposed in the compressed sensing literature. The reconstruction errors are then computed, and the abnormal items are detected based on them. Our approach can be used to detect both local and global abnormal events. We evaluate our algorithm on the UCSD Abnormality Datasets for local anomaly detection, where it is shown to outperform current state-of-the-art approaches, and we also obtain promising results for rapid escape detection using the PETS2009 dataset.
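
A hedged sketch of the detection step: given an overcomplete basis learnt from normal data, score each new LBP-TOP descriptor by its sparse-reconstruction error. The paper computes the sparse coefficients with the Dantzig Selector; here an l1-penalised least-squares solver (Lasso) stands in for it.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reconstruction_errors(D, X, alpha=0.05):
    """Sparse-reconstruction anomaly scores (illustrative sketch).

    D: (n_features, n_atoms) overcomplete basis learnt from normal data.
    X: (n_samples, n_features) LBP-TOP descriptors to score.
    Lasso is used as a stand-in for the Dantzig Selector.
    """
    errors = []
    for x in X:
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coef = lasso.fit(D, x).coef_               # sparse coefficients
        errors.append(np.linalg.norm(x - D @ coef))
    return np.asarray(errors)                      # large error => unusual event
```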

Relevance:

20.00%

Publisher:

Abstract:

This paper describes a scene invariant crowd counting algorithm that uses local features to monitor crowd size. Unlike previous algorithms that require each camera to be trained separately, the proposed method uses camera calibration to scale between viewpoints, allowing a system to be trained and tested on different scenes. A pre-trained system could therefore be used as a turn-key solution for crowd counting across a wide range of environments. The use of local features allows the proposed algorithm to calculate local occupancy statistics, and Gaussian process regression is used to scale to conditions which are unseen in the training data, also providing confidence intervals for the crowd size estimate. A new crowd counting database is introduced to the computer vision community to enable a wider evaluation over multiple scenes, and the proposed algorithm is tested on seven datasets to demonstrate scene invariance and high accuracy. To the authors' knowledge this is the first system of its kind due to its ability to scale between different scenes and viewpoints.
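
A compact sketch of the regression stage, assuming calibration-scaled local-feature statistics per frame: Gaussian process regression maps the statistics to a crowd count, and the predictive standard deviation supplies the confidence interval. The feature values and counts below are made up for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Each row holds two calibration-scaled local-feature statistics for a frame
# (e.g. normalised foreground area and edge density); y is the true count.
X_train = np.array([[0.10, 0.12], [0.25, 0.30], [0.40, 0.52], [0.60, 0.80]])
y_train = np.array([5.0, 14.0, 25.0, 41.0])

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

X_test = np.array([[0.33, 0.43]])
mean, std = gp.predict(X_test, return_std=True)
print(f"estimated count: {mean[0]:.1f} ± {1.96 * std[0]:.1f}")   # 95% interval
```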

Relevance:

20.00%

Publisher:

Abstract:

Micro aerial vehicles (MAVs) are a rapidly growing area of research and development in robotics. For autonomous robot operations, localization has typically been calculated using GPS, external camera arrays, or onboard range or vision sensing. In cluttered indoor or outdoor environments, onboard sensing is the only viable option. In this paper we present an appearance-based approach to visual SLAM on a flying MAV using only low-quality vision. Our approach consists of a visual place recognition algorithm that operates on 1000-pixel images, a lightweight visual odometry algorithm, and a visual expectation algorithm that improves the recall of place sequences and the precision with which they are recalled as the robot flies along a similar path. Using data gathered from outdoor datasets, we show that the system is able to perform visual recognition with low-quality, intermittent visual sensory data. By combining the visual algorithms with the RatSLAM system, we also demonstrate how the algorithms enable successful SLAM.
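
One way to make the place recognition step concrete: a lightweight matcher over tiny, patch-normalised grayscale frames, in the spirit of appearance-based recognition on very low-quality imagery. This is an illustrative sketch, not the paper's algorithm, and the match threshold is arbitrary.

```python
import numpy as np

def best_match(query, templates, threshold=0.1):
    """Lightweight place recognition on tiny images (illustrative sketch).

    query: (H, W) low-resolution grayscale frame, values in [0, 1].
    templates: list of previously stored frames of the same size.
    Returns the index of the closest stored place, or -1 for a new place.
    """
    q = (query - query.mean()) / (query.std() + 1e-9)   # crude illumination normalisation
    scores = []
    for t in templates:
        tn = (t - t.mean()) / (t.std() + 1e-9)
        scores.append(np.abs(q - tn).mean())            # mean absolute difference
    if not scores:
        return -1
    best = int(np.argmin(scores))
    return best if scores[best] < threshold else -1
```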

Relevance:

20.00%

Publisher:

Abstract:

Probabilistic topic models have recently been used for activity analysis in video processing, due to their strong capacity to model both local activities and interactions in crowded scenes. In those applications, a video sequence is divided into a collection of uniform non-overlapping video clips, and the high-dimensional continuous inputs are quantized into a bag of discrete visual words. The hard division of video clips and the hard assignment of visual words lead to problems when an activity is split over multiple clips, or when the most appropriate visual word for quantization is unclear. In this paper, we propose a novel algorithm which makes use of a soft histogram technique to compensate for the loss of information in the quantization process, and a soft cut technique in the temporal domain to overcome problems caused by separating an activity into two video clips. In the detection process, we also apply a soft decision strategy to detect unusual events. We show that the proposed soft decision approach outperforms its hard decision counterpart in both local and global activity modelling.
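
To illustrate the soft-assignment idea, the sketch below lets each descriptor vote for every codeword with a Gaussian weight of its distance, instead of a single hard vote for the nearest word; the kernel width is an assumed parameter, and the soft cut in the temporal domain is not shown.

```python
import numpy as np

def soft_histogram(descriptors, codebook, sigma=1.0):
    """Soft visual-word histogram (illustrative sketch of the idea).

    descriptors: (N, D) feature vectors from one video clip.
    codebook:    (K, D) visual-word centres.
    Each descriptor contributes total weight 1, spread over all codewords.
    """
    hist = np.zeros(len(codebook))
    for d in np.atleast_2d(descriptors):
        dist2 = np.sum((codebook - d) ** 2, axis=1)
        w = np.exp(-dist2 / (2.0 * sigma ** 2))
        hist += w / (w.sum() + 1e-12)
    return hist
```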

Relevance:

20.00%

Publisher:

Abstract:

Modelling events in densely crowded environments remains challenging, due to the diversity of events and the noise in the scene. We propose a novel approach for anomalous event detection in crowded scenes using dynamic textures described by the Local Binary Patterns from Three Orthogonal Planes (LBP-TOP) descriptor. The scene is divided into spatio-temporal patches where LBP-TOP based dynamic textures are extracted. We apply hierarchical Bayesian models to detect the patches containing unusual events. Our method is an unsupervised approach, and it does not rely on object tracking or background subtraction. We show that our approach outperforms existing state-of-the-art algorithms for anomalous event detection on the UCSD dataset.
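
A simplified sketch of the LBP-TOP feature extraction for one spatio-temporal patch: basic 8-neighbour LBP histograms from the central XY, XT and YT planes, concatenated. The full descriptor aggregates over all planes, so treat this as an approximation for illustration only.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP code histogram for one 2-D plane (>= 3x3)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.int32) << bit
    return np.bincount(codes.ravel(), minlength=256)

def lbp_top(volume):
    """Simplified LBP-TOP descriptor for a (T, H, W) spatio-temporal patch."""
    t, h, w = volume.shape
    xy = lbp_histogram(volume[t // 2])        # spatial appearance
    xt = lbp_histogram(volume[:, h // 2, :])  # horizontal motion
    yt = lbp_histogram(volume[:, :, w // 2])  # vertical motion
    return np.concatenate([xy, xt, yt])
```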

Relevance:

20.00%

Publisher:

Abstract:

Agents make up an important part of game worlds, ranging from the characters and monsters that live in the world to the armies that the player controls. Despite their importance, agents in current games rarely display an awareness of their environment or react appropriately, which severely detracts from the believability of the game. Some games have included agents with a basic awareness of other agents, but they are still unaware of important game events or environmental conditions. This paper presents an agent design we have developed, which combines cellular automata for environmental modeling with influence maps for agent decision-making. The agents were implemented in a 3D game environment we have developed, the EmerGEnT system, and tuned through three experiments. The result is simple, flexible game agents that are able to respond to natural phenomena (e.g. rain or fire) while pursuing a goal.
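
A toy combination of the two ingredients, not the EmerGEnT system itself: a cellular-automaton step spreads a hazard value across a grid, and the agent reads that grid as an influence map to pick the safest neighbouring cell. The grid size, spread rate and hazard are invented for the example.

```python
import numpy as np

def spread(hazard, rate=0.25):
    """One cellular-automaton step: each cell moves toward the average hazard
    of its four neighbours (a toy stand-in for fire or water spreading)."""
    padded = np.pad(hazard, 1)
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return np.clip(hazard + rate * (neighbours - hazard), 0.0, 1.0)

def choose_move(hazard, pos):
    """Influence-map decision: step to the lowest-hazard neighbouring cell."""
    y, x = pos
    moves = [(y + dy, x + dx) for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]
             if 0 <= y + dy < hazard.shape[0] and 0 <= x + dx < hazard.shape[1]]
    return min(moves, key=lambda p: hazard[p])

hazard = np.zeros((5, 5))
hazard[2, 2] = 1.0                      # fire breaks out mid-map
for _ in range(3):
    hazard = spread(hazard)
print(choose_move(hazard, (2, 1)))      # the agent steps away from the hot spot
```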

Relevance:

20.00%

Publisher:

Abstract:

This paper defines and discusses two contrasting approaches to designing game environments. The first, referred to as scripting, requires developers to anticipate, hand-craft and script specific game objects, events and player interactions. The second, known as emergence, involves defining general, global rules that interact to give rise to emergent gameplay. Each of these approaches is defined, discussed and analyzed with respect to the considerations and effects for game developers and game players. Subsequently, various techniques for implementing these design approaches are identified and discussed. It is concluded that scripting and emergence are two extremes of the same continuum, neither of which is ideal for game development. Rather, there needs to be a compromise in which the boundaries of action (such as story and game objectives) can be hardcoded and non-scripted behavior (such as interactions and strategies) is able to emerge within these boundaries.

Relevance:

20.00%

Publisher:

Abstract:

The future direction of game development is towards more flexible, realistic and interactive game worlds. However, current methods of game design do not allow for anything other than pre-scripted player exchanges and static objects and environments. An emergent approach to game development involves the creation of a globally designed game system that provides rules and boundaries for player interactions, rather than prescribed paths. Emergence in Games provides a detailed foundation for applying the theory and practice of emergence in games to game design. Emergent narrative, characters and agents, and game worlds are covered, and a hands-on tutorial and case study allow the reader to put the skills and ideas presented into practice.