790 results for Multi-Dimensional Random Walk
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
A quantum random walk on the integers exhibits pseudo memory effects, in that its probability distribution after N steps is determined by reshuffling the first N distributions that arise in a classical random walk with the same initial distribution. In a classical walk, entropy increase can be regarded as a consequence of the majorization ordering of successive distributions. The Lorenz curves of successive distributions for a symmetric quantum walk reveal no majorization ordering in general. Nevertheless, entropy can increase, and computer experiments show that it does so on average. Varying the stages at which the quantum coin system is traced out leads to new quantum walks, including a symmetric walk for which majorization ordering is valid but the spreading rate exceeds that of the usual symmetric quantum walk.
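As a minimal sketch of the kind of computer experiment described above (not code from the thesis), the following Python snippet simulates a symmetric Hadamard-coin quantum walk on the integers, traces out the coin at each step, and records the Shannon entropy of the position distribution; the initial coin state and all parameter values are illustrative assumptions.

```python
import numpy as np

def quantum_walk_entropy(steps):
    """Hadamard-coin quantum walk on the integers; returns the Shannon
    entropy (bits) of the position distribution after each step."""
    n = 2 * steps + 1                          # positions -steps..steps
    amp = np.zeros((n, 2), dtype=complex)      # amplitude[position, coin]
    amp[steps, 0] = 1 / np.sqrt(2)             # symmetric initial coin state
    amp[steps, 1] = 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    entropies = []
    for _ in range(steps):
        amp = amp @ H.T                        # flip the coin at every site
        shifted = np.zeros_like(amp)
        shifted[:-1, 0] = amp[1:, 0]           # coin 0 moves one site left
        shifted[1:, 1] = amp[:-1, 1]           # coin 1 moves one site right
        amp = shifted
        p = (np.abs(amp) ** 2).sum(axis=1)     # trace out the coin system
        p = p[p > 1e-12]                       # avoid log(0)
        entropies.append(float(-(p * np.log2(p)).sum()))
    return entropies

S = quantum_walk_entropy(100)
print(all(b >= a for a, b in zip(S, S[1:])))   # monotone? not guaranteed,
print(S[-1] > S[0])                            # but entropy grows on average
```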
Abstract:
The notorious "dimensionality curse" is a well-known phenomenon for any multi-dimensional index attempting to scale up to high dimensions. One well-known approach to overcoming the degradation in performance with increasing dimensionality is to reduce the dimensionality of the original dataset before constructing the index. However, identifying the correlations among the dimensions and effectively reducing them are challenging tasks. In this paper, we present an adaptive Multi-level Mahalanobis-based Dimensionality Reduction (MMDR) technique for high-dimensional indexing. Our MMDR technique has four notable features compared to existing methods. First, it discovers elliptical clusters for more effective dimensionality reduction by using only the low-dimensional subspaces. Second, data points in the different axis systems are indexed using a single B+-tree. Third, our technique is highly scalable in terms of data size and dimensionality. Finally, it is also dynamic and adaptive to insertions. An extensive performance study was conducted using both real and synthetic datasets, and the results show that our technique not only achieves higher precision, but also enables queries to be processed efficiently. Copyright Springer-Verlag 2005
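The following Python sketch illustrates only the core idea behind a Mahalanobis-based reduction: fit an elliptical (Gaussian) model to a cluster and keep its top principal axes as a local axis system. It is a simplified, single-cluster illustration, not the MMDR algorithm itself, which additionally handles multiple clusters, multi-level subspaces and B+-tree indexing; all names and parameter values here are assumptions.

```python
import numpy as np

def mahalanobis_reduce(points, k):
    """Fit an elliptical (Gaussian) model to one cluster and keep its
    k principal axes, i.e. a Mahalanobis-aligned local axis system."""
    mean = points.mean(axis=0)
    cov = np.cov(points - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalue order
    axes = eigvecs[:, ::-1][:, :k]             # top-k principal directions
    reduced = (points - mean) @ axes           # coordinates in cluster axes
    return reduced, mean, axes

# toy elliptical cluster in 10-D that really lives in 2 dimensions
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2)) * [5.0, 1.0]
points = latent @ rng.normal(size=(2, 10)) + rng.normal(scale=0.05, size=(500, 10))
reduced, mean, axes = mahalanobis_reduce(points, k=2)
print(reduced.shape)   # (500, 2); reduced keys could then go into a B+-tree
```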
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
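A minimal sketch of the proposal mechanism described above, assuming the deterministic (variational Gaussian) approximation has already been computed and is summarized by a mean vector and covariance matrix: with some probability the sampler proposes independently from the approximation, otherwise it takes a symmetric random-walk step. This is a generic Metropolis-Hastings skeleton, not the authors' path-space implementation; `log_post`, `q_mean` and `q_cov` are placeholder names.

```python
import numpy as np

def mixture_mh(log_post, q_mean, q_cov, n_iter=5000, mix=0.5, rw_scale=0.1):
    """Metropolis-Hastings with a mixture proposal: with probability `mix`
    propose from a fixed Gaussian approximation N(q_mean, q_cov)
    (independence sampler), otherwise take a Gaussian random-walk step."""
    rng = np.random.default_rng(0)
    L = np.linalg.cholesky(q_cov)
    Linv = np.linalg.inv(L)

    def log_q(x):  # Gaussian approximation log-density (up to a constant)
        z = Linv @ (x - q_mean)
        return -0.5 * z @ z

    x = q_mean.copy()
    samples = []
    for _ in range(n_iter):
        if rng.random() < mix:   # independence proposal from the approximation
            y = q_mean + L @ rng.normal(size=x.size)
            log_alpha = log_post(y) - log_post(x) + log_q(x) - log_q(y)
        else:                    # symmetric random-walk proposal
            y = x + rw_scale * rng.normal(size=x.size)
            log_alpha = log_post(y) - log_post(x)
        if np.log(rng.random()) < log_alpha:
            x = y
        samples.append(x.copy())
    return np.array(samples)

# toy target: 2-D Gaussian with mean [1, 1]
log_post = lambda x: -0.5 * np.sum((x - 1.0) ** 2)
out = mixture_mh(log_post, q_mean=np.zeros(2), q_cov=np.eye(2))
print(out.mean(axis=0))   # roughly [1, 1]
```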
Abstract:
The concept of random lasers, which exploit multiple scattering in amplifying disordered media to generate coherent light, has attracted a great deal of attention in recent years. Here, we demonstrate a fibre laser with a mirrorless open cavity that operates via Rayleigh scattering, amplified through the Raman effect. The fibre waveguide geometry provides transverse confinement and effectively one-dimensional random distributed feedback, leading to the generation of a stationary near-Gaussian beam with a narrow spectrum, and with efficiency and performance comparable to regular lasers. Rayleigh scattering due to inhomogeneities within the glass structure of the fibre is extremely weak, making the operation and properties of the proposed random distributed feedback lasers profoundly different from those of both traditional random lasers and conventional fibre lasers.
Abstract:
An optical fiber is treated as a natural one-dimensional random system in which lasing is possible due to a combination of Rayleigh scattering by refractive index inhomogeneities and distributed amplification through the Raman effect. We present such a random fiber laser, tunable over a broad wavelength range with uniquely flat output power and high efficiency, that outperforms traditional lasers of the same category. Outstanding characteristics defined by the deep underlying physics and the simplicity of the scheme make the demonstrated laser a very attractive light source both for fundamental science and practical applications.
Abstract:
As shown recently, a long telecommunication fibre may be treated as a natural one-dimensional random system, where lasing is possible due to a combination of random distributed feedback via Rayleigh scattering by natural refractive index inhomogeneities and distributed amplification through the Raman effect. Here we present a new type of random fibre laser with a narrow (∼1 nm) spectrum tunable over a broad wavelength range (1535-1570 nm), with uniquely flat (∼0.1 dB) and high (>2 W) output power and prominent (>40%) differential efficiency, which outperforms traditional fibre lasers of the same category, e.g. a conventional Raman laser with a linear cavity formed in the same fibre by adding point reflectors. An analytical model is proposed that quantitatively explains the higher efficiency and the flatter tuning curve of the random fibre laser compared to the conventional one. Other important features of the random fibre laser, such as its "modeless" spectrum of specific shape and the corresponding intensity fluctuations, as well as techniques for controlling its output characteristics, are discussed. Outstanding characteristics defined by new underlying physics, together with the simplicity of a scheme implemented in standard telecom fibre, make the demonstrated tunable random fibre laser a very attractive light source both for fundamental science and for practical applications such as optical communication, sensing and secure transmission. © 2012 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE).
Abstract:
Mathematics Subject Classification: 26A33, 45K05, 60J60, 60G50, 65N06, 80-99.
Abstract:
In this paper, we investigate the use of manifold learning techniques to enhance the separation properties of standard graph kernels. The idea stems from the observation that when we perform multidimensional scaling on the distance matrices extracted from the kernels, the resulting data tend to be clustered along a curve that wraps around the embedding space, a behavior which suggests that long-range distances are not estimated accurately, resulting in an increased curvature of the embedding space. Hence, we propose to use a number of manifold learning techniques to compute a low-dimensional embedding of the graphs in an attempt to unfold the embedding manifold and increase the class separation. We perform an extensive experimental evaluation on a number of standard graph datasets using the shortest-path (Borgwardt and Kriegel, 2005), graphlet (Shervashidze et al., 2009), random walk (Kashima et al., 2003) and Weisfeiler-Lehman (Shervashidze et al., 2011) kernels. We observe the most significant improvement in the case of the graphlet kernel, which fits with the observation that neglecting the locational information of the substructures leads to a stronger curvature of the embedding manifold. On the other hand, the Weisfeiler-Lehman kernel partially mitigates the locality problem by using node label information, and thus does not clearly benefit from the manifold learning. Interestingly, our experiments also show that the unfolding of the space seems to reduce the performance gap between the examined kernels.
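As a sketch of the first step described above (not the paper's code), the snippet below converts a kernel matrix into the induced distance matrix and applies classical multidimensional scaling; a manifold learning method such as Isomap would then replace this linear embedding to unfold the curved manifold. Function and variable names are illustrative.

```python
import numpy as np

def kernel_to_embedding(K, dim=2):
    """Classical MDS on the distance matrix induced by a kernel:
    d(i, j)^2 = K_ii + K_jj - 2 K_ij; eigendecompose the doubly
    centred Gram matrix and keep the top `dim` coordinates."""
    n = K.shape[0]
    d2 = np.diag(K)[:, None] + np.diag(K)[None, :] - 2 * K
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    B = -0.5 * J @ d2 @ J
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:dim]      # largest eigenvalues first
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))

# toy example: linear kernel on random 3-D points
X = np.random.default_rng(0).normal(size=(30, 3))
print(kernel_to_embedding(X @ X.T, dim=2).shape)   # (30, 2)
```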
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2014
Abstract:
2010 Mathematics Subject Classification: 62J99.
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
Abstract:
The goal of image retrieval and matching is to find and locate object instances in images from a large-scale image database. While visual features are abundant, how to combine them to improve performance over individual features remains a challenging task. In this work, we focus on leveraging multiple features for accurate and efficient image retrieval and matching. We first propose two graph-based approaches to rerank initially retrieved images for generic image retrieval. In the graph, vertices are images while edges are similarities between image pairs. Our first approach employs a mixture Markov model, based on a random walk over multiple graphs, to fuse the graphs. We introduce a probabilistic model to compute the importance of each feature for graph fusion under a naive Bayesian formulation, which requires statistics of similarities from a manually labeled dataset containing irrelevant images. To reduce human labeling, we further propose a fully unsupervised reranking algorithm based on a submodular objective function that can be efficiently optimized by a greedy algorithm. By maximizing an information gain term over the graph, our submodular function favors a subset of database images that are similar to the query images and resemble each other. The function also exploits the rank relationships of images from multiple ranked lists obtained with different features. We then study a more constrained application, person re-identification, where the database contains labeled images of human bodies captured by multiple cameras. Re-identification across multiple cameras is treated as a set of related tasks to exploit shared information. We apply a novel multi-task learning algorithm using both low-level features and attributes. A low-rank attribute embedding is jointly learned within the multi-task learning formulation to map the original binary attributes into a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered. To locate objects in images, we design an object detector based on object proposals and deep convolutional neural networks (CNNs), in view of the emergence of deep networks. We improve the Fast R-CNN framework and investigate two new strategies to detect objects accurately and efficiently: scale-dependent pooling (SDP) and cascaded rejection classifiers (CRC). SDP improves detection accuracy by exploiting convolutional features appropriate to the scale of each input object proposal. CRC effectively reuses convolutional features and eliminates negative proposals in a cascaded manner while maintaining a high recall for true objects. Together, the two strategies improve detection accuracy and reduce the computational cost.
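A minimal sketch of the graph-fusion reranking idea, assuming the per-feature importance weights have already been estimated: each feature's similarity graph is row-normalized into a transition matrix, the matrices are mixed into a single Markov chain, and a query-restarted random walk scores the database images. This is an illustrative simplification, not the thesis implementation; all names and parameter values are assumptions.

```python
import numpy as np

def fuse_and_rerank(similarity_graphs, weights, query_idx,
                    alpha=0.85, n_iter=100):
    """Rerank database images with a random walk over a fused graph:
    each feature's similarity matrix becomes a row-stochastic transition
    matrix, the matrices are mixed with per-feature weights, and a
    query-restarted walk yields the final ranking scores."""
    P = sum(w * (S / S.sum(axis=1, keepdims=True))
            for w, S in zip(weights, similarity_graphs))
    restart = np.zeros(P.shape[0])
    restart[query_idx] = 1.0
    r = restart.copy()
    for _ in range(n_iter):                    # walk with restart
        r = alpha * (r @ P) + (1 - alpha) * restart
    return np.argsort(-r)                      # best-scoring images first

rng = np.random.default_rng(0)
S1, S2 = rng.random((50, 50)), rng.random((50, 50))   # two feature graphs
print(fuse_and_rerank([S1, S2], weights=[0.6, 0.4], query_idx=0)[:5])
```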
Abstract:
Simulations of droplet dispersion behind cylinder wakes and downstream of icing tunnel spray bars were conducted. In both cases, a range of droplet sizes was investigated numerically with a Lagrangian particle trajectory approach, while the turbulent air flow was computed with a hybrid Reynolds-Averaged Navier-Stokes/Large-Eddy Simulation approach. In the first study, droplets were injected downstream of a cylinder at sub-critical conditions (i.e. with laminar boundary layer separation). A stochastic continuous random walk (CRW) turbulence model was used to capture the effects of sub-grid turbulence. Small-inertia droplets (characterized by small Stokes numbers) were affected by both the large-scale and small-scale vortex structures and closely followed the air flow, exhibiting a dispersion consistent with that of a scalar flow field. Droplets with intermediate Stokes numbers were centrifuged by the vortices to the outer edges of the wake, yielding increased dispersion. Large Stokes number droplets were found to be less responsive to the vortex structures and exhibited the least dispersion. Particle concentration was also correlated with the vorticity distribution, which yielded preferential bias effects as a function of particle size. This trend was qualitatively similar to results seen in homogeneous isotropic turbulence, though the influence of particle inertia was less pronounced for the cylinder wake case. A similar study was completed for droplet dispersion within the Icing Research Tunnel (IRT) at the NASA Glenn Research Center, where it is important to obtain a nearly uniform liquid water content (LWC) distribution in the test section (to recreate atmospheric icing conditions). To this end, droplets are dispersed by the mean and turbulent flow generated by the nozzle air jets, by the upstream spray bars, and by the vertical strut wakes. To understand the influence of these three components, a set of simulations was conducted with their sequential inclusion. First, a jet in an otherwise quiescent airflow was simulated to capture the impact of the air jet on flow turbulence and droplet distribution, and the predictions compared well with experimental results. The effects of the spray bar wake and vertical strut wake were then included in two further simulations, from which it was found that the air jets were the primary driving force for droplet dispersion, i.e. the spray bar and vertical strut wake effects were secondary.
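As an illustration of the CRW idea (not the study's solver), the sketch below advances a droplet's sub-grid fluctuating velocity with the standard discretized Langevin update for homogeneous, isotropic turbulence; drift-correction terms for inhomogeneous turbulence and coupling to the resolved mean flow are omitted, and all parameter values are hypothetical.

```python
import numpy as np

def crw_step(u_fluct, dt, tau_L, sigma_u, rng):
    """One continuous-random-walk (Langevin) update of the sub-grid
    fluctuating velocity seen by a droplet: exponential de-correlation
    over the Lagrangian time scale tau_L plus random forcing that keeps
    the velocity variance at sigma_u**2."""
    a = np.exp(-dt / tau_L)                    # velocity autocorrelation
    b = sigma_u * np.sqrt(1.0 - a * a)         # forcing amplitude
    return a * u_fluct + b * rng.normal(size=u_fluct.shape)

rng = np.random.default_rng(0)
u = np.zeros(3)                                # fluctuating velocity (m/s)
x = np.zeros(3)                                # droplet displacement (m)
dt, tau_L, sigma_u = 1e-3, 0.05, 0.5           # hypothetical flow parameters
for _ in range(1000):
    u = crw_step(u, dt, tau_L, sigma_u, rng)
    x += u * dt                                # displacement from fluctuations
print(x)
```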
Abstract:
The research described in this thesis was motivated by the need for a robust model capable of representing 3D data obtained with 3D sensors, which are inherently noisy. In addition, time constraints have to be considered, as these sensors are capable of providing a 3D data stream in real time. This thesis proposed the use of Self-Organizing Maps (SOMs) as a 3D representation model. In particular, we proposed the use of the Growing Neural Gas (GNG) network, which has been successfully used for clustering, pattern recognition and topology representation of multi-dimensional data. Until now, Self-Organizing Maps have been primarily computed offline, and their application to 3D data has mainly focused on noise-free models, without considering time constraints. A hardware implementation is proposed that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). The proposed methods were applied to different problems and applications in the area of computer vision, such as the recognition and localization of objects, visual surveillance and 3D reconstruction.
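For concreteness, here is a compact CPU-only sketch of the classic Growing Neural Gas algorithm (Fritzke, 1995) that the thesis builds on; it omits the removal of isolated nodes and, of course, the GPGPU parallelization that is the thesis' contribution, and every parameter value is an illustrative assumption.

```python
import numpy as np

def growing_neural_gas(data, max_nodes=100, eps_b=0.05, eps_n=0.006,
                       max_age=50, lam=100, alpha=0.5, decay=0.995):
    """Minimal Growing Neural Gas: learns a graph of reference nodes
    that adapts to the topology of the input point cloud."""
    rng = np.random.default_rng(0)
    nodes = [data[rng.integers(len(data))].copy() for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}                                   # (i, j), i < j -> edge age
    for step, x in enumerate(data, start=1):
        d = [np.sum((x - w) ** 2) for w in nodes]
        s1, s2 = np.argsort(d)[:2]               # two nearest nodes
        error[s1] += d[s1]
        nodes[s1] += eps_b * (x - nodes[s1])     # move winner toward x
        for (i, j) in list(edges):
            if s1 in (i, j):
                edges[(i, j)] += 1               # age the winner's edges
                other = j if i == s1 else i
                nodes[other] += eps_n * (x - nodes[other])
        edges[(min(s1, s2), max(s1, s2))] = 0    # refresh winner-runner edge
        edges = {e: a for e, a in edges.items() if a <= max_age}
        if step % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(error))            # node with largest error
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: error[n])
                nodes.append(0.5 * (nodes[q] + nodes[f]))  # split the edge
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
                r = len(nodes) - 1
                edges.pop((min(q, f), max(q, f)), None)
                edges[(min(q, r), max(q, r))] = 0
                edges[(min(f, r), max(f, r))] = 0
        error = [e * decay for e in error]       # global error decay
    return np.array(nodes), edges

# toy example: noisy 2-D ring
rng = np.random.default_rng(1)
ring = rng.normal(size=(3000, 2))
ring = ring / np.linalg.norm(ring, axis=1, keepdims=True)
ring += 0.05 * rng.normal(size=ring.shape)
nodes, edges = growing_neural_gas(ring)
print(len(nodes), len(edges))   # graph adapted to the ring topology
```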