909 results for discrete kernel
Abstract:
Asymmetric discrete triangular distributions are introduced in order to extend the symmetric ones, which serve as discrete associated kernels in the nonparametric estimation of discrete functions. The extension from one to two orders around the mode provides a large family of discrete distributions having finite support. Establishing a bridge between the Dirac and discrete uniform distributions, various different shapes are also obtained and their properties are investigated. In particular, the mean and variance are worked out. Applications to discrete kernel estimators are given, with a solution to a boundary bias problem. (C) 2010 Elsevier B.V. All rights reserved.
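A minimal sketch of the kind of discrete kernel estimator this abstract refers to, assuming the symmetric discrete triangular kernel of order a = 1 (the function names and the choice of order are illustrative, not taken from the paper):

```python
import numpy as np

def triangular_kernel(x, y, h):
    # Symmetric discrete triangular kernel centered at x with arm h
    # (order a = 1): weight proportional to (h + 1) - |y - x| on the
    # integer support {x - h, ..., x + h}, zero elsewhere.
    w = np.maximum((h + 1) - np.abs(np.asarray(y) - x), 0)
    return w / (h + 1) ** 2  # normalizing constant for a = 1

def discrete_kernel_estimate(x, sample, h):
    # Nonparametric estimate of the probability mass at the point x:
    # average of the kernel weights the sample places around x.
    return float(np.mean(triangular_kernel(x, sample, h)))
```

Summed over all integer points x, these estimates add up to one, since each symmetric kernel itself sums to one over its support.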
Abstract:
A neighbourhood assignment in a space X is a family O = {O-x: x is an element of X} of open subsets of X such that x is an element of O-x for any x is an element of X. A set Y subset of X is a kernel of O if O(Y) = U{O-x: x is an element of Y} = X. We obtain some new results concerning dually discrete spaces, these being the spaces for which every neighbourhood assignment has a discrete kernel. This is a strictly larger class than the class of D-spaces of [E.K. van Douwen, W.F. Pfeffer, Some properties of the Sorgenfrey line and related spaces, Pacific J. Math. 81 (2) (1979) 371-377]. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
In this paper, we propose a new edge-based matching kernel for graphs by using discrete-time quantum walks. To this end, we commence by transforming a graph into a directed line graph. The reasons for using the line graph structure are twofold. First, for a graph, its directed line graph is a dual representation, and each vertex of the line graph represents a corresponding edge in the original graph. Second, we show that the discrete-time quantum walk can be seen as a walk on the line graph, and the state space of the walk is the vertex set of the line graph, i.e., the state space of the walk is the edge set of the original graph. As a result, the directed line graph provides an elegant way of developing a new edge-based matching kernel based on discrete-time quantum walks. For a pair of graphs, we compute the h-layer depth-based representation for each vertex of their directed line graphs by computing entropic signatures (computed from discrete-time quantum walks on the line graphs) on the family of K-layer expansion subgraphs rooted at the vertex, i.e., we compute the depth-based representations for edges of the original graphs through their directed line graphs. Based on the new representations, we define an edge-based matching method for the pair of graphs by aligning the h-layer depth-based representations computed through the directed line graphs. The new edge-based matching kernel is then computed by counting the number of matched vertices identified by the matching method on the directed line graphs. Experiments on standard graph datasets demonstrate the effectiveness of our new kernel.
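The first step of the construction described above, turning a graph into its directed line graph so that a walk on edges becomes a walk on vertices, can be sketched as follows (a toy sketch; `directed_line_graph` is an illustrative name, and backtracking arcs are not filtered out):

```python
def directed_line_graph(edges):
    # Each directed edge of the original graph becomes a vertex of the
    # line graph; an arc joins (u, v) to (x, w) whenever the head of
    # the first edge is the tail of the second (v == x).
    vertices = list(edges)
    arcs = [((u, v), (x, w))
            for (u, v) in edges
            for (x, w) in edges
            if v == x]
    return vertices, arcs
```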
Abstract:
In this paper, we develop a new graph kernel by using the quantum Jensen-Shannon divergence and the discrete-time quantum walk. To this end, we commence by performing a discrete-time quantum walk to compute a density matrix over each graph being compared. For a pair of graphs, we compare the mixed quantum states represented by their density matrices using the quantum Jensen-Shannon divergence. With the density matrices for a pair of graphs to hand, the quantum graph kernel between the pair of graphs is defined by exponentiating the negative quantum Jensen-Shannon divergence between the graph density matrices. We evaluate the performance of our kernel on several standard graph datasets, and demonstrate the effectiveness of the new kernel.
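The kernel construction in this abstract reduces to a short computation once the density matrices are in hand. A sketch (the discrete-time quantum walk that produces the density matrices is omitted, and the example states in the usage note are illustrative):

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -sum_i lambda_i * log2(lambda_i) over nonzero eigenvalues.
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-np.sum(vals * np.log2(vals)))

def qjsd(rho, sigma):
    # Quantum Jensen-Shannon divergence between two density matrices.
    mix = 0.5 * (rho + sigma)
    return von_neumann_entropy(mix) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma))

def qjsd_kernel(rho, sigma):
    # The graph kernel: exponentiate the negative divergence.
    return float(np.exp(-qjsd(rho, sigma)))
```

For two identical states the divergence is zero and the kernel value is 1; for orthogonal pure states the divergence is 1 bit and the kernel value is exp(-1).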
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy values (estimated via cross-validation) delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions to be taken, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
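The two kernel functions whose radius parameter the study sweeps, the Gaussian and exponential radial basis functions, can be written down directly (a sketch; the exact parametrization used in the paper may differ):

```python
import numpy as np

def gaussian_rbf(x, y, radius):
    # Gaussian radial basis function: exp(-||x - y||^2 / (2 r^2)).
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * radius ** 2)))

def exponential_rbf(x, y, radius):
    # Exponential radial basis function: exp(-||x - y|| / (2 r^2)).
    return float(np.exp(-np.sqrt(np.sum((x - y) ** 2)) / (2.0 * radius ** 2)))

def kernel_matrix(X, kernel, radius):
    # Gram matrix over a list of feature vectors, as consumed by any
    # of the SVM variants compared in the study.
    n = len(X)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = kernel(X[i], X[j], radius)
    return K
```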
Abstract:
In this paper we consider the existence of the maximal and mean-square stabilizing solutions for a set of generalized coupled algebraic Riccati equations (GCARE for short) associated with the infinite-horizon stochastic optimal control problem of discrete-time Markov jump linear systems with multiplicative noise. The weighting matrices of the state and control for the quadratic part are allowed to be indefinite. We present a sufficient condition, based only on some positive semi-definite and kernel restrictions on some matrices, under which the maximal solution exists, and a necessary and sufficient condition under which the mean-square stabilizing solution exists for the GCARE. We also present a solution for the discounted and long-run average cost problems when the performance criterion is assumed to be composed of a linear combination of an indefinite quadratic part and a linear part in the state and control variables. The paper concludes with a numerical example for a pension fund with regime switching.
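In the simplest, positive semi-definite case, a set of coupled Riccati equations of this kind can be solved by fixed-point iteration. A sketch (illustrative only: positive definite weights are assumed rather than the indefinite setting the paper treats, and all names are ours):

```python
import numpy as np

def coupled_riccati_iteration(A, B, Q, R, P, n_iter=200):
    # Fixed-point iteration for coupled algebraic Riccati equations of
    # the kind arising in Markov jump linear systems: mode i sees the
    # transition-weighted average E_i(X) = sum_j P[i][j] X_j.
    N = len(A)
    X = [np.zeros_like(Q[i]) for i in range(N)]
    for _ in range(n_iter):
        E = [sum(P[i][j] * X[j] for j in range(N)) for i in range(N)]
        X = [Q[i] + A[i].T @ E[i] @ A[i]
             - A[i].T @ E[i] @ B[i]
             @ np.linalg.inv(R[i] + B[i].T @ E[i] @ B[i])
             @ B[i].T @ E[i] @ A[i]
             for i in range(N)]
    return X
```

With identical scalar modes, the iteration reduces to the standard discrete algebraic Riccati recursion and converges to its stabilizing root.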
Abstract:
In this paper, we study several tests for the equality of two unknown distributions. Two are based on empirical distribution functions, three others on nonparametric probability density estimates, and the last ones on differences between sample moments. We suggest controlling the size of such tests (under nonparametric assumptions) by using permutational versions of the tests jointly with the method of Monte Carlo tests, properly adjusted to deal with discrete distributions. We also propose a combined test procedure whose level is again perfectly controlled through the Monte Carlo test technique and which has better power properties than the individual tests being combined. Finally, in a simulation experiment, we show that the suggested technique provides perfect control of test size and that the new tests proposed can yield sizeable power improvements.
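The permutational Monte Carlo construction described above can be sketched for a single statistic (a minimal sketch; `abs_mean_diff` stands in for any one of the statistics such a battery of tests would use):

```python
import random

def mc_permutation_test(x, y, stat, n_perm=999, seed=0):
    # Monte Carlo permutation test for the equality of two distributions.
    # The p-value uses the (1 + count) / (n_perm + 1) form, which keeps
    # the test's size controlled under the permutation null even with
    # discrete data and ties.
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    observed = stat(x, y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # a fresh random relabelling of the pool
        if stat(pooled[:len(x)], pooled[len(x):]) >= observed:
            count += 1
    return (1 + count) / (n_perm + 1)

def abs_mean_diff(a, b):
    # Example statistic: absolute difference between sample means.
    return abs(sum(a) / len(a) - sum(b) / len(b))
```

Counting the observed statistic among the permuted ones (the "+1" in numerator and denominator) is what makes the Monte Carlo test exact at conventional levels.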
Abstract:
A particle filter method is presented for the discrete-time filtering problem with nonlinear Itô stochastic ordinary differential equations (SODEs) with additive noise, supposed to be analytically integrable as a function of the underlying vector Wiener process and time. The Diffusion Kernel Filter is arrived at by a parametrization of small noise-driven state fluctuations within branches of prediction and a local use of this parametrization in the Bootstrap Filter. The method applies for small noise and short prediction steps. With explicit numerical integrators, the operation count in the Diffusion Kernel Filter is shown to be smaller than in the Bootstrap Filter whenever the initial state for the prediction step has sufficiently few moments. The established parametrization is a dual formula for the analysis of sensitivity to Gaussian initial perturbations and the analysis of sensitivity to noise perturbations in deterministic models, showing in particular how the stability of a deterministic dynamics is modeled by noise on short times and how the diffusion matrix of an SODE should be modeled (i.e., defined) for a Gaussian-initial deterministic problem to be cast into an SODE problem. From it, a novel definition of prediction may be proposed that coincides with the deterministic path within the branch of prediction whose information entropy at the end of the prediction step is closest to the average information entropy over all branches. Tests are made with the Lorenz-63 equations, showing good results both for the filter and for the definition of prediction.
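The Bootstrap Filter that the Diffusion Kernel Filter refines is itself a short algorithm; one predict-update-resample cycle can be sketched as follows (illustrative names; the transition and likelihood functions are supplied by the model):

```python
import math
import random

def bootstrap_filter_step(particles, transition, likelihood, obs, rng):
    # One predict/update/resample cycle of the Bootstrap Filter, the
    # baseline against which the Diffusion Kernel Filter is compared.
    predicted = [transition(p, rng) for p in particles]      # predict
    weights = [likelihood(obs, p) for p in predicted]        # update
    total = sum(weights)
    weights = [w / total for w in weights]
    # Multinomial resampling proportional to the importance weights.
    return rng.choices(predicted, weights=weights, k=len(particles))
```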
Abstract:
To understand our project, one must first understand DEVS. DEVS is among the most popular formalisms for representing discrete-event systems (DES). In the 1970s, the mathematician Bernard Zeigler proposed a general formalism for the representation of such systems. This formalism, called DEVS (Discrete EVent System Specification), is the most general formalism for dealing with DES. DEVS makes it possible to represent any system whose behaviour can be described by a sequence of discrete events. These events are characterized by a time base over which only a finite number of events can occur. DEVS Modeling and Simulation has multiple implementations in several programming languages, such as Java, C#, and C++. However, there is a need for a stable distributed platform that provides interoperability mechanics and integrates diverse DEVS models. In this project, we are given as a code base the xDEVS core in Java, implemented both sequentially and in parallel. Our task is to implement the core in a distributed fashion, so that a DEVS system can be partitioned across several machines. To this end we used Java sockets to make the data transmission as efficient as possible. First, the number of machines that will connect to the server must be specified. Once they have connected, each is sent the specific work it must simulate. Note that there are two ways of partitioning a DEVS system, and both are implemented in our project. The first is to split it into atomic modules, which are the indivisible subsystems of a DEVS system. The second is to divide the functions of all subsystems into groups and distribute them among the machines.
In summary, our distributed system works as follows: it begins by executing the work assigned to the first client; once that client finishes, it updates the information on the server, which then sends the order to the next client, and so on.
Abstract:
In this paper, we develop a new family of graph kernels where the graph structure is probed by means of a discrete-time quantum walk. Given a pair of graphs, we let a quantum walk evolve on each graph and compute a density matrix with each walk. With the density matrices for the pair of graphs to hand, the kernel between the graphs is defined as the negative exponential of the quantum Jensen–Shannon divergence between their density matrices. In order to cope with large graph structures, we propose to construct a sparser version of the original graphs using the simplification method introduced in Qiu and Hancock (2007). To this end, we compute the minimum spanning tree over the commute time matrix of a graph. This spanning tree representation minimizes the number of edges of the original graph while preserving most of its structural information. The kernel between two graphs is then computed on their respective minimum spanning trees. We evaluate the performance of the proposed kernels on several standard graph datasets and we demonstrate their effectiveness and efficiency.
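The sparsification step described above, a minimum spanning tree over the commute-time matrix, can be sketched with Prim's algorithm on a dense distance matrix (the commute-time computation itself is omitted):

```python
import numpy as np

def minimum_spanning_tree(dist):
    # Prim's algorithm on a dense symmetric distance matrix, here
    # standing in for the commute-time matrix of a graph.
    n = dist.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                if v not in in_tree and (best is None or dist[u, v] < best[2]):
                    best = (u, v, dist[u, v])
        edges.append((best[0], best[1]))
        in_tree.add(best[1])
    return edges
```

The resulting n - 1 edges keep every vertex connected while discarding the bulk of the original edge set, which is what makes the kernel computation tractable on large graphs.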
Abstract:
This paper proposes a new design methodology for discrete multi-pumped Raman amplifiers. In a multi-objective optimization scenario, the whole solution space is first inspected by a CW analytical formulation. Then, the most promising solutions are fully investigated by a rigorous numerical treatment, and the Raman amplification performance is thus determined by the combination of analytical and numerical approaches. As an application of our methodology, we designed a photonic crystal fiber Raman amplifier configuration which provides low ripple, high gain, a clear eye opening, and a low power penalty. The amplifier configuration also fully compensates the dispersion introduced by a 70-km single-mode fiber in a 10 Gbit/s system. We have successfully obtained a configuration with 8.5 dB average gain over the C-band and 0.71 dB ripple, with almost zero eye penalty, using only two pump lasers with relatively low pump power. (C) 2009 Optical Society of America
Abstract:
The main goal of this paper is to establish some equivalence results on stability, recurrence, and ergodicity between a piecewise deterministic Markov process (PDMP) {X(t)} and an embedded discrete-time Markov chain {Theta(n)} generated by a Markov kernel G that can be explicitly characterized in terms of the three local characteristics of the PDMP, leading to tractable criterion results. First we establish some important results characterizing {Theta(n)} as a sampling of the PDMP {X(t)} and deriving a connection between the probability of the first return time to a set for the discrete-time Markov chains generated by G and the resolvent kernel R of the PDMP. From these results we obtain equivalence results regarding irreducibility, existence of sigma-finite invariant measures, and (positive) recurrence and (positive) Harris recurrence between {X(t)} and {Theta(n)}, generalizing the results of [F. Dufour and O.L.V. Costa, SIAM J. Control Optim., 37 (1999), pp. 1483-1502] in several directions. Sufficient conditions in terms of a modified Foster-Lyapunov criterion are also presented to ensure positive Harris recurrence and ergodicity of the PDMP. We illustrate the use of these conditions by showing the ergodicity of a capacity expansion model.
Abstract:
Over the last couple of decades, many methods for synchronizing chaotic systems have been proposed with communications applications in view. Yet their performance has proved disappointing in the face of the nonideal character of the usual channels linking transmitter and receiver, that is, due to both noise and signal propagation distortion. Here we consider a discrete-time master-slave system that synchronizes despite channel bandwidth limitations, and an allied communication system. Synchronization is achieved by introducing a digital filter that limits the spectral content of the feedback loop responsible for producing the transmitted signal. Copyright (C) 2009 Marcio Eisencraft et al.
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with that of the already known Baldi-Chauvin Algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves in simplified situations for these algorithms and compare their performance.
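The generalization measure used above, the Kullback-Leibler divergence between discrete distributions, is a one-line computation (a sketch; natural-log units assumed):

```python
import math

def kl_divergence(p, q):
    # D(p || q) = sum_i p_i * log(p_i / q_i), with the convention
    # 0 * log(0 / q) = 0; assumes q_i > 0 wherever p_i > 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The divergence is zero exactly when the two distributions coincide, which is what makes it a natural measure of how far a learned model is from the true one.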