909 results for Computational complexity.
Abstract:
We address the question of how to communicate among distributed processes values such as real numbers, continuous functions and geometrical solids with arbitrary precision, yet efficiently. We extend the established concept of lazy communication using streams of approximants by introducing explicit queries. We formalise this approach using protocols of a query-answer nature. Such protocols enable processes to provide valid approximations with a certain accuracy, focused on a certain locality, as demanded by the receiving processes through queries. A lattice-theoretic denotational semantics of channel and process behaviour is developed. The query space is modelled as a continuous lattice in which the top element denotes the query demanding all the information, whereas other elements denote queries demanding partial and/or local information. Answers are interpreted as elements of lattices constructed over suitable domains of approximations to the exact objects. An unanswered query is treated as an error and denoted using the top element. The major novel characteristic of our semantic model is that it reflects the dependency of answers on queries. This enables the definition and analysis of an appropriate concept of convergence rate, by assigning an effort indicator to each query and a measure of information content to each answer. Thus we capture not only what function a process computes, but also how a process transforms the convergence rates from its inputs to its outputs. In future work these indicators can be used to capture further computational complexity measures. A robust prototype implementation of our model is available.
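To make the query-answer idea concrete, the following minimal Python sketch treats a query as an accuracy demand (a number of correct bits) and an answer as a validating interval approximation; the process and names are illustrative and are not taken from the paper's prototype.

```python
from fractions import Fraction

def sqrt2_process(query_bits: int) -> tuple[Fraction, Fraction]:
    """Answer a query for sqrt(2) with 2**-query_bits accuracy.

    The query is an accuracy demand; the answer is a validating
    interval [lo, hi] containing the exact value, with
    hi - lo <= 2**-query_bits.
    """
    eps = Fraction(1, 2 ** query_bits)
    lo, hi = Fraction(1), Fraction(2)           # sqrt(2) lies in [1, 2]
    while hi - lo > eps:                        # bisection refines the interval
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

# A downstream process issues increasingly demanding queries and receives
# answers whose interval width measures their information content.
for bits in (4, 8, 16):
    lo, hi = sqrt2_process(bits)
    print(bits, float(lo), float(hi), float(hi - lo))
```

Tighter queries cost more bisection steps (an effort indicator), while the returned interval width plays the role of the answer's information content.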
Abstract:
Few-mode fiber transmission systems are typically impaired by mode-dependent loss (MDL). In an MDL-impaired link, maximum-likelihood (ML) detection yields a significant advantage in system performance compared to linear equalizers, such as zero-forcing and minimum-mean square error equalizers. However, the computational effort of the ML detection increases exponentially with the number of modes and the cardinality of the constellation. We present two methods that allow for near-ML performance without being afflicted with the enormous computational complexity of ML detection: improved reduced-search ML detection and sphere decoding. Both algorithms are tested regarding their performance and computational complexity in simulations of three and six spatial modes with QPSK and 16QAM constellations.
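As a rough illustration of why ML complexity explodes with the number of modes and the constellation cardinality, the sketch below brute-forces all |C|^N candidate symbol vectors for a toy mode-multiplexed channel y = Hx + n and compares it with a zero-forcing equalizer; the channel model and parameters are invented for illustration, and neither the paper's reduced-search detector nor its sphere decoder is shown.

```python
import itertools
import numpy as np

QPSK = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)

def ml_detect(y, H, constellation=QPSK):
    """Exhaustive ML detection: search all |C|**N candidate symbol vectors."""
    N = H.shape[1]
    best, best_metric = None, np.inf
    for cand in itertools.product(constellation, repeat=N):
        x = np.array(cand)
        metric = np.linalg.norm(y - H @ x) ** 2
        if metric < best_metric:
            best, best_metric = x, metric
    return best

def zf_detect(y, H, constellation=QPSK):
    """Zero-forcing: one pseudo-inverse, then per-symbol slicing."""
    x_hat = np.linalg.pinv(H) @ y
    return constellation[np.argmin(np.abs(x_hat[:, None] - constellation), axis=1)]

# Toy MDL-impaired 3-mode channel: unequal singular values model mode-dependent loss.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
H = U @ np.diag([1.0, 0.7, 0.4]) @ V.conj().T
x = QPSK[rng.integers(0, 4, 3)]
y = H @ x + 0.05 * (rng.standard_normal(3) + 1j * rng.standard_normal(3))
print(np.allclose(ml_detect(y, H), x), np.allclose(zf_detect(y, H), x))
```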
Abstract:
Digital back-propagation (DBP) has recently been proposed for the comprehensive compensation of channel nonlinearities in optical communication systems. While DBP is attractive for its flexibility and performance, it poses significant challenges in terms of computational complexity. Alternatively, phase conjugation or spectral inversion has previously been employed to mitigate nonlinear fibre impairments. Though spectral inversion is relatively straightforward to implement in the optical or electrical domain, it requires precise positioning and a symmetrised link power profile in order to deliver its full benefit. In this paper, we directly compare ideal and low-precision single-channel DBP with single-channel spectral inversion, both with and without symmetry correction via dispersive chirping. We demonstrate that for all the dispersion maps studied, spectral inversion approaches the performance of ideal DBP with 40 steps per span and exceeds the performance of electronic dispersion compensation by ~3.5 dB in Q-factor, enabling up to a 96% reduction in complexity in terms of required DBP stages relative to low-precision, one-step-per-span DBP. For maps where quasi-phase matching is a significant issue, spectral inversion significantly outperforms ideal DBP by ~3 dB.
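For orientation, single-channel DBP is essentially a split-step Fourier propagation with inverted fibre parameters, and its cost scales with the number of steps per span quoted above. The sketch below is a generic illustration with assumed parameter values and sign conventions, not the authors' implementation.

```python
import numpy as np

def dbp(field, n_spans, steps_per_span, span_len_km=80.0,
        beta2=-21.7e-27, gamma=1.3e-3, fs=64e9):
    """Single-channel DBP sketch: split-step Fourier with inverted fibre parameters.

    Alternates an (inverse) dispersion half-step in the frequency domain with an
    (inverse) Kerr nonlinearity step in the time domain. Sign conventions and
    parameter values are assumptions for illustration only.
    """
    n = field.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)
    dz = span_len_km * 1e3 / steps_per_span
    disp_half = np.exp(-0.5j * beta2 * omega**2 * (dz / 2))    # inverse dispersion, half step
    out = field.copy()
    for _ in range(n_spans * steps_per_span):
        out = np.fft.ifft(disp_half * np.fft.fft(out))
        out = out * np.exp(-1j * gamma * np.abs(out)**2 * dz)  # inverse Kerr phase
        out = np.fft.ifft(disp_half * np.fft.fft(out))
    return out

# Complexity grows linearly with steps_per_span, which is the trade-off the
# abstract weighs against spectral inversion.
rx = 0.03 * np.exp(1j * np.random.default_rng(1).standard_normal(1024))
tx_estimate = dbp(rx, n_spans=10, steps_per_span=4)
```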
Abstract:
This thesis considers sparse approximation of still images as the basis of a lossy compression system. The Matching Pursuit (MP) algorithm is presented as a method particularly suited for application in lossy scalable image coding. Its multichannel extension, capable of exploiting inter-channel correlations, is found to be an efficient way to represent colour data in RGB colour space. Known problems with MP, namely the high computational complexity of encoding and dictionary design, are tackled by finding an appropriate partitioning of an image. The idea of performing MP in the spatio-frequency domain after a transform such as the Discrete Wavelet Transform (DWT) is explored. The main challenge, though, is to encode the image representation obtained after MP into a bit-stream. Novel approaches for encoding the atomic decomposition of a signal and for quantising colour amplitudes are proposed and evaluated. The image codec that has been built is capable of competing with scalable coders such as JPEG 2000 and SPIHT in terms of compression ratio.
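A minimal sketch of the core Matching Pursuit loop, for readers unfamiliar with it: at each iteration the atom most correlated with the current residual is selected and its contribution subtracted. The random dictionary and signal here are purely illustrative.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP: at each step pick the atom best correlated with the residual."""
    residual = signal.astype(float).copy()
    decomposition = []                        # list of (atom index, amplitude)
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        amplitude = correlations[k]
        decomposition.append((k, amplitude))
        residual = residual - amplitude * dictionary[:, k]
    return decomposition, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
x = 3.0 * D[:, 10] - 2.0 * D[:, 100]          # sparse synthetic signal
atoms, res = matching_pursuit(x, D, n_atoms=5)
print(atoms[:2], np.linalg.norm(res))
```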
Abstract:
We introduce a flexible visual data mining framework which combines advanced projection algorithms from the machine learning domain and visual techniques developed in the information visualization domain. The advantage of such an interface is that the user is directly involved in the data mining process. We integrate principled projection algorithms, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates and billboarding, to provide a visual data mining framework. Results on a real-life chemoinformatics dataset using GTM are promising and have been analytically compared with the results from the traditional projection methods. It is also shown that the HGTM algorithm provides additional value for large datasets. The computational complexity of these algorithms is discussed to demonstrate their suitability for the visual data mining framework. Copyright 2006 ACM.
Abstract:
A number of critical issues for dual-polarization single- and multi-band optical orthogonal frequency-division multiplexing (DP-SB/MB-OFDM) signals are analyzed in dispersion compensation fiber (DCF)-free long-haul links. For the first time, different DP crosstalk removal techniques are compared, the maximum transmission reach is investigated, and the impact of the number of subcarriers and of high-level modulation formats is explored thoroughly. It is shown that, for a bit-error rate (BER) of 10⁻³, 2000 km of quaternary phase-shift keying (QPSK) DP-MB-OFDM transmission is feasible. At high launched optical powers (LOP), maximum-likelihood decoding can extend the LOP of 40 Gb/s QPSK DP-SB-OFDM at 2000 km by 1.5 dB compared to zero-forcing. For a 100 Gb/s DP-MB-OFDM system, a high number of subcarriers contributes to an improved BER but at the cost of digital signal processing computational complexity, whilst by adapting the cyclic prefix length the BER can be improved for a low number of subcarriers. In addition, when 16-quadrature amplitude modulation (16QAM) is employed the digital-to-analogue/analogue-to-digital converter (DAC/ADC) bandwidth is relaxed at the cost of a degraded BER, while the 'circular' 8QAM is slightly superior to its 'rectangular' form. Finally, the transmission of wavelength-division multiplexed DP-MB-OFDM and single-carrier DP-QPSK is experimentally compared at up to 500 Gb/s, showing great potential and similar performance over a 1000 km DCF-free G.652 line. © 2014 Optical Society of America.
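As a schematic of one of the DP crosstalk removal techniques being compared, the sketch below applies per-subcarrier zero-forcing, i.e. inverting an estimated 2×2 polarisation channel on every subcarrier (ML decoding would instead search the constellation, as in the abstract's comparison). The channel and symbols are toy values, not the paper's system model.

```python
import numpy as np

def zf_equalise(rx, H_est):
    """Zero-forcing DP crosstalk removal: one 2x2 inversion per subcarrier.

    rx:     array of shape (n_subcarriers, 2) - received x/y polarisations
    H_est:  array of shape (n_subcarriers, 2, 2) - estimated channel per subcarrier
    """
    return np.einsum('kij,kj->ki', np.linalg.inv(H_est), rx)

# Toy example: 4 subcarriers, a fixed polarisation-mixing channel.
rng = np.random.default_rng(3)
qpsk = (rng.integers(0, 2, (4, 2)) * 2 - 1
        + 1j * (rng.integers(0, 2, (4, 2)) * 2 - 1)) / np.sqrt(2)
theta = 0.3
H = np.broadcast_to(np.array([[np.cos(theta), np.sin(theta)],
                              [-np.sin(theta), np.cos(theta)]]), (4, 2, 2))
rx = np.einsum('kij,kj->ki', H, qpsk)
print(np.allclose(zf_equalise(rx, H), qpsk))
```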
Abstract:
An improved digital backward propagation (DBP) is proposed to jointly compensate inter-channel nonlinear effects and dispersion in WDM systems, based on an advanced perturbation technique (APT). A non-iterative weighted scheme is presented to replace the iterative one in the analytical recursion expression, which dramatically simplifies the complexity and improves accuracy compared to the traditional perturbation technique (TPT). Furthermore, an analytical recursion expression for the output after backward propagation is obtained. Numerical simulations are carried out for various parameters of the transmission system. The results indicate that the advanced perturbation technique relaxes the step-size requirements and reduces the oversampling factor when the launch power is higher than -2 dBm. We estimate that this technique reduces computational complexity by a factor of around seven with respect to conventional DBP. © 2013 Optical Society of America.
Abstract:
This paper presents an up-to-date review of digital watermarking (WM) from a VLSI designer's point of view. The reader is introduced to basic principles and terms in the field of image watermarking. The paper provides a brief survey of WM theory, laying out common classification criteria and discussing important design considerations and trade-offs. Elementary WM properties such as robustness and computational complexity, and their influence on image quality, are discussed. Common attacks and testing benchmarks are also briefly mentioned. It is shown that WM design must take the intended application into account. The difference between software and hardware implementations is explained through the introduction of a general scheme of a WM system and two examples from previous works. A versatile methodology to aid in a reliable and modular design process is suggested. Relating to mixed-signal VLSI design and testing, the proposed methodology allows an efficient development of a CMOS image sensor with WM capabilities.
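As a generic illustration of a low-complexity spatial-domain watermark (not a scheme from the paper under review), the sketch below embeds and recovers a bit sequence in the least significant bit plane of an image block; such schemes are computationally cheap but fragile against the attacks the paper discusses.

```python
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed watermark bits into the least significant bit of the first pixels."""
    flat = image.flatten().astype(np.uint8)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits.astype(np.uint8)
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the embedded bits from the least significant bit plane."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
host = rng.integers(0, 256, (8, 8), dtype=np.uint8)   # toy 8x8 image block
mark = rng.integers(0, 2, 16, dtype=np.uint8)         # 16 watermark bits
watermarked = embed_lsb(host, mark)
assert np.array_equal(extract_lsb(watermarked, 16), mark)
```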
Abstract:
This paper is part of a work in progress whose goal is to construct a fast, practical algorithm for the vertex separation (VS) of cactus graphs. We prove a "main theorem for cacti", a necessary and sufficient condition for the VS of a cactus graph being k. Further, we investigate the ensuing ramifications that prevent the construction of an algorithm based on that theorem only.
Abstract:
We investigate the NP-complete problem Vertex Separation (VS) on Maximal Outerplanar Graphs (mops). We formulate and prove a “main theorem for mops”, a necessary and sufficient condition for the vertex separation of a mop being k. The main theorem reduces the vertex separation of mops to a special kind of stretchability, one that we call affixability, of submops.
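For readers new to the parameter: the vertex separation of a linear layout is the maximum, over cut positions, of the number of vertices to the left of the cut that still have a neighbour to its right, and the VS of a graph is the minimum over all layouts. The brute-force sketch below (exponential in the number of vertices, hence usable only on tiny graphs) illustrates the definition that the two papers above characterise for cacti and mops.

```python
from itertools import permutations

def vertex_separation(vertices, edges):
    """Brute-force vertex separation: minimise the layout cost over all orderings."""
    def cost(layout):
        pos = {v: i for i, v in enumerate(layout)}
        worst = 0
        for i in range(len(layout) - 1):
            # vertices at positions <= i that still have a neighbour at a position > i
            boundary = {u for u, w in edges if pos[u] <= i < pos[w]}
            boundary |= {w for u, w in edges if pos[w] <= i < pos[u]}
            worst = max(worst, len(boundary))
        return worst
    return min(cost(p) for p in permutations(vertices))

# A small cactus: a triangle sharing a vertex with a 4-cycle.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 2)]
print(vertex_separation(range(6), edges))   # expected: 2
```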
Abstract:
In this paper, an ontogenic artificial neural network (ANN) is proposed. The network uses orthogonal activation functions that allow a significant reduction in computational complexity. Another advantage is numerical stability, because the system of activation functions is linearly independent by definition. A learning procedure for the proposed ANN with guaranteed convergence to the global minimum of the error function in the parameter space is developed. An algorithm for network structure adaptation is also proposed. The algorithm allows adding or deleting a node in real time without retraining the network. Simulation results confirm the efficiency of the proposed approach.
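A hedged sketch of the flavour of such a network: hidden units given by an orthogonal polynomial basis (Chebyshev polynomials here, evaluated at Chebyshev nodes so the design columns are near-orthogonal), output weights obtained by least squares, and structure adaptation realised by adding a basis function and refitting. The basis choice and training scheme are assumptions for illustration, not the authors' exact construction.

```python
import numpy as np
from numpy.polynomial import chebyshev

def design_matrix(x, degree):
    """Hidden layer: Chebyshev polynomials T_0..T_degree as activation functions."""
    return chebyshev.chebvander(x, degree)

def fit(x, y, degree):
    """Output weights via least squares; unique because the basis is linearly independent."""
    Phi = design_matrix(x, degree)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def grow(x, y, w_old):
    """'Ontogenic' step: add one node (one more basis function) and refit."""
    return fit(x, y, degree=w_old.size)         # degree grows by one

x = np.cos(np.pi * (np.arange(64) + 0.5) / 64)  # Chebyshev nodes: near-orthogonal design
y = np.sin(3 * x) + 0.5 * x**2
w = fit(x, y, degree=4)
w = grow(x, y, w)                               # structure adaptation
print(np.abs(design_matrix(x, w.size - 1) @ w - y).max())
```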
Abstract:
In this paper, we use the quantum Jensen-Shannon divergence as a means of measuring the information theoretic dissimilarity of graphs and thus develop a novel graph kernel. In quantum mechanics, the quantum Jensen-Shannon divergence can be used to measure the dissimilarity of quantum systems specified in terms of their density matrices. We commence by computing the density matrix associated with a continuous-time quantum walk over each graph being compared. In particular, we adopt the closed form solution of the density matrix introduced in Rossi et al. (2013) [27,28] to reduce the computational complexity and to avoid the cumbersome task of simulating the quantum walk evolution explicitly. Next, we compare the mixed states represented by the density matrices using the quantum Jensen-Shannon divergence. With the quantum states for a pair of graphs described by their density matrices to hand, the quantum graph kernel between the pair of graphs is defined using the quantum Jensen-Shannon divergence between the graph density matrices. We evaluate the performance of our kernel on several standard graph datasets from both bioinformatics and computer vision. The experimental results demonstrate the effectiveness of the proposed quantum graph kernel.
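The outer structure of such a kernel can be sketched as follows: form a density matrix per graph, compute von Neumann entropies, take the quantum Jensen-Shannon divergence, and map it to a kernel value. In the sketch below the density matrix is a simple stand-in (the trace-normalised graph Laplacian) rather than the closed-form quantum-walk density matrix of Rossi et al., the divergence-to-kernel mapping is one common choice, and the toy graphs are of equal size so the density matrices live on the same space.

```python
import numpy as np

def density_matrix(adjacency):
    """Stand-in density matrix: trace-normalised graph Laplacian (PSD, unit trace).

    The paper instead uses a closed-form density matrix of a continuous-time
    quantum walk; this proxy only illustrates the pipeline.
    """
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return L / np.trace(L)

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lambda_i log lambda_i over the nonzero eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log(lam)).sum())

def qjsd_kernel(A1, A2):
    """Kernel value from the quantum Jensen-Shannon divergence between two graphs."""
    rho, sigma = density_matrix(A1), density_matrix(A2)
    qjsd = von_neumann_entropy((rho + sigma) / 2) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma))
    return np.exp(-qjsd)        # one common way to turn a divergence into a kernel

path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
triangle = np.ones((3, 3)) - np.eye(3)
print(qjsd_kernel(path, path), qjsd_kernel(path, triangle))
```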
Abstract:
2002 Mathematics Subject Classification: 65C05.
Abstract:
A number of recent studies have investigated the introduction of decoherence in quantum walks and the resulting transition to classical random walks. Interestingly, it has been shown that algorithmic properties of quantum walks with decoherence, such as the spreading rate, are sometimes better than those of their purely quantum counterparts. Not only do quantum walks with decoherence provide a generalization of quantum walks that naturally encompasses both the quantum and the classical case, but they also give rise to new and different probability distributions. The application of quantum walks with decoherence to large graphs is limited by the necessity of evolving a state vector whose size is quadratic in the number of nodes of the graph, as opposed to the linear state vector of the purely quantum (or classical) case. In this technical report, we show how to use perturbation theory to reduce the computational complexity of evolving a continuous-time quantum walk subject to decoherence. More specifically, given a graph over n nodes, we show how to approximate the eigendecomposition of the n²×n² Lindblad super-operator from the eigendecomposition of the n×n graph Hamiltonian.
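The structural fact behind such an approach can be checked numerically: the purely unitary part of the super-operator, acting on vec(ρ), is built from Kronecker products of the n×n Hamiltonian, so its n² eigenpairs follow directly from the n eigenpairs of H; the decoherence terms are then what the perturbation theory has to handle (not shown here). The sketch below assumes a row-major vec convention and an adjacency-matrix Hamiltonian for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Hermitian "graph Hamiltonian": here simply a random symmetric adjacency matrix.
A = rng.integers(0, 2, (n, n))
H = np.triu(A, 1)
H = H + H.T

lam, V = np.linalg.eigh(H)                    # n x n eigendecomposition

# Unitary part of the Lindblad super-operator acting on vec(rho) (row-major):
#   d vec(rho)/dt = -1j * (H (x) I  -  I (x) H^T) vec(rho)
L0 = -1j * (np.kron(H, np.eye(n)) - np.kron(np.eye(n), H.T))

# Its n^2 eigenpairs come for free from the n eigenpairs of H:
# eigenvalue -1j*(lam_i - lam_j), eigenvector v_i (x) conj(v_j).
i, j = 2, 1
w = np.kron(V[:, i], V[:, j].conj())
assert np.allclose(L0 @ w, -1j * (lam[i] - lam[j]) * w)
```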
Abstract:
The increase in renewable energy generators introduced into the electricity grid is putting pressure on its stability and management, as predictions of renewable energy sources cannot be accurate or fully controlled. This, together with the additional pressure of fluctuations in demand, presents a problem more complex than the one the current methods of controlling electricity distribution were designed for. A global, approximate and distributed optimisation method for power allocation that accommodates uncertainties and volatility is suggested and analysed. It is based on a probabilistic method known as message passing [1], which has deep links to statistical physics methodology. This principled method of optimisation is based on local calculations and inherently accommodates uncertainties; it is of modest computational complexity and provides good approximate solutions. We consider uncertainty and fluctuations drawn from a Gaussian distribution and incorporate them into the message-passing algorithm. We examine the effect that increasing uncertainty has on the transmission cost and how the placement of volatile nodes within a grid, such as renewable generators or consumers, affects it.
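To give the flavour of a local, message-passing computation under Gaussian fluctuations (a schematic stand-in, not the algorithm of [1]), the sketch below runs min-sum message passing along a chain of nodes with discrete power levels, quadratic generation and transmission costs, and Gaussian-perturbed demands; on a tree-structured network this forward-backward pass is exact.

```python
import numpy as np

rng = np.random.default_rng(0)
levels = np.linspace(0.0, 2.0, 9)                    # discrete power injections per node
n_nodes = 5
demand = 1.0 + 0.2 * rng.standard_normal(n_nodes)    # Gaussian demand fluctuations

def local_cost(node, p):
    """Generation cost plus penalty for not meeting the (noisy) local demand."""
    return 0.5 * p**2 + 2.0 * (p - demand[node])**2

def pair_cost(p_a, p_b):
    """Transmission cost grows with the imbalance shipped between neighbours."""
    return 0.1 * (p_a - p_b)**2

# Forward min-sum sweep along the chain.
msg = np.zeros(levels.size)
back = []
for node in range(1, n_nodes):
    table = (msg[:, None] + local_cost(node - 1, levels)[:, None]
             + pair_cost(levels[:, None], levels[None, :]))
    back.append(np.argmin(table, axis=0))
    msg = np.min(table, axis=0)

# Backward pass recovers the optimal allocation for this toy chain.
alloc = np.empty(n_nodes, dtype=int)
alloc[-1] = int(np.argmin(msg + local_cost(n_nodes - 1, levels)))
for node in range(n_nodes - 2, -1, -1):
    alloc[node] = back[node][alloc[node + 1]]
print(levels[alloc])
```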