965 results for "Quantum computational complexity"


Relevance: 80.00%

Abstract:

Digital back-propagation (DBP) has recently been proposed for the comprehensive compensation of channel nonlinearities in optical communication systems. While DBP is attractive for its flexibility and performance, it poses significant challenges in terms of computational complexity. Alternatively, phase conjugation or spectral inversion has previously been employed to mitigate nonlinear fibre impairments. Though spectral inversion is relatively straightforward to implement in the optical or electrical domain, it requires precise positioning and a symmetrised link power profile in order to obtain the full benefit. In this paper, we directly compare ideal and low-precision single-channel DBP with single-channel spectral inversion, both with and without symmetry correction via dispersive chirping. We demonstrate that for all the dispersion maps studied, spectral inversion approaches the performance of ideal DBP with 40 steps per span and exceeds the performance of electronic dispersion compensation by ~3.5 dB in Q-factor, enabling up to a 96% reduction in complexity in terms of required DBP stages, relative to low-precision, one-step-per-span DBP. For maps where quasi-phase matching is a significant issue, spectral inversion significantly outperforms ideal DBP by ~3 dB.
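To give a feel for what a "DBP step" costs computationally, the following is a minimal single-channel split-step sketch: each step is two FFTs for the inverted dispersion plus a pointwise inverted Kerr phase rotation. The function name and all fibre parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dbp_single_channel(rx_field, n_spans, steps_per_span, span_len_km,
                       beta2=-21.7e-27, gamma=1.3e-3, fs=32e9):
    """Sketch of single-channel digital back-propagation.

    Inverts fibre dispersion (beta2, s^2/m) and Kerr nonlinearity
    (gamma, 1/(W*m)) with a symmetric split-step loop; parameter
    values are only illustrative defaults.
    """
    n = rx_field.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)   # angular-frequency grid
    dz = span_len_km * 1e3 / steps_per_span          # step size in metres
    # Linear operator for half a step, with beta2 negated (back-propagation)
    half_disp = np.exp(0.5j * (-beta2) * w**2 * dz / 2)
    field = rx_field.astype(complex)
    for _ in range(n_spans * steps_per_span):
        field = np.fft.ifft(np.fft.fft(field) * half_disp)    # half linear step
        field *= np.exp(-1j * gamma * np.abs(field)**2 * dz)  # inverted Kerr phase
        field = np.fft.ifft(np.fft.fft(field) * half_disp)    # half linear step
    return field
```

The per-sample cost scales with the total number of steps, which is why reducing 40 steps per span to a single spectral inversion is such a large complexity saving.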

Relevance: 80.00%

Abstract:

This thesis considers sparse approximation of still images as the basis of a lossy compression system. The Matching Pursuit (MP) algorithm is presented as a method particularly suited to lossy scalable image coding. Its multichannel extension, capable of exploiting inter-channel correlations, is found to be an efficient way to represent colour data in the RGB colour space. Known problems with MP, namely the high computational complexity of encoding and dictionary design, are tackled by finding an appropriate partitioning of an image. The idea of performing MP in the spatio-frequency domain after a transform such as the Discrete Wavelet Transform (DWT) is explored. The main challenge, though, is to encode the image representation obtained after MP into a bit-stream. Novel approaches for encoding the atomic decomposition of a signal and for colour amplitude quantisation are proposed and evaluated. The image codec that has been built is capable of competing with scalable coders such as JPEG 2000 and SPIHT in terms of compression ratio.
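The greedy core of Matching Pursuit can be sketched in a few lines: repeatedly pick the dictionary atom most correlated with the residual and subtract its projection. This is only the textbook algorithm on a unit-norm dictionary, not the thesis's actual codec.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP sketch. `dictionary` has unit-norm columns (atoms);
    returns sparse coefficients and the remaining residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual            # correlate atoms with residual
        k = np.argmax(np.abs(corr))               # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]    # subtract its projection
    return coeffs, residual
```

The inner product against every atom at every iteration is exactly the encoding cost that the thesis attacks via image partitioning.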

Relevance: 80.00%

Abstract:

We introduce a flexible visual data mining framework which combines advanced projection algorithms from the machine learning domain and visual techniques developed in the information visualization domain. The advantage of such an interface is that the user is directly involved in the data mining process. We integrate principled projection algorithms, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates and billboarding, to provide a visual data mining framework. Results on a real-life chemoinformatics dataset using GTM are promising and have been analytically compared with the results from the traditional projection methods. It is also shown that the HGTM algorithm provides additional value for large datasets. The computational complexity of these algorithms is discussed to demonstrate their suitability for the visual data mining framework. Copyright 2006 ACM.

Relevance: 80.00%

Abstract:

A number of critical issues for dual-polarization single- and multi-band optical orthogonal frequency-division multiplexing (DPSB/MB-OFDM) signals are analyzed in dispersion compensation fiber (DCF)-free long-haul links. For the first time, different DP crosstalk removal techniques are compared, the maximum transmission reach is investigated, and the impact of subcarrier number and high-level modulation formats is explored thoroughly. It is shown that, for a bit-error rate (BER) of 10^-3, 2000 km of quaternary phase-shift keying (QPSK) DP-MB-OFDM transmission is feasible. At high launched optical powers (LOP), maximum-likelihood decoding can extend the LOP of 40 Gb/s QPSK DPSB-OFDM at 2000 km by 1.5 dB compared to zero-forcing. For a 100 Gb/s DP-MB-OFDM system, a high number of subcarriers contributes to improved BER but at the cost of digital signal processing computational complexity, whilst by adapting the cyclic prefix length the BER can be improved for a low number of subcarriers. In addition, when 16-quadrature amplitude modulation (16QAM) is employed, the digital-to-analogue/analogue-to-digital converter (DAC/ADC) bandwidth is relaxed at the cost of a degraded BER, while the 'circular' 8QAM is slightly superior to its 'rectangular' form. Finally, the transmission of wavelength-division multiplexed DP-MB-OFDM and single-carrier DP-QPSK is experimentally compared for up to 500 Gb/s, showing great potential and similar performance over a 1000 km DCF-free G.652 line. © 2014 Optical Society of America.
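The subcarrier-count and cyclic-prefix trade-offs above are easiest to see in the basic OFDM modulate/demodulate chain, sketched below with NumPy FFTs. This is the generic textbook chain, not the paper's DP-MB-OFDM transceiver; function names and block sizes are illustrative.

```python
import numpy as np

def ofdm_modulate(symbols, n_sc, cp_len):
    """Map modulation symbols onto n_sc subcarriers via IFFT and
    prepend a cyclic prefix of cp_len samples per OFDM block."""
    blocks = symbols.reshape(-1, n_sc)
    time = np.fft.ifft(blocks, axis=1)
    # Cyclic prefix: copy the tail of each block in front of it
    return np.hstack([time[:, -cp_len:], time]).ravel()

def ofdm_demodulate(rx, n_sc, cp_len):
    """Strip the cyclic prefix and FFT back to subcarrier symbols."""
    blocks = rx.reshape(-1, n_sc + cp_len)[:, cp_len:]
    return np.fft.fft(blocks, axis=1).ravel()
```

More subcarriers mean longer FFTs (more DSP complexity), while a longer cyclic prefix absorbs more dispersion at the cost of rate, which is the balance the paper tunes per subcarrier count.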

Relevance: 80.00%

Abstract:

An improved digital backward propagation (DBP) scheme is proposed to jointly compensate inter-channel nonlinear effects and dispersion in WDM systems, based on an advanced perturbation technique (APT). A non-iterative weighting concept is presented to replace the iterative one in the analytical recursion expression, which dramatically simplifies the complexity and improves accuracy compared to the traditional perturbation technique (TPT). Furthermore, an analytical recursion expression for the output after backward propagation is obtained for the first time. Numerical simulations are carried out for various transmission-system parameters. The results indicate that the advanced perturbation technique relaxes the step-size requirements and reduces the oversampling factor when the launch power is higher than -2 dBm. We estimate that this technique reduces computational complexity by a factor of around seven with respect to conventional DBP. © 2013 Optical Society of America.

Relevance: 80.00%

Abstract:

This paper presents an up-to-date review of digital watermarking (WM) from a VLSI designer's point of view. The reader is introduced to basic principles and terms in the field of image watermarking. The paper then briefly surveys WM theory, laying out common classification criteria and discussing important design considerations and trade-offs. Elementary WM properties such as robustness, computational complexity, and their influence on image quality are discussed. Common attacks and testing benchmarks are also briefly mentioned. It is shown that WM design must take the intended application into account. The difference between software and hardware implementations is explained through the introduction of a general scheme of a WM system and two examples from previous works. A versatile methodology to aid in a reliable and modular design process is suggested. In the context of mixed-signal VLSI design and testing, the proposed methodology allows the efficient development of a CMOS image sensor with WM capabilities.
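As a concrete reference point for the robustness/complexity trade-off the survey discusses, here is the simplest spatial-domain scheme: least-significant-bit (LSB) embedding. It is essentially free to compute but fragile to attacks; the hardware-oriented schemes in the survey are more elaborate. Function names are illustrative.

```python
import numpy as np

def embed_watermark(image, bits):
    """LSB embedding sketch: overwrite the least significant bit of the
    first len(bits) pixels. Very low complexity, very low robustness."""
    flat = image.astype(np.uint8).ravel().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, np.uint8)
    return flat.reshape(image.shape)

def extract_watermark(image, n_bits):
    """Read the watermark back out of the pixel LSBs."""
    return (image.ravel()[:n_bits] & 1).astype(np.uint8)
```

Because each pixel changes by at most one grey level, image quality is nearly untouched, which illustrates why robustness, complexity, and quality pull in different directions.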

Relevance: 80.00%

Abstract:

This paper is part of a work in progress whose goal is to construct a fast, practical algorithm for the vertex separation (VS) of cactus graphs. We prove a "main theorem for cacti", a necessary and sufficient condition for the VS of a cactus graph being k. Further, we investigate the ensuing ramifications that prevent the construction of an algorithm based on that theorem only.

Relevance: 80.00%

Abstract:

We investigate the NP-complete problem Vertex Separation (VS) on Maximal Outerplanar Graphs (mops). We formulate and prove a “main theorem for mops”, a necessary and sufficient condition for the vertex separation of a mop being k. The main theorem reduces the vertex separation of mops to a special kind of stretchability, one that we call affixability, of submops.
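For readers new to the problem: the vertex separation of a fixed linear layout is the maximum, over all cut positions, of the number of "left" vertices still adjacent to some "right" vertex; the VS of the graph is the minimum over all layouts. A small sketch of the per-layout quantity (my own helper, not from the paper):

```python
def vertex_separation(layout, edges):
    """VS of one linear layout: at each cut between consecutive positions,
    count the distinct left-side endpoints of edges crossing the cut."""
    pos = {v: i for i, v in enumerate(layout)}
    worst = 0
    for i in range(len(layout) - 1):
        # Left endpoint of every edge that straddles the cut after position i
        left_active = {u if pos[u] <= i else v
                       for u, v in edges
                       if (pos[u] <= i) != (pos[v] <= i)}
        worst = max(worst, len(left_active))
    return worst
```

Minimising this over all n! layouts is what makes the general problem NP-complete, and why structural results such as the "main theorem for mops" are needed to get efficient algorithms on restricted graph classes.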

Relevance: 80.00%

Abstract:

In this paper, an ontogenic artificial neural network (ANN) is proposed. The network uses orthogonal activation functions, which allow a significant reduction in computational complexity. Another advantage is numerical stability, because the system of activation functions is linearly independent by definition. A learning procedure for the proposed ANN with guaranteed convergence to the global minimum of the error function in the parameter space is developed. An algorithm for network structure adaptation is also proposed; it allows adding or deleting a node in real time without retraining the network. Simulation results confirm the efficiency of the proposed approach.

Relevance: 80.00%

Abstract:

2002 Mathematics Subject Classification: 65C05.

Relevance: 80.00%

Abstract:

The increase in renewable energy generators introduced into the electricity grid is putting pressure on its stability and management, as renewable energy sources cannot be accurately predicted or fully controlled. This, together with the additional pressure of fluctuations in demand, presents a problem more complex than the one the current methods of controlling electricity distribution were designed for. A global, approximate, and distributed optimisation method for power allocation that accommodates uncertainties and volatility is suggested and analysed. It is based on a probabilistic method known as message passing [1], which has deep links to statistical physics methodology. This principled optimisation method is based on local calculations and inherently accommodates uncertainties; it is of modest computational complexity and provides good approximate solutions. We consider uncertainty and fluctuations drawn from a Gaussian distribution and incorporate them into the message-passing algorithm. We examine the effect that increasing uncertainty has on the transmission cost, and how the placement of volatile nodes, such as renewable generators or consumers, within a grid affects it.
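A toy calculation shows why Gaussian volatility raises expected transmission cost even before any optimisation: with a quadratic cost in the net power d flowing on a line and d ~ N(mu, sigma), E[d^2] = mu^2 + sigma^2. This is only my illustration of the effect, not the paper's message-passing algorithm.

```python
import numpy as np

def expected_line_cost(mu, sigma, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E[d^2] for Gaussian net demand d:
    the sigma^2 term is the pure cost of volatility."""
    rng = np.random.default_rng(seed)
    d = rng.normal(mu, sigma, n_samples)
    return np.mean(d ** 2)
```

The same mean-plus-variance structure is what allows Gaussian fluctuations to be folded into the local message-passing computations analytically.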

Relevance: 80.00%

Abstract:

The performance of unrepeatered transmission of a seven-subcarrier Nyquist-spaced 10 GBd PDM-16QAM superchannel, using full signal-band coherent detection and multi-channel digital back propagation (MC-DBP) to mitigate nonlinear effects, is analysed. For the first time in unrepeatered transmission, the performance of two amplification systems is investigated and directly compared in terms of achievable information rates (AIRs): 1) erbium-doped fibre amplifier (EDFA) and 2) second-order bidirectional Raman-pumped amplification. The experiment is performed over different span lengths, demonstrating that, for an AIR of 6.8 bit/s/Hz, the Raman system enables an increase of 93 km (36%) in span length. Further, at these distances, MC-DBP gives an improvement in AIR of 1 bit/s/Hz (to 7.8 bit/s/Hz) for both amplification schemes. The theoretical AIR gains for Raman amplification and MC-DBP are shown to be preserved when considering low-density parity-check codes. Additionally, the MC-DBP algorithms for both amplification schemes are compared in terms of performance and computational complexity. It is shown that, to achieve the maximum MC-DBP gain, the Raman system requires approximately four times the computational complexity, due to the distributed impact of fibre nonlinearity.
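To put the quoted AIR figures in context, the Shannon AIR of an AWGN channel is log2(1 + SNR) bit/s/Hz, and near that regime a 1 bit/s/Hz gain corresponds to roughly 3 dB of SNR. The helpers below are my own back-of-the-envelope tools, not the paper's AIR estimator (which accounts for the actual modulation and channel).

```python
import numpy as np

def gaussian_air(snr_db):
    """Shannon AIR (bit/s/Hz) of an AWGN channel at the given SNR in dB."""
    return np.log2(1 + 10 ** (snr_db / 10))

def snr_gain_for_air_gain(base_snr_db, air_gain):
    """SNR (dB) needed to raise the AIR by `air_gain` bit/s/Hz
    from a starting point of `base_snr_db`."""
    target = gaussian_air(base_snr_db) + air_gain
    return 10 * np.log10(2 ** target - 1)
```

This is why the 1 bit/s/Hz MC-DBP improvement reported above represents a substantial effective-SNR gain.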

Relevance: 80.00%

Abstract:

This dissertation establishes a novel system for human face learning and recognition based on incremental multilinear Principal Component Analysis (PCA). Most existing face recognition systems need training data during the learning process. The system proposed in this dissertation utilizes an unsupervised or weakly supervised learning approach, in which the learning phase requires a minimal amount of training data. It also overcomes the inability of traditional systems to adapt during the testing phase, where the decision process for newly acquired images continues to rely on the same old training data set. Consequently, when a new training set is to be used, the traditional approach requires the entire eigensystem to be generated again. To speed up this computation, the proposed method uses the eigensystem generated from the old training set together with the new images to generate the new eigensystem more efficiently, in a so-called incremental learning process. In the empirical evaluation phase, two key factors are essential in evaluating the performance of the proposed method: (1) recognition accuracy and (2) computational complexity. In order to establish the most suitable algorithm for this research, a comparative analysis of the best performing methods was carried out first; its results advocated the initial use of multilinear PCA in this research. To address the computational complexity of the subspace update procedure, a novel incremental algorithm was established, combining the traditional sequential Karhunen-Loeve (SKL) algorithm with a newly developed incremental modified fast PCA algorithm. In order to utilize multilinear PCA in the incremental process, a new unfolding method was developed to affix the newly added data at the end of the previous data.
The results of the incremental process based on these two methods bear out these theoretical improvements. Some object tracking results using video images are also provided, as another challenging task, to demonstrate the soundness of this incremental multilinear learning method.
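The core idea behind SKL-style incremental updates can be sketched compactly: instead of re-factorising all the data, fold the old left singular vectors (scaled by their singular values) together with the new batch and re-factorise that much smaller matrix. The sketch below assumes zero-mean data for brevity (the full SKL algorithm also updates the mean) and is not the dissertation's multilinear variant.

```python
import numpy as np

def incremental_svd(U, S, new_data):
    """One SKL-style subspace update (zero-mean data assumed).

    U, S: left singular vectors / singular values of the old data matrix
    (features x samples). The matrix [U*S | new_data] has the same Gram
    matrix as [old_data | new_data], hence the same left singular
    structure, so one small SVD gives the updated eigensystem.
    """
    stacked = np.hstack([U * S, new_data])     # much smaller than all data
    U_new, S_new, _ = np.linalg.svd(stacked, full_matrices=False)
    return U_new, S_new
```

The cost depends on the subspace rank and batch size rather than on the total number of images seen so far, which is exactly the complexity saving the dissertation targets.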

Relevance: 80.00%

Abstract:

Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground from non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the most widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, linear computational complexity, and preservation of the geometric shapes of terrain features. The filter works well in an urban setting with gentle slopes and a mixture of vegetation and buildings. However, the PM filter often incorrectly removes ground measurements in topographically high areas, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes in topographic slope and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points incorrectly removed by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan.
The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for complex terrains in a large LIDAR dataset. The GAPM filter is highly automatic and requires little human input; therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
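The mechanics of the PM filter, and of the "cut-off" errors it can produce, are visible even in one dimension: a grey opening (erosion then dilation) with a growing window flattens objects smaller than the window, and points lifted more than a threshold above the opened surface are flagged as non-ground. This sketch is a simplified 1-D illustration with constant thresholds, i.e. the baseline behaviour the GAPM filter improves upon; it is not the study's implementation.

```python
import numpy as np

def pm_filter_1d(z, windows, thresholds):
    """1-D progressive morphological filter sketch.

    z: elevation profile; windows/thresholds: growing half-window sizes
    and elevation-difference thresholds, applied in sequence.
    Returns a boolean mask: True = classified as ground.
    """
    ground = np.ones_like(z, dtype=bool)
    surface = z.astype(float).copy()
    for w, t in zip(windows, thresholds):
        # Grey opening = moving-minimum (erosion) then moving-maximum (dilation)
        eroded = np.array([surface[max(0, i - w):i + w + 1].min()
                           for i in range(len(surface))])
        opened = np.array([eroded[max(0, i - w):i + w + 1].max()
                           for i in range(len(eroded))])
        ground &= (surface - opened) <= t           # flag lifted points
        surface = opened
    return ground
```

With a constant threshold, a genuine hilltop narrower than the window is flattened just like a building, which is exactly the "cut-off" failure mode on topographic highs described above.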

Relevance: 80.00%

Abstract:

Cooperative communication has gained much interest due to its ability to exploit the broadcast nature of the wireless medium to mitigate multipath fading. There has been a considerable amount of research on how cooperative transmission can improve the performance of the network, with a focus on physical layer issues. Over the past few years, researchers have begun to consider cooperative transmission in routing, and there has been growing interest in designing and evaluating cooperative routing protocols. Most existing cooperative routing algorithms are designed to reduce energy consumption; however, packet collision minimization using cooperative routing has not yet been addressed. This dissertation presents an optimization framework to minimize collision probability using cooperative routing in wireless sensor networks. More specifically, we develop a mathematical model and formulate the problem as a large-scale Mixed Integer Non-Linear Programming problem. We also propose a solution based on the branch and bound algorithm augmented with search space reduction. The proposed strategy builds up the optimal routes from each source to the sink node by providing the best set of hops in each route, the best set of relays, and the optimal power allocation for the cooperative transmission links. To reduce the computational complexity, we propose two near-optimal cooperative routing algorithms. In the first, we solve the problem by decoupling the optimal power allocation scheme from the optimal route selection; the resulting Integer Non-Linear Programming problem is solved using the branch and bound method with search space reduction. In the second, the cooperative routing problem is solved by decoupling the transmission power and the relay node selection from the route selection;
after solving the routing problem, the power allocation is applied to the selected route. Simulation results show that the algorithms can significantly reduce the collision probability compared with existing cooperative routing schemes.
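The branch-and-bound idea behind the proposed solvers can be shown on a plain route-search problem: extend partial routes, and prune any branch whose accumulated cost already matches or exceeds the best complete route found so far. This is a generic additive-cost sketch, not the dissertation's MINLP model with relays and power allocation.

```python
def branch_and_bound_route(graph, src, dst):
    """Branch-and-bound route search over a weighted digraph given as
    {node: [(neighbour, cost), ...]}. Returns (best_route, best_cost)."""
    best_cost, best_route = float("inf"), None
    stack = [(0.0, [src])]
    while stack:
        cost, route = stack.pop()
        if cost >= best_cost:          # bound: prune dominated branches
            continue
        node = route[-1]
        if node == dst:
            best_cost, best_route = cost, route
            continue
        for nxt, w in graph.get(node, []):
            if nxt not in route:       # branch: loop-free extensions only
                stack.append((cost + w, route + [nxt]))
    return best_route, best_cost
```

The bounding step is what the "search space reduction" in the dissertation strengthens: the tighter the bound, the more of the exponential route tree is cut away before it is explored.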