81 results for All-optical signal processing
Abstract:
Nonlinear optical transmission through periodically nanostructured metal films (surface-plasmon polaritonic crystals) has been studied. The surface polaritonic crystals were coated with a nonlinear polymer, and the optical transmission of these nanostructures has been shown to depend on the control-light illumination conditions. The resonant transmission exhibits bistable behavior as a function of the control-light intensity, and the bistability differs between resonant signal wavelengths and between control-light wavelengths. The effect is explained by the strong sensitivity of the surface-plasmon mode resonances at the signal wavelength to the surrounding dielectric environment, together with the electromagnetic field enhancement produced by plasmonic excitations at the control-light wavelengths.
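The mechanism can be made concrete with two standard relations; the abstract gives no coefficients, so the following is a generic sketch of how a Kerr-type polymer response shifts a grating-coupled plasmon resonance, not the paper's specific model:

```latex
% Kerr-type response of the polymer overlayer (generic form; the
% coefficients n_0, n_2 are not specified in the abstract):
\[
  n_d(I_c) = n_0 + n_2 I_c , \qquad \varepsilon_d = n_d^2 ,
\]
% which shifts the grating-coupled surface-plasmon resonance condition
% (grating period \Lambda, diffraction order m, metal permittivity \varepsilon_m):
\[
  k_{\mathrm{spp}} = \frac{\omega}{c}
  \sqrt{\frac{\varepsilon_m\,\varepsilon_d(I_c)}{\varepsilon_m + \varepsilon_d(I_c)}}
  = k_x \pm m\,\frac{2\pi}{\Lambda} .
\]
```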
Abstract:
The Field Programmable Gate Array (FPGA) implementation of the widely used Histogram of Oriented Gradients (HOG) algorithm is explored. The HOG algorithm is employed to extract features for object detection. A key focus has been the use of a new FPGA-based processor targeted at image processing. The paper details the mapping and scheduling factors that influence performance, and the stages undertaken to deploy the algorithm on FPGA hardware while taking the specific IPPro architecture features into account. We show that multi-core IPPro performance can exceed that of state-of-the-art FPGA designs by up to 3.2 times, with reduced design and implementation effort and increased flexibility, all on a low-cost Zynq programmable system.
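As a rough illustration of the feature-extraction stage the paper accelerates, the sketch below computes per-cell orientation histograms in Python. The cell size, bin count and function name are illustrative choices, not the paper's IPPro mapping, and full HOG adds block normalisation and bin interpolation:

```python
import numpy as np

def hog_cell_histograms(image, cell=8, bins=9):
    """Minimal HOG-style sketch: per-cell histograms of oriented gradients.

    Illustrative only -- real HOG adds block normalisation and
    interpolation between neighbouring bins/cells.
    """
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)                      # finite-difference gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation [0, 180)

    h, w = img.shape
    hists = np.zeros((h // cell, w // cell, bins))
    for cy in range(h // cell):
        for cx in range(w // cell):
            sl = (slice(cy * cell, (cy + 1) * cell),
                  slice(cx * cell, (cx + 1) * cell))
            idx = (ang[sl] / (180.0 / bins)).astype(int) % bins
            # accumulate magnitude-weighted votes into orientation bins
            np.add.at(hists[cy, cx], idx.ravel(), mag[sl].ravel())
    return hists

# usage: features = hog_cell_histograms(np.random.rand(64, 128))
```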
Abstract:
In security and surveillance applications, there is an increasing need to process image data efficiently and effectively, either at source or in a large data network. Whilst the Field-Programmable Gate Array (FPGA) has been seen as a key technology for enabling this, the design process has been viewed as problematic in terms of the time and effort needed for implementation and verification. The work here proposes a different approach: optimized FPGA-based soft-core processors that allow the user to exploit task- and data-level parallelism to achieve the quality of dedicated FPGA implementations whilst reducing design time. The paper also reports preliminary progress on the design flow used to program the structure. An implementation of a Histogram of Gradients algorithm is also reported, showing that a performance of 328 fps can be achieved with this design approach, whilst avoiding the long design time and the verification and debugging steps associated with conventional FPGA implementations.
Abstract:
Research on speech and emotion is moving from a period of exploratory research into one where there is a prospect of substantial applications, notably in human-computer interaction. Progress in the area relies heavily on the development of appropriate databases. This paper addresses the issues that need to be considered in developing databases of emotional speech, and shows how the challenge of developing appropriate databases is being addressed in three major recent projects: the Belfast project, the Reading-Leeds project and the CREST-ESP project. From these and other studies, the paper draws together the tools and methods that have been developed, addresses the problems that arise and indicates future directions for the development of emotional speech databases.
Abstract:
In previous papers, we have presented a logic-based framework based on fusion rules for merging structured news reports. Structured news reports are XML documents in which the text entries are restricted to individual words or simple phrases (such as names and domain-specific terminology), numbers and units. We assume structured news reports do not require natural language processing. Fusion rules are a form of scripting language that defines how structured news reports should be merged. The antecedent of a fusion rule is a call to investigate the information in the structured news reports and the background knowledge, and the consequent is a formula specifying an action to be undertaken to form a merged report. It is expected that a set of fusion rules is defined for any given application. In this paper we extend the approach to handle probability values, degrees of belief, or necessity measures associated with text entries in the news reports. We present a formal definition for each of these types of uncertainty and explain how they can be handled using fusion rules. We also discuss methods for detecting inconsistencies among sources.
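To make the antecedent/consequent structure concrete, here is a minimal Python sketch of one fusion rule applied to two hypothetical structured news reports. The XML schema, element names and merging action are invented for illustration; the paper's fusion-rule language is a richer logic-based formalism:

```python
import xml.etree.ElementTree as ET

# Two hypothetical structured news reports (illustrative schema,
# not the paper's actual XML vocabulary).
report1 = ET.fromstring(
    "<report><event>flood</event><city>Cork</city><casualties>3</casualties></report>")
report2 = ET.fromstring(
    "<report><event>flood</event><city>Cork</city><casualties>5</casualties></report>")

def same_event(a, b):
    """Antecedent: investigate whether two reports describe one event."""
    return (a.findtext("event") == b.findtext("event")
            and a.findtext("city") == b.findtext("city"))

def merge_max_casualties(a, b):
    """Consequent: build a merged report, keeping the larger casualty figure."""
    merged = ET.Element("report")
    for tag in ("event", "city"):
        ET.SubElement(merged, tag).text = a.findtext(tag)
    ET.SubElement(merged, "casualties").text = str(
        max(int(a.findtext("casualties")), int(b.findtext("casualties"))))
    return merged

# A fusion rule pairs an antecedent query with a merging action.
if same_event(report1, report2):
    print(ET.tostring(merge_max_casualties(report1, report2), encoding="unicode"))
```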
Abstract:
An entangled two-mode coherent state is studied within the framework of a 2 × 2-dimensional Hilbert space. An entanglement concentration scheme based on joint Bell-state measurements is worked out. When the entangled coherent state is embedded in a vacuum environment, its entanglement is degraded but not totally lost. It is found that the larger the initial coherent amplitude, the faster the entanglement decreases. We investigate a scheme to teleport a coherent superposition state while considering a mixed quantum channel. We find that the decohered entangled coherent state may be useless for quantum teleportation, as it yields an optimal teleportation fidelity below the classical limit of 2/3.
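For reference, a commonly used form of the two-mode entangled coherent state and its normalisation is given below; this is the standard form in the literature, and the paper's exact conventions may differ:

```latex
% A commonly used two-mode entangled coherent state (conventions vary):
\[
  |\Psi_\pm\rangle = N_\pm \left( |\alpha\rangle_a |\alpha\rangle_b
                     \pm |{-\alpha}\rangle_a |{-\alpha}\rangle_b \right),
  \qquad
  N_\pm = \frac{1}{\sqrt{2\left(1 \pm e^{-4|\alpha|^2}\right)}} ,
\]
% using \langle\alpha|{-\alpha}\rangle = e^{-2|\alpha|^2}. Teleportation of a
% qubit-like superposition remains useful only while the optimal fidelity
% exceeds the classical limit, F > 2/3.
```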
Abstract:
In a typical shoeprint classification and retrieval system, the first step is to segment meaningful basic shapes and patterns in a noisy shoeprint image. This step significantly influences the shape descriptors and shoeprint indexing used in later stages. In this paper, we extend a recently developed denoising technique proposed by Buades, called non-local means filtering, to give a more general model. In this model, the expected result of an operation on a pixel can be estimated by performing the same operation on all of its reference pixels in the same image. A working pixel's reference pixels are those pixels whose neighbourhoods are similar to the working pixel's neighbourhood, where similarity is based on the correlation between the local neighbourhoods of the working pixel and the reference pixel. We incorporate a special instance of this general model into the thresholding of very noisy shoeprint images. Visual and quantitative comparisons with two benchmark techniques, by Otsu and Kittler, are conducted in the last section, giving evidence of the effectiveness of our method for thresholding noisy shoeprint images.
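The reference-pixel estimate described above can be sketched as follows. Patch size, search window and the smoothing parameter h are illustrative, and the Gaussian of the mean squared patch difference used here (as in Buades' original filter) stands in for the paper's correlation-based similarity:

```python
import numpy as np

def nlm_estimate(img, y, x, patch=3, search=7, h=0.1):
    """Non-local means sketch: estimate pixel (y, x) from reference pixels
    whose patch neighbourhoods resemble the working pixel's patch.
    Parameters (patch/search sizes, smoothing h) are illustrative."""
    p = patch // 2
    pad = np.pad(img, p + search, mode="reflect")
    yy, xx = y + p + search, x + p + search          # (y, x) in padded coords
    ref_patch = pad[yy - p:yy + p + 1, xx - p:xx + p + 1]
    weights, values = [], []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = yy + dy, xx + dx
            cand = pad[cy - p:cy + p + 1, cx - p:cx + p + 1]
            d2 = np.mean((ref_patch - cand) ** 2)    # patch dissimilarity
            weights.append(np.exp(-d2 / h ** 2))     # similarity weight
            values.append(pad[cy, cx])
    w = np.array(weights)
    return float(np.dot(w, values) / w.sum())        # weighted estimate
```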
Abstract:
This paper presents a new method for calculating individual generators' shares in line flows, line losses and loads. The method is described and illustrated for active power flows, but it can be applied in the same way to reactive power flows. Starting from a power flow solution, the line flow matrix is formed. This matrix is used to identify node types, to trace the power flow from generators downstream to loads, and to determine generators' participation factors for lines and loads. Neither exhaustive search nor matrix inversion is required; hence, the method is claimed to be the least computationally demanding among similar methods.
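The downstream tracing idea can be illustrated under the common proportional-sharing assumption on a toy lossless network. The paper's line-flow-matrix formulation differs in detail, and the network and numbers below are invented:

```python
# Hypothetical 4-node lossless example: directed line flows in MW.
# flows[(i, j)] > 0 means power flows from node i to node j.
flows = {(0, 2): 60.0, (1, 2): 40.0, (2, 3): 70.0}
generation = {0: 60.0, 1: 40.0}        # injections at generator nodes
nodes = [0, 1, 2, 3]                   # already in downstream order

# share[n][g] = fraction of the power through node n supplied by generator g
share = {n: {} for n in nodes}
for g in generation:
    share[g][g] = 1.0

for n in nodes:
    if n in generation:
        continue
    inflow = {m: f for (m, k), f in flows.items() if k == n}
    total = sum(inflow.values())
    if total == 0.0:
        continue
    # proportional sharing: mix incoming shares by flow fractions
    for m, f in inflow.items():
        for g, s in share[m].items():
            share[n][g] = share[n].get(g, 0.0) + s * f / total

print(share[3])   # {0: 0.6, 1: 0.4} -> generator contributions to node 3's load
```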
Abstract:
The technical challenges in the design and programming of signal processors for multimedia communication are discussed. The development of terminal equipment to meet such demand presents a significant technical challenge, given that it is highly desirable for the equipment to be cost effective, power efficient, versatile, and extensible for future upgrades. The main challenges in the design and programming of signal processors for multimedia communication are: general-purpose signal processor design; application-specific signal processor design; operating systems and programming support; and application programming. The FFT size is programmable so that the processor can be used for various OFDM-based communication systems, such as digital audio broadcasting (DAB), digital video broadcasting-terrestrial (DVB-T) and digital video broadcasting-handheld (DVB-H). The clustered architecture design and distributed ping-pong register files in the PAC DSP raise new challenges for code generation.
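The run-time-programmable FFT mentioned above might be exercised as in the following sketch. The mode-to-size table is indicative (DVB-T, for instance, defines 2K and 8K modes) and the function name is ours:

```python
import numpy as np

# Typical OFDM FFT sizes (illustrative; exact mode sets vary by standard).
FFT_SIZES = {"DAB mode I": 2048, "DVB-T 2K": 2048, "DVB-T 8K": 8192}

def ofdm_demodulate(samples, mode):
    """Run-time-programmable FFT size: one demodulator, several standards."""
    n = FFT_SIZES[mode]
    symbols = samples[:len(samples) // n * n].reshape(-1, n)
    return np.fft.fft(symbols, n=n, axis=1)   # one row of bins per OFDM symbol

# usage: bins = ofdm_demodulate(np.random.randn(16384) + 0j, "DVB-T 8K")
```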
Abstract:
Audio scrambling can be employed to ensure confidentiality in audio distribution. We first describe scrambling of raw audio using the discrete wavelet transform (DWT) and then focus on MP3 audio scrambling. Scrambling is performed with a set of keys, which allows for a set of audio outputs of different qualities. During descrambling, the number of keys provided and the number of rounds of descrambling performed determine the output audio quality. We also perform scrambling using multiple keys on the MP3 audio format: with a subset of the keys we can descramble to obtain low-quality audio, while using all of the keys recovers the original quality. Our experiments show that the proposed algorithms are effective, fast, and simple to implement, while providing flexible control over the progressive quality of the audio output. The security level provided by the scheme is sufficient for protecting MP3 music content.
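A minimal sketch of multi-key DWT scrambling follows, assuming one key per subband and key-seeded permutations; the paper's actual key scheme, round structure and MP3-domain variant are not reproduced here:

```python
import numpy as np
import pywt  # PyWavelets

def scramble(audio, keys, wavelet="haar"):
    """Multi-key DWT scrambling sketch: one key permutes one subband.
    With only a subset of keys, descrambling recovers degraded audio;
    all keys restore the original. The key scheme is illustrative."""
    coeffs = pywt.wavedec(audio, wavelet, level=len(keys) - 1)
    out = []
    for band, key in zip(coeffs, keys):
        perm = np.random.default_rng(key).permutation(band.size)
        out.append(band[perm])                  # key-seeded permutation
    return pywt.waverec(out, wavelet)

def descramble(scrambled, keys, wavelet="haar"):
    coeffs = pywt.wavedec(scrambled, wavelet, level=len(keys) - 1)
    out = []
    for band, key in zip(coeffs, keys):
        perm = np.random.default_rng(key).permutation(band.size)
        out.append(band[np.argsort(perm)])      # invert the permutation
    return pywt.waverec(out, wavelet)

# usage (signal length a power of two keeps the subbands aligned):
# noisy = scramble(np.sin(np.linspace(0, 40, 4096)), keys=[11, 22, 33, 44])
```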
Abstract:
The least-mean-fourth (LMF) algorithm is known for its fast convergence and low steady-state error, especially in sub-Gaussian noise environments. Recent work on normalised versions of the LMF algorithm has further enhanced its stability and performance in both Gaussian and sub-Gaussian noise environments. For example, the recently developed normalised LMF (XE-NLMF) algorithm is normalised by the mixed signal and error powers, weighted by a fixed mixed-power parameter. Unfortunately, the algorithm's performance depends on the selection of this mixing parameter. In this work, a time-varying mixed-power parameter is introduced to overcome this dependency. The convergence, transient, and steady-state behaviour of the proposed algorithm are derived and verified through simulations. An enhancement in performance is obtained through the use of this technique in two different scenarios. Moreover, the tracking analysis of the proposed algorithm is carried out in the presence of two sources of nonstationarity: (1) carrier frequency offset between transmitter and receiver and (2) random variations in the environment. Close agreement between analysis and simulation results is obtained. The results show that, unlike in the stationary case, the steady-state excess mean-square error is not a monotonically increasing function of the step size.
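A sketch of a normalised-LMF update with a time-varying mixing parameter follows. The mixed-power denominator and the simple rule for adapting the mixing parameter are our assumptions for illustration, not the paper's exact recursion:

```python
import numpy as np

def xe_nlmf_sketch(x, d, taps=8, mu=0.5, eps=1e-6):
    """Normalised LMF sketch with a time-varying mixing parameter gamma.
    The denominator gamma*||u||^2 + (1 - gamma)*e^2 and the rule for
    adapting gamma are assumptions, not the paper's exact update."""
    w = np.zeros(taps)
    gamma, errors = 0.5, []
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]             # regressor (most recent first)
        e = d[n] - w @ u                    # a priori error
        denom = eps + gamma * (u @ u) + (1 - gamma) * e * e
        w += mu * (e ** 3) * u / denom      # fourth-power (LMF) gradient step
        # crude time-varying mixing: lean on error power while error is large
        gamma = 1.0 / (1.0 + e * e)
        errors.append(e)
    return w, np.array(errors)
```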
Abstract:
The authors propose a three-node full-diversity cooperative protocol that allows the retransmission of all symbols. Because multiple nodes are allowed to transmit simultaneously, relay transmission consumes only limited bandwidth. To facilitate the performance analysis of the proposed cooperative protocol, lower and upper bounds on the outage probability are first developed, and the high signal-to-noise-ratio behaviour is then studied. Our analytical results show that the proposed strategy can achieve full diversity. To realise the performance gain promised by cooperative diversity, a decode-and-forward strategy is adopted at the relays and an iterative soft-interference-cancellation minimum mean-squared-error equaliser is developed. The simulation results compare the bit-error-rate performance of the proposed protocol with the non-cooperative scheme and the scheme presented by Azarian et al. (2005).
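Outage behaviour of this kind of link is often estimated by Monte Carlo simulation; the sketch below does this for a generic two-hop decode-and-forward scheme with Rayleigh fading, not the paper's exact three-node protocol, and the rate and SNR values are illustrative:

```python
import numpy as np

# Monte Carlo outage sketch for a generic decode-and-forward relay link
# (Rayleigh fading, unit-mean channel gains; rate R and SNR are assumptions).
rng = np.random.default_rng(1)
snr_db, R, trials = 10.0, 1.0, 200_000
snr = 10 ** (snr_db / 10)

g_sd, g_sr, g_rd = (rng.exponential(1.0, trials) for _ in range(3))

relay_decodes = np.log2(1 + snr * g_sr) >= R
# if the relay decodes, the destination combines source + relay branches (MRC);
# otherwise it relies on the direct link alone
rate = np.where(relay_decodes,
                np.log2(1 + snr * (g_sd + g_rd)),
                np.log2(1 + snr * g_sd))
print("outage probability ~", np.mean(rate < R))
```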
Abstract:
For a digital echo canceller it is desirable to reduce the adaptation time, during which the transmission of useful data is not possible. LMS is non-optimal in this case, as the signals involved are statistically non-Gaussian. Walach and Widrow (IEEE Trans. Inform. Theory 30 (2) (March 1984) 275-283) investigated the use of a power of 4, while other research established algorithms with an arbitrary integer power (Pei and Tseng, IEEE J. Selected Areas Commun. 12 (9) (December 1994) 1540-1547) or a non-quadratic power (Shah and Cowan, IEE Proc.-Vis. Image Signal Process. 142 (3) (June 1995) 187-191). This paper suggests that continuous and automatic adaptation of the error exponent gives a more satisfactory result. The proposed family of cost function adaptation (CFA) stochastic gradient algorithms allows an increase in convergence rate and an improvement in residual error. As a special case, the staircase CFA algorithm is presented first; the smooth CFA algorithm is then developed. Implementation details are also discussed, and simulation results are provided to show the properties of the proposed family of algorithms.
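The idea of adapting the error exponent can be sketched as stochastic gradient descent on |e|^p with p varied continuously between 2 (LMS-like) and 4 (LMF-like). The exponent schedule below is our illustration, not the paper's staircase or smooth CFA rule:

```python
import numpy as np

def cfa_sketch(x, d, taps=8, mu=0.01, eps=1e-8):
    """Cost-function-adaptation sketch: gradient step on the cost |e|^p,
    i.e. w += mu * p * |e|^(p-1) * sgn(e) * u, with the exponent p moved
    smoothly within [2, 4]. The schedule for p is an assumption."""
    w = np.zeros(taps)
    errors = np.zeros(len(x))
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]                  # regressor
        e = d[n] - w @ u                         # a priori error
        p = 2.0 + 2.0 * min(abs(e), 1.0)         # smooth exponent in [2, 4]
        grad = p * (abs(e) + eps) ** (p - 1) * np.sign(e)
        w += mu * grad * u                       # stochastic gradient of |e|^p
        errors[n] = e
    return w, errors
```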
Abstract:
This paper presents a systematic measurement campaign of diversity reception techniques for use in multiple-antenna wearable systems operating at 868 MHz. The experiments were performed using six time-synchronized bodyworn receivers and considered mobile off-body communications in an anechoic chamber, an open office area and a hallway. The cross-correlation coefficient between the signal fading measured by the bodyworn receivers depended on the local environment and was typically below 0.7. All received signal envelopes were combined in post-processing to study the potential benefits of receiver diversity based on selection combining, equal-gain combining and maximal-ratio combining. It is shown that, in an open office area, the 5.7 dB diversity gain obtained using a dual-branch bodyworn maximal-ratio diversity system may be further improved to 11.1 dB if a six-branch system were used. First- and second-order theoretical equations for diversity reception techniques operating in Nakagami fading conditions were used to model the post-detection combined envelopes. Maximum likelihood estimates of the Nakagami-m parameter suggest that the fading conditions encountered in this study were generally less severe than Rayleigh. The paper also describes an algorithm that may be used to simulate the measured output of an M-branch diversity combiner operating in independent and identically distributed Nakagami fading environments.
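The abstract does not reproduce the simulation algorithm it mentions, but the three combining schemes can be sketched for i.i.d. Nakagami-m branches as follows; the m value and branch count here are illustrative choices:

```python
import numpy as np

# Diversity-combining sketch for M i.i.d. Nakagami-m fading branches
# (m, branch count and normalisation are illustrative, not the paper's).
rng = np.random.default_rng(0)
m, M, n = 1.5, 6, 100_000

# Nakagami-m envelope: square root of a Gamma(m, Omega/m) power variate
power = rng.gamma(shape=m, scale=1.0 / m, size=(n, M))
r = np.sqrt(power)

sc = power.max(axis=1)                       # selection combining (best branch)
egc = (r.sum(axis=1) ** 2) / M               # equal-gain combining
mrc = power.sum(axis=1)                      # maximal-ratio combining

for name, out in (("SC", sc), ("EGC", egc), ("MRC", mrc)):
    gain_db = 10 * np.log10(np.mean(out))    # mean SNR gain vs one branch
    print(f"{name}: mean combined SNR gain = {gain_db:.1f} dB")
```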