44 results for Global Processing Speed
Abstract:
Results of full numerical simulations of a guiding-centre soliton system with randomly birefringent single-mode fibre (SMF) are shown and analysed. It emerges that the soliton system becomes unstable even for small amounts of polarisation mode dispersion (PMD).
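For readers who want to reproduce the flavour of such a study, the sketch below integrates the scalar nonlinear Schrödinger equation with a standard split-step Fourier scheme and launches a fundamental soliton; a full PMD study would replace this with the coupled polarisation equations and apply a random birefringence rotation at each step. All parameter values (dispersion, nonlinearity, step size) are illustrative assumptions, not those of the paper.

import numpy as np

# Illustrative fibre parameters (assumed, not from the study)
T_win = 100e-12        # time window, s
N = 1024               # grid points
beta2 = -20e-27        # group-velocity dispersion, s^2/m (anomalous)
gamma = 1.3e-3         # Kerr nonlinearity, 1/(W m)
dz = 100.0             # step size, m
steps = 1000           # total distance 100 km

t = np.linspace(-T_win/2, T_win/2, N, endpoint=False)
w = 2*np.pi*np.fft.fftfreq(N, d=T_win/N)

# Fundamental soliton: peak power chosen so the soliton order is 1
T0 = 5e-12
P0 = abs(beta2)/(gamma*T0**2)
A = np.sqrt(P0)/np.cosh(t/T0)

for _ in range(steps):                  # symmetric split-step Fourier
    A = np.fft.ifft(np.fft.fft(A)*np.exp(0.25j*beta2*w**2*dz))  # half linear step
    A = A*np.exp(1j*gamma*np.abs(A)**2*dz)                      # full nonlinear step
    A = np.fft.ifft(np.fft.fft(A)*np.exp(0.25j*beta2*w**2*dz))  # half linear step

print("peak power in/out (W):", P0, np.abs(A).max()**2)   # soliton is preserved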
Abstract:
Although reading ability has been related to the processing of simple pitch features such as isolated transitions or continuous modulation, spoken language also contains complex patterns of pitch changes that are important for establishing stress location and for segmenting the speech stream. These aspects of spoken language processing depend critically on pitch pattern (global structure) rather than on absolute pitch values (local structure). Here we show that the detection of global structure, and not local structure, is predictive of performance on measures of phonological skill and reading ability, supporting the critical importance of pitch contour processing in the acquisition of literacy.
Abstract:
All-optical data processing is expected to play a major role in future optical communications. The fiber nonlinear optical loop mirror (NOLM) is a valuable tool in optical signal processing applications. This paper presents an overview of our recent advances in developing NOLM-based all-optical processing techniques for application in fiber-optic communications. The use of in-line NOLMs as a general technique for all-optical passive 2R (reamplification, reshaping) regeneration of return-to-zero (RZ) on-off keyed signals in both high-speed, ultralong-distance transmission systems and terrestrial photonic networks is reviewed. In this context, a theoretical model enabling the description of the stable propagation of carrier pulses with periodic all-optical self-regeneration in fiber systems with in-line deployment of nonlinear optical devices is presented. A novel, simple pulse processing scheme using nonlinear broadening in normal dispersion fiber and loop mirror intensity filtering is described, and its use is demonstrated both as an optical decision element at an RZ receiver and as an in-line device realizing a transmission technique of periodic all-optical RZ-to-nonreturn-to-zero-like format conversion. The important issue of phase-preserving regeneration of phase-encoded signals is also addressed by presenting a new design of NOLM based on distributed Raman amplification in the loop fiber. © 2008 Elsevier Inc. All rights reserved.
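The switching behaviour that underpins these applications is captured by the textbook power transfer function of a NOLM built around an asymmetric coupler, T(P) = 1 - 2*alpha*(1-alpha)*[1 + cos((1-2*alpha)*gamma*L*P)]. The short sketch below evaluates it; the split ratio, nonlinearity and loop length are assumed illustrative values, not the paper's design.

import numpy as np

alpha = 0.45      # coupler split ratio (assumed)
gamma = 2e-3      # loop-fibre nonlinearity, 1/(W m) (assumed)
L = 6000.0        # loop length, m (assumed)

P_in = np.linspace(0.0, 2.0, 500)               # input peak power, W
dphi = (1 - 2*alpha)*gamma*L*P_in               # differential nonlinear phase
T = 1 - 2*alpha*(1 - alpha)*(1 + np.cos(dphi))  # transmitted power fraction

# Low-power light is mostly reflected, which is what suppresses noise
# and pedestals in 2R regeneration
print("transmission at P=0:", T[0])
print("peak transmission over this range:", T.max())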
Abstract:
The following thesis presents results obtained from both numerical simulation and laboratory experimentation (both of which were carried out by the author). When data is propagated along an optical transmission line, timing irregularities such as timing jitter and phase wander can occur. Traditionally these timing problems would have been corrected by converting the optical signal into the electrical domain and then compensating for the timing irregularity before converting the signal back into the optical domain. However, this thesis proposes a potential solution that remains entirely in the optical domain, eliminating the need for electronics. This is desirable as optical processing not only reduces the latency of its electronic counterpart but also holds the possibility of an increase in overall speed. A scheme was proposed which utilises the principle of wavelength conversion to dynamically convert timing irregularities (timing jitter and phase wander) into a change in wavelength; this occurs on a bit-by-bit level, so timing jitter and phase wander can be compensated for simultaneously. This was achieved by optically sampling a linearly chirped, locally generated clock source (the sampling function was performed by a nonlinear optical loop mirror). The data, now with each bit or code word having a unique wavelength, is then propagated through a dispersion compensation module. The dispersion compensation effectively re-aligns the data in time, thus removing the timing irregularities. The principle of operation was tested using computer simulation before being re-tested in a laboratory environment. A second stage was added to the device to create 3R regeneration. The second stage simply converts the timing-suppressed data back into a single wavelength. By controlling the relative timing displacement between stage one and stage two, the wavelength that is finally produced can be controlled.
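The arithmetic at the heart of the scheme is simple enough to show directly. In the toy numbers below (assumptions, not the thesis's values), a bit arriving dt early samples the linearly chirped clock at a wavelength offset d_lambda = C*dt; a dispersion module whose total dispersion satisfies D_total = -1/C then delays that bit by exactly -dt, cancelling the error for every bit independently.

# Hypothetical numbers for illustration only
C = 0.1                 # clock chirp rate, nm per ps (assumed)
dt = 2.0                # timing error of one bit, ps (arrives early)
d_lambda = C*dt         # wavelength offset picked up by this bit, nm

D_total = -1.0/C                 # total dispersion of the module, ps/nm
extra_delay = D_total*d_lambda   # delay this bit experiences, ps
residual = dt + extra_delay      # timing error after the module

print("residual timing error:", residual, "ps")   # prints 0.0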
Abstract:
Category-specific disorders are frequently explained by suggesting that living and non-living things are processed in separate subsystems (e.g. Caramazza & Shelton, 1998). If subsystems exist, there should be benefits for normal processing, beyond the influence of structural similarity. However, no previous study has separated the relative influences of similarity and semantic category. We created novel examples of living and non-living things so category and similarity could be manipulated independently. Pre-tests ensured that our images evoked appropriate semantic information and were matched for familiarity. Participants were trained to associate names with the images and then performed a name-verification task under two levels of time pressure. We found no significant advantage for living things alongside strong effects of similarity. Our results suggest that similarity rather than category is the key determinant of speed and accuracy in normal semantic processing. We discuss the implications of this finding for neuropsychological studies. © 2005 Psychology Press Ltd.
Abstract:
We investigated the role of local and global information on perceptual encoding of faces in patient HJA, who shows prosopagnosia and visual agnosia following occipito-temporal damage. HJA and an age-matched control were tested in a simultaneous matching task which focused on detection of local changes in faces: the inversion of central parts (eyes and mouth) relative to their context (as in the Thatcher illusion). Same-different judgements were made to normal, “thatcherised” and mixed type face pairs. Whole faces (Experiment 1), or face parts (Experiment 2), were presented in upright and inverted orientations. Compared to the control, HJA was severely impaired at matching whole faces, but he improved dramatically when face parts were presented in isolation. This suggests an inhibitory influence of face context on HJA's processing of local parts and a relatively intact ability to process part-based information from a face (when context cannot interfere). Face inversion did not affect HJA's performance. A control experiment (Experiment 3) with non-face stimuli (houses) suggested that the inhibitory influence of context on HJA's performance was restricted to faces. These results indicate that contextual information in a face can have an adverse influence on the processing of local part-based information in prosopagnosia.
Abstract:
Recent advances in technology have produced a significant increase in the availability of free sensor data over the Internet. With affordable weather monitoring stations now available to individual meteorology enthusiasts, a reservoir of real-time data such as temperature, rainfall and wind speed can now be obtained for most of the United States and Europe. Despite the abundance of available data, obtaining usable information about the weather in your local neighbourhood requires complex processing that poses several challenges. This paper discusses a collection of technologies and applications that harvest, refine and process this data, culminating in information that has been tailored toward the user. In this case we are particularly interested in allowing a user to make direct queries about the weather at any location, even when this is not directly instrumented, using interpolation methods. We also consider how the uncertainty that the interpolation introduces can then be communicated to the user of the system, using UncertML, a developing standard for uncertainty representation.
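As a rough sketch of the interpolation step, the toy code below computes an inverse-distance-weighted estimate at an uninstrumented point, plus a weighted spread of the neighbouring readings as a crude stand-in for the prediction uncertainty that would be encoded in UncertML. The station coordinates and readings are invented, and a production system would more likely use a geostatistical method such as kriging, which yields a principled prediction variance.

import math

stations = [  # (x_km, y_km, temperature_C) - invented readings
    (0.0, 0.0, 14.2),
    (5.0, 1.0, 13.6),
    (2.0, 7.0, 15.1),
]

def idw(x, y, power=2.0):
    weights, values = [], []
    for sx, sy, val in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return val, 0.0          # query sits exactly on a station
        weights.append(d**-power)
        values.append(val)
    wsum = sum(weights)
    mean = sum(w*v for w, v in zip(weights, values))/wsum
    # weighted spread of neighbouring values: a crude uncertainty proxy
    var = sum(w*(v - mean)**2 for w, v in zip(weights, values))/wsum
    return mean, math.sqrt(var)

temp, sigma = idw(3.0, 3.0)
print(f"estimate {temp:.2f} C, spread {sigma:.2f} C")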
Abstract:
The thesis describes an investigation into methods for the design of flexible high-speed product processing machinery, consisting of independent electromechanically actuated machine functions which operate under software coordination and control. An analysis is made of the elements of traditionally designed cam-actuated, mechanically coupled machinery, so that the operational functions and principal performance limitations of the separate machine elements may be identified. These are then used to define the requirements for independent-actuator machinery, with a discussion of how this type of design approach is better suited to modern manufacturing trends. A distributed machine controller topology is developed which is a hybrid of hierarchical and pipeline control. An analysis is made, with the aid of dynamic simulation modelling, which confirms the suitability of the controller for flexible machinery control. The simulations include complex models of multiple independent-actuator systems, which enable product flow and failure analyses to be performed. An analysis is made of high-performance brushless d.c. servomotors, and their suitability for actuating machine motions is assessed. Procedures are developed for the selection of brushless servomotors for intermittent machine motions. An experimental rig is described which has enabled the actuation and control methods developed to be implemented. With reference to this, an evaluation is made of the suitability of the machine design method, and a discussion is given of the developments which are necessary for operational independent-actuator machinery to be attained.
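To give a feel for the kind of servomotor selection procedure described, the sketch below sizes a motor for an intermittent move-dwell motion with a triangular velocity profile: peak torque follows from the reflected inertia and required acceleration, while RMS torque over the whole cycle governs thermal suitability. All inertias and timings are invented for illustration.

import math

# Assumed load and motion parameters
J_load = 2.0e-4       # load inertia at the motor shaft, kg m^2
J_motor = 5.0e-5      # rotor inertia, kg m^2
theta = math.pi       # angle moved per index, rad
t_move = 0.05         # move time, s (triangular profile)
t_dwell = 0.15        # dwell time, s

# Triangular profile: accelerate for half the move, decelerate for half
w_peak = 2*theta/t_move            # peak speed, rad/s
accel = w_peak/(t_move/2)          # constant accel magnitude, rad/s^2
T_peak = (J_load + J_motor)*accel  # torque during accel/decel, N m

# Motor heating is set by the RMS torque over the full cycle
t_cycle = t_move + t_dwell
T_rms = T_peak*math.sqrt(t_move/t_cycle)
print(f"peak torque {T_peak:.2f} N m, RMS torque {T_rms:.2f} N m")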
Abstract:
In previous sea-surface variability studies, researchers have failed to utilise the full ERS-1 mission due to the varying orbital characteristics in each mission phase, and most have simply ignored the Ice and Geodetic phases. This project aims to introduce a technique which allows the straightforward use of all orbital phases, regardless of orbit type. This technique is based upon single-satellite crossovers. Unfortunately the ERS-1 orbital height is still poorly resolved (due to higher air drag and stronger gravitational effects) when compared with that of TOPEX/Poseidon (T/P), so, to make best use of the ERS-1 crossover data, corrections to the ERS-1 orbital heights are calculated by fitting a cubic spline to dual-crossover residuals with T/P. This correction is validated by comparison of dual-satellite crossovers with tide gauge data. The crossover processing technique is validated by comparing the extracted sea-surface variability information with that from T/P repeat-pass data. The two data sets are then combined into a single consistent data set for analysis of sea-surface variability patterns. These patterns are simplified by the use of an empirical orthogonal function decomposition, which breaks the signals into spatial modes which are then discussed separately. Further studies carried out on these data include an analysis of the characteristics of the annual signal, discussion of evidence for Rossby wave propagation on a global basis, and finally analysis of the evidence for global mean sea level rise.
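The empirical orthogonal function step is, in practice, a singular value decomposition of the de-meaned anomaly matrix. The sketch below shows the mechanics on random numbers standing in for gridded sea-surface-height anomalies (epochs by grid cells); only the method, not the data, reflects the thesis.

import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((120, 500))   # 120 epochs x 500 grid cells (stand-in data)
H = H - H.mean(axis=0)                # remove the time mean at each cell

U, s, Vt = np.linalg.svd(H, full_matrices=False)
eofs = Vt                             # rows are spatial modes (EOFs)
pcs = U*s                             # columns are the mode amplitude time series
explained = s**2/np.sum(s**2)         # fraction of variance per mode
print("variance explained by first 3 modes:", explained[:3])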
Abstract:
The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes. A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for the time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an £80,000 programme to implement the system proposed by the author.
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
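The Transputer network itself is long obsolete, but the farm-style decomposition it used maps directly onto modern worker processes: each response channel is curve-fitted independently and the results gathered. The sketch below illustrates that decomposition with an ordinary polynomial least-squares fit standing in for the Rational Fraction Polynomial method, on invented data.

import numpy as np
from multiprocessing import Pool

def fit_channel(channel):
    freqs, response = channel
    return np.polyfit(freqs, response, deg=6)   # stand-in per-channel fit

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    freqs = np.linspace(0.0, 100.0, 400)
    channels = [(freqs, rng.standard_normal(400)) for _ in range(64)]
    with Pool() as pool:              # one worker task per response channel
        coeffs = pool.map(fit_channel, channels)
    print(len(coeffs), "channels fitted in parallel")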
Abstract:
The current optical communications network consists of point-to-point optical transmission paths interconnected with relatively low-speed electronic switching and routing devices. As the demand for capacity increases, higher-speed electronic devices will become necessary. It is, however, hard to realise electronic chip-sets above 10 Gbit/s, so to increase the achievable performance of the network, electro-optic and all-optical switching and routing architectures are being investigated. This thesis aims to provide a detailed experimental analysis of high-speed optical processing within an optical time division multiplexed (OTDM) network node. This includes the functions of demultiplexing, 'drop and insert' multiplexing, data regeneration, and clock recovery. It examines the possibilities of combining these tasks using a single device. Two optical switching technologies are explored. The first is an all-optical device known as the 'semiconductor optical amplifier-based nonlinear optical loop mirror' (SOA-NOLM). Switching is achieved by using an intense 'control' pulse to induce a phase shift in a low-intensity signal propagating through an interferometer. Simultaneous demultiplexing, data regeneration and clock recovery are demonstrated for the first time using a single SOA-NOLM. The second device is an electroabsorption (EA) modulator, which until this thesis had been used in a uni-directional configuration to achieve picosecond pulse generation, data encoding, demultiplexing, and 'drop and insert' multiplexing. This thesis presents results on the use of an EA modulator in a novel bi-directional configuration. Two independent channels are demultiplexed from a high-speed OTDM data stream using a single device. Simultaneous demultiplexing with stable, ultra-low-jitter clock recovery is demonstrated, and then used in a self-contained 40 Gbit/s 'drop and insert' node. Finally, a 10 GHz source is analysed that exploits the EA modulator's bi-directionality to increase the pulse extinction ratio to a level where it could be used in an 80 Gbit/s OTDM network.
Abstract:
Improving bit error rates in optical communication systems is a difficult and important problem. The error correction must take place at high speed and be extremely accurate. We show the feasibility of using hardware-implementable machine learning techniques. This may enable some error correction at the speed required.
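A toy version of the idea, with invented data and a deliberately tiny model: a logistic-regression decision rule is trained on a few noisy samples of each received pulse, the sort of single-layer computation that is plausible to contemplate in high-speed hardware. Nothing here reproduces the paper's actual channel or architecture.

import numpy as np

rng = np.random.default_rng(42)
bits = rng.integers(0, 2, 5000)                  # transmitted bits
# each bit observed via 3 noisy samples of an assumed pulse shape
X = bits[:, None]*np.array([0.6, 1.0, 0.6]) + 0.4*rng.standard_normal((5000, 3))

w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(200):                             # plain gradient-descent training
    p = 1.0/(1.0 + np.exp(-(X @ w + b)))         # predicted probability of a 1
    w -= lr*(X.T @ (p - bits))/len(bits)
    b -= lr*np.mean(p - bits)

pred = (X @ w + b) > 0
print("training bit error rate:", np.mean(pred != bits))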