969 results for electronic invoice processing


Relevância: 30.00%

Resumo:

This thesis presents experimental investigations of the use of semiconductor optical amplifiers in a nonlinear loop mirror (SOA-NOLM) and its application to all-optical processing. The techniques used are mainly experimental and are divided into three major applications. Initially the semiconductor optical amplifier (SOA) is experimentally characterised and the optimum operating condition is identified. An interferometric switch based on a Sagnac loop with the SOA as the nonlinear element is employed to realise all-optical switching. All-optical switching is an attractive alternative to optoelectronic conversion because it avoids converting the signal from the optical to the electronic domain and back again. The first major investigation involves carrier-suppressed return-to-zero (CSRZ) format conversion and transmission, studied for a single channel and for four-channel WDM. The optical bandwidth that limits the conversion is investigated. An improvement in nonlinear tolerance is demonstrated for CSRZ transmission, showing the suitability of this format for enhancing system performance. Second, a symmetrical switching window is studied in the SOA-NOLM, in which two similar control pulses are injected into the SOA from opposite directions. The switching window is symmetric when the two control pulses have the same power and arrive in the SOA at the same time. Finally, I study an all-optical circulating shift register with an inverter. The detailed behaviour of blocks of zeros and ones has been analysed in terms of transient measurements. Good agreement with a simple model of the shift register is obtained. The transient can be reduced, but at the cost of the extinction ratio of the pulses.
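A minimal numerical sketch may help picture the switching window. The toy model below assumes an instantaneous control-induced phase step with exponential carrier recovery (100 ps), a peak phase shift of π, and an SOA displaced 10 ps from the loop centre; all values are illustrative and are not taken from the thesis.

```python
import numpy as np

def soa_phase(t, t0, dphi_max=np.pi, tau_rec=100e-12):
    # Toy SOA response: a control pulse arriving at t0 depletes the carriers
    # "instantly", producing a phase step that recovers exponentially with
    # time constant tau_rec (assumed model).
    return np.where(t >= t0, dphi_max * np.exp(-(t - t0) / tau_rec), 0.0)

def nolm_window(t, t0, offset=10e-12):
    # Sagnac-loop transmission with the SOA displaced from the loop centre:
    # the counter-propagating signal replicas sample the control-induced
    # phase at times separated by 2*offset.
    dphi = soa_phase(t, t0) - soa_phase(t, t0 + 2 * offset)
    return np.sin(dphi / 2.0) ** 2

t = np.linspace(-50e-12, 400e-12, 2000)
T = nolm_window(t, 0.0)
# T opens sharply at t = 0 and closes after ~2*offset, but the finite
# carrier recovery leaves a slow asymmetric tail -- the asymmetry that the
# counter-propagating control scheme studied here is meant to remove.
```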

Relevância: 30.00%

Resumo:

We experimentally demonstrate the use of full-field electronic dispersion compensation (EDC) to achieve a bit error rate of 5 × 10⁻⁵ at 22.3 dB optical signal-to-noise ratio for a single-channel 10 Gbit/s on-off keyed signal after transmission over 496 km of field-installed single-mode fibre with an amplifier spacing of 124 km. This performance is achieved by designing the EDC to avoid electronic amplification of the noise content of the signal during full-field reconstruction. We also investigate the tolerance of the system to key signal processing parameters, and numerically demonstrate that single-channel transmission over 2160 km of single-mode fibre without in-line optical dispersion compensation can be achieved using this technique with 80 km amplifier spacing and optimized system parameters.
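The core dispersion-inversion step lends itself to a short sketch. Real full-field EDC first reconstructs the complex field from direct-detected intensity and instantaneous-frequency measurements; the sketch below skips that reconstruction (the complex field is assumed available) and only shows dispersion being applied and then electronically undone. The β₂ value and pulse parameters are illustrative.

```python
import numpy as np

def dispersion_operator(E, dt, beta2_L):
    # Apply lumped fibre dispersion exp(-j*beta2*L*w^2/2) to a sampled
    # complex field E; calling it again with -beta2_L inverts it (EDC step).
    w = 2 * np.pi * np.fft.fftfreq(E.size, d=dt)
    return np.fft.ifft(np.fft.fft(E) * np.exp(-0.5j * beta2_L * w ** 2))

# Toy 10 Gbit/s OOK field: Gaussian pulses in the "1" slots (100 ps spacing).
dt = 1e-12                                   # 1 ps sampling
t = (np.arange(4096) - 2048) * dt
bits = [1, 0, 1, 1, 0, 0, 1, 0]
E = sum(b * np.exp(-((t - (i - 4) * 100e-12) ** 2) / (2 * (20e-12) ** 2))
        for i, b in enumerate(bits))

beta2_L = -21.7e-27 * 496e3                  # beta2 ~ -21.7 ps^2/km, 496 km
E_rx = dispersion_operator(E, dt, beta2_L)       # field after the link
E_edc = dispersion_operator(E_rx, dt, -beta2_L)  # compensation undoes it
```

Because the dispersion operator is a pure phase filter, applying it with the opposite sign restores the field exactly; the hard part in practice, which the paper addresses, is reconstructing the field without amplifying noise.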

Relevância: 30.00%

Resumo:

We investigate the pattern-dependent decoding failures in full-field electronic dispersion compensation (EDC) by offline processing of experimental signals, and find that the performance of such an EDC receiver may be degraded by an isolated "1" bit surrounded by long strings of consecutive "0s". By reducing the probability of occurrence of this kind of isolated "1" and using a novel adaptive threshold decoding method, we greatly improve the compensation performance to achieve 10-Gb/s on-off keyed signal transmission over 496-km field-installed single-mode fiber without optical dispersion compensation.
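The abstract does not spell out the adaptive threshold rule, so the following is a hypothetical sketch of the general idea: lower the decision threshold for a sample whose provisional neighbours are both "0", since an isolated "1" spreads energy into adjacent slots under residual dispersion and arrives attenuated. The function name and parameter values are invented for illustration.

```python
def adaptive_threshold_decode(samples, base=0.5, isolated_drop=0.2):
    # Hypothetical sketch: make a provisional decision with a fixed
    # threshold, then re-decide each bit with a lowered threshold when both
    # neighbours are "0" -- an isolated "1" loses energy to its neighbours
    # under dispersion, so its sample sits lower than a "1" in a run.
    provisional = [1 if s >= base else 0 for s in samples]
    out = []
    for i, s in enumerate(samples):
        left = provisional[i - 1] if i > 0 else 0
        right = provisional[i + 1] if i < len(samples) - 1 else 0
        th = base - isolated_drop if (left == 0 and right == 0) else base
        out.append(1 if s >= th else 0)
    return out

# An isolated "1" faded to 0.4 is recovered; a "0" at 0.1 stays 0.
print(adaptive_threshold_decode([0.1, 0.4, 0.1, 0.9, 0.8, 0.1]))
# -> [0, 1, 0, 1, 1, 0]
```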

Relevância: 30.00%

Resumo:

Photonic signal processing is used to implement common-mode signal cancellation across a very wide bandwidth, utilising phase modulation of radio frequency (RF) signals onto a narrow-linewidth laser carrier. RF spectra were observed by narrow-band, tunable optical filtering with a scanning Fabry-Perot etalon. Thus functions conventionally performed using digital signal processing techniques in the electronic domain have been replaced by analog techniques in the photonic domain. This technique demonstrated simultaneous cancellation of signals across a bandwidth of 1400 MHz, limited only by the free spectral range of the etalon. © 2013 David M. Benton.
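The 1400 MHz limit quoted above is the etalon's free spectral range, FSR = c / (2nL) for mirror spacing L and refractive index n. A quick check of the implied cavity length (assuming an air-spaced etalon, n = 1; the actual etalon geometry is not given in the abstract):

```python
# Free spectral range of a Fabry-Perot etalon: FSR = c / (2 n L).
c = 299_792_458.0  # speed of light, m/s

def fsr(length_m, n=1.0):
    return c / (2 * n * length_m)

def length_for_fsr(fsr_hz, n=1.0):
    return c / (2 * n * fsr_hz)

L = length_for_fsr(1.4e9)
print(f"cavity length for a 1400 MHz FSR: {L * 100:.1f} cm")  # -> 10.7 cm
```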

Relevância: 30.00%

Resumo:

The purpose of this paper is to investigate the technological development of electronic inventory solutions from the perspective of patent analysis. We first applied the International Patent Classification to identify the top categories of data-processing technologies and their corresponding top patenting countries. We then identified the core technologies by calculating a patent citation strength and applying a standard deviation criterion to each patent. To eliminate core innovations having no reference relationships with the other core patents, relevance strengths between core technologies were also evaluated. Our findings provide market intelligence not only for the research and development community, but also for decision making on advanced inventory solutions.
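The abstract leaves the exact citation-strength formula unstated; as a hedged sketch, take strength as the forward-citation count and flag a patent as "core" when it exceeds the mean by k standard deviations (one plausible reading of the standard-deviation criterion). The patent IDs and counts below are invented.

```python
import statistics

def core_patents(citations, k=1.0):
    # Hedged sketch of the "standard deviation criterion": a patent is
    # treated as core when its citation count exceeds the mean by at least
    # k standard deviations. (The paper's exact strength formula is not
    # given in the abstract; plain forward-citation counts are assumed.)
    mean = statistics.mean(citations.values())
    sd = statistics.pstdev(citations.values())
    return {p for p, c in citations.items() if c >= mean + k * sd}

cites = {"US001": 2, "US002": 3, "US003": 40, "US004": 1, "US005": 35}
print(sorted(core_patents(cites)))  # -> ['US003', 'US005']
```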

Relevância: 30.00%

Resumo:

Whereas previous research has demonstrated that trait ratings of faces at encoding lead to enhanced recognition accuracy compared to feature ratings, this set of experiments examines whether ratings given after encoding, just prior to recognition, influence face recognition accuracy. In Experiment 1, subjects who made feature ratings just prior to recognition were significantly less accurate than subjects who made no ratings or trait ratings. In Experiment 2, ratings were manipulated at both encoding and retrieval. The retrieval effect was smaller and nonsignificant, but a combined probability analysis showed that it was significant when the results of both experiments were considered jointly. In a third experiment, exposure duration at retrieval, a potentially confounding factor in Experiments 1 and 2, had a nonsignificant effect on recognition accuracy, suggesting that it probably does not explain the results of Experiments 1 and 2. These experiments demonstrate that face recognition accuracy can be influenced by processing instructions at retrieval.
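The "combined probability analysis" is a standard meta-analytic step; the abstract does not say which method the authors used, so the sketch below shows one common choice (Stouffer's Z) with invented one-sided p-values.

```python
from statistics import NormalDist

def stouffer_combined_p(pvalues):
    # Stouffer's Z: convert each one-sided p-value to a z-score, sum,
    # renormalize by sqrt(k), and convert back to a combined p-value.
    # Illustrative only -- the paper may have used another method.
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in pvalues) / len(pvalues) ** 0.5
    return 1 - nd.cdf(z)

# Two effects, each short of p = .05 alone, can be jointly significant:
print(round(stouffer_combined_p([0.04, 0.09]), 4))
```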

Relevância: 30.00%

Resumo:

Communication has become an essential function in our civilization. With the increasing demand for communication channels, it is now necessary to find ways to optimize the use of their bandwidth. One way to achieve this is to transform the information before it is transmitted. This transformation can be performed by several techniques, one of the newest being the use of wavelets. Wavelet transformation refers to breaking a signal down into components called details and trends, using small waveforms that have a zero average in the time domain. After this transformation the data can be compressed by discarding the details and transmitting only the trends. At the receiving end, the trends are used to reconstruct the image. In this work, the wavelet used for the transformation of an image is selected from a library of available bases. The accuracy of the reconstruction after the details are discarded depends on the wavelets chosen from the wavelet basis library. The system developed in this thesis takes a 2-D image and decomposes it using a wavelet bank. A digital signal processor is used to achieve near real-time performance in this transformation task. A contribution of this thesis project is the development of a DSP-based test bed for the future development of new real-time wavelet transformation algorithms.
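The trend/detail decomposition can be illustrated with the simplest basis in any wavelet library, the Haar wavelet. This is a generic 1-D sketch, not the thesis's DSP implementation (which applies the transform separably to 2-D images via a wavelet bank).

```python
import numpy as np

def haar_1level(x):
    # One-level 1-D Haar transform: pairwise averages (trends) and
    # pairwise differences (details), each scaled by 1/sqrt(2).
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # trend / approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

sig = np.array([4.0, 4.0, 6.0, 6.0, 10.0, 12.0, 12.0, 10.0])
a, d = haar_1level(sig)
# Compression: discard the details, keep only the trends (half the data);
# reconstruction then yields the pairwise averages of the original.
approx = haar_inverse(a, np.zeros_like(d))
```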

Relevância: 30.00%

Resumo:

The presence of inhibitory substances in biological forensic samples has affected, and continues to affect, the quality of the data generated by DNA typing processes. Although the chemistries used during the procedures have been enhanced to mitigate the effects of these deleterious compounds, some challenges remain. Inhibitors can be components of the samples themselves, of the substrate where samples were deposited, or of chemicals associated with the DNA purification step. Therefore, a thorough understanding of the extraction processes and their ability to handle the various types of inhibitory substances can help define the best analytical processing for any given sample. A series of experiments was conducted to establish the inhibition tolerance of quantification and amplification kits using common inhibitory substances, in order to determine whether current laboratory practices are optimal for identifying potential problems associated with inhibition. DART mass spectrometry was used to determine the amount of inhibitor carryover after sample purification, its correlation with the initial inhibitor input in the sample, and the overall effect on the results. Finally, a novel alternative for gathering investigative leads from samples that would otherwise be ineffective for DNA typing, owing to large amounts of inhibitory substances and/or environmental degradation, was tested. This included generating data associated with microbial peak signatures to identify the locations of clandestine human graves. Results demonstrate that the current methods for assessing inhibition are not necessarily accurate, as samples that appear inhibited in the quantification process can yield full DNA profiles, while those that do not indicate inhibition may suffer from lowered amplification efficiency or PCR artifacts. The extraction methods tested were able to remove more than 90% of the inhibitors from all samples, with the exception of phenol, which was present in variable amounts whenever the organic extraction approach was used. Although the results suggest that most inhibitors have a minimal effect on downstream applications, analysts should exercise caution when selecting the best extraction method for particular samples, as casework DNA samples are often present in small quantities and can contain an overwhelming amount of inhibitory substances.

Relevância: 30.00%

Resumo:

This dissertation develops a new mathematical approach that overcomes the effect of a data-processing phenomenon known as "histogram binning", inherent to flow cytometry data. A real-time procedure is introduced to prove the effectiveness and fast implementation of this approach on real-world data. The histogram binning effect is a dilemma posed by two seemingly antagonistic developments: (1) flow cytometry data in its histogram form is extended in its dynamic range to improve its analysis and interpretation, and (2) the inevitable dynamic range extension introduces an unwelcome side effect, the binning effect, which skews the statistics of the data, undermining the accuracy of the analysis and the eventual interpretation of the data. Researchers in the field have contended with this dilemma for many years, resorting either to hardware approaches that are rather costly, with inherent calibration and noise effects, or to software techniques based on filtering out the binning effect, without successfully preserving the statistical content of the original data. The mathematical approach introduced in this dissertation is so appealing that a patent application has been filed. The contribution of this dissertation is an incremental scientific innovation based on a mathematical framework that will allow researchers in the field of flow cytometry to improve the interpretation of data, knowing that its statistical meaning has been faithfully preserved for optimized analysis. Furthermore, with the same mathematical foundation, proof of the origin of this inherent artifact is provided. These results are unique in that new mathematical derivations are established to define and solve the critical problem of the binning effect at the experimental assessment level, providing a data platform that preserves its statistical content. In addition, a novel method for accumulating the log-transformed data was developed. This new method uses the properties of transformations of statistical distributions to accumulate the output histogram in a non-integer, multi-channel fashion. Although the mathematics of this new mapping technique seems intricate, the concise nature of the derivations allows for an implementation that lends itself to real-time operation using lookup tables, a task that is also introduced in this dissertation.
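The idea of non-integer, multi-channel accumulation via lookup tables can be sketched as follows: each linear ADC channel maps to a fractional position on the log axis, and each count is split between the two nearest output bins so that totals are preserved. This is a hedged illustration, not the dissertation's patented mapping, and all sizes are assumed.

```python
import math

def build_log_lut(n_linear=1024, n_log=256, decades=4):
    # Precomputed lookup table: fractional log-channel position for each
    # 1-based linear channel (assumed sizes, illustrative only).
    lut = []
    for ch in range(1, n_linear + 1):
        pos = n_log * math.log10(ch) / decades
        lut.append(min(pos, n_log - 1))
    return lut

def accumulate(events, lut, n_log=256):
    hist = [0.0] * n_log
    for ch in events:                  # ch is a 1-based linear channel
        pos = lut[ch - 1]
        lo = int(pos)
        frac = pos - lo
        hist[lo] += 1 - frac           # split the count between the two
        if lo + 1 < n_log:             # nearest log bins, so totals (and
            hist[lo + 1] += frac       # hence statistics) are preserved
    return hist

lut = build_log_lut()
hist = accumulate([1, 10, 100, 1000] * 5, lut)
```

Splitting counts fractionally, rather than rounding each event to a single integer output channel, is what avoids the picket-fence gaps and pile-ups that integer binning produces.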

Relevância: 30.00%

Resumo:

Recent research has indicated that the pupil diameter (PD) in humans varies with their affective states. However, this signal has not been fully investigated for affective sensing purposes in human-computer interaction systems, perhaps because of the dominant separate effect of the pupillary light reflex (PLR), which shrinks the pupil when light intensity increases. In this dissertation, an adaptive interference canceller (AIC) using the H∞ time-varying (HITV) adaptive algorithm was developed to minimize the impact of the PLR on the measured pupil diameter signal. The modified pupil diameter (MPD) signal obtained from the AIC was expected to reflect primarily the pupillary affective responses (PAR) of the subject. Additional manipulation of the AIC output yielded a processed MPD (PMPD) signal, from which a classification feature, PMPDmean, was extracted. This feature was used to train and test a support vector machine (SVM) for the identification of stress states in the subject from whom the pupil diameter signal was recorded, achieving an accuracy rate of 77.78%. The advantages of affective recognition through the PD signal were verified by comparatively investigating the classification of stress and relaxation states through features derived from the simultaneously recorded galvanic skin response (GSR) and blood volume pulse (BVP) signals, with and without the PD feature. The discriminating potential of each individual feature extracted from GSR, BVP and PD was studied by analysis of its receiver operating characteristic (ROC) curve. The ROC curve for the PMPDmean feature encompassed the largest area (0.8546) of all the single-feature ROCs investigated. These encouraging results in affective sensing based on pupil diameter monitoring were obtained in spite of intermittent illumination increases purposely introduced during the experiments. They therefore confirm the benefits of using the AIC implementation with the HITV adaptive algorithm to isolate the PAR, and the potential of PD monitoring for sensing the evolving affective states of a computer user.
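The cancellation structure can be sketched with a synthetic example. The reference input is the illumination signal, the primary input is the measured pupil diameter (affective component plus light reflex), and an adaptive filter learns the illumination-to-PLR path so that the residual approximates the MPD. The H∞ time-varying algorithm itself is not reproduced here; a plain LMS update stands in for it, and all signals and coefficients are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
illum = rng.standard_normal(n)                   # reference: illumination
plr = np.convolve(illum, [0.8, 0.3], "same")     # light-reflex component
par = 0.5 * np.sin(np.arange(n) * 0.01)          # affective response (wanted)
pd_signal = par + plr                            # measured pupil diameter

# 4-tap LMS interference canceller (stand-in for the HITV algorithm):
w = np.zeros(4)
mu = 0.02
mpd = np.zeros(n)                                # residual ~ MPD signal
for i in range(3, n):
    x = illum[i - 3:i + 1][::-1]                 # latest 4 reference samples
    e = pd_signal[i] - w @ x                     # subtract estimated PLR
    w += 2 * mu * e * x                          # LMS weight update
    mpd[i] = e
# After convergence the residual tracks the affective component: the
# light-reflex interference has been cancelled from the measurement.
```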

Relevância: 30.00%

Resumo:

Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this manufacturing system configuration is observed at a facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial and various configurations of parallel processing with multiple product classes, as well as job circulation due to random part failures. In addition, appropriate correction terms obtained via regression analysis were added to the approximations in order to minimize the error gap between the analytical approximations and the simulation models. Markovian and general manufacturing systems, with multiple product classes, job circulation due to failures, and fork-join structures to model parallel processing, were studied. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to deal with more than two products. Numerical comparisons showed that the approximations perform remarkably well when the correction factors are used. On average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case. All the equations stated in the analytical formulations were implemented as a set of Matlab scripts. Using this set, operations managers of web server assembly lines, or of manufacturing or other service systems with similar characteristics, can estimate different system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation.
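To give a flavour of the building blocks behind such approximations (the study's actual formulas and fitted coefficients are not given in the abstract), the expected flow time at a single Markovian station, plus a placeholder regression correction, can be sketched as:

```python
def mm1_flow_time(lam, mu):
    # Expected flow time (sojourn time) at a single M/M/1 station:
    # W = 1 / (mu - lam), valid when traffic intensity rho = lam/mu < 1.
    assert lam < mu, "station must be stable (rho < 1)"
    return 1.0 / (mu - lam)

def corrected_flow_time(lam, mu, coeffs=(1.0, 0.0)):
    # The study adds regression-fitted correction terms to close the gap
    # between approximation and simulation; the coefficients here are
    # placeholders, not the fitted values from the paper.
    a, b = coeffs
    rho = lam / mu
    return a * mm1_flow_time(lam, mu) + b * rho

print(mm1_flow_time(8.0, 10.0))  # -> 0.5
```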

Relevância: 30.00%

Resumo:

This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel for users with visual refractive errors. The target user groups for this system are individuals with moderate to severe visual aberrations for whom conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to the user, will counteract his or her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means of eliminating some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method. To provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, with a single-lens high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems and the collected data were evaluated using statistical analysis. The experimental results revealed that the pre-compensation method produced a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected in the human subject tests. Further analysis suggests that, even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment of the pre-compensation process to yield maximum viewing enhancement.
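One way to sketch the pre-compensation idea is Wiener-style inverse filtering: divide the intended image's spectrum by the optical transfer function derived from the measured aberration, so that the eye's own blur approximately restores the image. This is a hedged stand-in, not the dissertation's full method (which also integrates spatial information and physiological constraints), and a Gaussian PSF replaces a real wavefront-derived one.

```python
import numpy as np

def precompensate(img, psf, k=1e-3):
    # Wiener-regularized inverse filter: G = H* / (|H|^2 + k), so that
    # blurring the pre-compensated image with the same PSF approximately
    # recovers the intended image (k guards near-zeros of the OTF).
    H = np.fft.fft2(np.fft.ifftshift(psf), s=img.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))

# Toy example: a Gaussian-blur PSF stands in for a measured aberration.
x = np.arange(-16, 16)
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X ** 2 + Y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
pre = precompensate(img, psf)
# "seen" models viewing the pre-compensated image through the eye's blur.
seen = np.real(np.fft.ifft2(np.fft.fft2(pre) *
               np.fft.fft2(np.fft.ifftshift(psf), s=img.shape)))
```

By construction, the viewed result `seen` lies closer to the intended image than a directly blurred image would, at every spatial frequency the regularization does not suppress.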

Relevância: 30.00%

Resumo:

Postprint

Relevância: 30.00%

Resumo:

Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.