41 results for active and passive quantum error correction


Relevance: 100.00%

Abstract:

In this paper we propose a quantum algorithm to measure the similarity between a pair of unattributed graphs. We design an experiment where the two graphs are merged by establishing a complete set of connections between their nodes and the resulting structure is probed through the evolution of continuous-time quantum walks. In order to analyze the behavior of the walks without causing wave function collapse, we base our analysis on the recently introduced quantum Jensen-Shannon divergence. In particular, we show that the divergence between the evolution of two suitably initialized quantum walks over this structure is maximum when the original pair of graphs is isomorphic. We also prove that under special conditions the divergence is minimum when the sets of eigenvalues of the Hamiltonians associated with the two original graphs have an empty intersection.
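As an illustrative companion to this abstract, here is a minimal Python sketch (assumed names, not the authors' code) of the two ingredients: evolving a continuous-time quantum walk under a graph Hamiltonian, and computing the quantum Jensen-Shannon divergence between the resulting states. Taking the adjacency matrix as the Hamiltonian and working with pure-state density matrices are simplifying assumptions; the paper's setup uses the merged graph and specific initial states.

import numpy as np
from scipy.linalg import expm

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log2 rho); zero eigenvalues contribute nothing
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def qjsd(rho, sigma):
    # Quantum Jensen-Shannon divergence between density matrices rho, sigma
    mix = 0.5 * (rho + sigma)
    return von_neumann_entropy(mix) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma))

def walk_density(adjacency, psi0, t):
    # Continuous-time quantum walk: |psi_t> = exp(-iHt)|psi_0>, here with
    # the adjacency matrix playing the role of the Hamiltonian H
    psi_t = expm(-1j * t * adjacency) @ psi0
    return np.outer(psi_t, psi_t.conj())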

Relevance: 100.00%

Abstract:

The quantum Jensen-Shannon divergence kernel [1] was recently introduced in the context of unattributed graphs, where it was shown to outperform several commonly used alternatives. In this paper, we study the separability properties of this kernel and propose a way to compute a low-dimensional kernel embedding in which the separation of the different classes is enhanced. The idea stems from the observation that multidimensional scaling embeddings of this kernel show a strong horseshoe-shaped distribution, a pattern known to arise when long-range distances are not estimated accurately. Here we propose to use Isomap to embed the graphs, using only local distance information, into a new vectorial space with higher class separability. The experimental evaluation shows the effectiveness of the proposed approach. © 2013 Springer-Verlag.
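A hedged sketch of the embedding step described here, using scikit-learn's Isomap on the distances induced by a precomputed kernel matrix. The kernel K below is a random stand-in for the quantum Jensen-Shannon divergence kernel, and the neighbourhood size is an arbitrary illustrative choice.

import numpy as np
from sklearn.manifold import Isomap

def kernel_to_distance(K):
    # Kernel-induced distance: d(i,j)^2 = K_ii + K_jj - 2*K_ij
    diag = np.diag(K)
    d2 = diag[:, None] + diag[None, :] - 2.0 * K
    return np.sqrt(np.maximum(d2, 0.0))

# Random stand-in for a precomputed graph kernel matrix
X = np.random.rand(30, 5)
K = X @ X.T

# Isomap uses only local (nearest-neighbour) distances, which is the point:
# the long-range distances are the ones the MDS horseshoe suggests are unreliable
D = kernel_to_distance(K)
Y = Isomap(n_neighbors=6, n_components=2, metric="precomputed").fit_transform(D)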

Relevance: 100.00%

Abstract:

Using prescription analyses and questionnaires, the way drug information was used by general medical practitioners during the drug adoption process was studied. Three new drugs were considered: an innovation and two 'me-too' products. The innovation was accepted by general practitioners via a contagion process, information passing among doctors. The 'me-too' preparations were accepted more slowly and by a process which did not include the contagion effect. 'Industrial' information such as direct mail was used more at the 'awareness' stage of the adoption process, while 'professional' sources of information such as articles in medical journals were used more to evaluate a new product. It was shown that 'industrial' information was preferred by older single-practice doctors who did not specialise, held a first degree only and did not dispense their own prescriptions. Doctors were divided into early- and late-prescribers by the date on which they first prescribed the innovatory drug. Their approach to drug information sources was studied further, and it was shown that the early-prescribers issued slightly more prescriptions per month, had a larger list size, read fewer journals and generally rated industrial sources of information more highly than late-prescribers. The prescribing habits of three consultant rheumatologists were analysed and compared with those of the general practitioners in the community which they served. Very little association was noted, and the influence of the consultant on the prescribing habits of general practitioners was concluded to be low. The consultant's influence was suggested to comprise two components, active and passive, the active component being the more influential. Journal advertising and advertisement placement were studied for one of the 'me-too' drugs. It was concluded that advertisement placement should be based on the reading patterns of general practitioners and not on ad-hoc data gathered by representatives, as was then the practice. A model was proposed relating the 'time to prescribe' a new drug to the variables suggested throughout this work. Four of these variables were shown to be significant: the list size, the medical age of the prescriber, the number of new preparations prescribed in a given time and the number of partners in the practice.

Relevance: 100.00%

Abstract:

Nonlinear phenomena occurring in optical fibres have many attractive features and great, but not yet fully explored, potential in signal processing. Here, we review recent progress on the use of fibre nonlinearities for the generation and shaping of optical pulses, and on the applications of advanced pulse waveforms in all-optical signal processing. Among other topics, we will discuss ultrahigh repetition-rate pulse sources, the generation of parabolic-shaped pulses in active and passive fibres, the generation of pulses with triangular temporal profiles, and coherent supercontinuum sources. The signal processing applications will span optical regeneration, linear distortion compensation, optical decision at the receiver in optical communication systems, spectral and temporal signal doubling, and frequency conversion. © 2012 IEEE.
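Pulse generation and shaping of the kind reviewed here is governed by the nonlinear Schrödinger equation. Below is a minimal split-step Fourier sketch for a passive fibre (no gain term), a standard numerical method rather than code from the review; all parameter names and values are illustrative.

import numpy as np

def split_step_nlse(A0, dz, nsteps, beta2, gamma, dt):
    # Propagate a pulse envelope A(z, t) through passive fibre via the
    # symmetric split-step Fourier method for the NLSE:
    #   dA/dz = -i*(beta2/2)*d^2A/dt^2 + i*gamma*|A|^2*A
    n = A0.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)           # angular frequency grid
    half_disp = np.exp(0.5j * beta2 * w**2 * dz / 2)  # half-step dispersion operator
    A = A0.astype(complex)
    for _ in range(nsteps):
        A = np.fft.ifft(half_disp * np.fft.fft(A))    # half step of dispersion
        A *= np.exp(1j * gamma * np.abs(A)**2 * dz)   # full step of nonlinearity
        A = np.fft.ifft(half_disp * np.fft.fft(A))    # half step of dispersion
    return A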

Relevance: 100.00%

Abstract:

In this paper we experimentally demonstrate, for the first time, a 10 Mb/s error-free visible light communications (VLC) system using polymer light-emitting diodes (PLEDs). The PLED under test is a blue emitter with ∼600 kHz bandwidth. Such a low bandwidth introduces an intersymbol interference (ISI) penalty at higher transmission speeds, and thus the requirement for an equalizer. In this work we improve on the previous literature by implementing a decision feedback equalizer rather than a linear equalizer. Considering 7% and 20% forward error correction codes, transmission speeds up to ∼12 Mb/s can be supported.
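For the equalization step, the sketch below shows a generic LMS-adapted decision feedback equalizer for binary (+/-1) symbols. Tap counts, step size and the training arrangement are illustrative assumptions, not the paper's parameters; the key difference from a linear equalizer is the feedback filter that cancels ISI using past decisions.

import numpy as np

def dfe_lms(received, n_ff=8, n_fb=4, mu=0.01, train=None):
    # Decision feedback equalizer: feedforward taps filter received samples,
    # feedback taps subtract ISI reconstructed from past decisions
    ff = np.zeros(n_ff)
    fb = np.zeros(n_fb)
    past = np.zeros(n_fb)                 # previous decisions, newest first
    out = np.zeros(len(received))
    for k in range(n_ff - 1, len(received)):
        x = received[k - n_ff + 1:k + 1][::-1]   # newest sample first
        y = ff @ x - fb @ past
        # Use a training symbol while available, otherwise decision-direct
        d = train[k] if (train is not None and k < len(train)) else (1.0 if y >= 0 else -1.0)
        e = d - y
        ff += mu * e * x                  # LMS update of feedforward taps
        fb -= mu * e * past               # LMS update of feedback taps
        past = np.roll(past, 1)
        past[0] = d
        out[k] = d
    return out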

Relevance: 100.00%

Abstract:

Nonlinearity plays a critical role in the intra-cavity dynamics of high-pulse-energy fiber lasers, and management of the intra-cavity nonlinear dynamics is the key to increasing the output pulse energy in such laser systems. Here, we examine the impact of the order of the intra-cavity elements on the energy of the generated pulses in an all-normal-dispersion mode-locked ring fiber laser cavity. In mathematical terms, the nonlinear light dynamics in the resonator make the operators corresponding to the action of the laser elements (active and passive fiber, out-coupler, saturable absorber) non-commuting, so the order of their appearance in the cavity matters. For a simple all-normal-dispersion ring fiber laser design with varying cavity length, we find the ordering of the cavity elements that leads to the maximum output pulse energy.
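In operator form (an illustrative notation, with AF, PF, OC, SA denoting active fiber, passive fiber, out-coupler and saturable absorber), one cavity round trip maps the field envelope as

A_{n+1} = \hat{O}_{\mathrm{SA}}\,\hat{O}_{\mathrm{OC}}\,\hat{O}_{\mathrm{PF}}\,\hat{O}_{\mathrm{AF}}\,A_{n},
\qquad [\hat{O}_{i},\hat{O}_{j}] \neq 0 \ \text{in general},

so permuting the elements changes the round-trip map, and hence the steady-state pulse that it converges to and its energy.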

Relevance: 100.00%

Abstract:

Purpose. Whereas many previous studies have identified the association between sustained near work and myopia, few have assessed the influence of concomitant levels of cognitive effort. This study investigates the effect of cognitive effort on near-work induced transient myopia (NITM). Methods. Subjects comprised six early-onset myopes (EOM; mean age 23.7 yrs; mean onset 10.8 yrs), six late-onset myopes (LOM; mean age 23.2 yrs; mean onset 20.0 yrs) and six emmetropes (EMM; mean age 23.8 yrs). Dynamic, monocular, ocular accommodation was measured with the Shin-Nippon SRW-5000 autorefractor. Subjects engaged passively or actively in a 5-minute arithmetic sum-checking task presented monocularly on an LCD monitor via a Badal optical system. In all conditions the task was initially located at near (4.50 D) and, immediately following the task, changed instantaneously to far (0.00 D) for a further 5 minutes. The combinations of active (A) and passive (P) cognition were randomly allocated as P:P, A:P, A:A and P:A. Results. For the initial near task, LOMs were shown to have a significantly less accurate accommodative response than either EOMs or EMMs (p < 0.001). For the far task, post hoc analyses for refraction identified EOMs as demonstrating significant NITM compared to LOMs (p < 0.05), who in turn showed greater NITM than EMMs (p < 0.001). The data show that for EOMs the level of cognitive activity operating during the near and far tasks determines the persistence of NITM; persistence is maximal when active cognition at near is followed by passive cognition at far. Conclusions. Compared with EMMs, EOMs and LOMs are particularly susceptible to NITM, such that sustained near vision reduces subsequent accommodative accuracy for far vision. It is speculated that the marked NITM found in EOM may be a consequence of the crystalline lens thinning shown to be a developmental feature of EOM. Whereas the role of small amounts of retinal defocus in myopigenesis remains equivocal, the results show that account needs to be taken of cognitive demand in assessing phenomena such as NITM.

Relevance: 100.00%

Abstract:

We describe a free-space quantum cryptography system designed to allow continuous unattended key exchanges for periods of several days and over ranges of a few kilometres. The system uses a four-laser faint-pulse transmission system running at a pulse rate of 10 MHz to generate the required four alternative polarization states. The receiver module similarly automatically selects a measurement basis and performs polarization measurements with four avalanche photodiodes. The controlling software can implement the full key exchange, including the sifting, error correction and privacy amplification required to generate a secure key.
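A toy sketch of the sifting stage for a four-state (BB84-style) faint-pulse protocol such as this one, assuming an ideal lossless channel; detector effects, error correction and privacy amplification are deliberately omitted.

import numpy as np

rng = np.random.default_rng(0)
n = 20

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal);
# each (bit, basis) pair selects one of the four polarization states / lasers
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Bob independently picks a measurement basis for every pulse
bob_bases = rng.integers(0, 2, n)

# Ideal channel: Bob's result equals Alice's bit when bases match, else random
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

# Sifting: bases are compared publicly and only matching positions are kept
sifted_key = alice_bits[match]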

Relevance: 100.00%

Abstract:

This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven-variable system comprising, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non-stationary, stationary and error-correction forms and are estimated using OLS. The imposition of Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non-stationary specification outperformed those of the stationary and error-correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error-correction specifications were generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is very weak.
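For reference, the error-correction form mentioned here is the standard vector error-correction model (VECM) of a VAR(p). In the notation below, y_t is the vector of FX rates and the matrix \Pi = \alpha\beta' carries the cointegrating relations:

\Delta y_t = \Pi\, y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i\, \Delta y_{t-i} + c + \varepsilon_t,
\qquad \Pi = \alpha \beta'.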

Relevance: 100.00%

Abstract:

The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven-dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit-Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived, and parameter subsets with no equi-output groups are determined by numerical experiments to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models. To accelerate exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first-order and second-order error derivatives with respect to a re-calibration error function is derived, implemented and tested. A distributed object environment for point-and-click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross-validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure, manageable, fault-tolerant, open, distributed, agile, Total Quality Managed, ISO 9000+ conformant, Just-in-Time manufacturing systems.
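The idea of a Jacobian correction applied to a linearly interpolated joint estimate can be pictured with a generic damped-least-squares refinement. The sketch below is an assumption-laden stand-in (generic fk/jac callables, arbitrary damping and iteration count), not the thesis's augmented-Jacobian formulation.

import numpy as np

def jacobian_correct(q0, x_target, fk, jac, iters=10, damping=1e-3):
    # Refine an interpolated joint estimate q0 toward a Cartesian target via
    # damped least-squares iterations:
    #   q <- q + J^T (J J^T + lambda*I)^{-1} (x_target - fk(q))
    q = np.asarray(q0, dtype=float).copy()
    for _ in range(iters):
        err = x_target - fk(q)          # Cartesian error at current joints
        J = jac(q)                      # manipulator Jacobian at q
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(J.shape[0]), err)
    return q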

Relevance: 100.00%

Abstract:

The accuracy of altimetrically derived oceanographic and geophysical information is limited by the precision of the radial component of the satellite ephemeris. A non-dynamic technique is proposed as a method of reducing the global radial orbit error of altimetric satellites. This involves the recovery of each coefficient of an analytically derived radial error correction through a refinement of crossover difference residuals. The crossover data are supplemented by absolute height measurements to permit the retrieval of otherwise unobservable geographically correlated and linearly combined parameters. The feasibility of the radial reduction procedure is established by application to the three-day repeat orbit of SEASAT. The concept of arc aggregates is devised as a means of extending the method to longer durations, such as the 35-day repeat period of ERS-1. A continuous orbit is effectively created by including the radial misclosure between consecutive long arcs as an infallible observation. The arc aggregate procedure is validated using a combination of three successive SEASAT ephemerides. A complete simulation of the 501-revolution, 35-day repeat orbit of ERS-1 is derived, and the recovery of the global radial orbit error over the full repeat period is successfully accomplished. The radial reduction is dependent upon the geographical locations of the supplementary direct height data. The respective influences of various sites proposed for the tracking of ERS-1 by ground-based transponders are investigated. The potential benefit to radial orbital accuracy of locating future tracking sites at high latitudes is demonstrated.
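As an illustrative aside (a common analytic form, not necessarily the thesis's exact parameterisation), the per-revolution radial orbit error is often modelled with once-per-revolution terms, and the crossover difference residual constrains it as follows, with \omega the orbital frequency and \uparrow/\downarrow marking ascending and descending passes:

\Delta r(t) = a_0 + a_1 \cos(\omega t) + a_2 \sin(\omega t),
\qquad d = \Delta r(t^{\uparrow}) - \Delta r(t^{\downarrow}).

Components common to both passes cancel in d, which is why the geographically correlated parameters are unobservable from crossovers alone and supplementary absolute height measurements are needed.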

Relevance: 100.00%

Abstract:

The introduction of micro-electronic based technology to the workplace has had a far-reaching and widespread effect on the numbers and content of jobs. The importance of the implications of new technology was recognised by the trade unions, leading to a plethora of advice and literature in the late 1970s and early 1980s, notably the TUC 'Technology and Employment' report. However, studies into the union response have consistently found an overall lack of influence by unions in the introduction of technology. Whilst the advent of new technology has coincided with an industrial relations climate of unprecedented hostility to union activity in the post-war period, there are structural weaknesses in unions in coming to terms with the process of technological change, in particular a lack of suitable technological expertise. Addressing this perceived weakness of the union response, this thesis is the outcome of a collaborative project between a national union and an academic institution. The thesis is based on detailed case studies concerning technology bargaining in the Civil Service and the response of the Civil and Public Services Association (CPSA), the union that represents lower-grade white-collar civil servants. It is demonstrated that providing expertise to union negotiators is insufficient on its own to extend union influence, and that for unions to come to terms with technology effectively and influence its development requires a re-assessment across all spheres of union activity. It is suggested that this has repercussions not only for the internal organisation and quality of union policy formation and the extent, form and nature of collective bargaining with employer representatives, but also for the relationship with consumer and interest groups outside the traditional collective bargaining forum. Three policy options are developed in the thesis, with the 'adversarial' and 'co-operative' options representing the more traditional reactive and passive forms of involvement. These are contrasted with an 'independent participative' form of involvement, a 'pro-active' policy option which utilised the expertise of the author in the CPSA's response to technological change.

Relevance: 100.00%

Abstract:

The contributions of this research are split into three distinct, but related, areas. The focus of the work is on improving the efficiency of video content distribution in networks that are liable to packet loss, such as the Internet. Initially, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the requirement for TCP to deal with any losses is greatly reduced. When real-time applications are needed, delay must be kept to a minimum and retransmissions are undesirable. A balance, therefore, must be struck between additional bandwidth and delays due to retransmissions. This is followed by the proposal of a hybrid transport, specifically for H.264-encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. The delay associated with the proposal is measured, and the potential for use as an alternative to the conventional methods of transporting video by either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported using TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated while buffer behaviour is monitored to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the metric introduced and show that the objective and subjective scores are closely correlated.
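One plausible formalisation of the buffer simulation described here (the paper's exact pause-intensity definition is not reproduced; the metric below is simply the stalled fraction of session time, with all names and defaults assumed):

def pause_intensity(arrival_times, playout_interval, startup_delay=2.0):
    # Simulate receiver playback from a packet-arrival trace: playback starts
    # after an initial buffering delay and stalls whenever the next packet
    # has not yet arrived; returns the fraction of session time spent paused
    play_clock = startup_delay        # time at which the next packet is due
    stalled = 0.0
    for arrival in arrival_times:
        if arrival > play_clock:      # packet not in buffer yet: pause
            stalled += arrival - play_clock
            play_clock = arrival
        play_clock += playout_interval  # consume one packet of media
    return stalled / play_clock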

Relevance: 100.00%

Abstract:

The small intestine poses a major barrier to the efficient absorption of orally administered therapeutics. Intestinal epithelial cells are an extremely important site for extrahepatic clearance, primarily due to prominent P-glycoprotein-mediated active efflux and the presence of cytochrome P450s. We describe a physiologically based pharmacokinetic model which incorporates geometric variations, pH alterations and descriptions of the abundance and distribution of cytochrome 3A and P-glycoprotein along the length of the small intestine. Simulations using preclinical in vitro data for model drugs were performed to establish the influence of P-glycoprotein efflux, cytochrome 3A metabolism and passive permeability on drug available for absorption within the enterocytes. The fraction of drug escaping the enterocyte (F(G)) for 10 cytochrome 3A substrates with a range of intrinsic metabolic clearances were simulated. Following incorporation of P-glycoprotein in vitro efflux ratios all predicted F(G) values were within 20% of observed in vivo F(G). The presence of P-glycoprotein increased the level of cytochrome 3A drug metabolism by up to 12-fold in the distal intestine. F(G) was highly sensitive to changes in intrinsic metabolic clearance but less sensitive to changes in intestinal drug permeability. The model will be valuable for quantifying aspects of intestinal drug absorption and distribution.
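As a point of comparison (not the authors' segmented physiologically based model), the widely used 'Q_Gut' simplification captures the same competition between permeation, gut-wall metabolism and villous blood flow:

F_G = \frac{Q_{\mathrm{gut}}}{Q_{\mathrm{gut}} + f_{u,\mathrm{gut}}\, CL_{u,\mathrm{int}}},
\qquad
Q_{\mathrm{gut}} = \frac{Q_{\mathrm{villi}}\, CL_{\mathrm{perm}}}{Q_{\mathrm{villi}} + CL_{\mathrm{perm}}},

where CL_perm is permeation clearance, CL_u,int the unbound intrinsic metabolic clearance and Q_villi the villous blood flow; low permeability or high intrinsic clearance both depress F_G, consistent with the sensitivity findings reported in the abstract.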