32 results for k-Error linear complexity

at University of Queensland eSpace - Australia


Relevance:

40.00%

Publisher:

Abstract:

A quantum circuit implementing 5-qubit quantum-error correction on a linear-nearest-neighbor architecture is described. The canonical decomposition is used to construct fast and simple gates that incorporate the necessary swap operations allowing the circuit to achieve the same depth as the current least depth circuit. Simulations of the circuit's performance when subjected to discrete and continuous errors are presented. The relationship between the error rate of a physical qubit and that of a logical qubit is investigated with emphasis on determining the concatenated error correction threshold.
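
As an illustrative aside (not taken from the paper), the concatenation threshold mentioned above is usually summarised by a simple scaling law: if one level of encoding maps a physical error rate p to roughly c·p², the threshold is p_th = 1/c and k levels of concatenation give p_L ≈ p_th·(p/p_th)^(2^k). A minimal Python sketch with an assumed threshold value:

```python
# Illustrative only (not the paper's circuit): standard concatenated
# error-correction scaling.  One level of encoding maps a physical error
# rate p to roughly c * p**2, so the threshold is p_th = 1/c and k levels
# give p_L(k) = p_th * (p / p_th) ** (2 ** k).

def logical_error_rate(p, p_threshold, levels):
    """Logical error rate after `levels` rounds of concatenation."""
    return p_threshold * (p / p_threshold) ** (2 ** levels)

# Example with an assumed threshold of 1e-4 and a physical error rate of 1e-5
for k in range(4):
    print(k, logical_error_rate(1e-5, 1e-4, k))
```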

Relevance:

40.00%

Publisher:

Abstract:

Although the n-back task has been widely applied to neuroimaging investigations of working memory (WM), the role of practice effects on behavioural performance of this task has not yet been investigated. The current study aimed to investigate the effects of task complexity and familiarity on the n-back task. Seventy-seven participants (39 male, 38 female) completed a visuospatial n-back task four times, twice in two testing sessions separated by a week. Participants were required to remember either the first, second or third (n-back) most recent letter position in a continuous sequence and to indicate whether the current item matched or did not match the remembered position. A control task, with no working memory requirements, required participants to match to a predetermined stimulus position. In both testing sessions, reaction time (RT) and error rate increased with increasing WM load. An exponential slope for RTs in the first session indicated dual-task interference at the 3-back level. However, a linear slope in the second session indicated a reduction of dual-task interference. Attenuation of interference in the second session suggested a reduction in executive demands of the task with practice. This suggested that practice effects occur within the n-back task and need to be controlled for in future neuroimaging research using the task.
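
A hypothetical sketch of the slope comparison described above: fit both a linear and an exponential model to mean reaction time as a function of n-back load and compare the residual error (all RT values below are invented for illustration).

```python
# Hypothetical illustration of the slope comparison above: fit linear and
# exponential models to mean reaction time (RT) versus n-back load and
# compare residual error.  The RT values are invented.
import numpy as np
from scipy.optimize import curve_fit

load = np.array([0.0, 1.0, 2.0, 3.0])            # control, 1-, 2-, 3-back
rt_ms = np.array([520.0, 610.0, 740.0, 980.0])   # hypothetical mean RTs (ms)

def linear(x, a, b):
    return a + b * x

def exponential(x, a, b, c):
    return a + b * np.exp(c * x)

for name, model, p0 in [("linear", linear, (500.0, 100.0)),
                        ("exponential", exponential, (500.0, 50.0, 0.5))]:
    params, _ = curve_fit(model, load, rt_ms, p0=p0, maxfev=10000)
    sse = np.sum((rt_ms - model(load, *params)) ** 2)
    print(f"{name}: sum of squared residuals = {sse:.1f}")
```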

Relevance:

40.00%

Publisher:

Abstract:

One of the most significant challenges facing the development of linear optics quantum computing (LOQC) is mode mismatch, whereby photon distinguishability is introduced within circuits, undermining quantum interference effects. We examine the effects of mode mismatch on the parity (or fusion) gate, the fundamental building block in several recent LOQC schemes. We derive simple error models for the effects of mode mismatch on its operation, and relate these error models to current fault-tolerant-threshold estimates.
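
The abstract does not state the error model itself, so the following is only a toy illustration of why mode overlap matters for gates built on two-photon interference: for two photons with mode-overlap amplitude gamma meeting at a 50:50 beam splitter, the coincidence probability is (1 - |gamma|²)/2, rising from the ideal Hong-Ou-Mandel value of zero as distinguishability is introduced.

```python
# Toy model only (not the paper's derivation): photon distinguishability
# degrades two-photon interference.  For mode-overlap amplitude gamma the
# coincidence probability behind a 50:50 beam splitter is (1 - |gamma|**2)/2;
# gamma = 1 gives the ideal Hong-Ou-Mandel dip, gamma = 0 gives 1/2.

def coincidence_probability(mode_overlap):
    return (1.0 - abs(mode_overlap) ** 2) / 2.0

for gamma in (1.0, 0.99, 0.9, 0.0):
    print(f"overlap {gamma:.2f} -> coincidence probability {coincidence_probability(gamma):.4f}")
```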

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the non-linear bending behaviour of functionally graded plates that are bonded with piezoelectric actuator layers and subjected to transverse loads and a temperature gradient based on Reddy's higher-order shear deformation plate theory. The von Karman-type geometric non-linearity, piezoelectric and thermal effects are included in mathematical formulations. The temperature change is due to a steady-state heat conduction through the plate thickness. The material properties are assumed to be graded in the thickness direction according to a power-law distribution in terms of the volume fractions of the constituents. The plate is clamped at two opposite edges, while the remaining edges can be free, simply supported or clamped. Differential quadrature approximation in the X-axis is employed to convert the partial differential governing equations and the associated boundary conditions into a set of ordinary differential equations. By choosing the appropriate functions as the displacement and stress functions on each nodal line and then applying the Galerkin procedure, a system of non-linear algebraic equations is obtained, from which the non-linear bending response of the plate is determined through a Picard iteration scheme. Numerical results for zirconia/aluminium rectangular plates are given in dimensionless graphical form. The effects of the applied actuator voltage, the volume fraction exponent, the temperature gradient, as well as the characteristics of the boundary conditions are also studied in detail. Copyright (C) 2004 John Wiley & Sons, Ltd.
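
For reference, a minimal sketch of the power-law grading referred to above, assuming the common convention Vc(z) = (z/h + 1/2)^n and a rule-of-mixtures effective property; the numerical values are only nominal.

```python
# Sketch under an assumed (standard) convention: ceramic volume fraction
# Vc(z) = (z/h + 1/2)**n through the thickness z in [-h/2, h/2], effective
# property by the rule of mixtures P(z) = (Pc - Pm) * Vc(z) + Pm.
# Moduli below are nominal values for zirconia and aluminium.
import numpy as np

h = 0.01            # plate thickness (m), illustrative
n = 2.0             # volume-fraction exponent
E_ceramic = 151e9   # zirconia Young's modulus (Pa), nominal
E_metal = 70e9      # aluminium Young's modulus (Pa), nominal

z = np.linspace(-h / 2, h / 2, 5)
Vc = (z / h + 0.5) ** n
E_eff = (E_ceramic - E_metal) * Vc + E_metal
for zi, Ei in zip(z, E_eff):
    print(f"z = {zi:+.4f} m  ->  E = {Ei / 1e9:6.1f} GPa")
```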

Relevance:

30.00%

Publisher:

Abstract:

The use of presence/absence data in wildlife management and biological surveys is widespread. There is a growing interest in quantifying the sources of error associated with these data. We show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models using simulated data. Then we introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, which permits the estimation of the rate of false-negative errors and the correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. The method with three repeated visits eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve precision of estimates to levels comparable to that achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are less than or equal to 50%, greater efficiency is gained by adding more sites, whereas when error rates are >50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
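
A minimal sketch of the zero-inflated binomial likelihood described above, written with an assumed parametrisation: occupancy probability psi, per-visit detection probability p (so the per-visit false-negative rate is 1 - p), and a fixed number of repeated visits per site.

```python
# Minimal sketch of a zero-inflated binomial (ZIB) likelihood, assuming
# n_visits visits per site, occupancy probability psi and per-visit
# detection probability p (false-negative rate 1 - p).
import numpy as np
from scipy.stats import binom

def zib_log_likelihood(detections, n_visits, psi, p):
    """Log-likelihood of per-site detection counts under the ZIB model."""
    y = np.asarray(detections)
    occupied = psi * binom.pmf(y, n_visits, p)
    # A site with zero detections may also be genuinely unoccupied.
    unoccupied = np.where(y == 0, 1.0 - psi, 0.0)
    return np.sum(np.log(occupied + unoccupied))

# Example: five sites, three visits each, hypothetical detection counts
print(zib_log_likelihood([0, 0, 1, 2, 3], n_visits=3, psi=0.6, p=0.7))
```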

Relevance:

30.00%

Publisher:

Abstract:

The proteome of bovine milk is dominated by just six gene products that constitute approximately 95% of milk protein. Nonetheless, over 150 protein spots can be readily detected following two-dimensional electrophoresis of whole milk. Many of these represent isoforms of the major gene products produced through extensive posttranslational modification. Peptide mass fingerprinting of in-gel tryptic digests (using matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) in reflectron mode with alpha-cyano-4-hydroxycinnamic acid as the matrix) identified 10 forms of κ-casein with isoelectric point (pI) values from 4.47 to 5.81, but could not distinguish between them. MALDI-TOF MS in linear mode, using sinapinic acid as the matrix, revealed a large tryptic peptide (mass > 5990 Da) derived from the C-terminus that contained all the known sites of genetic variance, phosphorylation and glycosylation. Two genetic variants present as singly or doubly phosphorylated forms could be distinguished using mass data alone. Glycoforms containing a single acidic tetrasaccharide were also identified. The differences in electrophoretic mobility of these isoforms were consistent with the addition of the acidic groups. While more extensively glycosylated forms were also observed, substantial loss of N-acetylneuraminic acid from the glycosyl group was evident in the MALDI spectra such that ions corresponding to the intact glycopeptide were not observed and assignment of the glycoforms was not possible. However, by analysing the pI shifts observed on the two-dimensional gels in conjunction with the MS data, the number of N-acetylneuraminic acid residues, and hence the glycoforms present, could be determined.

Relevance:

30.00%

Publisher:

Abstract:

We describe a scheme for quantum-error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn Phys. Rev. A 65, 042301 (2001)], is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.
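
For a sense of scale only, the textbook three-qubit bit-flip code with ideal projective syndrome measurement (not the feedback/weak-measurement scheme of the paper) gives a logical error probability of 3p²(1-p) + p³ for independent flip probability p per qubit:

```python
# For scale only: the textbook three-qubit bit-flip code with ideal
# projective syndrome measurement (not the feedback scheme above).  A
# logical error needs two or more independent flips of probability p.

def bitflip_logical_error(p):
    return 3 * p**2 * (1 - p) + p**3

for p in (0.1, 0.01, 0.001):
    print(f"p = {p:g}  ->  p_logical = {bitflip_logical_error(p):.2e}")
```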

Relevance:

30.00%

Publisher:

Abstract:

We demonstrate a quantum error correction scheme that protects against accidental measurement, using a parity encoding where the logical state of a single qubit is encoded into two physical qubits using a nondeterministic photonic controlled-NOT gate. For the single-qubit input states |0⟩, |1⟩, |0⟩ ± |1⟩, and |0⟩ ± i|1⟩, our encoder produces the appropriate two-qubit encoded state with an average fidelity of 0.88 ± 0.03 and the single-qubit decoded states have an average fidelity of 0.93 ± 0.05 with the original state. We are able to decode the two-qubit state (up to a bit flip) by performing a measurement on one of the qubits in the logical basis; we find that the 64 one-qubit decoded states arising from 16 real and imaginary single-qubit superposition inputs have an average fidelity of 0.96 ± 0.03.

Relevance:

30.00%

Publisher:

Abstract:

The diffusion of hexane, heptane, octane, and decane in nanoporous MCM-41 silica at various temperatures is investigated by the zero-length-column method. The diffusion coefficients are derived by a complete-time-range analysis of desorption curves at different purge flow rates and temperatures. The results show that the calculated low-coverage diffusivity values decrease monotonically, and the derived Henry's law constants increase, as the carbon number of paraffins increases. The study reveals that transport is strongly influenced by intracrystalline diffusion and dominated by the sorbate-sorbent interaction. The diffusion activation energy and adsorption isosteric heat at zero loading increase monotonically with the carbon number of linear paraffins, but their ratio is essentially constant for each adsorbate compound.
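
A sketch of the Arrhenius analysis implied by the temperature dependence above: the activation energy follows from a linear fit of ln D against 1/T, since D(T) = D0·exp(-Ea/RT). The diffusivity values below are hypothetical.

```python
# Sketch of an Arrhenius fit: activation energy from the slope of ln(D)
# against 1/T, since D(T) = D0 * exp(-Ea / (R * T)).  Diffusivities are
# hypothetical, not the paper's data.
import numpy as np

R = 8.314                                            # J mol^-1 K^-1
T = np.array([303.0, 323.0, 343.0, 363.0])           # temperatures (K)
D = np.array([2.0e-13, 4.5e-13, 9.0e-13, 1.7e-12])   # diffusivities (m^2/s)

slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea = -slope * R
print(f"activation energy ~ {Ea / 1000:.1f} kJ/mol, D0 ~ {np.exp(intercept):.2e} m^2/s")
```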

Relevance:

30.00%

Publisher:

Abstract:

We explore both the rheology and complex flow behavior of monodisperse polymer melts. Adequate quantities of monodisperse polymer were synthesized in order that both the materials' rheology and microprocessing behavior could be established. In parallel, we employ a molecular theory for the polymer rheology that is suitable for comparison with experimental rheometric data and numerical simulation for microprocessing flows. The model is capable of matching both shear and extensional data with minimal parameter fitting. Experimental data for the processing behavior of monodisperse polymers are presented for the first time as flow birefringence and pressure difference data obtained using a Multipass Rheometer with an 11:1 constriction entry and exit flow. Matching of experimental processing data was obtained using the constitutive equation with the Lagrangian numerical solver, FLOWSOLVE. The results show the direct coupling between molecular constitutive response and macroscopic processing behavior, and differentiate flow effects that arise separately from orientation and stretch. (c) 2005 The Society of Rheology.

Relevance:

30.00%

Publisher:

Abstract:

Specific cutting energy (SE) has been widely used to assess rock cuttability for mechanical excavation purposes. Some prediction models have been developed for SE by correlating rock properties with SE values. However, some textural and compositional rock parameters, i.e., texture coefficient and feldspar, mafic, and felsic mineral contents, were not considered. The present study investigates the effects of these previously ignored rock parameters, along with engineering rock properties, on SE. Mineralogical and petrographic analyses, rock mechanics tests, and linear rock cutting tests were performed on sandstone samples taken from sites around Ankara, Turkey. Relationships between SE and rock properties were evaluated using bivariate correlation and linear regression analyses. The tests and subsequent analyses revealed that the texture coefficient and feldspar content of sandstones affected rock cuttability, evidenced by significant correlations between these parameters and SE at a 90% confidence level. Felsic and mafic mineral contents of sandstones did not exhibit any statistically significant correlation with SE. Cementation coefficient, effective porosity, and pore volume had good correlations with SE. Poisson's ratio, Brazilian tensile strength, Shore scleroscope hardness, Schmidt hammer hardness, dry density, and point load strength index showed very strong linear correlations with SE at confidence levels of 95% and above, all of which were also found suitable for predicting SE individually, based on the results of regression analysis, ANOVA, Student's t-tests, and R² values. Poisson's ratio exhibited the highest correlation with SE and appears to be the most reliable single predictor of SE in sandstones.
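
To illustrate the bivariate regression step described above, a minimal sketch fitting SE against Poisson's ratio and reporting R²; all numbers are hypothetical, not the study's data.

```python
# Illustration of the bivariate regression step: a simple linear fit of
# specific cutting energy (SE) against Poisson's ratio with an R**2 value.
# All numbers are hypothetical, not the study's measurements.
import numpy as np
from scipy import stats

poissons_ratio = np.array([0.18, 0.21, 0.24, 0.27, 0.30, 0.33])
se_values = np.array([6.1, 7.4, 8.2, 9.5, 10.3, 11.8])   # hypothetical SE

fit = stats.linregress(poissons_ratio, se_values)
print(f"SE = {fit.intercept:.2f} + {fit.slope:.2f} * nu,  R^2 = {fit.rvalue ** 2:.3f}")
```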

Relevance:

30.00%

Publisher:

Abstract:

Very few empirically validated interventions for improving metacognitive skills (i.e., self-awareness and self-regulation) and functional outcomes have been reported. This single-case experimental study presents JM, a 36-year-old man with a very severe traumatic brain injury (TBI) who demonstrated long-term awareness deficits. Treatment at four years post-injury involved a metacognitive contextual intervention based on a conceptualization of neuro-cognitive, psychological, and socio-environmental factors contributing to his awareness deficits. The 16-week intervention targeted error awareness and self-correction in two real-life settings: (a) cooking at home and (b) volunteer work. Outcome measures included behavioral observation of error behavior and standardized awareness measures. Relative to baseline performance in the cooking setting, JM demonstrated a 44% reduction in error frequency and increased self-correction. Although no spontaneous generalization was evident in the volunteer work setting, specific training in this environment led to a 39% decrease in errors. JM later gained paid employment and received brief metacognitive training in his work environment. JM's global self-knowledge of deficits assessed by self-report was unchanged after the program. Overall, the study provides preliminary support for a metacognitive contextual approach to improve error awareness and functional outcome in real-life settings.

Relevance:

30.00%

Publisher:

Abstract:

The study reported in this article is part of a large-scale study investigating syntactic complexity in second language (L2) oral data in commonly taught foreign languages (English, German, Japanese, and Spanish; Ortega, Iwashita, Rabie, & Norris, in preparation). In this article, preliminary findings of the analysis of the Japanese data are reported. Syntactic complexity, which is referred to as syntactic maturity or the use of a range of forms with degrees of sophistication (Ortega, 2003), has long been of interest to researchers in L2 writing. In L2 speaking, researchers have examined syntactic complexity in learner speech in the context of pedagogic intervention (e.g., task type, planning time) and the validation of rating scales. In these studies complexity is examined using measures commonly employed in L2 writing studies. It is assumed that these measures are valid and reliable, but few studies explain what syntactic complexity measures actually examine. The language studied is predominantly English, and little is known about whether the findings of such studies can be applied to languages that are typologically different from English. This study examines how syntactic complexity measures relate to oral proficiency in Japanese as a foreign language. An in-depth analysis of speech samples from 33 learners of Japanese is presented. The results of the analysis are compared across proficiency levels and cross-referenced with 3 other proficiency measures used in the study. As in past studies, the length of T-units and the number of clauses per T-unit are found to be the best predictors of learner proficiency; these measures also had a significant linear relation with independent oral proficiency measures. These results are discussed in light of the notion of syntactic complexity and the interfaces between second language acquisition and language testing.

Relevance:

30.00%

Publisher:

Abstract:

Standard factorial designs sometimes may be inadequate for experiments that aim to estimate a generalized linear model, for example, for describing a binary response in terms of several variables. A method is proposed for finding exact designs for such experiments that uses a criterion allowing for uncertainty in the link function, the linear predictor, or the model parameters, together with a design search. Designs are assessed and compared by simulation of the distribution of efficiencies relative to locally optimal designs over a space of possible models. Exact designs are investigated for two applications, and their advantages over factorial and central composite designs are demonstrated.
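
The paper's criterion allows for model uncertainty; the sketch below shows only the simpler locally D-optimal ingredient it builds on: for a logistic model with linear predictor η = Xβ, the information matrix is X'WX with W = diag(p(1-p)), and candidate designs are compared by its log-determinant. The design points and parameter values are assumptions, not the paper's applications.

```python
# Sketch of the locally D-optimal ingredient only: for a logistic model with
# eta = X @ beta, the information matrix is X' W X, W = diag(p * (1 - p)),
# and candidate designs are compared by its log-determinant.  The design
# points and the parameter values beta are assumptions.
import numpy as np
from itertools import product

def log_d_criterion(points, beta):
    X = np.column_stack([np.ones(len(points)), points])   # intercept + factors
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    M = X.T @ (X * (p * (1 - p))[:, None])                # X' W X
    return np.linalg.slogdet(M)[1]

beta = np.array([0.5, 1.0, -1.5])                          # assumed local values
factorial = np.array(list(product([-1.0, 1.0], repeat=2))) # 2^2 factorial
shrunken = 0.7 * factorial                                  # an alternative design
print("2^2 factorial:", log_d_criterion(factorial, beta))
print("shrunken 0.7x:", log_d_criterion(shrunken, beta))
```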