929 results for Quantum Error-correction
Abstract:
This research aims to investigate the Hedge Effectiveness and Optimal Hedge Ratio for the futures markets of cattle, coffee, ethanol, corn and soybean. The paper estimates the Optimal Hedge Ratio and Hedge Effectiveness through multivariate GARCH models with error correction, addressing the possible phenomenon of an Optimal Hedge Ratio differential between the crop and intercrop periods. The Optimal Hedge Ratio should be larger in the intercrop period because of the uncertainty related to a possible supply shock (LAZZARINI, 2010). Among the futures contracts studied in this research, the coffee, ethanol and soybean contracts had not yet been the object of an investigation of this phenomenon. Furthermore, the corn and ethanol contracts had not been the object of research dealing with dynamic hedging strategies. This paper distinguishes itself by including the GARCH model with error correction, which had never been considered when the possible Optimal Hedge Ratio differential between the crop and intercrop periods was investigated. The commodity quotations on the BM&FBOVESPA futures market were used as futures prices and the CEPEA index as the spot price, with daily frequency, from May 2010 to June 2013 for cattle, coffee, ethanol and corn, and up to August 2012 for soybean. Similar results were obtained for all the commodities: there is a long-term relationship between the spot and futures markets, bidirectional causality between the spot and futures markets for cattle, coffee, ethanol and corn, and unidirectional causality from the soybean futures price to the spot price. The Optimal Hedge Ratio was estimated with three different strategies: linear regression by ordinary least squares (OLS), a diagonal BEKK-GARCH model, and a diagonal BEKK-GARCH model with an intercrop dummy. The OLS regression pointed to hedge inefficiency, since the Optimal Hedge Ratio it produced was too low. The second model represents the dynamic hedging strategy, which captured time variation in the Optimal Hedge Ratio. The last hedging strategy did not detect an Optimal Hedge Ratio differential between the crop and intercrop periods; therefore, contrary to what was expected, the investor does not need to increase his/her position in the futures market during the intercrop period.
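In the minimum-variance framework shared by the three strategies above, the static (OLS) Optimal Hedge Ratio is the covariance of spot and futures returns divided by the variance of futures returns, and Hedge Effectiveness is the resulting variance reduction. A minimal Python sketch of this baseline, with synthetic daily returns and assumed variable names (not the author's code or data):

    import numpy as np

    def optimal_hedge_ratio(spot_ret, fut_ret):
        """Static OLS hedge ratio h* = Cov(dS, dF) / Var(dF) and
        hedge effectiveness HE = 1 - Var(dS - h*.dF) / Var(dS)."""
        cov = np.cov(spot_ret, fut_ret, ddof=1)
        h_star = cov[0, 1] / cov[1, 1]
        hedged = spot_ret - h_star * fut_ret
        he = 1.0 - np.var(hedged, ddof=1) / np.var(spot_ret, ddof=1)
        return h_star, he

    # Illustration with synthetic daily log-returns
    rng = np.random.default_rng(0)
    fut = rng.normal(0.0, 0.010, 750)
    spot = 0.9 * fut + rng.normal(0.0, 0.004, 750)
    print(optimal_hedge_ratio(spot, fut))

The BEKK-GARCH strategies in the abstract replace this constant covariance matrix with a conditional one, so the hedge ratio varies day by day.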
Abstract:
Master's dissertation, Oncobiology - Molecular Mechanisms of Cancer, Department of Biomedical Sciences and Medicine, Universidade do Algarve, 2016
Abstract:
The first chapter provides evidence that aggregate Research and Development (R&D) investment drives a persistent component in productivity growth and that this component embodies a risk priced in financial markets. In a semi-endogenous growth model, the component is identified with R&D in excess of its equilibrium level and can be approximated by the Error Correction Term in the cointegration between R&D and Total Factor Productivity. Empirically, the component turns out to be well defined and satisfies all key theoretical predictions: it exhibits the appropriate persistence, it forecasts productivity growth, and it is associated with a cross-sectional risk premium. The CAPM is the most foundational model in financial economics, but it is known to empirically underestimate the expected returns of low-risk assets and to overestimate those of high-risk assets. The second chapter studies how the omission of risks and funding tightness jointly contribute to explaining this anomaly, with the former affecting the definition of assets' riskiness and the latter affecting how risk is remunerated. Theoretically, the two effects are shown to counteract each other. Empirically, the spread related to binding leverage constraints is found to be significant, at 2% per year. Nonetheless, the average returns of portfolios that exploit this anomaly are found mostly to reflect omitted risks, in contrast to how such portfolios have been employed in the previous literature. The third chapter studies how the 'sustainability' of assets affects discount rates, an effect that is intrinsically mediated by the risk profile of the assets themselves. This has implications for the assessment of the sustainability-related spread and for hedging changes in sustainability concern. The mechanism is tested on the ESG-score dimension with US data, with inconclusive evidence regarding the existence of an ESG-related premium in the first place. Moreover, the risk profile of the long-short ESG portfolio is unlikely, for the time being, to affect the sign of its average returns relative to the sustainability spread.
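As a concrete illustration of the Error Correction Term used in the first chapter, a textbook Engle-Granger two-step construction is sketched below; the series names, the use of log levels, and the single-regressor setup are assumptions, not the author's specification:

    import numpy as np

    def error_correction_term(log_rd, log_tfp):
        """Step 1: cointegrating regression log_rd = a + b*log_tfp + u.
        The residual u_t proxies R&D in excess of its long-run
        (equilibrium) relation with Total Factor Productivity."""
        b, a = np.polyfit(log_tfp, log_rd, 1)
        return log_rd - (a + b * log_tfp)

    def growth_on_lagged_ect(log_rd, log_tfp):
        """Step 2: regress TFP growth on the lagged ECT; the abstract's
        claim is that this component forecasts productivity growth."""
        ect = error_correction_term(log_rd, log_tfp)
        growth = np.diff(log_tfp)
        slope, intercept = np.polyfit(ect[:-1], growth, 1)
        return slope, intercept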
Abstract:
The aim of this study was to evaluate the efficacy of the Old Way/New Way methodology (Lyndon, 1989/2000) for the permanent correction of a consolidated and automated technical error in the serve of a tennis athlete (18 years old, with about 6 years of practice). Additionally, the study assessed the impact of the intervention on the athlete's psychological skills. An individualized intervention was designed using strategies aimed at producing a) a detailed analysis of the error using video images; b) an increased kinaesthetic awareness; c) a reactivation of the memory of the error; d) the discrimination and generalization of the correct motor action. The athlete's psychological skills were measured with a Portuguese version of the Psychological Skills Inventory for Sports (Cruz & Viana, 1993). After the intervention, the technical error was corrected with great efficacy, and an increase in the athlete's psychological skills was verified. This study demonstrates the methodology's efficacy, which is consistent with the effects of this type of intervention in other contexts.
Abstract:
A primary interest of this thesis is to obtain a powerful tool for determining the structural and electrical properties and the reactivity of molecules. A second interest is the study of the basis set superposition error (BSSE) in hydrogen-bonded complexes. One way to correct this error is the counterpoise (CP) correction proposed by Boys and Bernardi. Usually the counterpoise correction is applied a posteriori, on geometries that were previously optimized without it. Our goal was to obtain potential energy surfaces in which every point is corrected with CP. Such surfaces have a minimum that differs from that of the uncorrected surface, i.e., the geometric parameters are different. The curvature at this minimum is also different, and therefore the vibrational frequencies also change when they are corrected for BSSE. Once these surfaces were constructed, several complexes were studied. It was also investigated how the calculation method influences the basis set superposition error.
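For reference, the Boys-Bernardi counterpoise correction mentioned above has the standard textbook form below (not taken from the thesis itself); superscripts denote the basis set in which each fragment is computed, and all energies refer to the geometry of the complex AB:

    \Delta E_{\mathrm{int}}^{\mathrm{CP}} \;=\; E_{AB}^{AB} \;-\; E_{A}^{AB} \;-\; E_{B}^{AB},
    \qquad
    \delta_{\mathrm{BSSE}} \;=\; \bigl(E_{A}^{A} - E_{A}^{AB}\bigr) \;+\; \bigl(E_{B}^{B} - E_{B}^{AB}\bigr).

A CP-corrected potential energy surface, as constructed in the thesis, applies this correction at every point of the surface, so the minimum, and hence the geometries and vibrational frequencies, can differ from those of the a posteriori corrected surface.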
Abstract:
The scope of this study was to estimate calibrated values for dietary data obtained with the Food Frequency Questionnaire for Adolescents (FFQA) and to illustrate the effect of this approach on food consumption data. The adolescents were assessed on two occasions, with an average interval of twelve months. In 2004, 393 adolescents participated, and 289 of them were reassessed in 2005. Dietary data obtained with the FFQA were calibrated using regression coefficients estimated from the average of two 24-hour recalls (24HR) taken in a subsample. The calibrated values were similar to the 24HR reference measurement in the subsample. In both 2004 and 2005 a significant difference was observed between the average consumption levels of the FFQA before and after calibration for all nutrients. With the use of calibrated data, the proportion of schoolchildren with fiber intake below the recommended level increased. Therefore, calibrated data can be used to obtain adjusted associations due to the reclassification of subjects within the predetermined categories.
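The calibration described above amounts to a regression-calibration step: in the subsample, the mean of the two 24HR recalls is regressed on the FFQA value for each nutrient, and the fitted equation is then applied to every FFQA observation. A minimal sketch under those assumptions (variable names are illustrative, not from the study):

    import numpy as np

    def calibrate_ffqa(ffqa_sub, mean_24hr_sub, ffqa_all):
        """Regression calibration: fit 24HR = a + b*FFQA in the subsample,
        then return calibrated intakes a + b*FFQA for the full sample."""
        b, a = np.polyfit(ffqa_sub, mean_24hr_sub, 1)
        return a + b * np.asarray(ffqa_all)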
Abstract:
The aim of this study was to obtain the exact value of the keratometric index (nkexact) and to clinically validate a variable keratometric index (nkadj) that minimizes this error. Methods: The nkexact value was determined by setting the difference (ΔPc) between keratometric corneal power (Pk) and Gaussian corneal power (PcGauss) equal to 0. The nkadj was defined as the value associated with an equivalent magnitude of ΔPc for the extreme values of the posterior corneal radius (r2c) at each anterior corneal radius value (r1c). This nkadj was used for the calculation of the adjusted corneal power (Pkadj). Values of r1c ∈ (4.2, 8.5) mm and r2c ∈ (3.1, 8.2) mm were considered. Differences of the True Net Power with PcGauss, Pkadj, and Pk(1.3375) were calculated in a clinical sample of 44 eyes with keratoconus. Results: nkexact ranged from 1.3153 to 1.3396 and nkadj from 1.3190 to 1.3339, depending on the eye model analyzed. All the nkadj values adjusted perfectly to 8 linear algorithms. Differences between Pkadj and PcGauss did not exceed ±0.7 D (diopters). Clinically, nk = 1.3375 was not valid in any case. Pkadj and True Net Power, as well as Pk(1.3375) and Pkadj, were statistically different (P < 0.01), whereas no differences were found between PcGauss and Pkadj (P > 0.01). Conclusions: The use of a single value of nk for the calculation of the total corneal power in keratoconus has been shown to be imprecise, leading to inaccuracies in the detection and classification of this corneal condition. Furthermore, our study shows the relevance of corneal thickness in corneal power calculations in keratoconus.
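The two corneal power definitions compared in the abstract are, in standard notation (the refractive indices shown are the usual Gullstrand-type values and are assumptions here, not values reported by the paper; e_c is the central corneal thickness):

    P_k \;=\; \frac{n_k - 1}{r_{1c}},
    \qquad
    P_c^{\mathrm{Gauss}} \;=\; \frac{n_c - 1}{r_{1c}} \;+\; \frac{n_a - n_c}{r_{2c}}
        \;-\; \frac{e_c}{n_c}\,\frac{(n_c - 1)}{r_{1c}}\,\frac{(n_a - n_c)}{r_{2c}},
    \quad n_c \approx 1.376,\; n_a \approx 1.336 .

The keratometric power Pk uses only the anterior radius and a single fictitious index nk (conventionally 1.3375), which is why neglecting r2c and corneal thickness becomes problematic in keratoconus.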
Abstract:
"Retyped October, 1964"
Abstract:
One of the most significant challenges facing the development of linear optics quantum computing (LOQC) is mode mismatch, whereby photon distinguishability is introduced within circuits, undermining quantum interference effects. We examine the effects of mode mismatch on the parity (or fusion) gate, the fundamental building block in several recent LOQC schemes. We derive simple error models for the effects of mode mismatch on its operation, and relate these error models to current fault-tolerant-threshold estimates.
Abstract:
International audience
Abstract:
Atomic charge transfer-counter polarization effects determine most of the infrared fundamental CH intensities of simple hydrocarbons: methane, ethylene, ethane, propyne, cyclopropane and allene. The quantum theory of atoms in molecules/charge-charge flux-dipole flux model predicted the values of 30 CH intensities ranging from 0 to 123 km mol⁻¹ with a root mean square (rms) error of only 4.2 km mol⁻¹ without including a specific equilibrium atomic charge term. Sums of the contributions from terms involving charge flux and/or dipole flux averaged 20.3 km mol⁻¹, about ten times larger than the average charge contribution of 2.0 km mol⁻¹. The only notable exceptions are the CH stretching and bending intensities of acetylene and two of the propyne vibrations for hydrogens bound to sp hybridized carbon atoms. Calculations were carried out at four quantum levels, MP2/6-311++G(3d,3p), MP2/cc-pVTZ, QCISD/6-311++G(3d,3p) and QCISD/cc-pVTZ. The results calculated at the QCISD level are the most accurate among the four, with root mean square errors of 4.7 and 5.0 km mol⁻¹ for the 6-311++G(3d,3p) and cc-pVTZ basis sets. These values are close to the estimated aggregate experimental error of the hydrocarbon intensities, 4.0 km mol⁻¹. The atomic charge transfer-counter polarization effect is much larger than the charge effect for the results of all four quantum levels. Charge transfer-counter polarization effects are expected to also be important in vibrations of more polar molecules for which equilibrium charge contributions can be large.
Abstract:
We report a comprehensive study of weak-localization and electron-electron interaction effects in a GaAs/InGaAs two-dimensional electron system with nearby InAs quantum dots, using measurements of the electrical conductivity with and without magnetic field. Although both effects introduce temperature-dependent corrections to the zero-magnetic-field conductivity at low temperatures, the magnetic field dependence of the conductivity is dominated by the weak-localization correction. We observed that the electron dephasing scattering rate τ_φ⁻¹, obtained from the magnetoconductivity data, is enhanced by introducing quantum dots in the structure, as expected, and obeys a linear dependence on the temperature and elastic mean free path, which is against the Fermi-liquid model. (c) 2008 American Institute of Physics. [DOI: 10.1063/1.2996034]
Abstract:
In this paper, employing the Itô stochastic Schrödinger equation, we extend Bell's beable interpretation of quantum mechanics to encompass dissipation, decoherence, and the quantum-to-classical transition through quantum trajectories. For a particular choice of the source of stochasticity, the one leading to a dissipative Lindblad-type correction to the Hamiltonian dynamics, we find that the diffusive terms in Nelson's stochastic trajectories are naturally incorporated into Bohm's causal dynamics, yielding a unified Bohm-Nelson theory. In particular, by analyzing the interference between quantum trajectories, we clearly identify the decoherence time, as estimated from the quantum formalism. We also observe the quantum-to-classical transition in the convergence of the infinite ensemble of quantum trajectories to their classical counterparts. Finally, we show that our extended beables circumvent the problems in Bohm's causal dynamics regarding stationary states in quantum mechanics.
Abstract:
The mapping, exact or approximate, of a many-body problem onto an effective single-body problem is one of the most widely used conceptual and computational tools of physics. Here, we propose and investigate the inverse map of effective approximate single-particle equations onto the corresponding many-particle system. This approach allows us to understand which interacting system a given single-particle approximation is actually describing, and how far this is from the original physical many-body system. We illustrate the resulting reverse engineering process by means of the Kohn-Sham equations of density-functional theory. In this application, our procedure sheds light on the nonlocality of the density-potential mapping of density-functional theory, and on the self-interaction error inherent in approximate density functionals.
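For context, the Kohn-Sham construction referred to above maps the interacting density onto a fictitious non-interacting system governed by the standard equations below (textbook density-functional theory, not equations specific to this paper):

    \Bigl[-\tfrac{1}{2}\nabla^{2} + v_{s}[n](\mathbf{r})\Bigr]\,\varphi_{i}(\mathbf{r})
        \;=\; \varepsilon_{i}\,\varphi_{i}(\mathbf{r}),
    \qquad
    n(\mathbf{r}) \;=\; \sum_{i=1}^{N} \lvert \varphi_{i}(\mathbf{r}) \rvert^{2}.

The reverse engineering proposed in the abstract asks the inverse question: given an approximate effective potential v_s, which interacting many-body system does it actually describe, and how far is that system from the original one.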
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1 the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated, in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that augments the V-θ state vector to include the suspicious parameters. Stage 3 validates the estimation obtained in Stage 2 and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed in tests performed on the Hydro-Québec TransÉnergie network.
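The Stage-1 Identification Index lends itself to a direct computation: for each branch, count the adjacent measurements whose normalized residuals exceed the chosen threshold and divide by the total number of adjacent measurements. A minimal sketch with illustrative (assumed) data, not tied to the authors' implementation:

    def identification_index(adjacent_norm_residuals, threshold=3.0):
        """II of a branch = (# adjacent measurements with |normalized
        residual| > threshold) / (total # of adjacent measurements)."""
        flagged = sum(1 for r in adjacent_norm_residuals if abs(r) > threshold)
        return flagged / len(adjacent_norm_residuals)

    # Normalized residuals of measurements adjacent to each branch (made-up numbers)
    residuals_by_branch = {
        "1-2": [0.4, 3.8, 4.1, 0.9],
        "2-3": [0.2, 0.7, 1.1],
    }
    suspects = {br: identification_index(res) for br, res in residuals_by_branch.items()}
    print(suspects)  # branches with a high II are flagged as suspicious for Stage 2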