923 results for uncorrected refractive error
Abstract:
In this paper we exploit the nonlinear properties of SiC multilayer devices to design an optical processor for error detection that enables reliable delivery of four-wave-mixing spectral data over unreliable communication channels. The SiC optical processor is realized using a double pin/pin a-SiC:H photodetector with front and back biased optical gating elements. Visible pulsed signals are transmitted together at different bit sequences, and the combined optical signal is analyzed. The data show that the background acts as a selector that picks one or more states by splitting portions of the multiple input optical signals across the front and back photodiodes. Boolean operations such as EXOR and three-bit addition are demonstrated optically, showing that when one or all of the inputs are present the system behaves as an XOR gate representing the SUM, and when two or three inputs are on it acts as an AND gate indicating the presence of the CARRY bit. Additional parity logic operations are performed using four incoming pulsed communication channels that are transmitted and checked for errors together. As a simple example of this approach, we describe an all-optical processor for error detection and then provide an experimental demonstration of the idea. (C) 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
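The SUM/CARRY behaviour described in the abstract is that of an ordinary full adder. As a purely software illustration of that truth table (not the authors' optical implementation), a minimal sketch:

```python
def full_adder(a, b, c):
    """Return (sum_bit, carry_bit) for three one-bit inputs (0 or 1)."""
    sum_bit = a ^ b ^ c                       # XOR: high when one or all three inputs are on
    carry_bit = (a & b) | (a & c) | (b & c)   # majority: high when two or three inputs are on
    return sum_bit, carry_bit
```

Here the XOR of all three inputs reproduces the "one or all inputs present" SUM condition, and the majority function reproduces the "two or three inputs on" CARRY condition.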
Abstract:
The SiC optical processor for error detection and correction is realized using a double pin/pin a-SiC:H photodetector with front and back biased optical gating elements. The data show that the background acts as a selector that picks one or more states by splitting portions of the multiple input optical signals across the front and back photodiodes. Boolean operations such as exclusive OR (EXOR) and three-bit addition are demonstrated optically with a combination of such switching devices: when one or all of the inputs are present the output is amplified and the system behaves as an XOR gate representing the SUM, and when two or three inputs are on it acts as an AND gate indicating the presence of the CARRY bit. Additional parity logic operations are performed using the four incoming pulsed communication channels, which are transmitted and checked for errors together. As a simple example of this approach, we describe an all-optical processor for error detection and correction and then provide an experimental demonstration of this fault-tolerant reversible system in emerging nanotechnology.
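The parity checking of several channels transmitted together can be illustrated generically in software; this is a plain even-parity sketch, not the optical scheme of the paper:

```python
def even_parity(bits):
    """Even-parity bit: XOR of all channel values (each 0 or 1)."""
    p = 0
    for b in bits:
        p ^= b
    return p

def check_word(data_bits, parity_bit):
    """True when a received word passes the even-parity check."""
    return even_parity(data_bits) == parity_bit
```

A single flipped bit among the four channels changes the parity and is therefore detected, though its position is not identified by parity alone.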
Abstract:
Thick smears of human feces can be made adequate for identification of helminth eggs by means of refractive index matching. Although this effect can be obtained by simply spreading a fleck of feces on a microscope slide, a glycerol solution has been routinely used to this end. Aiming at practicability, a new quantitative technique has been developed. To enhance both sharpness and contrast of the images, a sucrose solution (refractive index = 1.49) is used, which reduces the effect of light-scattering particulates. To each slide a template-measured (38.5 mm³) fecal sample is transferred. Thus, egg counts and sensitivity evaluations are easily made.
Abstract:
This paper is mainly concerned with the tracking accuracy of Exchange Traded Funds (ETFs) listed on the London Stock Exchange (LSE) but also evaluates their performance and pricing efficiency. The findings show that ETFs offer virtually the same return but exhibit higher volatility than their benchmark. It seems that the pricing efficiency, which should come from the creation and redemption process, does not fully hold as equity ETFs show consistent price premiums. The tracking error of the funds is generally small and is decreasing over time. The risk of the ETF, daily price volatility and the total expense ratio explain a large part of the tracking error. Trading volume, fund size, bid-ask spread and average price premium or discount did not have an impact on the tracking error. Finally, it is concluded that market volatility and the tracking error are positively correlated.
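Tracking error in such studies is commonly computed as the standard deviation of the fund-minus-benchmark return differences, annualised by the square root of the number of trading periods per year; the paper's exact definition may differ. A minimal sketch:

```python
import statistics

def tracking_error(fund_returns, index_returns, periods_per_year=252):
    """Annualised tracking error: sample standard deviation of per-period
    return differences, scaled by sqrt(periods per year)."""
    diffs = [f - b for f, b in zip(fund_returns, index_returns)]
    return statistics.stdev(diffs) * periods_per_year ** 0.5
```

A fund that replicates its index exactly has a tracking error of zero; any dispersion in the return differences raises it.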
Abstract:
The aim of this study was to evaluate the efficacy of the Old Way/New Way methodology (Lyndon, 1989/2000) for the permanent correction of a consolidated and automated technical error experienced by a tennis athlete (18 years old, practising for about 6 years) in the execution of serves. Additionally, the study assessed the impact of the intervention on the athlete's psychological skills. An individualized intervention was designed using strategies aimed at producing a) a detailed analysis of the error using video images; b) an increased kinaesthetic awareness; c) a reactivation of the error memory; d) the discrimination and generalization of the correct motor action. The athlete's psychological skills were measured with a Portuguese version of the Psychological Skills Inventory for Sports (Cruz & Viana, 1993). After the intervention, the technical error was corrected with great efficacy, and an increase in the athlete's psychological skills was verified. This study demonstrates the methodology's efficacy, which is consistent with the effects of this type of intervention in other contexts.
Abstract:
Purpose: The purpose of this study was to evaluate the effect of orthokeratology for different degrees of myopia correction on the relative location of tangential (FT) and sagittal (FS) power errors across the central 70° of the visual field in the horizontal meridian. Methods: Thirty-four right eyes of 34 patients with a mean age of 25.2 ± 6.4 years were fitted with Paragon CRT (Mesa, AZ) rigid gas-permeable contact lenses to treat myopia (2.15 ± 1.26 D, range: 0.88 to 5.25 D). Axial and peripheral refraction were measured along the central 70° of the horizontal visual field with the Grand Seiko WAM5500 open-field auto-refractor. Spherical equivalent (M) as well as tangential (FT) and sagittal (FS) power errors were obtained. Analysis was stratified into three groups according to baseline spherical equivalent: Group 1 [MBaseline = 0.88 to 1.50 D; n = 11], Group 2 [MBaseline = 1.51 to 2.49 D; n = 11], and Group 3 [MBaseline = 2.50 to 5.25 D; n = 12]. Results: Spherical equivalent was significantly more myopic after treatment beyond the central 40° of the visual field (p < 0.001). FT became significantly more myopic for all groups in the nasal and temporal retina at 25° (p ≤ 0.017), 30° (p ≤ 0.007) and 35° (p ≤ 0.004) of eye rotation. The myopic change in FS was less consistent, achieving statistical significance for all groups only at 35° in the nasal and temporal retina (p ≤ 0.045). Conclusions: Orthokeratology significantly changes FT in the myopic direction beyond the central 40° of the visual field for all degrees of myopia. Changes induced by orthokeratology in relative peripheral M, FT and FS at 35° of eye rotation were significantly correlated with axial myopia at baseline. Keywords: Field
Abstract:
The objective of an audit of financial statements is the auditor's communication of a conclusion regarding the degree of reasonableness with which those statements reflect the entity's asset, economic and financial position according to the criteria set out in the applicable accounting standards. An auditor issuing an erroneous conclusion as a result of his work may incur professional, civil and criminal liability arising from claims by users of the financial statements who may have been harmed as a consequence of that erroneous conclusion. National and international accounting standards admit the existence of errors or omissions in the information contained in financial statements, provided such deviations do not lead the interested users of those statements to a decision different from the one they would take if the errors or omissions did not exist. From the foregoing follows the central importance, in audit processes, of determining the overall materiality level (the level of deviation accepted by users of the financial statements in the information they contain), as well as of allocating that level among the different components of the financial statements (allocation of tolerable error), so that auditors avoid assuming professional, civil and/or criminal liability. To date, no mathematical models are known that support, in an objective and verifiable way, the calculation of the overall materiality level and the allocation of tolerable error among the different elements making up the financial statements.
We believe that the development and integration of a model for quantifying the overall materiality level and allocating tolerable error has the following implications: 1 – It would give the auditor an element supporting the way the materiality level is quantified and the tolerable error is allocated among the components of the financial statements. 2 – It would allow auditors to reduce the possibility of incurring professional, civil and/or criminal liability as a consequence of their work. 3 – It would represent a first step towards national and international audit standard-setters adopting elements for establishing guidelines on the calculation of the materiality level and the allocation of tolerable error. 4 – It would eliminate the calculation of the materiality level as a barrier affecting the comparability of financial statements.
Abstract:
The classical central limit theorem states the uniform convergence of the distribution functions of the standardized sums of independent and identically distributed square integrable real-valued random variables to the standard normal distribution function. While first versions of the central limit theorem are already due to Moivre (1730) and Laplace (1812), a systematic study of this topic started at the beginning of the last century with the fundamental work of Lyapunov (1900, 1901). Meanwhile, extensions of the central limit theorem are available for a multitude of settings. This includes, e.g., Banach space valued random variables as well as substantial relaxations of the assumptions of independence and identical distributions. Furthermore, explicit error bounds are established and asymptotic expansions are employed to obtain better approximations. Classical error estimates like the famous bound of Berry and Esseen are stated in terms of absolute moments of the random summands and therefore do not reflect a potential closeness of the distributions of the single random summands to a normal distribution. Non-classical approaches take this issue into account by providing error estimates based on, e.g., pseudomoments. The latter field of investigation was initiated by work of Zolotarev in the 1960's and is still in its infancy compared to the development of the classical theory. For example, non-classical error bounds for asymptotic expansions seem not to be available up to now ...
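For reference, the classical statement and the Berry–Esseen bound mentioned above can be written, for i.i.d. real random variables X_1, …, X_n with mean μ, variance σ² > 0 and finite third absolute moment, as:

```latex
S_n = \sum_{i=1}^{n} X_i, \qquad
\lim_{n\to\infty} P\!\left(\frac{S_n - n\mu}{\sigma\sqrt{n}} \le x\right) = \Phi(x)
\quad \text{uniformly in } x,
```

with the Berry–Esseen error estimate

```latex
\sup_{x\in\mathbb{R}}
\left| P\!\left(\frac{S_n - n\mu}{\sigma\sqrt{n}} \le x\right) - \Phi(x) \right|
\;\le\; \frac{C\, \mathbb{E}\lvert X_1 - \mu\rvert^{3}}{\sigma^{3}\sqrt{n}},
```

where Φ is the standard normal distribution function and C is an absolute constant. As the abstract notes, this bound depends only on absolute moments and so cannot exploit closeness of the summands' distribution to a normal one.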
Abstract:
This paper discusses the fitting of a Cobb-Douglas response curve Y_i = α·X_i^β with additive error, Y_i = α·X_i^β + e_i, instead of the usual multiplicative error, Y_i = α·X_i^β·(1 + e_i). The estimation of the parameters α and β is discussed. An example is given using both types of error.
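Under the additive-error model the fit must be done by least squares on the original scale (a log-transform would only be appropriate for the multiplicative model). The paper's estimation method is not detailed here; a minimal sketch using the closed-form least-squares α for each candidate β and a grid search over β, on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x ** 0.5 + rng.normal(0.0, 0.05, x.size)   # true alpha=2, beta=0.5, additive noise

# For fixed beta, minimizing sum((y - alpha*x**beta)**2) over alpha gives
#   alpha(beta) = sum(x**beta * y) / sum(x**(2*beta))
betas = np.linspace(0.1, 1.0, 901)
sse = []
for b in betas:
    xb = x ** b
    a = (xb * y).sum() / (xb * xb).sum()
    sse.append(((y - a * xb) ** 2).sum())

best = betas[int(np.argmin(sse))]
alpha_hat = (x ** best * y).sum() / (x ** (2 * best)).sum()
```

In practice a nonlinear least-squares routine would replace the grid search; the sketch only shows that the additive model is fitted on the untransformed responses.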
Abstract:
Otto-von-Guericke-Universität Magdeburg, Faculty of Mathematics, doctoral dissertation, 2015
Abstract:
Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
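The replicate-based error estimate described above amounts to comparing genotype calls between two runs of the same sample at shared loci. Stacks has its own machinery for this; the following is only a generic sketch of a replicate-mismatch rate, with hypothetical locus names:

```python
def genotype_mismatch_rate(rep1, rep2):
    """Fraction of loci genotyped in both replicates whose calls disagree.

    rep1, rep2: dicts mapping locus name -> genotype string (None = missing).
    Loci missing in either replicate are excluded from the denominator."""
    shared = [loc for loc in rep1
              if loc in rep2 and rep1[loc] is not None and rep2[loc] is not None]
    if not shared:
        return 0.0
    mismatches = sum(1 for loc in shared if rep1[loc] != rep2[loc])
    return mismatches / len(shared)
```

Sweeping assembly parameters and picking the setting that minimizes this rate (while keeping the number of informative loci high) follows the optimization logic the abstract describes.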
Abstract:
This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
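The core SSVS idea is a spike-and-slab prior on each coefficient with a Gibbs-sampled inclusion indicator. The following is a minimal George–McCulloch-style sketch for a plain linear regression with known noise variance, not the Vector Error Correction machinery of the paper; all parameter values are illustrative:

```python
import numpy as np

def ssvs(y, X, n_iter=2000, tau0=0.1, tau1=10.0, p=0.5, sigma2=1.0, seed=0):
    """Spike-and-slab SSVS Gibbs sampler for y = X b + e, e ~ N(0, sigma2 I).

    Prior: b_j ~ N(0, tau0^2) if gamma_j = 0 (spike), N(0, tau1^2) if gamma_j = 1 (slab),
    gamma_j ~ Bernoulli(p). Returns posterior inclusion frequencies (after burn-in)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    gamma = np.ones(k, dtype=int)
    incl = np.zeros(k)
    XtX, Xty = X.T @ X, X.T @ y
    for it in range(n_iter):
        # draw b | gamma from its conjugate multivariate normal posterior
        d = np.where(gamma == 1, tau1, tau0) ** 2
        cov = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / d))
        mean = cov @ Xty / sigma2
        b = rng.multivariate_normal(mean, cov)
        # draw gamma_j | b_j from a Bernoulli given the spike/slab density ratio
        for j in range(k):
            slab = np.exp(-b[j] ** 2 / (2 * tau1 ** 2)) / tau1
            spike = np.exp(-b[j] ** 2 / (2 * tau0 ** 2)) / tau0
            prob = p * slab / (p * slab + (1 - p) * spike)
            gamma[j] = rng.random() < prob
        if it >= n_iter // 2:
            incl += gamma
    return incl / (n_iter - n_iter // 2)

# demo on synthetic data: one strong predictor, one irrelevant one
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([3.0, 0.0]) + rng.normal(size=100)
incl = ssvs(y, X)
```

Model averaging then weights models by how often their indicator pattern is visited; model selection keeps the most frequent pattern. The paper's contribution is doing this over restrictions on the cointegration space rather than over plain regression coefficients.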
Abstract:
Digital holographic microscopy (DHM) allows optical-path-difference (OPD) measurements with nanometric accuracy. OPD induced by transparent cells depends on both the refractive index (RI) of cells and their morphology. This Letter presents a dual-wavelength DHM that allows us to separately measure both the RI and the cellular thickness by exploiting an enhanced dispersion of the perfusion medium achieved by the utilization of an extracellular dye. The two wavelengths are chosen in the vicinity of the absorption peak of the dye, where the absorption is accompanied by a significant variation of the RI as a function of the wavelength.
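The decoupling works because, in the simplified model OPD_i = t · (n_cell − n_medium,i), measuring at two wavelengths with different (dye-enhanced) medium indices gives two equations in the two unknowns t and n_cell. A minimal sketch of that algebra, assuming the cell index is effectively the same at both wavelengths (all numbers below are hypothetical):

```python
def decouple_ri_thickness(opd1, opd2, nm1, nm2):
    """Solve OPD_i = t * (nc - nm_i), i = 1, 2, for cell RI nc and thickness t.

    Subtracting the two equations eliminates nc:
        opd1 - opd2 = t * (nm2 - nm1)  ->  t = (opd1 - opd2) / (nm2 - nm1)
    and back-substitution gives nc = opd1 / t + nm1."""
    t = (opd1 - opd2) / (nm2 - nm1)
    nc = opd1 / t + nm1
    return nc, t
```

The stronger the medium dispersion between the two wavelengths (nm2 − nm1), the better conditioned the division, which is why the wavelengths are chosen near the dye's absorption peak.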