817 results for Error of measurement


Relevance: 100.00%

Abstract:

Based on the generalized Huygens-Fresnel diffraction integral theory and the stationary-phase method, we analyze the influence of an elliptical manufacturing error in an axicon on diffraction-free beam patterns. The numerical simulation is compared with beam patterns photographed with a CCD camera. Theoretical simulation and experimental results indicate that the intensity of the central spot decreases as the elliptical manufacturing defect and the propagation distance increase, while the bright rings around the central spot gradually split into four or more symmetric bright spots. The experimental results fit the theoretical simulation very well. © 2008 Society of Photo-Optical Instrumentation Engineers.
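The reported trend can be explored numerically. The following is a minimal sketch, not the authors' code: it imposes the conical phase of an axicon whose contours are ellipses on a plane wave and propagates it with the standard angular-spectrum method (a convenient stand-in for the generalized Huygens-Fresnel integral). The wavelength, deflection angle, grid, and ellipticity values are illustrative assumptions.

```python
import numpy as np

N, L = 512, 4e-3                 # grid points and window size (m), assumed
wl = 633e-9                      # wavelength (m), assumed
k = 2 * np.pi / wl
x = np.linspace(-L/2, L/2, N)
X, Y = np.meshgrid(x, x)

def axicon_field(eps):
    # Conical phase with elliptical contours; eps is a hypothetical
    # ellipticity parameter (eps = 0 gives an ideal axicon).
    r = np.sqrt((X * (1 + eps))**2 + (Y / (1 + eps))**2)
    beta = np.deg2rad(0.5)       # beam deflection angle, assumed
    return np.exp(-1j * k * np.sin(beta) * r)

fx = np.fft.fftfreq(N, d=L/N)
FX, FY = np.meshgrid(fx, fx)

def propagate(U, z):
    # Angular-spectrum propagation over a distance z
    arg = np.maximum(1 - (wl * FX)**2 - (wl * FY)**2, 0.0)
    H = np.exp(1j * k * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(U) * H)

for eps in (0.0, 0.02, 0.05):
    I = np.abs(propagate(axicon_field(eps), z=0.2))**2
    print(f"eps = {eps:.2f}: on-axis intensity = {I[N//2, N//2]:.3e}")
```

Comparing the printed on-axis intensities (or imaging `I` directly) shows the central-spot weakening and ring splitting described above as eps grows.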

Relevance: 100.00%

Abstract:

An alternate combinational approach of genetic algorithm and neural network (AGANN) is presented to correct the systematic error of density functional theory (DFT) calculations. It treats the DFT calculation as a black box and models the error through external statistical information. As a demonstration, the AGANN method has been applied to correct the lattice energies from DFT calculations for 72 metal halides and hydrides. With the AGANN correction, the mean absolute relative error of the calculated lattice energies with respect to the experimental values decreases from 4.93% to 1.20% in the testing set. For comparison, a neural network approach alone reduces the mean value to 2.56%, and the common combinational approach of genetic algorithm and neural network brings it to 2.15%. The multiple linear regression method has almost no correction effect here.
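The abstract gives no implementation details, so the following is only a schematic illustration of the alternating idea: a genetic step searches the weight space of a tiny error-model network globally, and a local training step refines the current best individual. The data, network size, and hyperparameters are invented stand-ins, not the AGANN settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4 descriptors per compound -> relative DFT error
X = rng.uniform(-1, 1, (200, 4))
y = 0.05 * np.sin(X @ np.array([1.0, -0.5, 0.3, 0.8]))   # ~5% errors

H = 8                                   # hidden units (assumed)
n_w = 4 * H + H + H + 1                 # weights of a 4-H-1 network

def unpack(w):
    i = 0
    W1 = w[i:i + 4 * H].reshape(4, H); i += 4 * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H].reshape(H, 1); i += H
    return W1, b1, W2, w[i]

def predict(w, X):
    W1, b1, W2, b2 = unpack(w)
    return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

def ga_step(pop, n_keep=10, sigma=0.05):
    # Genetic step: keep the fittest, refill with mutated copies
    pop = sorted(pop, key=mse)[:n_keep]
    return pop + [p + rng.normal(0, sigma, n_w) for p in pop]

def local_training(w, lr=0.05, steps=50, eps=1e-5):
    # Crude finite-difference gradient descent standing in for backprop
    for _ in range(steps):
        g = np.array([(mse(w + eps * e) - mse(w - eps * e)) / (2 * eps)
                      for e in np.eye(n_w)])
        w = w - lr * g
    return w

pop = [rng.normal(0, 0.5, n_w) for _ in range(20)]
for cycle in range(5):              # alternate global search and local training
    pop = ga_step(pop)
    pop[0] = local_training(pop[0])
    print(f"cycle {cycle}: best MSE = {mse(pop[0]):.6f}")
```

Once trained, the network's predicted error would be subtracted from each raw DFT lattice energy; held-out compounds play the role of the testing set.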

Relevance: 100.00%

Abstract:

The ground-state structure of C(4N+2) rings is believed to exhibit a geometric transition from angle alternation (N ≤ 2) to bond alternation (N > 2). All previous density functional theory (DFT) studies of these molecules have failed to reproduce this behavior, predicting either that the transition occurs at too large a ring size or that it leads to a higher-symmetry cumulene. Employing the recently proposed perspective of delocalization error within DFT, we rationalize this failure of common density functional approximations (DFAs) and present calculations with the rCAM-B3LYP exchange-correlation functional that show an angle-to-bond-alternation transition between C(10) and C(14). The behavior exemplified here manifests itself more generally as the well-known tendency of DFAs to bias toward delocalized electron distributions, as favored by Hückel aromaticity, of which the C(4N+2) rings provide a quintessential example. An additional example is the relative energies of the C(20) bowl, cage, and ring isomers: we show that results from functionals with minimal delocalization error agree well with CCSD(T) results, in contrast to other commonly used DFAs. An unbiased DFT treatment of electron delocalization is key to reliable prediction of relative stability, and hence of the structures of complex molecules where many structure-stabilization mechanisms exist.
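A rough way to see the delocalization-error bias is to compare how different functionals rank a bond-alternated ring against its equal-bond (cumulenic) form. The sketch below, which assumes PySCF is installed, builds an approximate C10 ring with a tunable bond-length alternation and compares B3LYP against CAM-B3LYP (used here only as a readily available long-range-corrected stand-in for rCAM-B3LYP); the bond length, alternation, and basis are illustrative.

```python
import numpy as np
from pyscf import gto, dft

def ring_atoms(n=10, r0=1.28, delta=0.0):
    # Place n carbons on a circle with approximately alternating bond
    # lengths r0*(1+delta), r0*(1-delta) (small-delta approximation).
    R = r0 / (2 * np.sin(np.pi / n))            # radius of the regular ring
    steps = np.tile([1 + delta, 1 - delta], n // 2)
    steps = 2 * np.pi * steps / steps.sum()     # angular steps summing to 2*pi
    th = np.concatenate([[0.0], np.cumsum(steps)[:-1]])
    return [('C', (R * np.cos(t), R * np.sin(t), 0.0)) for t in th]

def energy(xc, delta):
    mol = gto.M(atom=ring_atoms(delta=delta), basis='sto-3g')
    mf = dft.RKS(mol)
    mf.xc = xc
    return mf.kernel()

for xc in ('b3lyp', 'camb3lyp'):
    e_cum = energy(xc, 0.00)    # equal bonds (cumulenic / delocalized)
    e_alt = energy(xc, 0.04)    # bond-alternated
    print(f"{xc}: E(alt) - E(cum) = {627.5 * (e_alt - e_cum):.2f} kcal/mol")
```

A functional biased toward delocalization will penalize the alternated geometry more strongly; this single-point scan is no substitute for full geometry optimization, but it makes the qualitative comparison cheap.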

Relevance: 100.00%

Abstract:

The concept of a “true” ground-truth map is introduced, against which the inaccuracy/error of any production map may be measured. A partition of the mapped region is defined in terms of the “residual rectification” transformation. Geometric RMS-type and geometric distortion error criteria are defined, as well as a map misclassification error criterion (the latter for both hard and fuzzy production maps). The total map error is defined as the sum, over the sets of the map partition mentioned above, of these three error components integrated over each set.
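In symbols (our notation, not the paper's), with {S_k} the sets of the partition induced by the residual-rectification transformation, the definition reads:

```latex
E_{\text{total}}
  \;=\; \sum_{k} \int_{S_k}
        \left[\, e_{\text{RMS}}(\mathbf{x})
               + e_{\text{dist}}(\mathbf{x})
               + e_{\text{class}}(\mathbf{x}) \,\right] d\mathbf{x}
```

where e_RMS, e_dist, and e_class denote the geometric RMS-type, geometric distortion, and misclassification error densities at map location x.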

Relevance: 100.00%

Abstract:

Investigations of Li-7(p,n)Be-7 reactions using Cu and CH primary and LiF secondary targets were performed using the VULCAN laser [C. N. Danson et al., J. Mod. Opt. 45, 1653 (1997)] at intensities up to 3×10^19 W cm^-2. The neutron yield was measured using CR-39 plastic track detectors and reached up to 3×10^8 sr^-1 for CH primary targets and up to 2×10^8 sr^-1 for Cu primary targets. The angular distribution of neutrons was measured at various angles and revealed a neutron distribution over 180° whose anisotropy was greater than the measurement error. It may be possible to exploit such reactions on high-repetition-rate, table-top lasers for neutron radiography. © 2004 American Institute of Physics.
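For orientation, converting a track count on a CR-39 piece into a yield per steradian is a simple solid-angle calculation. All numbers below are hypothetical placeholders (the paper reports only the final yields), chosen to land near the quoted order of magnitude:

```python
# Hypothetical CR-39 yield estimate; every value here is an assumption.
tracks = 4.0e4            # neutron tracks counted on the detector
efficiency = 0.5          # assumed track-registration efficiency
area = 1.0e-4             # detector area (m^2), assumed
distance = 0.5            # detector distance from target (m), assumed

solid_angle = area / distance**2                  # sr, small-angle approx.
yield_per_sr = tracks / efficiency / solid_angle
print(f"neutron yield ~ {yield_per_sr:.1e} sr^-1")  # ~2e8 sr^-1
```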

Relevance: 100.00%

Abstract:

Policy-based network management (PBNM) paradigms provide an effective tool for end-to-end resource management in converged next-generation networks by enabling unified, adaptive, and scalable solutions that integrate and coordinate the diverse resource management mechanisms associated with heterogeneous access technologies. In our project, a PBNM framework for end-to-end QoS management in converged networks is being developed. The framework consists of distributed functional entities managed within a policy-based infrastructure to provide QoS and resource management in converged networks. Within any QoS control framework, an effective admission control scheme is essential for maintaining the QoS of flows present in the network. Measurement-based admission control (MBAC) and parameter-based admission control (PBAC) are two commonly used approaches. This paper presents the implementation and analysis of various measurement-based admission control schemes developed within a Java-based prototype of our policy-based framework. The evaluation is made with real traffic flows on a Linux-based experimental testbed where the current prototype is deployed. Our results show that, unlike classic MBAC-only or PBAC-only schemes, a hybrid approach that combines both methods can simultaneously improve admission control and network utilization efficiency.
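To make the hybrid idea concrete, here is a minimal sketch of an admission check that requires both the parameter-based budget (declared rates) and the measurement-based estimate (current load) to leave headroom. The class, capacities, and placeholder load estimator are illustrative assumptions, not the project's Java prototype.

```python
from dataclasses import dataclass

LINK_CAPACITY = 100.0    # Mb/s, assumed
SAFETY_MARGIN = 0.9      # admit only up to 90% of capacity, assumed

@dataclass
class Flow:
    declared_rate: float  # rate from the flow's traffic specification (Mb/s)

class HybridAdmissionControl:
    """Admit a flow only if both the PBAC and the MBAC check pass."""

    def __init__(self):
        self.admitted = []

    def declared_load(self):
        return sum(f.declared_rate for f in self.admitted)

    def measured_load(self):
        # Placeholder: a real MBAC estimates this from traffic measurements
        # (e.g., an EWMA of link utilisation); flows rarely use their full
        # declared rate, modelled here by a 0.7 factor.
        return 0.7 * self.declared_load()

    def admit(self, flow):
        budget = SAFETY_MARGIN * LINK_CAPACITY
        pbac_ok = self.declared_load() + flow.declared_rate <= budget
        mbac_ok = self.measured_load() + flow.declared_rate <= budget
        if pbac_ok and mbac_ok:
            self.admitted.append(flow)
            return True
        return False

ac = HybridAdmissionControl()
print([ac.admit(Flow(10.0)) for _ in range(12)])   # admits 9, rejects 3
```

Because the measured load tracks actual usage, the MBAC side can reclaim headroom that declared parameters alone would waste, while the PBAC side bounds the worst case.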

Relevance: 100.00%

Abstract:

Background: Previous research demonstrates various associations between depression, cardiovascular disease (CVD) incidence, and mortality, possibly as a result of the different methodologies used to measure depression and to analyse the relationships. This analysis investigated the association between depression, CVD incidence (CVDI), and mortality from CVD (MCVD), from smoking-related conditions (MSRC), and from all causes (MALL) in a sample data set where depression was measured both with items from a validated questionnaire and with items derived from the factor analysis of a larger questionnaire, and where analyses were conducted on both continuous and grouped data.

Methods: Data from the PRIME Study (N=9798 men) on depression and 10-year CVD incidence and mortality were analysed using Cox proportional hazards models.

Results: Using continuous data, both measures of depression resulted in the emergence of positive associations between depression and mortality (MCVD, MSRC, MALL). Using grouped data, however, associations between a validated measure of depression and MCVD, and between a measure of depression derived from factor analysis and all measures of mortality were lost.

Limitations: Low levels of depression, low numbers of individuals with high depression and low numbers of outcome events may limit these analyses, but levels are usual for the population studied.

Conclusions: These data demonstrate a possible association between depression and mortality, but detecting this association depends on the measurement used and the method of analysis. Different findings based on methodology present clear problems for the elucidation and determination of relationships. The differences here argue for the use of validated scales where possible and caution against over-reduction via factor analysis and grouping.
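The sensitivity to analysis choices is easy to reproduce on synthetic data. The sketch below, which assumes the lifelines package, fits Cox proportional hazards models to a simulated cohort using the same score once as a continuous covariate and once grouped into quartiles; the data are invented stand-ins, not the PRIME Study data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000

# Simulated cohort: a continuous depression score with a modest effect
# on the mortality hazard (all parameters assumed).
score = rng.normal(0, 1, n)
T = rng.exponential(scale=10 * np.exp(-0.15 * score))
E = (T < 10).astype(int)          # events within 10 years of follow-up
T = np.minimum(T, 10.0)           # administrative censoring at 10 years

df = pd.DataFrame({'T': T, 'E': E, 'score': score})
df['score_grouped'] = pd.qcut(df['score'], 4, labels=False)   # quartiles

for col in ('score', 'score_grouped'):
    cph = CoxPHFitter()
    cph.fit(df[['T', 'E', col]], duration_col='T', event_col='E')
    hr = float(np.exp(cph.params_[col]))
    print(f"{col}: HR per unit = {hr:.2f}, p = {cph.summary.loc[col, 'p']:.3g}")
```

Grouping discards within-group variation, so the grouped model generally loses power; with weaker effects or fewer events, an association can drop below significance exactly as described above.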

Relevance: 100.00%

Abstract:

PURPOSE:

To determine the test-retest variability in perimetric, optic disc, and macular thickness parameters in a cohort of treated patients with established glaucoma.

PATIENTS AND METHODS:

In this cohort study, the authors analyzed the imaging studies and visual field tests at the baseline and 6-month visits of 162 eyes of 162 participants in the Glaucoma Imaging Longitudinal Study (GILS). They assessed the difference, expressed as the standard error of measurement, of Humphrey Field Analyzer II (HFA) Swedish Interactive Threshold Algorithm (SITA) Fast, Heidelberg Retina Tomograph (HRT) II, and retinal thickness analyzer (RTA) parameters between the two visits, assuming that this difference was due to measurement variability rather than pathologic change. A statistically significant change was defined as twice the standard error of measurement.
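One common way to obtain the standard error of measurement from paired test-retest data is SEM = SD(differences)/√2, with 2 × SEM as the significant-change threshold. The sketch below applies that formula to simulated stand-ins for the baseline and 6-month mean deviation values (not the GILS data); whether this matches the authors' exact computation is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated baseline and 6-month mean deviation (MD) values for 162 eyes,
# with test-retest noise only (no true change between visits).
baseline = rng.normal(-6.0, 4.0, 162)
month6 = baseline + rng.normal(0.0, 1.6, 162)

diff = month6 - baseline
sem = diff.std(ddof=1) / np.sqrt(2)   # SEM = SD(differences) / sqrt(2)
print(f"SEM = {sem:.2f} dB; significant change = {2 * sem:.2f} dB")
```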

RESULTS:

In this cohort of treated glaucoma patients, it was found that statistically significant changes were 3.2 dB for mean deviation (MD), 2.2 for pattern standard deviation (PSD), 0.12 for cup shape measure, 0.26 mm for rim area, and 32.8 μm and 31.8 μm for superior and inferior macular thickness, respectively. On the basis of these values, it was estimated that the number of potential progression events detectable in this cohort by the parameters of MD, PSD, cup shape measure, rim area, superior macular thickness, and inferior macular thickness was 7.5, 6.0, 2.3, 5.7, 3.1, and 3.4, respectively.

CONCLUSIONS:

The variability of the measurements of MD, PSD, and rim area, relative to the range of possible values, is less than that of cup shape measure or macular thickness. Therefore, the former may be more useful global measurements for assessing progressive glaucoma damage.