63 results for Error of measurement


Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: To determine the heritability of refractive error and the familial aggregation of myopia in an older population. METHODS: Seven hundred fifty-nine siblings (mean age, 73.4 years) in 241 families were recruited from the Salisbury Eye Evaluation (SEE) Study in eastern Maryland. Refractive error was determined by noncycloplegic subjective refraction (if presenting distance visual acuity was ≤20/40) or lensometry (if best corrected visual acuity was >20/40 with spectacles). Participants were considered plano (refractive error of zero) if uncorrected visual acuity was >20/40. Preoperative refraction from medical records was used for pseudophakic subjects. Heritability of refractive error was calculated with multivariate linear regression and was estimated as twice the residual between-sibling correlation after adjusting for age, gender, and race. Logistic regression models were used to estimate the odds ratio (OR) of myopia, given a myopic sibling relative to having a nonmyopic sibling. RESULTS: The estimated heritability of refractive error was 61% (95% confidence interval [CI]: 34%-88%) in this population. The age-, race-, and sex-adjusted ORs of myopia were 2.65 (95% CI: 1.67-4.19), 2.25 (95% CI: 1.31-3.87), 3.00 (95% CI: 1.56-5.79), and 2.98 (95% CI: 1.51-5.87) for myopia thresholds of -0.50, -1.00, -1.50, and -2.00 D, respectively. Neither race nor gender was significantly associated with an increased risk of myopia. CONCLUSIONS: Refractive error and myopia are highly heritable in this elderly population.
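
A minimal sketch of the sibling-based estimator described above: regress refraction on the covariates, correlate the residuals across sibling pairs within families, and double that correlation. The toy data below are hypothetical stand-ins, not the SEE cohort.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one sibling pair per family (hypothetical values).
n_families = 200
age = rng.normal(73.4, 5.0, size=(n_families, 2))
sex = rng.integers(0, 2, size=(n_families, 2))
race = rng.integers(0, 2, size=(n_families, 2))
shared = rng.normal(0.0, 0.7, size=(n_families, 1))    # shared family effect
refraction = 0.02 * age + 0.1 * sex + shared + rng.normal(0, 1, (n_families, 2))

# Adjust for age, sex, and race by OLS and keep the residuals.
X = np.column_stack([np.ones(2 * n_families),
                     age.ravel(), sex.ravel(), race.ravel()])
y = refraction.ravel()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = (y - X @ beta).reshape(n_families, 2)

# Between-sibling correlation of residuals; heritability is twice this value.
r = np.corrcoef(resid[:, 0], resid[:, 1])[0, 1]
print(f"between-sibling correlation r = {r:.2f}, heritability ~ {2 * r:.2f}")
```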

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To study spectacle wear among rural Chinese children. METHODS: Visual acuity, refraction, spectacle wear, and visual function were measured. RESULTS: Among 1892 subjects (84.7% of the sample), the mean (SD) age was 14.7 (0.8) years. Among 948 children (50.1%) potentially benefiting from spectacle wear, 368 (38.8%) did not own them. Among 580 children owning spectacles, 17.9% did not wear them at school. Among 476 children wearing spectacles, 25.0% had prescriptions that could not improve their visual acuity to better than 6/12. Therefore, 62.3% (591 of 948) of children needing spectacles did not benefit from appropriate correction. Children not owning and not wearing spectacles had better self-reported visual function but worse visual acuity at initial examination than children wearing spectacles and had a mean (SD) refractive error of -2.06 (1.15) diopter (D) and -2.78 (1.32) D, respectively. Girls (P < .001) and older children (P = .03) were more likely to be wearing their spectacles. A common reason for nonwear (17.0%) was the belief that spectacles weaken the eyes. Among children without spectacles, 79.3% said their families would pay for them (mean, US $15). CONCLUSIONS: Although half of the children could benefit from spectacle wear, 62.3% were not wearing appropriate correction. These children have significant uncorrected refractive errors. There is potential to support programs through spectacle sales.
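
The 62.3% headline figure is the sum of three groups: children who did not own spectacles, owners who did not wear them, and wearers whose prescriptions could not correct their vision. A quick arithmetic check using the counts reported above:

```python
# Counts as given in the abstract; the rounding of the reported
# percentages back to head counts is the only assumption here.
needing = 948                               # would benefit from spectacles
not_owning = 368                            # 38.8% of 948
owning = 580
not_wearing = round(0.179 * owning)         # 17.9% did not wear at school
wearing = 476
poor_prescription = round(0.25 * wearing)   # 25.0% not correctable past 6/12

uncorrected = not_owning + not_wearing + poor_prescription
print(uncorrected, f"{uncorrected / needing:.1%}")   # 591, 62.3%
```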

Relevance:

90.00%

Publisher:

Abstract:

We present a new way of extracting policy positions from political texts that treats texts not as discourses to be understood and interpreted but rather, as data in the form of words. We compare this approach to previous methods of text analysis and use it to replicate published estimates of the policy positions of political parties in Britain and Ireland, on both economic and social policy dimensions. We “export” the method to a non-English-language environment, analyzing the policy positions of German parties, including the PDS as it entered the former West German party system. Finally, we extend its application beyond the analysis of party manifestos, to the estimation of political positions from legislative speeches. Our “language-blind” word scoring technique successfully replicates published policy estimates without the substantial costs of time and labor that these require. Furthermore, unlike in any previous method for extracting policy positions from political texts, we provide uncertainty measures for our estimates, allowing analysts to make informed judgments of the extent to which differences between two estimated policy positions can be viewed as significant or merely as products of measurement error.
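
A minimal sketch of the "language-blind" word-scoring idea: words acquire scores from reference texts with known policy positions, and a new text is scored by the frequency-weighted average of its word scores. The toy texts and positions are hypothetical, and the uncertainty measures the paper derives are omitted here.

```python
from collections import Counter

def rel_freq(text):
    counts = Counter(text.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def word_scores(ref_texts, ref_positions):
    freqs = [rel_freq(t) for t in ref_texts]
    vocab = set().union(*freqs)
    scores = {}
    for w in vocab:
        f = [fr.get(w, 0.0) for fr in freqs]
        if sum(f) > 0:
            # P(reference r | word w) weights each reference position.
            scores[w] = sum(fi / sum(f) * a for fi, a in zip(f, ref_positions))
    return scores

def score_text(text, scores):
    fr = rel_freq(text)
    common = {w: f for w, f in fr.items() if w in scores}
    total = sum(common.values())
    return sum(f / total * scores[w] for w, f in common.items())

left = "tax the rich fund public services public housing"
right = "cut tax free markets private enterprise low tax"
s = word_scores([left, right], [-1.0, +1.0])
print(round(score_text("public services and low tax", s), 2))
```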

Relevance:

90.00%

Publisher:

Abstract:

The radiocarbon dating laboratories at the University of Waikato (Hamilton, New Zealand) and The Queen's University of Belfast (Northern Ireland) have undertaken a series of high-precision measurements on decadal samples of dendrochronologically dated oak (Quercus petraea) from Great Britain and cedar (Libocedrus bidwillii) and silver pine (Lagarostrobos colensoi) from New Zealand. The results show an average hemispheric offset over the 900 yr of measurement of 40±13 yr. This value is not constant but varies with a periodicity of about 130 yr. The Northern Hemisphere measurements confirm the validity of the Pearson et al. (1986) calibration dataset.
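
The offset statistic is simply the mean and scatter of the differences between paired Southern and Northern Hemisphere radiocarbon ages for the same calendar decades. A sketch with synthetic decadal series (illustrative numbers, not the laboratories' data):

```python
import numpy as np

rng = np.random.default_rng(1)
decades = 90                          # ~900 yr of decadal samples
t = np.arange(decades) * 10.0
# Assumed true offset of 40 yr with a slow ~130 yr modulation.
offset = 40.0 + 10.0 * np.sin(2 * np.pi * t / 130.0)

north = 1000.0 + 0.9 * t + rng.normal(0, 15, decades)   # 14C age, yr BP
south = north + offset + rng.normal(0, 15, decades)

diff = south - north
print(f"mean offset = {diff.mean():.0f} yr, sd = {diff.std(ddof=1):.0f} yr")
```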

Relevance:

90.00%

Publisher:

Abstract:

The work presented is concerned with the estimation of manufacturing cost at the concept design stage, when little technical information is readily available. The work focuses on the nose cowl sections of a wide range of engine nacelles built at Bombardier Aerospace Shorts of Belfast. A core methodology is presented that: defines manufacturing cost elements that are prominent; utilises technical parameters that are highly influential in generating those costs; establishes the linkage between these two; and builds the associated cost estimating relations into models. The methodology is readily adapted to deal with both the early and more mature conceptual design phases, which thereby highlights the generic, flexible and fundamental nature of the method. The early concept cost model simplifies cost as a cumulative element that can be estimated using higher level complexity ratings, while the mature concept cost model breaks manufacturing cost down into a number of constituents that are each driven by their own specific drivers. Both methodologies have an average error of less than ten percent when correlated with actual findings, thus achieving an acceptable level of accuracy. By way of validity and application, the research is firmly based on industrial case studies and practice and addresses the integration of design and manufacture through cost. The main contribution of the paper is the cost modelling methodology. The elemental modelling of the cost breakdown structure through materials, part fabrication, assembly and their associated drivers is relevant to the analytical design procedure, as it utilises design definition and complexity that is understood by engineers.
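
A minimal sketch of a cost-estimating relationship (CER) of the kind the methodology builds: cost modelled as a function of a few influential technical parameters and fitted to historical data. The drivers, coefficients, and data below are hypothetical, not Bombardier's.

```python
import numpy as np

# Historical nose-cowl data (hypothetical): diameter (m), part count,
# complexity rating (1-5), and actual manufacturing cost (arbitrary units).
X = np.array([[1.8, 120, 2], [2.1, 150, 3], [2.4, 180, 3],
              [2.7, 210, 4], [3.0, 260, 5], [2.2, 160, 3]], float)
cost = np.array([100.0, 135.0, 160.0, 205.0, 265.0, 145.0])

# Fit a linear CER: cost = b0 + b1*diameter + b2*parts + b3*complexity.
A = np.column_stack([np.ones(len(X)), X])
b, *_ = np.linalg.lstsq(A, cost, rcond=None)

err = np.abs(A @ b - cost) / cost
print(f"average error = {err.mean():.1%}")      # target: under ten percent

# Estimate a new concept design from its early parameters.
new = np.array([1.0, 2.5, 190, 4])
print(f"estimated cost = {new @ b:.0f}")
```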

Relevance:

90.00%

Publisher:

Abstract:

For the purpose of a nonlocality test, we propose a general correlation observable of two parties by utilizing local d-outcome measurements with SU(d) transformations and classical communication. Generic symmetries of the SU(d) transformations and correlation observables are found for the test of nonlocality. It is shown that these symmetries dramatically reduce the number of numerical variables, which is important for numerical analysis of nonlocality. A linear combination of the correlation observables, which reduces to the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality for two-outcome measurements, leads to the Collins-Gisin-Linden-Massar-Popescu (CGLMP) nonlocality test for d-outcome measurements. As a system to be tested for its nonlocality, we investigate a continuous-variable (CV) entangled state with d measurement outcomes. This allows the comparison of nonlocality based on different numbers of measurement outcomes on one physical system. In our example of the CV state, we find that a pure entangled state of any degree violates Bell's inequality for d (≥2) measurement outcomes when the observables are of SU(d) transformations.
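
The familiar d = 2 case that the construction reduces to can be checked directly: the CHSH correlation for the two-qubit singlet, with the local SU(2) settings taken as rotations in the x-z plane. For the angles below the value is 2*sqrt(2), violating the local bound of 2. This is a sketch of the standard CHSH test only, not the paper's CGLMP or CV construction.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

def spin(theta):
    # Dichotomic spin observable along angle theta in the x-z plane.
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state |psi> = (|01> - |10>)/sqrt(2), maximally entangled.
psi = np.array([0, 1, -1, 0], complex) / np.sqrt(2)

def E(ta, tb):
    # Correlation <psi| A(ta) x B(tb) |psi> of the local observables.
    return np.real(psi.conj() @ np.kron(spin(ta), spin(tb)) @ psi)

a0, a1 = 0.0, np.pi / 2            # Alice's two settings
b0, b1 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(f"|S| = {abs(S):.3f}  (local bound 2, Tsirelson bound 2*sqrt(2))")
```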

Relevance:

90.00%

Publisher:

Abstract:

Raman spectroscopy has been used for the first time to predict the FA composition of unextracted adipose tissue of pork, beef, lamb, and chicken. It was found that the bulk unsaturation parameters could be predicted successfully [R² = 0.97, root mean square error of prediction (RMSEP) = 4.6% of 4σ], with cis unsaturation, which accounted for the majority of the unsaturation, giving similar correlations. The combined abundance of all measured PUFA (≥2 double bonds per chain) was also well predicted, with R² = 0.97 and RMSEP = 4.0% of 4σ. Trans unsaturation was not as well modeled (R² = 0.52, RMSEP = 18% of 4σ); this reduced prediction ability can be attributed to the low levels of trans FA found in adipose tissue (0.035 times the cis unsaturation level). For the individual FA, the average partial least squares (PLS) regression coefficient of the 18 most abundant FA (relative abundances ranging from 0.1 to 38.6% of the total FA content) was R² = 0.73; the average RMSEP = 11.9% of 4σ. Regression coefficients and prediction errors for the five most abundant FA were all better than the average value (in some cases as low as RMSEP = 4.7% of 4σ). Cross-correlation between the abundances of the minor FA and more abundant acids could be determined by principal component analysis methods, and the resulting groups of correlated compounds were also well predicted using PLS. The accuracy of the prediction of individual FA was at least as good as that of other spectroscopic methods, and the extremely straightforward sampling method meant that very rapid analysis of samples at ambient temperature was easily achieved. This work shows that Raman profiling of hundreds of samples per day is easily achievable with an automated sampling system.
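
A minimal sketch of the chemometric pipeline described above: PLS regression from Raman spectra to a compositional parameter, with RMSEP reported as a percentage of four standard deviations of the reference values. The spectra below are synthetic stand-ins for the adipose-tissue data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 120, 600

# Synthetic spectra: the unsaturation level modulates one band, plus noise.
unsat = rng.uniform(40, 70, n_samples)               # reference values (%)
band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 300) / 15) ** 2)
X = unsat[:, None] * band[None, :] + rng.normal(0, 1.0, (n_samples, n_wavenumbers))

X_tr, X_te, y_tr, y_te = train_test_split(X, unsat, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rmsep = np.sqrt(np.mean((y_hat - y_te) ** 2))
print(f"R2 = {pls.score(X_te, y_te):.2f}, "
      f"RMSEP = {rmsep / (4 * unsat.std()):.1%} of 4 sigma")
```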

Relevance:

90.00%

Publisher:

Abstract:

The results of a study aimed at determining the most important experimental parameters for automated, quantitative analysis of solid dosage form pharmaceuticals (seized and model 'ecstasy' tablets) are reported. Data obtained with a macro-Raman spectrometer were complemented by micro-Raman measurements, which gave information on particle size and provided excellent data for developing statistical models of the sampling errors associated with collecting data as a series of grid points on the tablets' surface. Spectra recorded at single points on the surface of seized MDMA-caffeine-lactose tablets with a Raman microscope (λex = 785 nm, 3 μm diameter spot) were typically dominated by one or other of the three components, consistent with Raman mapping data which showed the drug and caffeine microcrystals were ca 40 μm in diameter. Spectra collected with a microscope from eight points on a 200 μm grid were combined, and in the resultant spectra the average value of the Raman band intensity ratio used to quantify the MDMA:caffeine ratio, μr, was 1.19 with an unacceptably high standard deviation, σr, of 1.20. In contrast, with a conventional macro-Raman system (150 μm spot diameter), combined eight-grid-point data gave μr = 1.47 with σr = 0.16. A simple statistical model which could be used to predict σr under the various conditions used was developed. The model showed that the decrease in σr on moving to a 150 μm spot was too large to be due entirely to the increased spot diameter but was consistent with the increased sampling volume that arose from a combination of the larger spot size and depth of focus in the macroscopic system. With the macro-Raman system, combining 64 grid points (0.5 mm spacing and 1-2 s accumulation per point) to give a single averaged spectrum for a tablet was found to be a practical balance between minimizing sampling errors and keeping overhead times at an acceptable level. The effectiveness of this sampling strategy was also tested by quantitative analysis of a set of model ecstasy tablets prepared from MDEA-sorbitol (0-30% by mass MDEA). A simple univariate calibration model of averaged 64-point data had R² = 0.998 and an r.m.s. standard error of prediction of 1.1%, whereas data obtained by sampling just four points on the same tablet showed deviations from the calibration of up to 5%.
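
The sampling-error behaviour can be illustrated with a toy Monte Carlo model: when a laser spot interrogates only a few ~40 μm crystals, single-point intensity ratios scatter widely, and the spread shrinks as the spot samples more crystals and as more grid points are averaged. The two-component tablet below is a hypothetical stand-in, not the paper's statistical model.

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_stats(n_points, crystals_per_spot, n_trials=5000, drug_fraction=0.5):
    # Each spot samples a Poisson number of drug and caffeine crystals;
    # the per-tablet band-intensity ratio is taken over the combined grid.
    drug = rng.poisson(crystals_per_spot * drug_fraction, (n_trials, n_points))
    caff = rng.poisson(crystals_per_spot * (1 - drug_fraction), (n_trials, n_points))
    r = drug.sum(axis=1) / np.maximum(caff.sum(axis=1), 1)
    return r.mean(), r.std()

# Micro (tiny spot) vs macro (large spot), 8 vs 64 grid points.
for n, spot in [(8, 1), (8, 25), (64, 25)]:
    mu, sd = ratio_stats(n, spot)
    print(f"{n:2d} points, ~{spot:2d} crystals/spot: mu_r = {mu:.2f}, sigma_r = {sd:.2f}")
```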

Relevance:

90.00%

Publisher:

Abstract:

The potential of Raman spectroscopy for the determination of meat quality attributes has been investigated using data from a set of 52 cooked beef samples, which were rated by trained taste panels. The Raman spectra, shear force, and cooking loss were measured, and PLS was used to correlate the attributes with the Raman data. Good correlations and standard errors of prediction were found when the Raman data were used to predict the panels' rating of acceptability of texture (R² = 0.71, root mean square error of prediction (RMSEP) as % of the mean (μ) = 15%), degree of tenderness (R² = 0.65, RMSEP% of μ = 18%), degree of juiciness (R² = 0.62, RMSEP% of μ = 16%), and overall acceptability (R² = 0.67, RMSEP% of μ = 11%). In contrast, the mechanically determined shear force was poorly correlated with tenderness (R² = 0.15). Tentative interpretation of the plots of the regression coefficients suggests that the α-helix to β-sheet ratio of the proteins and the hydrophobicity of the myofibrillar environment are important factors contributing to the shear force, tenderness, texture, and overall acceptability of the beef. In summary, this work demonstrates that Raman spectroscopy can be used to predict consumer-perceived beef quality. In part, this overall success is due to the fact that the Raman method predicts texture and tenderness, which are the predominant factors in determining overall acceptability in the Western world. Nonetheless, it is clear that Raman spectroscopy has considerable potential as a method for non-destructive and rapid determination of beef quality parameters.
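
The error metric quoted above, RMSEP expressed as a percentage of the attribute mean, is computed as below; the panel scores and PLS predictions are illustrative values, not the study's data.

```python
import numpy as np

panel = np.array([6.1, 5.4, 7.0, 4.8, 6.5, 5.9, 4.2, 6.8])   # taste-panel rating
pred  = np.array([5.8, 5.9, 6.6, 5.2, 6.1, 6.3, 4.9, 6.4])   # PLS prediction

rmsep = np.sqrt(np.mean((pred - panel) ** 2))
ss_res = np.sum((panel - pred) ** 2)
ss_tot = np.sum((panel - panel.mean()) ** 2)
print(f"R2 = {1 - ss_res / ss_tot:.2f}, "
      f"RMSEP = {rmsep / panel.mean():.0%} of the mean")
```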

Relevance:

90.00%

Publisher:

Abstract:

Extending the work presented in Prasad et al. (IEE Proceedings on Control Theory and Applications, 147, 523-37, 2000), this paper reports a hierarchical nonlinear physical model-based control strategy to account for the problems arising from the complex dynamics of drum level and governor valve, and demonstrates its effectiveness in plant-wide disturbance handling. The strategy incorporates a two-level control structure consisting of lower-level conventional PI regulators and a higher-level nonlinear physical model predictive controller (NPMPC) used mainly for set-point manoeuvring. The lower-level PI loops help stabilise the unstable drum-boiler dynamics and allow faster governor valve action for power and grid-frequency regulation. The higher-level NPMPC provides an optimal load demand (or set-point) transition by effective handling of plant-wide interactions and system disturbances. The strategy has been tested in a simulation of a 200-MW oil-fired power plant at Ballylumford in Northern Ireland. A novel approach is devised to test the disturbance rejection capability in severe operating conditions. Low-frequency disturbances were created by making random changes in radiation heat flow on the boiler side, while condenser vacuum fluctuated in a random fashion on the turbine side. In order to simulate high-frequency disturbances, pulse-type load disturbances were made to strike at instants that are not an integral multiple of the NPMPC sampling period. Impressive results have been obtained during both types of system disturbance and at extremely high rates of load change, right across the operating range. These results compared favourably with those from a conventional state-space generalized predictive control (GPC) method designed under similar conditions.
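
A minimal sketch of the two-level structure: an inner PI loop stabilizes a toy first-order unstable plant (standing in for the drum-boiler dynamics), while a supervisory receding-horizon layer picks the set-point move that minimizes a predicted cost. The plant model, gains, and horizon are illustrative, not the 200-MW plant model or the paper's NPMPC.

```python
import numpy as np

dt, horizon = 0.1, 20

def plant_step(x, u):
    return x + dt * (0.2 * x + u)           # unstable open loop (pole at +0.2)

def pi_step(x, sp, integ, kp=4.0, ki=1.5):
    e = sp - x
    integ += e * dt
    return kp * e + ki * integ, integ       # control action, updated integrator

def predicted_cost(x, integ, sp, target):
    # Simulate the inner loop over the horizon for a candidate set-point.
    cost = 0.0
    for _ in range(horizon):
        u, integ = pi_step(x, sp, integ)
        x = plant_step(x, u)
        cost += (x - target) ** 2 + 0.01 * u ** 2
    return cost

x, integ, target = 0.0, 0.0, 1.0
for k in range(100):
    # Supervisory layer: choose the best set-point among candidate moves.
    sp = min(np.linspace(0, 1.2, 13),
             key=lambda s: predicted_cost(x, integ, s, target))
    u, integ = pi_step(x, sp, integ)
    x = plant_step(x, u)
print(f"final level = {x:.3f} (target {target})")
```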

Relevance:

90.00%

Publisher:

Abstract:

Index properties such as the liquid limit and plastic limit are widely used to evaluate certain geotechnical parameters of fine-grained soils. Measurement of the liquid limit is a mechanical process, and the possibility of errors occurring during measurement is not significant. However, this is not the case for plastic limit testing, despite the fact that the current method of measurement is embraced by many standards around the world. The method in question relies on a fairly crude procedure known widely as the 'thread rolling' test, though it has been the subject of much criticism in recent years. It is essential that a new, more reliable method of measuring the plastic limit is developed using a mechanical process that is both consistent and easily reproducible. The work reported in this paper concerns the development of a new device to measure the plastic limit, based on the existing falling cone apparatus. The force required for the test is equivalent to the application of a 54 N fast-static load acting on the existing cone used in liquid limit measurements. The test is complete when the relevant water content of the soil specimen allows the cone to achieve a penetration of 20 mm. The new technique was used to measure the plastic limit of 16 different clays from around the world. The plastic limit measured using the new method identified reasonably well the water content at which the soil phase changes from the plastic to the semi-solid state. Further evaluation was undertaken by conducting plastic limit tests using the new method on selected samples and comparing the results with values reported by local site investigation laboratories. Again, reasonable agreement was found.
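
A sketch of the measurement logic for the new device: cone penetration under the 54 N fast-static load is recorded at several water contents, and the plastic limit is taken as the water content giving 20 mm penetration, found by interpolation. The readings below are illustrative, not data from the 16 clays.

```python
import numpy as np

water_content = np.array([16.0, 18.0, 20.0, 22.0, 24.0])   # %
penetration = np.array([11.5, 14.8, 18.6, 22.9, 27.4])     # mm under 54 N

# Penetration increases monotonically with water content, so invert by
# interpolating water content as a function of penetration.
plastic_limit = np.interp(20.0, penetration, water_content)
print(f"plastic limit ~ {plastic_limit:.1f}% water content")
```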

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a practical algorithm for the simulation of interactive deformation in a 3D polygonal mesh model. The algorithm combines the conventional simulation of deformation using a spring-mass-damping model, solved by explicit numerical integration, with a set of heuristics to describe certain features of the transient behaviour, to increase the speed and stability of solution. In particular, this algorithm was designed to be used in the simulation of synthetic environments where it is necessary to model realistically, in real time, the effect on non-rigid surfaces of being touched, pushed, pulled or squashed. Such objects can be solid or hollow, and have plastic, elastic or fabric-like properties. The algorithm is presented in an integrated form including collision detection and adaptive refinement so that it may be used in a self-contained way as part of a simulation loop to include human interface devices that capture data and render a realistic stereoscopic image in real time. The algorithm is designed to be used with polygonal mesh models representing complex topology, such as the human anatomy in a virtual-surgery training simulator. The paper evaluates the model behaviour qualitatively and then concludes with some examples of the use of the algorithm.
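
A minimal sketch of the core update in such a spring-mass-damping model: each vertex is a point mass connected to its neighbours by damped springs and advanced by explicit (semi-implicit Euler) integration. A 1D chain stands in for the polygonal mesh; the paper's heuristics, collision detection, and adaptive refinement are omitted.

```python
import numpy as np

n, dt = 10, 0.01
k, c, mass, rest = 100.0, 0.5, 0.1, 1.0   # stiffness, damping, mass, rest length

x = np.arange(n, dtype=float)             # vertex positions along a chain
v = np.zeros(n)
x[-1] += 0.5                              # "pull" the last vertex

for _ in range(500):
    f = np.zeros(n)
    # Spring plus damping force on each edge between vertex i and i+1.
    stretch = (x[1:] - x[:-1]) - rest
    dv = v[1:] - v[:-1]
    edge_f = k * stretch + c * dv
    f[:-1] += edge_f
    f[1:] -= edge_f
    f[0] = 0.0                            # first vertex pinned
    v += dt * f / mass                    # explicit integration step
    x += dt * v
    x[0] = 0.0

print(np.round(x, 2))                     # settles back toward rest spacing
```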

Relevance:

90.00%

Publisher:

Abstract:

The development of artificial neural network (ANN) models to predict the rheological behavior of grouts is described in this paper, and the sensitivity of the predicted rheological parameters to variation in the mixture ingredients is also evaluated. The input parameters of the neural network were the mixture ingredients influencing the rheological behavior of grouts, namely the cement content, fly ash, ground-granulated blast-furnace slag, limestone powder, silica fume, water-binder ratio (w/b), high-range water-reducing admixture, and viscosity-modifying agent (welan gum). The six outputs of the ANN models were the mini-slump, the apparent viscosity at low shear, and the yield stress and plastic viscosity values of the Bingham and modified Bingham models, respectively. The model is based on a multi-layer feed-forward neural network. The details of the proposed ANN, with its architecture, training, and validation, are presented in this paper. A database of 186 mixtures from eight different studies was developed to train and test the ANN model. The effectiveness of the trained ANN model is evaluated by comparing its responses with the experimental data that were used in the training process. The results show that the ANN model can accurately predict the mini-slump, the apparent viscosity at low shear, and the yield stress and plastic viscosity values of the Bingham and modified Bingham models of the pseudo-plastic grouts used in the training process. It can also predict these properties for new mixtures within the practical range of the input variables used in training, with absolute errors of 2%, 0.5%, 8%, 4%, 2%, and 1.6%, respectively. The sensitivity analysis of the ANN model showed that the trend data obtained by the models were in good agreement with the actual experimental results, demonstrating the effect of mixture ingredients on fluidity and on the rheological parameters with both the Bingham and modified Bingham models.
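
A minimal sketch of the model class described above: a multi-layer feed-forward network mapping the eight mixture ingredients to the six rheological outputs. Synthetic data stands in for the 186-mixture database, and the layer sizes and toy input-output relations are assumptions, not the paper's architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 186
# Inputs: cement, fly ash, slag, limestone, silica fume, w/b, HRWR, welan gum.
X = rng.uniform(0.0, 1.0, (n, 8))
# Outputs: mini-slump, apparent viscosity, and yield stress / plastic
# viscosity for the Bingham and modified Bingham models (toy relations).
Y = np.column_stack([
    120 - 60 * X[:, 5] + 10 * X[:, 6],      # mini-slump
    5 + 20 * X[:, 7] + 3 * X[:, 4],         # apparent viscosity at low shear
    30 * X[:, 0] - 15 * X[:, 5],            # Bingham yield stress
    2 + 8 * X[:, 7],                        # Bingham plastic viscosity
    25 * X[:, 0] - 12 * X[:, 5],            # mod. Bingham yield stress
    1.5 + 7 * X[:, 7],                      # mod. Bingham plastic viscosity
]) + rng.normal(0, 0.5, (n, 6))

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(scaler.transform(X), Y)
print(f"training R2 = {net.score(scaler.transform(X), Y):.2f}")
```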

Relevance:

90.00%

Publisher:

Abstract:

Three experiments were conducted to test the effectiveness of different footbath solutions and regimens in the treatment of digital dermatitis (DD) in dairy cows. During the study, groups of cows walked through allocated footbath solutions after milking on 4 consecutive occasions. All cows were scored weekly for DD lesion stage on the hind feet during milking. A “transition grade” was assigned on the basis of whether the DD lesions improved (1) or deteriorated or did not improve (0) from week to week. This grade per cow was averaged for all cows in the group. In experiment 1, 118 cows were allocated to 1 of 3 footbath treatments for 5 wk: (1) 5% CuSO4 each week, (2) 2% ClO- each week, or (3) no footbath (control). The mean transition grade, and proportion of cows without DD lesions at the end of the trial were significantly higher for treatment 1 above (0.36, 0.13, and 0.11, respectively; standard error of the difference, SED=0.057). In experiment 2, 117 cows were allocated to 1 of 4 footbath treatment regimens for 8 wk: (1) 5% CuSO4 each week, (2) 2% CuSO4 each week, (3) 5% CuSO4 each fortnight, or (4) 2% CuSO4 each fortnight. For welfare reasons, cows allocated to the weekly and fortnightly footbath regimens had an average prevalence of >60% and ≤25% active DD at the start of the trial, respectively. Significantly more cows had no DD lesions (0.53 vs. 0.36, respectively; SED=0.049), and the mean transition grade of DD lesions was higher in the 5% compared with the 2% weekly CuSO4 treatment (0.52 vs. 0.38, respectively; SED=0.066). Similarly, significantly more cows had no DD lesions in the 5% compared with the 2% fortnightly CuSO4 treatments (0.64 vs. 0.47, respectively; SED=0.049). In experiment 3, 95 cows were allocated to 1 of 3 footbath treatments: (1) each week alternating 5% CuSO4 with 10% salt water, (2) each week alternating 5% CuSO4 with water, or (3) 5% CuSO4 each fortnight (control). After 10 wk, more cows had no DD in the salt water treatment than in the control treatment (0.35 vs. 0.26, respectively; SED=0.038), but levels of active lesions were higher for this treatment than in the other 2 treatments (0.17, 0.00, and 0.13, respectively; SED=0.029). Treatment did not affect mean transition grade of DD lesions. In conclusion, CuSO4 was the only footbath solution that was consistently effective for treatment of DD. In cases when DD prevalence was high, a footbath each week using 5% CuSO4 was the most effective treatment.
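
The "transition grade" defined above is the per-cow average of binary week-to-week improvements, averaged over the group. A small illustration with hypothetical weekly lesion scores for three cows:

```python
import numpy as np

# Weekly DD lesion stages per cow (hypothetical values; lower = better).
scores = np.array([[3, 2, 2, 1],     # cow 1
                   [2, 2, 3, 2],     # cow 2
                   [3, 3, 2, 2]])    # cow 3

# 1 if the lesion improved from one week to the next, else 0.
improved = (np.diff(scores, axis=1) < 0).astype(float)
per_cow = improved.mean(axis=1)
print(f"group mean transition grade = {per_cow.mean():.2f}")
```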