973 results for "Error in substance"


Relevance: 100.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance: 100.00%

Publisher:

Abstract:

"August 1978"--Cover.

Relevance: 100.00%

Publisher:

Abstract:

The reliability of measurement refers to unsystematic error in observed responses. Investigations of the prevalence of random error in stated estimates of willingness to pay (WTP) are important to understanding why tests of validity in contingent valuation (CV) can fail. However, published reliability studies have tended to adopt empirical methods that have practical and conceptual limitations when applied to WTP responses. This contention is supported by a review of CV reliability studies that demonstrates important limitations of existing approaches to WTP reliability. It is argued that empirical assessments of the reliability of contingent values may be better handled by using multiple indicators to measure the latent WTP distribution. This latent variable approach is demonstrated with data obtained from a WTP study for stormwater pollution abatement, in which attitude variables were employed to assess the reliability of open-ended WTP responses (with benchmarked payment cards). The results indicated that participants' decisions to pay were reliably measured, but the magnitude of the WTP bids was not. This finding highlights the need to better discern what is actually being measured in WTP studies. (C) 2003 Elsevier B.V. All rights reserved.
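As an illustration of the multiple-indicator idea described in this abstract, the sketch below (Python; the data, loadings and sample size are hypothetical, not taken from the study) treats several attitude items as indicators of a latent WTP disposition and estimates their internal consistency with Cronbach's alpha, one conventional reliability measure for an item set.

```python
# Minimal sketch of reliability assessment via multiple indicators.
# All quantities are simulated/hypothetical; this is not the study's code.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical latent disposition to pay for stormwater abatement.
latent = rng.normal(size=n)

# Three attitude indicators: loading * latent + unique (unsystematic) error.
loadings = [0.8, 0.7, 0.6]
items = np.column_stack(
    [lam * latent + rng.normal(scale=np.sqrt(1 - lam**2), size=n)
     for lam in loadings]
)

def cronbach_alpha(x: np.ndarray) -> float:
    """Internal-consistency reliability of item set x (rows = respondents)."""
    k = x.shape[1]
    item_var_sum = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

print(f"alpha = {cronbach_alpha(items):.2f}")  # ~0.74 for these loadings
```

A low alpha would indicate that unsystematic error dominates the indicators, which is the concern the abstract raises for the magnitude of WTP bids.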

Relevance: 100.00%

Publisher:

Abstract:

Background: There is a paucity of data describing the prevalence of childhood refractive error in the United Kingdom. The Northern Ireland Childhood Errors of Refraction study, along with its sister study the Aston Eye Study, are the first population-based surveys of children in the United Kingdom to use both random cluster sampling and cycloplegic autorefraction to quantify levels of refractive error.

Methods: Children aged 6–7 years and 12–13 years were recruited from a stratified random sample of primary and post-primary schools, representative of the population of Northern Ireland as a whole. Measurements included assessment of visual acuity, oculomotor balance, ocular biometry and cycloplegic binocular open-field autorefraction. Questionnaires were used to identify putative risk factors for refractive error.

Results: 399 (57%) of 6–7 year olds and 669 (60%) of 12–13 year olds participated. School participation rates did not vary statistically significantly with school size, urban or rural location, or whether the school was in a deprived or non-deprived area. The gender balance, ethnicity and type of schooling of participants reflect the Northern Ireland population.

Conclusions: The study design, sample size and methodology will ensure accurate measures of the prevalence of refractive errors in the target population and will facilitate comparisons with other population-based refractive data.
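To make the recruitment design concrete, here is a minimal sketch (school names, strata and enrolments are hypothetical; this is not the study's code) of stratified random cluster sampling: schools are grouped into strata and whole schools are drawn at random within each stratum, so every child in a selected school becomes eligible.

```python
# Minimal sketch of stratified random cluster sampling (hypothetical frame).
import random

random.seed(1)

# Sampling frame: (school, stratum, enrolment).
frame = [
    ("School A", "urban", 120), ("School B", "urban", 95),
    ("School C", "rural", 60),  ("School D", "rural", 80),
    ("School E", "urban", 150), ("School F", "rural", 45),
]

def sample_clusters(frame, per_stratum=1):
    """Draw `per_stratum` whole schools (clusters) from each stratum."""
    by_stratum = {}
    for school in frame:
        by_stratum.setdefault(school[1], []).append(school)
    return [school
            for schools in by_stratum.values()
            for school in random.sample(schools, per_stratum)]

print(sample_clusters(frame))
```

A production design would typically also weight selection probabilities by school size so that each child, rather than each school, has an equal chance of inclusion.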

Relevance: 100.00%

Publisher:

Abstract:

Medication errors are associated with significant morbidity, and people with mental health problems may be particularly susceptible to them for a variety of reasons. Primary care has a key role in improving medication safety in this vulnerable population. The complexity of services, involving primary and secondary care as well as social services, and potential training issues may increase error rates, with physical medicines representing a particular risk. Service users may be cognitively impaired and fail to identify an error, placing additional responsibility on clinicians. The potential role of carers in error prevention and medication safety requires further elaboration. A potential lack of trust between service users and clinicians may impair honest communication about medication issues, leading to errors. There is a need for detailed research in this field.

Relevance: 100.00%

Publisher:

Abstract:

The author investigates the impact of the financial crisis that began in 2008 on the forecasting error for earnings per share (EPS). There is plentiful evidence, dating from the 1980s, that analysts give systematically more favourable values in their EPS forecasts than the actual figures, i.e. they are generally optimistic. Other investigations support the idea that the EPS forecasting error grows in uncertain environments, while there is also considerable evidence that analysts under-react to negative news in their forecasts. The financial crisis confronted analysts with a flood of negative news to take into account when preparing forecasts, and it also significantly increased uncertainty across the entire economy. The article therefore examines the impact of the crisis on the EPS forecasting error, distinguishing the period during which the crisis was merely negative news from the later period in which, as a consequence of the crisis, uncertainty had risen significantly throughout the economy.
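For readers unfamiliar with the metric, the toy calculation below (Python; all numbers are hypothetical) shows one common definition of the signed EPS forecast error, under which a positive value indicates an optimistic forecast of the kind the abstract describes.

```python
# One common definition of signed EPS forecast error (hypothetical numbers).
forecast_eps = 2.50  # hypothetical analyst consensus forecast
actual_eps = 2.10    # hypothetical reported EPS

# Positive -> forecast exceeded reality, i.e. analyst optimism.
signed_error = (forecast_eps - actual_eps) / abs(actual_eps)
print(f"signed forecast error: {signed_error:+.1%}")  # +19.0%
```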

Relevance: 100.00%

Publisher:

Abstract:

Trials in a temporal two-interval forced-choice discrimination experiment consist of two sequential intervals presenting stimuli that differ in magnitude along some continuum. The observer must report in which interval the stimulus had the larger magnitude. The standard difference model used in signal detection theory analyses posits that the order of presentation should not affect the results of the comparison, a property known as the balance condition (J.-C. Falmagne, 1985, Elements of Psychophysical Theory). Empirical data prove otherwise, however, and consistently reveal what Fechner (1860/1966, Elements of Psychophysics) called time-order errors, whereby the magnitude of the stimulus presented in one of the intervals is systematically underestimated relative to the other. Here we discuss sensory factors (temporary desensitization) and procedural glitches (short interstimulus or intertrial intervals and response bias) that might explain the time-order error, and we derive a formal model showing how these factors make observed performance vary with presentation order despite a single underlying mechanism. Experimental results are also presented that illustrate the conventional failure of the balance condition and test the hypothesis that time-order errors result from contamination by the factors included in the model.
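A minimal simulation, built on assumptions of our own choosing rather than on the authors' formal model, illustrates the point: under the standard difference model the balance condition holds, while attenuating the internal magnitude of the second interval by a desensitization factor g < 1 makes performance depend on presentation order, i.e. produces a time-order error.

```python
# Minimal sketch: difference model with an optional desensitization factor g
# applied to interval 2. g = 1 satisfies the balance condition; g < 1 breaks it.
import numpy as np

rng = np.random.default_rng(42)

def p_correct(standard, comparison, larger_first, g=1.0, sigma=1.0, n=100_000):
    """Proportion of correct order judgements for one presentation order."""
    big, small = max(standard, comparison), min(standard, comparison)
    s1, s2 = (big, small) if larger_first else (small, big)
    diff = g * s2 - s1 + rng.normal(scale=sigma, size=n)  # noisy internal difference
    chose_second = diff > 0
    correct = ~chose_second if larger_first else chose_second
    return correct.mean()

for g in (1.0, 0.9):
    p1 = p_correct(10.0, 11.0, larger_first=True, g=g)
    p2 = p_correct(10.0, 11.0, larger_first=False, g=g)
    print(f"g={g}: P(correct | larger 1st)={p1:.3f}, P(correct | larger 2nd)={p2:.3f}")
```

With g = 1 both orders yield the same proportion correct; with g = 0.9 the second stimulus is systematically underestimated, so accuracy rises when the larger stimulus comes first and falls when it comes second, which is the signature of a time-order error.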