47 results for reliability-cost evaluation
in Aston University Research Archive
Abstract:
People depend on various sources of information when trying to verify their autobiographical memories. Yet recent research shows that people prefer to use cheap-and-easy verification strategies, even when these strategies are not reliable. We examined the robustness of this cheap-strategy bias, using scenarios designed to encourage greater emphasis on source reliability. In three experiments, subjects described real (Experiments 1 and 2) or hypothetical (Experiment 3) autobiographical events and proposed strategies they might use to verify their memories of those events. Subjects also rated the reliability and cost of each strategy, and the likelihood that they would use it. In line with previous work, we found that the preference for cheap information held when people described how they would verify childhood or recent memories (Experiment 1) and personally important or trivial memories (Experiment 2), and even when the consequences of relying on incorrect information could be significant (Experiment 3). Taken together, our findings fit with an account of source monitoring in which the tendency to trust one's own autobiographical memories can discourage people from systematically testing or accepting strong disconfirmatory evidence.
Abstract:
Evaluating the reliability of the manufacturing process is a crucial task in product development. Process reliability is a measure of the production ability of a reconfigurable manufacturing system (RMS), serving as an integrated performance indicator of the production process under specified technical constraints, including time, cost and quality. An integrated framework for evaluating manufacturing process reliability within the product development process is presented. A mathematical model and an algorithm based on the universal generating function (UGF) are developed for calculating the reliability of the manufacturing process with respect to task intensity and process capacity, both treated as independent random variables. Rework strategies of the RMS under different task intensities are then analyzed on the basis of process reliability, and the optimization of rework strategies is discussed.
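For readers unfamiliar with the formalism, a UGF represents a discrete random variable taking values g_i with probabilities p_i as the polynomial u(z) = Σ_i p_i · z^{g_i}; composing the UGFs of independent variables and summing the probability mass where capacity meets or exceeds intensity yields the process reliability. The following is a minimal sketch of that calculation, not the paper's model; the machine capacities, the demand distribution, and the parallel (additive) composition rule are illustrative assumptions.

```python
# Minimal UGF sketch: {value: probability} dictionaries stand in for the
# polynomial u(z) = sum_i p_i * z^(g_i). All numbers are hypothetical.

def compose(u1, u2, op):
    """UGF of op(X1, X2) for independent X1, X2, given their UGFs."""
    out = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            g = op(g1, g2)
            out[g] = out.get(g, 0.0) + p1 * p2
    return out

def process_reliability(capacity, intensity):
    """Pr(process capacity >= task intensity), both independent."""
    return sum(pc * pi
               for c, pc in capacity.items()
               for i, pi in intensity.items()
               if c >= i)

machine_a = {0: 0.1, 5: 0.9}   # capacity (units/h): probability
machine_b = {0: 0.2, 4: 0.8}
capacity = compose(machine_a, machine_b, lambda a, b: a + b)  # parallel machines

intensity = {6: 0.7, 9: 0.3}   # random task intensity
print(process_reliability(capacity, intensity))  # 0.72
```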
Abstract:
Background/aims: Macular pigment is thought to protect the macula against exposure to light and oxidative stress, both of which may play a role in the development of age-related macular degeneration. The aim was to clinically evaluate a novel cathode-ray-tube-based method for measuring macular pigment optical density (MPOD), known as apparent motion photometry (AMP). Methods: The authors took repeat readings of MPOD centrally (0°) and at 3° eccentricity for 76 healthy subjects (mean age ± SD 26.5 ± 13.2 years, range 18–74 years). Results: The overall mean MPOD for the cohort was 0.50 ± 0.24 at 0° and 0.28 ± 0.20 at 3° eccentricity; these values were significantly different (t = -8.905, p < 0.001). The coefficients of repeatability were 0.60 and 0.48 for the 0° and 3° measurements, respectively. Conclusions: The data suggest that when the same operator takes repeated 0° AMP MPOD readings over time, only changes of more than 0.60 units can be classed as clinically significant. In other words, AMP is not suitable for monitoring changes in MPOD over time, as increases of this magnitude would not be expected even in response to dietary modification or nutritional supplementation.
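For reference, the coefficient of repeatability quoted above is conventionally the Bland–Altman quantity (a standard definition, not restated in the abstract):

```latex
\mathrm{CoR} = 1.96\, s_d, \qquad s_d = \mathrm{SD}(x_2 - x_1),
```

where x_1 and x_2 are repeated readings on the same subject; an observed change smaller than CoR (here 0.60 units at 0°) cannot be distinguished from measurement noise.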
Abstract:
This paper disputes the claim that product design determines 70% of costs, and the implications that follow for design evaluation tools. Using the idea of decision chains, it is argued that such tools need to consider more of the downstream business activities and should take into account the current and future state of the business rather than some idealized view of it. To illustrate the argument, a series of experiments using an enterprise simulator are described that show the benefit of applying a more holistic 'design for' technique: Design For the Existing Environment.
Abstract:
OBJECTIVE: To determine the accuracy, acceptability and cost-effectiveness of polymerase chain reaction (PCR) and optical immunoassay (OIA) rapid tests for maternal group B streptococcal (GBS) colonisation at labour. DESIGN: A test accuracy study was used to determine the accuracy of rapid tests for GBS colonisation of women in labour. Acceptability of testing to participants was evaluated through a questionnaire administered after delivery, and acceptability to staff through focus groups. A decision-analytic model was constructed to assess the cost-effectiveness of various screening strategies. SETTING: Two large obstetric units in the UK. PARTICIPANTS: Women booked for delivery at the participating units, other than those electing for a Caesarean delivery. INTERVENTIONS: Vaginal and rectal swabs were obtained at the onset of labour, and the results of vaginal and rectal PCR and OIA (index) tests were compared with the reference standard of enriched culture of combined vaginal and rectal swabs. MAIN OUTCOME MEASURES: The accuracy of the index tests, the relative accuracies of tests on vaginal and rectal swabs, and whether test accuracy varied according to the presence or absence of maternal risk factors. RESULTS: PCR was significantly more accurate than OIA for the detection of maternal GBS colonisation. Index tests on combined vaginal and rectal swabs were more sensitive than either swab considered individually [combined swab sensitivity for PCR 84% (95% CI 79-88%); vaginal swab 58% (52-64%); rectal swab 71% (66-76%)]. The highest sensitivity for PCR came at the cost of lower specificity [combined specificity 87% (95% CI 85-89%); vaginal swab 92% (90-94%); rectal swab 92% (90-93%)]. The sensitivity and specificity of the rapid tests varied according to the presence or absence of maternal risk factors, but not consistently. PCR results were determinants of neonatal GBS colonisation, but maternal risk factors were not. Overall levels of acceptability of rapid testing among participants were high. Vaginal swabs were more acceptable than rectal swabs. South Asian women were least likely to have participated in the study, and were less happy with the sampling procedure and with the prospect of rapid testing as part of routine care. Midwives were generally positive towards rapid testing but had concerns that it might lead to overtreatment and unnecessary interference in births. Modelling analysis revealed that the most cost-effective strategy was to provide routine intravenous antibiotic prophylaxis (IAP) to all women without screening. When this strategy, which is unlikely to be acceptable to most women and midwives, was excluded, the most cost-effective option was screening based on enriched culture at 35-37 weeks' gestation, with antibiotics provided to all women who screened positive, assuming that all women in premature labour would receive IAP. The results were sensitive to very small increases in costs and to changes in other assumptions. Screening using a rapid test was not cost-effective given its current sensitivity, specificity and cost. CONCLUSIONS: Neither rapid test was sufficiently accurate to recommend it for routine use in clinical practice. IAP directed by screening with enriched culture at 35-37 weeks' gestation is likely to be the most acceptable cost-effective strategy, although it is premature to suggest implementing this strategy at present.
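For reference, the accuracy measures quoted throughout are the standard ones, computed against the enriched-culture reference standard:

```latex
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP},
```

where TP, FP, TN and FN count swabs by index-test result versus culture result.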
Abstract:
Background: Screening for congenital heart defects (CHDs) relies on antenatal ultrasound and postnatal clinical examination; however, life-threatening defects often go undetected. Objective: To determine the accuracy, acceptability and cost-effectiveness of pulse oximetry as a screening test for CHDs in newborn infants. Design: A test accuracy study determined the accuracy of pulse oximetry. Acceptability of testing to parents was evaluated through a questionnaire, and to staff through focus groups. A decision-analytic model was constructed to assess cost-effectiveness. Setting: Six UK maternity units. Participants: These were 20,055 asymptomatic newborns at ≥ 35 weeks' gestation, their mothers and health-care staff. Interventions: Pulse oximetry was performed prior to discharge from hospital and the results of this index test were compared with a composite reference standard (echocardiography, clinical follow-up and follow-up through interrogation of clinical databases). Main outcome measures: Detection of major CHDs – defined as causing death or requiring invasive intervention up to 12 months of age (subdivided into critical CHDs causing death or intervention before 28 days, and serious CHDs causing death or intervention between 1 and 12 months of age); acceptability of testing to parents and staff; and the cost-effectiveness in terms of cost per timely diagnosis. Results: Fifty-three of the 20,055 babies screened had a major CHD (24 critical and 29 serious), a prevalence of 2.6 per 1000 live births. Pulse oximetry had a sensitivity of 75.0% [95% confidence interval (CI) 53.3% to 90.2%] for critical cases and 49.1% (95% CI 35.1% to 63.2%) for all major CHDs. When the 23 cases in which a CHD was already suspected following antenatal ultrasound were excluded, pulse oximetry had a sensitivity of 58.3% (95% CI 27.7% to 84.8%) for critical cases (12 babies) and 28.6% (95% CI 14.6% to 46.3%) for all major CHDs (35 babies). False-positive (FP) results occurred in 1 in 119 babies (0.84%) without major CHDs (specificity 99.2%, 95% CI 99.0% to 99.3%). However, of the 169 FPs, there were six cases of significant but not major CHDs and 40 cases of respiratory or infective illness requiring medical intervention. The prevalence of major CHDs in babies with normal pulse oximetry was 1.4 (95% CI 0.9 to 2.0) per 1000 live births, as 27 babies with major CHDs (6 critical and 21 serious) were missed. Parent and staff participants were predominantly satisfied with screening, perceiving it as an important test to detect ill babies. There was no evidence that mothers given FP results were more anxious after participating than those given true-negative results, although they were less satisfied with the test. White British/Irish mothers were more likely to participate in the study, and were less anxious and more satisfied than those of other ethnicities. The incremental cost-effectiveness ratio of pulse oximetry plus clinical examination compared with examination alone is approximately £24,900 per timely diagnosis in a population in which antenatal screening for CHDs already exists. Conclusions: Pulse oximetry is a simple, safe, feasible test that is acceptable to parents and staff and adds value to existing screening. It is likely to identify cases of critical CHDs that would otherwise go undetected. It is also likely to be cost-effective given current acceptable thresholds. The detection of other pathologies, such as significant CHDs and respiratory and infective illnesses, is an additional advantage.
Other pulse oximetry techniques, such as perfusion index, may enhance detection of aortic obstructive lesions.
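For reference, the £24,900 figure quoted above is an incremental cost-effectiveness ratio of the standard form, with effectiveness measured here in timely diagnoses; the cost and effect subscripts below are illustrative labels, not the report's notation:

```latex
\mathrm{ICER} = \frac{C_{\text{oximetry+exam}} - C_{\text{exam}}}{E_{\text{oximetry+exam}} - E_{\text{exam}}}.
```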
Abstract:
Because memories are not always accurate, people rely on a variety of strategies to verify whether the events that they remember really did occur. Several studies have examined which strategies people tend to use, but none to date has asked why people opt for certain strategies over others. Here we examined the extent to which people's beliefs about the reliability and the cost of different strategies would determine their strategy selection. Subjects described a childhood memory and then suggested strategies they might use to verify the accuracy of that memory. Next, they rated the reliability and cost of each strategy, and the likelihood that they might use it. Reliability and cost each predicted strategy selection, but a combination of the two ratings provided even greater predictive value. Cost was significantly more influential than reliability, which suggests that a tendency to seek and to value "cheap" information more than reliable information could underlie many real-world memory errors. © 2013 Elsevier B.V.
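As a hypothetical illustration of the final claim (simulated data, not the study's analysis code), nested regressions show how two rating scales can jointly predict likelihood-of-use ratings better than either alone, with cost weighted more heavily than reliability in the simulated effect:

```python
# Simulated illustration of comparing single-predictor and two-predictor
# regressions of likelihood-of-use on reliability and cost ratings.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
reliability = rng.uniform(1, 7, n)   # rated reliability of a strategy
cost = rng.uniform(1, 7, n)          # rated cost of a strategy
# Hypothetical likelihood-of-use ratings: cost weighted more than reliability,
# mirroring the direction of the reported effect.
use = 0.3 * reliability - 0.6 * cost + rng.normal(0, 1, n)

for name, X in [("reliability only", reliability.reshape(-1, 1)),
                ("cost only", cost.reshape(-1, 1)),
                ("both", np.column_stack([reliability, cost]))]:
    r2 = LinearRegression().fit(X, use).score(X, use)
    print(f"{name}: R^2 = {r2:.2f}")
```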
Abstract:
Background: The MacDQoL is an individualised measure of the impact of macular degeneration (MD) on quality of life (QoL). There is preliminary evidence of its psychometric properties and sensitivity to severity of MD. The aim of this study was to carry out further psychometric evaluation with a larger sample and to investigate the measure's sensitivity to MD severity. Methods: Patients with MD (n = 156: 99 women, 57 men; mean age 79 ± 13 years), recruited from eye clinics (one NHS, one private), completed the MacDQoL by telephone interview and later underwent a clinic vision assessment including near and distance visual acuity (VA), comfortable near VA, contrast sensitivity, colour recognition, recovery from glare, and presence or absence of distortion or scotoma in the central 10° of the visual field. Results: The completion rate for the MacDQoL items was 99.8%. Of the 26 items, three were dropped from the measure due to redundancy. A fourth was retained in the questionnaire but excluded when computing the scale score. Principal components analysis and Cronbach's alpha (0.944) supported combining the remaining 22 items in a single scale. Lower MacDQoL scores, indicating more negative impact of MD on QoL, were associated with poorer distance VA (better eye r = -0.431, p < 0.001; worse eye r = -0.350, p < 0.001; binocular vision r = -0.419, p < 0.001) and near VA (better eye r = -0.326, p < 0.001; worse eye r = -0.226, p < 0.001; binocular vision r = -0.326, p < 0.001). Poorer MacDQoL scores were also associated with poorer contrast sensitivity (better eye r = 0.392, p < 0.001; binocular vision r = 0.423, p < 0.001), poorer colour recognition (r = 0.417, p < 0.001) and poorer comfortable near VA (r = -0.283, p < 0.001). The MacDQoL differentiated between those with and without binocular scotoma (U = 1244, p < 0.001). Conclusion: The MacDQoL 22-item scale has excellent internal consistency reliability and a single-factor structure. The measure is acceptable to respondents, and the generic QoL item, the MD-specific QoL item and the average weighted impact score are related to several measures of vision. The MacDQoL demonstrates that MD has considerable negative impact on many aspects of QoL, particularly independence, leisure activities, dealing with personal affairs and mobility. The measure may be valuable for use in clinical trials and routine clinical care. © 2005 Mitchell et al; licensee BioMed Central Ltd.
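For reference, the internal-consistency statistic reported (Cronbach's alpha = 0.944) is the standard quantity:

```latex
\alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right),
```

where k is the number of items (22 here), \sigma^2_{Y_i} is the variance of item i, and \sigma^2_X is the variance of the total scale score.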
Abstract:
We present a theoretical method for a direct evaluation of the average and reliability error exponents in low-density parity-check error-correcting codes using methods of statistical physics. Results for the binary symmetric channel are presented for codes of both finite and infinite connectivity.
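For context, the reliability (error) exponent referred to is the standard asymptotic quantity

```latex
E(R) = \lim_{N \to \infty} -\frac{1}{N} \log P_e(N, R),
```

where P_e(N, R) is the block error probability of a code of length N and rate R; the abstract's "average" exponent refers to this quantity averaged over the code ensemble.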
Abstract:
Purpose: To evaluate the reliability and repeatability of intraocular pressure (IOP) measurements using a new rebound tonometer. Methods: Intraocular pressure was measured in 42 healthy human eyes of subjects aged 18-30 years (mean ± standard deviation [SD] 21.5 ± 3.2 years) using the ICare rebound and Goldmann tonometers in two separate sessions. Results: Intraocular pressure measurements read slightly, but not significantly, higher with the ICare tonometer than with the Goldmann instrument in both sessions (first session: mean bias ± SD = +0.50 ± 2.33 mmHg; second session: mean bias ± SD = +0.52 ± 1.92 mmHg). Limits of agreement between repeated readings were ± 5.11 mmHg for measurements taken with the ICare tonometer, compared with ± 3.15 mmHg for the Goldmann method. Conclusion: Measurement of IOP in normal, healthy subjects using the ICare rebound tonometer produced a small, statistically insignificant, positive bias compared with the Goldmann tonometer. Intersessional repeatability of IOP taken with the ICare is poorer than that of the Goldmann tonometer, but is comparable with that of other non-Goldmann-type tonometers currently available. Copyright © Acta Ophthalmol Scand, 2006.
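For reference, the limits of agreement quoted are the usual Bland–Altman bounds

```latex
\mathrm{LoA} = \bar{d} \pm 1.96\, s_d,
```

where \bar{d} and s_d are the mean and standard deviation of the differences between paired readings.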
Abstract:
The evaluation and selection of industrial projects before an investment decision is customarily done using marketing, technical and financial information. Subsequently, environmental impact assessment and social impact assessment are carried out mainly to satisfy the statutory agencies. Because of stricter environmental regulations in developed and developing countries, impact assessment quite often suggests alternative sites, technologies, designs and implementation methods as mitigating measures. This causes considerable delay in completing project feasibility analysis and selection, as the full analysis must be repeated until the statutory regulatory authority approves the project. Moreover, project analysis through the above process often results in a sub-optimal project, as financial analysis may eliminate better options: the more environmentally friendly alternative will almost always be more cost-intensive. In this circumstance, this study proposes a decision support system that analyses projects with respect to market, technicalities, and social and environmental impact in an integrated framework using the analytic hierarchy process, a multiple-attribute decision-making technique. This not only reduces the duration of project evaluation and selection, but also helps the organization select the optimal project for sustainable development. The entire methodology has been applied to a cross-country oil pipeline project in India and its effectiveness has been demonstrated. © 2005 Elsevier B.V. All rights reserved.
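To make the analytic hierarchy process step concrete, the sketch below derives criterion weights from a pairwise-comparison matrix via its principal eigenvector, which is the core AHP calculation; the four criteria and all comparison values are hypothetical, not taken from the paper.

```python
# Minimal AHP sketch: criterion weights from a Saaty-style pairwise matrix.
import numpy as np

# Hypothetical comparisons among four criteria:
# market, technical, social impact, environmental impact.
A = np.array([[1.0, 3.0, 5.0, 4.0],
              [1/3, 1.0, 3.0, 2.0],
              [1/5, 1/3, 1.0, 1/2],
              [1/4, 1/2, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # normalized criterion weights

# Consistency check: CI = (lambda_max - n) / (n - 1); RI = 0.90 for n = 4
# (Saaty's random index table). CR below ~0.1 is conventionally acceptable.
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
print("weights:", np.round(w, 3), " CR:", round(ci / 0.90, 3))
```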
Abstract:
The supplier evaluation and selection problem has been studied extensively, and various decision-making approaches have been proposed to tackle it. In contemporary supply chain management, the performance of potential suppliers is evaluated against multiple criteria rather than a single factor: cost. This paper reviews the literature on multi-criteria decision-making approaches for supplier evaluation and selection. Related articles appearing in international journals from 2000 to 2008 are gathered and analyzed to answer three questions: (i) Which approaches were prevalently applied? (ii) Which evaluating criteria were paid more attention to? (iii) Were there any inadequacies in the approaches? Based on any inadequacies found, improvements and possible future work are recommended. This research not only provides evidence that multi-criteria decision-making approaches are better than the traditional cost-based approach, but also aids researchers and decision makers in applying the approaches effectively.
Abstract:
Advances in technology coupled with increasing labour costs have caused service firms to explore self-service delivery options. Although some studies have focused on self-service and the use of technology in service delivery, few have explored the role of service quality in consumers' evaluation of technology-based self-service options. By integrating and extending the self-service quality framework, the service evaluation model, and the Technology Acceptance Model, the authors address this emerging issue by empirically testing a comprehensive model that captures the antecedents and consequences of perceived service quality to predict continued customer interaction in the technology-based self-service context of Internet banking. Important service evaluation constructs such as perceived risk, perceived value and perceived satisfaction are modelled in this framework. The results show that perceived control has the strongest influence on service quality evaluations. Perceived speed of delivery, reliability and enjoyment also have a significant impact on service quality perceptions. The study also found that even though perceived service quality, perceived risk and satisfaction are important predictors, perceived customer value plays a pivotal role in influencing continued interaction.
Abstract:
The topic of bioenergy, biofuels and bioproducts remains at the top of the current political and research agenda. Identification of the optimum processing routes for biomass, in terms of efficiency, cost, environment and socio-economics, is vital as concern grows over the remaining fossil fuel resources, climate change and energy security. It is known that the only renewable way of producing conventional hydrocarbon fuels and organic chemicals is from biomass, but the problem remains of identifying the best product mix and the most efficient way of processing biomass into products. The aim is to move Europe towards a biobased economy, and it is widely accepted that biorefineries are key to this development. A methodology was required for the generation and evaluation of biorefinery process chains for converting biomass into one or more valuable products, one that properly considers performance, cost, environment, socio-economics and the other factors that influence the commercial viability of a process. This thesis describes a methodology to achieve this objective. The completed methodology includes process chain generation, process modelling and subsequent analysis and comparison of results in order to evaluate alternative process routes. A modular structure was chosen for greater flexibility, allowing the user to generate a large number of different biorefinery configurations. The significance of the approach is that the methodology is explicitly defined, and is thus rigorous, consistent and readily re-examined if circumstances change. Consistency in structure and use was required, particularly for multiple analyses: analyses had to be quick and easy to carry out, for example across different scales, configurations and product portfolios, and previous outcomes had to be readily reconsidered. The result of the completed methodology is the identification of the most promising biorefinery chains from those considered as part of the European Biosynergy Project.