26 results for Evaluation. Effectiveness. Efficacy. Efficiency. Participation
Abstract:
Background: Screening for congenital heart defects (CHDs) relies on antenatal ultrasound and postnatal clinical examination; however, life-threatening defects often go undetected. Objective: To determine the accuracy, acceptability and cost-effectiveness of pulse oximetry as a screening test for CHDs in newborn infants. Design: A test accuracy study determined the accuracy of pulse oximetry. Acceptability of testing to parents was evaluated through a questionnaire, and to staff through focus groups. A decision-analytic model was constructed to assess cost-effectiveness. Setting: Six UK maternity units. Participants: These were 20,055 asymptomatic newborns at ≥ 35 weeks’ gestation, their mothers and health-care staff. Interventions: Pulse oximetry was performed prior to discharge from hospital and the results of this index test were compared with a composite reference standard (echocardiography, clinical follow-up and follow-up through interrogation of clinical databases). Main outcome measures: Detection of major CHDs – defined as causing death or requiring invasive intervention up to 12 months of age (subdivided into critical CHDs causing death or intervention before 28 days, and serious CHDs causing death or intervention between 1 and 12 months of age); acceptability of testing to parents and staff; and the cost-effectiveness in terms of cost per timely diagnosis. Results: Fifty-three of the 20,055 babies screened had a major CHD (24 critical and 29 serious), a prevalence of 2.6 per 1000 live births. Pulse oximetry had a sensitivity of 75.0% [95% confidence interval (CI) 53.3% to 90.2%] for critical cases and 49.1% (95% CI 35.1% to 63.2%) for all major CHDs. When 23 cases were excluded, in which a CHD was already suspected following antenatal ultrasound, pulse oximetry had a sensitivity of 58.3% (95% CI 27.7% to 84.8%) for critical cases (12 babies) and 28.6% (95% CI 14.6% to 46.3%) for all major CHDs (35 babies). False-positive (FP) results occurred in 1 in 119 babies (0.84%) without major CHDs (specificity 99.2%, 95% CI 99.0% to 99.3%). However, of the 169 FPs, there were six cases of significant but not major CHDs and 40 cases of respiratory or infective illness requiring medical intervention. The prevalence of major CHDs in babies with normal pulse oximetry was 1.4 (95% CI 0.9 to 2.0) per 1000 live births, as 27 babies with major CHDs (6 critical and 21 serious) were missed. Parent and staff participants were predominantly satisfied with screening, perceiving it as an important test to detect ill babies. There was no evidence that mothers given FP results were more anxious after participating than those given true-negative results, although they were less satisfied with the test. White British/Irish mothers were more likely to participate in the study, and were less anxious and more satisfied than those of other ethnicities. The incremental cost-effectiveness ratio of pulse oximetry plus clinical examination compared with examination alone is approximately £24,900 per timely diagnosis in a population in which antenatal screening for CHDs already exists. Conclusions: Pulse oximetry is a simple, safe, feasible test that is acceptable to parents and staff and adds value to existing screening. It is likely to identify cases of critical CHDs that would otherwise go undetected. It is also likely to be cost-effective given current acceptable thresholds. The detection of other pathologies, such as significant CHDs and respiratory and infective illnesses, is an additional advantage. 
Other pulse oximetry techniques, such as perfusion index, may enhance detection of aortic obstructive lesions.
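The headline accuracy figures above can be reproduced by straightforward arithmetic. The sketch below is illustrative only: the 2x2 counts are back-calculated from the reported percentages (18/24 critical and 26/53 major CHDs detected), and the exact (Clopper-Pearson) interval is an assumption about the CI method used, chosen because it closely reproduces the quoted bounds.

```python
# Reconstructing the screening-accuracy figures quoted in the abstract.
# Counts are back-calculated from the reported percentages; the exact
# (Clopper-Pearson) CI is an assumed method, not confirmed by the paper.
from scipy.stats import beta

def exact_ci(successes: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Two-sided Clopper-Pearson confidence interval for a proportion."""
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

screened = 20_055
critical_detected, critical_total = 18, 24   # 75.0% sensitivity, critical CHDs
major_detected, major_total = 26, 53         # 49.1% sensitivity, all major CHDs
false_positives = 169
no_major_chd = screened - major_total        # 20,002 babies without major CHDs

lo, hi = exact_ci(critical_detected, critical_total)
print(f"critical sensitivity: {critical_detected / critical_total:.1%} "
      f"(95% CI {lo:.1%} to {hi:.1%})")
lo, hi = exact_ci(major_detected, major_total)
print(f"major sensitivity:    {major_detected / major_total:.1%} "
      f"(95% CI {lo:.1%} to {hi:.1%})")
specificity = (no_major_chd - false_positives) / no_major_chd
print(f"specificity:          {specificity:.1%} "
      f"(FP rate {false_positives / no_major_chd:.2%})")
```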
Abstract:
A variety of content-based image retrieval systems exist which enable users to perform image retrieval based on colour content, i.e. colour-based image retrieval. For the production of media for use in television and film, colour-based image retrieval is useful for retrieving specifically coloured animations, graphics or videos from large databases (by comparing user queries to the colour content of extracted key frames). It is also useful to graphic artists creating realistic computer-generated imagery (CGI). Unfortunately, current methods for evaluating colour-based image retrieval systems have two major drawbacks. Firstly, the relevance of images retrieved during the task cannot be measured reliably. Secondly, existing methods do not account for the creative design activity known as reflection-in-action. Consequently, novel and potentially more effective colour-based image retrieval approaches, which would better support the large number of users creating media for television and film productions, cannot be developed and applied, as their efficacy cannot be reliably measured and compared with existing technologies. As a solution to the problem, this paper introduces the Mosaic Test: a user-based evaluation approach in which participants complete an image mosaic of a predetermined target image using the colour-based image retrieval system under evaluation. We report on a user evaluation, whose findings reveal that the Mosaic Test overcomes the two major drawbacks associated with existing evaluation methods and does not require expert participants. © 2012 Springer Science+Business Media, LLC.
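For readers unfamiliar with the underlying retrieval task, the sketch below shows one common textbook formulation of colour-based image retrieval: describe each image by a quantised colour histogram and rank database images (e.g. extracted key frames) by histogram intersection against the query. None of this code comes from the paper; the bin count and similarity measure are illustrative choices, not the authors'.

```python
# A generic colour-based retrieval sketch: quantised RGB histograms
# compared by histogram intersection. Illustrative only.
import numpy as np

def colour_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalised joint RGB histogram of an HxWx3 uint8 image."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Similarity in [0, 1]; 1 means identical colour distributions."""
    return float(np.minimum(h1, h2).sum())

# Rank a toy database against a colour query.
rng = np.random.default_rng(0)
database = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(5)]
query = database[2].copy()  # stand-in for a user-specified colour query

scores = [histogram_intersection(colour_histogram(query), colour_histogram(img))
          for img in database]
print("best match:", int(np.argmax(scores)))  # prints 2, the query's source
```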
Abstract:
Guest editorial. Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His research interests include performance measurement and management, efficiency and productivity analysis, and data mining. He has published widely in international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several journal special issues, including for the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Management Sector. He is on the editorial boards of several international journals and is co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at the Aston University Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research. Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector. This special issue focuses on holistic, applied research on performance measurement in energy sector management, publishing relevant applied research that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation across 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors find that the eastern provinces were relatively and technically more efficient, whereas the western provinces had the highest growth rate over the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, bootstrapping is deployed to address the uncertainty surrounding the DEA point estimates and to provide bias-corrected estimates and confidence intervals. The author finds that, in the sample, the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs. 
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors find that refineries in the Rocky Mountain region performed the best, and that about 60 percent of the oil refineries in the sample could improve their efficiency further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA) to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model uses total social costs, comprising the firm’s private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple-criteria weighting approach for evaluating energy and climate policy. The proposed approach is akin to the analytic hierarchy process, consisting of pairwise comparisons, consistency verification, and criteria prioritization. Stakeholders and experts in the energy policy field are incorporated in the evaluation process through an interactive means of expressing their preferences verbally, numerically, and visually. A total of 14 evaluation criteria were considered, classified under four objectives: climate change mitigation, energy effectiveness, socioeconomic, and competitiveness and technology. Finally, Borge Hess applies stochastic frontier analysis to analyze the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on firm efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant change in firm efficiency following an acquisition, and only weak evidence of efficiency improvements attributable to the new shareholder. Moreover, parent companies appear not to influence a subsidiary’s efficiency positively, and the analysis shows a negative impact of joint ventures on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
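For readers new to DEA, the sketch below shows the basic input-oriented CCR (constant-returns-to-scale) model that the papers above build on: one small linear programme per decision-making unit (DMU). The data and the scipy formulation are illustrative; the models in this issue (Malmquist indices, undesirable outputs, bootstrapped estimates, DEA-COLS) are extensions of this core.

```python
# Minimal input-oriented CCR DEA model, one LP per DMU. Illustrative data.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Input-oriented CCR efficiency of DMU o.

    X: (m inputs, n DMUs), Y: (s outputs, n DMUs).
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[:, [o]], X]              # X @ lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]      # -Y @ lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

# Toy example: 2 inputs, 1 output, 4 DMUs.
X = np.array([[2.0, 4.0, 3.0, 6.0],
              [3.0, 1.0, 4.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for j in range(X.shape[1]):
    print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```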
Abstract:
Environmental law increasingly provides for participatory rights, including appeal rights, to ensure informed, legitimate decision-making. Despite consensus around the general need for participatory rights, including strong ones such as a right to appeal, public participation in environmental decision-making is often criticised. The critics' main argument is that the negative side effects resulting particularly from the use of strong participatory rights outweigh their benefits. Recent regulatory trends arising from better-regulation policy, which aim to make environmental decision-making more cost-efficient, tend to pay special attention to such arguments despite limited empirical evidence. This article provides evidence using material concerning appeals against pollution permits in Finland and suggests that judicial review is a necessary and effective process for both protecting citizens' rights and improving the quality of environmental protection. © The Author [2008]. Published by Oxford University Press. All rights reserved.
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
Healthcare organisations are increasingly being challenged to examine their operations and find opportunities to improve the quality, efficiency and effectiveness of their supply chain services. In light of this, there is an apparent need for healthcare organisations to invest in integration technologies and to achieve the integration of supply chain processes, in order to break up the historical structure characterised by numerous interfaces and the segregation of responsibilities. The aim of this paper is to take an independent look at the healthcare supply chain and to identify, at different levels, the core entities, processes, information flows, and system integration challenges that prevent supply chain quality improvements from being realised. Moreover, this paper proposes, from an information systems perspective, a framework for the evaluation of different integration technology approaches, which can be used as a guideline tool for assessing integration technology alternatives in order to add value to a healthcare supply chain management system. Copyright © 2007 Inderscience Enterprises Ltd.
Abstract:
Background: Coronary heart disease (CHD) is a public health priority in the UK. The National Service Framework (NSF) has set standards for the prevention, diagnosis and treatment of CHD, which include the use of cholesterol-lowering agents aimed at achieving targets of blood total cholesterol (TC) < 5.0 mmol/L and low density lipoprotein-cholesterol (LDL-C) < 3.0 mmol/L. In order to achieve these targets cost-effectively, prescribers need to make an informed choice from the range of statins available. Aim: To estimate the average and relative cost-effectiveness of atorvastatin, fluvastatin, pravastatin and simvastatin in achieving the NSF LDL-C and TC targets. Design: Model-based economic evaluation. Methods: An economic model was constructed to estimate the number of patients achieving the NSF targets for LDL-C and TC at each dose of statin, and to calculate the average drug cost and incremental drug cost per patient achieving the target levels. The population baseline LDL-C and TC, drug efficacy and drug costs were taken from previously published data. Estimates of the distribution of patients receiving each dose of statin were derived from the UK national DIN-LINK database. Results: The estimated annual drug cost per 1000 patients treated was £289,000 with atorvastatin, £315,000 with simvastatin, £333,000 with pravastatin and £167,000 with fluvastatin. The percentages of patients achieving target were 74.4%, 46.4%, 28.4% and 13.2% for atorvastatin, simvastatin, pravastatin and fluvastatin, respectively. Incremental drug costs per extra patient treated to the LDL-C and TC targets, compared with fluvastatin, were £198 and £226 for atorvastatin, £443 and £567 for simvastatin, and £1089 and £2298 for pravastatin, using 2002 drug costs. Conclusions: As a result of its superior efficacy, atorvastatin generates a favourable cost-effectiveness profile as measured by drug cost per patient treated to the LDL-C and TC targets. For a given drug budget, more patients would achieve the NSF LDL-C and TC targets with atorvastatin than with any of the other statins examined.
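The incremental figures can be checked with back-of-envelope arithmetic from the costs and success rates quoted above. The sketch below reproduces the LDL-C column; small discrepancies against the quoted £198/£443/£1089 are attributable to rounding in the published percentages.

```python
# Incremental annual drug cost per extra patient reaching the LDL-C target,
# relative to fluvastatin, from the abstract's per-1000-patient figures.
annual_cost_per_1000 = {          # GBP, 2002 drug costs
    "atorvastatin": 289_000,
    "simvastatin": 315_000,
    "pravastatin": 333_000,
    "fluvastatin": 167_000,
}
pct_to_ldl_target = {             # % of patients reaching LDL-C < 3.0 mmol/L
    "atorvastatin": 74.4,
    "simvastatin": 46.4,
    "pravastatin": 28.4,
    "fluvastatin": 13.2,
}

ref = "fluvastatin"
for drug in ("atorvastatin", "simvastatin", "pravastatin"):
    extra_cost = annual_cost_per_1000[drug] - annual_cost_per_1000[ref]
    # percentage points * 10 = extra patients per 1000 treated
    extra_patients = 10 * (pct_to_ldl_target[drug] - pct_to_ldl_target[ref])
    print(f"{drug}: ~GBP {extra_cost / extra_patients:.0f} per extra patient to target")
```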
Abstract:
Identifying the cellular responses to photodynamic therapy (PDT) is important if the mechanisms of cellular damage are to be fully understood. The relationship between sensitizer, fluence rate and the removal of cells by trypsinization was studied using the RIF-1 cell line. Following treatment of RIF-1 cells with pyridinium zinc (II) phthalocyanine (PPC) or polyhaematoporphyrin at 10 mW cm−2 (3 J cm−2), a significant number of cells were not removed by trypsin incubation compared with controls. Decreasing the fluence rate from 10 to 2.5 mW cm−2 resulted in a two-fold increase in the number of cells attached to the substratum when PPC was used as the sensitizer; however, with 5,10,15,20 meso-tetra(hydroxyphenyl) chlorin (m-THPC) there was no resistance to trypsinization following treatment at either fluence rate. The results indicate that resistance of cells to trypsinization following PDT is likely to be both sensitizer and fluence rate dependent. Increased activity of the enzyme tissue-transglutaminase (tTGase) was observed following PPC-PDT, but not following m-THPC-PDT. Similar results were obtained using HT29 human colonic carcinoma and ECV304 human umbilical vein endothelial cell lines. Hamster fibrosarcoma (Met B) cell clones transfected with human tTGase also exhibited resistance to trypsinization following PPC-mediated photosensitization; however, a similar degree of resistance was observed in PDT-treated control Met B cells, suggesting that tTGase activity alone was not responsible for this process.
Abstract:
Data envelopment analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of decision-making units (DMUs). For large-scale data sets, especially those containing negative measures, DEA inevitably demands substantial computer resources in terms of memory and CPU time. In recent years, a wide range of studies has been conducted on combined artificial neural network and DEA methods. In this study, a supervised feed-forward neural network is proposed to evaluate the efficiency and productivity of large-scale data sets with negative values, in contrast to the corresponding DEA method. Results indicate that the proposed network has some computational advantages over the corresponding DEA models; it can therefore be considered a useful tool for measuring the efficiency of DMUs with (large-scale) negative data.
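As a rough illustration of the idea (not the authors' architecture), the sketch below trains a small feed-forward regressor to reproduce efficiency-like scores, so that further DMUs, including ones with negative measures, can be scored without solving a linear programme each. The network shape and the synthetic stand-in labels are assumptions; in practice the training targets would be DEA scores computed on a subset of DMUs.

```python
# Feed-forward network trained to mimic efficiency scores. Illustrative:
# the labels are synthetic stand-ins for DEA-computed efficiencies.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Each DMU: 3 inputs and 2 outputs as features; values may be negative.
features = rng.normal(size=(2_000, 5))
# Stand-in "efficiency" labels in (0, 1); real ones would come from a DEA
# model suited to negative data (e.g. a range-directional model).
weights = np.array([-0.5, -0.3, -0.2, 0.8, 0.6])
scores = 1.0 / (1.0 + np.exp(-(features @ weights)))

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2_000, random_state=0)
model.fit(features[:1_500], scores[:1_500])
print("held-out R^2:", round(model.score(features[1_500:], scores[1_500:]), 3))
```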
Abstract:
Defining 'effectiveness' in the context of community mental health teams (CMHTs) has become increasingly difficult under the current pattern of provision required in National Health Service mental health services in England. The aim of this study was to establish the characteristics of multi-professional team working effectiveness in adult CMHTs to develop a new measure of CMHT effectiveness. The study was conducted between May and November 2010 and comprised two stages. Stage 1 used a formative evaluative approach based on the Productivity Measurement and Enhancement System to develop the scale with multiple stakeholder groups over a series of qualitative workshops held in various locations across England. Stage 2 analysed responses from a cross-sectional survey of 1500 members in 135 CMHTs from 11 Mental Health Trusts in England to determine the scale's psychometric properties. Based on an analysis of its structural validity and reliability, the resultant 20-item scale demonstrated good psychometric properties and captured one overall latent factor of CMHT effectiveness comprising seven dimensions: improved service user well-being, creative problem-solving, continuous care, inter-team working, respect between professionals, engagement with carers and therapeutic relationships with service users. The scale will be of significant value to CMHTs and healthcare commissioners both nationally and internationally for monitoring, evaluating and improving team functioning in practice.
Abstract:
Internal quantum efficiency (IQE) of a high-brightness blue LED has been evaluated from the external quantum efficiency measured as a function of current at room temperature. Processing the data with a novel evaluation procedure based on the ABC-model, we have determined separately the IQE of the LED structure and the light extraction efficiency (LEE) of the UX:3 chip. Full text: Nowadays, understanding LED efficiency behavior at high currents is critical to finding ways to further improve III-nitride LED performance [1]. External quantum efficiency ηe (EQE) provides integral information on the recombination and photon emission processes in LEDs. Meanwhile, EQE is the product of IQE ηi and LEE ηext at negligible carrier leakage from the active region. Separate determination of IQE and LEE would be much more helpful, providing correlation between these parameters and the specific epi-structure and chip design. In this paper, we extend the approach of [2,3] to the whole range of current/optical power variation, providing an express tool for separate evaluation of IQE and LEE. We studied an InGaN-based LED fabricated by Osram OS. The LED structure, grown by MOCVD on a sapphire substrate, was processed as a UX:3 chip and mounted into a Golden Dragon package without molding. EQE was measured with a Labsphere CDS-600 spectrometer. Plotting EQE versus output power P and finding the power Pm corresponding to the EQE maximum ηm enables comparison of the measurements with the analytical relationships ηi = Q/(Q + p^1/2 + p^-1/2), where p = P/Pm and Q = B/(AC)^1/2, with A, B and C being the recombination constants [4]. As a result, the maximum IQE value, equal to Q/(Q+2), can be found from the ratio ηm/ηe plotted as a function of p^1/2 + p^-1/2 (see Fig. 1a), and LEE can then be calculated as ηext = ηm(Q+2)/Q. Experimental EQE as a function of normalized optical power p is shown in Fig. 1b along with the analytical approximation based on the ABC-model. The approximation fits the measurements perfectly over a range of optical power (or operating current) spanning eight orders of magnitude. In conclusion, a new express method for separate evaluation of the IQE and LEE of III-nitride LEDs is suggested and applied to the characterization of a high-brightness blue LED. With this method, we obtained an LEE from the free chip surface to the air of 69.8%, and an IQE of 85.7% at the maximum and 65.2% at the operating current of 350 mA. [1] G. Verzellesi, D. Saguatti, M. Meneghini, F. Bertazzi, M. Goano, G. Meneghesso, and E. Zanoni, "Efficiency droop in InGaN/GaN blue light-emitting diodes: Physical mechanisms and remedies," J. Appl. Phys., vol. 114, no. 7, p. 071101, Aug. 2013. [2] C. van Opdorp and G. W. 't Hooft, "Method for determining effective nonradiative lifetime and leakage losses in double-heterostructure lasers," J. Appl. Phys., vol. 52, no. 6, pp. 3827-3839, 1981. [3] M. Meneghini, N. Trivellin, G. Meneghesso, E. Zanoni, U. Zehnder, and B. Hahn, "A combined electro-optical method for the determination of the recombination parameters in InGaN-based light-emitting diodes," J. Appl. Phys., vol. 106, no. 11, p. 114508, Dec. 2009. [4] Qi Dai, Qifeng Shan, Jing Wang, S. Chhajed, Jaehee Cho, E. F. Schubert, M. H. Crawford, D. D. Koleske, Min-Ho Kim, and Yongjo Park, "Carrier recombination mechanisms and efficiency droop in GaInN/GaN light-emitting diodes," Appl. Phys. Lett., vol. 97, no. 13, p. 133507, Sept. 2010. © 2014 IEEE.
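The quoted relationships can be checked numerically. In the sketch below, Q is back-solved from the reported peak IQE via IQE_max = Q/(Q+2), and IQE and EQE are evaluated across the eight decades of normalized power mentioned in the text. All numbers come from the abstract; the script illustrates the stated formulas rather than the authors' fitting procedure.

```python
# ABC-model relations from the abstract:
#   IQE(p) = Q / (Q + sqrt(p) + 1/sqrt(p)),  p = P/Pm,  Q = B / sqrt(A*C),
#   peak IQE = Q/(Q+2) at p = 1,  EQE = LEE * IQE.
import numpy as np

iqe_max = 0.857                      # reported peak IQE
lee = 0.698                          # reported light extraction efficiency
Q = 2 * iqe_max / (1 - iqe_max)      # back-solved from iqe_max = Q/(Q+2)

def iqe(p: np.ndarray) -> np.ndarray:
    """Internal quantum efficiency vs normalized optical power p = P/Pm."""
    return Q / (Q + np.sqrt(p) + 1.0 / np.sqrt(p))

p = np.logspace(-4, 4, 9)            # eight orders of magnitude, as measured
for pi, eta_i in zip(p, iqe(p)):
    print(f"p = {pi:9.4g}   IQE = {eta_i:.3f}   EQE = {lee * eta_i:.3f}")
print(f"Q = {Q:.1f}; peak EQE = {lee * iqe_max:.3f}")
```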