995 results for Coffee quality
Abstract:
Advances in data mining have provided techniques for automatically discovering underlying knowledge and extracting useful information from large volumes of data. Data mining offers tools for the quick discovery of relationships, patterns and knowledge in large, complex databases. The application of data mining to manufacturing remains relatively limited, mainly because of the complexity of manufacturing data. The growing self-organizing map (GSOM) algorithm has proven efficient for analysing unsupervised DNA data; however, it produces unsatisfactory clustering when used on some large manufacturing data. This paper proposes a data mining methodology based on a GSOM tool developed using a modified GSOM algorithm. The proposed method is used to generate clusters of good and faulty products from a manufacturing dataset. The clustering quality (CQ) measure proposed in the paper is used to evaluate the performance of the cluster maps. The paper also proposes automatic identification of variables to find the most probable causative factor(s) that discriminate between good and faulty products by quickly examining historical manufacturing data. The proposed method offers manufacturers a way to smooth the production flow and improve product quality. Simulation results on small and large manufacturing datasets show the effectiveness of the proposed method.
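GSOM itself grows its node lattice dynamically, which is beyond the scope of an abstract; as a rough illustration of the self-organizing-map competition that GSOM extends, here is a minimal fixed-size 1-D SOM sketch in pure Python. All function names are hypothetical and this is a plain SOM, not the modified GSOM the paper proposes.

```python
import math
import random

def train_som(data, n_nodes=4, epochs=50, lr0=0.5, radius0=1.5, seed=0):
    """Minimal 1-D self-organizing map: nodes compete for each sample;
    the best-matching unit (BMU) and its neighbours move toward it."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                  # learning rate decays
        radius = max(radius0 * (1 - epoch / epochs), 0.5)  # neighbourhood shrinks
        for x in data:
            # BMU = node with the smallest squared Euclidean distance to x
            bmu = min(range(n_nodes),
                      key=lambda i: sum((nodes[i][d] - x[d]) ** 2
                                        for d in range(dim)))
            for i in range(n_nodes):
                # influence falls off with 1-D grid distance to the BMU
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                for d in range(dim):
                    nodes[i][d] += lr * h * (x[d] - nodes[i][d])
    return nodes

def assign_clusters(data, nodes):
    """Map each sample to the index of its nearest node (its cluster)."""
    return [min(range(len(nodes)),
                key=lambda i: sum((nodes[i][d] - x[d]) ** 2
                                  for d in range(len(x))))
            for x in data]
```

On two well-separated groups of products, the map's nodes settle near each group, so good and faulty samples land on different nodes, which is the clustering behaviour the paper's CQ measure then evaluates.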
Abstract:
This paper presents the results of a pilot study examining the factors that impact most on the effective implementation of, and improvement to, Quality Management Systems (QMSs) amongst Indonesian construction companies. Nine critical factors were identified from an extensive literature review, and a survey was conducted of 23 respondents from three specific groups (Quality Managers, Project Managers, and Site Engineers) undertaking work in the Indonesian infrastructure construction sector. The data were initially analyzed using simple descriptive techniques. This study reveals that different groups within the sector hold different opinions of the factors, regardless of the degree of importance of each factor. However, the evaluation of construction project success and the incentive schemes for high-performance staff are the two factors that were considered very important by most of the respondents in all three groups. In terms of their assessment of tools for measuring contractors’ performance, additional QMS guidelines, techniques related to QMS practice provided by the Government, and benchmarking, a clear majority in each group regarded their usefulness as ‘of some importance’.
Abstract:
Background: Clinical practice and clinical research have made a concerted effort to move beyond the use of clinical indicators alone and embrace patient-focused care through the use of patient-reported outcomes such as health-related quality of life. However, unless patients give consistent consideration to the health states that give meaning to the measurement scales used to evaluate these constructs, longitudinal comparison of these measures may be invalid. This study aimed to investigate whether patients give consideration to a standard health state rating scale (EQ-VAS) and whether consideration of good and poor health state descriptors immediately changes their self-report. Methods: A randomised crossover trial was implemented amongst hospitalised older adults (n = 151). Patients were asked to consider descriptions of extremely good (Description-A) and poor (Description-B) health states. The EQ-VAS was administered as a self-report at baseline, after the first descriptors (A or B), then again after the remaining descriptors (B or A respectively). At baseline patients were also asked if they had considered either EQ-VAS anchor. Results: Overall 106/151 (70%) participants changed their self-evaluation by ≥5 points on the 100-point VAS, with a mean (SD) change of +4.5 (12) points (p < 0.001). A total of 74/151 (49%) participants did not consider the best-health VAS anchor; of the 77 who did, 59 (77%) thought the good health descriptors were more extreme (better) than they had previously considered. Similarly, 85/151 (56%) participants did not consider the worst-health anchor; of the 66 who did, 63 (95%) thought the poor health descriptors were more extreme (worse) than they had previously considered. Conclusions: Health state self-reports may not be well considered. An immediate significant shift in response can be elicited by exposure to a mere description of an extreme health state, despite no actual change in the underlying health state occurring.
Caution should be exercised in research and clinical settings when interpreting subjective patient-reported outcomes that are dependent on brief anchors for meaning. Trial Registration: Australian and New Zealand Clinical Trials Registry (#ACTRN12607000606482) http://www.anzctr.org.au
Abstract:
Background: Assessments of change in subjective patient-reported outcomes such as health-related quality of life (HRQoL) are a key component of many clinical and research evaluations. However, conventional longitudinal evaluation of change may not agree with patient-perceived change if patients' understanding of the subjective construct under evaluation changes over time (response shift) or if patients have inaccurate recollection (recall bias). This study examined whether older adults' perception of change is in agreement with conventional longitudinal evaluation of change in their HRQoL over the duration of their hospital stay. It also investigated this level of agreement after adjusting patient-perceived change for recall bias that patients may have experienced. Methods: A prospective longitudinal cohort design nested within a larger randomised controlled trial was implemented. 103 hospitalised older adults participated in this investigation at a tertiary hospital facility. The EQ-5D utility and Visual Analogue Scale (VAS) scores were used to evaluate HRQoL. Participants completed EQ-5D reports as soon as they were medically stable (within three days of admission), then again immediately prior to discharge. Three methods of change score calculation were used (conventional change, patient-perceived change, and patient-perceived change adjusted for recall bias). Agreement was primarily investigated using intraclass correlation coefficients (ICC) and limits of agreement. Results: Overall, 101 (98%) participants completed both admission and discharge assessments. The mean (SD) age was 73.3 (11.2) years. The median (IQR) length of stay was 38 (20-60) days. For agreement between conventional longitudinal change and patient-perceived change, ICCs were 0.34 and 0.40 for EQ-5D utility and VAS respectively. For agreement between conventional longitudinal change and patient-perceived change adjusted for recall bias, ICCs were 0.98 and 0.90 respectively.
Discrepancy between conventional longitudinal change and patient-perceived change was considered clinically meaningful for 84 (83.2%) participants; after adjusting for recall bias this reduced to 8 (7.9%). Conclusions: Agreement between conventional change and patient-perceived change was not strong. A large proportion of this disagreement could be attributed to recall bias. To overcome the invalidating effect of response shift (on conventional change) and recall bias (on patient-perceived change), a method of adjusting patient-perceived change for recall bias has been described.
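The abstract reports intraclass correlation coefficients without specifying the form; a common choice for absolute agreement between two measurement methods is ICC(2,1). A minimal pure-Python sketch follows (the function name is hypothetical, not from the study):

```python
def icc_2_1(pairs):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `pairs` is a list of (measurement_1, measurement_2) tuples per subject."""
    n, k = len(pairs), 2
    grand = sum(sum(p) for p in pairs) / (n * k)
    row_means = [sum(p) / k for p in pairs]
    col_means = [sum(p[j] for p in pairs) / n for j in range(k)]
    ss_rows = k * sum((r - grand) ** 2 for r in row_means)
    ss_cols = n * sum((c - grand) ** 2 for c in col_means)
    ss_total = sum((x - grand) ** 2 for p in pairs for x in p)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-methods mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A systematic offset between the two methods lowers ICC(2,1) even when rank order is preserved, which is why an agreement index (not a plain correlation) is appropriate for comparing change scores.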
Abstract:
Objective: To identify agreement levels between conventional longitudinal evaluation of change (post–pre) and patient-perceived change (post–then test) in health-related quality of life. Design: A prospective cohort investigation with two assessment points (baseline and six-month follow-up) was implemented. Setting: Community rehabilitation setting. Subjects: Frail older adults accessing community-based rehabilitation services. Intervention: Nil as part of this investigation. Main measures: Conventional longitudinal change in health-related quality of life was considered the difference between standard EQ-5D assessments completed at baseline and follow-up. To evaluate patient-perceived change a ‘then test’ was also completed at the follow-up assessment. This required participants to report (from their current perspective) how they believe their health-related quality of life was at baseline (using the EQ-5D). Patient-perceived change was considered the difference between ‘then test’ and standard follow-up EQ-5D assessments. Results: The mean (SD) age of participants was 78.8 (7.3) years. Of the 70 participants, 62 (89%) data sets were complete and included in the analysis. Agreement between conventional (post–pre) and patient-perceived (post–then test) change was low to moderate (EQ-5D utility intraclass correlation coefficient (ICC) = 0.41, EQ-5D visual analogue scale (VAS) ICC = 0.21). Neither approach inferred greater change than the other (utility P = 0.925, VAS P = 0.506). Mean (95% confidence interval (CI)) conventional change in EQ-5D utility and VAS were 0.140 (0.045, 0.236) and 8.8 (3.3, 14.3) respectively, while patient-perceived change was 0.147 (0.055, 0.238) and 6.4 (1.7, 11.1) respectively. Conclusions: Substantial disagreement exists between conventional longitudinal evaluation of change in health-related quality of life and patient-perceived change in health-related quality of life (as measured using a then test) within individuals.
Abstract:
The purpose of this research is to report preliminary empirical evidence regarding the association between common physical performance measures and health-related quality of life (HRQoL) of hospitalized older adults recovering from illness and injury. Frequently, these patients do not return to premorbid levels of independence and physical ability. Rehabilitation for this population often focuses on improving physical functioning and mobility with the intention of maximizing their HRQoL for discharge and thereafter. For this reason, longitudinal use of physical performance measures as an indicator of improvement in physical functioning (and thus HRQoL) is common. Although this is a logical approach, there have been mixed results from previous investigations into the association between common measures of physical function and HRQoL amongst other adult patient populations.1,2 There has been no previous investigation reporting the association between HRQoL and a variety of common physical performance measures in hospitalized older adults.
Abstract:
Teacher quality is recognised as a lynchpin for education reforms internationally, and both Federal and State governments in Australia have turned their attention to teacher education institutions: the starting point for preparing quality teachers. Changes to policy and shifts in expectations impact on Faculties of Education, despite the fact that little is known about what makes a quality teacher preparation program effective. New accountability measures, mandated Professional Standards, and proposals to test all graduates before registration, mean that teacher preparation programs need capacity for flexibility and responsiveness. The risk is that undergraduate degree programs can become ‘patchwork quilts’ with traces of the old and new stitched together, sometimes at the expense of coherence and integrity. This paper provides a roadmap used by one large Faculty of Education in Queensland for reforming and reconceptualising the curriculum for a 4-year undergraduate program, in response to new demands from government and the professional bodies.
Abstract:
Background: In response to the need for more comprehensive quality assessment within Australian residential aged care facilities, the Clinical Care Indicator (CCI) Tool was developed to collect outcome data as a means of making inferences about quality. A national trial of its effectiveness and a Brisbane-based trial of its use within the quality improvement context determined the CCI Tool represented a potentially valuable addition to the Australian aged care system. This document describes the next phase in the CCI Tool's development, the aims of which were to establish validity and reliability of the CCI Tool, and to develop quality indicator thresholds (benchmarks) for use in Australia. The CCI Tool is now known as the ResCareQA (Residential Care Quality Assessment). Methods: The study aims were achieved through a combination of quantitative data analysis and expert panel consultations using a modified Delphi process. The expert panel consisted of experienced aged care clinicians, managers, and academics; they were initially consulted to determine face and content validity of the ResCareQA, and later to develop thresholds of quality. To analyse its psychometric properties, ResCareQA forms were completed for all residents (N=498) of nine aged care facilities throughout Queensland. Kappa statistics were used to assess inter-rater and test-retest reliability, and Cronbach's alpha coefficient was calculated to determine internal consistency. For concurrent validity, equivalent items on the ResCareQA and the Resident Classification Scale (RCS) were compared using Spearman's rank order correlations, while discriminative validity was assessed using the known-groups technique, comparing ResCareQA results between groups with differing care needs, as well as between male and female residents.
Rank-ordered facility results for each clinical care indicator (CCI) were circulated to the panel; upper and lower thresholds for each CCI were nominated by panel members and refined through a Delphi process. These thresholds indicate excellent care at one extreme and questionable care at the other. Results: Minor modifications were made to the assessment, and it was renamed the ResCareQA. Agreement on its content was reached after two Delphi rounds; the final version contains 24 questions across four domains, enabling generation of 36 CCIs. Both test-retest and inter-rater reliability were sound, with median kappa values of 0.74 (test-retest) and 0.91 (inter-rater); internal consistency was not as strong, with a Cronbach's alpha of 0.46. Because the ResCareQA does not provide a single combined score, comparisons for concurrent validity were made with the RCS on an item-by-item basis, with most resultant correlations being quite low. Discriminative validity analyses, however, revealed highly significant differences in the total number of CCIs between high-care and low-care groups (t199=10.77, p<0.001), while the differences between male and female residents were not significant (t414=0.56, p=0.58). Clinical outcomes varied both within and between facilities; agreed upper and lower thresholds were finalised after three Delphi rounds. Conclusions: The ResCareQA provides a comprehensive, easily administered means of monitoring quality in residential aged care facilities that can be reliably used on multiple occasions. The relatively modest internal consistency score was likely due to the multi-factorial nature of quality and the absence of an aggregate result for the assessment.
Measurement of concurrent validity proved difficult in the absence of a gold standard, but the sound discriminative validity results suggest that the ResCareQA has acceptable validity and could be confidently used as an indication of care quality within Australian residential aged care facilities. The thresholds, while preliminary due to the small sample size, enable users to make judgements about quality within and between facilities. It is therefore recommended that the ResCareQA be adopted for wider use.
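The inter-rater and test-retest reliability results above are reported as kappa values; for two raters assigning categorical judgements, Cohen's kappa corrects raw agreement for agreement expected by chance. A minimal sketch in pure Python (the function name is hypothetical):

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement.
    `rater_a` and `rater_b` are equal-length lists of category labels."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # proportion of items on which the two raters agree outright
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal category frequencies
    p_chance = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)
```

Kappa of 1 means perfect agreement; values such as the study's 0.74 and 0.91 indicate agreement well above chance.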
Abstract:
The variability of input parameters is the most important source of overall model uncertainty. Therefore, an in-depth understanding of this variability is essential for uncertainty analysis of stormwater quality model outputs. This paper presents the outcomes of a research study which investigated the variability of pollutant build-up characteristics on road surfaces in residential, commercial and industrial land uses. It was found that build-up characteristics vary highly even within the same land use. Additionally, industrial land use showed relatively higher variability of maximum build-up, build-up rate and particle size distribution, whilst commercial land use displayed relatively higher variability of pollutant-solid ratio. Among the various build-up parameters analysed, D50 (volume median diameter) displayed the highest variability for all three land uses.
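The abstract does not name its variability metric; the coefficient of variation (standard deviation relative to the mean) is a common way to compare the spread of parameters that carry different units, such as build-up rate versus D50. A minimal sketch using the population standard deviation (the function name is an assumption):

```python
def coefficient_of_variation(values):
    """CV = population standard deviation / mean. Being unitless, it lets
    parameters with different units be ranked by relative variability."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return variance ** 0.5 / mean
```

Ranking each build-up parameter's CV within a land use is one way to reproduce a statement like "D50 displayed the highest variability".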
Abstract:
Franchised convenience stores successfully operate throughout Taiwan, but the convenience store market is approaching saturation point. Creating a cooperative long-term franchising relationship between franchisors and franchisees is essential to maintain the proportion of convenience stores...
Abstract:
The issue of ensuring that construction projects achieve high-quality outcomes continues to be an important consideration for key project stakeholders. Although many quality practices have been implemented within the industry, the establishment and achievement of reasonable levels of quality in construction projects continues to be a problem. While some studies into the introduction and development of quality practices and stakeholder management in the construction industry have been undertaken separately, no major studies have so far examined in depth how quality management practices that specifically address stakeholders’ perspectives of quality can be utilised to contribute to the ultimate constructed quality of projects. This paper summarizes a review of the literature on previous research into quality within the industry, focuses on the benefits and shortcomings, and examines the concept of integrating stakeholder perspectives of project quality to improve outcomes throughout the project lifecycle. The findings discussed in this paper reveal a pressing need for the investigation, development and testing of a framework to facilitate better implementation of quality management practices and thus achievement of better quality outcomes within the construction industry. The framework will incorporate and integrate the views of stakeholders on what constitutes final project quality, to be utilised in developing better quality management planning and systems aimed ultimately at achieving better project quality delivery.
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done using machine learning algorithms which learn from examples of fault-prone and non-fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data, Naive Bayes and the Support Vector Machine, and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features.
A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
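The Rank Sum method is only sketched in the abstract; one plausible reading of "a ranking abstraction laid over bin densities for each class, classification by the sum of ranks over features" is illustrated below in pure Python. All names, the equal-width binning scheme and the tie handling are assumptions for illustration, not the thesis author's implementation.

```python
def fit_rank_sum(X, y, n_bins=5):
    """Per feature: histogram the training values and record each class's
    (normalised) density in every bin."""
    classes = sorted(set(y))
    model = []
    for f in range(len(X[0])):
        vals = [x[f] for x in X]
        lo, hi = min(vals), max(vals)
        width = (hi - lo) / n_bins or 1.0
        dens = {c: [0] * n_bins for c in classes}
        for x, c in zip(X, y):
            b = min(int((x[f] - lo) / width), n_bins - 1)
            dens[c][b] += 1
        for c in classes:
            total = sum(dens[c]) or 1
            dens[c] = [v / total for v in dens[c]]
        model.append((lo, width, dens))
    return classes, model

def predict_rank_sum(classes, model, x):
    """Per feature: rank classes by their density in the sample's bin
    (denser class gets the higher rank); predict the class with the
    largest rank sum across features."""
    scores = {c: 0 for c in classes}
    for f, (lo, width, dens) in enumerate(model):
        n_bins = len(next(iter(dens.values())))
        b = min(max(int((x[f] - lo) / width), 0), n_bins - 1)
        order = sorted(classes, key=lambda c: dens[c][b])  # ascending density
        for rank, c in enumerate(order):
            scores[c] += rank
    return max(classes, key=lambda c: scores[c])
```

Because only per-feature rank orders are summed, the classifier is insensitive to the absolute scale of each software metric, which may be part of the method's appeal on heterogeneous metric data.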
Abstract:
In today’s electronic world, vast amounts of knowledge are stored within many datasets and databases. Often the default format of this data means that the knowledge within is not immediately accessible, but rather has to be mined and extracted, and this requires automated tools that are effective and efficient. Association rule mining is one approach to obtaining the knowledge stored within datasets/databases; it discovers frequent patterns and association rules between the items/attributes of a dataset with varying levels of strength. However, this is also association rule mining’s downside: the number of rules that can be found is usually very large. In order to effectively use the association rules (and the knowledge within), the number of rules needs to be kept manageable; thus it is necessary to have a method to reduce the number of association rules without losing knowledge through the process. From this, the idea of non-redundant association rule mining was born. A second issue with association rule mining is determining which rules are interesting. The standard approach has been to use support and confidence, but they have their limitations. Approaches which use information about the dataset’s structure to measure association rules are limited, but could yield useful association rules if tapped. Finally, while it is important to be able to obtain interesting association rules from a dataset in a manageable quantity, it is equally important to be able to apply them in a practical way, where the knowledge they contain can be taken advantage of. Association rules show items/attributes that appear together frequently. Recommendation systems also look at patterns and items/attributes that occur together frequently in order to make a recommendation to a person. It should therefore be possible to bring the two together. In this thesis we look at these three issues and propose approaches to address them.
For discovering non-redundant rules, we propose enhanced approaches to rule mining in multi-level datasets that allow hierarchically redundant association rules to be identified and removed without information loss. For discovering interesting association rules based on the dataset’s structure, we propose three measures for use in multi-level datasets. Lastly, we propose and demonstrate an approach that allows association rules to be practically and effectively used in a recommender system, while at the same time improving the recommender system’s performance. This becomes especially evident when looking at the user cold-start problem for a recommender system; in fact, our proposal helps to solve this serious problem facing recommender systems.
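Support and confidence, the standard interestingness measures whose limitations the thesis discusses, can be computed directly over a list of transactions; a minimal sketch (the function names are illustrative, not from the thesis):

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Confidence of the rule antecedent -> consequent: the support of
    their union divided by the support of the antecedent alone."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))
```

A rule in a multi-level dataset is hierarchically redundant when a more general rule already conveys the same association at equal or higher confidence; pruning on such a criterion is one way to keep the rule set manageable without information loss.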
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence no “gold standard” test is currently available to assess tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and is the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to light scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern’s local orientation as a metric of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact-lens-wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity to quantify the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value was extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully comprehend the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase has provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, and to selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
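The Cartesian-to-polar transform described for the extra TFSQ metric can be illustrated with a nearest-neighbour resampling sketch in pure Python (the function name and sampling choices are assumptions; the thesis's actual routine is not specified in this abstract):

```python
import math

def unwrap_to_polar(img, cx, cy, n_r, n_theta):
    """Resample a square image (list of lists, row-major, img[y][x]) from
    Cartesian to polar coordinates: concentric rings around (cx, cy)
    become quasi-straight rows, one row per sampled radius."""
    h, w = len(img), len(img[0])
    r_max = min(cx, cy, w - 1 - cx, h - 1 - cy)
    polar = []
    for ri in range(n_r):
        r = r_max * (ri + 0.5) / n_r          # radius at the bin centre
        row = []
        for ti in range(n_theta):
            th = 2 * math.pi * ti / n_theta
            x = int(round(cx + r * math.cos(th)))   # nearest-neighbour sample
            y = int(round(cy + r * math.sin(th)))
            row.append(img[y][x])
        polar.append(row)
    return polar
```

After unwrapping, a disturbance of the ring pattern shows up as variation along a row, so simple block statistics over the polar image can serve as a TFSQ-style metric.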
Abstract:
In 2008, a three-year pilot ‘pay for performance’ (P4P) program, known as the ‘Clinical Practice Improvement Payment’ (CPIP), was introduced into Queensland Health (QHealth). QHealth is a large public-sector provider of acute, community, and public health services in Queensland, Australia. The organisation has recently embarked on a significant reform agenda, including a review of existing funding arrangements (Duckett et al., 2008). Partly in response to this reform agenda, a casemix funding model has been implemented to reconnect health care funding with outcomes. CPIP was conceptualised as a performance-based scheme that rewarded quality with financial incentives. This is the first time such a scheme has been implemented in the Australian public health sector with a focus on rewarding quality, and it is unique in its large state-wide focus, covering 15 Districts. CPIP initially targeted five acute and community clinical areas: Mental Health, Discharge Medication, Emergency Department, Chronic Obstructive Pulmonary Disease, and Stroke. The CPIP scheme was designed around key concepts including the identification of clinical indicators that met the set criteria of: high disease burden; a well-defined single diagnostic group or intervention; significant variations in clinical outcomes and/or practices; a good evidence base; and clinician control and support (Ward, Daniels, Walker & Duckett, 2007). This evaluative research targeted Phase One of the implementation of the CPIP scheme, from January 2008 to March 2009. A formative evaluation utilising a mixed methodology and complementarity analysis was undertaken. The research involved three research questions and aimed to determine the knowledge, understanding, and attitudes of clinicians; identify improvements to the design, administration, and monitoring of CPIP; and determine the financial and economic costs of the scheme.
Three key studies were undertaken to address the research questions. Firstly, a survey of clinicians was undertaken to examine levels of knowledge and understanding and their attitudes to the scheme. Secondly, the study sought to apply Statistical Process Control (SPC) to the process indicators to assess whether this enhanced the scheme, and thirdly, a simple economic cost analysis was examined. The CPIP survey elicited 192 clinician respondents. Over 70% of these respondents were supportive of the continuation of the CPIP scheme. This finding was also supported by the results of a quantitative attitude survey, which identified positive attitudes in 6 of the 7 domains, including impact, awareness and understanding, and clinical relevance, all scored positive across the combined respondent group. SPC as a trending tool may play an important role in the early identification of indicator weakness in the CPIP scheme. This evaluative research study supports a previously identified need in the literature for a phased introduction of pay for performance (P4P) type programs. It further highlights the value of undertaking a formal risk assessment of clinician, management, and systemic levels of literacy and competency with measurement and monitoring of quality prior to a phased implementation. This phasing can then be guided by a P4P Design Variable Matrix, which provides a selection of program design options such as indicator targets and payment mechanisms. It became evident that a clear process is required to standardise how clinical indicators evolve over time and to direct movement towards more rigorous ‘pay for performance’ targets and the development of an optimal funding model. Use of this matrix will enable the scheme to mature and build the literacy and competency of clinicians and the organisation as implementation progresses.
Furthermore, the research identified that CPIP created a spotlight on clinical indicators, and incentive payments of over five million dollars, from a potential ten million, were secured across the five clinical areas in the first 15 months of the scheme. This indicates that quality was rewarded in the new QHealth funding model and that, despite issues identified with the payment mechanism, funding was distributed. The economic model used identified a relatively low cost of reporting (under $8,000) as opposed to funds secured of over $300,000 for mental health, as an example. Movement to a full cost-effectiveness study of CPIP is supported. Overall, the introduction of the CPIP scheme into QHealth has been a positive and effective strategy for engaging clinicians in quality and has been the catalyst for the identification and monitoring of valuable clinical process indicators. This research has highlighted that clinicians are supportive of the scheme in general; however, there are some significant risks, including the functioning of the CPIP payment mechanism. Given clinician support for the use of a pay-for-performance methodology in QHealth, the CPIP scheme has the potential to be a powerful addition to a multi-faceted suite of quality improvement initiatives within QHealth.
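Applying SPC to process indicators, as the second study did, typically means plotting them on a control chart. A minimal Shewhart individuals chart sketch in pure Python, with sigma estimated from the average moving range (d2 = 1.128 for subgroups of size 2); the function name is hypothetical:

```python
def individuals_chart(xs):
    """Shewhart individuals (I) chart: centre line at the mean, control
    limits at +/- 3 sigma, sigma estimated as mean moving range / 1.128."""
    center = sum(xs) / len(xs)
    moving_ranges = [abs(b - a) for a, b in zip(xs, xs[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    ucl = center + 3 * sigma   # upper control limit
    lcl = center - 3 * sigma   # lower control limit
    flags = [i for i, x in enumerate(xs) if x > ucl or x < lcl]
    return center, lcl, ucl, flags
```

Plotting successive indicator values against these limits is what allows SPC, used as a trending tool, to flag indicator weakness early rather than waiting for aggregate reports.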