26 results for Multifactor performance measurement
at Université de Lausanne, Switzerland
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed in order to deal with environmental variables. The majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability, and it remains an issue even when data is available. As a result, the choice of technique, model and variables is ultimately a political judgement. Multi-criteria decision analysis methods can help decision makers select the most suitable model. The number of selection criteria should remain parsimonious and should not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model allowing for an environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is whether a school operates on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to mitigate this negative influence would be to improve the use of ICT in school management and teaching. When planning new schools, decision makers should also consider the advantages of a single site, which allows a school to reach a critical size in terms of pupils and teachers. The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. In order to define these actions, one has to identify the social-class differences which explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing and benefits policies in order to improve the conditions for disadvantaged children.
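For orientation, the two-stage logic can be sketched in symbols. This is an illustrative summary only; the exact model specification in the thesis may differ. Stage 1 computes an input-oriented DEA efficiency score for each school i, and stage 2 regresses the scores on environmental variables such as the two tested here (a multi-site dummy and the share of underprivileged pupils):

```latex
% Stage 1: input-oriented DEA score of school i (CRS form shown)
\theta_i^{*} \;=\; \min_{\theta,\,\lambda \ge 0}
\Big\{ \theta \;:\; \textstyle\sum_j \lambda_j x_j \le \theta x_i,\;\;
\textstyle\sum_j \lambda_j y_j \ge y_i \Big\}

% Stage 2: econometric explanation of the scores
\theta_i^{*} \;=\; \beta_0 + \beta_1\,\mathrm{multisite}_i
              + \beta_2\,\mathrm{underprivileged}_i + \varepsilon_i
```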
Abstract:
OBJECTIVE: To review and update the conceptual framework, indicator content and research priorities of the Organisation for Economic Cooperation and Development's (OECD) Health Care Quality Indicators (HCQI) project, after a decade of collaborative work. DESIGN: A structured assessment was carried out using a modified Delphi approach, followed by a consensus meeting, to assess the suite of HCQI for international comparisons, agree on revisions to the original framework and set priorities for research and development. SETTING: International group of countries participating in OECD projects. PARTICIPANTS: Members of the OECD HCQI expert group. RESULTS: A reference matrix, based on a revised performance framework, was used to map and assess all 70 HCQI routinely calculated by the OECD expert group. A total of 21 indicators were agreed to be excluded, due to the following concerns: (i) relevance, (ii) international comparability, particularly where heterogeneous coding practices might induce bias, (iii) feasibility, when the number of countries able to report was limited and the added value did not justify sustained effort, and (iv) actionability, for indicators that were unlikely to improve on the basis of targeted policy interventions. CONCLUSIONS: The revised OECD framework for HCQI represents a new milestone in a long-standing international collaboration among a group of countries committed to building common ground for performance measurement. The expert group believes that the continuation of this work is paramount to provide decision makers with a validated toolbox to act directly on quality improvement strategies.
Abstract:
This guide introduces Data Envelopment Analysis (DEA), a performance measurement technique, in such a way as to be appropriate for decision makers with little or no background in economics and operational research. The use of mathematics is kept to a minimum. The guide therefore adopts a strongly practical approach, allowing decision makers to conduct their own efficiency analyses and to interpret results easily. DEA helps decision makers for the following reasons (a minimal numeric sketch follows the list):
- By calculating an efficiency score, it indicates whether a firm is efficient or has capacity for improvement.
- By setting target values for input and output, it calculates how much input must be decreased or output increased in order to become efficient.
- By identifying the nature of returns to scale, it indicates whether a firm has to decrease or increase its scale (or size) in order to minimize the average cost.
- By identifying a set of benchmarks, it specifies which other firms' processes need to be analysed in order to improve its own practices.
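The four capabilities above can be made concrete with a small numeric sketch. The data, the constant-returns-to-scale formulation and the library choice (scipy) are assumptions for illustration, not material from the guide:

```python
# A minimal input-oriented DEA sketch under constant returns to scale
# (CRS), evaluating one firm against the others.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [4.0], [6.0], [5.0]])   # one input, four firms
Y = np.array([[2.0], [5.0], [6.0], [3.0]])   # one output
k = 3                                        # firm under evaluation

n, m = X.shape
s = Y.shape[1]
c = np.r_[1.0, np.zeros(n)]                            # minimise theta
A_ub = np.vstack([np.c_[-X[k].reshape(m, 1), X.T],     # inputs:  X'lam <= theta * x_k
                  np.c_[np.zeros((s, 1)), -Y.T]])      # outputs: Y'lam >= y_k
b_ub = np.r_[np.zeros(m), -Y[k]]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (1 + n), method="highs")

theta, lam = res.x[0], res.x[1:]
print("efficiency score:", round(theta, 3))            # < 1: capacity for improvement
print("input targets:   ", theta * X[k])               # input level needed to be efficient
print("benchmarks:      ", np.flatnonzero(lam > 1e-9)) # peer firms worth studying
print("sum of lambdas:  ", round(lam.sum(), 3))        # rough RTS hint: <1 IRS, >1 DRS
```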
Abstract:
This contribution introduces Data Envelopment Analysis (DEA), a performance measurement technique. DEA helps decision makers for the following reasons: (1) by calculating an efficiency score, it indicates whether a firm is efficient or has capacity for improvement; (2) by setting target values for input and output, it calculates how much input must be decreased or output increased in order to become efficient; (3) by identifying the nature of returns to scale, it indicates whether a firm has to decrease or increase its scale (or size) in order to minimise the average total cost; (4) by identifying a set of benchmarks, it specifies which other firms' processes need to be analysed in order to improve its own practices. This contribution presents the essentials of DEA, alongside a case study for an intuitive understanding of its application. It also introduces Win4DEAP, a software package that conducts efficiency analysis based on DEA methodology. The methodological background of DEA is presented for more demanding readers. Finally, four advanced topics of DEA are treated: adjustment to the environment, preferences, sensitivity analysis and time series data.
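Returns to scale, one of the themes above, can be illustrated by re-solving the same kind of linear program with a convexity constraint: the ratio of the CRS score to the VRS score gives scale efficiency. Again a hedged sketch with invented data, not an excerpt from the contribution or from Win4DEAP:

```python
# CRS vs. VRS DEA scores: adding the convexity constraint sum(lambda) = 1
# yields the variable-returns-to-scale (BCC) score; CRS/VRS is scale efficiency.
import numpy as np
from scipy.optimize import linprog

def dea_score(X, Y, k, vrs=False):
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.vstack([np.c_[-X[k].reshape(m, 1), X.T],
                      np.c_[np.zeros((s, 1)), -Y.T]])
    b_ub = np.r_[np.zeros(m), -Y[k]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None
    b_eq = np.array([1.0]) if vrs else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

X = np.array([[2.0], [4.0], [6.0], [5.0]])   # invented single-input data
Y = np.array([[2.0], [5.0], [6.0], [3.0]])   # invented single-output data
for k in range(4):
    crs, vrs = dea_score(X, Y, k), dea_score(X, Y, k, vrs=True)
    print(f"firm {k}: CRS={crs:.3f}  VRS={vrs:.3f}  scale efficiency={crs / vrs:.3f}")
```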
Abstract:
It is essential for organizations to compress detailed sets of information into more comprehensive sets, thereby establishing sharp data compression and good decision-making. In chapter 1, I review and structure the literature on information aggregation in management accounting research. I outline the cost-benefit trade-off that management accountants need to consider when they decide on the optimal levels of information aggregation. Beyond the fundamental information content perspective, organizations also have to account for cognitive and behavioral perspectives. I elaborate on these aspects, differentiating between research in cost accounting, budgeting and planning, and performance measurement. In chapter 2, I focus on a specific bias that arises when probabilistic information is aggregated. In budgeting and planning, for example, organizations need to estimate mean costs and durations of projects, as the mean is the only measure of central tendency that is linear. Unlike the mean, measures such as the mode or median cannot simply be added up. Given the specific shape of cost and duration distributions, estimating mode or median values will result in underestimations of total project costs and durations. In two experiments, I find that participants tend to estimate mode values rather than mean values, resulting in large distortions of estimates for total project costs and durations. I also provide a strategy that partly mitigates this bias. In chapter 3, I conduct an experimental study to compare two approaches to time estimation for cost accounting, i.e., traditional activity-based costing (ABC) and time-driven ABC (TD-ABC). Contrary to claims made by proponents of TD-ABC, I find that TD-ABC is not necessarily suitable for capacity computations. However, I also provide evidence that TD-ABC seems better suited for cost allocations than traditional ABC.
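The aggregation bias in chapter 2 is easy to reproduce numerically. Assuming right-skewed lognormal task costs (a hypothetical choice; the experiments' actual distributions may differ), only the sum of means matches the expected total:

```python
# Summing per-task modes or medians understates the expected total
# project cost when task costs are right-skewed; only the mean is linear.
import numpy as np

mu, sigma, n_tasks = 4.0, 0.8, 10           # hypothetical task-cost parameters
mean   = np.exp(mu + sigma**2 / 2)          # lognormal mean
median = np.exp(mu)                         # lognormal median
mode   = np.exp(mu - sigma**2)              # lognormal mode

print("sum of means:  ", round(n_tasks * mean, 1))    # unbiased total estimate
print("sum of medians:", round(n_tasks * median, 1))  # underestimates
print("sum of modes:  ", round(n_tasks * mode, 1))    # underestimates most

# Simulation check: the expectation of the total equals the sum of means.
rng = np.random.default_rng(1)
totals = rng.lognormal(mu, sigma, size=(100_000, n_tasks)).sum(axis=1)
print("simulated E[total]:", round(totals.mean(), 1))
```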
Abstract:
Purpose - The purpose of this paper is to analyze what transaction costs are acceptable for customers in different investments. Two life insurance contracts, a mutual fund and a risk-free investment are considered as alternative investment forms. The first two products under scrutiny are a life insurance investment with a point-to-point capital guarantee and a participating contract with an annual interest rate guarantee and participation in the insurer's surplus. The policyholder assesses the various investment opportunities using different utility measures. For selected types of risk profiles, the utility position and the investor's preference for the various investments are assessed. Based on this analysis, the authors study which cost levels can make all of the products equally rewarding for the investor.
Design/methodology/approach - Risk-neutral valuation; calibration using empirical data; utility and performance measurement; geometric Brownian motion for the dynamics of the underlying; numerical examples via Monte Carlo simulation.
Findings - In the first step, the financial performance of the various saving opportunities under different assumptions of the investor's utility measurement is studied. In the second step, the authors calculate the level of transaction costs that are allowed in the various products to make all of the investment opportunities equally rewarding from the investor's point of view. A comparison of these results with transaction costs that are common in the market shows that insurance companies must be careful with respect to the level of transaction costs that they pass on to their customers if they are to provide attractive payoff distributions.
Originality/value - To the best of the authors' knowledge, the research question - i.e. which transaction costs for life insurance products would be acceptable from the customer's point of view - has not been studied in the context described above so far.
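A bare-bones sketch of the valuation machinery named in the design section: terminal values of a geometric Brownian motion simulated by Monte Carlo, a point-to-point capital guarantee versus a plain fund, compared through expected CRRA utility. All parameters and the cost treatment are invented for illustration:

```python
# Monte Carlo comparison of a plain fund and a point-to-point guaranteed
# contract under CRRA utility; all figures are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
S0, mu, sigma, T = 100.0, 0.06, 0.20, 10.0    # fund dynamics (assumed)
n_paths = 200_000
gamma = 3.0                                   # relative risk aversion (assumed)
cost = 0.02                                   # up-front transaction cost share (assumed)

# Terminal GBM values: S_T = S0 * exp((mu - sigma^2/2) T + sigma sqrt(T) Z)
Z = rng.standard_normal(n_paths)
S_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

fund      = (1 - cost) * S_T                  # mutual fund, costs deducted
guarantee = np.maximum((1 - cost) * S_T, S0)  # point-to-point capital guarantee

def crra(w, gamma):
    return np.log(w) if gamma == 1 else w**(1 - gamma) / (1 - gamma)

for name, payoff in [("fund", fund), ("guaranteed contract", guarantee)]:
    eu = crra(payoff, gamma).mean()
    # Certainty equivalent: the sure amount giving the same expected utility.
    ce = np.exp(eu) if gamma == 1 else ((1 - gamma) * eu) ** (1 / (1 - gamma))
    print(f"{name}: certainty equivalent = {ce:.1f}")
```

Raising `cost` for one product until the certainty equivalents coincide mirrors the paper's question of which cost level leaves the investor indifferent.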
Abstract:
High performance liquid chromatography (HPLC) is the reference method for measuring concentrations of antimicrobials in blood. This technique requires careful sample preparation. Protocols using organic solvents and/or solid extraction phases are time consuming and entail several manipulations, which can lead to partial loss of the determined compound and increased analytical variability. Moreover, to obtain sufficient material for analysis, at least 1 ml of plasma is required. This constraint makes it difficult to determine drug levels when blood sample volumes are limited. However, drugs with low plasma-protein binding can be reliably extracted from plasma by ultra-filtration, with minimal loss due to the protein-bound fraction. This study validated a single-step ultra-filtration method for extracting fluconazole (FLC), a first-line antifungal agent with weak plasma-protein binding, from plasma to determine its concentration by HPLC. Spiked FLC standards and unknowns were prepared in human and rat plasma. Samples (240 µl) were transferred into disposable microtube filtration units containing cellulose or polysulfone filters with a 5 kDa cut-off. After centrifugation for 60 min at 15,000 × g, FLC concentrations were measured by direct injection of the filtrate into the HPLC. Using cellulose filters, low molecular weight proteins eluted early in the chromatogram and were well separated from FLC, which eluted at 8.40 min as a sharp single peak. In contrast, with polysulfone filters several additional peaks interfering with the FLC peak were observed. Moreover, FLC recovery using cellulose filters was higher and more reproducible than with polysulfone filters. Cellulose filters were therefore used for the subsequent validation procedure. The quantification limit was 0.195 mg/l. Standard curves with a quadratic regression coefficient ≥ 0.9999 were obtained in the concentration range of 0.195-100 mg/l. The inter- and intra-run accuracies and precisions over the clinically relevant concentration range, 1.875-60 mg/l, fell well within the ±15% variation recommended by the current guidelines for the validation of analytical methods. Furthermore, no analytical interference was observed with commonly used antibiotics, antifungals, antivirals and immunosuppressive agents. Ultra-filtration of plasma with cellulose filters permits the extraction of FLC from small volumes (240 µl). The determination of FLC concentrations by HPLC after this single-step procedure is selective, precise and accurate.
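The validation arithmetic (quadratic calibration, back-calculation, the ±15% accuracy criterion) can be sketched as follows; the peak areas and response model are invented, so this shows only the mechanics, not the study's data:

```python
# Toy reproduction of the calibration mechanics: fit a quadratic
# calibration curve (peak area vs. concentration), back-calculate a
# QC sample, and check the 85-115% accuracy window.
import numpy as np

conc = np.array([0.195, 1.875, 7.5, 15.0, 30.0, 60.0, 100.0])  # standards, mg/l
rng = np.random.default_rng(7)
area = 12.0 * conc - 0.01 * conc**2 + rng.normal(0, 0.5, conc.size)

coeffs = np.polyfit(conc, area, deg=2)          # quadratic calibration curve
fit = np.polyval(coeffs, conc)
r2 = 1 - np.sum((area - fit) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"R^2 = {r2:.5f}")                        # the study required >= 0.9999

def back_calculate(a, coeffs, c_max=120.0):
    # Solve c2*x^2 + c1*x + (c0 - a) = 0 and keep the root inside the range.
    roots = np.roots([coeffs[0], coeffs[1], coeffs[2] - a])
    real = roots[np.isreal(roots)].real
    return real[(real >= 0) & (real <= c_max)][0]

true_c = 20.0                                   # hypothetical QC concentration, mg/l
measured_area = 12.0 * true_c - 0.01 * true_c**2
estimate = back_calculate(measured_area, coeffs)
print(f"back-calculated: {estimate:.2f} mg/l "
      f"(accuracy {100 * estimate / true_c:.1f}%, must fall within 85-115%)")
```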
Abstract:
We propose a new method, based on inertial sensors, to automatically measure at high frequency the durations of the main phases of ski jumping (i.e. take-off release, take-off, and early flight). The kinematics of the ski jumping movement were recorded by four inertial sensors, attached to the thigh and shank of junior athletes, for 40 jumps performed in indoor conditions and 36 jumps in field conditions. An algorithm was designed to detect temporal events from the recorded signals and to estimate the duration of each phase. These durations were evaluated against a reference camera-based motion capture system and against video observations by trainers. The precision for the take-off release and take-off durations (indoor < 39 ms, outdoor = 27 ms) can be considered technically valid for performance assessment. The errors for early flight duration (indoor = 22 ms, outdoor = 119 ms) were comparable to the trainers' variability and should be interpreted with caution. No significant changes in error were noted between indoor and outdoor conditions, and individual jumping technique did not influence the error for take-off release and take-off. The proposed system can therefore provide valuable information for the performance evaluation of ski jumpers during training sessions.
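The evaluation step reduces to comparing phase durations computed from detected event times against the reference system; a toy sketch with invented timestamps:

```python
# Phase durations from detected event times, compared to a reference
# motion-capture system. All timestamps are invented illustrations.
import numpy as np

# Per jump (s): start of take-off release, take-off, end of early flight.
events_imu = np.array([[0.00, 0.32, 0.55],
                       [0.00, 0.30, 0.52],
                       [0.00, 0.35, 0.60]])
events_ref = np.array([[0.00, 0.30, 0.54],
                       [0.00, 0.31, 0.53],
                       [0.00, 0.33, 0.58]])

dur_imu = np.diff(events_imu, axis=1)   # phase durations from the sensors
dur_ref = np.diff(events_ref, axis=1)   # durations from the reference system
err = dur_imu - dur_ref

print("bias (ms):     ", 1000 * err.mean(axis=0))
print("precision (ms):", 1000 * err.std(axis=0, ddof=1))  # compare to ~27-39 ms
```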
Abstract:
Whereas handling of the transcutaneous PO2 (tcPO2) and PCO2 (tcPCO2) sensor has been simplified during the last few years, the high electrode temperature and the short application time remain major drawbacks. In order to determine whether the application of a topical metabolic inhibitor allows reliable measurement at a sensor temperature of 42°C for a period of up to 12 h, we performed a prospective, open, nonrandomized study in a sequential sample of 20 critically ill neonates. A total of 120 comparisons (six repeated measurements per patient) between arterial and transcutaneous values were obtained. Transcutaneous values were measured with a control sensor at 44°C (conventional contact medium, average application time 3 h) and a test sensor at 42°C (Eugenol solution, average application time 8 h). Comparison of tcPO2 and PaO2 at 42°C (Eugenol solution) showed a mean difference of +0.16 kPa (range +1.60 to -2.00 kPa), limits of agreement +1.88 and -1.56 kPa. Comparison of tcPO2 and PaO2 at 44°C (control sensor) revealed a mean difference of +0.02 kPa (range +2.60 to -1.90 kPa), limits of agreement +2.12 and -2.08 kPa. Comparison of tcPCO2 and PaCO2 at 42°C (Eugenol solution) showed a mean difference of +0.91 kPa (range +2.30 to +0.10 kPa), limits of agreement +2.24 and -0.42 kPa. Comparison of tcPCO2 and PaCO2 at 44°C (control sensor) revealed a mean difference of +0.63 kPa (range +1.50 to -0.30 kPa), limits of agreement +1.73 and -0.47 kPa. CONCLUSION: Our results show that the use of a Eugenol solution allows reliable measurement of tcPO2 at a heating temperature of 42°C; the application time can be prolonged up to a maximum of 12 h without aggravating the skin lesions. The performance of the tcPCO2 monitor was slightly worse at 42°C than at 44°C, suggesting that for the Eugenol solution the metabolic offset should be corrected.
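The agreement statistics quoted above (mean difference plus limits of agreement) follow the Bland-Altman scheme of bias ± 1.96 SD of the paired differences; a minimal sketch with invented tcPO2/PaO2 pairs:

```python
# Bland-Altman agreement statistics for paired method comparisons.
# The paired values below are invented illustrations (kPa).
import numpy as np

tc  = np.array([8.1, 9.4, 7.2, 10.1, 6.8, 9.0])   # transcutaneous values
art = np.array([8.0, 9.1, 7.5, 9.8, 7.0, 8.7])    # arterial reference values

diff = tc - art
bias = diff.mean()                                 # mean difference
sd = diff.std(ddof=1)
print(f"mean difference: {bias:+.2f} kPa")
print(f"limits of agreement: {bias + 1.96 * sd:+.2f} / {bias - 1.96 * sd:+.2f} kPa")
```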
Abstract:
Leaders must scan the internal and external environment, chart strategic and task objectives, and provide performance feedback. These instrumental leadership (IL) functions go beyond the motivational and quid pro quo leader behaviors that comprise the full-range leadership model (transformational, transactional, and laissez-faire leadership). In four studies we examined the construct validity of IL. We found evidence for a four-factor IL model that was highly prototypical of good leadership. IL predicted top-level leader emergence controlling for the full-range factors, initiating structure, and consideration. It also explained unique variance in outcomes beyond the full-range factors; the effects of transformational leadership were vastly overstated when IL was omitted from the model. We discuss the importance of a "fuller full-range" leadership theory for theory and practice. We also showcase our methodological contributions regarding corrections for common method variance (i.e., endogeneity) bias using two-stage least squares (2SLS) regression and Monte Carlo split-sample designs.
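The 2SLS correction can be illustrated with a toy endogeneity setup: a disturbance shared by predictor and outcome biases OLS, while instrumenting recovers the true effect. The data and the instrument are hypothetical, not the studies' actual measures:

```python
# Two-stage least squares vs. naive OLS under a shared (endogenous)
# disturbance, mimicking a common-method-variance correction.
import numpy as np

rng = np.random.default_rng(3)
n = 500
z = rng.normal(size=n)                  # instrument (hypothetical)
u = rng.normal(size=n)                  # common-method / endogenous disturbance
x = 0.8 * z + u + rng.normal(size=n)    # predictor contaminated by u
y = 0.5 * x + u + rng.normal(size=n)    # outcome sharing the same disturbance

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("naive OLS slope:", ols(np.c_[np.ones(n), x], y)[1])  # biased upward by u

# Stage 1: regress x on the instrument; Stage 2: regress y on fitted x.
Z = np.c_[np.ones(n), z]
x_hat = Z @ ols(Z, x)
print("2SLS slope:     ", ols(np.c_[np.ones(n), x_hat], y)[1])  # close to 0.5
```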
Abstract:
The blood pressure (BP), heart rate (HR), and humoral effects of single intravenous (i.v.) doses of the angiotensin-converting enzyme (ACE) inhibitor captopril were investigated in five normotensive healthy volunteers. Each subject received, at 1-week intervals, a bolus dose of either captopril (1, 5, and 25 mg) or its vehicle. The study was conducted in a single-blind fashion, and the order of treatment phases was randomized. The different doses of captopril had no acute effect on BP or HR. They induced a dose-dependent decrease in plasma ACE activity and plasma angiotensin II levels. The angiotensin-(1-8) octapeptide was isolated by solid-phase extraction and high-performance liquid chromatography (HPLC) prior to radioimmunoassay (RIA). All three doses of captopril reduced circulating angiotensin II levels within 15 min of drug administration. Only with the 25-mg dose was the angiotensin II concentration below the detection limit at 15 min and still significantly reduced 90 min after drug administration. Simultaneous and progressive decreases in plasma aldosterone levels were observed both with ACE inhibition and during vehicle injection, but the relative fall was more pronounced after captopril administration. No adverse reaction was noticed. These results demonstrate that captopril given parenterally blocks the renin-angiotensin system in a dose-dependent manner. Only with the 25-mg dose were the inhibition of plasma converting enzyme activity and the reduction of plasma angiotensin II sustained for at least 1.5 h.
Abstract:
Perfusion CT studies of regional cerebral blood flow (rCBF), involving sequential acquisition of cerebral CT sections during IV contrast material administration, have classically been reported to be performed at 120 kVp. We hypothesized that using 80 kVp should result in the same image quality while significantly lowering the patient's radiation dose, and we evaluated this assumption. In five patients undergoing a cerebral CT survey, one section level was imaged at 120 kVp and 80 kVp, before and after IV administration of iodinated contrast material. The four cerebral CT sections obtained in each patient were analyzed with particular attention to contrast, noise, and radiation dose. Contrast enhancement at 80 kVp is significantly increased (P < .001), as is contrast between gray matter and white matter after contrast enhancement (P < .001). Mean noise at 80 kVp is not statistically different (P = .042). Finally, performing perfusion CT studies at 80 kVp, keeping mAs constant, lowers the radiation dose by a factor of 2.8. We thus conclude that 80 kVp acquisition of perfusion CT studies of rCBF results in increased contrast enhancement and should improve rCBF analysis, with a reduced radiation dose to the patient.
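The reported dose factor is consistent with the common rule of thumb that, at constant mAs, CT dose scales roughly with tube voltage raised to a power between 2 and 3; this check is our assumption, not a computation from the paper:

```python
# Assumption-based check: dose ~ kVp**k at constant mAs, with k in [2, 3].
for k in (2.0, 2.5, 3.0):
    print(f"k = {k}: dose ratio 120/80 kVp = {(120 / 80) ** k:.2f}")
# k = 2.0 -> 2.25; k = 2.5 -> 2.76 (close to the reported 2.8); k = 3.0 -> 3.38
```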
Abstract:
Little attention has been paid so far to the influence of the chemical nature of the substance when measuring δ15N by elemental analysis (EA)-isotope ratio mass spectrometry (IRMS). Although the bulk nitrogen isotope analysis of organic material is not to be questioned, literature from different disciplines using IRMS provides hints that the quantitative conversion of nitrate into nitrogen presents difficulties. We observed abnormal series of δ15N values of laboratory standards and nitrates. These unexpected results were shown to be related to the tailing of the nitrogen peak of nitrate-containing compounds. A series of experiments was set up to investigate the cause of this phenomenon, using ammonium nitrate (NH4NO3) and potassium nitrate (KNO3) samples, two organic laboratory standards, as well as the international secondary reference materials IAEA-N1 and IAEA-N2 (two ammonium sulphates [(NH4)2SO4]) and IAEA-NO-3 (a potassium nitrate). In experiment 1, we used graphite and vanadium pentoxide (V2O5) as additives to observe whether they could enhance the decomposition (combustion) of nitrates. In experiment 2, we tested another elemental analyser configuration, including an additional section of reduced copper, in order to see whether the tailing could originate from an incomplete reduction process. Finally, we modified several parameters of the method and observed their influence on the peak shape, the δ15N value and the nitrogen content (in weight percent) of the target substances. We found the best results using mere thermal decomposition in helium, under exclusion of any oxygen. We show that the analytical procedure used for organic samples should not be used for nitrates because of their different chemical nature. We present the best performance given one set of sample introduction parameters for the analysis of nitrates, as well as for the ammonium sulphate IAEA-N1 and IAEA-N2 reference materials. We discuss these results considering the thermochemistry of the substances and the analytical technique itself. The results emphasise the difference in chemical nature of inorganic and organic samples, which necessarily involves distinct thermochemistry when analysed by EA-IRMS. Therefore, they should not be processed using the same analytical procedure. This clearly impacts the way international secondary reference materials should be used for the calibration of organic laboratory standards.
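For reference, the δ15N values discussed here follow the standard delta notation (relative to atmospheric N2), which the abstract does not restate:

```latex
% R denotes the 15N/14N isotope-amount ratio; the standard is atmospheric N2.
\delta^{15}\mathrm{N}\ (\text{\textperthousand}) \;=\;
\left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right) \times 1000,
\qquad R = \frac{{}^{15}\mathrm{N}}{{}^{14}\mathrm{N}}
```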
Abstract:
Voriconazole (VRC) is a broad-spectrum antifungal triazole with nonlinear pharmacokinetics. The utility of measuring voriconazole blood levels for optimizing therapy is a matter of debate. Available high-performance liquid chromatography (HPLC) and bioassay methods are technically complex, time-consuming, or have a narrow analytical range. The objectives of the present study were to develop new, simple analytical methods and to assess the variability of voriconazole blood levels in patients with invasive mycoses. Acetonitrile precipitation, reverse-phase separation, and UV detection were used for HPLC. A voriconazole-hypersusceptible Candida albicans mutant lacking multidrug efflux transporters (cdr1Δ/cdr1Δ, cdr2Δ/cdr2Δ, flu1Δ/flu1Δ, and mdr1Δ/mdr1Δ) and calcineurin subunit A (cnaΔ/cnaΔ) was used for the bioassay. Mean intra-/inter-run accuracies over the VRC concentration range of 0.25 to 16 mg/liter were 93.7% ± 5.0%/96.5% ± 2.4% (HPLC) and 94.9% ± 6.1%/94.7% ± 3.3% (bioassay). Mean intra-/inter-run coefficients of variation were 5.2% ± 1.5%/5.4% ± 0.9% and 6.5% ± 2.5%/4.0% ± 1.6% for HPLC and bioassay, respectively. The coefficient of concordance between HPLC and bioassay was 0.96. Sequential measurements in 10 patients with invasive mycoses showed important inter- and intraindividual variations of the estimated voriconazole area under the concentration-time curve (AUC): median, 43.9 mg·h/liter (range, 12.9 to 71.1) on the first and 27.4 mg·h/liter (range, 2.9 to 93.1) on the last day of therapy. During therapy, the AUC decreased in five patients, increased in three, and remained unchanged in two. A toxic encephalopathy probably related to the increase of the VRC AUC (from 71.1 to 93.1 mg·h/liter) was observed. The VRC AUC decreased (from 12.9 to 2.9 mg·h/liter) in a patient with persistent signs of invasive aspergillosis. These preliminary observations suggest that voriconazole over- or underexposure resulting from variability of blood levels might have clinical implications. The simple HPLC and bioassay methods offer new tools for monitoring voriconazole therapy.
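The AUC figures above can be estimated from sequential concentration measurements with the trapezoidal rule; a minimal sketch with invented sampling times and levels:

```python
# Trapezoidal AUC over one dosing day from sequential measurements.
# Times and concentrations are invented illustrations.
import numpy as np

t = np.array([0.0, 2.0, 6.0, 12.0, 18.0, 24.0])   # h after first daily dose
c = np.array([1.2, 4.8, 3.5, 2.1, 3.9, 1.4])      # VRC concentration, mg/liter

auc_24 = float((np.diff(t) * (c[1:] + c[:-1]) / 2).sum())
print(f"AUC(0-24) = {auc_24:.1f} mg·h/liter")     # compare: patient range 2.9-93.1
```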
Abstract:
BACKGROUND: Measurement of plasma renin is important for the clinical assessment of hypertensive patients. The most common methods for measuring plasma renin are the plasma renin activity (PRA) assay and the renin immunoassay. The clinical application of renin inhibitor therapy has thrown into focus the differences in the information provided by activity assays and immunoassays for renin and prorenin measurement and has drawn attention to the precautions needed to ensure their accurate measurement. CONTENT: Renin activity assays and immunoassays provide related but different information. Whereas activity assays measure only active renin, immunoassays measure both active and inhibited renin. Particular care must be taken in the collection and processing of blood samples and in the performance of these assays to avoid errors in renin measurement. Both activity assays and immunoassays are susceptible to renin overestimation due to prorenin activation. In addition, activity assays performed with peptidase inhibitors may overestimate the degree of inhibition of PRA by renin inhibitor therapy. Moreover, immunoassays may overestimate the reactive increase in plasma renin concentration in response to renin inhibitor therapy, owing to the inhibitor promoting conversion of prorenin to an open conformation that is recognized by renin immunoassays. CONCLUSIONS: The successful application of renin assays to patient care requires that the clinician and the clinical chemist understand the information provided by these assays and the precautions necessary to ensure their accuracy.