502 results for BENCHMARKS
European Policy Uses of International Comparisons of Academic Achievement. ACES Working Papers, 2012
Abstract:
International large-scale assessments (ILSAs) and the resulting ranking of countries in key academic subjects have become increasingly significant in the development of global performance indicators and national level reforms in education. As one of the largest international surveys, the Programme for International Student Assessment (PISA) has had a considerable impact on the world of international comparisons of education. Based on the results of these assessments, claims are often made about the relative success or failure of education systems, and in some cases, such as Germany or Japan, ILSAs have sparked national level reforms (Ertl, 2006; Takayama, 2007, 2009). In this paper, I offer an analysis of how PISA is increasingly used as a key reference both for a regional entity like the European Union (EU) and for national level performance targets in the example of Spain (Breakspear, 2012). Specifically, the paper examines the growth of OECD and EU initiatives in defining quality education, and the use of both EU benchmarks and PISA in defining the education indicators used in Spain to measure and set goals for developing quality education. By doing so, this paper points to the role of the OECD and the EU in national education systems. It therefore adds to a body of literature pointing to the complex relationship between international, regional, and national education policy spaces (cf. Dale & Robertson, 2002; Lawn & Grek, 2012; Rizvi & Lingard, 2009).
Abstract:
At the 18 March EU-Turkey Migration Summit, EU leaders pledged to lift visa requirements for Turkish citizens travelling to the Schengen zone by the end of June 2016 if Ankara met the required 72 benchmarks. On 4 May the European Commission will decide whether or not Turkey has done enough. The stakes are high because Turkey has threatened to cancel the readmission agreement, which is central to the success of the migration deal, if the EU fails to deliver.
Abstract:
The aim of my dissertation is to analyze how selected elements of language are addressed in two contemporary dystopias, Feed by M. T. Anderson (2002) and Super Sad True Love Story by Gary Shteyngart (2010). I chose these two novels because language plays a key role in both of them: both are primarily focused on the pervasiveness of technology, and on how the use/abuse of technology affects language in all its forms. In particular, I examine four key aspects of language: books, literacy, diary writing, and oral language. In order to analyze how the aforementioned elements of language are dealt with in Feed and Super Sad True Love Story, I consider how the same aspects of language are presented in a sample of classical dystopias selected as benchmarks: We by Yevgeny Zamyatin (1921), Brave New World by Aldous Huxley (1932), Animal Farm (1945) and Nineteen Eighty-Four (1949) by George Orwell, Fahrenheit 451 by Ray Bradbury (1952), and The Handmaid's Tale by Margaret Atwood (1986). In this way, I look at how language, books, literacy, and diaries are dealt with in Anderson’s Feed and in Shteyngart’s Super Sad True Love Story, both in comparison with the classical dystopias and with one another. This allows for an analysis of the similarities, as well as the differences, between the two novels. The comparative analysis carried out also takes into account the fact that the two contemporary dystopias have different target audiences: one is for young adults (Feed), whereas the other is for adults (Super Sad True Love Story). Consequently, I also consider whether differences related to the target readership give rise to further differences in how language is dealt with. Preliminary findings indicate that, despite their different target audiences, the linguistic elements considered are addressed in the two novels in similar ways.
Abstract:
We present a study on the dependence of electric breakdown discharge properties on electrode geometry and the breakdown field in liquid argon near its boiling point. The measurements were performed with a spherical cathode and a planar anode at distances ranging from 0.1 mm to 10.0 mm. A detailed study of the time evolution of the breakdown volt-ampere characteristics was performed for the first time. It revealed a slow streamer development phase in the discharge. The results of a spectroscopic study of the visible light emission of the breakdowns complement the measurements. The light emission from the initial phase of the discharge is attributed to electro-luminescence of liquid argon following a current of drifting electrons. These results contribute to setting benchmarks for the breakdown-safe design of ionization detectors, such as Liquid Argon Time Projection Chambers (LAr TPCs).
Abstract:
This paper examines the functioning of energy efficiency standards and labeling policies for air conditioners in Japan. The results of our empirical analysis suggest that consumers respond more to label information, which benchmarks the energy efficiency performance of each product against a pre-specified target, than to direct performance measures. This finding provides justification for the setting, and regular updating, of target standards, as well as for their use in calculating relative performance measures. We also find, through graphical analysis, that air conditioner manufacturers face a tradeoff between energy efficiency and product compactness when they develop their products. This tradeoff, combined with the semi-regular upward revision of minimum energy efficiency standards, has led to the growth in the indoor unit size of air conditioners in recent years. In the face of this phenomenon, regulatory rules were revised so that manufacturers could adhere to less stringent standards if the indoor unit of their product remains below a certain size. Our demand estimates provide no evidence that a larger indoor unit size causes disutility to consumers. It is therefore possible that the regulatory change was not warranted from a consumer welfare point of view.
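To make concrete what such a label-based relative performance measure looks like, the short sketch below computes a hypothetical achievement rate of a product's efficiency against a pre-specified target standard; the annual performance factor is used as a stand-in efficiency metric, and the product names and all numbers are invented for illustration rather than taken from the paper.

# Hedged sketch: relative performance ("achievement rate") of an air conditioner
# against a pre-specified target standard, as a comparative label might summarize it.
# Product names and values are hypothetical, for illustration only.

def achievement_rate(apf: float, target_apf: float) -> float:
    """Ratio of a product's annual performance factor (APF) to its target standard."""
    return apf / target_apf

products = {
    "model_A": {"apf": 6.6, "target_apf": 6.0},  # exceeds its target
    "model_B": {"apf": 5.4, "target_apf": 6.0},  # falls short of its target
}

for name, p in products.items():
    rate = achievement_rate(p["apf"], p["target_apf"])
    print(f"{name}: APF {p['apf']:.1f} vs target {p['target_apf']:.1f} -> achievement rate {rate:.0%}")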
Abstract:
At Sleipner, CO2 is being separated from natural gas and injected into an underground saline aquifer for environmental purposes. Uncertainty in the aquifer temperature leads to uncertainty in the in situ density of CO2. In this study, gravity measurements were made over the injection site in 2002 and 2005 on top of 30 concrete benchmarks on the seafloor in order to constrain the in situ CO2 density. The gravity measurements have a repeatability of 4.3 µGal for 2002 and 3.5 µGal for 2005. The resulting time-lapse uncertainty is 5.3 µGal. Unexpected benchmark motions due to local sediment scouring contribute to the uncertainty. Forward gravity models are calculated based on both 3D seismic data and reservoir simulation models. The time-lapse gravity observations best fit a high-temperature forward model based on the time-lapse 3D seismics, suggesting that the average in situ CO2 density is about 530 kg/m³. The uncertainty in determining the average density is estimated to be ±65 kg/m³ (95% confidence); however, this does not include uncertainties in the modeling. Additional seismic surveys and future gravity measurements will put better constraints on the CO2 density and continue to map out the CO2 flow.
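As a point of reference for how single-survey repeatabilities relate to a time-lapse uncertainty, the sketch below combines the two quoted repeatabilities in quadrature; the quadrature assumption is ours, and the study's full error budget (which also has to absorb benchmark motion from sediment scouring) may arrive at a slightly different figure.

import math

# Hedged sketch: combine single-survey repeatabilities in quadrature to estimate
# a time-lapse uncertainty. The quadrature model is an assumption; the study's
# own error budget may differ slightly.
rep_first_survey = 4.3   # µGal, repeatability quoted for the first survey
rep_second_survey = 3.5  # µGal, repeatability quoted for the second survey

time_lapse_uncertainty = math.sqrt(rep_first_survey**2 + rep_second_survey**2)
print(f"quadrature estimate: {time_lapse_uncertainty:.1f} µGal")  # about 5.5 µGal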
Abstract:
Pursuant to Public Act 93-0331, the Illinois Workforce Investment Board is required to submit annual progress reports on the benchmarks established for measuring workforce development in Illinois.
Abstract:
The information in this booklet may be useful to educators when teaching about animal and plant adaptations. Illinois Learning Standards and Benchmarks: 12.B.1a, 12.B.1b, 12.B.2a, 12.B.2b.
Abstract:
The information in this booklet may be useful to educators when teaching about animal and plant adaptations. Illinois Learning Standards Benchmarks: 12.B.1a, 12.B.1b, 12.B.2a, 12.B.2b.
Abstract:
This paper presents an analysis of the thermomechanical behavior of hollow circular cylinders of functionally graded material (FGM). The solutions are obtained by a novel limiting process that employs the solutions of homogeneous hollow circular cylinders, with no recourse to the basic theory or the equations of non-homogeneous thermoelasticity. Several numerical cases are studied, and conclusions are drawn regarding the general properties of thermal stresses in the FGM cylinder. We conclude that thermal stresses necessarily occur in the FGM cylinder, except in the trivial case of zero temperature. While heat resistance may be improved by sagaciously designing the material composition, careful attention must be paid to the fact that thermal stresses in the FGM cylinder are governed by more factors than those in its homogeneous counterparts. The results presented here will serve as benchmarks for future related work.
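For reference, the homogeneous special case from which such a limiting process starts is the classical steady-state hollow-cylinder solution; a minimal sketch of the temperature field it rests on, in notation of our own choosing (inner radius $a$ held at $T_a$, outer radius $b$ held at $T_b$), is

$$T(r) = T_a + (T_b - T_a)\,\frac{\ln(r/a)}{\ln(b/a)}, \qquad a \le r \le b,$$

from which the thermal stresses of the homogeneous cylinder follow by the standard thermoelastic relations.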
Abstract:
Background: Hospital performance reports based on administrative data should distinguish differences in quality of care between hospitals from case mix related variation and random error effects. A study was undertaken to determine which of 12 diagnosis-outcome indicators measured across all hospitals in one state had significant risk adjusted systematic (or special cause) variation (SV) suggesting differences in quality of care. For those that did, we determined whether SV persists within hospital peer groups, whether indicator results correlate at the individual hospital level, and how many adverse outcomes would be avoided if all hospitals achieved indicator values equal to the best performing 20% of hospitals. Methods: All patients admitted during a 12 month period to 180 acute care hospitals in Queensland, Australia, with heart failure (n = 5745), acute myocardial infarction (AMI) (n = 3427), or stroke (n = 2955) were entered into the study. Outcomes comprised in-hospital deaths, long hospital stays, and 30 day readmissions. Regression models produced standardised, risk adjusted diagnosis specific outcome event ratios for each hospital. Systematic and random variation in ratio distributions for each indicator were then apportioned using hierarchical statistical models. Results: Only five of 12 (42%) diagnosis-outcome indicators showed significant SV across all hospitals (long stays and same diagnosis readmissions for heart failure; in-hospital deaths and same diagnosis readmissions for AMI; and in-hospital deaths for stroke). Significant SV was only seen for two indicators within hospital peer groups (same diagnosis readmissions for heart failure in tertiary hospitals and in-hospital mortality for AMI in community hospitals). Only two pairs of indicators showed significant correlation. If all hospitals emulated the best performers, at least 20% of AMI and stroke deaths, heart failure long stays, and heart failure and AMI readmissions could be avoided. Conclusions: Diagnosis-outcome indicators based on administrative data require validation as markers of significant risk adjusted SV. Validated indicators allow quantification of realisable outcome benefits if all hospitals achieved best performer levels. The overall level of quality of care within single institutions cannot be inferred from the results of one or a few indicators.
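To make the phrase "standardised, risk adjusted outcome event ratio" concrete, the sketch below illustrates one common construction (indirect standardisation): a patient-level risk model supplies each patient's expected event probability, and a hospital's ratio is its observed events divided by the sum of those expected probabilities. The data, covariates, and model here are invented for illustration; the study's hierarchical models are more elaborate.

# Hedged sketch of a standardised, risk adjusted outcome ratio via indirect
# standardisation. Data, covariates, and model are invented; the study's own
# hierarchical models are more elaborate than this.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(70, 10, n)                     # single invented risk factor
hospital = rng.integers(0, 10, n)               # invented hospital identifier
p_true = 1 / (1 + np.exp(-(-6 + 0.07 * age)))   # event risk rises with age
died = rng.binomial(1, p_true)                  # simulated in-hospital deaths

# Patient-level risk model (covariates only, no hospital term).
risk_model = LogisticRegression().fit(age.reshape(-1, 1), died)
expected = risk_model.predict_proba(age.reshape(-1, 1))[:, 1]

# Hospital-level indicator: observed events divided by expected events.
for h in range(10):
    mask = hospital == h
    print(f"hospital {h}: O/E ratio = {died[mask].sum() / expected[mask].sum():.2f}")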
Abstract:
The Lattice Solid Model has been used successfully as a virtual laboratory to simulate the fracturing of rocks, the dynamics of faults, earthquakes, and gouge processes. However, results from those simulations show that in order to make the next step towards more realistic experiments it will be necessary to use models containing a significantly larger number of particles than current models. Thus, those simulations will require a greatly increased amount of computational resources. Whereas the computing power provided by single processors can be expected to increase according to Moore's law, i.e., to double every 18-24 months, parallel computers can provide significantly larger computing power today. In order to make this computing power available for the simulation of the microphysics of earthquakes, a parallel version of the Lattice Solid Model has been implemented. Benchmarks using large models with several million particles have shown that the parallel implementation of the Lattice Solid Model can achieve a high parallel efficiency of about 80% for large numbers of processors on different computer architectures.
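For readers unfamiliar with the quoted figure, parallel efficiency is conventionally defined as speedup divided by the number of processors; the minimal sketch below uses hypothetical timings rather than benchmark results from the paper.

# Hedged sketch: parallel efficiency = (serial time / parallel time) / processor count.
# The timings are hypothetical and are not results reported in the paper.
def parallel_efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
    speedup = t_serial / t_parallel
    return speedup / n_procs

# e.g. a run taking 1000 s on one processor and 10 s on 128 processors
print(f"{parallel_efficiency(1000.0, 10.0, 128):.0%}")  # prints 78%, i.e. roughly 80%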
Abstract:
This paper examines the measurement of long-horizon abnormal performance when stock selection is conditional on an extended period of past survival. Filtering on survival results in a sample driven towards more-established, frequently traded stocks and this has implications for the choice of benchmark used in performance measurement (especially in the presence of the well-documented size effect). A simulation study is conducted to document the properties of commonly employed performance measures conditional on past survival. The results suggest that the popular index benchmarks used in long-horizon event studies are severely biased and yield test statistics that are badly misspecified. In contrast, a matched-stock benchmark based on size and industry performs consistently well. Also, an eligible-stock index designed to mitigate the influence of the size effect proves effective.
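To illustrate the benchmarking step whose biases are being evaluated, the sketch below computes a buy-and-hold abnormal return against both an index benchmark and a size/industry matched stock; the returns are invented, and the buy-and-hold abnormal return (BHAR) is used here as one common long-horizon measure rather than the paper's exact statistic.

# Hedged sketch: buy-and-hold abnormal return (BHAR) of an event stock relative
# to two benchmarks. Monthly returns are invented for illustration only.
import numpy as np

def buy_and_hold(returns):
    """Compound simple periodic returns into a holding-period return."""
    return float(np.prod(1 + np.asarray(returns)) - 1)

event_stock   = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]  # surviving sample stock
index_bench   = [0.01,  0.00, 0.02, 0.01,  0.00, 0.01]  # broad index benchmark
matched_stock = [0.02, -0.01, 0.02, 0.01, -0.01, 0.03]  # size/industry matched stock

print(f"BHAR vs index benchmark: {buy_and_hold(event_stock) - buy_and_hold(index_bench):.2%}")
print(f"BHAR vs matched stock:   {buy_and_hold(event_stock) - buy_and_hold(matched_stock):.2%}")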
Abstract:
This article analyses the way newspapers and journalists sometimes fail to acknowledge and resolve some of the contentious ethical dilemmas associated with reporting news. Its focus is on not exploiting and vilifying the vulnerable, especially people with mental illness, through sensationalism and the inaccurate and imprecise use of medical terminology such as "psycho", "schizo", or "lunatic". Because ethics is central to our understanding of professionalism, this article uses professions and professionalism as benchmarks against which to analyse and critique how journalists and newspapers define and report news. Sometimes journalists fail the test of good ethical practice in terms of the negative, outdated, and inaccurate expressions they use in the news stories they report. Likewise, regulators of news industry standards appear not to recognize and sanction such reporting. The apparent inability to resolve these ethical dilemmas creates a context conducive to tolerance for, though not acceptance of, unethical news reporting.
Abstract:
Objective. To describe patients' perceptions of minimum worthwhile and desired reductions in pain and disability upon commencing treatment for chronic low back pain. Design and Setting. Descriptive study nested within a community-based randomized controlled trial on prolotherapy injections and exercises. Patients. A total of 110 participants with chronic low back pain. Interventions. Prior to treatment, participants were asked what minimum percentage reductions in pain and disability would make treatment worthwhile and what percentage reductions in pain and disability they desired with treatment. Outcome Measures. Minimum worthwhile reductions and desired reductions in pain and disability. Results. Median (inter-quartile range) minimum worthwhile reductions were 25% (20%, 50%) for pain and 35% (20%, 50%) for disability. This compared with desired reductions of 80% (60%, 100%) for pain and 80% (50%, 100%) for disability. The internal consistency between pain and disability responses was high (Spearman's coefficients of association of 0.81 and 0.87, respectively). A significant association existed between minimum worthwhile reductions and desired reductions, but no association was found between these two factors and patient age, gender, pain severity or duration, disability, anxiety, depression, response to treatment, or treatment satisfaction. Conclusions. Inquiring directly about patients' expectations of reductions in pain and in disability is important in establishing realistic treatment goals and setting benchmarks for success. There is a wide disparity between the reductions that patients regard as minimum worthwhile and the reductions that they hope to achieve. However, there is a high internal consistency between the reductions in pain and disability that they expect.
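For readers who want to see how the reported summary statistics are obtained, the sketch below computes a median, an inter-quartile range, and a Spearman correlation on invented percentage responses; none of the study's data are used.

# Hedged sketch: median, inter-quartile range, and Spearman correlation of the
# kind reported above, computed on invented percentage responses.
import numpy as np
from scipy.stats import spearmanr

pain_responses       = np.array([20, 25, 50, 30, 25, 20, 50, 40])  # invented %
disability_responses = np.array([20, 35, 50, 35, 30, 20, 50, 45])  # invented %

for label, x in [("pain", pain_responses), ("disability", disability_responses)]:
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    print(f"{label}: median {med:.0f}% (IQR {q1:.0f}%, {q3:.0f}%)")

rho, _ = spearmanr(pain_responses, disability_responses)
print(f"Spearman correlation between pain and disability responses: {rho:.2f}")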