796 results for Performance Measures
Abstract:
Governmental accountability is the requirement that government entities be accountable to the citizenry in order to justify the raising and expenditure of public resources. The concept of service efforts and accomplishments measurement for government programs was introduced by the Governmental Accounting Standards Board (GASB) in Service Efforts and Accomplishments Reporting: Its Time Has Come (1990). This research tested the feasibility of implementing the concept for the Federal-aid highway construction program and identified factors affecting implementation through a case study of the District of Columbia. Changes in condition and performance ratings for specific highway segments in 15 projects, before and after construction expenditures, were evaluated using data provided by the Federal Highway Administration. The results of the evaluation indicated difficulty in drawing conclusions about the performance of the state program as a whole. The state program reflects problems within the Federally administered program that severely limit implementation of outcome-oriented performance measurement. The major problems identified with data acquisition are data reliability, availability, compatibility, and consistency among states. Other significant factors affecting implementation are institutional and political barriers. Institutional issues in the Federal Highway Administration include the lack of integration between the fiscal project-specific database and the Highway Performance Monitoring System database. The Federal Highway Administration has the ability to resolve both data problems; however, interviews with key Federal informants indicate this will not occur without external directives and changes to the Federal “stewardship” approach to program administration. The findings indicate that many issues must be resolved for successful implementation of outcome-oriented performance measures in the Federal-aid construction program. The issues are organizational and political in nature; however, in the current environment resolution is possible. Additional research is desirable and would be useful in overcoming the obstacles to successful implementation.
Abstract:
Highways are generally designed to serve a mixed traffic flow that consists of passenger cars, trucks, buses, recreational vehicles, etc. The fact that the impacts of these different vehicle types are not uniform creates problems in highway operations and safety. A common approach to reducing the impacts of truck traffic on freeways has been to restrict trucks to certain lane(s) to minimize the interaction between trucks and other vehicles and to compensate for their differences in operational characteristics. The performance of different truck lane restriction alternatives differs under different traffic and geometric conditions. Thus, a good estimate of the operational performance of different truck lane restriction alternatives under prevailing conditions is needed to help make informed decisions on truck lane restriction alternatives. This study develops operational performance models that can be applied to help identify the most operationally efficient truck lane restriction alternative on a freeway under prevailing conditions. The operational performance measures examined in this study include average speed, throughput, speed difference, and lane changes. Prevailing conditions include number of lanes, interchange density, free-flow speeds, volumes, truck percentages, and ramp volumes. Recognizing the difficulty of collecting sufficient data for an empirical modeling procedure that involves a high number of variables, the simulation approach was used to estimate the performance values for various truck lane restriction alternatives under various scenarios. Both the CORSIM and VISSIM simulation models were examined for their ability to model truck lane restrictions. Due to a major problem found in the CORSIM model for truck lane modeling, the VISSIM model was adopted as the simulator for this study. The VISSIM model was calibrated mainly to replicate the capacity given in the 2000 Highway Capacity Manual (HCM) for various free-flow speeds under the ideal basic freeway section conditions. Non-linear regression models for average speed, throughput, average number of lane changes, and speed difference between the lane groups were developed. Based on the performance models developed, a simple decision procedure was recommended to select the desired truck lane restriction alternative for prevailing conditions.
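The abstract above does not reproduce the functional form of its non-linear regression models. As a minimal illustrative sketch of how such a model could be fitted to simulation output, the following Python snippet uses scipy's curve_fit with invented data, assumed variable names, and an assumed multiplicative speed-decay form, none of which come from the study itself.

```python
# Hypothetical sketch: fitting a non-linear model of average speed to simulation
# output. Data, variable names, and the functional form are illustrative
# assumptions, not the models developed in the study.
import numpy as np
from scipy.optimize import curve_fit

# Assumed predictors from simulated scenarios: free-flow speed (mph),
# volume per lane (vph), and truck percentage (proportion).
ffs   = np.array([55, 55, 65, 65, 70, 70, 75, 75], dtype=float)
vol   = np.array([800, 1800, 900, 1900, 1000, 2000, 1100, 2100], dtype=float)
trk   = np.array([0.05, 0.20, 0.05, 0.20, 0.10, 0.25, 0.10, 0.25])
speed = np.array([54, 45, 63, 52, 67, 53, 71, 56], dtype=float)  # observed avg speed

def avg_speed(X, a, b, c):
    """Average speed decays from free-flow speed as volume and truck share rise."""
    ffs, vol, trk = X
    return ffs * np.exp(-a * (vol / 2000.0) ** b - c * trk)

params, _ = curve_fit(avg_speed, (ffs, vol, trk), speed, p0=[0.1, 2.0, 0.5], maxfev=10000)
print("fitted parameters (a, b, c):", params)
print("predicted average speeds:", avg_speed((ffs, vol, trk), *params))
```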
Abstract:
Since the 1990s, scholars have paid special attention to public management’s role in theory and research under the assumption that effective management is one of the primary means for achieving superior performance. To some extent, this was influenced by popular business writings of the 1980s as well as the reinventing literature of the 1990s. A number of case studies, but only limited quantitative research, have been published showing that management matters in the performance of public organizations. My study examined whether or not management capacity increased organizational performance using quantitative techniques. The specific research problem analyzed was whether significant differences existed between high- and average-performing public housing agencies on select criteria identified in the Government Performance Project (GPP) management capacity model, and whether this model could predict outcome performance measures in a statistically significant manner, while controlling for exogenous influences. My model included two of the four GPP management subsystems (human resources and information technology), integration and alignment of subsystems, and an overall managing-for-results framework. It also included environmental and client control variables that were hypothesized to affect performance independent of management action. Descriptive results of survey responses showed high-performing agencies with better scores on most high-performance dimensions of individual criteria, suggesting support for the model; however, quantitative analysis found limited statistically significant differences between high and average performers and limited predictive power of the model. My analysis led to the following major conclusions: past performance was the strongest predictor of present performance; high unionization hurt performance; and a budget-related criterion mattered more for high performance than other model factors. As to the specific research question, management capacity may be necessary, but it is not sufficient to increase performance. The research suggested managers may benefit by implementing best practices identified through the GPP model. The usefulness of the model could be improved by adding direct service delivery to it, which may also improve its predictive power. Finally, abundant tested concepts and tools are available to practitioners for improving management subsystem support of direct service delivery.
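As an illustration only, the sketch below shows the general kind of model described above: an outcome performance score regressed on management-capacity criteria while controlling for exogenous influences such as past performance and unionization. All variable names and data are invented and do not reflect the GPP or public housing agency data.

```python
# Hypothetical sketch of regressing outcome performance on management-capacity
# scores with exogenous controls. Data are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120                                       # assumed number of agencies
past_perf    = rng.normal(70, 10, n)          # control: prior performance score
unionization = rng.uniform(0, 1, n)           # control: share of unionized staff
hr_capacity  = rng.normal(3.5, 0.7, n)        # management subsystem scores (1-5 scale)
it_capacity  = rng.normal(3.2, 0.8, n)

# Outcome generated so that past performance dominates, mirroring the stated finding.
outcome = (0.8 * past_perf - 8 * unionization
           + 1.5 * hr_capacity + 0.5 * it_capacity + rng.normal(0, 5, n))

X = sm.add_constant(np.column_stack([past_perf, unionization, hr_capacity, it_capacity]))
results = sm.OLS(outcome, X).fit()
print(results.summary(xname=["const", "past_perf", "unionization",
                             "hr_capacity", "it_capacity"]))
```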
Abstract:
Rates of survival of victims of sudden cardiac arrest (SCA) treated with cardiopulmonary resuscitation (CPR) have shown little improvement over the past three decades. Since registered nurses (RNs) comprise the largest group of healthcare providers in U.S. hospitals, it is essential that they are competent in performing the four primary measures of CPR (compression, ventilation, medication administration, and defibrillation) in order to improve survival rates of SCA patients. The purpose of this experimental study was to test a color-coded SMOCK system on: 1) time to implement emergency patient care measures, 2) technical skills performance, 3) number of medical errors, and 4) team performance during simulated CPR exercises. The study sample was 260 RNs (M = 40 years, SD = 11.6) with work experience as an RN (M = 7.25 years, SD = 9.42). Nurses were allocated to a control or intervention arm consisting of 20 groups of 5-8 RNs per arm, for a total of 130 RNs in each arm. Nurses in each study arm were given clinical scenarios requiring emergency CPR. Nurses in the intervention group wore different color-labeled aprons (smocks) indicating their role assignment (medications, ventilation, compression, defibrillation, etc.) on the code team during CPR. Findings indicated that the intervention using color-labeled smocks for pre-assigned roles had a significant effect on the time nurses started compressions (t = 3.03, p = 0.005), ventilations (t = 2.86, p = 0.004) and defibrillations (t = 2.00, p = 0.05) when compared to the controls using the standard of care. In performing technical skills, nurses in the intervention groups performed compressions and ventilations significantly better than those in the control groups. The control groups made significantly more total errors (M = 7.55, SD = 1.54) than the intervention groups (M = 5.60, SD = 1.90; t = -2.61, p = 0.013). There were no significant differences in team performance measures between the groups. Study findings indicate that the use of color-labeled smocks during CPR emergencies resulted in shorter times to start emergency CPR, fewer errors, more technical skills completed successfully, and no differences in team performance.
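For readers unfamiliar with the statistics reported above, the following sketch shows an independent-samples t-test of the kind used to compare the two arms' times to first compression. The timing values are invented for illustration and are not the study's data.

```python
# Hypothetical sketch: comparing time-to-first-compression between the
# intervention (color-coded smocks) and control groups with an
# independent-samples t-test on invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control_times      = rng.normal(45, 12, 20)  # seconds to start compressions, 20 control groups
intervention_times = rng.normal(33, 10, 20)  # 20 intervention groups

t_stat, p_value = stats.ttest_ind(control_times, intervention_times)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```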
Abstract:
In this chapter, we assess the recent development and performance of ethical investments around the world. Ethical investments include both socially responsible investments (following Environmental, Social and Governance criteria) and faith-based investments (following religious principles). After presenting the development of each type of fund in a historical context, we analyse their ethical screening processes, highlighting similarities and differences across funds and regions. This leads us to investigate their characteristics in terms of return and risk, and finally to evaluate their historical performance using various risk-adjusted performance measures on a small sample of US funds. Hence we are able not only to compare the funds’ performance with one another and with traditional investments, but also to assess their relative resilience to the 2007-08 financial crisis.
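The chapter does not list its exact measures here; as a hedged illustration, the sketch below computes two widely used risk-adjusted performance measures, the Sharpe ratio and Jensen's alpha, on an invented monthly fund return series with an assumed flat risk-free rate.

```python
# Illustrative sketch of two common risk-adjusted performance measures applied
# to a fund return series. All return data are simulated, not from the chapter.
import numpy as np

rng = np.random.default_rng(2)
market = rng.normal(0.006, 0.04, 120)                      # monthly market returns
fund   = 0.001 + 0.9 * market + rng.normal(0, 0.01, 120)   # monthly fund returns
rf = 0.002                                                 # assumed monthly risk-free rate

excess_fund, excess_mkt = fund - rf, market - rf
sharpe = excess_fund.mean() / excess_fund.std(ddof=1)      # Sharpe ratio (monthly)

beta  = np.cov(excess_fund, excess_mkt, ddof=1)[0, 1] / excess_mkt.var(ddof=1)
alpha = excess_fund.mean() - beta * excess_mkt.mean()      # Jensen's alpha (monthly)

print(f"Sharpe ratio: {sharpe:.3f}, beta: {beta:.2f}, Jensen's alpha: {alpha:.4f}")
```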
Abstract:
We investigate whether Real Estate Investment Trust (REIT) managers actively manipulate performance measures in spite of the strict regulation under the REIT regime. We provide empirical evidence that is consistent with this hypothesis. Specifically, manipulation strategies may rely on the opportunistic use of leverage. However, manipulation does not appear to be uniform across REIT sectors and seems to become more common as the level of competition in the underlying property sector increases. We employ a set of commonly used traditional performance measures and a recently developed manipulation-proof measure (MPPM, Goetzmann, Ingersoll, Spiegel, and Welch (2007)) to evaluate the performance of 147 REITs from seven different property sectors over the period 1991-2009. Our findings suggest that the existing REIT regulation may fail to mitigate a substantial agency conflict and that investors can benefit from evaluating return information carefully in order to avoid potentially manipulative funds.
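The MPPM referenced above has a commonly cited closed form. The sketch below implements that form under assumed inputs (invented monthly REIT returns, a flat risk-free rate, and a risk-aversion parameter rho of 3); it is an illustration of the measure, not a replication of the paper's estimation.

```python
# Sketch of the manipulation-proof performance measure (MPPM) in the commonly
# cited form from Goetzmann et al. (2007), applied to invented monthly returns.
import numpy as np

def mppm(returns, rf, rho=3.0, dt=1/12):
    """Annualized MPPM for per-period fund returns and risk-free rates."""
    ratio = (1.0 + returns) / (1.0 + rf)
    return np.log(np.mean(ratio ** (1.0 - rho))) / ((1.0 - rho) * dt)

rng = np.random.default_rng(3)
reit_returns = rng.normal(0.008, 0.05, 228)   # invented 1991-2009 monthly returns
rf_rates     = np.full(228, 0.003)            # assumed flat monthly risk-free rate

print(f"MPPM (rho = 3): {mppm(reit_returns, rf_rates):.4f}")
```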
Abstract:
Purpose: The aim of this study was to test the effects of sprint interval training (SIT) on cardiorespiratory fitness and aerobic performance measures in young females. Methods: Eight healthy, untrained females (age 21 ± 1 years; height 165 ± 5 cm; body mass 63 ± 6 kg) completed cycling peak oxygen uptake (V̇O2peak), 10-km cycling time trial (TT) and critical power (CP) tests pre- and post-SIT. The SIT protocol included 4 × 30-s “all-out” cycling efforts against 7% body mass interspersed with 4 min of active recovery, performed twice per week for 4 weeks (eight sessions in total). Results: There was no significant difference in V̇O2peak following SIT compared to the control period (control period: 31.7 ± 3.0 ml·kg⁻¹·min⁻¹; post-SIT: 30.9 ± 4.5 ml·kg⁻¹·min⁻¹; p > 0.05), but SIT significantly improved time to exhaustion (TTE) (control period: 710 ± 101 s; post-SIT: 798 ± 127 s; p = 0.00), 10-km cycling TT (control period: 1055 ± 129 s; post-SIT: 997 ± 110 s; p = 0.004) and CP (control period: 1.8 ± 0.3 W·kg⁻¹; post-SIT: 2.3 ± 0.6 W·kg⁻¹; p = 0.01). Conclusions: These results demonstrate that young untrained females are responsive to SIT as measured by TTE, 10-km cycling TT and CP tests. However, eight sessions of SIT over 4 weeks are not enough to provide a sufficient training stimulus to increase V̇O2peak.
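The abstract does not detail the CP test protocol. As background, the sketch below fits the standard linear work-time critical power model (total work = CP × time + W′) to invented exhaustive-trial data; it is illustrative only and does not reproduce the study's testing procedure.

```python
# Sketch of the standard linear work-time critical power model, fitted to
# invented constant-power trials to exhaustion.
import numpy as np

# Assumed trials: duration to exhaustion (s) and sustained power output (W).
t = np.array([150.0, 300.0, 600.0, 900.0])
p = np.array([280.0, 230.0, 200.0, 190.0])
work = p * t  # total work per trial (J)

# Linear fit of work on time: slope = CP (W), intercept = W' (J).
cp, w_prime = np.polyfit(t, work, 1)
print(f"CP ~ {cp:.0f} W, W' ~ {w_prime / 1000:.1f} kJ")
```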
Abstract:
Over the last few years, football has entered a period of accelerated access to large amounts of match analysis data. Social networks have been adopted to reveal the structure and organization of the web of interactions, such as the players’ passing distribution tendencies. In this study we investigated the influence of ball possession characteristics on the competitive success of Spanish La Liga teams. The sample was composed of OPTA passing distribution raw data (n = 269,055 passes) obtained from 380 matches involving all 20 teams of the 2012/2013 season. We then generated 760 adjacency matrices and their corresponding social networks using NodeXL software. For each network we calculated three team performance measures to evaluate ball possession tendencies: graph density, average clustering and passing intensity. Three levels of competitive success were determined using two-step cluster analysis based on two input variables: the total points scored by each team and the ratio of goals scored to goals conceded. Our analyses revealed significant differences between competitive performance levels on all three team performance measures (p < .001). Bottom-ranked teams had fewer connected players (graph density) and fewer triangulations (average clustering) than intermediate and top-ranked teams. However, all three clusters diverged in terms of passing intensity, with top-ranked teams completing more passes per unit of possession time than intermediate and bottom-ranked teams. Finally, similarities and dissimilarities in the signatures of play of the 20 teams were displayed using Cohen’s effect size. In sum, the findings suggest that competitive performance was influenced by the density and connectivity of the teams, mainly due to the way teams use their possession time to give intensity to their game.
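As a hedged illustration of the three team performance measures named above, the following sketch computes graph density, average clustering and passing intensity from a small invented passing network using networkx; the study itself used OPTA data and NodeXL, and the player labels, pass counts and possession time below are assumptions.

```python
# Hypothetical passing network for one team in one match, used only to show
# how the three measures are computed.
import networkx as nx

# Directed passing links: (passer, receiver, number of passes).
passes = [("GK", "CB1", 12), ("CB1", "CB2", 18), ("CB2", "FB", 9),
          ("FB", "MF1", 14), ("MF1", "MF2", 22), ("MF2", "MF1", 19),
          ("MF2", "FW", 7), ("MF1", "FW", 5), ("FW", "MF2", 4)]

G = nx.DiGraph()
G.add_weighted_edges_from(passes)

graph_density = nx.density(G)                               # share of possible passing links used
avg_clustering = nx.average_clustering(G.to_undirected())   # local triangulation
total_passes = sum(w for _, _, w in passes)
possession_time_min = 28.0                                  # assumed possession time (minutes)
passing_intensity = total_passes / possession_time_min      # passes per minute of possession

print(f"density={graph_density:.2f}, clustering={avg_clustering:.2f}, "
      f"intensity={passing_intensity:.1f} passes/min")
```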
Abstract:
The purpose of this study was to establish the optimal allometric models to predict International Ski Federation’s ski-ranking points for sprint competitions (FISsprint) among elite female cross-country skiers based on maximal oxygen uptake (V̇O2max) and lean mass (LM). Ten elite female cross-country skiers (age: 24.5 ± 2.8 years [mean ± SD]) completed a treadmill roller-skiing test to determine V̇O2max (i.e., aerobic power) using the diagonal stride technique, whereas LM (i.e., a surrogate indicator of anaerobic capacity) was determined by dual-emission X-ray anthropometry. The subjects’ FISsprint were used as competitive performance measures. Power function modeling was used to predict the skiers’ FISsprint based on V̇O2max, LM, and body mass. The subjects’ test and performance data were as follows: V̇O2max, 4.0 ± 0.3 L min⁻¹; LM, 48.9 ± 4.4 kg; body mass, 64.0 ± 5.2 kg; and FISsprint, 116.4 ± 59.6 points. The following power function models were established for the prediction of FISsprint: 3.91 × 10^5 ∙ V̇O2max^-6.002 and 6.95 × 10^10 ∙ LM^-5.25; these models explained 66% (P = 0.0043) and 52% (P = 0.019), respectively, of the variance in the FISsprint. Body mass failed to contribute to either model; hence, the models are based on V̇O2max and LM expressed absolutely. The results demonstrate that the physiological variables that reflect aerobic power and anaerobic capacity are important indicators of competitive sprint performance among elite female skiers. To accurately indicate performance capability among elite female skiers, the presented power function models should be used. Skiers whose V̇O2max differs by 1% will differ in their FISsprint by 5.8%, whereas the corresponding 1% difference in LM is related to an FISsprint difference of 5.1%, where both differences are in favor of the skier with the higher V̇O2max or LM. It is recommended that coaches use the absolute expression of these variables to monitor skiers’ performance-related training adaptations linked to changes in aerobic power and anaerobic capacity.
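For concreteness, the two reported power-function models can be evaluated directly. The snippet below applies them to the group-mean values given in the abstract (V̇O2max = 4.0 L min⁻¹, LM = 48.9 kg) purely to illustrate how predictions are obtained; the coefficients are those stated above.

```python
# The two reported power-function models for FISsprint (lower points = better),
# evaluated at the abstract's group-mean inputs for illustration.
def fis_from_vo2max(vo2max_l_min):
    """FISsprint predicted from absolute maximal oxygen uptake (L/min)."""
    return 3.91e5 * vo2max_l_min ** -6.002

def fis_from_lean_mass(lm_kg):
    """FISsprint predicted from lean mass (kg)."""
    return 6.95e10 * lm_kg ** -5.25

print(f"Predicted FISsprint from VO2max: {fis_from_vo2max(4.0):.1f} points")
print(f"Predicted FISsprint from lean mass: {fis_from_lean_mass(48.9):.1f} points")
```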
Abstract:
The literature on corporate identity management suggests that managing corporate identity is a strategically complex task embracing the shaping of a range of dimensions of organisational life. The performance measurement literature and its applications likewise now also emphasise organisational ability to incorporate various dimensions considering both financial and non-financial performance measures when assessing success. The inclusion of these soft non-financial measures challenges organisations to quantify intangible aspects of performance such as corporate identity, transforming unmeasurables into measurables. This paper explores the regulatory roles of the use of the balanced scorecard in shaping key dimensions of corporate identities in a public sector shared service provider in Australia. This case study employs qualitative interviews of senior managers and employees, secondary data and participant observation. The findings suggest that the use of the balanced scorecard has potential to support identity construction, as an organisational symbol, a communication tool of vision, and as strategy, through creating conversations that self-regulate behaviour. The development of an integrated performance measurement system, the balanced scorecard, becomes an expression of a desired corporate identity, and the performance measures and continuous process provide the resource for interpreting actual corporate identities. Through this process of understanding and mobilising the interaction, it may be possible to create a less obtrusive and more subtle way to control “what an organisation is”. This case study also suggests that the theoretical and practical fusion of the disciplinary knowledge around corporate identities and performance measurement systems could make a contribution to understanding and shaping corporate identities.
Abstract:
This document outlines a framework that could be used by government agencies in assessing policy interventions aimed at achieving social outcomes from government construction contracts. The framework represents a rational interpretation of the information gathered during the multi-outcomes construction policies project. The multi-outcomes project focused on the costs and benefits of using public construction contracts to promote the achievement of training and employment, and public art, objectives. The origin of the policy framework in a cost-benefit appraisal of current policy interventions is evidenced by its emphasis on sensitivity to policy commitment and project circumstances (especially project size and scope). The quantitative and qualitative analysis conducted in the multi-outcomes project highlighted, first, that in the absence of strong industry commitment to policy objectives, policy interventions typically result in high levels of avoidance activity, substantial administrative costs and very few benefits. Thus, for policy action on, for example, training or local employment to be successful, compliance issues must be adequately addressed. Currently it appears that pre-qualification schemes (similar to the Priority Access Scheme) and schemes that rely on measuring, for example, the training investments of contractors within particular projects do not achieve high levels of compliance and involve significant administrative costs. Thus, an alternative is suggested in the policy framework developed here: a levy on each public construction project – set as a proportion of the total project costs. Although a full evaluation of this policy alternative was beyond the scope of the multi-outcomes construction policies project, it appears to offer the potential to minimize the transaction costs on contractors whilst enabling the creation of a training agency dedicated to improving the supply of skilled construction labour. A recommendation is thus made that this policy alternative be fully researched and evaluated. As noted above, the outcomes of the multi-outcomes research project also highlighted the need for sensitivity to project circumstances in the development and implementation of policies for public construction projects. Ideally, a policy framework would have the flexibility to respond to circumstances where contractors share a commitment to the policy objectives and are able to identify measurable social outcomes from the particular government projects they are involved in. This would involve a project-by-project negotiation of goals and performance measures, which is likely to be practical only for large, longer-term projects.
Abstract:
In Australia, an average of 49 building and construction workers have been killed at work each year since 1997-98. Building/construction workers are more than twice as likely to be killed at work as the average worker across all Australian industries. The ‘Safer Construction’ project, funded by the CRC-Construction Innovation and led by a task force comprising representatives of construction clients, designers and constructors, developed a Guide to Best Practice for Safer Construction. The Guide, which was informed by research undertaken at RMIT University, Queensland University of Technology and Curtin University, establishes broad principles for the improvement of safety in the industry and provides a ‘roadmap’ for improvement based upon the lifecycle stages of a building/construction project. Within each project stage, best practices for the management of safety are identified. Each best practice is defined in terms of the recommended action, its key benefits, desirable outcomes, performance measures and leadership. ‘Safer Construction’ practices are identified from the planning to commissioning stages of a project. The ‘Safer Construction’ project represents the first time that key stakeholder groups in the Australian building/construction industry have worked together to articulate best practice and establish an appropriate basis for allocating (and sharing) responsibility for project safety performance.
Abstract:
Understanding users' capabilities, needs and expectations is key to the domain of Inclusive Design. Much of the work in the field could be informed and further strengthened by clear, valid and representative data covering the full range of people's capabilities. This article reviews existing data sets and identifies the challenges inherent in measuring capability in a manner that is informative for work in Inclusive Design. The need for a design-relevant capability data set is identified and consideration is given to a variety of capability construct operationalisation issues including questions associated with self-report and performance measures, sampling and the appropriate granularity of measures. The need for further experimental work is identified and a programme of research designed to culminate in the design of a valid and reliable capability survey is described.