23 results for Internal process-level performance
Abstract:
Many restaurant organizations have committed substantial effort to studying the relationship between a firm's performance and its efforts to develop an effective human resources management reward-and-retention system. These studies have produced various metrics for determining the efficacy of restaurant management and human resources management systems. This paper explores the best metrics to use when calculating the overall unit performance of casual restaurant managers. These metrics were identified through an exploratory qualitative case study method that included interviews with executives and a Delphi study. Experts proposed several diverse metrics for measuring management value and performance. These factors appear to represent all stakeholders' interests.
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience, while administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment.
Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, the Artificial Neural Network and the Support Vector Machine, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue of a data center.
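The revenue-driven allocation idea can be illustrated with a minimal greedy sketch. The VM names, revenue curves, and the marginal-revenue heuristic below are invented placeholders, not the thesis's actual algorithm: each unit of a resource simply goes to the VM whose SLA revenue would increase the most.

```python
# Hypothetical sketch of revenue-driven resource allocation:
# greedily give each CPU share to the VM with the highest
# marginal SLA revenue. Revenue curves here are toy stand-ins
# for the learned performance/SLA models.

def allocate(revenue_fns, total_units):
    """revenue_fns: {vm: f(units) -> revenue}. Returns {vm: units}."""
    alloc = {vm: 0 for vm in revenue_fns}
    for _ in range(total_units):
        # Marginal revenue of granting one more unit to each VM.
        gains = {vm: f(alloc[vm] + 1) - f(alloc[vm])
                 for vm, f in revenue_fns.items()}
        best = max(gains, key=gains.get)
        alloc[best] += 1
    return alloc

# Toy concave revenue curves (diminishing returns per SLA tier).
curves = {
    "vm_a": lambda u: 10 * u - 0.5 * u * u,
    "vm_b": lambda u: 6 * u,
}
print(allocate(curves, 10))  # → {'vm_a': 4, 'vm_b': 6}
```

With concave (diminishing-returns) curves like these, the greedy marginal choice is revenue-optimal; vm_a stops receiving units once its marginal revenue drops below vm_b's flat rate of 6.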
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit this potential performance, especially as energy consumption and chip area become two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or frequently repeated functions. The performance of SoC systems can then be improved if hardware acceleration is applied to the elements that incur performance overheads. The concepts in this study can be readily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture.
(3) System specifications such as performance, energy consumption, and resource costs are measured and analyzed, and the trade-offs among these three factors are compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
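The hotspot-identification step can be sketched in software with a standard profiler. The workload below is a toy stand-in, not the actual H.264 CODEC: `transform` plays the compute kernel and `encode` the driver, and the profiler's per-function self time (the analogue of "cycles per loop" attributes) singles out the acceleration candidate.

```python
import cProfile
import pstats

# Hypothetical stand-in workload (not the H.264 core itself).
def transform(n):
    # Heavy inner kernel: the intended hotspot.
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

def encode(frames, n=200):
    # Driver that repeatedly invokes the kernel.
    return [transform(n) for _ in range(frames)]

prof = cProfile.Profile()
prof.enable()
encode(50)
prof.disable()

stats = pstats.Stats(prof).stats  # {(file, line, name): (cc, nc, tt, ct, ...)}
# Rank by tottime (index 2): time spent inside the function itself,
# excluding callees, so the driver does not mask the kernel.
hotspot = max(
    (fn for fn in stats if fn[2] in ("transform", "encode")),
    key=lambda fn: stats[fn][2],
)
print(hotspot[2])  # prints "transform": the candidate to offload
```

Ranking by self time rather than cumulative time matters here: by cumulative time the driver `encode` would dominate, since it includes all of its callees' work.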
Abstract:
In this dissertation, I present an integrated model of organizational performance. Most prior research has relied extensively on testing individual linkages, often with cross-sectional data. In this dissertation, longitudinal unit-level data from 559 restaurants, collected over a one-year period, were used to test the proposed model. The model was hypothesized to begin with employee satisfaction as a key antecedent that would ultimately lead to improved financial performance. Several variables including turnover, efficiency, and guest satisfaction are proposed as mediators of the satisfaction-performance relationship. The current findings replicate and extend past research using individual-level data. The overall model adequately explained the data, but was significantly improved with an additional link from employee satisfaction to efficiency, which was not originally hypothesized. Management turnover was a strong predictor of hourly level team turnover, and both were significant predictors of efficiency. Full findings for each hypothesis are presented and practical organizational implications are given. Limitations and recommendations for future research are provided.
Abstract:
This sequential explanatory, mixed-methods study examines the role teachers should play in the development of the teacher evaluation system in Louisiana. These insights will help ensure that teachers act as catalysts in the classroom to significantly increase student achievement, and will allow policymakers, practitioners, and instructional leaders to act as informed decision makers.
Abstract:
Increasing parental involvement was made an important goal for all Florida schools in educational reform legislation in the 1990s. A forum for this input was established and became known as the School Advisory Council (SAC). To demonstrate the importance of process and inclusion, a south Florida school district and its local teachers' union agreed on the following five goals for SACs: (a) to foster an environment of professional collaboration among all stakeholders, (b) to assist in the preparation and evaluation of the school improvement plan, (c) to address all state and district goals, (d) to serve as the avenue for authentic and representative input from all stakeholders, and (e) to ensure the continued existence of the consensus-building process on all issues related to the school's instructional program. The purpose of this study was to determine to what extent and in what ways the parent members of one south Florida middle school's SAC achieved the five district goals during its first three years of implementation. The primary participants were 16 parents who served as members of the SAC, while 16 non-parent members provided perspective on parent involvement as "outside sources." Because the study was qualitative by design, factors such as school climate, leadership styles, and the quality of parental input were described from data collected from four sources: parent interviews, a questionnaire of non-parents, researcher observations, and relevant documents. A cross-case analysis of all data informed a process evaluation that described the similarities and differences of intended and observed outcomes of parent involvement from each source using Stake's descriptive matrix model. A formative evaluation of the process compared the observed outcomes with standards set for successful SACs, such as the district's five goals. The findings indicated that parents elected to the SACs did not meet the intended goals set by the state and district.
The school leadership did not foster an environment of professional collaboration and authentic decision-making for parents and other stakeholders. The overall process did not include consensus-building, and there was little if any input by parents on school improvement and other important issues relating to the instructional program. Only two parents gave the SAC a successful rating for involving parents in the decision-making process. Although compliance was met in many of the procedural transactions of the SAC, the reactions of parents to their perceived role and influence often reflected feelings of powerlessness and frustration with a process that many thought lacked meaningfulness and productivity. Two conclusions made from this study are as follows: (a) that the role of the principal in the collaborative process is pivotal, and (b) that the normative-re-educative approach to change would be most appropriate for SACs.