911 results for PERFORMANCE WORK SYSTEMS


Relevance: 40.00%

Abstract:

The current U.S. health care system faces numerous environmental challenges. To compete and survive, health care organizations are developing strategies to lower costs and increase efficiency and quality. All of these strategies require rapid and precise decision making by top-level managers. The purpose of this study is to determine the relationship between the environment, characterized by unfavorable market conditions and limited resources, and the work roles of top-level managers, specifically in the setting of academic medical centers. Managerial work roles are based on the ten work roles developed by Henry Mintzberg in his book The Nature of Managerial Work (1973).

This research used an integrated conceptual framework combining systems theory with role, attribution, and contingency theories to show that the four most frequently performed of Mintzberg's work roles are affected by the two environmental dimensions. The study sample consisted of 108 chief executive officers in academic medical centers throughout the United States. The methods included qualitative components, in the form of key informants and case studies, and a quantitative component, in the form of a survey questionnaire. The analysis involved descriptive statistics, reliability tests, correlation, principal component analysis, and multivariate analyses.

Results indicated that under the market condition of increased revenue based on capitation, the work roles increased. In addition, under the environmental dimension of limited resources, the work roles increased when uncompensated care increased while Medicare and non-government funding decreased.

Based on these results, a typology of health care managers in academic medical centers was created. Managers could be typed as strategy-formulators, relationship-builders, or task delegators. Managers who ascertain their type can use this knowledge to build on their strengths and address their weaknesses, and organizations can use the typology to match managers' roles and responsibilities to their specific needs. Consequently, this research is a valuable tool for understanding the health care managerial behaviors that lead to improved decision making; in turn, this can enhance satisfaction and performance and enable organizations to gain a competitive edge.
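The analysis pipeline the abstract describes (descriptive statistics, reliability, correlation, principal components) can be illustrated with a minimal sketch. The simulated ratings and the mapping of components to the three manager types below are hypothetical stand-ins, not the study's actual items or loadings.

```python
# Minimal sketch of the analysis pipeline: rate survey items on Mintzberg
# role frequency, standardize, and extract principal components.
# Ratings and the 3-component typology mapping are hypothetical.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 108 CEOs x 10 Mintzberg work roles, rated 1-5 for frequency (simulated)
ratings = rng.integers(1, 6, size=(108, 10)).astype(float)

scores = StandardScaler().fit_transform(ratings)
pca = PCA(n_components=3)            # one component per hypothesized type
components = pca.fit_transform(scores)

# Assign each manager to the component with the highest score, mirroring the
# strategy-formulator / relationship-builder / task-delegator typology.
types = ["strategy-formulator", "relationship-builder", "task-delegator"]
assignments = [types[i] for i in components.argmax(axis=1)]
print(pca.explained_variance_ratio_, assignments[:5])
```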

Relevance: 40.00%

Abstract:

Today, databases have become an integral part of information systems. Over the past two decades, different database systems have been developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches; however, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access to heterogeneous, autonomous, distributed information sources.

This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, of identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for resolving semantic heterogeneity by investigating the extents of the meta-data constructs of component schemas, shown to be correct, complete, and unambiguous; (iii) a semi-automated technique for identifying semantic relations, the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, which acts as the interface between the integration and query-processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of this work.
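Contribution (iii), semi-automated identification of semantic relations through a shared ontology, can be sketched roughly as follows. The ontology contents and the matching rule are illustrative assumptions, not the thesis's actual algorithm.

```python
# Hedged sketch: map attributes from two component schemas to concepts in a
# shared ontology, then flag attribute pairs mapping to the same concept as
# candidate semantic relations. Ontology and schemas are invented examples.
ONTOLOGY = {
    "emp_name": "PersonName", "worker": "PersonName",
    "salary": "Compensation", "wage": "Compensation",
    "dept": "OrgUnit", "division": "OrgUnit",
}

def candidate_relations(schema_a, schema_b, ontology=ONTOLOGY):
    """Return attribute pairs whose ontology concepts coincide."""
    pairs = []
    for a in schema_a:
        for b in schema_b:
            ca, cb = ontology.get(a), ontology.get(b)
            if ca is not None and ca == cb:
                pairs.append((a, b, ca))
    return pairs

print(candidate_relations(["emp_name", "salary"], ["worker", "wage", "division"]))
# [('emp_name', 'worker', 'PersonName'), ('salary', 'wage', 'Compensation')]
```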

Relevance: 40.00%

Abstract:

A major challenge of modern teams lies in coordinating the efforts not just of individuals within a team, but also of teams whose efforts are ultimately entwined with those of other teams. Despite this, much of the research on work teams fails to consider the external dependencies that exist in organizational teams and instead focuses on internal, within-team processes. Multi-Team Systems Theory is used as a theoretical framework for understanding teams-of-teams organizational forms (multi-team systems, or MTSs), and leadership teams are proposed as one remedy that enables MTS members to dedicate needed resources to intra-team activities while ensuring effective synchronization of between-team activities. Two functions of leader teams were identified, strategy development and coordination facilitation, and a model was developed delineating the effects of the two leader roles on multi-team cognitions, processes, and performance.

Three hundred eighty-four undergraduate psychology and business students participated in a laboratory simulation that modeled an MTS; each MTS comprised three two-member teams, each performing distinct but interdependent components of an F-22 battle simulation task. Two roles of leader teams supported in the literature were manipulated through training in a 2 (strategy training vs. control) x 2 (coordination training vs. control) design. Multivariate analysis of variance (MANOVA) and mediated regression analysis were used to test the study's hypotheses.

Results indicate that both training manipulations produced differences in the effectiveness of the intended form of leader behavior. The enhanced leader strategy training resulted in more accurate (but not more similar) MTS mental models, better inter-team coordination, and higher levels of multi-team (but not component-team) performance. Moreover, mental model accuracy fully mediated the relationship between leader strategy and inter-team coordination, and inter-team coordination fully mediated the effect of leader strategy on multi-team performance. Leader coordination training led to better inter-team coordination, but not to higher levels of either team or multi-team performance. Mediated Input-Process-Output (I-P-O) relationships were not supported for leader coordination; rather, leader coordination facilitation and inter-team coordination contributed uniquely to component-team and multi-team performance. The implications of these findings and future research directions are also discussed.
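The mediation tests reported here follow the familiar three-regression logic (X on Y, X on M, then X plus M on Y). A minimal sketch with simulated data is shown below using statsmodels; the variable names and effect sizes are placeholders, not the study's data.

```python
# Sketch of the mediated-regression test: does inter-team coordination (M)
# mediate the effect of leader strategy training (X) on MTS performance (Y)?
# Data are simulated; the logic follows the classic three-regression steps.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 384
x = rng.integers(0, 2, n).astype(float)       # strategy training (0/1)
m = 0.5 * x + rng.normal(size=n)              # inter-team coordination
y = 0.6 * m + rng.normal(size=n)              # multi-team performance

step1 = sm.OLS(y, sm.add_constant(x)).fit()                        # X -> Y
step2 = sm.OLS(m, sm.add_constant(x)).fit()                        # X -> M
step3 = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()  # X + M -> Y

# Full mediation is suggested when X predicts Y alone (step 1) but its
# coefficient drops toward 0 once M enters the model (step 3).
print(step1.params, step3.params)
```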

Relevance: 40.00%

Abstract:

This is the first work to use patterned soft underlayers in multilevel three-dimensional vertical magnetic data storage systems. The motivation stems from an exponentially growing information stockpile and a corresponding need for more efficient, higher-density storage devices. The world's information stockpile currently exceeds 150 EB (exabyte = 1x10^18 bytes), most of it in analog form. Among the storage technologies (semiconductor, optical, and magnetic), magnetic hard disk drives are poised to occupy a major role in personal, network, and corporate storage. However, this mode faces the superparamagnetic limit, which caps achievable areal density due to fundamental quantum-mechanical stability requirements. Many viable techniques are being considered to defer superparamagnetism into the hundreds of Gbit/in^2, such as patterned media, Heat-Assisted Magnetic Recording (HAMR), Self-Organized Magnetic Arrays (SOMA), antiferromagnetically coupled (AFC) structures, and perpendicular magnetic recording. Nonetheless, these techniques utilize a single magnetic layer and can thus be viewed as two-dimensional in nature. In this work a novel three-dimensional vertical magnetic recording approach is proposed. This approach utilizes the entire thickness of a magnetic multilayer structure to store information, with potential areal density well into the Tbit/in^2 regime.

There are several possible implementations of 3D magnetic recording, each presenting its own set of requirements, merits, and challenges. The issues and considerations pertaining to the development of such systems are examined and analyzed using empirical and numerical analysis techniques. Two novel key approaches are proposed and developed: (1) a patterned soft underlayer (SUL), which allows for enhanced recording on thicker media; and (2) a combinatorial approach to 3D media development that facilitates concurrent investigation of various film parameters against a predefined performance metric. A case study is presented using combinatorial overcoats of tantalum and zirconium oxides for corrosion protection in magnetic media.

The feasibility of 3D recording is demonstrated, and 3D media development is emphasized as a key prerequisite. The patterned SUL shows significant enhancement over a conventional un-patterned SUL, demonstrating that geometry can be used as a design tool to achieve favorable field distributions wherever magnetic storage and magnetic phenomena are involved.
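As a back-of-the-envelope illustration of the multilevel claim (not a figure from the dissertation): if each magnetic level sustains a per-layer density typical of single-layer media, stacking N levels scales capacity roughly linearly. Both numbers below are illustrative assumptions.

```python
# Rough arithmetic for multilevel 3D recording: effective areal density is
# approximately the per-layer density times the number of stacked levels.
# The figures below are illustrative assumptions, not measured values.
per_layer_gbit_per_in2 = 300        # assumed per-level density, Gbit/in^2
levels = 4                          # assumed number of magnetic levels

effective = per_layer_gbit_per_in2 * levels
print(f"{effective} Gbit/in^2 = {effective / 1000} Tbit/in^2")
# 1200 Gbit/in^2 = 1.2 Tbit/in^2 -> into the Tbit/in^2 regime
```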

Relevance: 40.00%

Abstract:

This research aimed to develop a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. The study views an enterprise as a system that creates value for its customers; accordingly, the framework was developed using systems theory and IDEF methodologies. ESE is defined as an engineering discipline that develops and applies systems theory and engineering techniques to the specification, analysis, design, and implementation of an enterprise over its life cycle. The proposed ESE classification scheme breaks an enterprise system down into four elements: work, resources, decision, and information. Each enterprise element is specified along four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation, to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. The framework is intended to identify research voids in the ESE discipline and to help apply engineering and systems tools to this emerging field; it captures the relationships among various enterprise aspects and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic. It consists of a hierarchy of engineering activities presented in an IDEF0 model, each activity defined with its inputs, outputs, constraints, and mechanisms. The output of an ESE effort can be a partial or complete enterprise system design covering its physical, managerial, and/or informational layers. The proposed ESE process is applicable to a new enterprise system design or to an engineering change in an existing system. The long-term goal of this study is the development of a scientific foundation for ESE research and development.
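The classification scheme (four elements crossed with four facets, each combination passing through the same four-step engineering process) lends itself to a simple grid; the sketch below only enumerates the structure stated in the abstract, nothing more.

```python
# The ESE classification scheme as a grid: 4 enterprise elements x 4 system
# facets, each combination subject to the same 4-step engineering process.
from itertools import product

ELEMENTS = ["work", "resources", "decision", "information"]
FACETS = ["strategy", "competency", "capacity", "structure"]
PROCESS = ["specification", "analysis", "design", "implementation"]

scheme = {(e, f): list(PROCESS) for e, f in product(ELEMENTS, FACETS)}
print(len(scheme))                      # 16 element-facet combinations
print(scheme[("decision", "capacity")]) # engineering steps for one cell
```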

Relevance: 40.00%

Abstract:

The purpose of this study was to determine whether the self-determined evaluations of work performance and support needs by adults with mental retardation differ between supported employment and sheltered workshop environments. The instrument, the Job Observation and Behavior Scale: Opportunity for Self-Determination (JOBS: OSD; Brady, Rosenberg, & Frain, 2006), was administered to 38 adults with mental retardation from sheltered workshops and 32 adults with mental retardation from supported employment environments. Cross-tabulations with chi-square tests and independent-samples t-tests were conducted to evaluate differences between the two groups. Two multivariate analyses of variance (MANOVAs) were conducted to determine the effect of work environment on Quality of Performance (QP) and Types of Support (TS) scores and their subscales.

The study found significant differences between the groups on the QP Behavior and Job Duties subscales. The sheltered workshop group perceived themselves as performing significantly better on job duties than the supported work group; conversely, the supported work group perceived themselves as having better behavior than the sheltered workshop group. There were no significant differences between the groups in their perception of support needs on the three subscales.

The findings imply that work environment affects the self-determined evaluations of work performance by adults with mental retardation. Recommendations for further study include (a) detailing the characteristics of supported work and sheltered workshops that support and/or discourage self-determined behaviors, (b) exploring the behavior of adults with mental retardation in sheltered workshops and supported work environments, and (c) analyzing the support needs of adults with mental retardation in sheltered workshops and supported work environments, and their understanding of those needs.
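A minimal sketch of the between-group comparison (an independent-samples t-test on one subscale score) is shown below with simulated ratings; the study's actual JOBS: OSD data are of course not reproduced here.

```python
# Sketch of the between-group comparison: independent-samples t-test on a
# JOBS: OSD subscale score. Scores are simulated, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sheltered = rng.normal(3.4, 0.6, 38)   # n=38 sheltered-workshop ratings
supported = rng.normal(3.1, 0.6, 32)   # n=32 supported-employment ratings

t, p = stats.ttest_ind(sheltered, supported)
print(f"t = {t:.2f}, p = {p:.3f}")
```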

Relevance: 40.00%

Abstract:

The Unified Modeling Language (UML) has quickly become the industry standard for object-oriented software development and is widely used in organizations and institutions around the world. However, UML is often found to be too complex for novice systems analysts. Although prior research has identified difficulties novice analysts encounter in learning UML, no viable solution has been proposed to address them. Sequence-diagram modeling, in particular, has largely been overlooked. The sequence diagram models the behavioral aspects of an object-oriented software system in terms of interactions among its building blocks, i.e., objects and classes, and is one of the most commonly used UML diagrams in practice. Yet there has been little research on sequence-diagram modeling, and the current literature scarcely provides effective guidelines for developing a sequence diagram. Such guidelines would greatly benefit novice analysts who, unlike experienced systems analysts, do not possess the prior experience needed to easily learn sequence-diagram modeling. An effective sequence-diagram modeling technique for novices is needed. This dissertation reports a research study that identified novice difficulties in modeling a sequence diagram and proposed a technique called CHOP (CHunking, Ordering, Patterning), designed to reduce cognitive load by addressing the cognitive complexity of sequence-diagram modeling. The CHOP technique was evaluated in a controlled experiment against a technique recommended in a well-known textbook, which was found to be representative of the approaches provided in many textbooks and in the practitioner literature. The results indicated that novice analysts performed better using the CHOP technique, an outcome that seems to have been enabled by the pattern-based heuristics the technique provides. Novice analysts also rated the CHOP technique as more useful, although not significantly easier to use, than the control technique. The study established that CHOP is an effective sequence-diagram modeling technique for novice analysts.
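The three CHOP steps can be illustrated on a toy message list; the grouping and pattern rules below are invented placeholders to convey the idea, not the dissertation's actual heuristics.

```python
# Toy illustration of CHOP (CHunking, Ordering, Patterning) applied to the
# messages of a sequence diagram. The chunk mapping and pattern rule here
# are invented placeholders, not the dissertation's actual heuristics.
messages = [
    ("Customer", "UI", "placeOrder"), ("UI", "OrderCtrl", "createOrder"),
    ("OrderCtrl", "Inventory", "reserve"), ("OrderCtrl", "Payment", "charge"),
]

# CHunking: group messages by the use-case step they realize (assumed mapping)
chunks = {"capture": messages[:2], "fulfil": messages[2:]}

# Ordering: lay the chunks out in the temporal order of the scenario
ordered = [msg for step in ("capture", "fulfil") for msg in chunks[step]]

# Patterning: recognize a recurring controller-delegates-to-service pattern
pattern_hits = [m for m in ordered if m[0] == "OrderCtrl"]
print(ordered, pattern_hits, sep="\n")
```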

Relevance: 40.00%

Abstract:

Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated on parallel lines; an example of this configuration is a manufacturing facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial and various parallel-processing configurations with multiple product classes, and job circulation due to random part failures. In addition, correction terms obtained via regression analysis were added to the approximations to minimize the error between the analytical approximations and the simulation models. Markovian and general-type manufacturing systems were studied, with multiple product classes, job circulation due to failures, and fork-and-join structures to model parallel processing. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances; however, the flow-time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to handle more than two products. Numerical comparisons showed that the approximations perform remarkably well when the correction factors are used: the average flow-time error was reduced from 38.19% to 5.59% in the Markovian case and from 26.39% to 7.23% in the general case. All the equations in the analytical formulations were implemented as a set of Matlab scripts. Using this set, operations managers of web server assembly lines, or of manufacturing and service systems with similar characteristics, can estimate system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation.
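The flavor of the approximation-plus-correction idea can be sketched as follows: start from a standard queueing flow-time estimate and adjust it with a regression-fitted correction term. The M/M/1 base formula is textbook-standard; the correction coefficients below are placeholders, not the dissertation's fitted values (which were developed in Matlab, not Python).

```python
# Sketch of the approximation-plus-correction idea: a standard M/M/1
# flow-time estimate, adjusted by a regression-fitted correction term in
# the number of product classes and the traffic intensity.
def flow_time_mm1(arrival_rate, service_rate):
    """Mean flow time W = 1 / (mu - lambda) for a stable M/M/1 station."""
    assert arrival_rate < service_rate, "station must be stable"
    return 1.0 / (service_rate - arrival_rate)

def corrected_flow_time(arrival_rate, service_rate, n_products,
                        b0=0.0, b1=0.05, b2=0.10):
    """Base estimate times a linear correction in #products and utilization.
    Coefficients b0..b2 are illustrative placeholders."""
    rho = arrival_rate / service_rate
    correction = b0 + b1 * n_products + b2 * rho
    return flow_time_mm1(arrival_rate, service_rate) * (1 + correction)

print(corrected_flow_time(arrival_rate=0.8, service_rate=1.0, n_products=3))
```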

Relevance: 40.00%

Abstract:

This dissertation develops a process improvement method for service operations based on the Theory of Constraints (TOC), a management philosophy that has been shown to be effective in manufacturing for decreasing work-in-process (WIP) and improving throughput. While TOC has enjoyed much attention and success in the manufacturing arena, its application to services has been limited. The contribution to industry and knowledge is a method for improving global performance measures based on TOC principles, tested using discrete-event simulation of a service factory: airline turnaround operations. To evaluate the method, a simulation model of the aircraft turn operations of a U.S.-based carrier was built and validated using actual data from airline operations. The model was then adjusted to reflect an application of the Theory of Constraints to the deployment of the scarce resource of ramp workers. The results indicate that, given slight modifications to TOC terminology and the development of a method for constraint identification, the Theory of Constraints can be applied successfully to services. Bottlenecks in services must be defined as those processes for which the process rate and the amount of work remaining are such that completing the process will not be possible without an increase in the process rate; the bottleneck ratio is used to determine the degree to which a process is a constraint. Simulation results also suggest that redefining performance measures to reflect a global business perspective, reducing costs related to specific flights rather than pursuing the operational local optimum of turning all aircraft quickly, yields significant savings to the company. Savings to the airline's annual operating costs were simulated to equal 30% of possible current expenses for misconnecting passengers, with a modest increase in worker utilization achieved through a more efficient heuristic of deploying workers to the highest-priority tasks. This dissertation contributes to the literature on service operations by describing a dynamic, adaptive dispatch approach for managing service-factory operations similar to airline turnaround operations using the management philosophy of the Theory of Constraints.
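One way to read the bottleneck definition above is as a ratio of the work remaining to the work completable at the current rate in the time left; the formula below is an interpretation consistent with the abstract, not the dissertation's exact expression.

```python
# Interpretation of the bottleneck ratio: work remaining relative to what
# the process can complete at its current rate before the deadline (e.g.,
# scheduled departure). A ratio above 1 marks the process as a constraint.
# This formula is a reading of the abstract, not the exact expression used.
def bottleneck_ratio(work_remaining, process_rate, time_remaining):
    achievable = process_rate * time_remaining
    return work_remaining / achievable

# Ramp crew loads 10 bags/min, 35 min to departure, 420 bags remaining:
r = bottleneck_ratio(work_remaining=420, process_rate=10, time_remaining=35)
print(f"ratio = {r:.2f} -> {'constraint' if r > 1 else 'not a constraint'}")
```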

Relevance: 40.00%

Abstract:

This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, they allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that abstract away infrastructure-related issues can be built on top of such infrastructures; for example, a medical imaging service can allow medical professionals to process their data in the cloud, relieving them of the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads in virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; (2) a performance prediction methodology applicable to the target environment; and (3) a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In addressing these issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution-time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, tested with medical image-processing workloads, is compared to two baseline scheduling solutions and outperforms them in terms of both the number of jobs processed and resource utilization by 20-30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
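The 15% figure refers to average relative prediction error. Below is a sketch of that metric plus one plausible use of predicted runtimes, ordering a queue shortest-predicted-job-first; both the numbers and the policy choice are illustrative assumptions, not necessarily the dissertation's algorithm.

```python
# Sketch: mean relative error of execution-time predictions, and a simple
# scheduler that orders queued jobs by predicted runtime. The shortest-
# predicted-job-first policy is an illustrative assumption.
def mean_relative_error(predicted, actual):
    return sum(abs(p - a) / a for p, a in zip(predicted, actual)) / len(actual)

predicted = [110, 55, 240]          # seconds, model output (invented)
actual = [100, 60, 200]             # seconds, observed runtimes (invented)
print(f"avg error = {mean_relative_error(predicted, actual):.0%}")

jobs = [("segmentation", 240), ("registration", 55), ("denoise", 110)]
schedule = sorted(jobs, key=lambda job: job[1])   # shortest predicted first
print([name for name, _ in schedule])
```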

Relevance: 40.00%

Abstract:

Over the past few decades, we have enjoyed tremendous benefits from the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly sophisticated processor architectures. However, the exponentially increased transistor density has led directly to exponentially increased power consumption and dramatically elevated system temperatures, which not only adversely impact a system's cost, performance, and reliability, but also increase leakage and thus overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all levels of design abstraction, from the circuit level and logic level to the architectural and system levels.

In this dissertation, we present our research on employing real-time scheduling techniques to solve resource-constrained, power/thermal-aware design-optimization problems. We developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization.

The novelty of this work is that it integrates cutting-edge circuit- and architecture-level research on power and thermal behavior into a set of accurate yet simplified system-level models, enabling system-level analysis and design based on those models. The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
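A common system-level abstraction consistent with this description is a lumped-RC thermal model coupled with temperature- and voltage-dependent leakage. The sketch below uses that standard form with invented constants; it is not the dissertation's specific model.

```python
# Sketch of a lumped-RC system-level thermal model with temperature- and
# voltage-dependent leakage, the kind of simplified abstraction described:
#   C * dT/dt = P_dyn + P_leak(T, V) - (T - T_amb) / R
# All constants are invented for illustration.
T_AMB, R, C = 45.0, 0.8, 30.0            # degC, degC/W, J/degC

def p_leak(temp_c, vdd):
    """Leakage grows with temperature and supply voltage (linearized)."""
    return 2.0 + 0.05 * (temp_c - T_AMB) + 4.0 * (vdd - 1.0)

def step_temperature(temp_c, p_dyn, vdd, dt=0.1):
    """Advance the die temperature by one explicit-Euler time step."""
    p_total = p_dyn + p_leak(temp_c, vdd)
    return temp_c + dt * (p_total - (temp_c - T_AMB) / R) / C

t = T_AMB
for _ in range(3000):                    # simulate 300 s at full load
    t = step_temperature(t, p_dyn=20.0, vdd=1.1)
print(f"steady-state temperature ~ {t:.1f} degC")
```

Note the feedback the abstract highlights: because leakage rises with temperature, total power and temperature reinforce each other until the RC term balances them at steady state.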

Relevance: 40.00%

Abstract:

Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade, and energy production is currently one of the primary sources of pollution. These concerns are becoming more pressing in data centers: as more computational power is required to serve hundreds of millions of users, bigger data centers become necessary, resulting in higher electrical energy consumption. Of all the energy used in data centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data centers but also in mobile devices, where battery-based energy is a scarce resource; reducing the energy consumption of these devices allows them to last longer and recharge less frequently. Saving energy in computer systems is a challenging problem, because improving a system's energy efficiency usually comes at the cost of compromises in other areas, such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems; second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy; finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
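The spin-down trade-off mentioned above is usually reasoned about with a break-even time: spinning down only pays off when the idle period is long enough that the energy saved exceeds the cost of spinning back up. The power and energy figures below are illustrative, not measurements from the thesis.

```python
# Sketch of the disk spin-down trade-off: spinning down saves energy only
# when the idle period exceeds the break-even time, i.e. when the energy
# saved while spun down outweighs the cost of spinning back up.
# Power/energy figures are illustrative assumptions, not measured values.
P_IDLE = 5.0        # W, disk spinning but idle
P_STANDBY = 0.8     # W, disk spun down
E_SPINUP = 60.0     # J, extra energy to spin the platters back up

def break_even_seconds(p_idle=P_IDLE, p_standby=P_STANDBY, e_spinup=E_SPINUP):
    """Idle time beyond which spinning down saves net energy."""
    return e_spinup / (p_idle - p_standby)

print(f"spin down only if idle > {break_even_seconds():.1f} s")
# 60 / 4.2 ~ 14.3 s; shorter idle periods waste energy and add latency
```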

Relevance: 40.00%

Abstract:

In "Appraising Work Group Performance: New Productivity Opportunities in Hospitality Management," a discussion by Mark R. Edwards, Associate Professor, College of Engineering, Arizona State University, and Leslie Edwards Cummings, Assistant Professor, College of Hotel Administration, University of Nevada, Las Vegas, the authors initially state: "Employee group performance variation accounts for a significant portion of the degree of productivity in the hotel, motel, and food service sectors of the hospitality industry. The authors discuss TEAMSG, a microcomputer-based approach to appraising and interpreting group performance. TEAMSG appraisal allows an organization to profile and to evaluate groups, facilitating the targeting of training and development decisions and interventions, as well as the more equitable distribution of organizational rewards."

"The caliber of employee group performance is a major determinant in an organization's productivity and success within the hotel and food service industries," Edwards and Cummings say. "Gaining accurate information about the quality of performance of such groups as organizational divisions, individual functional departments, or work groups can be as enlightening..." the authors further reveal. This perspective is especially important not only for strategic human resources planning purposes, but also for diagnosing development needs and for differentially distributing organizational rewards.

The authors will have you know that employee requirements in an unpredictable environment, which largely describes the hospitality industry, are difficult to quantify. In an effort to measure elements of performance, Edwards and Cummings look to TEAMSG, an acronym for Team Evaluation and Management System for Groups, and develop the concept. In discussing the background for employees, Edwards and Cummings point out that employees, at the individual level, must often possess and exercise varied skills. In group circumstances, employees often work at locations outside the corporate unit, or move from unit to unit, as in the case of a project team; being able to transcend an individual mentality in favor of a group mentality is imperative. "A solution which addresses the frustration and lack of motivation on the part of the employee is to coach, develop, appraise, and reward employees on the basis of group achievement," say the authors.

"An appraisal, effectively developed and interpreted, has at least three functions," Edwards and Cummings suggest, and they go on to define them. The authors place great emphasis on rewards and interventions to bolster the assertion set forth in their thesis statement. Edwards and Cummings warn that individual agendas can threaten, erode, and undermine group performance; there is no "I" in TEAM.

Relevance: 40.00%

Abstract:

Bonded repair of concrete structures with fiber-reinforced polymer (FRP) systems is increasingly accepted as a cost-efficient and structurally viable method for the rapid rehabilitation of concrete structures. However, the relationships among long-term performance attributes, service life, and details of the installation process are not easy to quantify; accordingly, there is currently a lack of generally accepted construction specifications, making it difficult for the field engineer to certify the adequacy of the construction process.

The objective of the present study, part of National Cooperative Highway Research Program (NCHRP) Project 10-59B, was to investigate the effect of surface preparation on the behavior of wet lay-up FRP repair systems and consequently to develop rational thresholds that provide sufficient performance.

The research program comprised both experimental and analytical work on wet lay-up FRP applications. The experimental work included flexure testing of sixty-seven (67) reinforced concrete beams and bond testing of ten (10) reinforced concrete blocks. Four parameters were studied: surface roughness, surface flatness, surface voids and bug holes, and surface cracks/cuts. The findings were analyzed from various aspects and compared with data available in the literature. As part of the analytical work, finite element models of the flexural specimens with surface flaws were developed using ANSYS, in order to extend the parametric study of the effects of concrete surface flaws and to verify the experimental results through nonlinear finite element analysis.

Test results showed that surface roughness does not appear to have a significant influence on the overall performance of wet lay-up FRP systems, with or without adequate anchorage, and regardless of whether failure was by debonding or by rupture of the FRP. Both the experimental and the analytical results for surface flatness showed that peaks on the concrete surface, in the range studied, do not have a significant effect on the performance of wet lay-up FRP systems; however, valleys of a particular size could reduce the strength of wet lay-up FRP systems. Test results regarding surface voids and surface cracks/cuts revealed that previously suggested thresholds for these flaws appear to be conservative, as the analytical study also confirmed.