940 results for PERFORMANCE WORK SYSTEMS


Relevance:

40.00%

Abstract:

The purpose of this study was to determine whether there was a difference in the self-determined evaluations of work performance and support needs by adults with mental retardation in supported employment and in sheltered workshop environments. The instrument, Job Observation and Behavior Scale: Opportunity for Self-Determination (JOBS: OSD; Brady, Rosenberg, & Frain, 2006), was administered to 38 adults with mental retardation from sheltered workshops and 32 adults with mental retardation from supported employment environments. Cross-tabulations with chi-square tests and independent-samples t-tests were conducted to evaluate differences between the two groups, sheltered workshop and supported work. Two multivariate analyses of variance (MANOVAs) were conducted to determine the effect of work environment on Quality of Performance (QP) and Types of Support (TS) test scores and their subscales. This study found significant differences between the groups on the QP Behavior and Job Duties subscales. The sheltered workshop group perceived themselves as performing significantly better on job duties than the supported work group. Conversely, the supported work group perceived themselves to have better behavior than the sheltered workshop group. However, there were no significant differences between groups in their perception of support needs for the three subscales. The findings imply that work environment affects the self-determined evaluations of work performance by adults with mental retardation. Recommendations for further study include (a) detailing the characteristics of supported work and sheltered workshops that support and/or discourage self-determined behaviors, (b) exploring the behavior of adults with mental retardation in sheltered workshops and supported work environments, and (c) analyzing the support needs of adults with mental retardation in both settings and their understanding of those needs.
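The group comparisons above rest on the independent-samples t statistic. A minimal sketch of the pooled-variance form, using hypothetical subscale scores rather than the study's data, might look like:

```python
from math import sqrt

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic, as used to
    compare two groups' subscale scores."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical QP "Job Duties" subscale scores (not the study's data)
sheltered = [4.2, 4.5, 4.1, 4.4, 4.3]
supported = [3.8, 3.9, 4.0, 3.7, 3.9]
t = independent_t(sheltered, supported)
```

For the study's actual group sizes (38 and 32), the statistic would be referred to a t distribution with 68 degrees of freedom.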

Relevance:

40.00%

Abstract:

The Unified Modeling Language (UML) has quickly become the industry standard for object-oriented software development. It is being widely used in organizations and institutions around the world. However, UML is often found to be too complex for novice systems analysts. Although prior research has identified difficulties novice analysts encounter in learning UML, no viable solution has been proposed to address these difficulties. Sequence-diagram modeling, in particular, has largely been overlooked. The sequence diagram models the behavioral aspects of an object-oriented software system in terms of interactions among its building blocks, i.e., objects and classes. It is one of the most commonly used UML diagrams in practice, yet there has been little research on sequence-diagram modeling, and the current literature scarcely provides effective guidelines for developing a sequence diagram. Such guidelines would greatly benefit novice analysts who, unlike experienced systems analysts, do not possess relevant prior experience that would let them easily learn how to develop a sequence diagram. There is a need for an effective sequence-diagram modeling technique for novices. This dissertation reports a research study that identified novice difficulties in modeling a sequence diagram and proposed a technique called CHOP (CHunking, Ordering, Patterning), designed to reduce cognitive load by addressing the cognitive complexity of sequence-diagram modeling. The CHOP technique was evaluated in a controlled experiment against a technique recommended in a well-known textbook, which was found to be representative of approaches provided in many textbooks as well as the practitioner literature. The results indicated that novice analysts were able to perform better using the CHOP technique. This outcome seems to have been enabled by the pattern-based heuristics provided by the technique. Meanwhile, novice analysts rated the CHOP technique more useful, although not significantly easier to use, than the control technique. The study established that the CHOP technique is an effective sequence-diagram modeling technique for novice analysts.

Relevance:

40.00%

Abstract:

Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this manufacturing system configuration is observed at a manufacturing facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial stations and different configurations of parallel processing with multiple product classes, as well as job circulation due to random part failures. In addition, correction terms obtained via regression analysis were added to the approximations in order to minimize the error between the analytical approximations and the simulation models. Markovian and general-type manufacturing systems, with multiple product classes, job circulation due to failures, and fork-join stations to model parallel processing, were studied. In both the Markovian and general cases, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to handle more than two products. Numerical comparisons showed that the approximations perform remarkably well when the correction factors are used. On average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case. All equations in the analytical formulations were implemented as a set of MATLAB scripts. Using these scripts, operations managers of web server assembly lines, or of manufacturing and service systems with similar characteristics, can estimate various system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation.
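The abstract does not give the approximations themselves. As a hedged illustration of the general idea, a single-station M/M/1 flow-time formula with a regression-style correction term (coefficients invented here, not the dissertation's) could be sketched as:

```python
def mm1_flow_time(lam, mu):
    """Expected flow time (queueing wait plus service) in an M/M/1 station
    with arrival rate lam and service rate mu."""
    assert lam < mu, "traffic intensity must be below 1"
    return 1.0 / (mu - lam)

def corrected_flow_time(lam, mu, a=1.0, b=0.0):
    """Apply a regression-style correction a*W + b*rho, in the spirit of
    the dissertation's correction terms (coefficients here are made up)."""
    rho = lam / mu  # traffic intensity
    return a * mm1_flow_time(lam, mu) + b * rho
```

In the dissertation, the regression coefficients were fitted against simulation output to close the gap for instances with more than two product classes.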

Relevance:

40.00%

Abstract:

This dissertation develops a process improvement method for service operations based on the Theory of Constraints (TOC), a management philosophy that has been shown to be effective in manufacturing for decreasing WIP and improving throughput. While TOC has enjoyed much attention and success in the manufacturing arena, its application to services in general has been limited. The contribution to industry and knowledge is a method for improving global performance measures based on TOC principles. The method proposed in this dissertation is tested using discrete event simulation of a service factory scenario: airline turnaround operations. To evaluate the method, a simulation model of aircraft turn operations of a U.S.-based carrier was built and validated using actual data from airline operations. The model was then adjusted to reflect an application of the Theory of Constraints for determining how to deploy the scarce resource of ramp workers. The results indicate that, given slight modifications to TOC terminology and the development of a method for constraint identification, the Theory of Constraints can be applied with success to services. Bottlenecks in services must be defined as those processes for which the process rate and the amount of work remaining are such that completing the process will not be possible without an increase in the process rate. A bottleneck ratio is used to determine to what degree a process is a constraint. Simulation results also suggest that redefining performance measures to reflect a global business perspective of reducing costs related to specific flights, rather than the local-optimum operational approach of turning all aircraft quickly, yields significant savings to the company. Simulated savings to the airline's annual operating costs equaled 30% of current expenses for misconnecting passengers, with only a modest increase in worker utilization achieved through a more efficient heuristic that deploys workers to the highest-priority tasks. This dissertation contributes to the literature on service operations by describing a dynamic, adaptive dispatch approach to managing service factory operations, such as airline turnarounds, using the management philosophy of the Theory of Constraints.
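The abstract does not state the bottleneck ratio's exact formula. One plausible reading, in which the ratio compares the time a process needs to the time it has, can be sketched as follows (task names and numbers are hypothetical, not the carrier's data):

```python
def bottleneck_ratio(work_remaining, process_rate, time_available):
    """Ratio of time needed to time available; a value above 1 means the
    process cannot finish without an increase in its rate (a constraint)."""
    return (work_remaining / process_rate) / time_available

# Hypothetical turnaround tasks: (units of work left, units/min, minutes left)
tasks = {"bags": (200, 10, 15), "fuel": (300, 25, 15), "catering": (40, 8, 15)}
ratios = {name: bottleneck_ratio(*t) for name, t in tasks.items()}
constraint = max(ratios, key=ratios.get)  # task to prioritize workers on
```

Under TOC, scarce ramp workers would then be dispatched to the task with the highest ratio first.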

Relevance:

40.00%

Abstract:

This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that abstract away infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, relieving them of the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; (2) a performance prediction methodology applicable to the target environment; and (3) a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, tested with medical image processing workloads, is compared to two baseline scheduling solutions and outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
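The reported 15% average prediction error suggests a mean absolute relative error metric. A sketch, with illustrative run times rather than the study's measurements:

```python
def mean_relative_error(predicted, actual):
    """Mean absolute relative error between predicted and actual run times."""
    errs = [abs(p - a) / a for p, a in zip(predicted, actual)]
    return sum(errs) / len(errs)

# Hypothetical application run times in seconds (not the study's data)
actual = [100.0, 250.0, 60.0]
predicted = [110.0, 225.0, 66.0]
err = mean_relative_error(predicted, actual)  # 0.10, i.e. 10% average error
```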

Relevance:

40.00%

Abstract:

Over the past few decades, we have enjoyed tremendous benefits from the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly sophisticated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperature, which not only adversely impact a system's cost, performance, and reliability, but also increase leakage and thus overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all design abstraction levels, from the circuit and logic levels to the architectural and system levels. In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained, power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization. The novelty of this work is that we integrate cutting-edge circuit- and architecture-level research on power and thermal behavior into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation for guiding the development of power/thermal-aware scheduling algorithms in practical computing systems.
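To illustrate the leakage-temperature interdependency such models capture, a toy lumped thermal model with linearized, temperature-dependent leakage (all coefficients invented for illustration, not the dissertation's models) can be iterated to a steady-state temperature:

```python
def steady_temperature(p_dyn, t_amb, r_th, leak0, k, iters=200):
    """Fixed-point iteration of T = T_amb + R_th * (P_dyn + P_leak(T)),
    with a linearized leakage model P_leak = leak0 + k * (T - T_amb).
    Converges when R_th * k < 1. All coefficients are illustrative."""
    t = t_amb
    for _ in range(iters):
        p_leak = leak0 + k * (t - t_amb)  # leakage grows with temperature
        t = t_amb + r_th * (p_dyn + p_leak)
    return t

# 10 W dynamic power, 40 C ambient, 0.5 C/W thermal resistance,
# 2 W baseline leakage, 0.2 W/C leakage sensitivity
t_steady = steady_temperature(10.0, 40.0, 0.5, 2.0, 0.2)
```

The positive feedback loop (higher temperature, more leakage, more heat) is exactly why leakage-aware models matter: the steady state lands above what a leakage-free model would predict.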

Relevance:

40.00%

Abstract:

Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade. Furthermore, energy production is currently one of the primary sources of pollution. These concerns are becoming more important in data centers. As more computational power is required to serve hundreds of millions of users, larger data centers are becoming necessary, resulting in higher electrical energy consumption. Of all the energy used in data centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data centers; it is also important in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices allows them to last longer and recharge less frequently. Saving energy in computer systems is a challenging problem. Improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems; second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy; finally, we identify and explore solutions for the page fetch-before-update problem in caching systems, which can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
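The spin-down trade-off mentioned above is commonly framed as a break-even rule: spin down only if the expected idle period saves more energy than the spin-down/spin-up transition costs. A sketch with assumed power figures (illustrative, not measurements from this thesis):

```python
def should_spin_down(expected_idle_s, p_active_w, p_standby_w, e_transition_j):
    """Break-even rule: spin down only if the energy saved over the
    expected idle period exceeds the cost of the transition.
    All power/energy figures here are assumptions for illustration."""
    saved = (p_active_w - p_standby_w) * expected_idle_s  # joules saved idling
    return saved > e_transition_j

# Assumed disk: 8 W active, 1 W standby, 70 J to spin down and back up
decision = should_spin_down(20.0, 8.0, 1.0, 70.0)  # 20 s idle: worth it
```

With these assumed figures the break-even idle time is 70 / (8 - 1) = 10 seconds; shorter idle periods are cheaper to ride out spinning.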

Relevance:

40.00%

Abstract:

In "Appraising Work Group Performance: New Productivity Opportunities in Hospitality Management," a discussion by Mark R. Edwards, Associate Professor, College of Engineering, Arizona State University, and Leslie Edwards Cummings, Assistant Professor, College of Hotel Administration, University of Nevada, Las Vegas, the authors initially state: "Employee group performance variation accounts for a significant portion of the degree of productivity in the hotel, motel, and food service sectors of the hospitality industry. The authors discuss TEAMSG, a microcomputer-based approach to appraising and interpreting group performance. TEAMSG appraisal allows an organization to profile and to evaluate groups, facilitating the targeting of training and development decisions and interventions, as well as the more equitable distribution of organizational rewards." "The caliber of employee group performance is a major determinant in an organization's productivity and success within the hotel and food service industries," Edwards and Cummings say. "Gaining accurate information about the quality of performance of such groups as organizational divisions, individual functional departments, or work groups can be as enlightening..." the authors further reveal. This perspective is especially important not only for strategic human resources planning purposes, but also for diagnosing development needs and for differentially distributing organizational rewards. The authors note that employee requirements in an unpredictable environment, which largely describes the hospitality industry, are difficult to quantify. In an effort to measure elements of performance, Edwards and Cummings look to TEAMSG, an acronym for Team Evaluation and Management System for Groups, and develop the concept. In discussing employee background, Edwards and Cummings point out that employees, at the individual level, must often possess and exercise varied skills. In group circumstances, employees often work at locations outside of, or move between, corporate units, as in the case of a project team. Being able to transcend an individual-to-group mentality is imperative. "A solution which addresses the frustration and lack of motivation on the part of the employee is to coach, develop, appraise, and reward employees on the basis of group achievement," say the authors. "An appraisal, effectively developed and interpreted, has at least three functions," Edwards and Cummings suggest, and they go on to define them. The authors place great emphasis on rewards and interventions to bolster the assertion set forth in their thesis statement. Edwards and Cummings warn that individual agendas can threaten, erode, and undermine group performance; there is no "I" in TEAM.

Relevance:

40.00%

Abstract:

Bonded repair of concrete structures with fiber-reinforced polymer (FRP) systems is increasingly being accepted as a cost-efficient and structurally viable method of rapid rehabilitation of concrete structures. However, the relationships between long-term performance attributes, service life, and details of the installation process are not easy to quantify. Accordingly, there is currently a lack of generally accepted construction specifications, making it difficult for the field engineer to certify the adequacy of the construction process. The objective of the present study, as part of National Cooperative Highway Research Program (NCHRP) Project 10-59B, was to investigate the effect of surface preparation on the behavior of wet lay-up FRP repair systems and consequently develop rational thresholds that provide sufficient performance. The research program comprised both experimental and analytical work on wet lay-up FRP applications. The experimental work included flexure testing of sixty-seven (67) reinforced concrete beams and bond testing of ten (10) reinforced concrete blocks. Four parameters were studied: surface roughness, surface flatness, surface voids and bug holes, and surface cracks/cuts. The findings were analyzed from various aspects and compared with the data available in the literature. As part of the analytical work, finite element models of the flexural specimens with surface flaws were developed using ANSYS. The purpose of this part was to extend the parametric study of the effects of concrete surface flaws and to verify the experimental results with nonlinear finite element analysis. Test results showed that surface roughness does not appear to have a significant influence on the overall performance of wet lay-up FRP systems, with or without adequate anchorage, and whether failure occurred by debonding or by rupture of the FRP. Both experimental and analytical results for surface flatness showed that peaks on the concrete surface, in the range studied, do not have a significant effect on the performance of wet lay-up FRP systems; however, valleys of particular size could reduce the strength of wet lay-up FRP systems. Test results regarding surface voids and surface cracks/cuts revealed that previously suggested thresholds for these flaws appear to be conservative, as also confirmed by the analytical study.

Relevance:

40.00%

Abstract:

This work presents the development of an in-plane vertical micro-coaxial probe using a bulk micromachining technique for high-frequency material characterization. The coaxial probe was fabricated in a silicon substrate by standard photolithography and a deep reactive ion etching (DRIE) technique. The through-hole structure in the form of a coaxial probe was etched and metalized with a diluted silver paste. A co-planar waveguide configuration was integrated with the design to characterize the probe. The electrical and RF characteristics of the coaxial probe were determined by simulating the probe design in Ansoft's High Frequency Structure Simulator (HFSS). The reflection coefficient and transducer gain performance of the probe were measured up to 65 GHz using a vector network analyzer (VNA). The probe demonstrated excellent results over a wide frequency band, indicating its ability to integrate with millimeter-wave packaging systems as well as to characterize unknown materials at high frequencies. The probe was then placed in contact with three materials whose unknown permittivities were to be determined. To accomplish this, the coaxial probe was placed in contact with the material under test and electromagnetic waves were directed to the surface using the VNA, where the reflection coefficient was determined over a wide frequency band from DC to 65 GHz. Next, the permittivity of each material was deduced from its measured reflection coefficients using a cross-ratio invariance coding technique. The permittivity results obtained from the measured reflection-coefficient data agreed well with simulated permittivity results. These results validate the use of the micro-coaxial probe to characterize the permittivity of unknown materials at frequencies up to 65 GHz.
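The cross-ratio invariance technique itself is not detailed in the abstract. For intuition about how a reflection coefficient encodes permittivity, the much simpler normal-incidence half-space relation can be inverted as follows (an illustrative substitute, not the study's method, and valid only for a lossless real-valued reflection coefficient):

```python
def gamma_from_permittivity(er):
    """Normal-incidence reflection coefficient at an air/dielectric interface:
    Gamma = (1 - sqrt(er)) / (1 + sqrt(er))."""
    s = er ** 0.5
    return (1.0 - s) / (1.0 + s)

def permittivity_from_gamma(gamma):
    """Invert the relation above: er = ((1 - Gamma) / (1 + Gamma))**2."""
    root = (1.0 - gamma) / (1.0 + gamma)
    return root ** 2
```

In practice, probe-based extraction must also de-embed the probe's own response, which is what calibration against known standards (and techniques such as the cross-ratio method) provides.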

Relevance:

40.00%

Abstract:

To stay competitive, many employers are looking for creative and innovative employees to add value to their organization. However, current models of job performance overlook creative performance as an important criterion to measure in the workplace. The purpose of this dissertation is to conduct two separate but related studies on creative performance that aim to show that creative performance should be included in models of job performance, and ultimately in performance evaluations in organizations. Study 1 is a meta-analysis of the relationship between creative performance and task performance, and between creative performance and organizational citizenship behavior (OCB). Overall, I found support for a medium-to-large corrected correlation for both the creative performance-task performance (ρ = .51) and creative performance-OCB (ρ = .49) relationships. Further, I found that both rating source and study location were significant moderators. Study 2 is a process model that includes creative performance alongside task performance and OCB as the outcome variables. I test a model in which both individual differences (specifically: conscientiousness, extraversion, proactive personality, and self-efficacy) and job characteristics (autonomy, feedback, and supervisor support) predict creative performance, task performance, and OCB through engagement as a mediator. In a sample of 299 employed individuals, I found that all of the individual differences and job characteristics were positively correlated with all three performance criteria. I also examined these relationships in a multiple regression framework, where most of the individual differences and job characteristics still predicted the performance criteria. In the mediation analyses, I found support for engagement as a significant mediator of the individual differences-performance and job characteristics-performance relationships. Taken together, Study 1 and Study 2 support the notion that creative performance should be included in models of job performance. Implications for researchers and practitioners alike are discussed.
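Corrected correlations such as ρ = .51 typically come from the standard meta-analytic correction for measurement unreliability (attenuation). A sketch of that formula, with illustrative reliabilities rather than the values used in this meta-analysis:

```python
from math import sqrt

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Correction for attenuation due to measurement unreliability:
    rho = r_observed / sqrt(rel_x * rel_y)."""
    return r_xy / sqrt(rel_x * rel_y)

# An observed r of .40 with reliabilities of .80 on both measures
# disattenuates to .50 (illustrative numbers, not the study's)
rho = correct_for_attenuation(0.40, 0.80, 0.80)
```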

Relevance:

40.00%

Abstract:

This research aimed to develop a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. This study views an enterprise as a system that creates value for its customers; accordingly, development of the framework drew on systems theory and IDEF methodologies. This study defined ESE as an engineering discipline that develops and applies systems theory and engineering techniques to the specification, analysis, design, and implementation of an enterprise over its life cycle. The proposed ESE classification scheme breaks an enterprise system down into four elements: work, resources, decision, and information. Each enterprise element is specified with four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation, to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. This framework is intended for identifying research voids in the ESE discipline. It also helps in applying engineering and systems tools to this emerging field. It captures the relationships among various enterprise aspects and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic. It consists of a hierarchy of engineering activities presented in an IDEF0 model, with each activity defined by its input, output, constraints, and mechanisms. The output of an ESE effort can be a partial or whole enterprise system design for its physical, managerial, and/or informational layers. The proposed ESE process is applicable to a new enterprise system design or to an engineering change in an existing system. The long-term goal of this study is the development of a scientific foundation for ESE research and development.
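The 4 x 4 element-facet scheme, with each cell passing through the four-phase engineering process, can be written down directly as a small data structure. This is a sketch of the scheme as described in the abstract, not an artifact of the study:

```python
from itertools import product

ELEMENTS = ("work", "resources", "decision", "information")
FACETS = ("strategy", "competency", "capacity", "structure")
PHASES = ("specification", "analysis", "design", "implementation")

# Every element-facet cell of the classification scheme is subject to
# the same four-phase engineering process.
scheme = {(e, f): list(PHASES) for e, f in product(ELEMENTS, FACETS)}
```

Enumerating the 16 cells this way is one concrete means of spotting research voids: any cell without published work is a gap in the ESE discipline.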


Relevance:

40.00%

Abstract:

Reducing energy consumption is the main requirement to be satisfied in refrigeration and air conditioning by mechanical vapor-compression systems, and automotive systems are no different. Thermal analysis of these systems is crucial for better performance of automotive air conditioners. This work aims to evaluate the operating conditions of the refrigerant R-134a (used in vehicles) and compare it with R-437A (an alternative refrigerant), varying the speed of the electric fan at the evaporator. All tests were performed on an ATR600 automotive air conditioning unit, simulating the thermal conditions of the system. The equipment is instrumented for acquisition of temperature data, condensation and evaporation pressures, and electrical power consumed, in order to determine the coefficient of performance (COP) of the cycle. The system was tested at rotations of 800, 1600, and 2400 rpm with a constant charge of R-134a, and then under the same conditions with R-437A, both at the charge recommended by the manufacturer. The results show that the best system performance occurs at 800 rpm for both refrigerants.
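The coefficient of performance compared across fan speeds above is simply the useful cooling delivered per unit of electrical power consumed. A sketch with illustrative values, not the ATR600 measurements:

```python
def cop(cooling_capacity_w, electrical_power_w):
    """Coefficient of performance of a vapor-compression cycle:
    useful cooling (W) per unit of electrical power consumed (W)."""
    return cooling_capacity_w / electrical_power_w

# Illustrative operating point: 3.5 kW of cooling for 1.4 kW of input power
example_cop = cop(3500.0, 1400.0)  # 2.5
```

A higher COP at a given fan speed means the same cooling effect for less compressor work, which is the basis for comparing R-134a and R-437A in this study.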