89 results for Complexity measurement


Relevance:

20.00%

Publisher:

Abstract:

The objective of this study is to find out how sales performance should be measured and how sales should be steered in a multinational company. The beginning of the study concentrates on the literature on sales, performance measurement, sales performance measurement, and sales steering. The empirical part is a case study in which the information was acquired through interviews with the key personnel of the company. The results of the interviews and the problems they revealed were analyzed, and possible solutions were compared. When measuring sales performance, it is important to discover the specific needs and objectives for such a system, and these needs should be highlighted in its design. The system should be versatile, and its structure should be in line with the organizational structure. The sales performance measurement system was seen to play an important role in supporting sales steering. However, personal management, and especially conversations, were seen as truly critical in the steering. Sales performance measurement could be based on the following perspectives: financial, market, customer, people, and future. That way the sales department could react more rapidly to changes in the environment.

Relevance:

20.00%

Publisher:

Abstract:

Software systems are expanding and becoming increasingly present in everyday activities. A constantly evolving society demands that they deliver more functionality, are easy to use, and work as expected. All these challenges increase the size and complexity of a system. People may not be aware of the presence of a software system until it malfunctions or fails to perform. The concept of being able to depend on the software is particularly significant when it comes to critical systems. At this point the quality of a system is regarded as an essential issue, since any deficiency may lead to considerable financial loss or endangerment of life. Traditional development methods may not ensure a sufficiently high level of quality. Formal methods, on the other hand, allow us to achieve a high level of rigour and can be applied to develop a complete system or only a critical part of it. Such techniques, applied during system development from the early design stages onward, increase the likelihood of obtaining a system that works as required. However, formal methods are sometimes considered difficult to utilise in traditional development. Therefore, it is important to make them more accessible and to reduce the gap between formal and traditional development methods. This thesis explores the usability of rigorous approaches by giving an insight into formal designs with the use of graphical notation. The understandability of formal modelling is increased by a compact representation of the development and the related design decisions. The central objective of the thesis is to investigate the impact that rigorous approaches have on the quality of developments. This means that it is necessary to establish techniques for the evaluation of rigorous developments. Since we study various development settings and methods, specific measurement plans and a set of metrics need to be created for each setting. Our goal is to provide methods for collecting data and recording evidence of the applicability of rigorous approaches. This would support organisations in making decisions about integrating formal methods into their development processes. It is important to control software development, especially in its initial stages. Therefore, we focus on the specification and modelling phases, as well as the related artefacts, e.g. models, which have a significant influence on the quality of the final system. Since the application of formal methods may increase the complexity of a system, it may affect its maintainability and thus its quality. Our goal is to improve the quality of a system via metrics and measurements, as well as generic refinement patterns applied to a model and a specification. We argue that these can facilitate the process of creating software systems, e.g. by controlling complexity and providing modelling guidelines. Moreover, we regard them as additional mechanisms for quality control and improvement, also for rigorous approaches. The main contribution of this thesis is to provide the metrics and measurements that help in assessing the impact of rigorous approaches on developments. We establish techniques for the evaluation of certain aspects of quality, based on the structural, syntactical, and process-related characteristics of early-stage development artefacts, i.e. specifications and models. The presented approaches are applied to various case studies, and the results of the investigation are juxtaposed with the perceptions of domain experts. It is our aspiration to promote measurement as an indispensable part of the quality control process and as a strategy towards quality improvement.
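
As a toy illustration only (not the thesis's own metric suite), the sketch below computes a simple structural size measure over an assumed Event-B-style model representation; both the representation and the metric are assumptions made for illustration.

```python
# Toy structural metric over an assumed Event-B-like model representation.
# The dictionary layout and the metric itself are illustrative assumptions,
# not the metrics defined in the thesis.

machine = {
    "events": {
        "add_item":    {"guards": 2, "actions": 1},
        "remove_item": {"guards": 3, "actions": 2},
        "checkout":    {"guards": 4, "actions": 3},
    }
}

def structural_size(m: dict) -> int:
    """A simple size measure: number of events plus guards plus actions."""
    evs = m["events"].values()
    return len(evs) + sum(e["guards"] + e["actions"] for e in evs)

print(structural_size(machine))  # 3 events + 9 guards + 6 actions = 18
```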

Relevance:

20.00%

Publisher:

Abstract:

The pulsed electroacoustic (PEA) method is a commonly used non-destructive technique for investigating space charges and has been developed since the early 1980s. There is continuing interest in a better understanding of the influence of space charge on the reliability of solid electrical insulation under high electric fields. The PEA method is widely used for space charge profiling because it is robust and relatively inexpensive. The technique relies on a voltage impulse used to temporarily disturb the space charge equilibrium in a dielectric. An acoustic wave is generated by the charge movement in the sample and detected by means of a piezoelectric film; the spatial distribution of the space charge is contained within the detected signal. The principle of such a system is well established, and several kinds of setups have been constructed for different measurement needs. This thesis presents the design of a PEA measurement system as a systems engineering project. The operating principle and some recent developments are summarised, and the steps of the electrical and mechanical design of the instrument are discussed. A common procedure for measuring space charges is explained and applied to verify the functionality of the system. The measurement system is provided as an additional basic research tool for the Corporate Research Centre of ABB (China) Ltd. It can be used to characterise flat samples with a thickness of 0.2–0.5 mm under DC stress. The spatial resolution of the measurement is 20 μm.
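
To make the profiling principle concrete, the minimal sketch below maps the detected time-domain signal to depth in the sample via the acoustic velocity, x = v·t. The velocity and sampling rate are assumed illustrative values, not parameters of the instrument described above.

```python
import numpy as np

# Illustrative parameters only; not the parameters of the ABB instrument.
V_SOUND = 2000.0   # assumed longitudinal sound velocity in the sample [m/s]
SAMPLE_RATE = 1e9  # assumed digitiser sampling rate [samples/s]

def depth_axis(n_samples: int) -> np.ndarray:
    """Depth [m] associated with each sample of the detected acoustic
    signal, using x = v * t with t measured from the surface echo."""
    t = np.arange(n_samples) / SAMPLE_RATE
    return V_SOUND * t

# A 0.5 mm thick sample is traversed in 250 ns at 2000 m/s; at 1 GS/s
# that places 250 samples across the thickness, i.e. 2 um per sample.
print(depth_axis(250)[-1])  # ~0.000498 m, just under 0.5 mm
```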

Relevance:

20.00%

Publisher:

Abstract:

Intellectual assets have received continuous attention in the academic field, as they are vital sources of competitive advantage and organizational performance in the contemporary knowledge-intensive business environment. Intellectual capital measurement is quite thoroughly addressed in the accounting literature. However, the purpose of measurement is to support the management of intellectual assets, and the reciprocal relationship between measurement and management has not been comprehensively considered in the literature. The theoretical motivation for this study arose from this paradox: in order to maximise the effectiveness of knowledge management, the two initiatives need to be closely integrated. The research approach of this interventionist case study is constructive. The objective is to develop the case organization's knowledge management and intellectual capital measurement so that they are closely integrated and the measurement supports the management of intellectual assets. The case analysis provides valuable practical considerations about the integration and related issues, as the case company is a knowledge-intensive organization in which the know-how of the employees is the central competitive asset; therefore, the management and measurement of knowledge are essential for its future success. The results suggest that the case organization is confronting challenges in managing knowledge. In order to appropriately manage knowledge processes and control the related risks, support from intellectual capital measurement is required. However, challenges in measuring intellectual capital, especially knowledge, could be recognized in the organization. By reflecting on the knowledge management situation and the constructed strategy map, a new intellectual capital measurement system was developed for the case organization. The construction of the system, as well as its indicators, can be seen as a contribution to the literature, emphasizing the importance of properly considering the organization's knowledge situation when developing an intellectual capital measurement system.

Relevance:

20.00%

Publisher:

Abstract:

This thesis describes the design and modeling of an instrument for knee joint kinematics measurement that works for both in-vivo and in-vitro subjects. It is designed to be compatible with an imaging machine in the sagittal plane; because of the invasiveness of the imaging machine, the instrument is also able to function independently. The flexibility of the instrument allows anthropometrically different subjects to be measured. Of the six degrees of freedom of a knee, three rotational and one translational degree of freedom can be measured for both types of subjects. The translational, proximal-distal motion is stimulated by an external force applied directly along its axis. The angular and linear displacements are measured by magnetic sensors and high-precision potentiometers, respectively.
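
As a hedged sketch of the read-out just described, the following assumes linear sensor characteristics: x = (V / V_ref) · stroke for the potentiometer, and an analogous scaling for the magnetic angle sensor. All constants are illustrative assumptions, not the instrument's calibration values.

```python
# Assumed read-out constants for illustration only.
V_REF = 5.0        # assumed excitation voltage [V]
STROKE = 0.05      # assumed potentiometer travel [m]
ANGLE_RANGE = 360  # assumed magnetic sensor span [deg]

def linear_displacement(v: float) -> float:
    """Proximal-distal translation [m] from the potentiometer voltage [V]."""
    return (v / V_REF) * STROKE

def joint_angle(v: float) -> float:
    """Rotation [deg] from the magnetic angle sensor output voltage [V]."""
    return (v / V_REF) * ANGLE_RANGE

print(linear_displacement(2.5), joint_angle(1.25))  # 0.025 m, 90.0 deg
```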

Relevance:

20.00%

Publisher:

Abstract:

The objective of this study was to find out the factors that affect customer profitability in the not-for-profit case company. Customer profitability was examined in two different segments of the customer base. The effects of price, cost, and the amount of services provided on the profit margin were studied. The distribution of profitability among the customers and the effect of certain characteristics, such as the size of the customer measured in services purchased, on profitability were analyzed. The theoretical framework was built around customer profitability and the use of customer profitability information in a not-for-profit organization. The present use of customer profitability information and the possibilities of using the results of this research in the case company were presented. Quantitative research methods were used in the empirical part of the study. The results indicate that the two customer segments differ in their buying behavior, which affects profitability; measures taken to improve profitability should therefore be considered with the different characteristics of the customers in mind. Finally, the limitations of the study were discussed as possible further research topics.
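
As an illustrative sketch of the profitability calculation described above (margin per customer = (price − unit cost) × services purchased, aggregated by segment), with made-up field names and figures rather than the case company's data:

```python
from dataclasses import dataclass

# Hypothetical customer records; segments "A" and "B" stand in for the
# two segments studied, and all numbers are invented for illustration.

@dataclass
class Customer:
    segment: str
    services: int      # number of services purchased
    price: float       # average price per service
    unit_cost: float   # average cost per service

    def margin(self) -> float:
        return (self.price - self.unit_cost) * self.services

customers = [
    Customer("A", 120, 35.0, 28.0),
    Customer("A", 15, 40.0, 33.0),
    Customer("B", 300, 30.0, 27.5),
]

for seg in ["A", "B"]:
    margins = [c.margin() for c in customers if c.segment == seg]
    print(seg, "mean margin:", sum(margins) / len(margins))
```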

Relevance:

20.00%

Publisher:

Abstract:

Measurement is a tool for research; it is therefore important that the measuring process is carried out correctly, without distorting the signal or the measured event. Research on thermoelectric phenomena has in recent decades focused increasingly on transverse thermoelectric phenomena. The transverse Seebeck effect makes it possible to produce thinner and faster heat flux sensors than before. Studies of the transverse Seebeck effect have so far focused on materials, so this Master's Thesis studies the instrumentation of heat flux sensors based on the transverse Seebeck effect: their equivalent circuit, their connection to electronics, and the selection and design of a suitable amplifier. The research is carried out as a case study involving Gradient Heat Flux Sensors and an electric motor. In this work, a general equivalent circuit is presented for the transverse Seebeck effect based heat flux sensor. An amplifier was designed for the sensor of the case study, and a solution was produced for measuring the local heat flux of the electric motor while improving electromagnetic compatibility.
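
A minimal sketch of the signal chain discussed above, assuming the usual gradient-sensor relation that the thermo-emf is proportional to the heat rate through the sensor face, E = S0 · q · A. The sensitivity, sensor area, and amplifier gain are assumed values, not those of the case study.

```python
# Assumed signal-chain constants for illustration only.
S0 = 9.8       # assumed volt-watt sensitivity [mV/W]
AREA = 1.0e-4  # assumed sensor face area [m^2]
GAIN = 100.0   # assumed amplifier gain

def heat_flux(v_out_mV: float) -> float:
    """Local heat flux [W/m^2] from the amplified output voltage [mV]."""
    emf = v_out_mV / GAIN  # back out the sensor thermo-emf [mV]
    power = emf / S0       # heat rate through the sensor face [W]
    return power / AREA    # heat flux density [W/m^2]

# 49 mV at the amplifier output -> 0.49 mV emf -> 0.05 W -> 500 W/m^2
print(heat_flux(49.0))
```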

Relevance:

20.00%

Publisher:

Abstract:

This thesis describes an approach to overcoming the complexity of software product management (SPM) and consists of several studies that investigate the activities and roles in product management, as well as issues related to the adoption of SPM. The thesis focuses on organizations that have started the adoption of SPM but have faced difficulties due to its complexity and fuzziness, and it suggests frameworks for overcoming these challenges using the principles of decomposition and iterative improvement. The research process consisted of three phases, each of which provided complementary results and empirical observations on the problem of overcoming the complexity of SPM. Overall, the product management processes and practices of 13 companies were studied and analysed. Moreover, additional data was collected with a worldwide survey. The collected data were analysed using grounded theory (GT) to identify possible ways to overcome the complexity of SPM, and complementary research methods, such as elements of the Theory of Constraints, were used for deeper data analysis. The results of the thesis indicate that decomposing SPM activities according to the specific characteristics of companies and roles is a useful approach for simplifying the existing SPM frameworks. Companies would benefit from the results by adopting SPM activities more efficiently and effectively and by spending fewer resources on adoption through concentrating on the most important SPM activities.

Relevance:

20.00%

Publisher:

Abstract:

This master's thesis is devoted to studying different heat flux measurement techniques, such as differential temperature sensors, semi-infinite surface temperature methods, calorimetric sensors, and gradient heat flux sensors. The possibility of using Gradient Heat Flux Sensors (GHFS) to measure heat flux in the combustion chamber of compression-ignited reciprocating internal combustion engines was considered in more detail. A. Mityakov conducted an experiment in which a Gradient Heat Flux Sensor was placed in the four-stroke diesel engine Indenor XL4D to measure heat flux in the combustion chamber. The results obtained from the experiment were compared with the numerical output of a model. This model (a one-dimensional single-zone model) was implemented in MathCAD, and the result of the implementation is a graph of heat flux in the combustion chamber as a function of crank angle. The heat flux values throughout the cycle obtained with the heat flux sensor and those obtained theoretically were sufficiently similar, but not identical. Such deviation is rather common for this type of experiment.
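
The MathCAD model itself is not reproduced here; the following is a highly simplified Python sketch of the same kind of single-zone calculation, evaluating a Woschni-form convective heat flux q = h · (T_gas − T_wall) over crank angle. The coefficient form is the standard Woschni correlation, but the gas-state traces and all constants are placeholder assumptions, not the thesis model.

```python
import math

T_WALL = 450.0  # assumed cylinder wall temperature [K]

def h_woschni(p_bar: float, T_gas: float, w: float, bore: float = 0.09) -> float:
    """Woschni-form convective coefficient [W/(m^2 K)]; p in bar -> kPa."""
    return 3.26 * bore**-0.2 * (p_bar * 100.0)**0.8 * T_gas**-0.55 * w**0.8

def heat_flux(theta_deg: float) -> float:
    # Placeholder gas state: a smooth pulse peaking at top dead centre
    # (theta = 0), standing in for the model's pressure/temperature traces.
    shape = math.exp(-(theta_deg / 40.0) ** 2)
    p = 5.0 + 55.0 * shape      # cylinder pressure [bar]
    T = 600.0 + 1600.0 * shape  # bulk gas temperature [K]
    w = 15.0                    # characteristic gas velocity [m/s]
    return h_woschni(p, T, w) * (T - T_WALL)

# Heat flux [W/m^2] at a few crank angles around top dead centre.
for theta in range(-60, 61, 30):
    print(theta, round(heat_flux(theta)))
```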

Relevance:

20.00%

Publisher:

Abstract:

Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high performance designs. Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges in highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes make it possible to design a high performance system in a limited chip area. The major advantages of 3D NoCs are considerable reductions in average latency and power consumption. Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support, and we address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption; thus, we propose three different approaches for alleviating such congestion in the network. The first approach is based on measuring congestion information in different regions of the network, distributing the information over the network, and utilizing it when making routing decisions. The second approach employs a learning method to dynamically find less congested routes according to the underlying traffic. The third approach is based on a fuzzy-logic technique that makes better routing decisions when traffic information for different routes is available. Faults affect performance significantly, as packets must then take longer paths to be routed around the faults, which in turn increases congestion around the faulty regions. We propose four methods to tolerate faults at the link and switch level by using only the shortest paths, as long as such a path exists. The unique characteristic of these methods is that they tolerate faults while also maintaining the performance of NoCs. To the best of our knowledge, these algorithms are the first approaches to bypass faults before reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication result in a significant performance loss for unicast traffic, because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be routed efficiently within the network. While providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, in order to reduce the overall path length of multicast packets, we present several partitioning methods along with analytical models for latency measurement. This approach is discussed in the context of 3D mesh networks.
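
As an illustrative sketch of the first, congestion-information-based approach (not the thesis's exact algorithm), the following picks the least congested output among the minimal-path candidates in a 2D mesh; the congestion table stands in for the propagated regional congestion information.

```python
# Minimal adaptive routing decision in a 2D mesh: among the admissible
# shortest-path output ports, choose the one with the lowest reported
# congestion. Coordinates and congestion values are illustrative.

def route(cur, dst, congestion):
    """cur, dst: (x, y) router coordinates; congestion: dict port -> load.
    Returns the chosen output port among the minimal-path candidates."""
    (cx, cy), (dx, dy) = cur, dst
    candidates = []
    if dx > cx: candidates.append("E")
    if dx < cx: candidates.append("W")
    if dy > cy: candidates.append("N")
    if dy < cy: candidates.append("S")
    if not candidates:
        return "LOCAL"  # packet has arrived at its destination router
    # Adaptivity: any candidate keeps the path minimal; pick the least loaded.
    return min(candidates, key=lambda p: congestion[p])

# Two shortest-path choices exist; the east region is more congested,
# so the packet is steered north.
print(route((1, 1), (3, 3), {"E": 0.8, "W": 0.1, "N": 0.3, "S": 0.2}))  # "N"
```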

Relevance:

20.00%

Publisher:

Abstract:

This study concerns performance measurement and management in a collaborative network. Collaboration between companies has increased in recent years due to the turbulent operating environment. The literature shows that there is a need for more comprehensive research on performance measurement in networks and on the use of measurement information in their management. This study examines the development process and the uses of a performance measurement system supporting performance management in a collaborative network. There are two main research questions: how to design a performance measurement system for a collaborative network, and how to manage performance in a collaborative network. The work can be characterised as a qualitative single case study. The empirical data was collected in a Finnish collaborative network consisting of a leading company and a reseller network. The work is based on five research articles applying various research methods. The research questions are examined at the network level and at the level of a single network partner. The study contributes to the earlier literature by producing a new and deeper understanding of network-level performance measurement and management. A three-step process model is presented to support the design of a performance measurement system; the process model has also been tested in another collaborative network. The study also examines the factors affecting the design process of the measurement system. The results show that a participatory development style, network culture, and outside facilitators have a positive effect on the design process. The study increases understanding of how to manage performance in a collaborative network and of the kinds of uses of performance information that can be identified in a collaborative network. The results show that the performance measurement system is an applicable tool for managing the performance of a network. They also reveal that trust and openness increased during the utilisation of the performance measurement system and that operations became more transparent. Finally, the study presents a management model that evaluates the maturity of performance management in a collaborative network. The model is a practical tool that helps to analyse the current stage of performance management in a collaborative network and to develop it further.

Relevance:

20.00%

Publisher:

Abstract:

Production and generation of electrical power is evolving towards more environmentally friendly technologies and schemes. Pushed by the increasing cost of fossil fuels, the operational costs of producing electrical power with them, and their effects on the environment, such as pollution and global warming, renewable energy sources are gaining constant momentum in the global energy economy. Consequently, the introduction of distributed energy sources has brought a new complexity to electrical networks. In the new concept of smart grids and decentralized power generation, control, protection, and measurement are also distributed, requiring, among other things, a new communication scheme so that they can operate with each other in balance and improve performance. In this research, an analysis of different communication technologies (power line communication, Ethernet over unshielded twisted pair (UTP), optical fiber, Wi-Fi, WiMAX, and Long Term Evolution) and their respective characteristics is carried out, with the objective of pointing out strengths and weaknesses from different points of view (technical, economic, deployment, etc.) and establishing a richer context in which a decision on the communication approach can be made depending on the specific application scenario of a new smart grid deployment. As a result, possible optimal deployment solutions for communication are described, considering different technology options, and important considerations are pointed out for some of the possible network implementation scenarios.

Relevance:

20.00%

Publisher:

Abstract:

In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for a proper solution often come across activity-based costing (ABC) or one of its variations, which utilizes cost drivers to allocate the costs of activities to cost objects. In order to allocate costs accurately and reliably, the selection of appropriate cost drivers is essential for obtaining the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately to select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study is conducted as part of the case company's applied ABC project, using statistical research as the main research method supported by a theoretical, literature-based method. The main research tools featured in the study are simple and multiple regression analyses, which, together with a practicality analysis based on the literature and observations, form the basis for the advanced methods. The results suggest that the most appropriate cost driver alternatives are delivery drops and internal delivery weight. The use of cost driver combinations is not recommended, as they do not provide substantially better results while increasing measurement costs, complexity, and the burden of use. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capabilities towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before taking them into use.
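
As a sketch of the driver-validation step, the following regresses transportation cost on the two drivers named above with ordinary least squares and reports R²; all figures are invented for illustration, not the case company's data.

```python
import numpy as np

# Multiple regression of period transportation cost on two candidate
# cost drivers: delivery drops and internal delivery weight.
drops  = np.array([120, 95, 140, 110, 130, 105], dtype=float)
weight = np.array([8.2, 6.1, 9.5, 7.4, 8.8, 6.9])  # tonnes
cost   = np.array([6100, 4800, 7000, 5600, 6500, 5200], dtype=float)

# Design matrix with an intercept column; solve by ordinary least squares.
X = np.column_stack([np.ones_like(drops), drops, weight])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)

# R^2 measures how well the drivers explain the cost variation.
pred = X @ beta
r2 = 1 - np.sum((cost - pred) ** 2) / np.sum((cost - cost.mean()) ** 2)
print("coefficients:", beta.round(2), "R^2:", round(r2, 3))
```

In the same spirit, a simple regression on each driver alone can be compared against this combined model; the study's conclusion corresponds to the case where the combination's gain in R² does not justify the added measurement cost and complexity.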