Abstract:
Diastrophic dysplasia (DTD) is a recessive chondrodysplasia caused by mutations in SLC26A2, a cell membrane sulfate-chloride antiporter. Sulfate uptake impairment results in low cytosolic sulfate, leading to cartilage proteoglycan (PG) undersulfation. In this work, we used the dtd mouse model to study the role of N-acetyl-l-cysteine (NAC), a well-known drug with antioxidant properties, as an intracellular sulfate source for macromolecular sulfation. Because of the important pre-natal phase of skeletal development and growth, we administered 30 g/l NAC in the drinking water to pregnant mice to explore a possible transplacental effect on the fetuses. When cartilage PG sulfation was evaluated by high-performance liquid chromatography disaccharide analysis in dtd newborn mice, a marked increase in PG sulfation was observed in newborns from NAC-treated pregnancies when compared with the placebo group. Morphometric studies of the femur, tibia and ilium after skeletal staining with alcian blue and alizarin red indicated a partial rescue of abnormal bone morphology in dtd newborns from treated females, compared with pups from untreated females. The beneficial effect of increased macromolecular sulfation was confirmed by chondrocyte proliferation studies in cryosections of the tibial epiphysis by proliferating cell nuclear antigen immunohistochemistry: the percentage of proliferating cells, significantly reduced in the placebo group, reached normal values in dtd newborns from NAC-treated females. In conclusion, NAC is a useful source of sulfate for macromolecular sulfation in vivo when extracellular sulfate supply is reduced, confirming the potential of therapeutic approaches with thiol compounds to improve skeletal deformity and short stature in human DTD and related disorders.
Abstract:
This study compares the impact of quality-management tools on the performance of organisations utilising the ISO 9001:2000 standard as the basis for a quality-management system and those utilising the EFQM model for this purpose. A survey is conducted among 107 experienced and independent quality-management assessors. The study finds that organisations with quality-management systems based on the ISO 9001:2000 standard tend to use general-purpose qualitative tools, and that these do have a relatively positive impact on their general performance. In contrast, organisations adopting the EFQM model tend to use more specialised quantitative tools, which produce significant improvements in specific aspects of their performance. The findings of the study will enable organisations to choose the most effective quality-improvement tools for their particular quality strategy.
Abstract:
After the restructuring process of the power supply industry, which in Finland, for instance, took place in the mid-1990s, free competition was introduced for the production and sale of electricity. Nevertheless, natural monopolies are found to be the most efficient form of production in the transmission and distribution of electricity, and therefore such companies remained franchised monopolies. To prevent the misuse of the monopoly position and to guarantee the rights of the customers, regulation of these monopoly companies is required. One of the main objectives of the restructuring process has been to increase the cost efficiency of the industry. Simultaneously, demands on service quality are increasing. Therefore, many regulatory frameworks are being, or have been, reshaped so that companies are given stronger incentives for efficiency and quality improvements. Performance benchmarking in many cases has a central role in the practical implementation of such incentive schemes. Economic regulation with performance benchmarking attached to it provides companies with directing signals that tend to affect their investment and maintenance strategies. Since asset lifetimes in electricity distribution are typically many decades, investment decisions have far-reaching technical and economic effects. This doctoral thesis addresses the directing signals of incentive regulation and performance benchmarking in the field of electricity distribution. The theory of efficiency measurement and the most common regulation models are presented. The chief contributions of this work are (1) a new kind of analysis of the regulatory framework, in which the actual directing signals of the regulation and benchmarking for the electricity distribution companies are evaluated, (2) a methodology and a software tool for analysing those directing signals in the electricity distribution sector, and (3) an analysis of real-life regulatory frameworks with the developed methodology, together with further development of the regulation model from the viewpoint of the directing signals. The results of this study have played a key role in the development of the Finnish regulatory model.
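To make the role of performance benchmarking concrete, the sketch below scores units with input-oriented CCR data envelopment analysis (DEA), one of the efficiency-measurement techniques commonly used in distribution-network regulation. This is a minimal illustration under invented data: the four units, their single input (e.g. total cost) and two outputs (e.g. energy delivered, customers served) are placeholders, not figures from the thesis.

```python
# Minimal input-oriented CCR DEA: for unit o, find the smallest theta such
# that a convex combination of all units uses at most theta * inputs of unit o
# while producing at least unit o's outputs. Data are invented placeholders.
import numpy as np
from scipy.optimize import linprog

X = np.array([[100.0], [120.0], [80.0], [150.0]])            # inputs (e.g. total cost)
Y = np.array([[500.0, 20.0], [480.0, 25.0],
              [400.0, 15.0], [700.0, 30.0]])                 # outputs (energy, customers)

def ccr_efficiency(o: int) -> float:
    n, m = X.shape                       # n units, m inputs
    s = Y.shape[1]                       # s outputs
    c = np.r_[1.0, np.zeros(n)]          # variables [theta, lambda_1..lambda_n]
    # inputs:  sum_j lambda_j x_ij - theta x_io <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    # outputs: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.fun                       # efficiency score in (0, 1]

for o in range(len(X)):
    print(f"unit {o}: efficiency {ccr_efficiency(o):.3f}")
```

A unit scoring 1.0 lies on the efficient frontier; a score of 0.8 signals that the regulator's benchmark suggests the same outputs should be achievable with 80% of the inputs, which is exactly the kind of directing signal the thesis analyses.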
Abstract:
Evaluation of image quality (IQ) in Computed Tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping the radiation dose to the patient as low as reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values, together with standard dose indicators, can be combined into 'figures of merit' (FOM) to characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human-observer approach. When combined with a dose indicator, a generalised dose-efficiency index can be explored in a framework of system and patient dose optimisation. We focus on the IQ methodologies required for standard reconstruction, but also for iterative reconstruction algorithms. With this concept, the previously used FOMs are presented together with a proposal to update them in order to keep them relevant and in line with technological progress. The MO, which objectively assesses IQ for clinically relevant tasks, represents the most promising method in terms of matching radiologists' sensitivity performance and is therefore of most relevance in the clinical environment.
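As an illustration of the figure-of-merit idea, the sketch below computes a detectability index d' for a simple non-prewhitening (NPW) model observer in a signal-known-exactly task and divides its square by a dose indicator (CTDIvol), giving a dose-efficiency FOM. The signal template, noise covariance and dose value are synthetic placeholders, not data from the review.

```python
# Dose-efficiency figure of merit FOM = d'^2 / CTDIvol, with d' from a
# non-prewhitening (NPW) model observer: d' = (s^T s) / sqrt(s^T K s),
# where s is the expected signal and K the noise covariance.
import numpy as np

def npw_dprime(signal: np.ndarray, noise_cov: np.ndarray) -> float:
    s = signal.ravel()
    return float((s @ s) / np.sqrt(s @ noise_cov @ s))

# toy example: an 8x8 disc signal of 10 HU contrast in white noise (25 HU^2)
u, v = np.meshgrid(np.arange(8), np.arange(8))
signal = ((u - 3.5) ** 2 + (v - 3.5) ** 2 < 6).astype(float) * 10.0
K = 25.0 * np.eye(signal.size)

d = npw_dprime(signal, K)
ctdi_vol = 10.0                       # placeholder dose indicator [mGy]
print(f"d' = {d:.2f}, FOM = d'^2/CTDIvol = {d**2 / ctdi_vol:.2f} per mGy")
```

Because d'^2 scales linearly with dose for quantum-limited noise, dividing by the dose indicator yields an index that is, to first order, dose-independent and thus suitable for comparing scanners or reconstruction modes.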
Abstract:
Modelling the shoulder's musculature is challenging given its mechanical and geometric complexity. The use of the ideal fibre model to represent a muscle's line of action cannot always faithfully represent the mechanical effect of each muscle, leading to considerable differences between model-estimated and in vivo measured muscle activity. While the musculo-tendon force coordination problem has been extensively analysed in terms of the cost function, only a few works have investigated the existence and sensitivity of solutions with respect to fibre topology. The goal of this paper is to present an analysis of the solution set using the concepts of torque-feasible space (TFS) and wrench-feasible space (WFS) from cable-driven robotics. A shoulder model is presented and a simple musculo-tendon force coordination problem is defined. The ideal fibre model for representing muscles is reviewed, and the TFS and WFS are defined, leading to necessary and sufficient conditions for the existence of a solution. The shoulder model's TFS is analysed to explain the lack of anterior deltoid (DLTa) activity. Based on the analysis, a modification of the model's muscle-fibre geometry is proposed. The performance with and without the modification is assessed by solving the musculo-tendon force coordination problem for quasi-static abduction in the scapular plane. After the proposed modification, the DLTa reaches 20% activation.
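The musculo-tendon force coordination problem described above can be written as a small quadratic program: find non-negative fibre tensions that reproduce the required joint torques while minimising a squared-activation cost. The sketch below uses an invented 2x4 moment-arm matrix and force limits, not the paper's shoulder model; a torque is solvable only if it lies in the torque-feasible space (TFS), i.e. the image of the tension box under the moment-arm matrix.

```python
# Musculo-tendon force coordination as a quadratic program:
#   minimise sum_i (f_i / f_max_i)^2  subject to  M f = tau,  0 <= f <= f_max.
# M, tau and f_max are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

M = np.array([[0.02, -0.01, 0.03, 0.000],       # moment arms [m], joints x muscles
              [0.01,  0.02, 0.00, 0.025]])
tau = np.array([1.0, 0.8])                      # required joint torques [Nm]
f_max = np.array([400.0, 300.0, 500.0, 350.0])  # maximum fibre tensions [N]

res = minimize(lambda f: np.sum((f / f_max) ** 2),   # squared-activation cost
               x0=np.full(4, 50.0),
               constraints={"type": "eq", "fun": lambda f: M @ f - tau},
               bounds=[(0.0, fm) for fm in f_max],
               method="SLSQP")

print("fibre tensions [N]:", np.round(res.x, 1))
print("activations:      ", np.round(res.x / f_max, 3))
# If tau lay outside the TFS, no f would satisfy M f = tau and the solver
# would report failure - the kind of infeasibility the TFS analysis detects.
```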
Abstract:
The objective of this master's thesis was to develop a model for mobile subscription acquisition cost (SAC) and mobile subscription retention cost (SRC) by applying activity-based cost accounting principles. The thesis was conducted as a case study for a telecommunications company operating in the Finnish telecommunications market. In addition to activity-based cost accounting, other theories were studied and applied in order to establish a theoretical framework for the thesis. The concepts of acquisition and retention were explored in a broader context together with the concepts of customer satisfaction, loyalty, profitability and, eventually, customer relationship management, to understand the background and meaning of the theme of this thesis. The utilisation of SAC and SRC information is discussed through the theories of decision making and activity-based management. The present state and future needs of SAC and SRC information usage at the case company, as well as the functions of the company, were also examined by interviewing members of the company personnel. With the help of these theories and methods, the aim was to identify both the theory-based and the practical factors which affect the structure of the model. During the study it was confirmed that the existing SAC and SRC model of the case company should be used as the basis for developing the activity-based model. As a result, the indirect costs of the old model were transformed into activities, while the direct costs continued to be allocated directly to the acquisition of new subscriptions and the retention of old ones. The refined model will enable better management of subscription acquisition, retention and the related costs through the activity information. During the interviews it was found that SAC and SRC information is also used in performance measurement and in operational and strategic planning. SAC and SRC are not fully absorbed costs, and it was concluded that the model serves best as a source of indicative cost information. This thesis does not include cost calculations. Instead, the refined model, together with both the theory-based and the interview findings concerning the utilisation of the information produced by the model, will serve as a framework for possible future development aiming at completing the model.
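To illustrate the activity-based principle behind the refined model (indirect costs pooled per activity and assigned through activity drivers, direct costs allocated straight to acquisition), here is a small sketch. All activity names, driver volumes and cost figures are invented for the example, not the case company's data.

```python
# Activity-based SAC: indirect cost per subscription is the sum of activity
# rates (pool cost / driver volume), assuming each new subscription consumes
# one unit of every driver. All figures are invented.
activities = {                     # name: (annual pool cost [EUR], annual driver volume)
    "order handling":   (120_000.0, 8_000),   # driver: orders processed
    "credit checks":    (45_000.0, 9_000),    # driver: checks run
    "SIM provisioning": (60_000.0, 8_000),    # driver: SIMs activated
}
direct_cost_per_sub = 35.0         # direct cost, e.g. handset subsidy, allocated as-is

indirect_per_sub = sum(pool / volume for pool, volume in activities.values())
sac = direct_cost_per_sub + indirect_per_sub
print(f"activity-based SAC: {sac:.2f} EUR per new subscription")
```

The point of the transformation described above is visible even in this toy: changing an activity's driver volume or pool cost immediately changes SAC, which is the activity information that supports the management uses mentioned in the interviews.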
Abstract:
This thesis concentrates on developing a practical local-approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion and Thomason's plastic limit-load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted by the present methodology.
This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems where non-homogeneous materials are involved. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
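For reference, the Gurson-Tvergaard yield function underlying the constitutive model reads, in its standard literature form (the thesis's own notation may differ):

```latex
% Gurson--Tvergaard yield function (standard literature form)
\Phi \;=\; \left(\frac{q}{\sigma_M}\right)^{2}
      \;+\; 2\,q_1 f \cosh\!\left(\frac{3\,q_2\,\sigma_h}{2\,\sigma_M}\right)
      \;-\; \bigl(1 + q_3 f^{2}\bigr) \;=\; 0
```

Here q is the macroscopic von Mises equivalent stress, σ_h the hydrostatic stress, σ_M the matrix flow stress, f the void volume fraction, and q_1, q_2, q_3 Tvergaard's adjustment parameters (often taken with q_3 = q_1²); the cosh term in the hydrostatic stress is what makes the model dilational and motivates the hydrostatic/deviatoric decomposition used in the integration algorithms.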
Abstract:
Centrifugal compressors are widely used, for example, in refrigeration processes, the oil and gas industry, superchargers, and waste-water treatment. In this work, five different vaneless diffusers and six different vaned diffusers are investigated numerically. The vaneless diffusers vary only in their diffuser width, such that four of the geometries have a pinch, i.e. a decrease in the diffuser width. Four of the vaned diffusers have the same vane turning angle but a different number of vanes, and two have different vane turning angles. The flow fields are solved with Finflo, a Navier-Stokes solver. All the cases are modelled with Chien's k-ε turbulence model, and selected cases also with the k-ω SST turbulence model. All five vaneless diffusers and three vaned diffusers are also investigated experimentally. For each construction, the compressor operating map is measured according to the relevant standards. In addition, the flow fields before and after the diffuser are characterised with static and total pressure, flow angle and total temperature measurements. When the computational results are compared with the measurements, it is evident that the k-ω SST turbulence model predicts the flow fields better. The simulation results indicate that it is possible to improve the efficiency with the pinch, and according to the numerical results, the two best geometries are the ones with the most pinch at the shroud. These geometries have approximately 4 percentage points higher efficiency than the unpinched vaneless diffusers. The hub pinch does not seem to have any major benefits. In general, the pinches make the flow fields before and after the diffuser more uniform. The pinch also seems to improve the impeller efficiency, for two reasons. The main reason is that the pinch decreases the size of the slow-flow and possible backflow region located near the shroud after the impeller. Secondly, the pinches decrease the flow velocity in the tip clearance, leading to a smaller tip leakage flow and therefore a slightly better impeller efficiency. Some of the vaned diffusers also improve the efficiency, by 1-3 percentage points, when compared with the vaneless unpinched geometry. The measurement results confirm that the pinch is beneficial to the performance of the compressor. The flow fields are more uniform in the pinched cases, and the slow-flow regions are smaller. The peak efficiency is approximately 2 percentage points and the design-point efficiency approximately 4 percentage points higher with the pinched geometries than with the unpinched geometry. According to the measurements, the two best geometries are the ones with the most pinch at the shroud, the case with the pinch only at the shroud being slightly the better of the two. The vaned diffusers also have better efficiency than the vaneless unpinched geometries; however, the pinched cases have even better efficiencies. The vaned diffusers narrow the operating range considerably, whilst the pinch has no significant effect on the operating range.
Abstract:
The objective of this master's thesis was to analyse the cost efficiency and performance of Stora Enso's payroll centre and its human resources management system, SAP HR. Benchmarking was used in the study, with five large Finnish companies participating. The main focus of the benchmarking was a cost comparison between the companies; the survey also examined the performance of the companies' systems. Based on the results, Stora Enso's payroll centre offers a cost-efficient and competitive solution that compares well with the other Finnish companies.
Abstract:
This thesis was produced for the Technology Marketing unit at the Nokia Research Center. Technology marketing was a new function at the Nokia Research Center and needed an established framework capable of taking multiple aspects into account when measuring team performance. Technology marketing functions had existed in other parts of Nokia, yet no single method had been agreed upon for measuring their performance. The purpose of this study was to develop a performance measurement system for Nokia Research Center Technology Marketing. The target was to give Nokia Research Center Technology Marketing a framework of separate metrics, including benchmarked starting levels and target values for future planning (the numeric values were kept confidential within the company). As a result of this research, the Balanced Scorecard model of Kaplan and Norton was chosen as the performance measurement system for Nokia Research Center Technology Marketing. The research also selected the indicators to be utilised in the chosen performance measurement system. Furthermore, the performance measurement system was defined to guide the Head of Marketing in managing the Nokia Research Center Technology Marketing team. During the research process, the team's mission, vision, strategy and critical success factors were outlined.
Abstract:
The topic of this thesis is the simulation of a combination of several control and data assimilation methods, meant to be used for controlling the quality of paper in a paper machine. Paper making is a very complex process, and the information obtained from the web is sparse: a paper web scanner can only measure a zig-zag path on the web. An assimilation method is therefore needed to produce estimates of the Machine Direction (MD) and Cross Direction (CD) profiles of the web, and quality control is based on these estimates. There is an increasing need for intelligent methods to assist in data assimilation. The target of this thesis is to study how such intelligent assimilation methods affect paper web quality. The work is based on a paper web simulator developed in the TEKES-funded MASI NoTes project. The simulator is a valuable tool for comparing different assimilation methods. The thesis contains a comparison of four assimilation methods: a first-order Bayesian model estimator, a higher-order Bayesian estimator based on an ARMA model, a Fourier-transform-based Kalman filter estimator, and a simple block estimator. The last can be considered close to current operational methods. Of these methods, the Bayesian, ARMA and Kalman estimators all seem to have advantages over the commercial one, with the Kalman and ARMA estimators appearing to be the best in overall performance.
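As a hedged illustration of the assimilation principle, the sketch below runs a Kalman filter in which the state is the full CD profile, modelled as a random walk, and each scan step measures a single CD position along a zig-zag path. The thesis's Kalman estimator works in the Fourier domain; this spatial-domain toy version, with invented dimensions and noise levels, only shows the mechanics.

```python
# Kalman-filter assimilation of zig-zag scanner data: state = CD profile
# (random-walk dynamics), measurement = one CD position per scan step.
# Profile shape, dimensions and noise variances are invented placeholders.
import numpy as np

n_cd = 30                            # number of CD positions
q_var, r_var = 1e-4, 1e-2            # process / measurement noise variances
x = np.zeros(n_cd)                   # profile estimate
P = np.eye(n_cd)                     # estimate covariance

rng = np.random.default_rng(1)
true_profile = np.sin(np.linspace(0.0, np.pi, n_cd))
period = 2 * n_cd - 2                # one zig-zag sweep across the web and back

for t in range(600):
    k = abs(t % period - (n_cd - 1))                       # scanner's CD position
    z = true_profile[k] + rng.normal(0.0, np.sqrt(r_var))  # noisy measurement
    P += q_var * np.eye(n_cd)                              # predict: uncertainty grows
    H = np.zeros(n_cd); H[k] = 1.0                         # measure component k only
    S = H @ P @ H + r_var                                  # innovation variance
    K = P @ H / S                                          # Kalman gain
    x = x + K * (z - H @ x)                                # update estimate
    P = P - np.outer(K, H @ P)                             # update covariance

print(f"RMS profile error: {np.sqrt(np.mean((x - true_profile) ** 2)):.4f}")
```

The covariance P keeps track of how stale each CD position's estimate is between scanner visits, which is exactly what a block estimator lacks and why the model-based estimators tend to perform better.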
Abstract:
The objective of the study was to examine performance measurement, performance measures and their design in the wholesale and distribution business. Measures of critical success factors help steer a company towards a common goal. Such measures are often linked to strategic planning and implementation, and they have similarities with many strategic tools such as the Balanced Scorecard. The research problem can be stated as a question: which measures of critical success factors (KPIs) support Oriola KD's long-term objectives in measuring suppliers and the product assortment? The study is divided into a literature part and an empirical part. The literature review covers earlier research on strategy, supply chain management, supplier evaluation and various performance measurement systems. The empirical part proceeds from a current-state analysis to the proposed measures of critical success factors, which were developed with the help of a model found in the literature. The outcome of the study is a set of critical-success-factor measures for supplier and product assortment evaluation, developed for the needs of the case company.
Centralized Motion Control of a Linear Tooth Belt Drive: Analysis of the Performance and Limitations
Abstract:
A centralized robust position control for an electrically driven tooth belt drive is designed in this doctoral thesis. Both a cascaded control structure and a PID-based position controller are discussed. The performance and the limitations of the system are analysed, and design principles for the mechanical structure and the control design are given. These design principles are also suitable for most motion control applications where mechanical resonance frequencies and control loop delays are present. One of the major challenges in the design of a controller for machinery applications is that the values of the parameters in the system model (parametric uncertainty) or the system model itself (non-parametric uncertainty) are seldom known accurately in advance. In this thesis, a systematic analysis of the parameter uncertainty of the linear tooth belt drive model is presented, and the effect of the variation of a single parameter on the performance of the total system is shown. The total variation of the model parameters is taken into account in the control design phase using Quantitative Feedback Theory (QFT). The thesis also introduces a new method to analyse reference feedforward controllers applying the QFT. The performance of the designed controllers is verified by experimental measurements. The measurements confirm the control design principles given in this thesis.
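The sort of analysis described above can be sketched with a two-mass model of the belt drive (motor inertia, load inertia, belt stiffness and damping) plus a control-loop delay, closed around a filtered PID position controller. All numerical values and gains below are invented placeholders, not the thesis's identified parameters, and the python-control snippet only illustrates where the mechanical resonance sits relative to the loop; QFT itself would shape the controller against templates spanning the parameter variation.

```python
# Two-mass belt-drive model with loop delay under filtered PID position
# control. Parameters and gains are illustrative placeholders.
import numpy as np
import control as ct

J1, J2, k, c = 1e-3, 2e-3, 50.0, 0.05       # motor/load inertia, stiffness, damping
# torque -> load position of a two-mass system:
# G(s) = (c s + k) / ( s^2 [ J1 J2 s^2 + (J1+J2) c s + (J1+J2) k ] )
G = ct.tf([c, k],
          [J1 * J2, (J1 + J2) * c, (J1 + J2) * k, 0.0, 0.0])
delay = ct.tf(*ct.pade(0.002, 1))           # 2 ms loop delay, 1st-order Pade
Kp, Ki, Kd, Tf = 8.0, 4.0, 0.15, 1e-3
pid = ct.tf([Kd, Kp, Ki], [1.0, 0.0]) * ct.tf([1.0], [Tf, 1.0])  # filtered PID

w_res = np.sqrt(k * (J1 + J2) / (J1 * J2))  # mechanical resonance [rad/s]
T_cl = ct.feedback(pid * delay * G, 1)      # closed position loop
t, y = ct.step_response(T_cl, np.linspace(0.0, 1.0, 2000))
print(f"resonance ~ {w_res:.0f} rad/s, step response settles to {y[-1]:.3f}")
```

Sweeping J2, k or the delay over their uncertainty ranges and recomputing the loop is the brute-force version of the parameter-variation analysis: the design must keep its margins for every plant in the set, which is the requirement QFT formalises.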
Abstract:
The main objective of this research is to create a performance measurement system for the accounting services of a large paper industry company. In the thesis, different performance measurement systems are compared, and two systems are selected, presented and compared in more detail. The Performance Prism is the framework used in this research. The Performance Prism uses success maps to determine objectives. The model's target areas are divided into five groups: stakeholder satisfaction, stakeholder contribution, strategy, processes and capabilities. The creation of the measurement system began by identifying the stakeholders and defining their objectives. A success map was created based on these objectives, and measures were created based on the objectives and the success map. The data needed for each measure was then defined. The final measurement system contains just over 40 measures in total, each with a specific target level and an owner. The number of measures is fairly large, but as this is the first version of the measurement system, the amount is acceptable.
Abstract:
In this study, the recently developed concept of strategic entrepreneurship was addressed with the aim of investigating the underlying factors and components constituting the concept and their influence on firm performance. As a result of an analysis of the existing literature and empirical studies, a model of strategic entrepreneurship is developed for the current study, with emphasis on the exploration and exploitation parts of the concept. The research model is tested on data collected in the project "Factors of growth and success of entrepreneurial firms in Russia" by the Center for Entrepreneurship of GSOM in 2007, containing the answers of owners and managers of 500 firms operating in St. Petersburg and Moscow. Multiple regression analysis showed that exploration and exploitation, represented by entrepreneurial values, investments in internal resources, knowledge management and developmental changes, are significant factors constituting strategic entrepreneurship and having a positive relation to firm performance. The theoretical contribution of the work lies in the development and testing of the model of strategic entrepreneurship. The results can be implemented in the management practices of companies willing to engage in strategic entrepreneurship and increase their firm performance.