19 results for homeostatic model assessment
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Static process simulation has traditionally been used to model complex processes for various purposes. However, using static process simulators to prepare holistic examinations aimed at improving profit-making capability requires a lot of work, because producing the results requires assessing the applicability of detailed data which may be irrelevant to the objective. The data relevant to the total assessment gets buried under irrelevant data. Furthermore, the models do not include an examination of maintenance or risk management, and economic examination is often an extra property added to them which can be performed with a spreadsheet program. In this work, a process model applicable to holistic economic examinations has been developed. The model is based on the life cycle profit philosophy developed by Hagberg and Henriksson in 1996. The construction of the model has utilized life cycle assessment and life cycle costing methodologies with a view to developing, above all, a model which would be applicable to the economic examination of complete wholes and which would require only information focused on aspects essential to the objectives. Life cycle assessment and costing differ from each other in their modeling principles, but the features of both methodologies can be used in the development of economic process modeling. Methods applicable to the modeling of complex processes can be examined from the viewpoint of life cycle methodologies, because they too involve the collection and management of large bodies of information and the production of information for the needs of decision-makers. The results of the study show that, on the basis of the principles of life cycle modeling, a process model can be created which may be used to produce holistic efficiency examinations of the profit-making capability of a production line with fewer resources than with traditional methods. The calculations of the model are based, to the maximum extent, on the information system of the factory, which means that the accuracy of the results can be improved by developing the information systems so that they provide the best possible information for this kind of examination.
Abstract:
Indisputable evidence of climate change and its link to greenhouse gas emissions makes a change in the energy production infrastructure necessary during the coming decades. Through political conventions and restrictions, the energy industry is being pushed toward using a bigger share of renewable energy sources in its energy supply. In addition to climate change, a sustainable energy supply is another major issue for future development plans, but neither of these should come at an unbearable price. All power production types have environmental effects as well as strengths and weaknesses. Although each change comes with a price, the right track for minimising environmental impacts while securing the energy supply can be found by combining all possible low-carbon technologies and by improving energy efficiency in all sectors, in order to create a new power production infrastructure with a tolerable energy price and minor environmental effects. GEMIS (Global Emission Model for Integrated Systems) is a life-cycle analysis program, which was used in this thesis to build indicative energy models for Finland's future energy supply. The results indicate that the energy supply must comprise both high-capacity nuclear power and a large variety of renewable energy sources in order to minimise all environmental effects while keeping the energy price reasonable.
Abstract:
Coastal birds are an integral part of coastal ecosystems, which nowadays are subject to severe environmental pressures. Effective measures for the management and conservation of seabirds and their habitats call for insight into their population processes and the factors affecting their distribution and abundance. Central to national and international management and conservation measures is the availability of accurate data and information on bird populations, as well as on environmental trends and on measures taken to solve environmental problems. In this thesis I address different aspects of the occurrence, abundance, population trends and breeding success of waterbirds breeding on the Finnish coast of the Baltic Sea, and discuss the implications of the results for seabird monitoring, management and conservation. In addition, I assess the position and prospects of coastal bird monitoring data in the processing and dissemination of biodiversity data and information in accordance with the Convention on Biological Diversity (CBD) and other national and international commitments. I show that important factors for seabird habitat selection are island area and elevation, water depth, shore openness, and the composition of island cover habitats. Habitat preferences are species-specific, with certain similarities within species groups. The occurrence of the colonial Arctic Tern (Sterna paradisaea) is partly affected by different habitat characteristics than its abundance. Using long-term bird monitoring data, I show that eutrophication and winter severity have reduced the populations of several Finnish seabird species. A major demographic factor through which environmental changes influence bird populations is breeding success. Breeding success can function as a more rapid indicator of sublethal environmental impacts than population trends, particularly for long-lived and slow-breeding species, and should therefore be included in coastal bird monitoring schemes. Among my target species, local breeding success can be shown to affect the populations of the Mallard (Anas platyrhynchos), the Eider (Somateria mollissima) and the Goosander (Mergus merganser) after a time lag corresponding to their species-specific recruitment age. For some of the target species, the number of individuals in late summer can be used as an easier and more cost-effective indicator of breeding success than brood counts. My results highlight that the interpretation and application of habitat and population studies require solid background knowledge of the ecology of the target species. In addition, the special characteristics of coastal birds, their habitats, and coastal bird monitoring data have to be considered in the assessment of their distribution and population trends. According to the results, the relationships between the occurrence, abundance and population trends of coastal birds and environmental factors can be quantitatively assessed using multivariate modelling and model selection. Spatial data sets widely available in Finland can be utilised in the calculation of several variables that are relevant to the habitat selection of Finnish coastal species. For some habitat characteristics, field work is still required, due to a lack of remotely sensed data or the low resolution of readily available data relative to the fine scale of the habitat patches in the archipelago.
While long-term data sets exist for water quality and weather, the lack of data concerning, for instance, the food resources of birds hampers more detailed studies of environmental effects on bird populations. Intensive studies of coastal bird species in different archipelago areas should be encouraged. The provision and free delivery of high-quality coastal data concerning bird populations and their habitats would greatly increase the capability of ecological modelling, as well as the management and conservation of coastal environments and communities. International initiatives that promote open spatial data infrastructures and data sharing are therefore highly valuable. To function effectively, international information networks, such as the biodiversity Clearing House Mechanism (CHM) under the CBD, need to be rooted at regional and local levels. Attention should also be paid to the processing of data for the higher levels of the information hierarchy, so that data are synthesized and developed into high-quality knowledge applicable to management and conservation.
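As a purely illustrative sketch of the kind of multivariate modelling and model selection mentioned in the abstract above (synthetic data and hypothetical variable names, not the thesis's actual analysis), candidate habitat models for island occupancy can be compared by AIC, for example with Python and statsmodels:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
island_area = rng.lognormal(0.0, 1.0, n)      # hypothetical island area, ha
elevation = rng.gamma(2.0, 1.5, n)            # hypothetical island elevation, m
shore_openness = rng.uniform(0.0, 1.0, n)     # hypothetical openness index, 0-1

# Synthetic occupied / not-occupied response driven by area and elevation only.
logit = -1.0 + 0.8 * np.log(island_area) + 0.3 * elevation
occupied = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

candidates = {
    "area": [np.log(island_area)],
    "area + elevation": [np.log(island_area), elevation],
    "area + elevation + openness": [np.log(island_area), elevation, shore_openness],
}

for name, cols in candidates.items():
    X = sm.add_constant(np.column_stack(cols))
    fit = sm.Logit(occupied, X).fit(disp=0)    # logistic occupancy model
    print(f"{name:30s}  AIC = {fit.aic:.1f}")  # lowest AIC = preferred model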
Abstract:
Meeting design is one of the most critical prerequisites of the success of facilitated meetings, but how to achieve this success is not yet fully understood. This study presents a descriptive model of the design of technology-supported meetings, based on literature findings about the key factors contributing to the success of collaborative meetings, and it links these factors to the meeting design steps by exploring how facilitators consider the factors in practice in their design process. The empirical part includes a multiple-case study conducted among 12 facilitators. The case concentrates on the GSS laboratory at LUT, which has been working on facilitation and GSS for the last fifteen years. The study also includes ‘control’ cases from two comparable institutions. The results of this study highlight both the variances and the commonalities among facilitators in how they design collaboration processes. The design thinking of facilitators at all levels of experience was found to be largely consistent, so the key design factors, as well as their roles across the design process, can be outlined. Session goals, group composition, supporting technology, motivational aspects, physical constraints, and correct design practices were found to be the key factors in design thinking. These factors are further categorized into three types, controllable, constraining, and guiding design factors, because the findings indicate that the factor type affects a factor's importance in design. Furthermore, the order in which these factors are considered in the design process is outlined.
Abstract:
Controlling the quality variables (such as basis weight, moisture, etc.) is a vital part of making top-quality paper or board. In this thesis, an advanced data assimilation tool is applied to the quality control system (QCS) of a paper or board machine. The functionality of the QCS is based on quality observations that are measured with a traversing scanner following a zigzag path. The basic idea is the following: the measured quality variable has to be separated into its machine direction (MD) and cross direction (CD) variations, because the QCS works separately in MD and CD. Traditionally this is done simply by assuming one scan of the zigzag path to be the CD profile and its mean value to be one point of the MD trend. In this thesis, a more advanced method is introduced. The fundamental idea is to use the signal's frequency components to represent the variation in both CD and MD. To get to the frequency domain, the Fourier transform is utilized. The frequency-domain representation, that is, the Fourier components, is then used as the state vector in a Kalman filter. The Kalman filter is a widely used data assimilation tool for combining noisy observations with a model. The observations here refer to the quality measurements, and the model to the Fourier frequency components. By implementing the two-dimensional Fourier transform in the Kalman filter, we get an advanced tool for separating the CD and MD components of the total variation or, more generally, for data assimilation. A piece of a paper roll is analyzed and this tool is applied to model the dataset. As a result, it is clear that the Kalman filter algorithm is able to reconstruct the main features of the dataset from a zigzag path. Although the results are based on a very short sample of a paper roll, the method seems to have great potential to be used later as part of the quality control system.
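As a rough, purely illustrative sketch of the separation principle described above (not the thesis's implementation; the mode counts, sheet dimensions and noise levels below are assumptions), a Kalman filter whose state is a set of two-dimensional Fourier coefficients can assimilate zigzag-path scanner samples one at a time, for example in Python:

import numpy as np

# State = real 2D Fourier coefficients of the sheet; each scanner sample observes
# the sheet at one (MD, CD) position on the zigzag path.
n_md, n_cd = 5, 5            # Fourier modes per direction (assumption)
L_md, L_cd = 100.0, 8.0      # modelled sheet length and width in metres (assumption)

def h_row(md, cd):
    """Observation row: cosine/sine basis evaluated at one scanner position."""
    row = []
    for k in range(n_md):
        for l in range(n_cd):
            phase = 2 * np.pi * (k * md / L_md + l * cd / L_cd)
            row.extend([np.cos(phase), np.sin(phase)])
    return np.array(row)

n = 2 * n_md * n_cd
x = np.zeros(n)              # estimated Fourier coefficients
P = np.eye(n)                # state covariance
Q = 1e-4 * np.eye(n)         # process noise: coefficients drift slowly in time
R = 0.05                     # scanner measurement noise variance (assumption)

def assimilate(x, P, md, cd, y):
    """One Kalman filter step for a single zigzag-path measurement y at (md, cd)."""
    P = P + Q                             # random-walk prediction for the coefficients
    H = h_row(md, cd)[None, :]            # 1 x n observation matrix
    S = H @ P @ H.T + R                   # innovation variance
    K = (P @ H.T) / S                     # Kalman gain
    x = x + (K * (y - H @ x)).ravel()     # state update
    P = (np.eye(n) - K @ H) @ P           # covariance update
    return x, P

# Usage: feed scanner samples (md_position, cd_position, value) in time order; the
# fitted Fourier surface can then be evaluated anywhere to read off MD and CD profiles.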
Abstract:
This thesis concentrates on developing a practical local approach methodology, based on micromechanical models, for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach have been studied in detail: the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It was found that the true mid-point algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment was made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion. Significant differences in the ductility predicted by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology.
This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
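For orientation, the Gurson-Tvergaard yield function referred to above has the standard form below (notation assumed here rather than quoted from the thesis: \sigma_{eq} the macroscopic von Mises stress, \sigma_{m} the mean stress, \bar{\sigma} the matrix flow stress, f the void volume fraction, q_1, q_2, q_3 Tvergaard's parameters), and a generalized mid-point integration of the associated flow rule evaluates the plastic flow direction at an intermediate state:

\Phi = \frac{\sigma_{eq}^{2}}{\bar{\sigma}^{2}}
     + 2 q_{1} f \cosh\!\left(\frac{3 q_{2} \sigma_{m}}{2 \bar{\sigma}}\right)
     - 1 - q_{3} f^{2} = 0 , \qquad q_{3} = q_{1}^{2} \ \text{(commonly)},

\Delta\boldsymbol{\varepsilon}^{p}
     = \Delta\lambda \, \left. \frac{\partial \Phi}{\partial \boldsymbol{\sigma}} \right|_{n+\alpha},
\qquad
(\cdot)_{n+\alpha} = (1-\alpha)\,(\cdot)_{n} + \alpha\,(\cdot)_{n+1},
\qquad 0 < \alpha \le 1 ,

where \alpha = 1 recovers the backward Euler scheme and \alpha = 0.5 the true mid-point rule discussed in the abstract.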
Abstract:
The ongoing development of digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction from spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinct areas, image quality itself and image fidelity, both dealing with similar questions: image quality being the degree of excellence of the image, and image fidelity the measure of the match between the image under study and the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. Very few works have been dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, including kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF), and sigmoid kernels. The 3D-SSIM is an extension of the traditional gray-scale SSIM measure, developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence its overall appearance. The spectral image quality model comprises three quality attributes: colorfulness, vividness and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable differences). Both the image fidelity measures and the image quality model proved effective in the respective experiments.
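As a rough illustration of one of the measure families named above (a sketch only, not the thesis's exact formulation; the image sizes and the gamma parameter are assumptions), a Gaussian RBF kernel similarity between two spectral images can be computed per pixel spectrum and averaged:

import numpy as np

def rbf_kernel_similarity(img_a, img_b, gamma=1.0):
    """Mean Gaussian RBF kernel value over corresponding pixel spectra.

    Both inputs have shape (height, width, bands); a value of 1.0 means identical images.
    """
    a = img_a.reshape(-1, img_a.shape[-1]).astype(float)
    b = img_b.reshape(-1, img_b.shape[-1]).astype(float)
    sq_dist = np.sum((a - b) ** 2, axis=1)            # squared spectral distance per pixel
    return float(np.mean(np.exp(-gamma * sq_dist)))

# Example: compare a synthetic spectral image with a noisy reproduction of itself.
rng = np.random.default_rng(0)
original = rng.random((32, 32, 61))                   # e.g. 61 bands over 400-700 nm
reproduction = original + 0.01 * rng.standard_normal(original.shape)
print(rbf_kernel_similarity(original, reproduction, gamma=0.5))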
Abstract:
This dissertation examines the measurement of social competence and loneliness among school-aged children and adolescents, the relationships between them, and their transmission from parents to their children. The data on primary-school-aged children (n=985) consist of material collected from the children themselves, their classmates, their teachers and their parents in 2000-2004 as part of the Merkitystä etsimässä research project (M. Vauras). The data include self-, peer-, teacher and parent ratings of the children's social competence, follow-up data on the children's loneliness, teacher ratings of the children's motivational orientation, academic skills assessed with standardized test batteries, and the mothers' and fathers' ratings of their own loneliness and of their perceived competence as parents. The data on lower-secondary-school adolescents (n=386) consist of follow-up data on the adolescents' loneliness, social anxiety and social phobia, collected in 2006-2007 as part of the project Sosioemotionaalinen oppiminen ja hyvinvointi yläkouluyhteisössä (P. M. Niemi). Measurability (main aim 1) was studied in particular by testing the structural consistency of the multi-informant ratings, the temporal stability of the subjective ratings, and the validity and reliability of the instruments. In addition to the interrelations between social competence and loneliness, their relationships with primary-school pupils' learning and with lower-secondary-school pupils' psychosocial well-being were examined (main aim 2). The third main aim was to investigate the possible transmission of loneliness from parents to children. As part of the first main aim, the Monitahoarviointi sosiaalisesta kompetenssista (MASK) instrument for the multi-informant assessment of social competence was developed (Article 1). Based on the results of confirmatory factor analysis, a four-factor structure (prosociality, comprising cooperation skills and empathy, and antisociality, comprising impulsivity and disruptiveness) fitted the ratings made by the children themselves, their classmates, their teachers and their parents. The correlations between the ratings of the different informants were statistically significant, although relatively low, i.e. the different informants' views of a child's social competence diverge from one another. Using several informant groups is therefore important for studying the phenomenon as a whole. The second measurement-related aim was to validate the social and emotional loneliness scale of Hoza, Bukowski and Beery (2000) for Finnish children (Article 3) and adolescents (Article 4), and to examine whether children's and adolescents' ratings of their own loneliness are stable over time. Among primary-school children, loneliness, especially social loneliness, proved relatively stable, and it grew even stronger in the data on lower-secondary-school adolescents. Noteworthy in both the primary- and lower-secondary-school data was the strong emotional loneliness experienced by boys. For both instruments, validity and reliability were found to be acceptable, and they can be recommended as assessment methods for children's and adolescents' social competence and loneliness. The second main aim was to use structural equation modelling to examine the relationships of social competence and loneliness both with each other (Articles 2 and 3) and with children's learning (Article 2) and adolescents' psychosocial well-being (Article 4).
Among primary-school children, social competence was related not only to loneliness but also to the teachers' ratings of their pupils' motivational orientation and to academic skills assessed with standardized tests. Among lower-secondary-school adolescents, loneliness was related to social anxiety and social phobia. Social competence can thus be regarded as a factor that strengthens pupils' well-being and learning, while loneliness weakens adolescents' psychosocial well-being. The final main aim was to model the possible transmission of loneliness. In the first phase, transmission was examined within the whole family, without distinguishing between the genders of the parents or the children (Article 2). In this structural equation model, the loneliness experienced by the parents predicted a weaker sense of competence in parenting, which in turn predicted the child's weaker peer-rated social competence at school and, through it, a stronger experience of loneliness. In the second model, the data of mothers and fathers and of girls and boys were separated so that transmission could be examined in mother-daughter, mother-son, father-daughter and father-son dyads. Based on the results of the structural equation modelling, the loneliness experienced by both mothers and fathers predicted
Abstract:
Credit risk assessment is an integral part of banking. Credit risk means that the return will not materialise if the customer fails to fulfil its obligations. A key component of banking is therefore setting acceptance criteria for granting loans. The theoretical part of the study focuses on the key components, as described in the literature, of banks' credit assessment methods when extending credit to large corporations. The main component is the Basel II Accord, which sets regulatory requirements for banks' credit risk assessment methods. The empirical part comprises, as its primary source, an analysis of major Nordic banks' annual reports and risk management reports. As a secondary source, complementary interviews were carried out with senior credit risk assessment personnel. The findings indicate that all major Nordic banks use a combination of quantitative and qualitative information in their credit risk assessment models when extending credit to large corporations. The relative input of qualitative information depends on the selected approach to the credit rating, i.e. point-in-time or through-the-cycle.
Abstract:
Endocrine-disrupting substances are synthetic or natural substances that disturb the hormone systems of organisms and contribute to disturbed sexual development and sterility. Such substances are entering the aquatic environment at an increasing rate through residues of pharmaceuticals, pesticides and industrial products. Since there are many similarities between the hormone systems of humans and other vertebrates, fish, for example, can be used as model systems for investigating this problem. Previous studies have found hormonal disturbances, including feminisation and masculinisation, in fish exposed to wastewater from municipal treatment plants or from the paper industry. The aim of this thesis was to investigate whether treated wastewater from municipal treatment plants along the Finnish coast contains endocrine-disrupting substances in quantities large enough to cause endocrine-disrupting effects in fish. Another aim was to develop cellular test systems based on fish cells, since there is currently a great need for reliable and cost-effective cell-based tests to facilitate the risk assessment of endocrine-disrupting substances. The three-spined stickleback was used as a test system, as it possesses several useful biomarkers for measuring the effects of endocrine-disrupting substances. The results indicate that the problems of endocrine-disrupting effects on fish are not as widespread in Finland as in many other European countries. This is probably because Finnish treatment plants use effective purification techniques that reduce the amount of endocrine-disrupting substances, or because the dilution of the effluents in the receiving waters is greater than in many other countries. However, the problems cannot be entirely ruled out, since some feminising (estrogenic) effects were observed in fish in the studied receiving waters outside municipal treatment plants. In controlled laboratory experiments in which three-spined sticklebacks were exposed to municipal wastewater, effects indicating the presence of estrogens in the wastewater were likewise measured. The cell-based test systems were able to predict hormonal effects in whole fish and can therefore be very useful in further studies of the mechanisms of action of endocrine-disrupting substances and in preliminary toxicity assessments.
Abstract:
Previous studies on pencil grip have typically dealt with the developmental aspects in young children, while handwriting research is mainly concerned with speed and legibility. Studies linking these areas are few. Evaluation of the existing pencil grip studies is hampered by methodological inconsistencies. The operational definitions of pencil grip are rational but tend to be oversimplified, while detailed descriptors tend to be impractical due to their multiplicity. The present study introduces a descriptive two-dimensional model for the categorisation of pencil grip suitable for research applications in a classroom setting. The model is used in four empirical studies of children during the first six years of writing instruction. Study 1 describes the pencil grips observed in a large group of pupils in Finland (n = 504). The results indicate that in Finland the majority of grips resemble the traditional dynamic tripod grip. Significant gender-related differences in pencil grip were observed. Study 2 is a longitudinal exploration of grip stability vs. change (n = 117). Both expected and unexpected changes were observed in about 25 per cent of the children's grips over four years. A new finding emerged using the present model for categorisation: whereas pencil grips would change, either in terms of ease of grip manipulation or grip configuration, no instances were found where a grip had changed concurrently on both dimensions. Study 3 is a cross-cultural comparison of grips observed in Finland and the USA (n = 793). The distribution of the pencil grips observed in the American pupils was significantly different from that found in Finland. The cross-cultural disparity is most likely related to differences in the onset of writing instruction. The differences between the boys' and girls' grips in the American group were non-significant. An implication of Studies 2 and 3 is that the initial pencil grip is of foremost importance, since pencil grips are largely stable over time. Study 4 connects the pencil grips to an assessment of the mechanics of writing (n = 61). It seems that certain previously not recommended pencil grips might nevertheless be included among those accepted, since they did not appear to hamper either fluency or legibility.
Abstract:
Life cycle costing (LCC) practices are spreading from the military and construction sectors to a wider range of industries. Suppliers as well as customers are demanding comprehensive cost knowledge that includes all relevant cost elements through the life cycle of products. The problem of total cost visibility is being acknowledged, and the performance of suppliers is evaluated not just by the low acquisition costs of their products, but by the total value provided through the lifetime of their offerings. The main purpose of this thesis is to provide a better understanding of product cost structure to the case company. Moreover, a comprehensive theoretical overview serves as a guideline or methodology for the further LCC process. The research includes a constructive analysis of LCC-related concepts and features, as well as an overview of life cycle support services in the manufacturing industry. The case study aims to review the existing LCC practices within the case company and provide suggestions for improvements. It includes the identification of the most relevant life cycle cost elements, and the development of a cost breakdown structure and a generic cost model for data collection. Certain cost-effectiveness suggestions are provided as well. This research should support decision-making processes, the assessment of the economic viability of products, financial planning, sales, and other processes within the case company.
Abstract:
After decades of mergers and acquisitions and successive technology trends such as CRM, ERP and DW, the data in enterprise systems is scattered and inconsistent. Global organizations face the challenge of addressing local uses of shared business entities, such as customer and material, while at the same time maintaining a consistent, unique, and consolidated view of financial indicators. In addition, current enterprise systems do not accommodate the pace of organizational change, and immense efforts are required to maintain data. When it comes to systems integration, ERPs are considered “closed” and expensive. Data structures are complex, and the “out-of-the-box” integration options offered are not based on industry standards. Therefore, expensive and time-consuming projects are undertaken in order to have the required data flowing according to the needs of business processes. Master Data Management (MDM) emerges as a discipline focused on ensuring long-term data consistency. Presented as a technology-enabled business discipline, it emphasizes business process and governance to model and maintain the data related to key business entities. There are immense technical and organizational challenges in accomplishing the “single version of the truth” MDM mantra. Adding one central repository of master data may prove unfeasible in some scenarios; thus an incremental approach is recommended, starting from the areas most critically affected by data issues. This research aims at understanding the current literature on MDM and contrasting it with views from professionals. The data collected from interviews revealed details of the complexities of data structures and data management practices in global organizations, reinforcing the call for more in-depth research on the organizational aspects of MDM. The most difficult piece of master data to manage is the “local” part: the attributes related to the sourcing and storing of materials in one particular warehouse in the Netherlands, or a complex set of pricing rules for a subsidiary of a customer in Brazil. From a practical perspective, this research evaluates one MDM solution under development at a Finnish IT solution provider. By applying an existing assessment method, the research attempts to provide the company with one possible tool to evaluate its product from a vendor-agnostic perspective.
Abstract:
To describe the shift of purchasing from an administrative to a strategic function, academics have put forward maturity models that help practitioners compare their purchasing activities to those of industry top performers and to best practices. However, none of the models aims to assess purchasing maturity from the after-sales point of view, even though after-sales activities are acknowledged as a relevant source of revenue, profit and competitive advantage in most manufacturing firms. The maturity of purchasing and supply management practices has a large impact on the overall performance of the spare parts supply chain and ultimately on value creation and relationship building for the end customer. The research was done as a case study for a European after-sales organization which is part of a globally operating industrial firm specialized in heavy machinery. The study mapped the current state of the purchasing practices in the case organization and also identified the relevant areas for future development. The study was based on the purchasing maturity model developed by Schiele (2007) and also investigated how applicable the maturity model is in the spare parts supply chain context. Data for the assessment were gathered through five expert interviews within the case organization and with other parties involved in the company's spare parts supply chain. An inventory management dimension was added to the original maturity model in order to better capture the important areas in a spare parts supply chain. The five added questions were derived from the spare parts management literature and verified as relevant by the case organization's personnel. The results indicate that the largest needs for development in the case organization are: better collaboration between the sourcing and operative procurement functions, use of installed-base information in spare parts management, development of a training plan for new buyers, assessment of aligned KPIs between the supply chain parties, and better definition of the role of after-sales sourcing. The purchasing maturity model used in this research worked well in the H&R Leading, Controlling and Inventory Management dimensions. The assessment was more difficult to conduct in the Supplier-related processes, Process integration and Organizational structure dimensions, mainly because the assessment in these sections would in part require a more company-wide assessment. The results also indicate that the purchasing maturity model developed by Schiele (2007) captures the relevant areas of the spare parts supply as well.