13 results for Matrix analytic methods,
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
This thesis describes an application developed for solving Quasi Birth Death processes. The program is so far unique of its kind; it can be used to solve a range of problems and is needed in the analysis of communication systems. A description and definition of the application are given. A short description of another application for solving Quasi Birth Death process problems is also given.
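The abstract does not state the model itself; as textbook background (standard matrix-analytic notation, not taken from the thesis), a level-independent Quasi Birth Death process has a block-tridiagonal generator, and its stationary distribution is matrix-geometric:

```latex
Q =
\begin{pmatrix}
B_0 & A_0 &        &        \\
A_2 & A_1 & A_0    &        \\
    & A_2 & A_1    & A_0    \\
    &     & \ddots & \ddots
\end{pmatrix},
\qquad
\pi_{n+1} = \pi_n R \quad (n \ge 1),
```

where the rate matrix R is the minimal non-negative solution of the quadratic matrix equation A_0 + R A_1 + R^2 A_2 = 0; computing R numerically is the core task in such QBD solvers.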
Abstract:
This Master's thesis further develops the bridge crane calculation program of the company KCI Konecranes. The most important development targets of the program were surveyed with a user questionnaire, and from these the most requested topics, and those best suited to the structural-mechanics scope of the thesis, were selected. The two topics chosen for the work are working out the strength calculation of the two-part web of a box girder profile, and designing the finite element model of the eight-wheel end carriage of a bridge crane. The thesis covers the theory related to the development targets, but the actual programming is left outside the scope of the work. In a box girder profile with a two-part web, the upper part of the web under the trolley rail is made thicker so that the web withstands the local stress caused by the trolley wheel load, the so-called crushing stress. Determining the crushing stress in the web plates is the most important task in the strength calculation of the two-part web. The most suitable methods for determining the membrane stress caused by crushing and the stress concentrations in different constructions were sought from the literature and standards. The membrane stress can be determined reliably using either the 45-degree rule or the method given in the standard, and the magnitude of the stress concentrations is obtained by multiplying the membrane stress by stress concentration factors. The validity of the methods was verified by building dozens of finite element models of the web with different dimensions and boundary conditions, and by comparing the results of the finite element models with hand calculations. The hand-calculated stresses were made to match the finite element results closely. The buckling and fatigue calculation of the two-part web was studied on a preliminary level. Eight-wheel end carriages are used in large bridge cranes to reduce wheel loads and the crushing stresses of the runway. Finite element models were designed for both constructions used for the eight-wheel end carriage of a bridge crane: the articulated and the rigid-frame model. Existing models were utilized in building the finite element models, which speeds up adding them to the program code and ensures compatibility with the other calculation modules. The boundary conditions of the vibration analysis of the finite element models were examined. Based on the study, the boundary conditions of the vibration analysis need no changes, but the boundary conditions of the static analysis still require further research.
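The 45-degree rule mentioned above is not written out in the abstract; in its common form (an assumption here, the thesis may apply a standard-specific variant) the wheel load F is taken to disperse at 45 degrees through the rail and top flange onto an effective web length, giving the membrane crushing stress:

```latex
\ell_{\mathrm{eff}} = b + 2\,(h_r + t_f), \qquad
\sigma_m = \frac{F}{\ell_{\mathrm{eff}}\, t_w},
```

where b is the wheel contact width, h_r the rail height, t_f the flange thickness and t_w the web thickness; peak stresses then follow as sigma_peak = K_t * sigma_m using the stress concentration factors mentioned in the abstract.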
Abstract:
Teaching the measurement of blood pressure to both nursing and public health nursing students. The purpose of this two-phase study was to develop the teaching of blood pressure measurement within the nursing degree programmes of the Universities of Applied Sciences. The first, survey phase described what and how blood pressure measurement was taught within nursing degree programmes. The second, intervention phase (2004-2005) evaluated first academic year nursing and public health nursing students' knowledge and skills in blood pressure measurement, and additionally the effect of the Taitoviikko teaching method on the experimental group students' level of blood pressure measurement knowledge and skills. A further objective was to construct models for an instrument (RRmittTest) to evaluate nursing students' measurement of blood pressure (2003-2009). The research data for the survey phase were collected from teachers (total sampling, N=107, response rate 77%) using a specially developed RRmittopetus questionnaire. Quasi-experimental study data for the RRmittTest instrument were collected from students (purposive sampling, experimental group n=29, control group n=44). The RRmittTest consisted of a test of knowledge (Tietotesti) and simulation-based tests of skills (TaitoSimkäsi and Taitovideo). Measurements were made immediately after the teaching and again during clinical practice. Statistical methods were used to analyse the results, and responses to open-ended questions were organised and classified. Due to the small amount of material and the results of distribution tests of the variables, mainly non-parametric analytic methods were used. The knowledge and skills teaching, identical for the experimental and control groups, was based on the results of the national survey phase (RRmittopetus questionnaire). The experimental group teaching additionally included the supervised Taitoviikko teaching method. During Taitoviikko, students studied blood pressure measurement at a municipal hospital in a real nursing environment, guided by a teacher and a clinical nursing professional. In order to evaluate both learning and teaching, the processes and components of blood pressure measurement were defined as follows: the reliability of measurement instruments, activities preceding blood pressure measurement, technical execution of the measurement, recording, lifestyle guidance and measurement at home (self-monitoring). According to the survey study, blood pressure measurement is most often taught at Universities of Applied Sciences separately, as knowledge (teaching of theory, 2 hours) and skills (classroom practice, 4 hours). The teaching was implemented largely in a classroom and was based mainly on a textbook. In the intervention phase the students had good knowledge of blood pressure measurement. However, their blood pressure measurement skills were deficient, and those of the control group students in particular were highly deficient. After clinical practice, the experimental and control group students' knowledge of recording blood pressure measurements improved, while the experimental group's knowledge of lifestyle guidance declined. Skills did not improve in any of the components analysed. The control group's skills, on the whole, declined statistically significantly; in the experimental group there was a significant decline in only one measured component. The results describe the learning results of first academic year students, and no parallel conclusions should be drawn about the learning results of graduating students.
The results support the use and further development of the Taitoviikko teaching method. The RRmittTest developed for the study should be assessed critically and its results viewed with caution; the evaluation instrument needs to be developed further and retested.
Abstract:
The objective of the thesis was to explore the nature and characteristics of customer-related internal communication in a global industrial matrix organization during a specific customer relationship, and how internal communication could be improved. The theoretical part of the study reviews the concepts of intra-organizational information and knowledge sharing, the influence of internal communication on customer relationships and its problems, and the suggestions made in the literature for improving internal communication. The empirical part of the study was conducted with Content Analysis and Social Network Analysis as research methods. The data were collected by interviews and a questionnaire. Internal communication was observed first generally within the organization from the point of view of a certain business and, secondly, during a specific customer relationship at the personal and departmental levels. The results of the study describe the nature and characteristics of internal communication in the organization and yield 13 suggestions for improving it. Although the study was done in one specific organization, it also offers insights for other organizations and managers seeking to improve their internal communication.
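The abstract names Social Network Analysis without detailing the measures used; purely as an illustration of the kind of computation involved, a minimal sketch assuming the networkx library and an invented toy communication graph (not thesis data):

```python
# Minimal Social Network Analysis sketch, assuming networkx;
# the toy "who communicates with whom" graph below is illustrative only.
import networkx as nx

G = nx.Graph([
    ("sales", "engineering"),
    ("sales", "service"),
    ("engineering", "production"),
    ("service", "production"),
    ("sales", "production"),
])

# Degree centrality: how directly connected each unit is.
print(nx.degree_centrality(G))
# Betweenness centrality: which units broker information flows.
print(nx.betweenness_centrality(G))
```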
Abstract:
The general striving to bring down the number of municipal landfills and to increase the reuse and recycling of waste-derived materials across the EU feeds the debate on the feasibility and rationality of waste management systems. A substantial decrease in the volume and mass of landfill-disposed waste flows can be achieved by directing suitable waste fractions to energy recovery. Global fossil energy supplies are becoming ever more valuable and expensive, and efforts are being made to save fossil fuels. Waste-derived fuels offer one potential partial solution to two different problems. First, waste that cannot feasibly be re-used or recycled is utilized in the energy conversion process according to the EU's Waste Hierarchy. Second, fossil fuels can be saved for purposes other than energy, mainly as transport fuels. This thesis presents the principles of assessing the most sustainable system solution for an integrated municipal waste management and energy system. The assessment process includes:
· formation of a SISMan (Simple Integrated System Management) model of an integrated system, including mass, energy and financial flows, and
· formation of a MEFLO (Mass, Energy, Financial, Legislational, Other decision-support data) decision matrix according to the selected decision criteria, including essential and optional decision criteria.
The methods are described and theoretical examples of their utilization are presented in the thesis. The assessment process involves the selection of different system alternatives (process alternatives for the treatment of different waste fractions) and comparison between the alternatives. The first of the two novelty values of the presented methods is the perspective selected for the formation of the SISMan model. Normally waste management and energy systems are operated separately according to the targets and principles set for each system. In this thesis the waste management and energy supply systems are considered as one larger integrated system with the primary target of serving the customers, i.e. citizens, as efficiently as possible in the spirit of sustainable development, including the following requirements:
· reasonable overall costs, including waste management costs and energy costs;
· minimum environmental burdens caused by the integrated waste management and energy system, taking the requirement above into account; and
· social acceptance of the selected waste treatment and energy production methods.
The integrated waste management and energy system is described by forming a SISMan model comprising the three flows of the system: energy, mass and financial flows. By defining these three types of flows for an integrated system, the factor results needed in the decision-making process for selecting waste treatment processes for different waste fractions can be calculated. The model and its results form a transparent description of the integrated system under discussion. The MEFLO decision matrix is formed from the results of the SISMan model, combined with additional data including e.g. environmental restrictions and regional aspects. System alternatives which do not meet the requirements set by legislation can be deleted from the comparisons before any closer numerical consideration. The second novelty value of this thesis is the three-level ranking method for combining the factor results of the MEFLO decision matrix.
As a result of the MEFLO decision matrix, a transparent ranking of the different system alternatives, including the selection of treatment processes for different waste fractions, is achieved. SISMan and MEFLO are meant to be utilized in municipal decision-making processes concerning waste management and energy supply as simple, transparent and easy-to-understand tools. The methods can be utilized in the assessment of existing systems, and particularly in the planning of future regional integrated systems. The principles of SISMan and MEFLO can also be utilized in other environments where synergies from integrating two (or more) systems can be obtained. The SISMan flow model and the MEFLO decision matrix can be formed with or without any applicable commercial or free-of-charge tool/software. SISMan and MEFLO are not bound to any libraries or databases of process information, such as the emission data libraries utilized in life cycle assessments.
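The ranking itself is given in the thesis as a three-level method not reproduced in the abstract; purely to make the decision-matrix idea concrete (legislative screening first, then ranking on the remaining criteria), a hedged sketch in which all alternative names, scores and weights are invented and a simple weighted ranking stands in for the actual three-level method:

```python
# Hedged sketch of a MEFLO-style decision matrix: legislative screening
# first, then a weighted ranking. All names, scores and weights below
# are invented for illustration; the thesis uses a three-level ranking.
alternatives = {
    # criterion scores: (cost, environmental burden, social acceptance)
    "incineration":  {"legal": True,  "scores": (0.6, 0.5, 0.4)},
    "landfill":      {"legal": False, "scores": (0.9, 0.1, 0.2)},
    "co-combustion": {"legal": True,  "scores": (0.7, 0.6, 0.6)},
}
weights = (0.4, 0.4, 0.2)  # must sum to 1; chosen arbitrarily here

# Step 1: delete alternatives that do not meet legislative requirements.
feasible = {k: v for k, v in alternatives.items() if v["legal"]}

# Step 2: rank the remaining alternatives by weighted score.
ranking = sorted(
    feasible,
    key=lambda k: sum(w * s for w, s in zip(weights, feasible[k]["scores"])),
    reverse=True,
)
print(ranking)  # -> ['co-combustion', 'incineration']
```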
Abstract:
Recent years have produced great advances in instrumentation technology. The amount of available data has been increasing due to the simplicity, speed and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without proper analysis, which has been one of the reasons for the growing success of multivariate handling of such data. Industrial data is commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches in the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures, or partial least squares), but there are also other methods that should be considered. The more advanced methods include multi-block modeling and nonlinear modeling. In this thesis it is shown that the results of data analysis vary according to the modeling approach used, thus making the selection of the modeling approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should differ from the case where the purpose of modeling is mostly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply. In this way the methods and the results can be compared and an approach selected that is suitable for the intended purpose. Differences between data analysis methods are compared in this thesis with data from different fields of industry. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries, and the results are compared to those from PLS and priority PLS. The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry; the response has a nonlinear relation to the descriptor matrix, and the results are compared between linear modeling, polynomial PLS and nonlinear modeling using nonlinear score vectors.
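As a pointer to what the baseline PCA and PLS approaches look like in code (a sketch assuming scikit-learn; the random matrices merely stand in for undesigned industrial process data, not the thesis datasets):

```python
# Minimal PCA / PLS sketch with scikit-learn; the random matrices stand
# in for undesigned industrial process data (samples x variables).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                               # process variables
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=100)   # response

# PCA: unsupervised view of the variable structure.
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)

# PLS: latent variables chosen to predict the response.
pls = PLSRegression(n_components=2).fit(X, y)
print(pls.score(X, y))                                       # R^2 on training data
```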
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results; Part II consists of the five original research articles that are the main contribution of this thesis.
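The abstract describes RankRLS only at a high level; a minimal numpy sketch of the linear pairwise least-squares idea follows (the all-pairs loss written via the complete-graph Laplacian is a standard matrix-algebra shortcut; the data are invented, and the thesis' kernelized and cross-validation algorithms are not reproduced here):

```python
# Sketch of linear pairwise regularized least-squares ranking:
# minimize sum_{i,j} ((y_i - y_j) - w.(x_i - x_j))^2 + lam * ||w||^2.
# The all-pairs sum equals 2 * r' L r with r = y - Xw and the
# complete-graph Laplacian L = n*I - 11', giving a closed form for w.
import numpy as np

def rank_rls(X, y, lam=1.0):
    n, d = X.shape
    L = n * np.eye(n) - np.ones((n, n))      # complete-graph Laplacian
    A = X.T @ L @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ (L @ y))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 0.0]) + rng.normal(scale=0.1, size=50)
w = rank_rls(X, y, lam=0.1)
print(w)  # recovers the ordering-relevant direction up to scale
```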
Abstract:
This book is dedicated to celebrating the 60th birthday of Professor Rainer Huopalahti. Professor Rainer "Repe" Huopalahti has had, and in fact is still enjoying, a distinguished career in the analysis of food and food-related flavor compounds. One will find it hard to make any progress in this particular field without a valid and innovative sample handling technique, and this is a field in which Professor Huopalahti has made great contributions. The title and the front cover of this book honor Professor Huopalahti's early steps in science. His PhD thesis, published in 1985, is entitled "Composition and content of aroma compounds in the dill herb, Anethum graveolens L., affected by different factors". At that time, the thesis introduced new technology being applied to sample handling and analysis of the flavoring compounds of dill. Sample handling is an essential task in just about every analysis. If one is working with minor compounds in a sample or trying to detect trace levels of the analytes, one of the aims of sample handling may be to increase the sensitivity of the analytical method. On the other hand, if one is working with a challenging matrix, such as the kind found in biological samples, one of the aims is to increase the selectivity. However, quite often the aim is to increase both the selectivity and the sensitivity. This book provides good and representative examples of the necessity of valid sample handling and of the role of sample handling in the analytical method. The contributors to the book are leading Finnish scientists in the field of organic instrumental analytical chemistry. Some of them are also Repe's personal friends and former students from the University of Turku, Department of Biochemistry and Food Chemistry. Importantly, the authors all know Repe in one way or another and are well aware of his achievements in the field of analytical chemistry. The editorial team had a great time during the planning phase and during the "hard work editorial phase" of the book. For example, we came up with many ideas on how to publish the book. After many long discussions, we decided to have a limited edition as an "old school hard cover book" – and to acknowledge more modern ways of disseminating knowledge by publishing an internet version of the book on the webpages of the University of Turku. Downloading the book from the webpage for personal use is free of charge. We believe and hope that the book will be read with great interest by scientists working in the fascinating field of organic instrumental analytical chemistry. We decided to publish our book in English for two main reasons. First, we believe that in the near future more and more teaching in Finnish universities will be delivered in English; to facilitate this process and encourage students to develop good language skills, the book was published in English. Secondly, we believe that the book will also interest scientists outside Finland – particularly in the other member states of the European Union. The editorial team thanks all the authors for their willingness to contribute to this book – and to adhere to the very strict schedule. We also want to thank the various individuals and enterprises who financially supported the book project. Without that support, it would not have been possible to publish the hardcover book.
Abstract:
Bacteria can exist as planktonic cells, the lifestyle in which single cells live in suspension, and as biofilms, which are surface-attached bacterial communities embedded in a self-produced matrix. Most antibiotics and methods for antimicrobial work have been developed for planktonic bacteria, yet the majority of bacteria in natural habitats live as biofilms. Biofilms develop high resistance towards conventional antibacterial treatments dauntingly fast, and there is thus a great need to meet the demands of effective anti-biofilm therapy. This thesis project set out to fill the void in anti-biofilm screening methods by developing a platform of assays that evaluate the effect that screened compounds have on the total biomass, the viability and the extracellular polysaccharide (EPS) layer of biofilms. Additionally, a new method for studying biofilms and their interactions with compounds in a continuous-flow system was developed using capillary electrochromatography (CEC). The screening platform was put to use in a screening campaign with a small library of cinchona alkaloids. The assays were optimized to be statistically robust enough for screening. The first assay, based on crystal violet staining, measures total biofilm biomass; it was automated using a liquid handling workstation to decrease the manual workload and signal variation. The second assay, based on resazurin staining, measures the viability of the biofilm; it was thoroughly optimized for the strain used, but was then a very simple and fast method for primary screening. The fluorescent resazurin probe is not toxic to the biofilms. In fact, it was also shown in this project that staining the biofilms with resazurin prior to staining with crystal violet had no effect on the latter, so the two can be used in sequence on the same screening plate. This sequential staining was a major improvement in the use of reagents and consumables and also shortened the working time. As a third assay, a wheat germ agglutinin based assay was added to the platform to evaluate the effect a compound has on the EPS layer. Using this assay it was found that even if compounds have a clear effect on both biomass and viability, the EPS layer can be left untouched or even increased. This clearly demonstrates the importance of using several assays in order to find "true hits" in a screening setting. In the pilot screening for antimicrobial and anti-biofilm effects using the cinchona alkaloid library, one compound was found to have an antimicrobial effect against planktonic bacteria and to prevent biofilm formation at low micromolar concentrations; to eradicate existing biofilms, a higher concentration was needed. It was also shown that the chemical space occupied by the active compound differs slightly from that of the rest of the cinchona alkaloids, as well as from the rest of the compounds used for validatory screening during the optimization of the separate assays.
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods are needed for selecting the best process alternative as well as the optimal operating conditions. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR-SR). The method is based on the equilibrium theory of chromatography, with the assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows predicting the feasible range of operating parameters that lead to the desired product purities. It can be applied to calculate first estimates of optimal operating conditions, to analyse process robustness, and to evaluate different process alternatives at an early stage. The design method is utilized to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and on physical solvent removal constraints such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable to high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects.
The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach performs better the higher the column efficiency and the lower the purity constraints.
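For reference, the competitive Langmuir isotherm named above has, in one common notation (the thesis' exact symbols are not reproduced in the abstract), the form:

```latex
q_i = \frac{a_i\, c_i}{1 + b_1 c_1 + b_2 c_2}, \qquad i = 1, 2,
```

where c_i and q_i are the fluid-phase and adsorbed-phase concentrations of component i, and a_i, b_i are the isotherm parameters; the analytical design equations of the thesis are derived for binary mixtures obeying this model.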
Abstract:
In today's complex and volatile business environment, companies that can turn the operational data they produce into data warehouses can gain a significant competitive advantage. Using predictive analytics to anticipate future trends enables companies to identify the key factors that let them stand out from their competitors, and using predictive analytics as part of the decision-making process enables more agile, real-time decision making. The purpose of this Master's thesis is to assemble a theoretical framework for analytics modelling from the perspective of a business end user, and to apply this modelling process to the case company of the thesis. The theoretical model was used in modelling customer relationships and in identifying leading indicators for sales forecasting. The work was carried out for a Finnish wholesaler of industrial filters with business in Finland, Russia and the Baltics. The study is a quantitative case study in which the most important data collection method was the transaction data of the case company, obtained from the company's ERP system.
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of the applicability of these methods is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, there exists an almighty lie detection method that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: what is the applicability of veracity assessment methods that are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. the probability of detecting deception successfully
o Ease of Use, i.e. how easy it is to apply the method correctly
o Time Required to apply the method reliably
o No Need for Special Equipment
o Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: what kinds of interviewing and interrogation techniques exist and how could they be used in the intelligence interview context; what kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof; and what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A Multi Criteria Analysis utilizing the Analytic Hierarchy Process was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to get first-hand experience of the applicability of the different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability: they were assessed to be the easiest and fastest to apply and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that even the most applicable methods are not entirely trouble-free. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired.
Since most current lie detection studies are concentrated around a scenario where roughly half of the assessed people are totally truthful and the other half are liars who present a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources. This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones still under development.
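The Analytic Hierarchy Process step is described above only by name; as an illustration, a small sketch of the standard AHP priority computation (principal eigenvector of a pairwise comparison matrix, plus the consistency ratio; the 3x3 matrix below is invented, not the study's comparison data):

```python
# Sketch of the core AHP computation used in Multi Criteria Analysis:
# priorities = principal eigenvector of a pairwise comparison matrix.
# The example matrix (Saaty's 1-9 scale) is invented for illustration.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],   # criterion 1 compared against criteria 1..3
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)           # index of principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)  # consistency index
ri = 0.58                             # Saaty's random index for n = 3
print(w, "CR =", ci / ri)             # CR < 0.1 is conventionally acceptable
```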
Abstract:
The aim of this Master's thesis is to find a method for classifying spare part criticality in the case company. Several approaches exist for the criticality classification of spare parts. The practical problem in this thesis is the lack of a generic analysis method for classifying the spare parts of the case company's proprietary equipment. In order to find a classification method, a literature review of various analysis methods was required, and the requirements of the case company were identified by consulting professionals in the company. The literature review shows that the analytic hierarchy process (AHP) combined with decision tree models is a common method for classifying spare parts in the academic literature. Most of the literature discusses spare part criticality from a stock-holding perspective, which is a relevant perspective also for a customer-orientated original equipment manufacturer (OEM) such as the case company. A decision tree model is developed for classifying spare parts. The decision tree classifies spare parts into five criticality classes according to five criteria: safety risk, availability risk, functional criticality, predictability of failure and probability of failure. The criticality classes describe the level of criticality from non-critical to highly critical. The method is verified by classifying the spare parts of a full deposit stripping machine. The classification can be utilized as a generic model for recognizing critical spare parts of other similar equipment, according to which spare part recommendations can be created. The purchase price of an item and equipment criticality were found to have no effect on spare part criticality in this context. The decision tree is recognized as the most suitable method for classifying spare part criticality in the company.
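The abstract names the five criteria and five classes without giving the tree itself; a hedged sketch of what such a rule-based classification can look like follows (the rule order, cut-offs and class assignments below are invented for illustration, not the company's model):

```python
# Hedged sketch of a spare-part criticality decision tree over the five
# criteria named in the abstract. Rule order and cut-offs are invented.
from dataclasses import dataclass

@dataclass
class SparePart:
    safety_risk: bool             # failure endangers people
    availability_risk: bool       # long or uncertain replenishment lead time
    functional_criticality: bool  # failure stops the equipment
    failure_predictable: bool     # wear can be monitored or anticipated
    failure_probable: bool        # failure is likely within service life

def criticality_class(p: SparePart) -> int:
    """Return criticality class 5 (highly critical) .. 1 (non-critical)."""
    if p.safety_risk:
        return 5
    if p.functional_criticality and p.availability_risk:
        return 4
    if p.functional_criticality:
        return 3 if p.failure_probable else 2
    return 2 if p.failure_probable and not p.failure_predictable else 1

part = SparePart(False, True, True, True, False)
print(criticality_class(part))  # -> 4
```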