14 results for common method variance
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
This thesis surveys the artificial drying methods currently in use for chopped firewood and wood chips. In addition, the energy consumption and costs of the methods are assessed, and the factors to be considered in dryer design are reviewed. As part of the work, an Excel spreadsheet was created with which the profitability of heat entrepreneurship can be evaluated, taking the whole production chain into account. Finally, the use of three different types of firewood dryers is examined, and their effects on the finances of a firewood entrepreneur are assessed with the spreadsheet. The most common artificial drying method for wood fuels is cold-air drying. Owing to its weather dependence and often uneven drying quality, it is suitable only for small-scale, part-time fuel production. Supplementary heating improves the drying capacity of the air, so drying is faster, final moisture contents are lower, and the annual operating time is longer. The choice of heating solution depends on the desired annual operating time of the dryer and on production volumes. A dryer using high temperatures of 70 - 90 °C is best suited to professional, year-round firewood production. In a high-temperature dryer it is important to ensure sufficient insulation and controlled ventilation. With large firewood production volumes, transport costs become more significant and the need for marketing grows at the same time. An efficient drying method can be exploited in advertising.
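The spreadsheet described above is not reproduced here, but the kind of calculation such a tool performs can be illustrated. The hedged Python sketch below estimates the energy and cost of removing water from a firewood batch; all parameter names and values are illustrative assumptions, not the thesis's actual model.

```python
# Illustrative drying-cost estimate (hypothetical, not the thesis's spreadsheet).

LATENT_HEAT_KWH_PER_KG = 0.68  # approx. energy to evaporate 1 kg of water

def water_removed_kg(mass_wet_kg: float, mc_initial: float, mc_final: float) -> float:
    """Water removed when drying from an initial to a final moisture content (wet basis)."""
    dry_matter = mass_wet_kg * (1.0 - mc_initial)
    final_mass = dry_matter / (1.0 - mc_final)
    return mass_wet_kg - final_mass

def drying_cost_eur(mass_wet_kg, mc_initial, mc_final,
                    dryer_efficiency, energy_price_eur_per_kwh):
    # Energy demand grows as dryer efficiency falls.
    energy_kwh = (water_removed_kg(mass_wet_kg, mc_initial, mc_final)
                  * LATENT_HEAT_KWH_PER_KG / dryer_efficiency)
    return energy_kwh * energy_price_eur_per_kwh

# Example: 1000 kg of firewood dried from 45 % to 20 % moisture content.
print(drying_cost_eur(1000, 0.45, 0.20,
                      dryer_efficiency=0.5, energy_price_eur_per_kwh=0.05))
```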
Abstract:
The EU is launching an environmental technology verification scheme, which can provide users and investors with independent information on the functionality and performance of innovative technologies and improve their market position. Verification means a process or mechanism carried out by a third party through which the operation and performance of a product can be substantiated. National environmental technology verification schemes are in use in the United States and Canada, among others; in Europe, the scheme will probably be introduced in 2011–2012. In Finland, about 300 permit and notification decisions concerning contaminated-land remediation projects are currently made each year. In about 85 percent of the sites, excavation and replacement of soil is used as the remediation method. Excavation and replacement will remain the most common remediation method at least for the time being, but the use of in situ methods, among others, is expected to increase. The aim of this Master's thesis was to assess whether verification can be used to promote high-quality treatment of contaminated soil and whether verification can speed up the market entry of innovative remediation methods. The topic was examined through, among other things, two different types of remediation methods for contaminated soil and groundwater: reactive barriers (in situ) and bitumen stabilization (ex situ). The functionality of remediation methods for contaminated soil depends on many factors, some of which cannot be controlled or modeled reliably. Verification is thus best suited to substantiating the performance of devices or of treatment methods simpler than contaminated-land (PIMA) remediation methods. Verification may, however, work well in contaminated-land remediation as an informational steering instrument, for example. The verification stages for reactive barriers and bitumen stabilization are very similar; the main difference is that for barriers, the site in which the barrier is installed must also be described. The operation of reactive barriers depends on many environmental factors, unlike bitumen stabilization, which is performed with separate equipment. Based on the results, it can be generalized that verification is better suited to ex situ than to in situ remediation methods.
Abstract:
In a paper machine, it is undesirable for the boundary layer flows on the fabric and roll surfaces to travel into the closing nips, where they create overpressure. In this thesis, the aerodynamic behavior of grooved and smooth rolls is compared in order to understand the nip flow phenomena, which are the main reason why vacuum and grooved roll constructions are designed. A common method to remove the boundary layer flow from the closing nip is to use a vacuum roll construction. The downside of vacuum rolls is high operational costs due to pressure losses in the vacuum roll shell. The deep grooved roll has the same goal: to create a pressure difference over the paper web and keep the paper attached to the roll or fabric surface in the drying pocket of the paper machine. A literature review revealed that the aerodynamic functionality of the grooved roll is not well known. In this thesis, the aerodynamic functionality of the grooved roll in interaction with a permeable or impermeable wall is studied by varying the groove properties. Computational fluid dynamics simulations are utilized as the research tool. The simulations have been performed with commercial fluid dynamics software, ANSYS Fluent. Simulation results from 3- and 2-dimensional fluid dynamics models are compared to laboratory-scale measurements made with a grooved roll simulator designed for this research. The variables in the comparison are the paper or fabric wrap angle, surface velocities, groove geometry and wall permeability. Present-day computational and modeling resources limit grooved roll fluid dynamics simulations at the paper machine scale. Based on the analysis of the aerodynamic functionality of the grooved roll, a grooved roll simulation tool is proposed. The smooth roll simulations show that the closing nip pressure does not depend on the length of boundary layer development. An increase in surface velocity affects the pressure distribution in the closing and opening nips. The 3D grooved roll model reveals the aerodynamic functionality of the grooved roll. With an optimal groove size, it is possible to avoid closing nip overpressure and keep the web attached to the fabric surface over the wrap angle. Groove flow friction and minor losses play different roles as the wrap angle is changed. The proposed 2D grooved roll simulation tool is able to replicate the grooved roll's aerodynamic behavior with reasonable accuracy. With a small wrap angle, the chosen approach for calculating the groove friction losses predicts the pressure distribution correctly. With a large wrap angle, the groove friction loss produces too large pressure gradients, and the way of calculating the air flow friction losses in the groove has to be reconsidered. The aerodynamic functionality of the grooved roll is based on minor and viscous losses in the closing and opening nips as well as in the grooves. The proposed 2D grooved roll model is a simplification that reduces computational and modeling effort. The simulation tool makes it possible to simulate complex constructions at the paper machine scale. In order to use the grooved roll as a replacement for the vacuum roll, the grooved roll properties have to be considered on the basis of the web handling application.
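The thesis attributes the grooved roll's functionality to viscous and minor losses in the nips and grooves. As a rough illustration of how such groove losses can be estimated, the sketch below combines a Darcy-Weisbach friction term with a lumped minor-loss term for air flow along a single groove; the geometry, loss coefficient and flow-regime handling are assumptions for illustration only, not the thesis's CFD model.

```python
# Pressure loss of air flow along a roll groove from viscous (Darcy-Weisbach)
# and minor losses. Geometry and coefficients are illustrative assumptions.

RHO_AIR = 1.2     # kg/m^3, air density
NU_AIR = 1.5e-5   # m^2/s, kinematic viscosity of air

def hydraulic_diameter(width_m: float, depth_m: float) -> float:
    """Treat the groove as a closed rectangular duct for simplicity."""
    area = width_m * depth_m
    perimeter = 2.0 * (width_m + depth_m)
    return 4.0 * area / perimeter

def groove_pressure_loss(velocity, length, width, depth, minor_loss_coeff=1.5):
    d_h = hydraulic_diameter(width, depth)
    re = velocity * d_h / NU_AIR
    # Laminar friction factor below Re ~ 2300, Blasius correlation above.
    f = 64.0 / re if re < 2300 else 0.316 * re ** -0.25
    dynamic_pressure = 0.5 * RHO_AIR * velocity ** 2
    friction_loss = f * (length / d_h) * dynamic_pressure
    minor_loss = minor_loss_coeff * dynamic_pressure
    return friction_loss + minor_loss

# Example: 5 m/s flow in a 2 mm x 5 mm groove over a 0.3 m wrap length.
print(groove_pressure_loss(5.0, 0.3, 0.002, 0.005))  # pressure loss in Pa
```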
Abstract:
This thesis examines the suitability of VaR in foreign exchange rate risk management from the perspective of a European investor. Four different VaR models are evaluated to gain insight into whether VaR is a valuable tool for managing foreign exchange rate risk. The models evaluated are the historical method, the historical bootstrap method, the variance-covariance method and Monte Carlo simulation. The data are divided into emerging and developed market currencies to allow a richer analysis. The foreign exchange rate data in this thesis cover the period from 31 January 2000 to 30 April 2014. The results show that none of these VaR models should be relied on as the sole tool in foreign exchange rate risk management. The variance-covariance method and Monte Carlo simulation perform poorest in both currency portfolios. Both historical methods performed better but should likewise be treated as complements to other, more sophisticated analysis tools. A comparative study of VaR estimates and forward prices is also included in the thesis. The study reveals that, despite the high hedging costs of emerging market currencies, the risk captured by VaR is even more costly, and FX forward hedging is therefore recommended.
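Two of the four models named above, the historical method and the variance-covariance method, are standard enough to sketch briefly. The following Python sketch computes one-day VaR for a single return series; the synthetic data and the 99 % confidence level are illustrative choices, not the thesis's setup.

```python
# Hedged sketch of historical and variance-covariance VaR for one currency.
import numpy as np
from scipy.stats import norm

def historical_var(returns: np.ndarray, confidence: float = 0.99) -> float:
    """VaR as the empirical loss quantile of historical returns."""
    return -np.quantile(returns, 1.0 - confidence)

def variance_covariance_var(returns: np.ndarray, confidence: float = 0.99) -> float:
    """Parametric VaR assuming normally distributed returns."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    return -(mu + norm.ppf(1.0 - confidence) * sigma)

rng = np.random.default_rng(0)
fx_returns = rng.normal(0.0, 0.006, size=3500)  # synthetic daily FX returns
print(historical_var(fx_returns), variance_covariance_var(fx_returns))
```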
Abstract:
The aim of this Master's thesis is to find a method for classifying spare part criticality in the case company. Several approaches exist for the criticality classification of spare parts. The practical problem in this thesis is the lack of a generic analysis method for classifying spare parts of the case company's proprietary equipment. Finding a classification method requires a literature review of various analysis methods; the requirements of the case company also have to be recognized, which is achieved by consulting professionals in the company. The literature review shows that the analytic hierarchy process (AHP) combined with decision tree models is a common method for classifying spare parts in the academic literature. Most of the literature discusses spare part criticality from a stock-holding perspective. This perspective is also relevant for a customer-oriented original equipment manufacturer (OEM) such as the case company. A decision tree model is developed for classifying spare parts. The decision tree classifies spare parts into five criticality classes according to five criteria: safety risk, availability risk, functional criticality, predictability of failure and probability of failure. The criticality classes describe the level of criticality from non-critical to highly critical. The method is verified by classifying the spare parts of a full deposit stripping machine. The classification can be utilized as a generic model for recognizing critical spare parts of other similar equipment, from which spare part recommendations can be created. Purchase price of an item and equipment criticality were found to have no effect on spare part criticality in this context. The decision tree is recognized as the most suitable method for classifying spare part criticality in the company.
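As an illustration of how such a five-criteria decision tree might look in code, here is a hypothetical sketch. The five criteria are those named above, but the branching order, thresholds and class assignments are invented for illustration and are not the thesis's actual tree.

```python
# Hypothetical criticality decision tree; branching logic is illustrative only.
from dataclasses import dataclass

@dataclass
class SparePart:
    safety_risk: bool             # failure endangers people or environment
    availability_risk: bool       # long lead time / poor availability
    functional_criticality: bool  # failure stops the equipment's main function
    failure_predictable: bool     # wear can be monitored or predicted
    failure_probability: str      # "low", "medium" or "high"

def criticality_class(p: SparePart) -> int:
    """Return a criticality class from 1 (non-critical) to 5 (highly critical)."""
    if p.safety_risk:
        return 5
    if p.functional_criticality and p.availability_risk:
        return 4 if p.failure_predictable else 5
    if p.functional_criticality:
        return 3 if p.failure_probability != "low" else 2
    if p.availability_risk and p.failure_probability == "high":
        return 2
    return 1

print(criticality_class(SparePart(False, True, True, True, "medium")))  # -> 4
```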
Abstract:
Over 140 million people suffer from chronic atrial fibrillation, and many people have atrial fibrillation without knowing it themselves. The treatment is simple, but simple screening procedures do not bring everyone within the scope of treatment. The electrocardiogram (ECG), as a method for detecting cardiac activity, is currently the most common option for atrial fibrillation screening. ECG devices are, however, expensive and impractical in an individual user's everyday life. An alternative cardiac monitoring method is ballistocardiographic (BCG) measurement. The characteristic features of ECG and BCG are reviewed, and the strengths and weaknesses of the two measurement methods are compared. BCG has been studied for a long time, but few devices specifically designed for this purpose have been productized and brought to market. This thesis investigates the suitability of a mobile phone's accelerometer for performing BCG measurement. With this method it is possible to bring a heart monitor close to people, so that everyone can monitor the functioning of their heart with their own mobile phone. The thesis examines the suitability of various mobile devices on the market for accelerometer research. As a result of a survey of several models, the best-performing option is selected and the research is continued by designing a measurement process. In the first-phase measurements, 20 essentially healthy subjects are selected for the study. The study yields a result consistent with the research hypothesis, and heartbeats are detected in all subjects. During the first phase, motion artifacts are also studied: simple assessments are made of the effect of hand, leg and head movements on the accelerometer signal. In addition, it is examined how speaking during the measurement is transmitted to the research device, and an assessment is made of how this can be taken into account. Based on the observations of the first phase, the measurement process is further developed and optimized so that a larger sample can be reached with as few time resources as possible. The thesis prepares for the execution of a larger second research phase comprising 1000 subjects. As part of the thesis, an application for an ethics statement is prepared for the ethics committee of Varsinais-Suomen Sairaanhoitopiiri (the Hospital District of Southwest Finland). The application covers the execution of the study extensively in its different areas. If the result of the second phase follows the research hypothesis, the mobile phone's accelerometer can be concluded to be a suitable method for studying cardiac function.
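As a rough illustration of the signal processing involved, the sketch below detects heartbeat candidates in an accelerometer trace by band-pass filtering and peak picking. The filter band, thresholds and refractory period are illustrative assumptions; the thesis's actual measurement process is not reproduced.

```python
# Hedged sketch of heartbeat detection from a phone accelerometer z-axis signal.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_heartbeats(acc_z: np.ndarray, fs: float) -> np.ndarray:
    """Return sample indices of heartbeat candidates."""
    # Band-pass roughly around typical BCG frequency content (assumed band).
    b, a = butter(4, [4.0, 11.0], btype="bandpass", fs=fs)
    envelope = np.abs(filtfilt(b, a, acc_z))
    # A ~0.4 s refractory period rules out rates above 150 bpm.
    peaks, _ = find_peaks(envelope, distance=int(0.4 * fs),
                          height=3.0 * np.median(envelope))
    return peaks

# Synthetic test signal: a short 7 Hz burst once per second (~60 bpm) plus noise.
fs = 100.0
t = np.arange(0.0, 30.0, 1.0 / fs)
bursts = (np.sin(2 * np.pi * 1.0 * t) > 0.95).astype(float)
signal = 0.02 * bursts * np.sin(2 * np.pi * 7.0 * t)
signal += 0.001 * np.random.default_rng(3).normal(size=t.size)
print(len(detect_heartbeats(signal, fs)), "beats detected in 30 s")
```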
Abstract:
Augmented Reality (AR) applications often require knowledge of the user's position in some global coordinate system in order to draw the augmented content at its correct position on the screen. The most common method for coarse positioning is the Global Positioning System (GPS). One of the advantages of GPS is that GPS receivers can be found in almost every modern mobile device. This research was conducted in order to determine the accuracies of different GPS receivers. The tests included seven consumer-grade tablets, three external GPS modules and one professional-grade GPS receiver. All of the devices were tested with both static and mobile measurements. It was concluded that even the cheaper external GPS receivers were notably more accurate than the GPS receivers of the tested tablets. The absolute accuracy of the tablets is difficult to determine from the test results, since the results vary by a large margin between measurements. The accuracies of the tested tablets in static measurements were between 0.30 meters and 13.75 meters.
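As an illustration of how static accuracy figures like those above can be derived, the sketch below computes horizontal error statistics for a set of GPS fixes against a known reference point. The coordinates and the use of an equirectangular approximation are illustrative assumptions, not the study's actual data or procedure.

```python
# Hedged sketch: summarize static GPS accuracy as horizontal error statistics.
import math

def horizontal_error_m(lat, lon, ref_lat, ref_lon):
    """Equirectangular approximation, adequate for errors of a few metres."""
    r = 6371000.0  # Earth radius in metres
    x = math.radians(lon - ref_lon) * math.cos(math.radians(ref_lat))
    y = math.radians(lat - ref_lat)
    return r * math.hypot(x, y)

# Made-up fixes and reference point for illustration.
fixes = [(60.45012, 22.28691), (60.45010, 22.28697), (60.45021, 22.28684)]
ref = (60.45015, 22.28690)

errors = sorted(horizontal_error_m(la, lo, *ref) for la, lo in fixes)
mean_err = sum(errors) / len(errors)
p95 = errors[max(0, int(round(0.95 * len(errors))) - 1)]
print(f"mean {mean_err:.2f} m, p95 {p95:.2f} m")
```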
Abstract:
Total Quality Management (TQM) has become one of the most significant concepts in global business, where quality is an important competitive factor. This Master's thesis delves into the modern concept of total quality management, which raises traditional quality thinking to a new level. Modern quality management thinking has grown to cover all areas of a company's operations. The aim of the work is the comprehensive improvement of quality and business performance in the TietoEnator Käsittely ja Verkkopalvelut (Processing and Network Services) business area. Before addressing the actual quality management concept, the thesis first presents the traditional concept of quality at a general level and briefly discusses the ICT business and the standards related to it. Finally, the study presents prioritized improvement proposals and steps that help the organization achieve the aims of the total quality management concept.
Abstract:
Fatigue life assessment of welded structures is commonly based on the nominal stress method, but more flexible and accurate methods have been introduced. In general, assessment accuracy improves as more localized information about the weld is incorporated. The structural hot spot stress method includes the influence of macro-geometric effects and structural discontinuities on the design stress but excludes the local features of the weld. In this thesis, the limitations of the structural hot spot stress method are discussed, and a modified structural stress method with improved accuracy is developed and verified for selected welded details. The fatigue life of structures in the as-welded state consists mainly of crack growth from pre-existing cracks or defects. The crack growth rate depends on the crack geometry and the stress state on the crack face plane. This means that the stress level and the shape of the stress distribution along the assumed crack path govern the total fatigue life. In many structural details the stress distribution is similar, and adequate fatigue life estimates can be obtained just by adjusting the stress level based on a single stress value, i.e., the structural hot spot stress. There are, however, cases for which the structural stress approach is less appropriate because the stress distribution differs significantly from the more common cases. Plate edge attachments and plates on elastic foundations are examples of structures with this type of stress distribution. The influence of fillet weld size and weld load variation on the stress distribution is another central topic in this thesis. Structural hot spot stress determination is generally based on a procedure that involves extrapolation of plate surface stresses. Other possibilities for determining the structural hot spot stress are to extrapolate stresses through the thickness at the weld toe or to use Dong's method, which includes through-thickness extrapolation at some distance from the weld toe. Both of these latter methods are less sensitive to the FE mesh used. Structural stress based on surface extrapolation is sensitive to the extrapolation points selected and to the FE mesh used near these points. Rules for proper meshing, however, are well defined and not difficult to apply. To improve the accuracy of the traditional structural hot spot stress, a multi-linear stress distribution is introduced. The magnitude of the weld toe stress after linearization depends on the weld size, weld load and plate thickness. Simple equations have been derived by comparing assessment results based on the local linear stress distribution with LEFM-based calculations. The proposed method is called the modified structural hot spot stress method (MSHS), since the structural hot spot stress (SHS) value is corrected using information on weld size and weld load. The correction procedure is verified using fatigue test results found in the literature. In addition, a test case was conducted comparing the proposed method with other local fatigue assessment methods.
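The surface-extrapolation procedure mentioned above is commonly implemented with the two-point IIW rule, reading FE surface stresses at distances of 0.4t and 1.0t from the weld toe. The sketch below shows that standard rule only as background; it is not the modified (MSHS) correction developed in the thesis.

```python
# Sketch of the common IIW two-point surface extrapolation to the weld toe.
def hot_spot_stress(sigma_04t: float, sigma_10t: float) -> float:
    """Linear extrapolation of surface stresses read at 0.4t and 1.0t."""
    return 1.67 * sigma_04t - 0.67 * sigma_10t

# Example: FE surface stresses of 210 MPa at 0.4t and 180 MPa at 1.0t.
print(hot_spot_stress(210.0, 180.0))  # -> 230.1 MPa at the weld toe
```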
Abstract:
In many industrial applications, accurate and fast surface reconstruction is essential for quality control. Variation in surface finishing parameters, such as surface roughness, can reflect defects in a manufacturing process, non-optimal operational efficiency, and reduced life expectancy of the product. This thesis considers the reconstruction and analysis of high-frequency variation, that is, roughness, on planar surfaces. Standard roughness measures in industry are calculated from the surface topography. A fast and non-contact way to obtain the surface topography is to apply photometric stereo to estimate surface gradients and to reconstruct the surface by integrating the gradient fields. Alternatively, visual methods, such as statistical measures, fractal dimension and distance transforms, can be used to characterize surface roughness directly from gray-scale images. In this thesis, the accuracy of distance transforms, statistical measures and fractal dimension is evaluated for estimating surface roughness from gray-scale images and topographies. The results are contrasted with standard industry roughness measures. In distance transforms, the key idea is that distance values calculated along a highly varying surface are greater than distances calculated along a smoother surface. Statistical measures and fractal dimension are common surface roughness measures. In the experiments, the skewness and variance of the brightness distribution, fractal dimension, and distance transforms exhibited strong linear correlations with standard industry roughness measures. One of the key strengths of the photometric stereo method is its ability to capture higher-frequency variation of surfaces. In this thesis, the reconstruction of planar surfaces with high-frequency variation is studied in the presence of imaging noise and blur. Two Wiener filter-based methods are proposed, one of which is optimal in the sense of surface power spectral density given the spectral properties of the imaging noise and blur. Experiments show that the proposed methods preserve the inherent high-frequency variation in the reconstructed surfaces, whereas traditional reconstruction methods typically handle incorrect measurements by smoothing, which dampens the high-frequency variation.
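For context, the sketch below computes two standard industry roughness measures, Ra and Rq, from a surface profile of the kind the gray-scale estimates above are contrasted against. The synthetic profile is an illustrative assumption.

```python
# Sketch of standard profile roughness measures Ra and Rq.
import numpy as np

def roughness_ra_rq(profile: np.ndarray) -> tuple[float, float]:
    """Ra: mean absolute deviation; Rq: RMS deviation from the mean line."""
    z = profile - profile.mean()
    return float(np.abs(z).mean()), float(np.sqrt((z ** 2).mean()))

# Synthetic profile: a periodic texture plus measurement noise.
x = np.linspace(0.0, 1.0, 1000)
profile = (0.8 * np.sin(40 * np.pi * x)
           + 0.05 * np.random.default_rng(1).normal(size=x.size))
print(roughness_ra_rq(profile))
```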
Abstract:
When modeling machines in their natural working environment, collisions become a very important feature in terms of simulation accuracy. By expanding the simulation to include the operating environment, the need for a general collision model that can handle a wide variety of cases has become central in the development of simulation environments. With the addition of the operating environment, the challenges for the collision modeling method also change: more simultaneous contacts with more objects occur in more complicated situations, which makes the real-time requirement more difficult to meet. Common problems in current collision modeling methods include, for example, dependency on the geometry shape or mesh density, computational cost that grows exponentially with the number of contacts, the lack of a proper friction model, and failures in certain configurations such as closed kinematic loops. All of these problems mean that current modeling methods will fail in certain situations. A method that never fails in any situation is not realistic, but improvements can be made over the current methods.
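The growth of computation with the number of objects is easy to see in a naive pairwise test. The sketch below checks every sphere pair for contact, an O(n²) baseline that real simulators improve on with broad-phase culling; it is a minimal illustration, not any particular simulator's method.

```python
# Naive pairwise sphere-sphere contact detection (illustrative baseline).
from itertools import combinations

def find_contacts(spheres):
    """spheres: list of (x, y, z, radius); returns index pairs in contact."""
    contacts = []
    for (i, a), (j, b) in combinations(enumerate(spheres), 2):
        dx, dy, dz = a[0] - b[0], a[1] - b[1], a[2] - b[2]
        # Compare squared distance to squared sum of radii to avoid a sqrt.
        if dx * dx + dy * dy + dz * dz <= (a[3] + b[3]) ** 2:
            contacts.append((i, j))
    return contacts

print(find_contacts([(0, 0, 0, 1.0), (1.5, 0, 0, 1.0), (10, 0, 0, 1.0)]))  # [(0, 1)]
```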
Abstract:
TRIZ is a well-known tool based on analytical methods for creative problem solving. This thesis suggests an adapted version of the contradiction matrix, a powerful TRIZ tool, and a few principles based on the concept of the original TRIZ. It is believed that the proposed version would aid in problem solving, especially for problems encountered in chemical process industries with unit operations. In addition, this thesis should help new process engineers recognize the importance of the various available methods for creative problem solving and learn the TRIZ method. This thesis mainly provides an idea of how to modify a TRIZ-based method according to one's requirements so that it fits a particular niche area and solves problems efficiently in a creative way. In this case, the contradiction matrix developed is based on a review of common problems encountered in the chemical process industry, particularly in unit operations, and the resolutions are based on approaches used in the past to handle those issues.
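How an adapted contradiction matrix can be used programmatically is sketched below: improving/worsening parameter pairs map to suggested inventive principles. The principle names are standard TRIZ principles, but the parameter pairs and entries are invented for illustration and are not the thesis's matrix.

```python
# Hypothetical lookup in an adapted contradiction matrix; entries illustrative.
ADAPTED_MATRIX = {
    ("separation efficiency", "energy consumption"): ["Segmentation", "Intermediary"],
    ("throughput", "product purity"): ["Preliminary action", "Local quality"],
}

def suggest_principles(improving: str, worsening: str) -> list[str]:
    """Return inventive principles for an improving/worsening parameter pair."""
    return ADAPTED_MATRIX.get((improving, worsening),
                              ["No entry; try generalizing the parameters"])

print(suggest_principles("throughput", "product purity"))
```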
Abstract:
Thermal cutting methods are commonly used in the manufacture of metal parts. Thermal cutting processes separate materials by using heat, with or without a stream of cutting oxygen. Common processes are oxygen, plasma and laser cutting; which method is used depends on the application and material. Numerically controlled thermal cutting is a cost-effective way of prefabricating components, and one design aim is to minimize the number of work steps in order to increase competitiveness. As a result, the holes and openings in plate parts manufactured today are made using thermal cutting methods. This is a problem from the fatigue life perspective, because there is a local detail in the as-welded state that causes a rise in stress in a local area of the plate. In cases where the static capacity of the net section is fully utilized, the calculated linear local stresses and stress ranges are often more than twice the material yield strength, so the shakedown criteria are exceeded. Fatigue life assessment of flame-cut details is commonly based on the nominal stress method. For welded details, design standards and instructions provide more accurate and flexible methods, e.g. the hot-spot method, but these methods are not generally applied to flame-cut edges. Laboratory fatigue tests of flame-cut edges indicated that fatigue life estimates based on the standard nominal stress method can be quite conservative in cases where a high notch factor is present. This is an undesirable phenomenon, and it limits the potential for minimizing structure size and total costs. A new calculation method is introduced to improve the accuracy of the theoretical fatigue life prediction for a flame-cut edge with a high stress concentration factor. Simple equations were derived by using laboratory fatigue test results, which are published in this work. The proposed method is called the modified FAT method (FATmod). The method takes into account the residual stress state, surface quality, material strength class and true stress ratio at the critical location.
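For background, the standard nominal stress method the abstract refers to can be sketched as an S-N curve in which the FAT class is the characteristic stress range at two million cycles, with slope m = 3. The sketch below implements only that baseline; the FATmod corrections for residual stress, surface quality, strength class and stress ratio are not reproduced.

```python
# Sketch of a nominal-stress S-N fatigue life estimate (baseline, not FATmod).
def fatigue_life_cycles(stress_range_mpa: float, fat_class_mpa: float,
                        m: float = 3.0) -> float:
    """FAT class = stress range at 2e6 cycles; life scales with slope m."""
    return 2.0e6 * (fat_class_mpa / stress_range_mpa) ** m

# Example: a cut-edge detail assumed to be FAT 125 under a 180 MPa stress range.
print(f"{fatigue_life_cycles(180.0, 125.0):.3g} cycles")
```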
Abstract:
Mass spectrometry (MS)-based proteomics has seen significant technical advances during the past two decades, and mass spectrometry has become a central tool in many biosciences. Despite the popularity of MS-based methods, handling the systematic non-biological variation in the data remains a common problem. This biasing variation can arise from several sources, ranging from sample handling to differences caused by the instrumentation. Normalization is the procedure that aims to account for this biasing variation and make samples comparable. Many normalization methods commonly used in proteomics have been adapted from the DNA-microarray world. Studies comparing normalization methods on proteomics data sets using some variability measures exist. However, a more thorough comparison, looking at the quantitative and qualitative differences in the performance of the different normalization methods and at their ability to preserve the true differential expression signal of proteins, is lacking. In this thesis, several popular and widely used normalization methods (linear regression normalization, local regression normalization, variance stabilizing normalization, quantile normalization, median central tendency normalization, and variants of some of the aforementioned methods), representing different normalization strategies, are compared and evaluated on a benchmark spike-in proteomics data set. The normalization methods are evaluated in several ways. Their performance is evaluated qualitatively and quantitatively on a global scale and in pairwise comparisons of sample groups. In addition, it is investigated whether performing the normalization globally on the whole data set or pairwise for the comparison pairs examined affects how well a method normalizes the data and preserves the true differential expression signal. Both major and minor differences in the performance of the different normalization methods were found. The way in which the normalization was performed (global normalization of the whole data set or pairwise normalization of the comparison pair) also affected the performance of some of the methods in pairwise comparisons, and differences among variants of the same methods were observed.
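One of the compared methods, quantile normalization, is compact enough to sketch. The minimal version below forces every sample's intensity distribution to the mean distribution across samples; it ignores ties and missing values, which a production implementation would handle.

```python
# Minimal quantile normalization sketch for a proteins x samples matrix.
import numpy as np

def quantile_normalize(data: np.ndarray) -> np.ndarray:
    """Map each column's values onto the mean sorted distribution."""
    ranks = np.argsort(np.argsort(data, axis=0), axis=0)  # per-column ranks
    mean_sorted = np.sort(data, axis=0).mean(axis=1)      # reference distribution
    return mean_sorted[ranks]

# Synthetic log-normal intensities: 6 proteins, 3 samples.
rng = np.random.default_rng(2)
intensities = rng.lognormal(mean=10.0, sigma=1.0, size=(6, 3))
print(quantile_normalize(intensities))
```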