25 results for career adapt-abilities, adaptability, test adaptation, measurement invariance
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
This study aims to determine how supply chain performance can be measured in the target company. The Supply Chain Council (SCC) developed the Supply Chain Operations Reference (SCOR) model in 1996, which also enables performance measurement. The purpose of this study is to apply the SCOR model's performance measurement framework in the target company. The work is a qualitative case study. The theoretical part mainly reviews the literature on supply chains and performance measurement. The creation of the measurement system begins with an introduction to the target company. The SCOR metrics have been constructed in the target company according to the SCC's recommendations, so that the results would also be usable for benchmarking. The model contains 10 SCOR metrics as well as a few of Halton's own metrics. As a result, the SCOR model can be seen to give a good overview of supply chain performance, but the target company still needs to develop more informative metrics that would provide more detailed information to the company's management.
Abstract:
This work deals with a product development project whose purpose is to design a new four-way reach truck in a customer-oriented manner using the QFD quality tool. The work presents the QFD quality tool, and in particular the four-phase ASI model, and provides basic readiness for using the ASI model in a product development project. The work focuses on the mechanical development of a new steering system. The design work uses the four-phase QFD quality tool of the ASI (American Supplier Institute), a QFD technique based on the practices of the automotive industry. QFD stands for Quality Function Deployment, which can be freely rendered as customer-driven product design. The four-phase QFD model consists of the following matrices: 1. product planning, 2. part design and development, 3. process planning, and 4. production planning. The results of each matrix form the starting points of the next matrix. The starting points of the first matrix consist of a description of customer needs, obtained from interviews, surveys, and similar sources. The TEM multidirectional truck developed in the project, in which freely rotating support wheels are steered via a CAN bus by intelligent control logic, is highly competitive on the market in terms of both its features and its price. QFD, together with a carefully analysed description of customer needs, clarifies and prioritises the definition of the product specification and guides product development and industrialisation. The product development project stays on schedule, and changes made on the threshold of production decrease thanks to the clear structuring.
Abstract:
The present work is part of a large project whose purpose is to qualify Flash memory for automotive applications using a standardized test and measurement flow. High memory reliability and data retention are the most critical parameters in this application. The current work covers the functional tests and the data retention test. The purpose of the data retention test is to obtain the data retention parameters of the designed memory, i.e. the maximum time of information storage at specified conditions without critical charge leakage. For this purpose the charge leakage from the cells, which results in a decrease of the cells' threshold voltage, was measured after long-time high-temperature treatment at several temperatures. The amount of lost charge at each temperature was used to calculate the Arrhenius constant and the activation energy of the discharge process. With this data, the discharge of the cells at different temperatures over long periods can be predicted and the probability of data loss after years of storage can be calculated. The memory chips investigated in this work were 0.035 μm CMOS Flash memory test chips, designed for further use in systems-on-chip for automotive electronics.
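The Arrhenius extrapolation described in the abstract above can be sketched as follows. This is a minimal illustration, not the thesis's actual procedure: the bake temperatures, charge-loss rates, and critical-loss threshold are all assumed values chosen for demonstration.

```python
import numpy as np

# Hypothetical charge-loss rates (arbitrary units per hour) measured at
# several bake temperatures; all numbers are illustrative assumptions.
k_B = 8.617e-5                             # Boltzmann constant, eV/K
T = np.array([398.0, 423.0, 448.0])        # bake temperatures in K (125/150/175 degC)
rate = np.array([1.2e-6, 9.5e-6, 6.1e-5])  # observed charge-loss rates

# Arrhenius model: rate = A * exp(-Ea / (k_B * T)).
# Taking logs gives a straight line in 1/T: ln(rate) = ln(A) - (Ea/k_B) * (1/T).
slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
Ea = -slope * k_B                          # activation energy, eV
A = np.exp(intercept)                      # pre-exponential (Arrhenius) constant

# Extrapolate to an operating temperature, e.g. 85 degC (358 K), and
# estimate the time to accumulate a critical amount of lost charge.
T_op = 358.0
rate_op = A * np.exp(-Ea / (k_B * T_op))
critical_loss = 1.0                        # assumed critical charge loss
t_retention_hours = critical_loss / rate_op
```

The extrapolated rate at the operating temperature is far below the measured bake rates, which is exactly why accelerated high-temperature testing is used: the retention time at use conditions would be impractical to measure directly.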
Abstract:
The dissertation shows that mild dyslexia affects the reading and writing performance of high achievers. The characteristic features emerge most clearly in foreign languages and in the processing of speech sounds in demanding test tasks. Even though dyslexia-related problems are usually mild in university students, it is essential that they are identified, since they are seen to affect academic performance. The dissertation presents the first Finland-Swedish dyslexia test normed for the university level (FS-DUVAN) and provides tools for assessing reading and writing difficulties in young adults in Swedish-speaking Finland. It also explores language-specific features of dyslexia in high-achieving Finland-Swedish university students in reading and writing tasks in Swedish, Finnish and English. Detailed error analyses show that students with dyslexia have particular problems with the mappings between speech sounds and letters in the foreign language English, which is also complex in this respect. Results in complex cognitive test tasks requiring the processing of speech sounds point to a deficit in phonological processing, which is characterised as the main underlying cognitive impairment in developmental dyslexia.
Abstract:
This thesis is a study of the recent complex spatial changes in Namibia and Tanzania and of local communities' capacity to cope with, adapt to and transform the unpredictability associated with these processes. I scrutinise the concept of resilience and its potential application to explaining the development of local communities in Southern Africa in the face of various social, economic and environmental changes. My research is based on three distinct but overlapping research questions: What are the main spatial changes and their impact on the study areas in Namibia and Tanzania? What are the adaptation, transformation and resilience processes of the studied local communities? How are innovation systems developed, and what is their impact on the resilience of the studied local communities? I use four ethnographic case studies concerning environmental change, global tourism and innovation system development in Namibia and Tanzania, together with mixed-methodological approaches, to study these issues. The results of my empirical investigation demonstrate that the spatial changes in the localities within Namibia and Tanzania are unique, loose assemblages, the result of the complex, multi-sided, relational and evolutionary development of human and non-human elements that do not necessarily have linear causalities. Several changes co-exist and are interconnected, though uncertain and unstructured, and, together with the multiple stressors related to poverty, they have made communities more vulnerable to different changes. The communities' adaptation and transformation measures have been mostly reactive, based on contingency and post hoc learning. Despite the various anticipation techniques, coping measures, adaptive learning and self-organisation processes occurring in the localities, the local communities are constrained by their uneven power relationships within the larger assemblages.
Thus, communities' own opportunities to increase their resilience are limited without changing the relations within these multiform entities. Larger cooperation models are therefore needed, such as an innovation system based on the interaction of different actors, which requires collaboration among and input from a diverse set of stakeholders to combine different sources of knowledge, innovation and learning. Accordingly, both Namibia and Tanzania are developing an innovation system as a key policy to foster the transformation towards knowledge-based societies. Finally, the development of an innovation system needs novel bottom-up approaches to increase the resilience of local communities and to embed the system in those communities. To this end, innovation policies in Namibia have emphasised the role of indigenous knowledge, and Tanzania has established the Living Lab network.
Abstract:
Background: Multiple sclerosis (MS) is a demyelinating disease of the central nervous system which mainly affects young adults. In Finland, approximately 2500 of the 6000 MS patients have relapsing MS and are treated with disease modifying drugs (DMD): interferon-β (IFN-β-1a or IFN-β-1b) and glatiramer acetate (GA). Depending on the IFN-β preparation used, 2 % to 40 % of patients develop neutralizing antibodies (NAbs), which abolish the biological effects of IFN-β, leading to reduced clinical and MRI-detected efficacy. According to the Finnish Current Care Guidelines and the European Federation of Neurological Societies (EFNS) guidelines, it is recommended to measure the presence of NAbs during the first 24 months of IFN-β therapy. Aims: The aim of this thesis was to measure the bioactivity of IFN-β therapy by focusing on the induction of MxA protein (myxovirus resistance protein A) and its correlation with neutralizing antibodies (NAbs). A new MxA EIA assay was set up to offer an easier and more rapid method for MxA protein detection in clinical practice. In addition, the tolerability and safety of GA were evaluated in patients who had discontinued IFN-β therapy due to side effects or lack of efficacy. Results: NAbs developed towards the end of 12 months of treatment, and binding antibodies were detectable before or in parallel with them. The NAb titer correlated negatively with the amount of MxA protein; the mean preinjection MxA levels never returned to the true baseline in NAb-negative patients, but tended to drop in the NAb-positive group. The results of the MxA EIA and flow cytometric analysis showed a significant correlation. GA reduced the relapse rate and was a safe and well-tolerated therapy in IFN-β-intolerant MS patients. Conclusions: NAbs inhibit the induction of MxA protein, which can be used as a surrogate marker of the bioactivity of IFN-β therapy. Compared to flow cytometric analysis and the NAb assay, the MxA EIA seemed to be a sensitive and more practical method for measuring the actual bioactivity of IFN-β treatment in clinical use, which is of value also from a cost-effectiveness perspective.
Abstract:
This work is devoted to the problem of reconstructing the basis weight structure of a paper web with black-box techniques. The data analysed comes from a real paper machine and is collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analysing the properties of the paper web. Both ARMA and the DFT are used independently to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the root mean squared error coefficient, gives a tool for separating significant signals from noise.
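The combination of a DFT representation with a Ljung-Box lack-of-fit check, as described above, can be sketched with a toy signal. This is a minimal illustration under assumed data (a synthetic sinusoid plus noise standing in for a scanner profile), not the thesis's actual algorithm; the Ljung-Box statistic is implemented directly from its definition, Q = n(n+2) Σₖ ρₖ²/(n−k).

```python
import numpy as np

# Hypothetical basis-weight profile: a slow sinusoidal variation plus white
# noise, standing in for an off-line scanner signal (values illustrative).
rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
signal = 0.8 * np.sin(2 * np.pi * t / 64) + rng.normal(0.0, 0.3, n)

# DFT representation: pick the dominant non-DC frequency component.
spectrum = np.fft.rfft(signal - signal.mean())
dominant_bin = int(np.argmax(np.abs(spectrum[1:])) + 1)  # skip the DC bin
dominant_period = n / dominant_bin                        # samples per cycle

def ljung_box_q(x, max_lag):
    """Ljung-Box Q-statistic: Q = n(n+2) * sum_k rho_k^2 / (n - k)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    m = len(x)
    var = np.dot(x, x) / m
    q = 0.0
    for k in range(1, max_lag + 1):
        rho_k = np.dot(x[:-k], x[k:]) / (m * var)  # lag-k autocorrelation
        q += rho_k ** 2 / (m - k)
    return m * (m + 2) * q

# Reconstruct the dominant component and test the residual for whiteness:
# the raw signal is strongly autocorrelated (large Q), while the residual
# after removing the dominant component should look much more like noise.
component = np.zeros_like(spectrum)
component[dominant_bin] = spectrum[dominant_bin]
reconstruction = np.fft.irfft(component, n) + signal.mean()
residual = signal - reconstruction

q_signal = ljung_box_q(signal, 10)
q_residual = ljung_box_q(residual, 10)
```

A large Q for the raw signal and a small Q for the residual indicate that the extracted component accounts for the significant structure, which is the role the lack-of-fit test plays in separating signal from noise.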
Abstract:
Nowadays the variety of fuels used in power boilers is widening, and new boiler constructions and operating models have to be developed. This research and development is done in small pilot plants, where a faster analysis of the boiler mass and heat balance is needed in order to find and make the right decisions already during the test run. The barrier to determining the boiler balance during test runs is the long process of chemical analysis of the collected input and output matter samples. The present work concentrates on finding a way to determine the boiler balance without chemical analyses and on optimising the test rig to obtain the best possible accuracy for the heat and mass balance of the boiler. The purpose of this work was to create an automatic boiler balance calculation method for the 4 MW CFB/BFB pilot boiler of Kvaerner Pulping Oy located in Messukylä, Tampere. The calculation was created in the data management computer of the pilot plant's automation system. The calculation is made in a Microsoft Excel environment, which gives a good base and functions for handling large databases and calculations without any delicate programming. The automation system of the pilot plant was reconstructed and updated by Metso Automation Oy during 2001, and the new MetsoDNA system has good data management properties, which is necessary for large calculations such as the boiler balance calculation. Two possible methods for calculating the boiler balance during a test run were found. Either the fuel flow is determined and used to calculate the boiler's mass balance, or the unburned carbon loss is estimated and the mass balance of the boiler is calculated on the basis of the boiler's heat balance. Both methods have their own weaknesses, so they were implemented in parallel in the calculation and the choice of method was left to the user. The user also needs to define the fuels used and some solid mass flows that are not measured automatically by the automation system.
A sensitivity analysis showed that the most essential values for accurate boiler balance determination are the flue gas oxygen content, the boiler's measured heat output and the lower heating value of the fuel. The theoretical part of this work concentrates on the error management of these measurements and analyses, and on measurement accuracy and boiler balance calculation in theory. The empirical part concentrates on the creation of the balance calculation for the boiler in question and on describing the work environment.
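The two parallel balance methods described above can be sketched in simplified form. This is an illustrative reduction, not the thesis's actual Excel calculation: the lower heating value, fuel flow, and loss terms are assumed numbers, and real boiler balances involve many more streams (air, flue gas, ash, moisture).

```python
# Minimal sketch of the two boiler-balance approaches; all fuel properties,
# flows and loss terms below are illustrative assumptions.

LHV = 18.5e6          # lower heating value of the fuel, J/kg (assumed)
heat_output = 4.0e6   # measured boiler heat output, W (4 MW pilot boiler)
heat_losses = 0.3e6   # radiation, flue gas and other losses, W (assumed)

# Method 1: the fuel flow is determined, and the mass balance then yields
# the heat balance; here the unburned fraction falls out as the residual.
fuel_flow_measured = 0.25   # kg/s (assumed measurement)
unburned_fraction = 1.0 - (heat_output + heat_losses) / (fuel_flow_measured * LHV)

# Method 2: the unburned carbon loss is estimated, and the mass balance is
# calculated from the heat balance; here the fuel flow falls out instead.
unburned_estimate = 0.05    # 5 % of fuel energy unburned (assumed)
fuel_flow_from_heat = (heat_output + heat_losses) / (LHV * (1.0 - unburned_estimate))
```

Running both methods side by side, as the thesis does, lets the user compare the implied fuel flow against the measured one and judge which set of measurements to trust for a given test run.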
Abstract:
This work had two primary objectives: 1) to produce a working prototype for automated printability assessment and 2) to perform a study of available machine vision and other necessary hardware solutions. The three printability testing methods considered in this work, IGT Picking, Heliotest, and mottling, have several different requirements, and the task was to produce a single automated testing system suitable for all of them. A system was designed and built, and its performance was tested using the Heliotest. Working prototypes are important tools for implementing theoretical methods in practical systems and for testing and demonstrating the methods in real-life conditions. The system was found to be sufficient for the Heliotest method. Further testing and possible modifications related to the other two test methods were left for future work. A short study of available systems and solutions concerning image acquisition in machine vision was performed. The theoretical part of this study covers lighting systems, optical systems and image acquisition tools, mainly cameras, and the underlying physical aspects of each.
Abstract:
This thesis examines the history and evolution of information system process innovation (ISPI) processes (adoption, adaptation, and unlearning) within information system development (ISD) work in an internal information system (IS) department and in two IS software house organisations in Finland over a 43-year time period. The study offers insights into the influential actors and their dependencies in deciding over ISPIs. The research uses a qualitative research approach, and the research methodology involves the description of the ISPI processes, how the actors searched for ISPIs, and how the relationships between the actors changed over time. The existing theories were evaluated using conceptual models of the ISPI processes based on the innovation literature in the IS area. The main focus of the study was to observe changes in the main ISPI processes over time. The main contribution of the thesis is a new theory. The term theory should here be understood as 1) a new conceptual framework of the ISPI processes, and 2) new ISPI concepts and categories and the relationships between the ISPI concepts inside the ISPI processes. The study gives a comprehensive and systematic account of the history and evolution of the ISPI processes; reveals the factors that affected ISPI adoption; studies ISPI knowledge acquisition, information transfer, and adaptation mechanisms; and reveals the mechanisms affecting ISPI unlearning, the changes in the ISPI processes, and the diverse actors involved in the processes. The results show that both the internal IS department and the two IS software houses sought opportunities to improve their technical skills and career paths, and this created an innovative culture. When new technology generations come to the market, the platform systems need to be renewed, and therefore the organisations invest in ISPIs in cycles. The extent of internal learning and experimentation was greater than that of external knowledge acquisition.
Until the outsourcing event (1984), decision-making was centralised and the internal IS department was very influential over ISPIs. After outsourcing, decision-making became distributed between the two IS software houses, the IS client, and its internal IT department. The IS client wanted to ensure that the information systems would serve the business of the company and thus wanted to co-operate closely with the software organisations.
Abstract:
The purpose of this work is to collect information on all the test facilities around the world that have been used to study the blowdown phase of a large-break LOCA. The work also aims to provide a basis for a decision on whether it is necessary to build a new test facility for validating the computation of fluid-structure interaction codes. Before building the actual test facility, it would also be appropriate to build a smaller pilot facility that could be used to test the measurement methods to be applied. Suitable measurement data is needed for validating the coupled computation of new CFD codes and structural analysis codes. These codes can be used, for example, to assess the structural integrity of reactor internals during the blowdown phase of a large-break LOCA. The report focuses on the test facilities found around the world, the design basis of a new test facility, and general matters related to the topic. The report does not replace existing validation matrices, but it can be used as an aid when searching for a large-break LOCA blowdown test facility suitable for validation purposes.
Abstract:
One of the primary goals of food packaging is to protect food against a harmful environment, especially oxygen and moisture. The gas transmission rate is the total gas transport through the package, both by permeation through the package material and by leakage through pinholes and cracks. The shelf life of a product can be extended if the food is stored in a gas-tight package, so there is a need to test the gas tightness of packages. There are several tightness testing methods, and they can be broadly divided into destructive and nondestructive methods. One of the most sensitive ways to detect leaks is a nondestructive tracer gas technique. Carbon dioxide, helium and hydrogen are the most commonly used tracer gases. Hydrogen is the lightest and smallest of all gases, which allows it to escape rapidly from leak areas. The low background concentration of H2 in air (0.5 ppm) enables sensitive leak detection. With a hydrogen leak detector it is also possible to locate leaks, which is not possible with many other tightness testing methods. The experimental work focused on investigating the factors that affect the measurement results obtained with the H2 leak detector. Reasons for false results were also sought, so that they could be avoided in upcoming measurements. From the results of these experiments, an appropriate measurement practice was created in order to obtain correct and repeatable results. The most important requirement for good measurement results is to keep the probe of the detector tightly against the leak. Because of the high diffusion rate of hydrogen, the H2 concentration decreases quickly if the probe is held further away from the leak area; the measured H2 leaks would then be too small and small leaks could remain undetected. In the experimental part, hydrogen, oxygen and water vapour transmission through laser-made reference holes (diameters 1-100 μm) were also measured and compared.
With the H2 leak detector it was possible to detect even a leak through a 1 μm (diameter) hole within a few seconds. Water vapour did not penetrate even the largest reference hole (100 μm), even at tropical conditions (38 °C, 90 % RH), whereas some O2 transmission occurred through reference holes larger than 5 μm. Thus water vapour transmission does not have a significant effect on food deterioration if the diameter of the leak is less than 100 μm, but small leaks (5-100 μm) are more harmful for food products that are sensitive to oxidation.
Abstract:
Induction motors are widely used in industry, and they are generally considered very reliable. They often have a critical role in industrial processes, and their failure can lead to significant losses as a result of shutdown times. Typical failures of induction motors can be classified into stator, rotor, and bearing failures. One of the causes of bearing damage, and eventually bearing failure, is bearing currents. Bearing currents in induction motors can be divided into two main categories: classical bearing currents and inverter-induced bearing currents. Bearing damage caused by bearing currents results, for instance, from electrical discharges that take place through the lubricant film between the raceways of the inner and outer rings and the rolling elements of a bearing. This phenomenon can be considered similar to electrical discharge machining, where material is removed by a series of rapidly recurring electrical arcing discharges between an electrode and a workpiece. This thesis concentrates on bearing currents with special reference to bearing current detection in induction motors. A bearing current detection method based on radio frequency impulse reception and detection is studied. The thesis describes how a motor can work as a "spark gap" transmitter and discusses a discharge in a bearing as a source of radio frequency impulses. It is shown that a discharge occurring due to bearing currents can be detected at a distance of several meters from the motor. The issues of interference, detection, and location techniques are discussed. The applicability of the method is shown with a series of measurements with a specially constructed test motor and an unmodified frequency-converter-driven motor. The radio frequency method studied provides a nonintrusive way to detect harmful bearing currents in the drive system.
If bearing current mitigation techniques are applied, their effectiveness can be immediately verified with the proposed method. The method also gives a tool to estimate the harmfulness of the bearing currents by making it possible to detect and locate individual discharges inside the bearings of electric motors.