928 results for implementation analysis
Abstract:
Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and the determination of process conditions can be used to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models based on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be placed in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, determining the corrosion chemistry and estimating the lifetime is more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated: · A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database composed of fuel and bed material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project. · An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system supports the implementation of user-defined algorithms, which allows the development of an artificially intelligent system for the task. Based on the results of the analyses, several new rules were developed for determining the degree and type of corrosion. By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used to minimize corrosion risks in the design of fluidized bed boilers.
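The abstract does not describe the model's actual architecture, so the following is only a minimal sketch of the general idea: a small neural-network regressor mapping tabular fuel and bed-material analyses to a measured corrosion rate. The features, data and gains are synthetic placeholders, not the thesis model.

```python
# Illustrative sketch only: a neural-network regressor on synthetic
# fuel/bed-material features predicting a corrosion rate.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Hypothetical inputs: e.g. chlorine, sulphur, alkali contents, steam temperature
X = rng.uniform(0, 1, size=(n, 4))
# Hypothetical target: corrosion rate rising with the first and last features
y = 3.0 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(0, 0.1, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```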
Abstract:
The networking and digitalization of audio equipment has created a need for control protocols. These protocols offer new services to customers and ensure that the equipment operates correctly. The control protocols used in computer networks are not directly applicable, since embedded systems have resource and cost limitations. This master's thesis presents the design and implementation of new loudspeaker control network protocols. The protocol stack was required to be reliable, to have short response times, to configure the network automatically and to support the dynamic addition and removal of loudspeakers. The implemented protocol stack was also required to be as efficient and lightweight as possible, because the network nodes are fairly simple and lack processing power. The protocol stack was thoroughly tested, validated and verified. The protocols were formally described using LOTOS (Language of Temporal Ordering Specification) and verified using reachability analysis. A prototype of the loudspeaker network was built and used for testing the operation and performance of the control protocols. The implemented control protocol stack met the design specifications and proved to be highly reliable and efficient.
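The thesis verified the protocols through reachability analysis of their formal LOTOS descriptions. The toy sketch below only illustrates the core idea on a made-up two-node handshake: exhaustively enumerating the reachable global states and flagging deadlocks (states with no outgoing transition).

```python
# Toy reachability analysis: breadth-first enumeration of a protocol's
# global states. The transition table is an illustrative stand-in for a
# model derived from a formal (e.g. LOTOS) specification.
from collections import deque

# (controller_state, speaker_state) -> successor global states
transitions = {
    ("idle", "idle"): [("probing", "idle")],
    ("probing", "idle"): [("probing", "announcing")],
    ("probing", "announcing"): [("configured", "configured")],
    ("configured", "configured"): [("idle", "idle")],  # speaker removed
}

def reachable(start):
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        successors = transitions.get(state, [])
        if not successors:
            print("deadlock at", state)   # no outgoing transition
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable(("idle", "idle"))))
```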
Abstract:
This contribution (presented at the first International Conference on Public Policy (ICPP) in Grenoble in June 2013) explores the phenomenon of innovation in action ("innovative implementation"). To do so, we operationalize "innovative implementation" as a strategy by which (coalitions of) non-state actors seek to develop ad hoc solutions to address a given environmental issue, going beyond what is provided for in formal policy designs. Following an inductive research strategy, we elaborate a conceptual framework whose main advantage is to bring the actors and their coalitions (in all their diversity) back into the analysis. More concretely, we argue that perceiving implementation as a broader 'social interaction process' (De Boer & Bressers 2011) within which actors play strategic 'games' (Bardach 1977, Scharpf 1997) opens interesting lines of research to better account for their innovative and strategic behaviours. In a second step, we apply this framework to three strategies of innovative implementation in different contexts, and on this basis identify empirical regularities in the individual pathways related to the emergence and success (or failure) of these strategies.
Abstract:
This master's thesis presents the results of an MHP development system procurement that I carried out between 1 June and 4 November 2002 within the STRIMT project of the Information and Media Technology project portfolio at the YTI Research Centre of Mikkeli Polytechnic. In addition to the results of procuring a digital television MHP application development system, the thesis gives an overview of MHP application development tools, digital television standards, and the interoperation models between digital television networks and telecommunications networks. Based on the information I collected, the requirements specification I drew up and the product comparison I carried out, I selected the Altifusion product of Alticast Inc. as the system best suited to our needs. In selecting the product I also took possible future needs into account, up to a full-scale broadcasting system, by developing a phased procurement plan. When procuring such a system, the most important thing is to consider possible future needs from the very beginning and to ensure that the procurement remains useful in the future. In addition to procuring the system, I was responsible for its deployment and for writing a test application that uses the system.
Abstract:
The objective of the study was to determine the market potential and demand in Europe for a fire-retardant laminate paper used as a raw material. Their development was assessed by analyzing the factors affecting demand. The research methodology combined several approaches; mainly descriptive and predictive research approaches were used. The study was based on both primary and secondary data. Primary data was obtained from users of the product and from sales representatives, and by interviewing the personnel of the producing company. Secondary data was also collected, but sources related to the objectives of the study were not abundantly available. For this reason, primary data played a somewhat more important role in the study than secondary data, which is common in industrial market research. The future outlook of the product appears fairly good. The theoretical market potential is large compared with the current sales volume; increasing the sales volume, however, requires certain measures. In the future, attention should be paid to the product image, pricing, and comprehensive maximization of quality. The study identified signs of growing demand over the next couple of years. The theoretical market potential could also grow, because the demand for fire-retardant laminates appears to be growing in Europe, especially in the construction industry.
Abstract:
The study examines the transport logistics cluster of Northwest Russia. The purpose is to determine the current structure and competitiveness of the cluster, as well as the business opportunities the cluster offers to Finnish logistics companies. The work covers four basic modes of transport: rail, road, sea and inland waterway, and air transport. The material of the study was collected from questionnaires and interviews prepared for the study, and from previously published material. Russia has planned to develop its transport infrastructure strongly, for example by publishing a protectionist transport strategy plan. The problem has been implementation, which has usually remained deficient. At present, real competitiveness is found only in rail transport; the other three modes of transport have potential competitiveness. Russia has the opportunity to benefit from its vast surface area as a connector of traffic between Asia and Europe. One of the most concrete examples is the Trans-Siberian Railway, which would still need further development. Finland has served as a transit country for high-value goods in Russian traffic; in 2003 about 30-40% of the value of Russian imports passed through Finland. High-value goods will continue to be imported into Russia for several years to come, but competition between routes has tightened. Two models are proposed for the business opportunities of Finnish companies: value-added logistics (VAL) operations for transit traffic in Finland, or establishing a presence in Russian logistics chains. Finnish actors should improve cooperation between companies, universities and other educational institutions. Seeking partners, for example from Sweden, could also bring significant benefits. Finnish expertise could best be exploited by establishing a presence in the Russian market, for example by concentrating on the management of Russian logistics chains. There would also be a very good opportunity to manage VAL services in Russia, because Russia's own logistics know-how has not yet developed to the international level, while the cost level is lower than in Finland.
Abstract:
This thesis presents a study of methods for protecting information in local and corporate networks. The study includes an analysis of modern cryptographic systems, Internet/intranet programming techniques, and methods for distributing access rights. Based on the research, a software prototype for protecting HTML files was produced. The software development process included requirements specification, the design of the system and its protection components, and testing of the prototype. After the implementation of the software, user instructions were written. The software prototype protects information throughout the entire period of use of an HTML file, and it can be adopted in different companies after small extensions.
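The abstract does not reveal the prototype's actual cryptographic design, so the snippet below is only an illustrative modern equivalent: symmetric encryption of HTML content with the `cryptography` package's Fernet construction. It is a sketch of the general technique, not the thesis prototype.

```python
# Illustrative only: protecting HTML content with authenticated
# symmetric encryption (Fernet = AES-128-CBC + HMAC-SHA256).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, stored and distributed securely
f = Fernet(key)

html = b"<html><body>confidential report</body></html>"
token = f.encrypt(html)            # ciphertext with integrity protection
assert f.decrypt(token) == html    # decryption raises if the token is tampered with
```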
Abstract:
Introduction. Genetic epidemiology is focused on the study of the genetic causes that determine health and disease in populations. To achieve this goal, a common strategy is to explore differences in genetic variability between diseased and non-diseased individuals. Usual markers of genetic variability are single nucleotide polymorphisms (SNPs), which are changes in just one base in the genome. The usual statistical approach in genetic epidemiology studies is a marginal analysis, where each SNP is analyzed separately for association with the phenotype. Motivation. It has been observed that, for common diseases, single-SNP analysis is not very powerful for detecting causal genetic variants. In this work, we consider Gene Set Analysis (GSA) as an alternative to standard marginal association approaches. GSA aims to assess the overall association of a set of genetic variants with a phenotype and has the potential to detect subtle effects of variants in a gene or a pathway that might be missed when assessed individually. Objective. We present a new optimized implementation of a pair of gene set analysis methodologies for analyzing the individual evidence of SNPs in biological pathways. We perform a simulation study exploring the power of the proposed methodologies in a set of scenarios with different numbers of causal SNPs under different effect sizes. In addition, we compare the results with the usual single-SNP analysis method. Moreover, we show the advantage of using the proposed gene set approaches in the context of an Alzheimer's disease case-control study, where we explore the Reelin signaling pathway.
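The specific pair of GSA methodologies is not detailed in the abstract; the sketch below illustrates the general contrast it draws, using Fisher's method to combine per-SNP p-values across a hypothetical pathway and comparing that with Bonferroni-corrected single-SNP testing. All numbers are simulated, not from the study.

```python
# Illustrative gene set analysis (not the thesis' specific methods):
# Fisher's combined probability test over per-SNP p-values vs.
# single-SNP testing with Bonferroni correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_snps, n_causal = 50, 5                 # hypothetical pathway of 50 SNPs
pvals = rng.uniform(size=n_snps)         # null SNPs: uniform p-values
pvals[:n_causal] = rng.beta(0.5, 4.0, size=n_causal)  # subtle causal effects

# Single-SNP analysis: any SNP surviving Bonferroni?
print("any single SNP significant:", bool((pvals < 0.05 / n_snps).any()))

# Gene set analysis: Fisher's statistic is chi-squared with 2k df under H0
chi2 = -2 * np.log(pvals).sum()
p_set = stats.chi2.sf(chi2, df=2 * n_snps)
print("gene set p-value:", p_set)
```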
Abstract:
In this study the performance measurement, a part of the research and development of the RNC, was improved by implementing counter testing in the Nokia Automation System. The automation of counter testing is a feature the customer ordered, because performing counter testing manually is rather complex. The objective was to implement an automated counter testing system which, once configured correctly, would manage to run the testing and perform the analysis. The requirements for the counter testing were studied first. Whether automating the feature was feasible was investigated in meetings with the customer, and the basic functionality required for the automation was outlined. The technologies used in the architecture of the Nokia Automation System were studied. Based on the results of this study, a new technology, wxWidgets, was introduced; it was necessary to facilitate the implementation of the required feature. Finally the counter testing was specified and implemented. The result of this study was the automated counter testing method, developed as a new feature of the Nokia Automation System. The feature meets the specifications and requirements set by the customer. The counter testing is fully automated; only the configuration of the test cases is done by the user. The customer has presented new requests to develop the feature further, and the Nokia Automation System developers plan to implement those in the near future. The study describes the implementation of the counter testing feature, and its results give guidelines for further development of the feature.
Abstract:
This thesis studies the evaluation of software development practices through error analysis. The work presents the software development process, software testing, software errors, error classification and software process improvement methods. The practical part of the work presents results from the error analysis of one software process and gives improvement ideas for the project. It was noticed that the classification of the error data in the project was inadequate, which made it impossible to use the error data effectively. With the error analysis we were nevertheless able to show that there were deficiencies in the design and analysis phases, the implementation phase and the testing phase. The work gives ideas for improving the error classification and the software development practices.
Abstract:
The main objective of this dissertation is to create new knowledge on an administrative innovation: its adoption, diffusion and, finally, its effectiveness. In this dissertation the administrative innovation is approached through a widely utilized management philosophy, namely the total quality management (TQM) strategy. TQM operationalizes a self-assessment procedure, which is based on continual improvement principles and on measuring the improvements. This dissertation also captures the theme of change management as it analyzes the adoption and diffusion of the administrative innovation. It identifies innovation characteristics as well as organisational and individual factors explaining the adoption and implementation. As a special feature, this study also explores the effectiveness of the innovation based on objective data. For studying the administrative innovation (the TQM model), a multinational Case Company provides a versatile ground for a deep, longitudinal analysis. The Case Company started the adoption systematically in the mid-1980s in some of its units; as part of its strategic planning today, the procedure is in use throughout the entire global company. The empirical story begins from the innovation adoption decision that was made in the Case Company over 22 years ago. In order to capture the right atmosphere and the background leading to the adoption decision, key informants from that time were interviewed, since the main target was to clarify the dynamics of how an administrative innovation develops. In addition, archival material was collected and studied; the available memos and data relating to the innovation, its adoption and later its implementation comprised altogether 20,500 pages of documents. A survey was furthermore conducted at the end of 2006, focusing on questions related to innovation, organization and leadership characteristics; the response rate was 54%. For measuring the effectiveness of the innovation implementation, the necessary longitudinal objective performance data was collected. This data included the profit-unit-level experience of TQM, the development of the self-assessment scores per profit unit, and performance per profit unit measured by profitability, productivity and customer satisfaction. The data covered the years 1995-2006. As a result, the prerequisites for the successful adoption of an administrative innovation were defined, such as top management involvement, the support of change agents, and effective tools for implementation and measurement. The factors with the greatest effect on the depth of the implementation were the timing of the adoption and formalization. The results also indicated that the TQM model does have an effect on company performance measured by profitability, productivity and customer satisfaction. Consequently this thesis contributes to the present literature (i) by taking into its scope an administrative innovation and focusing on the whole innovation implementation process, from adoption through diffusion to its consequences; (ii) by studying multifaceted factors affecting innovation adoption and diffusion, grouped into individual, organizational and environmental factors, with a strong emphasis on the role of individual change agents; and (iii) by measuring the depth and consistency of the administrative innovation. This deep analysis was possible due to the availability of longitudinal data with triangulation possibilities.
Abstract:
Human biomonitoring (HBM) is an effective tool for assessing actual exposure to chemicals that takes into account all routes of intake. Although hair analysis is considered to be an optimal biomarker for assessing mercury exposure, the lack of harmonization as regards sampling and analytical procedures has often limited the comparison of data at national and international level. The European-funded projects COPHES and DEMOCOPHES developed and tested a harmonized European approach to human biomonitoring in response to the European Environment and Health Action Plan. Herein we describe the quality assurance program (QAP) for assessing mercury levels in hair samples from more than 1800 mother-child pairs recruited in 17 European countries. To ensure the comparability of the results, standard operating procedures (SOPs) for sampling and for mercury analysis were drafted and distributed to participating laboratories. Training sessions were organized for field workers, and four external quality-assessment exercises (ICI/EQUAS), each followed by a corresponding web conference, were organized between March 2011 and February 2012. ICI/EQUAS used native hair samples at two mercury concentration ranges (0.20-0.71 and 0.80-1.63) per exercise. The results revealed relative standard deviations of 7.87-13.55% and 4.04-11.31% for the low and high mercury concentration ranges, respectively. A total of 16 out of 18 participating laboratories met the QAP requirements and were allowed to analyze samples from the DEMOCOPHES pilot study. The web conferences held after each ICI/EQUAS proved to be a new and effective tool for improving analytical performance and increasing capacity building. The procedure developed and tested in COPHES/DEMOCOPHES would be optimal for application on a global scale as regards implementation of the Minamata Convention on Mercury.
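Between-laboratory comparability is reported above as relative standard deviations; as a reminder of that computation, a sketch with made-up laboratory results rather than the study's data:

```python
# Relative standard deviation (RSD) of laboratory results for one hair
# sample; the values below are fabricated for illustration only.
import statistics

results = [0.52, 0.48, 0.55, 0.50, 0.47]    # hypothetical lab results
mean = statistics.mean(results)
sd = statistics.stdev(results)              # sample standard deviation
rsd_percent = 100 * sd / mean
print(f"RSD = {rsd_percent:.2f}%")
```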
Abstract:
Internationalization and the rapid growth that follows it have created a need to consolidate the IT systems of many small-to-medium-sized production companies. Enterprise Resource Planning (ERP) systems are a common solution for such companies. Deployment of these ERP systems consists of many steps, one of which is the implementation of the same shared system at all international subsidiaries. From the IT point of view, this is also one of the most important steps in the company's internationalization strategy. The mechanical process of creating the required connections for the offshore sites is the easiest and best-documented step along the way, but the actual value of the system, once operational, is perceived in its operational reliability. The operational reliability of an ERP system is a combination of many factors, ranging from hardware- and connectivity-related issues to administrative tasks and communication between decentralized administrative units and sites. To analyze the operational reliability of such a system accurately, one must take into consideration the full functionality of the system, including not only the mechanical and systematic processes but also the users and their administration. Operational reliability in an international environment relies heavily on hardware and telecommunication adequacy, so it is imperative to have resources dimensioned with regard to planned usage. Still, with poorly maintained communication and administration schemes, no amount of bandwidth or memory will be enough to maintain a productive level of reliability. This thesis analyzes the implementation of a shared ERP system at an international subsidiary of a Finnish production company. The system is Microsoft Dynamics AX, currently being introduced at a Slovakian facility, a subsidiary of Peikko Finland Oy. The primary task is to create a feasible basis of analysis against which the operational reliability of the system can be evaluated precisely. With a solid analysis, the aim is to give recommendations on how future implementations should be managed.
Abstract:
The objective of this study is to show that bone strains due to dynamic mechanical loading during physical activity can be analysed using the flexible multibody simulation approach. Strains within the bone tissue play a major role in bone (re)modeling. Previous studies have shown that dynamic loading seems to be more important for bone (re)modeling than static loading. The finite element method has been used previously to assess bone strains; however, it may be limited to static analysis because of the expensive computation required for dynamic analysis, especially for a biomechanical system consisting of several bodies. Further, in vivo implementation of strain gauges on bone surfaces has been used to quantify the mechanical loading environment of the skeleton. However, in vivo strain measurement requires invasive methodology, which is challenging and limited to certain regions of superficial bones only, such as the anterior surface of the tibia. In this study, an alternative numerical approach to analyzing in vivo strains, based on the flexible multibody simulation approach, is proposed. In order to investigate the reliability of the proposed approach, three 3-dimensional musculoskeletal models, in which the right tibia is assumed to be flexible, are used as demonstration examples. The models are employed in a forward dynamics simulation in order to predict the tibial strains during a level walking exercise. The flexible tibial model is developed using the actual geometry of the subject's tibia, obtained from 3-dimensional reconstruction of magnetic resonance images. An inverse dynamics simulation based on motion capture data from walking at a constant velocity is used to calculate the desired contraction trajectory for each muscle. In the forward dynamics simulation, a proportional-derivative servo controller is used to calculate each muscle force required to reproduce the motion, based on the desired muscle contraction trajectory obtained from the inverse dynamics simulation. Experimental measurements are used to verify the models and to check their accuracy in replicating the realistic mechanical loading environment measured in the walking test. The strains predicted by the models are consistent with literature-based in vivo strain measurements. In conclusion, the non-invasive flexible multibody simulation approach may be used as a surrogate for experimental bone strain measurement, and thus be of use in detailed strain estimation of bones in different applications. Consequently, the information obtained from the present approach might be useful in clinical applications, including optimizing implant design and devising exercises to prevent bone fragility, accelerate fracture healing and reduce osteoporotic bone loss.
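The forward dynamics step uses a proportional-derivative servo controller to track each muscle's desired contraction trajectory. A minimal sketch of that control law follows; the gains and trajectory values are made up, not taken from the thesis.

```python
# Minimal PD servo control law for tracking a desired muscle contraction
# trajectory; gains and example values are illustrative only.
def pd_muscle_force(desired, actual, desired_vel, actual_vel,
                    kp=2000.0, kd=50.0):
    """Force command from position and velocity tracking errors."""
    return kp * (desired - actual) + kd * (desired_vel - actual_vel)

# Example: one time step of tracking (lengths in m, velocities in m/s)
f = pd_muscle_force(desired=0.032, actual=0.030,
                    desired_vel=0.10, actual_vel=0.08)
print(f"commanded force: {f:.1f} N")
```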
Abstract:
BACKGROUND: Enhanced recovery after surgery (ERAS) programmes have been shown to decrease complications and hospital stay. The cost-effectiveness of such programmes has been demonstrated for colorectal surgery. This study aimed to assess the economic outcomes of a standard ERAS programme for pancreaticoduodenectomy. METHODS: ERAS for pancreaticoduodenectomy was implemented in October 2012. All consecutive patients who underwent pancreaticoduodenectomy until October 2014 were recorded. This group was compared in terms of costs with a cohort of consecutive patients who underwent pancreaticoduodenectomy between January 2010 and October 2012, before ERAS implementation. Preoperative, intraoperative and postoperative real costs were collected for each patient via the hospital administration. A bootstrap independent t test was used for comparison. ERAS-specific costs were integrated into the model. RESULTS: The groups were well matched in terms of demographic and surgical details. The overall complication rate was 68 per cent (50 of 74 patients) and 82 per cent (71 of 87 patients) in the ERAS and pre-ERAS groups respectively (P = 0·046). Median hospital stay was lower in the ERAS group (15 versus 19 days; P = 0·029). ERAS-specific costs were €922 per patient. Mean total costs were €56 083 per patient in the ERAS group and €63 821 per patient in the pre-ERAS group (P = 0·273). The mean intensive care unit (ICU) and intermediate care costs were €9139 and €13 793 per patient for the ERAS and pre-ERAS groups respectively (P = 0·151). CONCLUSION: ERAS implementation for pancreaticoduodenectomy did not increase costs in this cohort. Savings were noted in anaesthesia/operating room, medication and laboratory costs. Fewer patients in the ERAS group required an ICU stay.
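Costs were compared with a bootstrap independent t test. The sketch below shows one common variant of that procedure, resampling the difference in group means under a shifted null hypothesis; the cost vectors are fabricated for illustration and are not the study data.

```python
# Bootstrap test for a difference in mean costs between two independent
# groups. The cost arrays are fabricated; group sizes mirror the abstract.
import numpy as np

rng = np.random.default_rng(1)
eras = rng.normal(56000, 20000, size=74)   # hypothetical ERAS costs (EUR)
pre = rng.normal(64000, 25000, size=87)    # hypothetical pre-ERAS costs (EUR)

observed = eras.mean() - pre.mean()

# Shift both groups to a common mean to resample under H0 (equal means)
pooled_mean = np.concatenate([eras, pre]).mean()
eras0 = eras - eras.mean() + pooled_mean
pre0 = pre - pre.mean() + pooled_mean

n_boot = 10_000
diffs = np.empty(n_boot)
for i in range(n_boot):
    diffs[i] = (rng.choice(eras0, eras0.size).mean()
                - rng.choice(pre0, pre0.size).mean())

p_value = (np.abs(diffs) >= abs(observed)).mean()  # two-sided
print(f"bootstrap p-value: {p_value:.3f}")
```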