178 results for Costing methods
Abstract:
Etiological diagnostics of community-acquired pneumonia in adult patients using rapid microbiological methods. Background. Pneumonia is a serious illness contracted by approximately 60,000 adults in Finland every year. Although treatment of the disease has improved, it still carries a considerable mortality of 6-15%. Identifying the microbes causing lower respiratory tract infections also remains challenging. Objectives. The aim of this work was to study the etiology of pneumonia in adult patients treated at Turku University Central Hospital and to assess the usefulness of new rapid microbiological methods in detecting the causative agent. Material. The material for Studies I and III consisted of 384 pneumonia patients treated on the infectious diseases ward of Turku University Central Hospital. In Study I, the causative microbes of pneumonia were investigated using, in addition to conventional methods, rapid methods based on antigen detection and PCR techniques. Study II comprised a subgroup of 231 patients whose pharyngeal mucus samples were examined for the presence of rhinoviruses and enteroviruses. In Study III, the patients' plasma C-reactive protein (CRP) concentrations were measured during the first five days of hospitalization, and extensive statistical analyses were used to determine the usefulness of CRP in assessing disease severity and predicting the development of complications. In Study IV, the expression of neutrophil surface receptors was determined from samples taken on hospital admission from 68 pneumonia patients. In Study V, the laboratory results of bronchoalveolar lavage (BAL) samples taken from pneumonia patients on internal medicine wards in 1996-2000 were analyzed. Results. A causative agent of pneumonia was identified in 209 patients, and a total of 230 causative microbes were found. Of these, 135 (58.7%) were detected by antigen detection or PCR methods, and the majority of those, 95 (70.4%), were detected only by these rapid methods. A respiratory virus was detected by antigen detection in 11.1% of the pneumonia patients; respiratory viruses were most common in patients with severe pneumonia (20.3%). In the subgroup of 231 pneumonia patients, a picornavirus was detected by PCR in 19 (8.2%) patients. In this group a respiratory virus was found in a total of 47 (20%) patients, 17 (36%) of whom had a concurrent bacterial infection. On admission, CRP levels were significantly higher in patients with severe pneumonia (PSI classes III-V) than in patients with mild pneumonia (PSI classes I-II) (p < 0.001). A CRP level above 100 mg/l four days after admission predicted a complication of pneumonia or a poor treatment response. Neutrophil complement receptor expression was significantly higher in patients with pneumococcal pneumonia than in patients with influenza pneumonia. Only one of the 71 BAL samples (1.3%) showed diagnostic bacterial growth in quantitative culture, and even with the new methods a causative agent was found in only 9.8% of the BAL samples. Conclusions. The new antigen detection and PCR methods allow the etiology of pneumonia to be established rapidly. Moreover, with these methods the causative microbe was identified in a considerably larger proportion of patients than with conventional methods alone. The usefulness of the rapid methods varied with the severity of the disease.
Respiratory viruses were found remarkably often in pneumonia patients, and the clinical picture in these patients was often severe. A high CRP level on admission can be used as an additional tool in assessing the severity of pneumonia; CRP is particularly useful in assessing the treatment response and the risk of developing complications. Measuring neutrophil complement receptor expression appears to be a promising rapid method for distinguishing bacterial from viral infections. In patients receiving antimicrobial therapy, the findings of BAL examinations were scant and only rarely influenced treatment.
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The decrease in reliability is a consequence of, among other things, physical limitations, the relative increase of variations, and shrinking noise margins. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods that introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent, and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, a routing protocol, and a flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. At this abstraction level, error control coding is an efficient fault tolerance method, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented: the introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault-tolerant network topologies and routing algorithms. Both approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
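As a concrete illustration of data link layer error control coding, the sketch below implements the classic Hamming(7,4) single-error-correcting code. It is a generic textbook stand-in only, not one of the thesis's on-chip coding schemes, and all names in it are illustrative:

```python
# Minimal Hamming(7,4) sketch: encodes 4 data bits into 7 bits and
# corrects any single bit flip, illustrating error control coding
# of the kind considered against transient faults on on-chip links.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """c: 7-bit codeword, possibly with one flipped bit -> corrected 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the flipped bit, 0 if none
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1         # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
sent = encode(word)
sent[4] ^= 1                         # inject a transient single-bit fault
assert decode(sent) == word
```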
Abstract:
Recent years have produced great advances in instrumentation technology. The amount of available data has been increasing owing to the simplicity, speed, and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without proper analysis, which has been one of the reasons for the ever-growing success of multivariate handling of such data. Industrial data are commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches to the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures, or partial least squares), but there are also other methods that should be considered; the more advanced ones include multi-block modeling and nonlinear modeling. This thesis shows that the results of data analysis vary according to the modeling approach used, making the selection of the approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should differ from the case where the purpose of modeling is mostly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply; in this way the methods and results can be compared and an approach selected that suits the intended purpose. Differences between data analysis methods are compared in this thesis using data from different fields of industry. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries, and the results are compared to those from PLS and priority PLS. The third paper considers the applicability of multivariate models to process control in a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry; the response has a nonlinear relation to the descriptor matrix, and the results of linear modeling, polynomial PLS, and nonlinear modeling using nonlinear score vectors are compared.
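As an orientation to the two baseline methods named above, the following sketch contrasts PCA, which models the variance of X alone, with PLS, which builds latent variables to predict a response y. The synthetic data and all variable names are illustrative, not from the thesis:

```python
# Illustrative sketch: PCA summarizes X without reference to y, while
# PLS chooses latent variables that covary with y. Synthetic data
# stands in for undesigned industrial process data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                               # 200 samples, 10 process variables
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=200)  # assumed response

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)                 # latent variables of X alone
print("PCA score matrix shape:", scores.shape)
print("variance explained:", pca.explained_variance_ratio_)

pls = PLSRegression(n_components=2).fit(X, y)
y_hat = pls.predict(X).ravel()            # latent variables chosen to predict y
print("first PLS prediction:", y_hat[0])
print("PLS R^2:", pls.score(X, y))
```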
Abstract:
Metaheuristic methods have become increasingly popular approaches to solving global optimization problems. From a practical viewpoint, it is often desirable to perform multimodal optimization, which enables the search for more than one optimal solution to the task at hand. Population-based metaheuristic methods offer a natural basis for multimodal optimization, and the topic has received increasing interest especially in the evolutionary computation community. Several niching approaches have been suggested to allow multimodal optimization using evolutionary algorithms. Most global optimization approaches, including metaheuristics, contain global and local search phases. The requirement to locate several optima places additional demands on algorithm design if the algorithms are to be effective in both respects in the context of multimodal optimization. In this thesis, several multimodal optimization algorithms are studied with regard to how their implementation of the global and local search phases affects their performance on different problems. The study concentrates especially on variations of the Differential Evolution algorithm and their capabilities in multimodal optimization. To separate the global and local search phases, three multimodal optimization algorithms are proposed, two of which hybridize Differential Evolution with a local search method. As the theoretical background behind the operation of metaheuristics is not generally thoroughly understood, the research relies heavily on experimental studies to determine the properties of different approaches. To obtain reliable experimental information, the experimental environment must be carefully chosen to contain appropriate and adequately varying problems. The available selection of multimodal test problems is, however, rather limited, and no general framework exists. As a part of this thesis, such a framework for generating tunable test functions for evaluating different multimodal optimization methods experimentally is provided and used for testing the algorithms. The results demonstrate that an efficient local phase is essential for creating efficient multimodal optimization algorithms. Adding a suitable global phase has the potential to boost performance significantly, but a weak local phase may nullify the advantages gained from the global phase.
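For readers unfamiliar with the base algorithm, the sketch below is a minimal DE/rand/1/bin Differential Evolution loop applied to a standard multimodal test function. It is the generic textbook version; the thesis's niching variants and hybrid local-search phases are not reproduced, and all parameter values are illustrative:

```python
# Minimal DE/rand/1/bin sketch (illustrative only).
import numpy as np

def de(f, bounds, pop_size=30, F=0.5, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct individuals other than the target.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(dim) < CR                # binomial crossover mask
            cross[rng.integers(dim)] = True             # ensure one gene crosses
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                            # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

# Multimodal test function: many local minima, global minimum at the origin.
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
best, val = de(rastrigin, [(-5.12, 5.12)] * 2)
print(best, val)
```

Note that this plain version converges toward a single optimum; the niching mechanisms studied in the thesis exist precisely to keep the population spread over several optima at once.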
Abstract:
The aim of this thesis was to analyze the background information of an activity-based costing system used in a domestic forest industry company. The reports produced by the system have not been reliable, which has reduced the utilization of the system. The study began by examining the theory of activity-based costing. Since the system was found to produce management accounting information, that theory was also introduced briefly. Next, the possible sources of errors were examined, their significance was evaluated, and waste handling was chosen as the subject of further study. The problem regarding waste handling is that there is no waste compensation in the current model. When a paper or board machine produces waste, the waste can be used as raw material in the process; at present, however, the product being produced at the time receives no compensation for it. Using compensation has not been possible because the quantity of process waste has not been known. As a result of the study, a calculation model was introduced that makes it possible to estimate the quantity of process waste from mill-system data. This, in turn, enables waste compensation to be adopted in the future.
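A hedged sketch of the waste-compensation mechanism follows: broke returned to the process as raw material earns the producing product a credit. All figures are hypothetical and the cost structure is deliberately simplified; only the mechanism, not the thesis's model, is illustrated:

```python
# Hypothetical waste-compensation sketch (figures are not from the thesis):
# broke produced by a paper or board machine is returned to the process as
# raw material, so the product that produced it is credited for that value.

gross_output_t = 1000.0        # tonnes produced, including waste (assumed)
process_waste_share = 0.05     # waste share estimated from mill-system data (assumed)
raw_material_cost = 380.0      # EUR per tonne of virgin furnish (assumed)
broke_value = 250.0            # EUR per tonne credited for reusable broke (assumed)

waste_t = gross_output_t * process_waste_share
net_output_t = gross_output_t - waste_t

cost_without_compensation = gross_output_t * raw_material_cost
compensation = waste_t * broke_value      # credit for broke reused as raw material
cost_with_compensation = cost_without_compensation - compensation

print(f"raw material cost per saleable tonne, uncompensated: "
      f"{cost_without_compensation / net_output_t:.2f} EUR/t")
print(f"raw material cost per saleable tonne, compensated:   "
      f"{cost_with_compensation / net_output_t:.2f} EUR/t")
```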
Abstract:
Strategic development of distribution networks plays a key role in the asset management of electricity distribution companies. Owing to the capital-intensive nature of the field and the long time span of companies' operations, the significance of a strategy is emphasised. A well-devised strategy combines awareness of the challenges posed by the operating environment with the future targets of the distribution company. Economic regulation, ageing infrastructure, scarcity of resources, and tightening supply requirements, together with challenges created by climate change, put pressure on the strategy work. On the other hand, technology development related to network automation and underground cabling assists in answering these challenges. This dissertation aims at developing process knowledge and establishing a methodological framework by which key issues related to network development can be addressed. Moreover, the work develops tools by which the effects of changes in the operating environment on the distribution business can be analysed in the strategy work. To this end, the work discusses certain characteristics of the distribution business and describes the strategy process on a general level. Further, the work defines the subtasks in the strategy process and presents the key elements in the strategy work and long-term network planning. The work delineates the factors having either a direct or indirect effect on strategic planning and on development needs in the networks; in particular, outage costs constitute an important part of the economic regulation of the distribution business, reliability thus being a key driver in network planning. The dissertation describes the methodology and tools applied to cost and reliability analyses in the strategy work. The work focuses on determining the techno-economic feasibility of different network development technologies; these feasibility surveys are linked to the economic regulation model of the distribution business, in particular from the viewpoint of the reliability of electricity supply and the allowed return. The work introduces the asset management system developed for research purposes and to support the strategy work, the calculation elements of the system, and the initial data used in the network analysis. The key elements of this asset management system are utilised in the dissertation. Finally, the study addresses the stages of strategic decision-making and the compilation of investment strategies, and illustrates the implementation of strategic planning in an actual distribution company environment.
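Outage cost calculations of the kind referred to above commonly price each interruption with a per-kW term for the interrupted power and a per-kWh term for the undelivered energy. The sketch below assumes this common structure; the unit prices are hypothetical, not those of any actual regulation model or of the thesis:

```python
# Hedged sketch of an outage cost calculation (unit prices are hypothetical).

def outage_cost(interruptions, cost_per_kw=1.1, cost_per_kwh=11.0):
    """interruptions: list of (interrupted_power_kw, duration_h) tuples.
    Each interruption costs a fixed per-kW term plus an energy term."""
    return sum(p * cost_per_kw + p * t * cost_per_kwh for p, t in interruptions)

# Three assumed faults on a feeder: 200 kW for 0.5 h, 120 kW for 2 h, 60 kW for 0.1 h.
print(outage_cost([(200, 0.5), (120, 2.0), (60, 0.1)]), "EUR/a")
```

With costs of this form, investments such as underground cabling that reduce either the number or the duration of interruptions translate directly into lower outage costs, which is how reliability enters the techno-economic feasibility comparisons described above.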
Abstract:
Stratospheric ozone can be measured accurately using a limb scatter remote sensing technique in the UV-visible spectral region of solar light. The advantages of this technique include good vertical resolution and good daytime coverage of the measurements. In addition to ozone, UV-visible limb scatter measurements contain information about NO2, NO3, OClO, BrO, and aerosols. Several satellite instruments currently scan the atmosphere continuously and measure the UV-visible region of the spectrum, e.g., the Optical Spectrograph and Infrared Imager System (OSIRIS) launched on the Odin satellite in February 2001, and the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) launched on Envisat in March 2002. Envisat also carries the Global Ozone Monitoring by Occultation of Stars (GOMOS) instrument, which also measures limb-scattered sunlight under bright limb occultation conditions; these conditions occur during daytime occultation measurements. The global coverage of the satellite measurements is far better than that of any other ozone measurement technique, but the measurements are still sparse in the spatial domain. Measurements over a given area are also repeated relatively rarely, while the composition of the Earth's atmosphere changes dynamically. Assimilation methods are therefore needed to combine the information from the measurements with an atmospheric model. In recent years, the focus of assimilation algorithm research has turned towards filtering methods. The traditional Extended Kalman filter (EKF) method takes into account not only the uncertainty of the measurements but also the uncertainty of the evolution model of the system. However, the computational cost of a full-blown EKF increases rapidly as the number of model parameters increases, so the EKF method cannot be applied directly to the stratospheric ozone assimilation problem. The work in this thesis is devoted to the development of inversion methods for satellite instruments and of assimilation methods used with atmospheric models.
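For orientation, the sketch below shows a single predict-update cycle of a linear Kalman filter, the core operation that the EKF applies to a linearized model. It is a textbook step under simplified assumptions, not one of the assimilation methods developed in the thesis:

```python
# Minimal (linear) Kalman filter step -- the cycle that the Extended
# Kalman filter applies to a linearized evolution model. Illustrative only.
import numpy as np

def kalman_step(x, P, F, Q, H, R, y):
    """One predict+update cycle.
    x, P : state estimate and its covariance
    F, Q : evolution model and model-error covariance
    H, R : observation operator and measurement-error covariance
    y    : measurement vector
    """
    # Predict: propagate the state and inflate uncertainty with model error.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: weight the measurement against the prediction by their covariances.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (y - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# One-dimensional usage: a random walk observed directly with noise.
x, P = np.array([0.0]), np.array([[1.0]])
F = np.eye(1)
Q = 0.01 * np.eye(1)
H = np.eye(1)
R = 0.25 * np.eye(1)
x, P = kalman_step(x, P, F, Q, H, R, y=np.array([1.2]))
print(x, P)
```

The covariance propagation P = FPFᵀ + Q is what makes the full method expensive: storage grows with the square of the state dimension and the linear algebra with its cube, which is why a full-blown EKF cannot be applied directly to a high-dimensional ozone field.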
Abstract:
In distributed organizations, a temporary or permanent group of people works apart from one another toward a shared goal, using information and communication technology. The aim of this thesis is to examine what good practices exist for managing distributed work. A further aim is to study how the costs of distributed work differ from the costs of traditional work. The case company of the thesis is an IT service company in which the distributed way of working has been in use for some time. The theoretical part of the thesis covers the principles of distributed work and activity-based costing. These topics are applied in the empirical part in a web survey on distributed work and good practices for managing it, and in a cost structure model with which the cost structures of the distributed and traditional ways of working are modeled by means of activity-based costing. The employees of the case company have experienced their work community as somewhat distributed. The challenges of distributed work have been more visible than its benefits in the case company and in the employees' daily work. Of the good practices for managing distributed work discussed in the thesis, more than half have been utilized fairly extensively in the case company, while a few practices have been utilized less. Based on the cost structure model implemented in the thesis, it can be concluded that the distributed way of working can achieve a cost advantage of approximately 12% compared with the traditional way of working.
Abstract:
The nutrient load to the Gulf of Finland has started to increase as a result of the strong economic recovery in agriculture and livestock farming in the Leningrad region. Sludge produced by the municipal wastewater treatment plants of the Leningrad region also has a great impact on the environment, yet the main options for its treatment are still disposal on sludge beds or in landfills. The aim of this study was to evaluate the implementation of possible joint treatment methods for manure from livestock and poultry enterprises and sewage sludge produced by municipal wastewater treatment plants in the Leningrad region. The study is based on published data. Most attention was paid to the anaerobic digestion and incineration methods. The manure and sewage sludge generation for the whole Leningrad region and the energy potential of their treatment were estimated. The calculations showed that the total sewage sludge generation is 1 348 000 t/a and the manure generation 3 445 000 t/a, both calculated on a wet matter basis. The potential heat release from the anaerobic digestion and incineration processes is 4 880 000 GJ/a and 5 950 000 GJ/a, respectively. Furthermore, the work gives an overview of the general Russian and Finnish legislation concerning manure and sewage sludge treatment. In the Gatchina district, a wastewater treatment plant and livestock and poultry enterprises were chosen for evaluating the implementation of a centralized treatment plant based on the anaerobic digestion and incineration methods. The electric and heat power of a plant based on biogas combustion would be 4.3 MW and 7.8 MW, respectively; the electric and heat power of a plant based on manure and sewage sludge incineration would be 3.0 MW and 6.1 MW, respectively.
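The annual heat releases quoted above can be put into perspective by converting them to average continuous thermal power; the sketch below performs only this unit conversion on the region-wide totals from the abstract:

```python
# Unit-conversion sketch: annual heat release (GJ/a) expressed as average
# continuous thermal power (MW). The inputs are the region-wide totals
# quoted in the abstract; only the conversion itself is illustrated.

SECONDS_PER_YEAR = 365 * 24 * 3600

def gj_per_year_to_mw(gj_per_year):
    return gj_per_year * 1e9 / SECONDS_PER_YEAR / 1e6  # GJ -> J -> W -> MW

print(f"anaerobic digestion: {gj_per_year_to_mw(4_880_000):.0f} MW average")
print(f"incineration:        {gj_per_year_to_mw(5_950_000):.0f} MW average")
```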
Abstract:
Agile coaching of a project team is one way to support learning of agile methods. The objective of this thesis is to present an agile coaching plan and to follow how complying with the plan affects the project teams. In addition, how well the agile methods work in the projects is followed. Two projects are used in the research. From the viewpoint of this thesis, the task in the first project is to coach the project team and two new coaches. The task in the second project is likewise to coach the project team, but this time with one of the new coaches acting as the coach. The projects utilize the agile methods of the Scrum process and Extreme Programming; within the latter, test-driven development, continuous integration, and pair programming are examined more closely. The results of the work are based on observations from the projects and the analysis derived from them. The results are divided into the effects of the coaching and the functionality of the agile methods in the projects. Because of the small sample, the results are indicative only. The presented plan for coaching agile methods needs further development, but the results on the functionality of the agile methods are encouraging.
Abstract:
Welding is one of the most important processes in modern industry. Welding technology is used in the manufacture and repair of a wide variety of products made from different metals and alloys. This thesis discusses different aspects of arc welding, such as the stability and control of the welding arc and power supplies for arc welding, especially welding inverters, as these are the most modern welding power sources. All parameters of the power source influence the arc parameters, which in turn influence weld quality. The ways of controlling arc welding inverter power sources are considered, and calculations and Matlab/Simulink modeling were carried out for the PI control method.
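As a minimal illustration of the PI control method mentioned above, the sketch below regulates the output current of a first-order stand-in for the inverter toward a setpoint. The gains and plant constants are assumed demonstration values, not those of the thesis's Matlab/Simulink model:

```python
# Minimal discrete-time PI controller sketch (illustrative only): the
# controller drives a first-order stand-in for the inverter's output
# current toward a setpoint, simulated with explicit Euler steps.

dt = 1e-4              # control period, s (assumed)
kp, ki = 0.8, 200.0    # PI gains (assumed)
tau, gain = 2e-3, 1.0  # first-order plant: time constant and DC gain (assumed)

setpoint = 150.0       # target arc current, A
current = 0.0          # plant output
integral = 0.0         # accumulated error

for step in range(1000):
    error = setpoint - current
    integral += error * dt
    u = kp * error + ki * integral           # PI control law
    # First-order plant response: dI/dt = (gain*u - I) / tau
    current += (gain * u - current) / tau * dt

print(f"current after {1000 * dt * 1e3:.0f} ms: {current:.1f} A")
```

The integral term is what removes the steady-state error that a purely proportional controller would leave; in a real inverter the same law would be implemented in the switching controller rather than in software like this.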
Abstract:
The aim of this Master's thesis was to build, for a small company manufacturing metal products, a cost accounting model that would be easy to use and maintain, would take into account the resources available, and would make as much use as possible of the company's existing computer software. Particular aims were to improve the traceability of costs to the different products and to document the company's cost accounting practice. On the basis of an analysis of the current state, the products manufactured, the production methods, the company's cost structure, the available resources, and the software in use, absorption costing was chosen as the basis of the cost accounting model to be developed. The thesis describes the stages of building the model and the determination of machine hour rates. The completed model was tested by calculating the full costs of five different products. Finally, needs for further development were considered.
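A hedged sketch of the machine hour rate idea follows: the annual costs attributable to a machine are divided by its productive hours, and the resulting rate absorbs overhead into a product's full cost. All figures are hypothetical, not the company's:

```python
# Hypothetical machine hour rate sketch (figures are not from the thesis).

annual_costs = {
    "depreciation": 12_000.0,   # EUR/a (assumed)
    "maintenance":   3_500.0,
    "energy":        2_800.0,
    "floor_space":   1_200.0,
}
productive_hours = 1_600.0      # machine hours per year (assumed)

machine_hour_rate = sum(annual_costs.values()) / productive_hours
print(f"machine hour rate: {machine_hour_rate:.2f} EUR/h")

# Absorbing cost into a product: 0.75 machine hours plus direct costs (assumed).
direct_material, direct_labour = 18.40, 9.20   # EUR per unit
unit_cost = direct_material + direct_labour + 0.75 * machine_hour_rate
print(f"full cost per unit: {unit_cost:.2f} EUR")
```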