63 results for Variance monitoring
Abstract:
Condition monitoring systems for physical assets are becoming increasingly common in the industrial sector, and a growing share of asset monitoring systems are remotely supported. As global competitors are actively developing solutions for condition monitoring and the condition-based maintenance it enables, Wärtsilä too feels the pressure to provide customers with more sophisticated condition-based maintenance solutions. The main aim of this thesis is to examine Wärtsilä's remote condition monitoring solutions and how they relate to similar solutions from other suppliers and to end customers' needs, in the context of offshore assets. The thesis also includes a theoretical study introducing the concepts of condition monitoring, condition-based maintenance, maintenance management and physical asset management.
Abstract:
Aims: This study was carried out to evaluate the feasibility of two different methods for determining free flap perfusion in cancer patients undergoing major reconstructive surgery. The hypothesis was that low perfusion in the flap is associated with flap complications. Patients and methods: Between August 2002 and June 2008, at the Department of Otorhinolaryngology – Head and Neck Surgery, the Department of Surgery, and the PET Centre, Turku, 30 consecutive patients with 32 free flaps were included in this study. The perfusion of the free microvascular flaps was assessed with positron emission tomography (PET) and radioactive water ([15O]H2O) in 40 radiowater injections in 33 PET studies. Furthermore, 24 free flaps were monitored with continuous tissue oxygen measurement using flexible polarographic catheters for an average of three postoperative days. Results: Of the 17 patients operated on for head and neck (HN) cancer and reconstructed with 18 free flaps, three re-operations were carried out due to poor tissue oxygenation as indicated by ptiO2 monitoring results, and three other patients were reoperated on for postoperative hematomas in the operated area. Blood perfusion assessed with PET (BFPET) was above 2.0 mL/min/100 g in all flaps, and a low flap-to-muscle BFPET ratio appeared to correlate with poor survival of the flap. Survival in this group of HN cancer patients was 9.0 months (median; range 2.4–34.2) after a median follow-up of 11.9 months (range 1.0–61.0 months). Seven HN patients of this group are alive without any sign of recurrence, and one patient has died of other causes. All of the 13 breast reconstruction patients included in the study are alive and free of disease at a median follow-up of 27.4 months (range 13.9–35.7 months). Re-explorations were carried out in three patients due to data provided by ptiO2 monitoring, and one re-exploration was avoided on the basis of adequate blood perfusion assessed with PET.
Two patients had donor-site morbidity and three patients had partial flap necrosis or fat necrosis. There were no total flap losses. Conclusions: PtiO2 monitoring is a feasible method of free flap monitoring when the flap temperature is monitored and maintained close to the core temperature. When other monitoring methods give conflicting results or are unavailable, the [15O]H2O PET technique is feasible for evaluating the perfusion of newly reconstructed free flaps.
Abstract:
Selected papers from the workshop "Development of models and forest soil surveys for monitoring of soil carbon", Koli, Finland, April 5–9, 2006.
Abstract:
Crystallization is a purification method used to obtain a crystalline product of a certain crystal size. It is one of the oldest industrial unit processes and is commonly used in modern industry due to its good purification capability from rather impure solutions with reasonably low energy consumption. However, the process is extremely challenging to model and control because it involves inhomogeneous mixing and many simultaneous phenomena such as nucleation, crystal growth and agglomeration. All these phenomena depend on supersaturation, i.e. the difference between the actual liquid-phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify the modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. Understanding hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, like nucleation and chemical reactions, are typically highly dependent on micro- and mesomixing, but macromixing, which equalizes the concentrations of all the species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD), and a compartmental multiblock model is compared. The importance of proper verification of the CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on reactive crystallization process control is studied.
Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect induction time, cluster formation, nucleation, growth kinetics, and agglomeration. Consequently, the properties of the crystalline end products, e.g. crystal size and crystal habit, can be influenced by managing the mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on MSMPR theory showed that the model is feasible in a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization with inhomogeneous mixing. The present work also shows that the selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in the reactive crystallization of L-glutamic acid.
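The driving force referred to throughout this abstract can be stated directly in code. A minimal sketch of the definition given above (the concentration values are illustrative, not taken from the thesis):

```python
def supersaturation(c, c_sat):
    """Absolute and relative supersaturation from the actual
    liquid-phase concentration c and the solubility c_sat
    (both in the same units, e.g. g/L)."""
    delta_c = c - c_sat       # absolute driving force for nucleation/growth
    s_rel = delta_c / c_sat   # dimensionless relative supersaturation
    return delta_c, s_rel

# Example: a solution holding 130 g/L while the solubility is 100 g/L
dc, s = supersaturation(130.0, 100.0)
```

In an inhomogeneously mixed crystallizer this quantity varies from point to point, which is exactly why the measurement-point selection discussed above matters.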
Abstract:
Wireless sensor networks and their applications have been widely researched and implemented in both commercial and non-commercial areas. The usage of wireless sensor networks has expanded from military applications to everyday life. Monitoring applications of wireless sensor networks range from home monitoring, farm fields and habitat monitoring to structural monitoring of buildings. As the usage boundaries of wireless sensor networks and their applications expand, research is ongoing on topics such as the lifetime of wireless sensor networks, the security of sensor nodes, and extending the applications to modern scenarios such as web services. The main focus of this thesis work is to study and implement a monitoring application for an infrastructure-based sensor network and to expand its usability as a web service that serves mobile clients. The developed application collects and monitors information from wireless sensor nodes, enabling remote monitoring of a home or office environment by a user.
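The core of such a monitoring web service is packaging node readings into a format a mobile client can consume. A minimal sketch, with simulated nodes standing in for real radio queries (node names and values are hypothetical):

```python
import json

def collect_readings(nodes):
    """Poll each sensor node (simulated here as callables) and
    package the readings as a JSON-serializable payload that a
    monitoring web service could return to mobile clients."""
    return {
        "timestamp": 1700000000,  # fixed for reproducibility; a live system would stamp the poll time
        "readings": [{"node": nid, "value": read()} for nid, read in nodes.items()],
    }

# Simulated nodes: in a real deployment these would query the sensor radios.
nodes = {"livingroom": lambda: 21.5, "office": lambda: 23.1}
payload = json.dumps(collect_readings(nodes))
```

Serving `payload` over HTTP is then a thin layer on top; the point is that the sensor data becomes accessible to any client that speaks JSON.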
Abstract:
Centrifugal pumps are widely used in industrial and municipal applications, and they are an important end-use application of electric energy. However, in many cases centrifugal pumps operate with a significantly lower energy efficiency than they actually could, which typically increases the pump's energy consumption and the resulting energy costs. Typical reasons for this are incorrect dimensioning of the pumping system components and inefficiency of the applied pump control method. Besides increasing energy costs, inefficient operation may increase the risk of a pump failure and thereby the maintenance costs. In the worst case, a pump failure may lead to a process shutdown, accruing additional costs. Nowadays, centrifugal pumps are often controlled by adjusting their rotational speed, which affects the resulting flow rate and output pressure of the pumped fluid. Typically, the speed control is realised with a frequency converter that allows control of the rotational speed of an induction motor. Since a frequency converter can estimate the motor rotational speed and shaft torque without external measurement sensors on the motor shaft, it also allows the development and use of sensorless methods for estimating the pump operation. Still today, the monitoring of pump operation is based on additional measurements and visual check-ups, which may not be applicable for determining the energy efficiency of the pump operation. This doctoral thesis concentrates on methods that allow the use of a frequency converter as a monitoring and analysis device for a centrifugal pump. Firstly, the determination of energy-efficiency- and reliability-based limits for the recommended operating region of a variable-speed-driven centrifugal pump is discussed with a case study for the laboratory pumping system. Then, three model-based estimation methods for the pump operating location are studied, and their accuracy is determined by laboratory tests.
In addition, a novel method to detect the occurrence of cavitation or flow recirculation in a centrifugal pump by a frequency converter is introduced. Its sensitivity compared with known cavitation detection methods is evaluated, and its applicability is verified by laboratory measurements for three different pumps and by using two different frequency converters. The main focus of this thesis is on the radial flow end-suction centrifugal pumps, but the studied methods can also be feasible with mixed and axial flow centrifugal pumps, if allowed by their characteristics.
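One family of model-based operating point estimation can be sketched as follows: the converter's speed and torque estimates give shaft power, which is located on the pump's characteristic power curve after scaling that curve to the actual speed with the affinity laws (Q ∝ n, P ∝ n³). This is a sketch only, not the thesis's exact method, and the curve points below are illustrative:

```python
# Illustrative nominal power curve at N0 = 1450 rpm: (flow m^3/h, shaft power kW)
N0 = 1450.0
QP_CURVE = [(0.0, 2.0), (20.0, 3.0), (40.0, 3.8), (60.0, 4.2)]

def flow_from_power(p_shaft, n):
    """Estimate the flow rate from shaft power and rotational speed.
    The nominal curve is scaled by the affinity laws (Q ~ n, P ~ n^3)
    and the flow is found by linear interpolation on the scaled curve."""
    k = n / N0
    curve = [(q * k, p * k**3) for q, p in QP_CURVE]   # curve at speed n
    for (q1, p1), (q2, p2) in zip(curve, curve[1:]):
        if p1 <= p_shaft <= p2:
            t = (p_shaft - p1) / (p2 - p1)
            return q1 + t * (q2 - q1)
    raise ValueError("shaft power outside the scaled curve range")

q = flow_from_power(3.0, 1450.0)   # at nominal speed, 3.0 kW lands at 20 m^3/h
```

A real implementation would interpolate on manufacturer curve data and account for the monotonicity limits of the QP curve, which is one reason several estimation methods are compared in the thesis.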
Abstract:
The mechanical properties of aluminium alloys are strongly influenced by the alloying elements and their concentrations. In the case of aluminium alloy EN AW-6060, the main alloying elements are magnesium and silicon. The first goal of this thesis was to determine stability, repeatability and sensitivity as figures of merit of the in-situ melt identification technique. In this study, the emissions from the laser welding process were monitored with a spectrometer. With the information produced by the spectrometer, a quantitative analysis was conducted to determine the figures of merit. The quantitative analysis concentrated on magnesium and aluminium emissions and their ratio. The results showed that the stability of absolute intensities was low, but the normalized magnesium emissions were quite stable. The repeatability of monitoring magnesium emissions was high (about 90 %). The sensitivity of the in-situ melt identification technique was also high: a change in magnesium content as small as 0.5 % was detected by the spectrometer. The second goal of this study was to determine the loss of mass during deep penetration laser welding. The amount of magnesium in the material was measured before and after laser welding to determine the loss of magnesium. This study was conducted for aluminium alloys with nominal magnesium contents of 0–10 % and for the standard material EN AW-6060 welded with filler wire AlMg5. It was found that while the magnesium concentration in the material changed, the loss of magnesium remained fairly even; with filler wire feeding, the behaviour was similar. Thirdly, the reason why silicon had not been detected in the emission spectrum needed to be explained. A literature survey showed that the energy required to excite silicon is considerably higher than that for magnesium; the energy input in the welding process used is insufficient to excite the silicon atoms.
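The normalization mentioned above is why the magnesium signal was stable even though absolute intensities were not: dividing the magnesium line intensity by the aluminium line cancels overall fluctuations in emission strength. A minimal sketch with illustrative intensity values (not measured data from the thesis):

```python
def normalized_mg(i_mg, i_al):
    """Normalize the magnesium emission line intensity by the
    aluminium line so that process-wide intensity fluctuations
    cancel out, leaving a composition-dependent ratio."""
    return i_mg / i_al

# Two spectra where the absolute intensity doubles (an unstable plume)
# but the melt composition is unchanged: the ratio stays the same.
r1 = normalized_mg(500.0, 2000.0)
r2 = normalized_mg(1000.0, 4000.0)
```

A change in this ratio, rather than in the raw counts, is what would indicate a change in magnesium content.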
Abstract:
This thesis was done as part of the FuncMama project, a joint project between the Technical Research Centre of Finland (VTT), Oulu University (OY), Lappeenranta University of Technology (LUT) and Finnish industrial partners. The main goal of the project is to manufacture electrical and mechanical components from mixed materials using laser sintering. The aim of this study was to create laser-sintered pieces from a ceramic material and to monitor the sintering event using a spectrometer. A spectrometer is a device capable of recording the intensity of different wavelengths as a function of time. In this study, the laser sintering process was monitored with equipment consisting of an Ocean Optics spectrometer, an optical fibre and an optical lens (detector head). Light from the sintering process first hits the lens system, which guides it into the optical fibre; the fibre transmits the light to the spectrometer, where the intensity of each wavelength is detected. The optical lens of the spectrometer was rigidly mounted and did not move along with the laser beam. The data collected from the laser sintering process with the spectrometer was converted with the Excel spreadsheet program for the evaluation of the results. The laser used was an IPG Photonics pulsed fibre laser. The laser parameters were kept mainly constant during the experimental part, and only the sintering speed was changed. That way it was possible to find differences in the monitoring results without too many parameters mixing together and affecting the conclusions. The sintered parts had one layer and a size of 5 x 5 mm. The material was CT2000 tape manufactured by Heraeus, which was later processed into powder. Monitoring of different sintering speeds was tested using CT2000 reference powder.
Moreover, tests on how different materials affect the process monitoring were carried out by adding a foreign powder, Du Pont 951, which had suffered in re-grinding and was more reactive than CT2000. Adding foreign material simulates a situation where two materials are accidentally mixed together, and it was studied whether this can be seen with the spectrometer. It was concluded in this study that the spectrometer makes it possible to detect changes between different laser sintering speeds. When the sintering speed is lowered, the intensity level of light from the process is higher. This results from the higher temperature at the sintering spot, which can be noticed with the spectrometer. This indicates that the spectrometer could be used as a tool for process observation and supports the idea of a system that helps to set up the process parameter window. Another important conclusion was how clearly the addition of foreign material could be seen with the spectrometer. When the second material was added, a significant rise in the intensity level could be noticed in the part where the foreign material was mixed. This indicates that it is possible to see whether there are variations in the material or whether several materials are mixed together. Spectrometric monitoring of laser sintering could be a useful tool for process window observation and temperature control of the sintering process, for example when the process window for a specific material has been experimentally determined to obtain the wanted properties and a satisfying sintering speed. If the data is constantly recorded, the results can reveal faults in the part texture between layers: deviations between the monitoring data and the experimentally determined values can indicate changes in the material caused by material faults or by wrong process parameters. The results of this study show that the spectrometer could be one possible monitoring tool, but much more research is needed before all this can be made possible.
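The foreign-material observation above amounts to flagging scan positions where the recorded intensity rises well above the running baseline. A minimal sketch of such a check, with a simulated intensity trace and an illustrative threshold factor (neither taken from the thesis):

```python
def flag_foreign_material(intensities, factor=1.5):
    """Flag sample indices whose intensity exceeds the running
    baseline by `factor`; a sharp rise suggests a hotter spot,
    e.g. a more reactive foreign powder in the sintered track."""
    flagged, baseline = [], intensities[0]
    for i, level in enumerate(intensities):
        if level > factor * baseline:
            flagged.append(i)
        else:
            # update the baseline only on normal samples
            baseline = 0.9 * baseline + 0.1 * level
    return flagged

# Simulated trace: stable CT2000 signal with a contaminated region
trace = [100, 102, 98, 101, 250, 260, 99, 100]
hits = flag_foreign_material(trace)
```

Real spectrometer data would be noisier and wavelength-resolved, but the same idea of comparing against an experimentally determined baseline applies.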
Abstract:
The focus of this thesis is to study both the technical and economic possibilities of novel on-line condition monitoring techniques in underground low voltage distribution cable networks. The thesis consists of a literature study of fault progression mechanisms in modern low voltage cables, laboratory measurements to determine the basis and limitations of the novel on-line condition monitoring methods, and an economic evaluation based on fault statistics and information gathered from Finnish distribution system operators. This thesis is closely related to the master's thesis "Channel Estimation and On-line Diagnosis of LV Distribution Cabling", which focuses more on the actual condition monitoring methods and the signal theory behind them.
Abstract:
In this thesis, a control system for an intelligent low voltage energy grid is presented. The focus is on a control system created using a multi-agent approach, which makes it versatile and easy to expand according to future needs. The control system is capable of forecasting future energy consumption and of making decisions on its own, without human interaction, when encountering problems. The control system is part of the St. Petersburg State Polytechnic University's smart grid project, which aims to create a smart grid for the university's own use. The smart grid concept is interesting for consumers as well, as it brings new possibilities to control their own energy consumption and to save money. Smart grids make it possible to monitor energy consumption in real time and to change one's habits accordingly. The intelligent grid also makes it possible to integrate renewable energy sources into global or local energy production much better than current systems do, and consumers can sell their surplus power to the grid if they want.
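The forecasting capability mentioned above can be illustrated with the simplest possible approach: one-step-ahead prediction by exponential smoothing. This is a generic sketch, not the thesis's actual forecasting method, and the consumption figures are invented:

```python
def forecast_next(history, alpha=0.5):
    """One-step-ahead consumption forecast by simple exponential
    smoothing; alpha in (0, 1] weights recent observations more
    heavily, so the forecast tracks recent changes in load."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hourly consumption in kWh; the forecast follows the recent rise
kwh = [2.0, 2.0, 4.0]
f = forecast_next(kwh)
```

In a multi-agent system, each agent could run such a forecast on its own local load and negotiate with neighbours, which is what avoids the need for centralized human intervention.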
Abstract:
Over the past decade, organizations worldwide have begun to widely adopt agile software development practices, which offer greater flexibility towards frequently changing business requirements, better cost effectiveness due to the minimization of waste, faster time-to-market, and closer collaboration between business and IT. At the same time, IT services continue to be increasingly outsourced to third parties, providing organizations with the ability to focus on their core capabilities as well as to take advantage of better demand scalability, access to specialized skills, and cost benefits. An output-based pricing model, where the customer pays directly for the functionality delivered rather than the effort spent, is quickly becoming a new trend in IT outsourcing; it transfers risk away from the customer while offering much better incentives for the supplier to optimize processes and improve efficiency, producing a true win-win outcome. Despite the widespread adoption of both agile practices and output-based outsourcing, there is little formal research on how the two can be effectively combined in practice. Moreover, little practical guidance exists on how companies can measure the performance of agile projects delivered in an output-based outsourced environment. This research attempted to shed light on this issue by developing a practical project monitoring framework that may be readily applied by organizations to monitor the performance of agile projects in an output-based outsourcing context, thus taking advantage of the combined benefits of such an arrangement. Adapted from the action research approach, this research was divided into two cycles, each consisting of the Identification, Analysis, Verification, and Conclusion phases.
During Cycle 1, a list of six Key Performance Indicators (KPIs) was proposed and accepted by the professionals in the studied multinational organization; this list formed the core of the proposed framework and answered the first research sub-question of what needs to be measured. In Cycle 2, a more in-depth analysis was provided for each of the suggested Key Performance Indicators, including the techniques for capturing, calculating, and evaluating the information provided by each KPI. In the course of Cycle 2, the second research sub-question was answered, clarifying how the data for each KPI needs to be measured, interpreted, and acted upon. Consequently, after two incremental research cycles, the primary research question was answered, describing the practical framework that may be used for monitoring the performance of agile IT projects delivered in an output-based outsourcing context. The framework was evaluated by professionals within the context of the studied organization and received positive feedback across all four evaluation criteria set forth in this research: the low overhead of data collection, the high value of the provided information, the understandability of the metric dashboard, and the generalizability of the proposed framework.
Abstract:
The rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated on a single chip. An emerging challenge is the implementation of reliable and efficient interconnections between these cores as well as the other components in the system. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems; these can be transient faults, permanent manufacturing faults, or faults that appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes, and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long; the system should therefore be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems.
The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
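The distribution principle above can be illustrated with a toy model: each cluster monitor evaluates only the buffer occupancy of its own nodes and flags congestion locally, so no central controller sees the whole network. This sketch is a generic illustration, not the thesis's SystemC model; the node counts and threshold are invented:

```python
def cluster_congestion(occupancy, clusters, threshold=0.7):
    """Each cluster monitor averages the buffer occupancy of its own
    nodes and flags congestion locally -- no centralized controller
    is involved, mirroring a distributed NoC monitoring structure."""
    status = {}
    for cid, nodes in clusters.items():
        avg = sum(occupancy[n] for n in nodes) / len(nodes)
        status[cid] = avg > threshold
    return status

# A 4-node toy network split into two clusters; occupancies in [0, 1]
occ = {0: 0.9, 1: 0.8, 2: 0.1, 3: 0.2}
clusters = {"A": [0, 1], "B": [2, 3]}
state = cluster_congestion(occ, clusters)
```

A congested cluster would then steer traffic towards its uncongested neighbours, using only cluster-local information and short-distance communication.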
Abstract:
The Pasvik monitoring programme was created in 2006 as a result of trilateral cooperation, with the intention of following changes in the environment under variable pollution levels. Water quality is one of the basic elements of the Programme when assessing the effects of the emissions from the Pechenganikel mining and metallurgical industry (Kola GMK). The Metallurgic Production Renovation Programme was implemented by OJSC Kola GMK to reduce emissions of sulphur and dust with high heavy metal concentrations. However, the expected reduction in emissions from the smelter in the settlement of Nikel was not realized; Kola GMK itself has found that the modernization programme's measures do not provide the planned reductions in sulphur dioxide emissions. In this report, temporal trends in water chemistry during 2000–2009 are examined on the basis of data gathered from Lake Inari, the River Pasvik and its directly connected lakes, as well as from 26 small lakes in three areas: Pechenganikel (Russia), Jarfjord (Norway) and Vätsäri (Finland). The lower parts of the Pasvik watercourse are impacted by both atmospheric pollution and direct wastewater discharge from the Pechenganikel smelter and the settlement of Nikel. The upper section of the watercourse, and the small lakes and streams which are not directly linked to the Pasvik watercourse, receive only atmospheric pollution. The data obtained confirm the ongoing pollution of the river and water system. Copper (Cu), nickel (Ni) and sulphates are the main pollution components, and the highest levels were observed close to the smelters. The most polluted water source of the basin is the River Kolosjoki, as it directly receives the sewage discharge from the smelters, together with the stream connecting Lakes Salmijarvi and Kuetsjarvi. The concentrations of metals and sulphates in the River Pasvik are higher downstream of Lake Kuetsjarvi.
There has been no fall in the concentrations of pollutants in the Pasvik watercourse over the last 10 years. Ongoing recovery from acidification has been evident in the small lakes of the Jarfjord and Vätsäri areas during the 2000s: the buffering capacity of these lakes has improved and the pH has increased. The reason for this recovery is that sulphate deposition has decreased, which is also evident in the water quality. However, concentrations of some metals, especially Ni and Cu, have risen during the 2000s. Ni concentrations have increased in all three areas, and Cu concentrations in the Pechenganikel and Jarfjord areas, which are located closer to the smelters. Emission levels of Ni and Cu did not fall during the 2000s; in fact, the emission levels of Ni compounds even increased compared to the 1990s.
Abstract:
Fan systems are responsible for approximately 10 % of the electricity consumption in the industrial and municipal sectors, and it has been found that there is energy-saving potential in these systems. To this end, variable speed drives (VSDs) are used to enhance the efficiency of fan systems. Usually, fan system operation is optimized based on measurements of the system, but there are seldom readily installed meters in the system that can be used for this purpose. Thus, sensorless methods are needed for the optimization of fan system operation. In this thesis, methods for fan operating point estimation with a variable speed drive are studied and discussed. These methods can be used for the energy efficient control of the fan system without additional measurements. The operation of these methods is validated by laboratory measurements and data from an industrial fan system. In addition to energy consumption, condition monitoring of fan systems is a key issue, as fans are an integral part of various production processes. Fan system condition monitoring is usually carried out with vibration measurements, which again increase the system complexity. However, variable speed drives can already be used for pumping system condition monitoring; it would therefore add to the usability of a variable-speed-driven fan system if the variable speed drive could be used as a condition monitoring device. In this thesis, sensorless detection methods for three lifetime-reducing phenomena are suggested: detection of fan contamination build-up, of the correct rotational direction, and of fan surge. The methods use the variable speed drive's monitoring and control options for the detection, along with simple signal processing methods such as power spectral density estimates. The methods have been validated by laboratory measurements.
The key finding of this doctoral thesis is that a variable speed drive can be used on its own as a monitoring and control device for the fan system energy efficiency, and it can also be used in the detection of certain lifetime-reducing phenomena.
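The power spectral density approach mentioned above can be sketched as follows: surge shows up as a low-frequency oscillation in the drive's estimated power signal, so a detector can compare the spectral power in a low-frequency band against the rest of the spectrum. This is a generic sketch under invented parameters (band limit, threshold, synthetic signal), not the thesis's actual detection method:

```python
import cmath
import math

def periodogram(x):
    """Power spectral density estimate via a plain DFT (O(N^2);
    fine for a sketch, a real drive would use an FFT)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2)]

def surge_suspected(x, fs, band_hz=10.0, ratio=0.5):
    """Flag surge if the low-frequency band carries most of the
    (non-DC) spectral power of the drive's power signal."""
    psd = periodogram(x)
    k_band = int(band_hz * len(x) / fs)   # bin index of the band edge
    low = sum(psd[1:k_band])
    total = sum(psd[1:])
    return total > 0 and low / total > ratio

# Synthetic drive-power signal: dominant 4 Hz oscillation, fs = 64 Hz
fs, n = 64, 128
sig = [math.sin(2 * math.pi * 4 * t / fs)
       + 0.1 * math.sin(2 * math.pi * 20 * t / fs) for t in range(n)]
alarm = surge_suspected(sig, fs)
```

The same machinery, with different bands and thresholds, could serve the contamination and rotational-direction checks, which is why simple PSD estimates make the drive usable as a stand-alone monitoring device.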