16 results for 090602 Control Systems Robotics and Automation
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The RPC Detector Control System (RCS) is the main subject of this PhD work. The project, involving Lappeenranta University of Technology, Warsaw University and INFN Naples, aims to integrate the different subsystems of the RPC detector and its trigger chain in order to develop a common framework for controlling and monitoring the different parts. I have been strongly involved in this project during the last three years, as main responsible and coordinator of the hardware and software development, construction and commissioning. The CMS Resistive Plate Chambers (RPC) system consisted of 912 double-gap chambers at its start-up in mid-2008. Continuous control and monitoring of the detector, the trigger and all the ancillary sub-systems (high voltages, low voltages, environmental, gas, and cooling) is required to achieve the operational stability and reliability of such a large and complex detector and trigger system. The role of the RPC Detector Control System is to monitor the detector conditions and performance, to control and monitor all subsystems related to the RPC and their electronics, and to store all the information in a dedicated database, called the Condition DB. The RPC DCS therefore has to assure the safe and correct operation of the sub-detectors during the entire CMS lifetime (more than 10 years), detect abnormal and harmful situations, and take automatic protective actions to minimize consequential damage. The analysis of the requirements and project challenges, the architecture design and its development, as well as the calibration and commissioning phases, represent the main tasks of the work developed for this PhD thesis. Different technologies, middleware and solutions have been studied and adopted in the design and development of the different components, and a major challenge consisted in integrating these parts with each other and into the general CMS control system and data acquisition framework. Therefore, the RCS installation and commissioning phase, as well as its performance and the first results obtained during the last three years of CMS cosmic runs, will be presented.
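To give a flavour of the protective logic such a detector control system implements, here is a minimal, purely illustrative sketch in Python; the real system is built on industrial SCADA tooling, and every channel name and threshold below is invented:

```python
# Illustrative sketch of a DCS-style monitoring loop (hypothetical names/limits).
# The real RPC DCS runs on industrial SCADA software; this only mirrors the idea:
# read each subsystem, archive the value, and act on out-of-range conditions.

SAFE_LIMITS = {          # hypothetical operating windows per subsystem
    "high_voltage_kV": (8.5, 9.8),
    "gas_flow_l_h": (20.0, 40.0),
    "temperature_C": (15.0, 28.0),
}

def check_and_protect(readings: dict, archive, ramp_down) -> list:
    """Archive every reading; return alarms and invoke a protective action."""
    alarms = []
    for channel, value in readings.items():
        archive(channel, value)                 # store in the conditions database
        low, high = SAFE_LIMITS[channel]
        if not (low <= value <= high):
            alarms.append(channel)
            ramp_down(channel)                  # automatic protective action
    return alarms

# Example: an over-temperature reading triggers a ramp-down on that channel.
log = []
alarms = check_and_protect(
    {"high_voltage_kV": 9.2, "gas_flow_l_h": 31.0, "temperature_C": 35.0},
    archive=lambda ch, v: log.append((ch, v)),
    ramp_down=lambda ch: log.append(("RAMP_DOWN", ch)),
)
print(alarms)  # -> ['temperature_C']
```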
Abstract:
The aim of this work was to plan the commissioning of a new robotic welding cell so that it takes place as efficiently and economically as possible. The new robot is intended to weld the rear frames of wheel loaders, which are currently welded by hand, as well as buckets that are currently ordered from subcontractors. By bringing production in-house from subcontracting and by robotizing the welding, the company aims to increase its own production volume and reduce manufacturing costs. The theoretical part of the work covers the typical topics related to robotics, such as robots, their control and programming, and the basics of welding robotization. In addition, the design and manufacturing aspects of a product to be welded and the basics of welding cost accounting are discussed. In the practical part, the suitability of two selected rear frame models for robotic welding was surveyed, and change proposals were made to improve the robotic weldability of the frames. In addition, the current state of welding was reviewed, from part manufacturing all the way to post-weld finishing. To determine the welding costs, a spreadsheet program was created, and a fictitious cost-saving calculation was made with it as an example. A productivity metric was drawn up so that it can measure the efficiency of both the robot and the welding operation as a whole, over both long and short time spans. In addition, an internal guideline was drawn up for the tack welding of robot-welded parts, as well as instructions for designers on the issues to be taken into account when designing a part for robotic welding.
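As a rough illustration of the kind of welding cost comparison such a spreadsheet can perform, here is a simple labor-plus-filler model under an arc-time-ratio assumption; all rates and times are invented placeholders, not the thesis's figures:

```python
# Hypothetical welding cost comparison, in the spirit of the thesis's spreadsheet.
# All rates and times are invented placeholders.

def welding_cost(arc_time_h: float, arc_time_ratio: float, hourly_rate: float,
                 filler_kg: float, filler_price: float) -> float:
    """Total cost = labor over the full (arc + non-arc) time + filler material."""
    total_time_h = arc_time_h / arc_time_ratio   # arc-on fraction of work time
    return total_time_h * hourly_rate + filler_kg * filler_price

manual = welding_cost(arc_time_h=4.0, arc_time_ratio=0.25, hourly_rate=45.0,
                      filler_kg=12.0, filler_price=3.5)
robot  = welding_cost(arc_time_h=4.0, arc_time_ratio=0.70, hourly_rate=60.0,
                      filler_kg=12.0, filler_price=3.5)
print(f"manual: {manual:.0f} EUR, robot: {robot:.0f} EUR")  # robot wins on arc-time ratio
```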
Abstract:
Healthcare today uses the possibilities of information technology (IT) to improve the quality of care, reduce care-related costs, and simplify and clarify physicians' workflow. Information systems, which form the core of every IT solution, must be developed to meet numerous requirements, one of which is the ability to integrate seamlessly with other information systems. System integration, however, remains a challenging task, even though several standards have been developed for it. This work describes the interfacing solution of a newly developed medical information system. The requirements placed on such an application are discussed, and the way in which these requirements are met is presented. The interfacing solution is divided into two parts: the information system interface and the interfacing engine. The former comprises the basic functionality needed to receive data from and send data to other systems, while the latter provides support for the standards used in the production environment. The design of both parts is presented thoroughly in this work. The problem was solved by means of a modular and generic design. This approach is shown to be a robust and flexible solution that can address the wide range of requirements placed on an interfacing solution. Furthermore, it is shown how, thanks to its flexibility, the solution can easily be adapted to requirements that have not been identified in advance, thereby providing a foundation for future needs as well.
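A minimal sketch of the modular split described above: a generic system interface that moves messages, delegating their format to pluggable codecs in the interfacing-engine role. The class names and the toy pipe-delimited format are invented; the real system's standards and wire formats are not shown here.

```python
# Sketch of the modular split described above (all names are invented):
# a generic system interface that moves raw messages, and pluggable
# "interfacing engine" codecs for concrete standards such as HL7.

from abc import ABC, abstractmethod

class MessageCodec(ABC):
    """Interfacing-engine role: encode/decode one standard's wire format."""
    @abstractmethod
    def decode(self, raw: bytes) -> dict: ...
    @abstractmethod
    def encode(self, record: dict) -> bytes: ...

class PipeDelimitedCodec(MessageCodec):
    """Toy stand-in for a real standard (e.g., an HL7-like segment format)."""
    def decode(self, raw: bytes) -> dict:
        key, _, value = raw.decode().partition("|")
        return {key: value}
    def encode(self, record: dict) -> bytes:
        return "|".join(f"{k}|{v}" for k, v in record.items()).encode()

class SystemInterface:
    """Generic role: receive/send messages, delegating format to the codec."""
    def __init__(self, codec: MessageCodec):
        self.codec = codec
    def receive(self, raw: bytes) -> dict:
        return self.codec.decode(raw)
    def send(self, record: dict) -> bytes:
        return self.codec.encode(record)

iface = SystemInterface(PipeDelimitedCodec())
print(iface.receive(b"PID|12345"))   # -> {'PID': '12345'}
```

Swapping in support for a new standard then only means writing another codec; the system interface itself stays untouched, which is the flexibility the design aims for.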
Abstract:
The aim of this master's thesis is to study which processes increase the auxiliary power consumption in carbon capture and storage processes, and whether it is possible to reduce the auxiliary power consumption with variable speed drives. The cost of carbon capture and storage is also studied. Data about auxiliary power consumption in carbon capture is gathered from various studies and estimates made by various research centres. Based on these studies, a view is presented of how the auxiliary power consumption is divided between the different processes in carbon capture. In a literature review, the operation of three basic carbon capture systems is described, along with different methods of transporting carbon dioxide and the options for storing it. At the end of the thesis, the processes that consume most of the auxiliary power are identified, and the possibilities to reduce the auxiliary power consumption are evaluated. The costs of carbon capture, transport and storage are also evaluated at this point, as well as for the case that carbon capture and storage systems are fully deployed. Based on the results, it can be estimated in which processes variable speed drives can be used and what kind of cost and power consumption reductions could be achieved. The results also show how large a project carbon capture and storage is if it is fully deployed.
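The savings potential of variable speed drives in pumping and compression duties is commonly reasoned through the pump/fan affinity laws; the sketch below shows the cube-law effect with invented numbers, an idealised illustration rather than a result from the thesis:

```python
# Affinity-law illustration (idealised): for pumps and fans, flow ~ n,
# head ~ n^2 and shaft power ~ n^3, so running at partial speed with a
# variable speed drive beats throttling at full speed. Numbers are invented.

def affinity_power(p_full_kw: float, speed_ratio: float) -> float:
    """Shaft power at reduced speed under the cube law."""
    return p_full_kw * speed_ratio ** 3

p_full = 500.0                       # hypothetical full-speed pump power [kW]
for ratio in (1.0, 0.9, 0.8, 0.7):
    print(f"speed {ratio:>4.0%}: {affinity_power(p_full, ratio):6.1f} kW")
# speed 100%:  500.0 kW
# speed  90%:  364.5 kW   (a 10 % speed cut saves ~27 % power)
# speed  80%:  256.0 kW
# speed  70%:  171.5 kW
```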
Abstract:
This master's thesis studies the case company's current purchase invoice process and the challenges related to it. Like most master's theses, this study consists of both a theoretical and an empirical part. The purpose of this work is to combine the two so that the theoretical part brings value to the empirical case study. The case company's main business is frequency converters, for both low-voltage AC and DC drives and medium-voltage AC drives, which are used across all industries and applications. The main focus of this study is on modelling the current invoice process. When the existing process is modelled with discipline and care, the current challenges can be understood better. The empirical study relies heavily on interviews and on existing, yet fragmented, data. This, along with the author's own calculations and analysis, forms the foundation for the empirical part of this master's thesis.
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality, reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves the companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate only that a piece of software is functioning correctly; usually, many other aspects of the software, such as performance, security, scalability, usability, etc., also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges, and we show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process; requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor tool support or lack it entirely. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
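As a toy illustration of the "simplest form" of performance testing described above, the sketch below runs many test sequences concurrently and records latencies instead of checking outputs; the system under test is faked, and everything here is invented for illustration rather than taken from the thesis's tooling:

```python
# Minimal sketch of concurrent performance testing: run many test sequences
# at once and watch responsiveness (latency) rather than functional output.

import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def system_under_test(request: int) -> int:
    time.sleep(0.01)          # stand-in for real work in the tested system
    return request * 2

def timed_sequence(seq_id: int, n_requests: int = 20) -> list:
    """One test sequence; records the latency of every request."""
    latencies = []
    for i in range(n_requests):
        start = time.perf_counter()
        system_under_test(i)
        latencies.append(time.perf_counter() - start)
    return latencies

with ThreadPoolExecutor(max_workers=50) as pool:   # 50 concurrent sequences
    results = list(pool.map(timed_sequence, range(50)))

all_latencies = [lat for seq in results for lat in seq]
print(f"mean {statistics.mean(all_latencies)*1e3:.1f} ms, "
      f"max {max(all_latencies)*1e3:.1f} ms")      # responsiveness, not output
```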
Abstract:
Management control systems consist of individual systems that together form a company's control system package. The individual systems include both concrete ones, such as budgets, and more abstract elements, such as personnel controls. Every company has its own control system package, which depends, for example, on the company's industry, strategy and size. To succeed, a company must find the control system package that works best and fits that particular company. The primary objective of this study is to find out how differences between companies affect their management control systems. The study first establishes what management control systems are, and then the CFOs of different kinds of companies are interviewed about their control systems. The results are then compared with the prevailing theory, and the companies' control systems are compared with each other. The study found clear differences between the companies' control systems.
Abstract:
The Laboratory of Intelligent Machine researches and develops energy-efficient power transmissions and automation for mobile construction machines and industrial processes. The laboratory's particular areas of expertise include mechatronic machine design using virtual technologies and simulators, and demanding industrial robotics. The laboratory has collaborated extensively with industrial actors and has participated in significant international research projects, particularly in the field of robotics. For years, dSPACE tools were the only hardware used in the lab to develop different real-time control algorithms. dSPACE's hardware systems are in widespread use in the automotive industry and are also employed in drives, aerospace, and industrial automation. However, new competitors are developing sophisticated systems whose features convinced the laboratory to test new products. One of these competitors is National Instruments (NI). In order to get to know the specifications and capabilities of NI tools, an agreement was made to test an NI evaluation system, which is used to control a 1-D hydraulic slider. The objective of this research project is to develop a control scheme for the teleoperation of a hydraulically driven manipulator, to implement a control algorithm for both human-machine interaction and machine-task environment interaction on the NI and dSPACE systems simultaneously, and to compare the results.
Abstract:
The use of belts in high-precision applications has become feasible because of the rapid development of motor and drive technology, as well as the implementation of timing belts in servo systems. Belt drive systems provide high speed and acceleration, accurate and repeatable motion with high efficiency, long stroke lengths, and low cost. This work examines the modelling of a linear belt-drive system and the design of its position control. Friction phenomena and the position-dependent elasticity of the belt are analyzed. Computer simulation results show that the developed model is adequate. A PID controller for accurate tracking and position control is designed and applied to the real test setup. Both the simulation and the experimental results demonstrate that the designed controller meets the specified performance requirements.
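As background to the control law mentioned above, here is a generic discrete-time PID position loop in Python, driving a toy second-order plant that merely stands in for the belt-driven cart; the gains, mass and friction figures are invented and this is not the thesis's controller or belt model:

```python
# Generic discrete-time PID position controller on a toy plant
# (a mass with viscous friction standing in for the belt-driven cart).
# Gains and parameters are invented for illustration.

dt = 0.001                     # control period [s]
kp, ki, kd = 400.0, 50.0, 40.0 # hypothetical PID gains

pos, vel = 0.0, 0.0            # plant state: position [m], velocity [m/s]
setpoint = 0.10                # target position [m]
integral, prev_err = 0.0, 0.0

for step in range(2000):       # simulate 2 s
    err = setpoint - pos
    integral += err * dt
    derivative = (err - prev_err) / dt
    force = kp * err + ki * integral + kd * derivative   # PID control law
    prev_err = err

    accel = force / 5.0 - 2.0 * vel   # 5 kg mass with viscous friction
    vel += accel * dt
    pos += vel * dt

print(f"final position: {pos*1000:.2f} mm (target {setpoint*1000:.0f} mm)")
```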
Abstract:
Energy consumption and energy efficiency have become an issue. Energy consumption is rising all over the world, and because of that and climate change, energy is becoming more and more expensive. Buildings are major consumers of energy, and inside buildings the major consumers are heating, ventilation and air-conditioning (HVAC) systems. They usually run at constant speed without efficient control, and in most cases the HVAC equipment is also oversized: traditionally, heating, ventilation and air-conditioning systems have been sized to meet conditions that rarely occur. The theoretical part of this thesis presents the basics of life cycle costs and the calculations for the whole life cycle of a system. It also presents HVAC systems, equipment, system controls, and ways to save energy in these systems. The empirical part of this thesis presents life cycle cost calculations for HVAC systems. With these calculations it is possible to compute the costs of the whole life cycle for the chosen variables. Life cycle costs make it possible to compare which variable causes most of the costs from a whole-life point of view. Life cycle costs were studied through two real-life cases, which focused on two different kinds of HVAC systems. In both cases the renovations had already been made, so that the comparison between the old system and the new, now existing system would be easier. The study indicates that energy can be saved in HVAC systems by using a variable speed drive as the control method.
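To make the life cycle cost idea concrete, the following sketch discounts annual energy and maintenance costs to a present value and adds the investment, in the standard LCC manner; the figures are invented placeholders, not the thesis's case data:

```python
# Standard present-value life cycle cost: investment plus discounted
# annual energy and maintenance costs. All figures are invented.

def life_cycle_cost(investment: float, annual_energy: float,
                    annual_maintenance: float, years: int,
                    discount_rate: float) -> float:
    annual = annual_energy + annual_maintenance
    # Present-value factor of an annuity: sum of 1/(1+r)^t for t = 1..n
    pv_factor = (1 - (1 + discount_rate) ** -years) / discount_rate
    return investment + annual * pv_factor

# Constant-speed system vs. a VSD-controlled one over a 20-year life:
old = life_cycle_cost(investment=20_000, annual_energy=12_000,
                      annual_maintenance=1_500, years=20, discount_rate=0.05)
new = life_cycle_cost(investment=35_000, annual_energy=7_000,
                      annual_maintenance=1_200, years=20, discount_rate=0.05)
print(f"constant speed: {old:,.0f} EUR, VSD: {new:,.0f} EUR")
```

The comparison shows why operating costs dominate over a long life: the higher investment of the VSD option is repaid many times over by the discounted energy savings.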
Researching Manufacturing Planning and Control system and Master Scheduling in a manufacturing firm.
Abstract:
The objective of this thesis is to study the Manufacturing Planning and Control (MPC) system and Master Scheduling (MS) in a manufacturing firm. The study is conducted at Ensto Finland Corporation, which operates in the field of electrical systems and supplies. The paper consists of theoretical and empirical parts. The empirical part is based on weekly work at Ensto and includes inter-firm material analysis, learning and meetings. Master Scheduling is an important module of an MPC system, since it helps transform strategic production plans, based on demand forecasting, into operational schedules. Furthermore, capacity planning tools can contribute remarkably to production planning: with a Rough-Cut Capacity Planning (RCCP) tool, an MS plan can be critically analyzed in terms of the available key resources in the real manufacturing environment. Currently, there are remarkable inefficiencies in Ensto's practices: the system is not able to take seasonal demand into consideration or react to market changes in time, which can cause significant lost sales. These inefficiencies could, however, be eliminated through the appropriate use of MS and RCCP tools. To utilize MS and RCCP tools in Ensto's production environment, further testing in the real production environment is required. Moreover, data accuracy, appropriate commitment to adopting and learning the new tools, and continuous development of the functions closely related to MS, such as sales forecasting, need to be ensured.
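As an illustration of what an RCCP check involves, the sketch below loads a hypothetical master schedule onto a key resource through hours-per-unit figures and compares the load with available capacity; the products, hours and capacities are invented, not Ensto data:

```python
# Generic rough-cut capacity planning check: master-schedule quantities
# times hours-per-unit on a key resource, compared with its capacity.

master_schedule = {          # planned units per week, per product
    "week1": {"productA": 100, "productB": 40},
    "week2": {"productA": 160, "productB": 40},
}
hours_per_unit = {"productA": 0.5, "productB": 1.5}   # bill of labour [h]
weekly_capacity_h = 120.0                             # key work centre

for week, quantities in master_schedule.items():
    load = sum(qty * hours_per_unit[p] for p, qty in quantities.items())
    status = "OK" if load <= weekly_capacity_h else "OVERLOAD"
    print(f"{week}: load {load:.0f} h / capacity {weekly_capacity_h:.0f} h -> {status}")
# week1: load 110 h / capacity 120 h -> OK
# week2: load 140 h / capacity 120 h -> OVERLOAD
```

An overload flag like the one for week2 is exactly the signal that would prompt rescheduling or capacity adjustments before the MS plan is released.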
Abstract:
The purpose of this study is to explore the possibilities of utilizing business intelligence (BI) systems in management control (MC). The topic is explored through four research questions. Firstly, what kinds of management control systems (MCS) use, or could use, the data and information enabled by the BI system? Secondly, how is the BI system utilized, or how could it be? Thirdly, has the BI system enabled new forms of control or changed old ones? The fourth and final research question is whether the BI system supports some forms of control that the literature has not thought of, or whether the BI system is not used for some forms of control for which the literature suggests it should be used. The study is conducted as an extensive case study; three different organizations were interviewed. For the theoretical basis of the study, central theories in the field of management control are introduced. The term business intelligence is discussed in detail, and the mechanisms for the governance of business intelligence are presented. A literature analysis of the uses of BI for management control is introduced. The theoretical part of the study ends with the construction of a framework for business intelligence in management control. In the empirical part, the case organizations, their BI systems, and the ways they utilize these systems for management control are presented. The main findings of the study are that BI systems can be utilized in the fields suggested in the literature, namely in planning, cybernetic, reward, boundary, and interactive control. The systems are used both as data or information feeders and directly as tools. Using BI systems has also enabled entirely new forms of control in the studied organizations, most significantly in the area of interactive control. They have also changed the old control systems by making information more readily available to the whole organization. No evidence was found of the BI systems being used for forms of control that the literature had not suggested. The systems were mostly used for cybernetic and interactive control, whereas the support for other types of control was not as prevalent. The main contribution of the study to the existing literature is the insight provided into how BI systems are used for management control, both theoretically and empirically. The framework for business intelligence in management control presented in the study can also be utilized in further studies on the subject.
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of a chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables the parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and creates hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce a faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to design agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure the correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
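As a rough, informal illustration of the dynamic reconfiguration mechanisms mentioned above (the thesis itself works with formal refinement models, not Python; the cores, tasks and fault handling below are invented), an agent could remap tasks away from a core it has marked faulty:

```python
# Toy illustration of agent-style dynamic reconfiguration on a many-core
# platform: when a core is reported faulty, its tasks are remapped to the
# least-loaded healthy core. Cores, tasks and loads are invented.

mapping = {"core0": ["t1", "t2"], "core1": ["t3"], "core2": ["t4", "t5"]}
healthy = {"core0", "core1", "core2"}

def remap_on_fault(faulty_core: str) -> None:
    """Reconfiguration step an agent performs when a fault is detected."""
    healthy.discard(faulty_core)
    orphaned = mapping.pop(faulty_core, [])
    for task in orphaned:
        # Greedy choice: place each orphaned task on the least-loaded core
        # (sorted() makes tie-breaking deterministic).
        target = min(sorted(healthy), key=lambda c: len(mapping[c]))
        mapping[target].append(task)

remap_on_fault("core0")
print(mapping)   # tasks t1, t2 now run on the remaining healthy cores
# -> {'core1': ['t3', 't1', 't2'], 'core2': ['t4', 't5']}
```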