21 results for Discrete event system
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The subject of this research is forecasting the capacity requirements of the Fenix information system developed by TietoEnator Oy. The goal of the work is to become familiar with the different subsystems of the Fenix system, to find a way to isolate and model the effect of each subsystem on the overall system load, and to determine preliminarily which parameters affect the load generated by these subsystems. Part of the work is to examine different alternatives for simulation and to assess their suitability for modelling complex systems. Based on the collected information, a simulation model describing the load on the system's data warehouse is created. By combining information obtained from the model with measurements from the production system, the model is refined to correspond ever more closely to the behaviour of the real system. From the model, for example, the simulated system load and the behaviour of the queues are examined. From the production system, changes in the behaviour of the different load sources are measured, for example in relation to the number of users and the time of day. The results of this work are intended to serve as a basis for later follow-up research, in which the parameterization of the subsystems is refined further, the model's ability to describe the real system is improved, and the scope of the model is extended.
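The queue behaviour examined above is the core of a discrete-event load model; a minimal sketch of a single-server data-warehouse model, assuming SimPy, is given below. The arrival and service parameters and all names are illustrative, not values from the thesis.

```python
# Minimal sketch of a discrete-event load/queue model, assuming SimPy.
# Arrival and service parameters are illustrative, not from the thesis.
import random
import simpy

ARRIVAL_MEAN = 2.0   # mean time between queries (illustrative)
SERVICE_MEAN = 1.5   # mean query service time (illustrative)
wait_times = []

def query(env, warehouse):
    arrived = env.now
    with warehouse.request() as req:
        yield req                      # queue until the warehouse is free
        wait_times.append(env.now - arrived)
        yield env.timeout(random.expovariate(1.0 / SERVICE_MEAN))

def load_source(env, warehouse):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(query(env, warehouse))

env = simpy.Environment()
warehouse = simpy.Resource(env, capacity=1)  # one server: the data warehouse
env.process(load_source(env, warehouse))
env.run(until=10_000)
print(f"mean wait: {sum(wait_times) / len(wait_times):.2f}")
```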
Abstract:
Recent developments in power electronics technology have made it possible to develop competitive and reliable low-voltage DC (LVDC) distribution networks. Further, islanded microgrids (isolated, small-scale, localized distribution networks) have been proposed to reliably supply power using distributed generation. However, islanded operation faces many issues, such as power quality, voltage regulation, network stability, and protection. In this thesis, an energy management system (EMS) that ensures efficient energy and power balancing and voltage regulation is proposed for an LVDC island network utilizing solar panels for electricity production and lead-acid batteries for energy storage. The EMS uses the master/slave method with a robust communication infrastructure to control the production, storage, and loads. The logical basis for the EMS operations has been established by proposing functionalities for the network components as well as by defining appropriate operation modes that encompass all situations. During loss-of-power-supply periods, load prioritization and disconnections are employed to maintain the power supply to at least some loads. The proposed EMS ensures optimal energy balance in the network. A sizing method based on discrete-event simulations has also been proposed to obtain reliable capacities for the photovoltaic array and battery. In addition, an algorithm to determine the number of hours of electric power supply that can be guaranteed to customers at any given location has been developed. The successful performance of all the proposed algorithms has been demonstrated by simulations.
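The sizing method is described only at a high level; a rough, time-stepped sketch of the idea (simulate a production/load profile against a battery model and grow the capacity until a loss-of-power-supply target is met) follows. All names, profiles, and values are my assumptions, not the thesis's models.

```python
# Sketch of simulation-based battery sizing: grow the capacity until the
# loss-of-power-supply probability (LPSP) target is met. All profiles
# and parameters are illustrative assumptions, not from the thesis.
def lpsp(capacity_kwh, pv_kw, load_kw, efficiency=0.9):
    """Fraction of hours in which the load cannot be fully supplied."""
    soc = capacity_kwh          # state of charge, start full
    loss_hours = 0
    for pv, load in zip(pv_kw, load_kw):
        surplus = pv - load     # one-hour step, so kW ~ kWh
        if surplus >= 0:
            soc = min(capacity_kwh, soc + surplus * efficiency)
        elif soc + surplus >= 0:
            soc += surplus      # battery covers the deficit
        else:
            soc = 0
            loss_hours += 1     # load (partly) disconnected this hour
    return loss_hours / len(load_kw)

def size_battery(pv_kw, load_kw, target=0.01, step_kwh=5.0, max_kwh=1000.0):
    capacity = step_kwh
    while lpsp(capacity, pv_kw, load_kw) > target and capacity < max_kwh:
        capacity += step_kwh
    return capacity

# e.g. size_battery(hourly_pv, hourly_load) with one-year hourly profiles
```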
Abstract:
Technological development brings more and more complex systems to the consumer markets. The time required for bringing a new product to market is crucial for the competitive edge of a company. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power, and distributed simulation can be used to meet these demands. Distributed simulation has its problems, however. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, the communication protocols, the partitioning and distribution of the problem, the capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused computing capacity in the form of idle workstations. The use of this computing power for distributed simulation requires the simulation to adapt to a changing load situation. This requires all or part of the simulation work to be removed from a workstation when the owner wishes to use the workstation again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, and the owner's work is also hampered. The operation of load balancing in Diworse is studied and shown to perform better than no load balancing, and different approaches for load balancing are discussed.
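A hypothetical sketch of the load-balancing policy described above, i.e. evacuating simulation partitions when a workstation's owner returns; the structures and strategy are assumptions, not Diworse's actual mechanism.

```python
# Hypothetical sketch of the load-balancing policy described above:
# move simulation partitions off workstations whose owners are active.
# Structures and strategy are assumptions, not Diworse's mechanism.
from dataclasses import dataclass, field

@dataclass
class Workstation:
    name: str
    owner_active: bool = False
    partitions: list = field(default_factory=list)

def rebalance(stations):
    """Evacuate simulation partitions from owner-claimed workstations."""
    idle = [s for s in stations if not s.owner_active]
    if not idle:
        return  # nowhere to go; the simulation must tolerate the slowdown
    for s in stations:
        if s.owner_active and s.partitions:
            while s.partitions:
                # send each partition to the least-loaded idle station
                target = min(idle, key=lambda t: len(t.partitions))
                target.partitions.append(s.partitions.pop())
```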
Abstract:
The goal of this work was to study the development of transport chain automation by means of discrete-event simulation. The theoretical part of the work covers simulation theory at a general level, the concepts related to it, and in particular the phases of carrying a simulation project through. This part answers the following questions: What is meant by discrete-event simulation? What are the benefits and limitations of simulation? What applications does simulation have? What kinds of software are used for simulation? What phases does a simulation project include? The applied part examines how discrete-event simulation was utilized in the Veto-Ketju project and what results were achieved with the simulation model that was built. The Veto-Ketju project deals with improving the efficiency of the transport chain, as well as of warehouse and port handling, through automation. The project is led by Pesmel Oy, which also supplies the automated materials handling system; EP-Logistics Oy, an engineering and consulting company belonging to the SysOpen group, acts as the expert on port operations and logistics. From the paper industry, UPM-Kymmene and M-real are involved; Rauma Stevedoring acts as the port operator and VR Cargo as the transport operator. In the simulation study of the Veto-Ketju project, the operation of the planned automatic railway wagon unloading system was verified before its commissioning, the effects of different operating practices on port operations and on the amount of resources needed in the port were studied, and the required warehouse sizes were determined. With the simulation model, the differences between the operating alternatives could be demonstrated clearly. In this way, additional information was produced to support decision-making, for example regarding the location of the system and the possible construction of a new warehouse.
Abstract:
Transportation and warehousing are large and growing sectors in society, and their efficiency is of high importance. Transportation also has a large share of global carbon dioxide emissions, which are one of the leading causes of anthropogenic climate warming. Various countries have agreed to decrease their carbon emissions according to the Kyoto protocol. Transportation is the only sector where emissions have steadily increased since the 1990s, which highlights the importance of transportation efficiency. The efficiency of transportation and warehousing can be improved with the help of simulations, but models alone are not sufficient. This research concentrates on the use of simulations in decision support systems. Three main simulation approaches are used in logistics: discrete-event simulation, system dynamics, and agent-based modeling. However, individual simulation approaches have weaknesses of their own. Hybridization (combining two or more approaches) can improve the quality of the models, as it allows a different method to be used to overcome the weaknesses of another. It is important to choose the correct approach (or a combination of approaches) when modeling transportation and warehousing issues. If an inappropriate method is chosen (this can occur if the modeler is proficient in only one approach or the model specification is not conducted thoroughly), the simulation model will have an inaccurate structure, which in turn will lead to misleading results. This issue can escalate further, as the decision-maker may assume that the presented simulation model gives the most useful results available, even though the whole model may be based on a poorly chosen structure. In this research it is argued that simulation-based decision support systems need to take various issues into account in order to function well. The actual simulation model can be constructed using any (or multiple) approaches, it can be combined with different optimization modules, and there needs to be a proper interface between the model and the user. These issues are presented in a framework which simulation modelers can use when creating decision support systems. In order for decision-makers to fully benefit from the simulations, the user interface needs to clearly separate the model and the user, but at the same time, the user needs to be able to execute the appropriate simulation runs in order to analyze the problems correctly. This study recommends that simulation modelers should start to transfer their tacit knowledge into explicit knowledge. This would greatly benefit the whole simulation community and improve the quality of simulation-based decision support systems as well. More studies should also be conducted using hybrid models and integrating simulations with Geographical Information Systems.
Abstract:
Combating climate change is one of the key tasks of humanity in the 21st century. One of its leading causes is carbon dioxide emissions due to the use of fossil fuels. Renewable energy sources should be used instead of relying on oil, gas, and coal. In Finland a significant amount of energy is produced using wood, and the use of wood chips is expected to increase significantly in the future, by over 60 %. The aim of this research is to improve understanding of the costs of wood chip supply chains. This is done using simulation as the main research method. The simulation model utilizes both agent-based modelling and discrete-event simulation to imitate the wood chip supply chain. This thesis concentrates on the use of simulation-based decision support systems in strategic decision-making. The simulation model is part of a decision support system, which connects the simulation model to databases but also provides a graphical user interface for the decision-maker. The main analysis conducted with the decision support system concentrates on comparing a traditional supply chain to a supply chain utilizing specialized containers. According to the analysis, the container supply chain is able to achieve smaller costs than the traditional supply chain. A container supply chain can also be scaled up more easily due to faster emptying operations. Initially the container operations would supply only part of the fuel needs of a power plant and would complement the current supply chain. The model can be expanded to include intermodal supply chains, since with increasing demand there will not be enough wood chips available close to current and future power plants.
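The hybrid structure described above (agents with their own state and behaviour, driven by a discrete-event scheduler) can be sketched minimally; truck names, cycle times, and the one-week horizon are illustrative assumptions, not the thesis's supply chain model.

```python
# Minimal sketch of the hybrid idea: truck agents (own state and
# behaviour) driven by a shared discrete-event scheduler. Names, cycle
# times, and the horizon are illustrative, not the thesis's model.
import heapq

events, _seq = [], 0  # (time, seq, callback) priority queue

def schedule(time, callback):
    global _seq
    heapq.heappush(events, (time, _seq, callback))
    _seq += 1

class Truck:
    def __init__(self, name, cycle_h):
        self.name, self.cycle_h, self.delivered = name, cycle_h, 0

    def deliver(self, now):
        self.delivered += 1                         # one load emptied
        schedule(now + self.cycle_h, self.deliver)  # next round trip

trucks = [Truck(f"truck-{i}", c) for i, c in enumerate([4.0, 5.5])]
for truck in trucks:
    schedule(0.0, truck.deliver)

while events:
    t, _, callback = heapq.heappop(events)
    if t > 168.0:                                   # one simulated week
        break
    callback(t)

for truck in trucks:
    print(truck.name, truck.delivered)
```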
Abstract:
Transportation plays a major role in the gross domestic product of various nations. There are, however, many obstacles hindering the transportation sector. Achieving cost-efficiency together with proper delivery times, high frequency, and reliability is not a straightforward task. Furthermore, environmental friendliness has increased in importance across the whole transportation sector. This development will change roles inside the transportation sector. Even now, but especially in the future, decisions regarding the transportation sector will be based partly on emission levels and other externalities originating from transportation, in addition to pure transportation costs. There are different factors which could have an impact on the transportation sector. The IMO's sulphur regulation is estimated to increase the costs of short sea shipping in the Baltic Sea. The price development of energy could change the roles of different transport modes. Higher awareness of the environmental impacts originating from transportation could also have an impact on the price level of more polluting transport modes. According to earlier research, increased inland transportation, modal shift, and slow steaming are possible results of these changes in the transportation sector. Possible changes in the transportation sector and ways to settle potential obstacles are studied in this dissertation. Furthermore, means to improve cost-efficiency and to decrease the environmental impacts originating from transportation are researched. A hypothetical Finnish dry port network and the Rail Baltica transport corridor are studied in this dissertation. Benefits and disadvantages are studied with different methodologies, including gravitational models optimized with linear integer programming, discrete-event and system dynamics simulation, an interview study, and a case study. The geographical focus is on the Baltic Sea Region, but the results can be adapted to other geographical locations with discretion. The results indicate that the dry port concept has benefits, but optimization of the location and the number of dry ports plays an important role. In addition, the utilization of dry ports for freight transportation should be carefully managed, since only a certain share of the total freight volume can be cost-efficiently transported through dry ports. If dry ports are created and located without proper planning, they could actually increase the transportation costs and delivery times of the whole transportation system. With an optimized dry port network, transportation costs in Finland can be lowered with three to five dry ports, and environmental impacts can be lowered with up to nine dry ports. If more dry ports are added to the system, the benefits become very minor, i.e. the payback time of the investments becomes extremely long. Furthermore, the dry port network could support major transport corridors such as Rail Baltica. Based on an analysis of statistics and an interview study, there could be enough freight volume available for Rail Baltica, especially if North-West Russia is part of the northern end of the corridor. Transit traffic to and from Russia (especially through the Baltic States) plays a large role. It could be possible to increase transit traffic through Finland by connecting the potential Finnish dry port network and the studied transport corridor. Additionally, the sulphur emission regulation is assumed to increase the attractiveness of Rail Baltica in the year 2015.
Part of the transit traffic could be rerouted along Rail Baltica instead of across the Baltic Sea, since the price level of sea transport could increase due to the sulphur regulation. The hypothetical Finnish dry port network and the Rail Baltica transport corridor could thus benefit each other. The dry port network could gain more market share from Russia, but also from Central Europe, the other end of Rail Baltica. In addition, countries further east could also be connected to achieve a higher potential freight volume by rail.
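The gravitational models mentioned above are not spelled out in the abstract; a standard textbook form, with generic symbols that are not necessarily the author's, estimates the freight flow between origin i and dry port j as

$$T_{ij} = k \, \frac{W_i \, W_j}{d_{ij}^{\alpha}},$$

where $W_i$ and $W_j$ are weights such as generated and attracted freight volumes, $d_{ij}$ is the distance (or generalized transport cost), $\alpha$ is a distance-decay exponent, and $k$ is a scaling constant; a linear integer program can then choose the dry port locations that optimize the resulting flows.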
Abstract:
Resilience is the property of a system to remain trustworthy despite changes. Changes of a different nature, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of the system reconfiguration mechanisms is a challenging and error-prone engineering task. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature. To ensure the system's functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness. Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature, industrial-strength tool support: the Rodin platform. Proof-based verification, as well as the reliance on abstraction and decomposition adopted in Event-B, provides the designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, to achieve resilience we also need to analyse a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with the integration of techniques such as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect the overall system resilience. The approach proposed in this thesis is validated by a number of case studies from such areas as robotics, space, healthcare, and the cloud domain.
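Of the quantitative techniques named above, the SimPy side can be sketched in a few lines: a component fails, is either switched to a spare (one possible reconfiguration strategy) or repaired, and accumulates downtime. The rates, spare count, and horizon are illustrative assumptions, not the thesis's case studies.

```python
# Sketch of assessing a reconfiguration strategy by discrete-event
# simulation, assuming SimPy. Rates and spare counts are illustrative.
import random
import simpy

MTTF, MTTR, HORIZON = 100.0, 5.0, 10_000.0

def component(env, spares, stats):
    while True:
        yield env.timeout(random.expovariate(1.0 / MTTF))  # time to failure
        t_fail = env.now
        if spares > 0:
            spares -= 1                 # reconfigure onto a hot spare
            yield env.timeout(0.1)      # brief switch-over outage
        else:
            yield env.timeout(random.expovariate(1.0 / MTTR))  # full repair
        stats["downtime"] += env.now - t_fail

stats = {"downtime": 0.0}
env = simpy.Environment()
env.process(component(env, spares=2, stats=stats))
env.run(until=HORIZON)
print(f"availability ~ {1 - stats['downtime'] / HORIZON:.4f}")
```

Comparing the printed availability for different spare counts is the simulation-side analogue of comparing reconfiguration strategies.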
Abstract:
Railway freight wagons are ageing very rapidly; this applies to Russia, Finland, Sweden, and Europe more widely. In Russia and Europe a large number of wagons are in use that have already exceeded their recommended service life. They are nevertheless used for transport, as not enough new replacement wagons are available. The newest wagons have usually been acquired by wagon leasing companies or new railway operators; this applies especially to Russia, where wagon leasing has become an extremely popular option. Forecasts predict that the wagon shortage will grow at least until 2010. If the popularity of rail as a freight transport mode increases, the strengthening demand for wagons will continue for considerably longer. The situation of the European and Russian wagon fleet is also reflected in problems for the engineering industry serving it: generally speaking, European companies in the sector are weakly profitable and their revenue is hardly growing; Russian and Ukrainian companies have been in the same situation, although in very recent years the situation has turned for the better in some of them. When the revenue, profit, and shareholder value of companies in these regions are compared with their US competitors, it is evident that the performance of the latter is considerably better, and these companies are also able to pay dividends to their owners. The purpose of this study was to develop a new type of freight wagon for traffic between Finland and Russia, and possibly also China. The wagon type is intended to be multi-purpose, suitable for transporting both raw materials and containers, thereby balancing the load-weight imbalance between the transport flows. As the basis for the development work we used a database of over 1,000 Russian wagon types, from which we selected the wagons most suitable for container transport using Data Envelopment Analysis (we examined about 40 wagon types more closely), leaving as little empty space in the train as possible while still being able to carry the chosen container load. As carrying-capacity problems are rare in Russian wagons, the comparison can be made on the basis of freight train length and total weight. After simulating a wagon type suitable for combined transports in a transport network found in practice (e.g. raw wood to Finland or China and containers back towards Russia), we found that a shorter wagon length yields a cost advantage, particularly in raw material transports, but also as the number of border-crossing stops may decrease. A shorter wagon type is also more flexible with respect to different container lengths (the use of 40-foot containers has become more common in recent years). Finally, we propose a networked production approach for the new wagon type, in which part of the wagon would be made in Finland and part in Russia and/or Ukraine. The wagon type should be registered in Russia, since it can then be used in traffic between Finland and Russia, as well as, where applicable, between Russia and China.
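Data Envelopment Analysis, used above to screen the wagon types, scores each unit with a small linear program; a minimal sketch of the input-oriented CCR multiplier model follows, with made-up wagon data, since the study's actual inputs and outputs (length, tare, payload, etc.) are not given in the abstract.

```python
# Minimal sketch of CCR Data Envelopment Analysis (multiplier form) for
# screening wagon types. The data and column choices are made up; the
# study's actual inputs/outputs may differ.
import numpy as np
from scipy.optimize import linprog

# rows = wagon types; inputs X (e.g. length m, tare t), outputs Y (payload t)
X = np.array([[13.9, 22.0], [12.0, 24.0], [19.6, 27.0]])
Y = np.array([[69.0], [68.0], [66.0]])

def ccr_efficiency(j0):
    n_in, n_out = X.shape[1], Y.shape[1]
    # variables z = [u (output weights), v (input weights)]
    c = np.concatenate([-Y[j0], np.zeros(n_in)])       # maximize u.y0
    A_ub = np.hstack([Y, -X])                          # u.yj - v.xj <= 0
    b_ub = np.zeros(len(X))
    A_eq = np.concatenate([np.zeros(n_out), X[j0]]).reshape(1, -1)  # v.x0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n_out + n_in))
    return -res.fun                                    # efficiency in (0, 1]

for j in range(len(X)):
    print(f"wagon {j}: efficiency {ccr_efficiency(j):.3f}")
```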
Abstract:
Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of the problems are considered, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating the simulations. The success of parallel simulation depends heavily on the combination of the simulation application, the algorithm, and the simulation environment. In this thesis a conservative, parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. This thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. The novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified by using the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the overall simulation time. The null message cancellation technique reduces the processing time of null messages, as an arriving null message cancels other unprocessed null messages. Multiple message simulation forms groups of messages, as several messages are simulated before the newly created messages are released. If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for taking the simulation application into account is also presented. The performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy of the simulation results is required. The distributed simulation is also analyzed in order to determine the effect of the different elements of the implemented simulation environment. This analysis is performed using critical path analysis, which allows a lower bound for the simulation time to be determined. In this thesis, critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
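Two of the ideas above, null messages and null-message cancellation, can be sketched compactly: a null message only advances a channel's clock, so an unprocessed null becomes redundant as soon as a message with a newer timestamp arrives behind it. The data structures below are illustrative, not Diworse's implementation.

```python
# Sketch of conservative (Chandy-Misra) null messages with the
# null-message cancellation described above: a newly arriving message
# supersedes older unprocessed nulls in the same channel.
# Illustrative structures only, not Diworse's implementation.
from collections import deque
from dataclasses import dataclass

@dataclass
class Msg:
    time: float      # timestamp carried by the message
    is_null: bool    # null messages only advance the channel clock

class Channel:
    """FIFO link into a logical process."""
    def __init__(self):
        self.queue = deque()

    def send(self, msg: Msg):
        # cancellation: drop not-yet-processed nulls that the newer
        # message makes redundant, so they are never executed
        while (self.queue and self.queue[-1].is_null
               and self.queue[-1].time <= msg.time):
            self.queue.pop()
        self.queue.append(msg)

def safe_time(channels):
    """A process may execute all events timestamped below this bound."""
    return min((ch.queue[0].time for ch in channels if ch.queue),
               default=float("-inf"))
```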
Abstract:
The Cloud Computing paradigm is continually evolving, and with it, the size and complexity of its infrastructure. Assessing the performance of a Cloud environment is an essential but strenuous task. Modeling and simulation tools have proved their usefulness and power in dealing with this issue. This master's thesis contributes to the development of the widely used cloud simulator CloudSim and proposes CloudSimDisk, a module for the modeling and simulation of energy-aware storage in CloudSim. As a starting point, a review of Cloud simulators was conducted and hard disk drive technology was studied in detail. CloudSim was identified as the most popular and sophisticated discrete-event Cloud simulator. Thus, the CloudSimDisk module was developed as an extension of CloudSim v3.0.3, and the source code has been published for the research community. The simulation results proved to be in accordance with the analytic models, and the scalability of the module has been demonstrated for further development.
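CloudSimDisk itself is a Java extension of CloudSim, but the energy-accounting idea behind an energy-aware storage model (power per disk state multiplied by the time spent in that state) can be sketched in a few language-neutral lines; the wattages below are illustrative, not from a real drive datasheet.

```python
# CloudSimDisk is a Java extension of CloudSim; this is only a
# language-neutral sketch of the energy-accounting idea: power per
# disk state times time spent in that state. Wattages are illustrative.
POWER_W = {"active": 7.9, "idle": 5.0}   # illustrative, not a datasheet

def disk_energy(intervals):
    """intervals: list of (state, seconds) pairs -> energy in joules."""
    return sum(POWER_W[state] * seconds for state, seconds in intervals)

# e.g. a request pattern: active I/O bursts interleaved with idle gaps
trace = [("active", 40), ("idle", 300), ("active", 80), ("idle", 600)]
print(f"{disk_energy(trace):.0f} J")
```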
Abstract:
This thesis describes the development of a software requirements specification for a user-centric event management system. The system is set to satisfy three goals: adding value for the event attendees, adding value for the event organizer, and reducing the costs of arranging and running an event. The requirements are identified by researching the prescriptive traits of the event business and the current state of the case company and its environment. First, the professional and human needs for events are scrutinized. Second, some recent reports on current trends in the event business are reviewed. Then the event life cycle is presented using the model of new service development, with special attention to the online promotion of events and especially word-of-mouth marketing. Events are also regarded from the perspective of social networks and social media. The case company's current state and its competitors are reviewed to formulate the needs which the system should fulfil. Then the currently available solutions for social-media-oriented event management are reviewed. The result is a set of functional and non-functional requirements. The functional requirements are categorized into social media, social networking, event personalization, event management, and system administration features. The specified features and non-functional requirements satisfy the three goals set for the system.
Abstract:
B2B document handling is moving very rapidly from paper to electronic networks and the electronic domain. Moving, handling, and transforming large electronic business documents demands a lot from the systems handling them. This paper explores new technologies such as SOA, event-driven systems, and the enterprise service bus (ESB), and a scalable, event-driven enterprise service bus is created to demonstrate these new approaches to message handling. As an end result, we have a small but fully functional messaging system with several different components. This was the first larger Java project done in-house, so along the way we developed our own set of best practices for Java development: setting up configurations, tools, code repositories, class naming, and much more.
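As a toy illustration of event-driven message handling (not the thesis's actual Java ESB components), a minimal publish/subscribe core shows how handlers stay decoupled from the producers of the messages.

```python
# Toy illustration of event-driven message handling, not the thesis's
# actual Java ESB: handlers subscribe to topics and are invoked as
# messages arrive, so components stay decoupled from each other.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.handlers[topic]:
            handler(message)

bus = EventBus()
bus.subscribe("invoice.received", lambda m: print("validate", m))
bus.subscribe("invoice.received", lambda m: print("transform", m))
bus.publish("invoice.received", {"id": 42})
```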
Abstract:
This thesis examines the history and evolution of information system process innovation (ISPI) processes (adoption, adaptation, and unlearning) within information system development (ISD) work in an internal information system (IS) department and in two IS software house organisations in Finland over a 43-year period. The study offers insights into influential actors and their dependencies in deciding over ISPIs. The research uses a qualitative research approach, and the research methodology involves the description of the ISPI processes, how the actors searched for ISPIs, and how the relationships between the actors changed over time. The existing theories were evaluated using conceptual models of the ISPI processes based on the innovation literature in the IS area. The main focus of the study was to observe changes in the main ISPI processes over time. The main contribution of the thesis is a new theory. The term theory should here be understood as 1) a new conceptual framework of the ISPI processes, and 2) new ISPI concepts and categories, together with the relationships between the ISPI concepts inside the ISPI processes. The thesis gives a comprehensive and systematic account of the history and evolution of the ISPI processes; reveals the factors that affected ISPI adoption; studies ISPI knowledge acquisition, information transfer, and adaptation mechanisms; and reveals the mechanisms affecting ISPI unlearning, the changes in the ISPI processes, and the diverse actors involved in the processes. The results show that both the internal IS department and the two IS software houses sought opportunities to improve their technical skills and career paths, and this created an innovative culture. When new technology generations come to the market, the platform systems need to be renewed, and therefore the organisations invest in ISPIs in cycles. The extent of internal learning and experimentation was higher than that of external knowledge acquisition. Until the outsourcing event (1984) decision-making was centralised and the internal IS department was very influential over ISPIs. After outsourcing, decision-making became distributed between the two IS software houses, the IS client, and its internal IT department. The IS client wanted to ensure that information systems would serve the business of the company and thus wanted to co-operate closely with the software organisations.
Abstract:
The purpose of the work was to realize a high-speed digital data transfer system for the RPC muon chambers in the CMS experiment at CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to the Frontend Boards (FEB) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between these two is about 80 metres, and the speed required of the optical links was pushing the limits of available technology when the project was started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. By choosing a high speed it was possible to multiplex the data from some of the chambers into the same fibres to reduce the number of links needed. Further reduction was achieved by employing zero suppression and data compression, and a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have a moderate tolerance to Single Event Effects (SEEs). This required some radiation test campaigns, and eventually led to ASICs being chosen for some of the critical parts. The system was made to be as reconfigurable as possible. The reconfiguration needs to be done from a distance, as the electronics is not accessible except during some short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is extensively used, and the firmware development for the FPGAs constituted a sizable part of the work. Some special techniques needed to be used there too, to achieve the required radiation tolerance. The system has been demonstrated to work in several laboratory and beam tests, and now we are waiting to see it in action when the LHC starts running in the autumn of 2008.
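Zero suppression, one of the data-reduction techniques mentioned above, transmits only the hit channels with their addresses instead of the full channel array; a schematic sketch follows, with an assumed rather than the actual link-board frame format.

```python
# Schematic sketch of zero suppression: only hit channels are sent as
# (address, value) pairs instead of the full channel array. The frame
# format here is assumed, not the actual RPC link-board format.
def zero_suppress(channels):
    """channels: full readout array -> sparse list of (index, value)."""
    return [(i, v) for i, v in enumerate(channels) if v != 0]

frame = [0] * 64
frame[7], frame[41] = 1, 1       # two hit channels out of 64
print(zero_suppress(frame))      # [(7, 1), (41, 1)]: 2 words instead of 64
```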