973 results for Circuits hidràulics
Abstract:
This master's thesis focuses on optimizing the parameters of a distribution transformer for a low voltage direct current (LVDC) distribution system, of which the transformer is one of the main components. The transformer is studied from several viewpoints, such as its ability to filter the harmonics caused by the rectifier, its losses, and its short-circuit current limiting. Determining the available short-circuit currents is one of the most important aspects of designing power distribution systems: short circuits and their effects must be considered when selecting electrical equipment, circuit protection and other devices.
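As a rough illustration of the short-circuit sizing mentioned above, the available fault current at a transformer's secondary is often first estimated from the rated power, voltage and per-unit impedance. The sketch below uses the standard textbook formula with made-up ratings, not values from the thesis:

```python
import math

def short_circuit_current(s_rated_va: float, v_ll: float, z_pu: float) -> float:
    """Approximate available RMS short-circuit current at the secondary
    of a three-phase transformer, assuming an infinite (stiff) source."""
    i_full_load = s_rated_va / (math.sqrt(3) * v_ll)  # rated secondary current
    return i_full_load / z_pu                          # fault current limited only by Z%

# Hypothetical 500 kVA, 400 V transformer with 5 % impedance
i_sc = short_circuit_current(500e3, 400.0, 0.05)
print(round(i_sc))  # ~14434 A
```

A lower per-unit impedance improves regulation but raises the available fault current, which is exactly the trade-off the thesis explores.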
Abstract:
Implantation of deep brain stimulation (DBS) electrodes via stereotactic neurosurgery has become a standard procedure for the treatment of Parkinson's disease. More recently, the range of neuropsychiatric conditions and the possible target structures suitable for DBS have greatly increased. The former include obsessive-compulsive disorder, depression, obesity, tremor, dystonia, Tourette's syndrome and cluster headache. In this article we argue that several of the target structures for DBS (nucleus accumbens, posterior inferior hypothalamus, nucleus subthalamicus, nuclei in the thalamus, globus pallidus internus, nucleus pedunculopontinus) are located at strategic positions within brain circuits related to motivational behaviors, learning, and motor regulation. Recording from DBS electrodes, either during the operation or post-operatively from externalized leads, while the patient performs cognitive tasks tapping the functions of the respective circuits provides a new window on the brain mechanisms underlying these functions. This is exemplified by a study of a patient suffering from obsessive-compulsive disorder, from whom we recorded in a flanker task designed to assess action monitoring processes while he received a DBS electrode in the right nucleus accumbens. Clear error-related modulations were obtained from the target structure, demonstrating a role of the nucleus accumbens in action monitoring. Based on recent conceptualizations of several different functional loops and on neuroimaging results, we suggest further lines of research using this new window on brain functions.
Abstract:
In a short time the Internet has become the medium most used by tourists to plan, organize and purchase a trip, which is why we propose offering the same facilities at the destination. Dynamic Advertising, or "Digital Signage", is a new communication service consisting of a set of technologies and computer applications that make it possible to broadcast multimedia messages and thus communicate in an innovative way with each company's target audience; adding an independent, multimedia, interactive system that can be used to provide information and/or carry out transactions maximizes the potential of the service. We therefore propose creating a Digital Multimedia Network of Interactive Kiosks, each supported by a plasma screen for the Digital Signage technology. The strategically chosen location is one of the points with the greatest flow of tourists, such as hotel entrances. The aim is thus to create closed circuits in the geographical areas containing the main tourist centres of Mallorca. The possibility of reaching population segments that are highly attractive for the product or service multiplies, since this is an easy, effective and highly suggestive way of promoting whatever is intended. One advantage is the simplicity of the required technological infrastructure: the device on which the messages are displayed is a conventional plasma screen, together with a point-of-sale terminal installed in a busy spot. Each module is connected to the ADSL network through a local Internet server. The network connection is essential so that content maintenance and updates can be carried out remotely. The main objective of this work is to study the feasibility of deploying the network by means of a market study analysing the key groups for its implementation: hoteliers, the tourist industry and the Balearic Government.
The benefits the new service will bring and the repercussions of its installation are identified. Among the most notable results of this study are the acceptance the idea received among the hoteliers interviewed and the positive response of the tourist industry. The following are acknowledged: an improvement of the sector's image, its use as a tourism promotion tool by the Government, and a contribution to economic sustainability, since it increases the competitiveness of companies and thereby improves the quality of service.
Abstract:
An AM (amplitude modulation) transmitter uses one of the many modulation techniques in existence today. Signal modulation is highly important; some examples of its benefits are: it facilitates signal propagation over cable or through the air; it organizes the spectrum, assigning channels to the different information streams; it reduces the size of antennas; it optimizes the bandwidth of each channel; it avoids interference between channels; it protects the information from noise degradation; and it defines the quality of the transmitted information. The main objective of this work is to build an AM transmitter using electronic components available on the market. This is carried out through several design procedures. A theoretical design procedure is performed using the datasheets of the different components. A simulation-based design procedure follows, which makes it possible to test the design of the device and to work out some parts that are impossible to reproduce theoretically. Finally, the device is built in practice. Among the most relevant conclusions of this work, we would highlight the importance of simulation for designing radio-frequency circuits. This work has shown that, thanks to a good simulation, the first prototype of the device worked perfectly. We would also note the importance of a proper antenna design in order to make the most of the performance of the device. In conclusion, building a transmitter provides a balanced grounding in electronics and telecommunications that is important for the design of communication devices.
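The amplitude-modulated waveform such a transmitter produces follows the standard relation s(t) = (1 + m·cos(2π·fm·t))·cos(2π·fc·t). The sketch below evaluates it with illustrative frequencies and modulation index, not the values used in this particular design:

```python
import math

def am_signal(t: float, fc: float = 1e6, fm: float = 1e3, m: float = 0.5) -> float:
    """AM waveform: a carrier at fc whose amplitude envelope follows
    the message tone at fm, with modulation index m (all illustrative)."""
    envelope = 1 + m * math.cos(2 * math.pi * fm * t)
    return envelope * math.cos(2 * math.pi * fc * t)

# At t = 0 both cosines equal 1, so the envelope peaks at 1 + m
print(am_signal(0.0))  # 1.5
```

Keeping m below 1 avoids overmodulation; at m > 1 the envelope would cross zero and a simple envelope detector could no longer recover the message.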
Abstract:
Being able to measure and record different kinds of quantities such as pressure, force, temperature, etc. has become a necessity for many current applications. These quantities may come from very diverse sources, such as the environment, or may be generated by mechanical or electrical systems. Data acquisition systems are used to acquire these quantities. Such systems take analogue samples of the real world and transform them into digital data that can be manipulated by an electronic system. Practically any quantity can be measured using a suitable sensor. A quantity widely used in data acquisition systems is temperature. Temperature acquisition systems are very widespread; we find them as stand-alone systems whose purpose is to display the acquired data, or as part of control systems, providing inputs needed for their correct operation and to guarantee stability, safety, etc. This project, promoted by the company Elausa, acquires the input signal of two thermocouples. These measure the temperatures of electronic circuits placed inside Elausa's climate chamber and subjected to different temperature conditions in order to obtain type approval of the circuit. The system must be able to display the acquired data in real time and store it on a PC located in an office about 30 m away from the test room. The system consists of an electronic circuit that acquires and conditions the output signal of the thermocouples, adapting it to the input voltage of an analogue-to-digital converter of the microcontroller integrated on the board. This information is then sent via a radio-frequency transmitter module to the PC, where the acquired data are displayed.
The stated objectives are as follows: design the electronic signal acquisition and conditioning circuit; design, manufacture and assemble the printed circuit board of the acquisition board; write the control program for the microcontroller; write the program to display and store the data on a PC; the system must acquire two temperatures through thermocouples with an input range of -40 °C to +240 °C; and the data must be transmitted via RF. The results of the project were satisfactory and the stated objectives were met.
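The PC-side conversion from raw ADC counts back to a temperature can be sketched as below, assuming (hypothetically) a 10-bit converter with a 5 V reference and a conditioning stage that maps the thermocouple signal linearly onto the project's stated -40 °C to +240 °C range; the real system's scaling and any cold-junction compensation would differ:

```python
def adc_to_temperature(counts: int, *, n_bits: int = 10, v_ref: float = 5.0,
                       t_min: float = -40.0, t_max: float = 240.0) -> float:
    """Map an ADC reading back to °C, assuming the conditioning stage
    scales the thermocouple output linearly onto 0..v_ref over the
    full temperature range (all parameters hypothetical)."""
    v = counts * v_ref / ((1 << n_bits) - 1)        # ADC counts -> volts
    return t_min + (v / v_ref) * (t_max - t_min)    # volts -> °C

print(adc_to_temperature(0))      # -40.0
print(adc_to_temperature(1023))   # 240.0
```

With 10 bits over a 280 °C span, the quantization step is about 0.27 °C, which illustrates why the ADC resolution must be chosen against the required measurement accuracy.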
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method, especially against transient faults, at this abstraction level. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults; therefore, other solutions against them are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated.
At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
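Error control coding at the data link layer, as discussed above, can be illustrated with the classic single-error-correcting Hamming(7,4) code; this is a generic textbook sketch, not the specific coding scheme developed in the thesis:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, 0 if none
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                       # flip one bit "in transit"
print(hamming74_decode(word))      # [1, 0, 1, 1]
```

A code like this masks any single transient bit-flip on a link, but a permanently stuck wire defeats it, which motivates the spare-wire and split-transmission techniques mentioned above.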
Abstract:
Hydrogen peroxide and chlorine are compared as possible disinfectants for water-cooling circuits. To this purpose, samples taken from the cooling system of a steelmaking plant were treated (at 25 ºC and pH values of 5.5 and 8.5) with varying amounts of the two oxidizing agents (0.0 mg/L, 2.0 mg/L and 6.0 mg/L). The results were evaluated through bacterial counting and measurement of corrosion rates on AISI 1020 carbon steel coupons. Bacterial removal and corrosion effects proved to be similar and satisfactory for both reagents.
Abstract:
BACKGROUND: The Cancer Fast-track Programme's aim was to reduce the time that elapsed between well-founded suspicion of breast, colorectal and lung cancer and the start of initial treatment in Catalonia (Spain). We sought to analyse its implementation and overall effectiveness. METHODS: A quantitative analysis of the programme was performed using data generated by the hospitals on the basis of seven fast-track monitoring indicators for the period 2006-2009. In addition, we conducted a qualitative study, based on 83 semistructured interviews with primary and specialised health professionals and health administrators, to obtain their perception of the programme's implementation. RESULTS: About half of all new patients with breast, lung or colorectal cancer were diagnosed via the fast track, though the cancer detection rate declined across the period. Mean time from detection of suspected cancer in primary care to start of initial treatment was 32 days for breast, 30 for colorectal and 37 for lung cancer (2009). Professionals associated with the implementation of the programme reported that general practitioners faced with a suspicion of cancer had changed their practice with the aim of preventing delays. Furthermore, hospitals were found to have pursued three specific implementation strategies (top-down, consensus-based and participatory), which made for the cohesion and sustainability of the circuits. CONCLUSION: The programme has contributed to speeding up diagnostic assessment and treatment of patients with suspicion of cancer, and to clarifying the patient pathway between primary and specialised care.
Abstract:
Most climate change projections show important decreases in water availability in the Mediterranean region by the end of this century. We assess those main climate change impacts on water resources in three medium-sized catchments with varying climatic conditions in north-eastern Spain. A combination of hydrological modelling and climate projections with B1 and A2 IPCC emission scenarios is performed to infer future stream flows. The largest reduction (22-48% for 2076-2100) of stream flows is expected in the headwaters of the two wettest catchments, while lower decreases (22-32% for 2076-2100) are expected in the drier one. In all three catchments, autumn and summer are the seasons with the most notable projected decreases in stream flow, 50% and 34%, respectively (2076-2100). Thus, ecological flows might be noticeably impacted by climate change in the catchments, especially in the headwaters of those wet catchments.
Abstract:
In recent years FPGA chips have become more powerful, while at the same time their price has dropped to a level at which they are an option for an increasing number of applications. The subject of my bachelor's thesis was to design, and possibly implement, an embedded device that counts the number of pulses occurring in a signal. It would be used when measuring sparking in the bearings of an electric motor. The sparks are detected from outside the motor with a UHF antenna. The pulses picked out of the antenna signal are very fast, so the digital logic is also required to be exceptionally fast. For this reason the device was implemented with an FPGA instead of, for example, a microcontroller. The pulse counter was implemented on the FPGA with relatively little effort, and its practical operation was tested under real conditions.
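The core logic of such a pulse counter, a synchronous rising-edge detector feeding a counter, is easy to sketch in software, even though the actual device needed FPGA logic for speed; the threshold and samples below are made up for illustration:

```python
def count_pulses(samples, threshold: float = 1.0) -> int:
    """Count rising edges in a sampled signal, mimicking the
    edge-detect-plus-counter structure of the FPGA design: remember
    the previous comparator state and increment on a low-to-high step."""
    count, prev_high = 0, False
    for s in samples:
        high = s >= threshold
        if high and not prev_high:   # rising edge detected
            count += 1
        prev_high = high
    return count

print(count_pulses([0, 1, 1, 0, 1, 0, 0, 1]))  # 3
```

In hardware the same comparison happens once per clock cycle, which is why the clock must run faster than the shortest pulse to be counted.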
Abstract:
In this work, noise and aromatic hydrocarbon levels of indoor and outdoor karting circuits located in Rio de Janeiro were assessed. The sampling was performed using active charcoal cartridges, followed by solvent desorption and analysis by gas chromatography with mass spectrometry detection. This study demonstrated that the karting circuits, venues for entertainment, were a major source of air pollution, with the detection of considerable amounts of these compounds (2.0 to 19.7 µg m-3 of benzene; 4.1 to 41.1 µg m-3 of toluene; 2.8 to 36.2 µg m-3 of ethylbenzene; 0.7 to 36.2 µg m-3 of xylenes) and high noise levels.
Abstract:
The computer is a useful tool in the teaching of upper secondary school physics, and should not have a subordinate role in students' learning process. However, computers and computer-based tools are often not available when they could serve their purpose best in the ongoing teaching. Another problem is the fact that commercially available tools are not usable in the way the teacher wants. The aim of this thesis was to try out a novel teaching scenario in a complicated subject in physics, electrodynamics. The didactic engineering of the thesis consisted of developing a computer-based simulation and training material, implementing the tool in physics teaching and investigating its effectiveness in the learning process. The design-based research method, didactic engineering (Artigue, 1994), which is based on the theory of didactical situations (Brousseau, 1997), was used as a frame of reference for the design of this type of teaching product. In designing the simulation tool a general spreadsheet program was used. The design was based on parallel, dynamic representations of the physics behind the function of an AC series circuit in both graphical and numerical form. The tool, which was furnished with possibilities to control the representations in an interactive way, was hypothesized to activate the students and promote the effectiveness of their learning. An effect variable was constructed in order to measure the students' and teachers' conceptions of learning effectiveness. The empirical study was twofold. Twelve physics students, who attended a course in electrodynamics in an upper secondary school, participated in a class experiment with the computer-based tool implemented in three modes of didactical situations: practice, concept introduction and assessment. The main goal of the didactical situations was to have students solve problems and study the function of AC series circuits, taking responsibility for their own learning process.
In the teacher study eighteen Swedish speaking physics teachers evaluated the didactic potential of the computer-based tool and the accompanying paper-based material without using them in their physics teaching. Quantitative and qualitative data were collected using questionnaires, observations and interviews. The result of the studies showed that both the group of students and the teachers had generally positive conceptions of learning effectiveness. The students' conceptions were more positive in the practice situation than in the concept introduction situation, a setting that was more explorative. However, it turned out that the students' conceptions were also positive in the more complex assessment situation. This had not been hypothesized. A deeper analysis of data from observations and interviews showed that one of the students in each pair was more active than the other, taking more initiative and more responsibility for the student-student and student-computer interaction. These active students had strong, positive conceptions of learning effectiveness in each of the three didactical situations. The group of less active students had a weak but positive conception in the first two situations, but a negative conception in the assessment situation, thus corroborating the hypothesis ad hoc. The teacher study revealed that computers were seldom used in physics teaching and that computer programs were in short supply. The use of a computer was considered time-consuming. As long as physics teaching with computer-based tools has to take place in special computer rooms, the use of such tools will remain limited. The affordance is enhanced when the physical dimensions as well as the performance of the computer are optimised. As a consequence, the computer then becomes a real learning tool for each pair of students, smoothly integrated into the ongoing teaching in the same space where teaching normally takes place.
With more interactive support from the teacher, the computer-based parallel, dynamic representations will be efficient in promoting the learning process of the students with focus on qualitative reasoning - an often neglected part of the learning process of the students in upper secondary school physics.
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high performance, area and energy efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol presented is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs via efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level.
Moreover, in order to increase memory parallelism and bring compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented to use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve the memory utilization and reduce both memory and network latencies. Three Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate to achieve better performance and package density as compared to traditional 2D ICs. In addition, combining the benefits of 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve the performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are also introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
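The congestion-aware output selection described above can be sketched in a few lines: among the directions that bring a packet closer to its destination in a 2D mesh, pick the neighbour reporting the least load. This is a generic illustration with a made-up congestion metric, not the router proposed in the thesis; real adaptive routers additionally impose turn-model restrictions to remain deadlock-free:

```python
def next_hop(cur, dst, congestion):
    """Pick the next router in a 2D mesh: among the productive
    directions (those that reduce the distance to dst), choose the
    least congested neighbour. `congestion` maps a router coordinate
    to its current load (a hypothetical metric)."""
    x, y = cur
    dx, dy = dst[0] - x, dst[1] - y
    candidates = []
    if dx:
        candidates.append((x + (1 if dx > 0 else -1), y))
    if dy:
        candidates.append((x, y + (1 if dy > 0 else -1)))
    if not candidates:
        return cur  # already at the destination
    return min(candidates, key=lambda n: congestion.get(n, 0))

# With the eastern neighbour loaded, the packet detours north first
print(next_hop((0, 0), (2, 2), {(1, 0): 5, (0, 1): 1}))  # (0, 1)
```

Restricting the choice to productive directions keeps every route minimal; fully adaptive non-minimal routing would also need livelock protection.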
Abstract:
The focus of the present work was on 10- to 12-year-old elementary school students’ conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine if it would be more beneficial to combine than contrast simulation and laboratory activities in science teaching. It was argued that the status quo where laboratories and simulations are seen as alternative or competing methods in science teaching is hardly an optimal solution to promote students’ learning and understanding in various science domains. It was hypothesized that it would make more sense and be more productive to combine laboratories and simulations. Several explanations and examples were provided to back up the hypothesis. In order to test whether learning with the combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and simulation-laboratory combination (real and virtual circuits were used simultaneously). In order to measure and compare how these environments affected students’ conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments were presented in four empirical studies. Three of the studies focused on learning outcomes between the conditions and one on learning processes. Study I analyzed learning outcomes from experiment I. The aim of the study was to investigate if it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. 
Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students' conceptual understanding more than studying circuits in the simulation and laboratory environments alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students' conceptual understanding remained at an elementary level after the intervention. Study II analyzed learning outcomes from experiment II. The aim of the study was to investigate if and how learning outcomes in the simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions. The results showed that when the students were working with the simulation alone, they were able to gain a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE).
A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment. In the combination environment explicit instruction (CE) did not seem to elicit much additional gain for students’ understanding of electric circuits compared to implicit instruction (CI). Instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III analyzed from video data learning processes of those 50 students that participated in experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find out possible explanations for the success of the combination condition in Experiments I and II. The video data provided clear evidence about the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other, that is, one representation helped students to interpret and understand the outcomes they received from the other representation. However, there were also instances in which analogical encoding took place, that is, situations in which the slightly discrepant results between the representations ‘forced’ students to focus on those features that could be generalised across the two representations. No statistical differences were found in the amount of experienced cognitive conflicts and self-explanations between simulation and combination conditions, though in self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the amount of self-explanations. Overall, the amount of cognitive conflicts and self-explanations was very low. 
The aim of the Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of experiments I and II; the secondary aim was to explore the relationship between the learning environments and students’ prior domain knowledge (low and high) in the experiments. Aggregated results of experiments I & II showed that on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them scored also above the average of the simulation environment. Seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that overall students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge, that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge. However, more detailed inspection of the results showed that there were considerable differences between the experiments regarding how students with low and high prior knowledge benefitted from the combination: in Experiment I, especially students with low prior knowledge benefitted from the combination as compared to those students that used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulation along with laboratory activities to promote students’ conceptual understanding of electricity. 
It can be concluded that when teaching students about electricity, the students can gain better understanding when they have an opportunity to use the simulation and the real circuits in parallel than if they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with the explicit instruction. The outcomes of the empirical studies can be considered as the first unambiguous evidence on the (additional) benefits of combining laboratory and simulation activities in science education as compared to learning with laboratories and simulations alone.
Abstract:
This master's thesis specifies an online production optimization method for a power plant burning biofuel. The specification work is part of a further development project of MW Power's MultiPower CHP power plant concept. From among the various existing optimization approaches, a suitable method is chosen, based on a plant model and a cost function, whose results are fed to the automation system in the form of setpoints for PID controllers. The energy and mass balances of the plant are calculated from process measurements, and the results are used as inputs for the next optimization instant. The objective function of the optimization is a cost function whose terms are the revenues and costs arising from operating the power plant. The process is optimized, respecting the limit values given to the controllers, so that the total margin is maximized. As the plant accumulates operating hours and historical data, process optimization can be accelerated by statistically searching the historical data for a moment whose conditions correspond to the present situation. The margin at that historical moment is compared with the margin obtained by optimizing the cost function. The setpoints computed by whichever method yields the better margin are taken into use for process control. If neither the cost-function calculation nor the search based on historical data yields an improving margin, the setpoints they compute are not taken into use. Instead, the optimum is sought with a deterministic optimization algorithm that searches the neighbourhood of the current operating point for controller setpoints yielding a better margin. The control system can also be implemented in a predictive manner. In the practical part of the work, the power plant model is created with two different modelling programs, one describing the operation of the boiler and the other that of the power plant process. The process values obtained from the modelling are used as inputs for calculating the operating margin. The margin is calculated on the basis of the cost function.
The largest revenues are related to the sale of electricity and heat and to the production subsidy, while the largest costs are related to repayment of the investment and to fuel purchases. A sensitivity analysis is performed on the cost function, following the change in the margin as technical process values are varied. The results are compared with the results of verification measurements performed at a reference power plant, and it is found that the results are not fully consistent. The differences are due both to shortcomings in the modelling and to the rather short observation periods of the measurements. The practical implementation of the automated optimization system is initiated by specifying the optimization method to be adopted, the associated control loops and the required input data. The project will continue with the programming, testing and tuning of the system in a real power plant environment, and later with the implementation of predictive control.
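The structure of the cost function described above, revenues from electricity, heat and the production subsidy minus fuel and investment repayment, can be sketched as follows. All prices and the capital-cost term are illustrative placeholders, not figures from the thesis:

```python
def operating_margin(p_el_mw: float, q_heat_mw: float, fuel_mw: float,
                     hours: float, *, price_el: float = 45.0,
                     price_heat: float = 30.0, subsidy: float = 10.0,
                     price_fuel: float = 20.0,
                     capital_cost_per_h: float = 200.0) -> float:
    """Operating margin (in currency units) over `hours` of operation:
    revenue from electricity (plus production subsidy) and heat, minus
    fuel cost and capital repayment. All prices are illustrative."""
    revenue = (p_el_mw * (price_el + subsidy) + q_heat_mw * price_heat) * hours
    costs = (fuel_mw * price_fuel + capital_cost_per_h) * hours
    return revenue - costs

# Hypothetical CHP operating point: 5 MW electricity, 12 MW heat, 15 MW fuel
print(operating_margin(5, 12, 15, 1))  # 135.0
```

An optimizer of the kind the thesis specifies would search over the controllable setpoints (and hence over p_el, q_heat and fuel consumption) to maximize this margin within the controllers' limit values.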