818 results for Fuzzy-logic
Abstract:
This study examines how increased memory utilisation affects throughput and energy consumption in scientific computing, especially in high-energy physics. Our aim is to minimise the energy consumed by a set of jobs without increasing the processing time. Earlier tests indicated that, especially in data analysis, throughput can increase by over 100% and energy consumption decrease by 50% when multiple jobs are processed in parallel per CPU core. Since jobs are heterogeneous, it is not possible to find a single optimum value for the number of parallel jobs. A better solution is based on memory utilisation, but finding an optimum memory threshold is not straightforward. Therefore, a fuzzy logic-based algorithm was developed that can dynamically adapt the memory threshold based on the overall load. In this way, it is possible to keep memory consumption stable under different workloads while achieving significantly higher throughput and energy efficiency than with a traditional fixed number of jobs or a fixed memory threshold.
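As an illustration only (the abstract does not give the actual rule base), a fuzzy threshold adapter of this kind can be sketched with triangular membership functions over the overall load and weighted-average defuzzification; the load ranges and the threshold values in GB below are invented:

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adapt_threshold(load):
    """Map the overall memory load (0..1) to a per-core memory threshold (GB).

    Hypothetical rules: low load -> raise the threshold so more jobs are
    admitted; high load -> lower it to keep memory consumption stable.
    """
    mu_low = tri(load, -0.5, 0.0, 0.5)
    mu_mid = tri(load, 0.0, 0.5, 1.0)
    mu_high = tri(load, 0.5, 1.0, 1.5)
    # Weighted-average defuzzification of the rule consequents (GB).
    total = mu_low + mu_mid + mu_high
    return (4.0 * mu_low + 2.0 * mu_mid + 1.0 * mu_high) / total
```

The threshold varies smoothly between the rule consequents as the load changes, which is what keeps the admission decision stable under mixed workloads.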
Abstract:
A new method for decision making that uses the ordered weighted averaging (OWA) operator in the aggregation of information is presented. We use a concept known in the literature as the index of maximum and minimum level (IMAM). This index is based on distance measures and other techniques that are useful for decision making. By using the OWA operator in the IMAM, we form a new aggregation operator that we call the ordered weighted averaging index of maximum and minimum level (OWAIMAM) operator. Its main advantage is that it provides a parameterized family of aggregation operators between the minimum and the maximum, along with a wide range of special cases. The decision maker can then take decisions according to his or her degree of optimism, considering ideals in the decision process. A further extension of this approach is presented using hybrid averages and Choquet integrals. We also develop an application of the new approach to a multi-person decision-making problem regarding the selection of strategies.
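A minimal sketch of the plain OWA operator at the core of this construction (the OWAIMAM additionally involves the IMAM index, which the abstract does not define in enough detail to reproduce): the values are reordered descending and the weights attach to positions, not to sources, so suitable weight vectors recover the maximum, the minimum, and everything in between.

```python
def owa(values, weights):
    """Ordered weighted average: weights apply to the ranked positions
    of the values (largest first), not to their original sources."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(ordered, weights))
```

With weights (1, 0, 0) this returns the maximum, with (0, 0, 1) the minimum, and with uniform weights the arithmetic mean, which is the "parameterized family between the minimum and the maximum" the abstract refers to.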
Abstract:
Research on condition monitoring of electric motors has been extensive for several decades. Research and development at universities and in industry has provided the means for predictive condition monitoring, and many different devices and systems have been developed that are widely used in industry, transportation, and civil engineering. In addition, many methods have been developed and reported in the scientific literature to improve the automatic analysis of faults. These methods, however, are not widely used as part of condition monitoring systems. The main reasons are, firstly, that many methods are presented in scientific papers without their performance being evaluated in different conditions, and secondly, that the methods include parameters so case-specific that implementing a system using them would be far from straightforward. In this thesis, some of these methods are evaluated theoretically and tested with simulations and with a drive in a laboratory. A new automatic analysis method for bearing fault detection is introduced. The first part of this work explains how the signal originating from a bearing fault is generated and estimates, qualitatively and quantitatively, its influence on the stator current. The feasibility of the stator current measurement as a bearing fault indicator is verified experimentally with a running 15 kW induction motor. The second part of this work concentrates on bearing fault analysis using the vibration measurement signal. The performance of a micromachined silicon accelerometer chip in conjunction with envelope spectrum analysis of the cyclic bearing fault is experimentally tested. Furthermore, different methods for creating feature extractors for bearing fault classification are researched, and an automatic fault classifier using multivariate statistical discrimination and fuzzy logic is introduced.
It is often important that an on-line condition monitoring system is integrated with the industrial communications infrastructure. Two types of sensor solutions are tested in the thesis: the first is a sensor with calculation capacity, for example for producing the envelope spectra; the other collects the measurement data in memory, from which another device can read the data via a field bus. The data communications requirements depend heavily on the type of sensor solution selected. If the data is already analysed in the sensor, data communications are needed only for the results; otherwise, all measurement data must be transferred. The classification method can be complex if the data is analysed at a management-level computer, but if the analysis is made in the sensor itself, it must be simple because of the restricted calculation and memory capacity.
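The envelope spectrum analysis mentioned above can be sketched as follows (a generic textbook version, not the thesis implementation): form the analytic signal with an FFT-based Hilbert transform, take its magnitude as the envelope, and look for the bearing-fault impact rate as a peak in the envelope's spectrum. Plain O(n²) DFTs are used here to keep the sketch self-contained:

```python
import cmath
import math

def _dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * i * k / n)
                for k in range(n)) for i in range(n)]

def _idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * math.pi * i * k / n)
                for k in range(n)) / n for i in range(n)]

def envelope_spectrum(signal, fs):
    """Return (freqs, magnitudes) of the envelope spectrum of a real signal.

    The analytic signal is built by zeroing negative frequencies and
    doubling positive ones; its magnitude is the envelope, whose spectrum
    reveals the repetition rate of bearing-fault impacts.
    """
    n = len(signal)
    spec = _dft(signal)
    gain = [0.0] * n                       # Hilbert-transform weights
    gain[0] = 1.0
    if n % 2 == 0:
        gain[n // 2] = 1.0
    for i in range(1, (n + 1) // 2):
        gain[i] = 2.0
    analytic = _idft([s * g for s, g in zip(spec, gain)])
    envelope = [abs(a) for a in analytic]
    mean = sum(envelope) / n               # remove the DC offset
    centred = [e - mean for e in envelope]
    mags = [abs(c) / n for c in _dft(centred)[:n // 2 + 1]]
    freqs = [i * fs / n for i in range(n // 2 + 1)]
    return freqs, mags
```

For an amplitude-modulated test tone (carrier 64 Hz modulated at 8 Hz) the dominant envelope-spectrum peak lands at the 8 Hz modulation rate, which is how a bearing's characteristic fault frequency would show up.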
Abstract:
Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions can be used to minimize superheater corrosion. The growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models depending on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be determined with a test done in a test combustor or in a commercial boiler. The steel samples can be placed in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, determining the corrosion chemistry and estimating the lifetime are more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated: · A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database of fuel and bed-material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
· An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system supports the implementation of user-defined algorithms, which allows an artificially intelligent system to be developed for the task. Based on the results of the analyses, several new rules were developed for determining the degree and type of corrosion. By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used to minimize corrosion risks in the design of fluidized bed boilers.
Abstract:
This Master's thesis deals with the analysis of building feedback data. In this thesis, feedback data means the technical and human data related to a building's services, from which conclusions can be drawn about the functioning of the building and its technical systems. The data are collected from the systems through measurement points into a remote monitoring centre, where they can be inspected and further analysed. Alongside the data obtained from the monitoring centre, computational energy-consumption models of the monitored building are compiled during the design phase. The building's occupants, the properties of the building, and its environment also form part of the feedback data. Building feedback data can thus be divided into technical and human data and, from the analysis point of view, into static and dynamic data. This thesis investigated the collection of feedback data with a remote monitoring centre, combining it with construction quality-control tools and fuzzy logic. The practical part of the work consists of processing the data passing between a building and the remote monitoring centre connected to it, defining the measurement points, and analysing the data. Three example sites from the Helsinki region are presented. An evaluation model was developed that covers all material related to building feedback data and the ways of analysing it, including, as a new element, fuzzy logic. The analysis proceeds from forming default values, through building a simulation and comparing it with the actual data, to the conclusions drawn from these. In addition to conventional technical data, fuzzy logic was used to bring out factors explaining deviations, for example properties of the building's occupants.
Abstract:
The aim of this work was to find ways of reducing the nitrogen oxide emissions of a fluidized-bed boiler. Because the emissions were already low thanks to the fluidized-bed technique and a hybrid SNCR/SCR denitrification system, it was decided to reduce the emissions by improving the control of the ammonia injection. The original ammonia-injection control was too slow to remove the nitrogen oxide peaks caused by random disturbances. The ammonia injection was improved by adding piston pumps to every ammonia line, which allow ammonia to be fed where it is needed most. A new controller based on fuzzy logic was developed for the ammonia-injection control. Other advanced control methods, such as neural networks, were also utilized in the development of the controller. The ammonia-injection controller was successfully tested at Brista Kraft's power plant in Märsta, Sweden.
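The abstract does not describe the controller's rule base; as a purely hypothetical sketch, a fuzzy controller of this type can be built on two inputs, the NOx error and its rate of change, so that it reacts quickly to emission peaks. All scales and gains below are invented:

```python
def fuzzy_ammonia_step(error, d_error):
    """Return a signed relative adjustment of the ammonia feed.

    error   -- measured NOx minus setpoint (e.g. mg/Nm3)
    d_error -- its rate of change per control interval

    Rules: a large positive error OR a fast rise asks for more ammonia;
    the mirror rules ask for less. OR is taken as max (a common choice).
    """
    def pos(x, scale):                    # degree of "positive", saturating
        return min(max(x / scale, 0.0), 1.0)

    more = max(pos(error, 20.0), pos(d_error, 5.0))
    less = max(pos(-error, 20.0), pos(-d_error, 5.0))
    return 0.2 * (more - less)            # bounded in [-0.2, 0.2]
```

Because the rate-of-change input already fires the "more" rule while NOx is still rising, a controller of this shape responds to disturbance peaks faster than one driven by the error alone.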
Abstract:
The fuel sets the framework for boiler design. The choice of a circulating fluidized bed boiler concept is closely tied to the design values and the properties of the fuel. The customer's requirements for the boiler form the starting point for boiler design. Performance, cost, and reliability are customer-driven factors whose weighting influences the choice of the boiler concept. High temperatures in the superheater region make the superheater arrangement a difficult and decisive part of the concept selection. Because convective superheaters are exposed to hot flue gases, they are prone to fouling and corrosion. These fuel-related problems can be prevented by a suitable choice of design values and superheater structure. In addition, the external superheaters used in circulating fluidized bed boilers are suitable for higher temperatures than convective superheaters, even with low-quality fuels. The expert system built in this work selects a preliminary boiler concept for dimensioning on the basis of the limited initial data given by the user.
Abstract:
Image quality is one of the most studied and applied topics. This work examines colour and spectral image quality. An overview is given of existing quality-assessment methods for compressed and individual images, with emphasis on applying these methods to spectral images. A spectral colour-appearance model for the quality assessment of colour images is introduced. The model is applied to colour images reproduced from spectral images. It is based both on a statistical spectral image model, which links the parameters of spectral images and photographs, and on the general appearance of the image. The connection between the statistical spectral parameters and the physical parameters of colour images has been verified by computer-based image modelling. Based on the properties of the model, an experimental method for the quality assessment of colour images has been developed: an expert-based questionnaire method and a fuzzy inference system for colour image quality assessment. The study shows that the spectrum-colour connection and the fuzzy inference system are well suited to the quality assessment of colour images.
Abstract:
This article presents the analysis of personal and social competences through the study of creative tension in engineering students, using a computer application called Cycloid. The main objective was to compare the students' creative tension by assigning them the task of being the project leader of a given project: their own university major. The process consisted of evaluating a group of students through special surveys, using fuzzy-logic analysis, to determine the current state of their competences. From this self-knowledge, provided by the survey, students can identify their strong and weak characteristics regarding their study habits. Results showed that tolerance to stress and performance in language courses are the weakest points. This application is useful for designing study strategies that students themselves can apply to better face their courses.
Abstract:
This thesis studies the properties and usability of operators called t-norms, t-conorms, and uninorms, as well as many-valued implications and equivalences. Weights and a generalized mean are embedded into these operators for aggregation, and they are used for comparison tasks; for this reason they are referred to as comparison measures. The thesis illustrates how these operators can be weighted with differential evolution and aggregated with a generalized mean, and what kinds of comparison measures can be achieved by this procedure. New operators suitable for comparison measures are suggested. These operators are combination measures based on the use of t-norms and t-conorms, the generalized 3_-uninorm, and pseudo-equivalence measures based on S-type implications. The empirical part of this thesis demonstrates how these new comparison measures work in the field of classification, for example in the classification of medical data. The second application area, from the field of sports medicine, is an expert system for defining an athlete's aerobic and anaerobic thresholds. The core of this thesis offers definitions for comparison measures and shows that there is no actual difference between the results achieved in comparison tasks with comparison measures based on distance and with those based on many-valued logical structures. The approach in this thesis has been highly practical, and all usage of the measures has been validated mainly by practical testing. In general, many different types of operators suitable for comparison tasks have been presented in the fuzzy logic literature, but there has been little or no experimental work with these operators.
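A small sketch of the ingredients named above, under our own illustrative choices (the product t-norm, its dual t-conorm, a Łukasiewicz-style equivalence 1 - |u - v|, and the power mean as aggregator; the thesis additionally tunes weights with differential evolution, which is omitted here):

```python
def t_norm(a, b):
    """Product t-norm, a fuzzy AND on [0, 1]."""
    return a * b

def t_conorm(a, b):
    """Probabilistic sum, the dual t-conorm (fuzzy OR)."""
    return a + b - a * b

def generalized_mean(xs, p):
    """Power mean: p = 1 is the arithmetic mean; large p tends to max,
    large negative p to min, so p tunes the strictness of aggregation."""
    n = len(xs)
    return (sum(x ** p for x in xs) / n) ** (1.0 / p)

def compare(u, v, p=1.0):
    """Comparison measure: component-wise equivalence 1 - |u_i - v_i|
    on [0, 1] vectors, aggregated with the generalized mean."""
    return generalized_mean([1.0 - abs(a - b) for a, b in zip(u, v)], p)
```

A comparison measure of this shape returns 1 for identical vectors and falls toward 0 as they diverge, which is exactly the behaviour of distance-based similarity it is compared against in the thesis.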
Abstract:
This work describes a method by which the parameters of a cutting process can be adjusted, on the basis of signals measured from a turning process, so that any problem situations occurring in the process are corrected. The work was done as part of the Feedchip research project and builds on earlier work in the project concerning the required corrective actions, the instrumentation of the sensors measuring the signals, and the preliminary identification of the characteristic features of problem situations from the signals. This work concentrates on presenting the functions with which the earlier results can be combined into a single whole. The operation of the system requires high-level coordination of its parts. In addition, an inference system is defined that, based on the degrees to which problem situations are identified from the measured values, can determine the actions needed to eliminate them. Alongside this Bachelor's thesis, software is implemented for controlling a prototype device built in connection with the turning system of the production engineering laboratory of Lappeenranta University of Technology.
Abstract:
New economic and enterprise needs have increased the interest in, and utility of, grouping methods based on the theory of uncertainty. A fuzzy grouping (clustering) process is a key phase of knowledge acquisition and complexity reduction for different groups of objects. Here, we consider some elements of the theory of affinities and uncertain pretopology that form a significant support tool for a fuzzy clustering process. A Galois lattice is introduced in order to provide a clearer view of the results. We carried out a homogeneous grouping of the economic regions of the Russian Federation and Ukraine. The results give a broad panorama of the regional economic situation of the two countries, as well as key guidelines for decision making. The mathematical method is very sensitive to any changes in the regional economy. We thus provide an alternative method for grouping under uncertainty.
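One concrete way to realize such a grouping, sketched here under our own simplifications (the affinity/pretopology and Galois-lattice machinery is reduced to an α-cut of a fuzzy similarity relation followed by connected components; the labels are placeholders, not the actual regions):

```python
def alpha_cut_groups(relation, labels, alpha):
    """Group items whose fuzzy similarity reaches level alpha.

    relation -- symmetric matrix of similarities in [0, 1]
    Returns the connected components of the crisp alpha-cut graph,
    i.e. a homogeneous grouping at confidence level alpha.
    """
    n = len(labels)
    parent = list(range(n))                 # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if relation[i][j] >= alpha:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(labels[i])
    return sorted(groups.values())
```

Raising α splits the grouping into finer, more homogeneous classes; the nested groupings obtained across a chain of α levels are the kind of structure a Galois lattice organizes.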
Abstract:
Recommender methods are intended to help users find items that interest them and to avoid items they would not like. Recommender methods usually give their recommendations as crisp numbers. In this work, a recommendation method is developed that gives recommendations as fuzzy membership degrees of the rating grades. The recommendations produced by the method can also be explained to the user. The method belongs mainly to collaborative filtering, in which recommendations are made on the basis of the ratings given by users, but information about film genres is also exploited to improve recommendation accuracy. A method is also presented for ranking the fuzzy recommendations in order of recommendability. The ratings that users give to films can be understood as fuzzy data: a user may describe a rating with an expression such as "about 4". For this reason, it is logical to present the recommendations as fuzzy numbers as well. The user can then be given information about the precision of the recommendation and about possible conflicts, and in the case of uncertain recommendations the user can give more weight to other sources of information. According to the experiments, the developed method gives in some cases clearly better recommendations than the reference methods, while in other cases the recommendations are clearly weaker.
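As an illustrative sketch of such fuzzy recommendations (the method in the thesis is a full collaborative-filtering scheme with genre information and an explanation facility, none of which is reproduced here), similarity-weighted neighbour ratings can be spread over the grade scale with triangular memberships:

```python
def fuzzy_recommendation(neighbour_ratings, similarities, grades=(1, 2, 3, 4, 5)):
    """Return a membership degree for each grade instead of one crisp score.

    A neighbour's rating r supports grade g with triangular membership
    max(0, 1 - |r - g|), weighted by that neighbour's similarity, so the
    user also sees how spread out (uncertain) the recommendation is.
    """
    total = sum(similarities)
    return {
        g: sum(s * max(0.0, 1.0 - abs(r - g))
               for r, s in zip(neighbour_ratings, similarities)) / total
        for g in grades
    }
```

A sharply peaked membership profile signals a confident recommendation; a flat or bimodal one signals the conflicts and uncertainty the abstract says should be reported to the user.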
Abstract:
Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high-performance designs. Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges of highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes make it possible to design a high-performance system in a limited chip area. The major advantages of 3D NoCs are considerable reductions in average latency and power consumption. Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support. We address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption. Thus, we propose three different approaches for alleviating such congestion in the network. The first approach is based on measuring the congestion information in different regions of the network, distributing the information over the network, and utilizing it when making routing decisions. The second approach employs a learning method to dynamically find the less congested routes according to the underlying traffic. The third approach is based on a fuzzy-logic technique to make better routing decisions when traffic information for different routes is available. Faults affect performance significantly, as packets must then take longer paths in order to be routed around the faults, which in turn increases congestion around the faulty regions.
We propose four methods to tolerate faults at the link and switch level, using only the shortest paths as long as such a path exists. The unique characteristic of these methods is that they tolerate faults while maintaining the performance of NoCs. To the best of our knowledge, these algorithms are the first approaches to bypass faults before reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication result in a significant performance loss for unicast traffic, because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be routed efficiently within the network. While providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, in order to reduce the overall path length of multicast packets, we present several partitioning methods along with their analytical models for latency measurement. This approach is discussed in the context of 3D mesh networks.
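A toy version of a fuzzy route-selection rule of the kind the congestion-aware approach above relies on (invented here for illustration; the abstract does not publish the actual rule base): each candidate output port gets an "uncongested" score combining free buffer space and link utilisation, with the product as fuzzy AND.

```python
def route_score(free_buffer, utilisation):
    """Fuzzy degree to which a port is 'uncongested': the port's buffer
    is free AND its link is idle, with product as the fuzzy AND."""
    return free_buffer * (1.0 - utilisation)

def pick_port(candidates):
    """candidates: {port: (normalised free buffer, link utilisation)}.
    Return the admissible port with the highest uncongested score."""
    return max(candidates, key=lambda p: route_score(*candidates[p]))
```

Combining the two traffic indicators into one graded score is what lets a fuzzy router trade them off smoothly, instead of switching routes only when a single metric crosses a hard threshold.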