59 results for efficient algorithm

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

60.00%

Publisher:

Abstract:

Preference relations, and their modeling, have played a crucial role in both the social sciences and applied mathematics. A special category of preference relations is represented by cardinal preference relations, which are simply relations that can also take into account the degree of preference. Preference relations play a pivotal role in most multi-criteria decision making methods and in operational research. This thesis aims at presenting some recent advances in their methodology. There are a number of open issues in this field, and the contributions presented in this thesis can be grouped accordingly. The first issue regards the estimation of a weight vector from a preference relation. A new and efficient algorithm for estimating the priority vector of a reciprocal relation, i.e. a special type of preference relation, is presented. The same section contains a proof that twenty methods already proposed in the literature lead to unsatisfactory results, as they employ a conflicting constraint in their optimization model. The second area of interest concerns consistency evaluation, and it is possibly the kernel of the thesis. The thesis contains proofs that some indices are equivalent and that, therefore, some seemingly different formulae end up leading to the very same result. Moreover, some numerical simulations are presented. The section ends with some considerations on a new method for fairly evaluating consistency. The third matter regards incomplete relations and how to estimate missing comparisons. This section reports a numerical study of the methods already proposed in the literature and analyzes their behavior in different situations. The fourth and last topic proposes a way to deal with group decision making by connecting preference relations with social network analysis.
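
To make the priority-vector idea concrete, here is a minimal sketch using the classical row geometric mean method over a pairwise comparison matrix. This is a well-known textbook estimator, not the new algorithm the thesis proposes, and the matrix values are illustrative.

```python
import math

def priority_vector(M):
    """Estimate a priority (weight) vector from a reciprocal pairwise
    comparison matrix M: take the geometric mean of each row, then
    normalize so the weights sum to one."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# A perfectly consistent 3x3 matrix (entries are weight ratios w_i / w_j):
w = priority_vector([[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]])
```

For a consistent matrix the geometric mean recovers the exact weights; for inconsistent matrices different estimators (the topic of the thesis) can disagree.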

Relevance:

30.00%

Publisher:

Abstract:

The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways; an algorithm most efficient with one representation may be less efficient with others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its applications. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the centers, widths, and weights of the basis functions as possible variables, both with the control parameters kept fixed and with them adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed, and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and the best setting was found to be problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions using fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
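
The algorithm under study can be sketched as a minimal DE/rand/1/bin loop. This is the generic textbook variant, not the thesis's fuzzy-adaptive version: the control parameters F and CR, which the fuzzy controller would adjust during the run, are simply held fixed here.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=1):
    """Minimize f over box `bounds` with basic DE/rand/1/bin.
    F (differential weight) and CR (crossover rate) are the control
    parameters whose setting the thesis shows to be problem dependent."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct individuals, none equal to i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)          # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)     # clamp to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

On a smooth unimodal test function such as the 2-D sphere, this converges quickly; on harder landscapes the fixed F and CR become the bottleneck, which motivates the adaptive scheme.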

Relevance:

30.00%

Publisher:

Abstract:

Multiprocessing is a promising solution to meet the requirements of near-future applications. To get the full benefit of parallel processing, a many-core system needs an efficient on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput, reduced power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Network-on-Chip approaches may be a solution to the excessive power consumption of today's and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the negligible inter-layer distance of 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented that enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform is also developed on top of this architecture in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented.
Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots, with performance similar to the latest NoC architectures. The thesis concludes that carefully co-designed elements at different network levels enable considerable power savings for many-core systems.
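
As a rough illustration of congestion-aware adaptive routing on a 2D mesh (not the thesis's actual algorithm), a minimal policy selects, among the minimal directions toward the destination, the neighbour whose input buffer is least occupied. The `occupancy` map is a hypothetical stand-in for the data that the monitoring platform would provide.

```python
def route_adaptive(cur, dst, occupancy):
    """Pick the next hop on a 2D mesh: among the productive (minimal)
    directions toward dst, choose the neighbour with the lowest
    reported buffer occupancy. Missing entries default to 0."""
    x, y = cur
    dx, dy = dst[0] - x, dst[1] - y
    candidates = []
    if dx:
        candidates.append((x + (1 if dx > 0 else -1), y))
    if dy:
        candidates.append((x, y + (1 if dy > 0 else -1)))
    if not candidates:
        return cur  # already at the destination
    return min(candidates, key=lambda n: occupancy.get(n, 0))
```

Restricting choices to minimal directions keeps routes shortest-path; a real NoC router must additionally guarantee deadlock freedom, which this sketch ignores.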

Relevance:

30.00%

Publisher:

Abstract:

This work presents a synopsis of strategies used in power management to achieve the most economical power and energy consumption in multicore systems, FPGAs, and NoC platforms. A practical approach was taken in an effort to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed for this thesis project. The system comprises an arithmetic and logic unit, up and down counters, an adder, a state machine, and a multiplexer. The aims of the project were, first, to develop the system used for this power management work; second, to perform an area and power synopsis of the system on several scalable technology platforms (UMC 90 nm at 1.2 V, UMC 90 nm at 1.32 V, and UMC 0.18 µm at 1.80 V) in order to examine the differences in area and power consumption across the platforms; and third, to explore strategies for reducing the system's power consumption and to propose an adaptive power management algorithm for the system. The strategies introduced in this work comprise Dynamic Voltage and Frequency Scaling (DVFS) and task parallelism. After development, the system was run on an FPGA board, essentially a NoC platform, and on the technology platforms listed above. Synthesis was accomplished successfully, the simulated results show that the system meets all functional requirements, and the power consumption and area utilization were recorded and analyzed in chapter 7 of this work.
This work also extensively reviews strategies for managing power consumption, drawing on quantitative research by many researchers and companies. It is a mixture of literature analysis and experimental laboratory work, and it condenses and presents the basic concepts of power management strategy from quality technical papers.
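
The leverage of DVFS comes from the first-order dynamic power model P = a · C · V² · f: because frequency and feasible supply voltage scale together, lowering both reduces dynamic power roughly cubically. A small sketch with made-up, illustrative component values (not figures from the thesis):

```python
def dynamic_power(c_eff, v_dd, freq, activity=1.0):
    """Classic first-order dynamic switching power model:
    P = activity * C_effective * V_dd^2 * f."""
    return activity * c_eff * v_dd ** 2 * freq

# Halving both supply voltage and clock frequency cuts dynamic power 8x:
full = dynamic_power(1e-9, 1.2, 100e6)     # 1 nF effective cap, 1.2 V, 100 MHz
scaled = dynamic_power(1e-9, 0.6, 50e6)    # 0.6 V, 50 MHz
```

The model ignores static leakage, which grows in importance at smaller nodes; real savings therefore depend on the technology platform, as the area and power synopsis in the work examines.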

Relevance:

30.00%

Publisher:

Abstract:

The increasing performance of computers has made it possible to solve algorithmically problems for which manual, and possibly inaccurate, methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem, the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure at different areas. In the second problem, the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites such that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted for serial operation or a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.
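
A brute-force baseline for the fetch length problem casts a ray from the study point in the given direction and intersects it with every shoreline segment. The sketch below only illustrates the underlying geometry; with millions of vertices and study points it is exactly the kind of computation the thesis's efficient algorithms are designed to avoid.

```python
import math

def fetch_length(p, angle, segments):
    """Distance from study point p along direction `angle` (radians)
    to the nearest shoreline segment, or inf if the ray hits nothing.
    Each segment is a pair of (x, y) endpoints."""
    px, py = p
    dx, dy = math.cos(angle), math.sin(angle)
    best = math.inf
    for (ax, ay), (bx, by) in segments:
        ex, ey = bx - ax, by - ay
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:
            continue  # ray parallel to this segment
        t = ((ax - px) * ey - (ay - py) * ex) / denom   # distance along the ray
        u = ((ax - px) * dy - (ay - py) * dx) / denom   # position along the segment
        if t >= 0 and 0 <= u <= 1:
            best = min(best, t)
    return best
```

This O(segments) test per (point, direction) pair is the cost that spatial indexing, approximation, and GPU parallelism attack in the actual work.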

Relevance:

20.00%

Publisher:

Abstract:

Summary: Blue sticky traps are more effective than yellow sticky traps for monitoring the European tarnished plant bug

Relevance:

20.00%

Publisher:

Abstract:

Silencing tartrate-resistant acid phosphatase by RNAi: an unexpected effect in cells of the monocyte-macrophage lineage. RNA interference (RNAi), or RNA silencing, was first discovered in plants, and in the 2000s the RNAi method was adopted in mammalian cells as well. RNAi is a mechanism in which short double-stranded RNA molecules, siRNAs, bind to a protein complex and pair complementarily with the protein-coding messenger RNA, catalyzing its degradation; the protein encoded by that mRNA is then no longer produced in the cell. In this work, a new siRNA design algorithm, siRNA_profile, was developed to support the RNAi method; it searches the messenger RNA for target regions suitable for gene silencing. With an optimally designed siRNA molecule it may be possible to achieve long-lasting gene silencing and a specific reduction of the target protein level in the cell. Various chemical modifications of the siRNA ribose ring, e.g. the 2'-fluoro modification, increased the stability of the siRNA molecule in blood plasma as well as its efficacy. These are important properties of siRNA molecules when the RNAi method is applied for medical purposes. Tartrate-resistant acid phosphatase (TRACP) is an enzyme found in bone-resorbing cells (osteoclasts), in antigen-presenting dendritic cells, and in the macrophages, phagocytic cells, of various tissues. The biological function of the TRACP enzyme has not been established, but it is assumed that its ability to produce reactive oxygen species plays a role both in bone-resorbing osteoclasts and in antigen-presenting dendritic cells. Macrophages that overexpress TRACP also show increased intracellular production of reactive oxygen species and enhanced bacterial killing capacity.
Contrary to expectations, the specific DNA and siRNA molecules designed to silence the TRACP gene increased TRACP production in a monocyte-macrophage cell culture model. The effect of DNA and RNA molecules on increased TRACP production was also studied in monocyte-macrophage cells isolated from Toll-like receptor 9 (TLR9) knockout mice. The increase in TRACP production proved to be a sequence- and TLR9-independent response to extracellular DNA and RNA molecules. This observation of increased TRACP production suggests that the TRACP enzyme has a role in the cell's immune defense system.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this thesis is to examine the weak-form efficiency of the Russian, Slovak, Czech, Romanian, Bulgarian, Hungarian, and Polish stock markets. The study is quantitative; daily index closing values were collected from the Datastream database, from each exchange's first trading day to the end of August 2006. To strengthen the analysis, the data were examined over the full period and over two sub-periods. Market efficiency was tested with four statistical methods, including an autocorrelation test and the non-parametric runs test. A further aim is to determine whether a day-of-the-week anomaly exists in these markets; its presence was examined using ordinary least squares (OLS) regression. The day-of-the-week anomaly was found in all of the markets listed above except the Czech market. Significant positive or negative autocorrelation was found in all markets, and the Ljung-Box test likewise indicates inefficiency in all markets over the full period. Based on the runs test, the random walk hypothesis is rejected for all markets except Slovakia, at least over the full sample and the first sub-period. Moreover, the data are not normally distributed for any index or period. These findings indicate that the markets in question are not weak-form efficient.
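
The runs test used in the analysis can be sketched as follows, assuming the standard Wald-Wolfowitz statistic on the signs of daily returns; this is a generic illustration, not the thesis's exact implementation.

```python
import math

def runs_test_z(returns):
    """Wald-Wolfowitz runs test on the signs of a return series.
    Under the random-walk null the statistic is approximately N(0, 1);
    a large |z| suggests the sequence of signs is not random."""
    signs = [r > 0 for r in returns if r != 0]  # drop zero returns
    n1 = sum(signs)            # number of positive returns
    n2 = len(signs) - n1       # number of negative returns
    if n1 == 0 or n2 == 0:
        return 0.0             # degenerate series: no runs to test
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n = n1 + n2
    mean = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n * n * (n - 1))
    return (runs - mean) / math.sqrt(var)
```

Too many runs (z far above 0) indicates negative serial dependence, too few (z far below 0) positive dependence; either rejects the random walk.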

Relevance:

20.00%

Publisher:

Abstract:

In this paper, a new two-dimensional shear deformable beam element based on the absolute nodal coordinate formulation is proposed. The nonlinear elastic forces of the beam element are obtained using a continuum mechanics approach without employing a local element coordinate system. In this study, linear polynomials are used to interpolate both the transverse and longitudinal components of the displacement. This is different from other absolute nodal-coordinate-based beam elements where cubic polynomials are used in the longitudinal direction. The accompanying defects of the phenomenon known as shear locking are avoided through the adoption of selective integration within the numerical integration method. The proposed element is verified using several numerical examples, and the results are compared to analytical solutions and the results for an existing shear deformable beam element. It is shown that by using the proposed element, accurate linear and nonlinear static deformations, as well as realistic dynamic behavior, can be achieved with a smaller computational effort than by using existing shear deformable two-dimensional beam elements.

Relevance:

20.00%

Publisher:

Abstract:

The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged and various parallel and distributed environments have been designed and implemented. Each of the environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of difference aspects of parallel computing. For the purpose of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify theirsuitability for different parallel applications. Due to the parallel and distributed nature of the environments, networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application- specific information is data about the workload extractedfrom an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the workunits are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. 
Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology to write parallel applications that efficiently utilize the available resources and minimize the overhead. The MPIT allows for communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread. Thus, the scheduling does not affect the execution of the parallel application. Performance results achieved from the MPIT show that considerable improvements over conventional MPI applications are achieved.
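
The overlap idea behind MPIT can be illustrated, in schematic form only, with a dedicated consumer thread that drains a queue of finished results while the main thread keeps computing. MPIT itself is an MPI-plus-threads paradigm; here `compute` and `send` are hypothetical placeholders for the real work and the MPI call.

```python
import threading
import queue

def worker_with_comm_thread(chunks, compute, send):
    """Sketch of communication/computation overlap: the main thread
    computes each chunk and hands the result to a dedicated
    communication thread via a FIFO queue, so sending the previous
    result overlaps computing the next one."""
    outbox = queue.Queue()

    def comm_thread():
        while True:
            item = outbox.get()
            if item is None:      # sentinel: no more results to send
                break
            send(item)

    t = threading.Thread(target=comm_thread)
    t.start()
    for chunk in chunks:
        outbox.put(compute(chunk))  # hand off, continue computing
    outbox.put(None)
    t.join()                        # wait for all sends to finish
```

Because a single consumer drains a FIFO queue, results are sent in the order they were produced, mirroring how a communication thread can also run scheduling work without stalling the computation.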

Relevance:

20.00%

Publisher:

Abstract:

This thesis studies gray-level distance transforms, particularly the Distance Transform on Curved Space (DTOCS). The transform is produced by calculating distances on a gray-level surface. The DTOCS is improved by defining more accurate local distances and by developing a faster transformation algorithm. The Optimal DTOCS enhances the locally Euclidean Weighted DTOCS (WDTOCS) with local distance coefficients that minimize the maximum error from the Euclidean distance in the image plane and produce more accurate global distance values. The convergence properties of the traditional mask operation, or sequential local transformation, and of the ordered propagation approach are analyzed and compared to the new, efficient priority pixel queue algorithm. The Route DTOCS algorithm developed in this work can be used to find and visualize the shortest routes between two points, or two point sets, along a surface of varying height. In a digital image there can be several paths sharing the same minimal length, and the Route DTOCS visualizes them all; a single optimal path can be extracted from the route set using a simple backtracking algorithm. A new extension of the priority pixel queue algorithm produces the nearest neighbor transform, or Voronoi (Dirichlet) tessellation, simultaneously with the distance map. The transformation divides the image into regions so that each pixel belongs to the region surrounding the reference point that is nearest according to the distance definition used. Applications and application ideas for the DTOCS and its extensions are presented, including obstacle avoidance, image compression, and surface roughness evaluation.
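
A minimal priority-pixel-queue transform in the spirit of the DTOCS might look as follows. The local distance |g(p) - g(q)| + 1 between 4-neighbours corresponds to the basic DTOCS definition; the tuned coefficients of the Optimal DTOCS are not reproduced here.

```python
import heapq

def gray_distance_transform(gray, seeds):
    """Dijkstra-style distance transform on a gray-level image, driven
    by a priority pixel queue. `gray` is a 2D list of gray values,
    `seeds` a list of (row, col) reference pixels with distance 0."""
    h, w = len(gray), len(gray[0])
    dist = [[float('inf')] * w for _ in range(h)]
    pq = []
    for (y, x) in seeds:
        dist[y][x] = 0
        heapq.heappush(pq, (0, y, x))
    while pq:
        d, y, x = heapq.heappop(pq)
        if d > dist[y][x]:
            continue  # stale queue entry, already improved
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                # local distance: height difference plus unit step
                nd = d + abs(gray[y][x] - gray[ny][nx]) + 1
                if nd < dist[ny][nx]:
                    dist[ny][nx] = nd
                    heapq.heappush(pq, (nd, ny, nx))
    return dist
```

Recording, for each pixel, which seed its final distance came from would yield the nearest neighbor (Voronoi) tessellation alongside the distance map, as in the extension described above.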

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this master's thesis was to identify cost-effective ways to reduce extractives in birch kraft pulp. Extractives can cause problems by forming deposits on process equipment. The deposits cause blockages and measurement disturbances, and when they detach they also degrade pulp quality. If they end up in the final product, they can additionally cause odor and taste defects, which matter especially in the manufacture of food-grade boards. This work was carried out at Stora Enso's pulp mill, Enocell Oy, in Uimaharju. The theory part discusses the composition of extractives and the problems they cause in pulp and paper mills, and compiles physical and chemical means of reducing birch extractives from earlier mill trials. The areas examined were wood handling, cooking, washing, and bleaching. In the experimental part, preliminary trials were performed at laboratory and mill scale to gain practical knowledge for the final mill-scale trial. The laboratory trials examined, among other things, the effect of cooking kappa number, additives, and rosin soap on birch extractives; laboratory trials of the acid (A) and peracetic acid (Paa) stages were also carried out. At mill scale, the effects of cooking kappa number, washing temperature, the A stage, and the bleaching peroxide and Paa stages on birch extractives were examined. The extractive-removal efficiency of the different methods was compared both quantitatively and in monetary terms. Measured by removal efficiency, the reference stage was most effective at the end of washing and at the start of bleaching: extractive-removal reductions were about 30 % at the end of washing and 40 % at the start of bleaching. The peroxide stage was most effective when used at the end of bleaching, with about a 40 % reduction. Measured by cost-effectiveness, the A stage combined with the peroxide stage proved the most effective; savings compared with the reference period were about 0.3 €/ADt.
In addition, this combination proved a good way to keep the extractive level below the maximum limit on fiber line 2 while reinforcement pulp was simultaneously being produced on fiber line 1.

Relevance:

20.00%

Publisher:

Abstract:

The phonebook is one of the most used features of a mobile phone, so it must be available as quickly as possible in all situations. This requires efficient data structures and sorting algorithms from the phonebook server. In Nokia mobile phones, the phonebook server uses sorted arrays as its search structure. The goal of this work was to make the sorting of the phonebook server's search arrays as fast as possible. Several sorting algorithms were compared and their running times analyzed in different situations. Insertion sort was found to be the fastest algorithm for sorting nearly ordered arrays, while the analysis showed quicksort to sort randomly ordered arrays fastest. A quicksort-insertion sort hybrid proved to be the best sorting algorithm for the phonebook. With suitable parameters this algorithm is fast for randomly ordered data, it can exploit pre-existing order in the data, and it does not significantly increase memory consumption. Thanks to the new algorithm, sorting of the search arrays is up to several tens of percent faster.
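
The quicksort-insertion sort hybrid described above can be sketched as follows; the cutoff value of 16 is an illustrative choice, not the tuned parameter from the work.

```python
def hybrid_sort(a, key=lambda x: x, cutoff=16):
    """Sort list `a` in place: quicksort (Hoare partition) hands
    partitions at or below `cutoff` elements to insertion sort, which
    is faster on small and nearly ordered ranges."""
    def insertion(lo, hi):
        for i in range(lo + 1, hi + 1):
            item, j = a[i], i - 1
            while j >= lo and key(a[j]) > key(item):
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = item

    def quick(lo, hi):
        while hi - lo > cutoff:
            pivot = key(a[(lo + hi) // 2])
            i, j = lo, hi
            while i <= j:
                while key(a[i]) < pivot:
                    i += 1
                while key(a[j]) > pivot:
                    j -= 1
                if i <= j:
                    a[i], a[j] = a[j], a[i]
                    i, j = i + 1, j - 1
            quick(lo, j)   # recurse into the left partition
            lo = i         # loop on the right partition (tail call)
        insertion(lo, hi)  # small range: insertion sort finishes it

    if a:
        quick(0, len(a) - 1)
    return a
```

Because insertion sort is linear on nearly sorted input, the hybrid automatically exploits pre-existing order, matching the behaviour the work observed for phonebook index tables.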

Relevance:

20.00%

Publisher:

Abstract:

There are many different views of corporate identity, and no generally accepted definition exists; several of these views are discussed in this thesis. Although corporate identity is not easy to measure, several methods have been developed for doing so. Communicating an identity requires strategic decisions before any communication can take place, and the integration of communications plays a key role in communicating identity. A well-managed and well-communicated corporate identity can bring an organization several benefits. These benefits do not, however, appear quickly, because communicating a corporate identity is a long-term process.