26 results for simple algorithms
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Summary: A simple culture method for time-lapse recording of bovine embryos
Abstract:
The purpose of this Master's thesis is to investigate what is required for the automatic detection of news similarity. The news items are text-based news retrieved from different news sources. The aim is to identify, on the one hand, the news items that report the same story and, on the other hand, the news items that are not quite the same story but are nevertheless related to each other. This thesis examines which algorithms perform this detection most effectively in both Finnish and English text. Ready-made algorithms are compared. The goal is to select a combination of algorithms such that 90% of the compared news items are identified correctly. The study uses two different clustering algorithms and three different stemming algorithms. These algorithms are compared both with respect to detection accuracy and with respect to runtime performance. Porter's algorithm proved to be the best stemming algorithm for both Finnish- and English-language news. Of the clustering algorithms, the simpler one, based on various summary statistics, proved more effective.
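As a toy illustration of the stemming-based comparison described above, the following sketch stems both articles with NLTK's PorterStemmer (the best-performing stemmer in the thesis) and scores their overlap. The Jaccard measure, the thresholds, and the example texts are illustrative assumptions, not the thesis's actual similarity metric.

```python
# A minimal sketch of stemming-based news similarity, assuming a Jaccard
# measure over stemmed tokens; the thesis does not specify its exact
# similarity function, so measure and thresholds here are illustrative.
import re
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def stem_tokens(text: str) -> set[str]:
    """Lowercase, tokenize on word characters, and stem each token."""
    return {stemmer.stem(tok) for tok in re.findall(r"\w+", text.lower())}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two articles' stemmed vocabularies."""
    sa, sb = stem_tokens(a), stem_tokens(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical thresholds: high overlap => same story, moderate => related.
score = similarity("Stocks rallied sharply today", "Stock markets rally sharply")
print("same story" if score > 0.6 else "related" if score > 0.3 else "unrelated")
```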
Abstract:
A well-known drawback of IP networks is that they cannot guarantee a specific quality of service (QoS) for the transmitted packets. The following two techniques are considered the most promising for providing quality of service: Differentiated Services (DiffServ) and QoS routing. DiffServ is a fairly new quality-of-service mechanism for the Internet defined by the IETF. DiffServ offers scalable service differentiation without per-hop signalling and per-flow state management. DiffServ is a good example of decentralized network design. The goal of this QoS mechanism is to simplify the design of communication systems: a network node can be built from a small set of well-defined building blocks. QoS routing is a routing mechanism in which traffic routes are determined on the basis of the network's available resources. This thesis examines a new QoS routing approach called Simple Multipath Routing. The purpose of this work is to design a QoS controller for DiffServ. The QoS controller proposed in this thesis is an attempt to combine DiffServ and QoS routing mechanisms. The experimental part of the work focuses in particular on QoS routing algorithms.
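The abstract does not spell out the Simple Multipath Routing algorithm, but the general idea of QoS routing, choosing routes from the network's available resources, can be sketched as ranking candidate paths by bottleneck bandwidth and splitting traffic over the best few. The topology, bandwidths, and two-path split below are illustrative assumptions only.

```python
# A sketch of resource-based path selection in the spirit of QoS routing;
# this is not the thesis's Simple Multipath Routing, only the general idea.
import networkx as nx

G = nx.Graph()
# Edges annotated with available bandwidth (Mbit/s); topology is made up.
G.add_weighted_edges_from(
    [("A", "B", 100), ("B", "D", 40), ("A", "C", 60), ("C", "D", 60), ("B", "C", 80)],
    weight="bw",
)

def bottleneck(path):
    """Available bandwidth of a path = its narrowest link."""
    return min(G[u][v]["bw"] for u, v in zip(path, path[1:]))

paths = sorted(nx.all_simple_paths(G, "A", "D"), key=bottleneck, reverse=True)
for p in paths[:2]:  # use the two widest paths for multipath forwarding
    print(p, bottleneck(p), "Mbit/s")
```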
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray level images are presented. As a new application for distance transforms, they are applied to gray level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-value distance transform (EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way. The DTOCS gives a weighted version of the chessboard distance map, in which the weights are not constant but the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group of methods compares the DTOCS distance to the binary image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. Also a new morphological image decompression scheme, the 8 kernels' method, is presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of DCT images with a 4 x 4
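A minimal sketch of the two-pass idea follows, assuming the commonly published DTOCS local step of one plus the gray-level difference between neighbors. The actual kernels and the EDTOCS variant differ, so this illustrates the pass structure rather than the thesis's exact algorithm.

```python
# A sketch of the two-pass DTOCS iteration over an 8-neighborhood, with
# the assumed local step |gray difference| + 1 between neighbors.
import numpy as np

def dtocs(gray: np.ndarray, region: np.ndarray, n_iter: int = 3) -> np.ndarray:
    """gray: 2-D gray level image; region: boolean mask where True marks
    pixels whose distance is computed (False pixels are sources at 0)."""
    h, w = gray.shape
    dist = np.where(region, np.inf, 0.0)  # only two buffers: gray and dist
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # forward-pass neighbors
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # backward-pass neighbors
    for _ in range(n_iter):  # complicated images may need several rounds
        for ys, xs, nbrs in ((range(h), range(w), fwd),
                             (range(h - 1, -1, -1), range(w - 1, -1, -1), bwd)):
            for y in ys:
                for x in xs:
                    for dy, dx in nbrs:
                        ny, nx_ = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx_ < w:
                            step = abs(int(gray[y, x]) - int(gray[ny, nx_])) + 1
                            dist[y, x] = min(dist[y, x], dist[ny, nx_] + step)
    return dist
```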
Abstract:
In this Master's thesis a four-stage 1 MWe steam turbine prototype model was optimized using evolutionary algorithms, and the cost benefits obtained from the optimization were studied. The DE algorithm was used for the optimization. The optimization was made to work, but due to the nature of the calculation application used in the optimization (models based on semi-empirical equations), its accuracy compared with a verification model computed with CFD was somewhat lower than desired. This inaccuracy in the results would hardly have been avoidable, since the problem stemmed from the initial assumptions of the semi-empirical calculation models and from uncertainty about the absolute ranges of validity of the curve fits. For the success of the optimization, however, such algebraic modelling was necessary, because CFD computation, for example, could not possibly have been carried out at every optimization step. Problems nevertheless appeared during the optimization in the sufficiency of computing power and in finding a suitable penalty model that would keep the algorithm in the mathematically feasible region without restricting the progress of the optimization too much. The remaining problems were due to the novelty of the application and to precision problems in handling the validity ranges of the curve fits. Although the accuracy of the optimization results did not quite meet the target, they nevertheless had a beneficial guiding effect on machine design. The optimization performed with the DE algorithm yielded about 2.2% more power from the turbine, which corresponds to a cost benefit of about EUR 15,000 per machine. This is a very significant per-machine cost benefit for the company. All in all, it may be said that evolutionary algorithms were not at their best in the optimization of a prototype product. Evolutionary algorithms hold enormous potential in the optimization of technical devices, but they require a mature application that is either already known extremely well or simple and exactly computable.
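As a rough illustration of combining DE with the kind of graded penalty model discussed above, the sketch below uses SciPy's differential_evolution on a toy stand-in for the turbine stage model. The objective, the validity limit, and the penalty weight are all illustrative assumptions.

```python
# A minimal sketch of DE with a graded penalty term: infeasible designs are
# penalized in proportion to their violation, so the search stays in the
# valid region without hitting a hard wall. Toy objective, not the turbine model.
from scipy.optimize import differential_evolution

def cost(x, weight=1e3):
    power = 10.0 - (x[0] - 2.0) ** 2           # stand-in for computed stage power
    violation = max(0.0, x[0] - 3.0)           # hypothetical fit-validity limit x <= 3
    return -power + weight * violation ** 2    # minimize negative power + penalty

result = differential_evolution(cost, bounds=[(0.0, 5.0)], seed=1)
print(result.x, -result.fun)                   # optimum at x = 2, inside the limit
```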
Abstract:
This thesis concentrates on developing a practical local approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson–Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson–Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion and Thomason's plastic limit load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson–Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson–Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model; hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant, and the initial void volume fraction and/or void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. By using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted using the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems where non-homogeneous materials are involved. Finally, a procedure for possible engineering application of the present methodology is suggested and discussed.
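For reference, the standard form of the Gurson–Tvergaard yield function at the heart of the integration algorithms can be written down directly. The parameter values below are the commonly cited Tvergaard constants, not necessarily those used in the thesis.

```python
# The standard Gurson–Tvergaard yield function; q1, q2, q3 are Tvergaard's
# fitting parameters (commonly q1 = 1.5, q2 = 1.0, q3 = q1**2).
import math

def gurson_tvergaard(q, p, sigma_y, f, q1=1.5, q2=1.0, q3=2.25):
    """q: von Mises equivalent stress, p: hydrostatic (mean) stress,
    sigma_y: matrix flow stress, f: void volume fraction.
    Returns Phi; yielding occurs when Phi = 0."""
    return ((q / sigma_y) ** 2
            + 2.0 * q1 * f * math.cosh(1.5 * q2 * p / sigma_y)
            - (1.0 + q3 * f ** 2))
```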
Abstract:
Identification of the order of an Autoregressive Moving Average (ARMA) model by the usual graphical method is subjective. Hence, there is a need to develop a technique for identifying the order without the graphical investigation of series autocorrelations. To avoid subjectivity, this thesis focuses on determining the order of the Autoregressive Moving Average model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects the model from a set of candidate models suggested by goodness of fit, standard deviation errors and the frequency of accepted data. Alongside a deep analysis of the classical Box-Jenkins modeling methodology, the integration with MCMC algorithms has been studied through parameter estimation and model fitting of ARMA models. This helps to verify how well the MCMC algorithms can treat ARMA models, by comparing the results with the graphical method. The MCMC approach was found to produce better results than the classical time series approach.
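A toy sketch of jumping between ARMA orders follows. It replaces the full reversible-jump acceptance ratio with a BIC-based approximation (using statsmodels for the fits), purely to illustrate a chain that moves across model orders; the move scheme and acceptance rule are simplifications, not the thesis's RJMCMC.

```python
# A toy Metropolis-style walk over ARMA(p, q) orders, with BIC differences
# standing in for the reversible-jump acceptance ratio.
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
y = arma_generate_sample(ar=[1, -0.6], ma=[1, 0.4], nsample=300)  # true ARMA(1,1)

def bic(p, q):
    return ARIMA(y, order=(p, 0, q)).fit().bic

state, visits = (0, 0), {}
b = bic(*state)
for _ in range(30):
    prop = tuple(int(max(0, min(3, s + rng.integers(-1, 2)))) for s in state)
    b_prop = bic(*prop)
    if rng.random() < min(1.0, np.exp(-(b_prop - b) / 2)):  # BIC ~ -2 log evidence
        state, b = prop, b_prop
    visits[state] = visits.get(state, 0) + 1

print(max(visits, key=visits.get))  # most-visited order, expected near (1, 1)
```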
Abstract:
Diabetes is a rapidly increasing worldwide problem which is characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), which is one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of the current DR screening capabilities, for which the digital imaging of the eye fundus (retinal imaging) and automatic or semi-automatic image analysis algorithms provide a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For the training and ground truth estimation, the algorithm combines manual annotations from several experts, for which the best practices were experimentally selected. By assessing the algorithm's performance in experiments with colour space selection, illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated. Another contribution of this work is a benchmarking framework for the eye fundus image analysis algorithms needed in the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database that comprises true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis, and it follows medical decision-making practice by providing protocols for image- and pixel-based evaluations. During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases, and the final algorithm are made public on the web to set baseline results for the automatic detection of diabetic retinopathy. Although it deviates from the general context of the thesis, a simple and effective optic disc localisation method is also presented. The optic disc localisation is discussed since normal eye fundus structures are fundamental in the characterisation of DR.
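The one-class idea, modelling only the lesion class and thresholding its estimated density, can be sketched with scikit-learn's GaussianMixture. The synthetic training colours, component count, and threshold quantile below are illustrative assumptions rather than the thesis's tuned setup.

```python
# A minimal sketch of one-class classification with a Gaussian mixture over
# pixel colours; real use would train on expert-annotated fundus pixels.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
lesion_rgb = rng.normal(loc=[150, 40, 40], scale=10, size=(500, 3))  # stand-in pixels

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(lesion_rgb)

# Threshold chosen from a validation set in practice; fixed here for illustration.
threshold = np.quantile(gmm.score_samples(lesion_rgb), 0.05)

test_pixels = np.array([[148, 42, 38], [90, 160, 90]], dtype=float)
is_lesion = gmm.score_samples(test_pixels) >= threshold
print(is_lesion)  # expected [True, False]: only the first matches the lesion model
```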
Abstract:
Programming and mathematics are core areas of computer science (CS) and consequently also important parts of CS education. Introductory instruction in these two topics is, however, not without problems. Studies show that CS students find programming difficult to learn and that teaching mathematical topics to CS novices is challenging. One reason for the latter is the disconnection between mathematics and programming found in many CS curricula, which results in students not seeing the relevance of the subject to their studies. In addition, reports indicate that students' mathematical capability and maturity levels are dropping. The challenges faced when teaching mathematics and programming at CS departments can also be traced back to gaps in students' prior education. In Finland the high school curriculum does not include CS as a subject; instead, the focus is on learning to use the computer and its applications as tools. Similarly, many of the mathematics courses emphasize the application of formulas, while logic, formalisms and proofs, which are important in CS, are avoided. Consequently, high school graduates are not well prepared for studies in CS. Motivated by these challenges, the goal of the present work is to describe new approaches to teaching mathematics and programming aimed at addressing these issues: Structured derivations is a logic-based approach to teaching mathematics in which formalisms and justifications are made explicit. The aim is to help students become better at communicating their reasoning using mathematical language and logical notation, at the same time as they become more confident with formalisms. The Python programming language was originally designed with education in mind and has a simple syntax compared to many other popular languages. The aim of using it in instruction is to address algorithms and their implementation in a way that allows the focus to be put on learning algorithmic thinking and programming instead of on learning a complex syntax. Invariant based programming is a diagrammatic approach to developing programs that are correct by construction. The approach is based on elementary propositional and predicate logic, and makes explicit the underlying mathematical foundations of programming. The aim is also to show how mathematics in general, and logic in particular, can be used to create better programs.
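A small Python example can hint at the flavour of invariant-based programming, here approximated with runtime assertions rather than the diagrammatic, proof-before-code workflow the approach actually uses; the function and its invariant are illustrative.

```python
# An illustration of invariant-based thinking in Python: the loop invariant
# is written out and checked with assertions, making the correctness
# argument explicit in the spirit of the approach described above.
def int_sqrt(n: int) -> int:
    """Largest r with r*r <= n, maintained as an explicit invariant."""
    assert n >= 0
    r = 0
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n                      # invariant: r never overshoots
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)      # postcondition follows from invariant
    return r

print(int_sqrt(17))  # 4
```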
Abstract:
The nearly 200-year-old scientific discipline of organic synthetic chemistry has contributed strongly to the welfare of modern societies. One of the flagships of organic synthetic chemistry is the development and production of new pharmaceuticals, and especially of the active substances they contain. It is therefore important to develop new synthesis methods that can be applied to the preparation of pharmaceutically relevant target structures. In this context the ultimate goal is not merely a successful synthesis of the target molecule; it is increasingly important to develop synthesis routes that meet the criteria of sustainable development. One of the most central tools available to an organic chemist in this context is catalysis, or more specifically the possibility of applying various catalytic reactions in the preparation of complex target structures. The corresponding industrial processes are characterized by high efficiency and minimized waste production, which naturally benefits the chemical industry while considerably reducing negative environmental effects. In this doctoral thesis, new synthesis routes for the production of fine chemicals of pharmaceutical relevance have been developed by combining relatively simple transformations into new reaction sequences. All reaction sequences discussed in this thesis began with a metal-mediated allylation of selected aldehydes or aldimines. The products obtained, containing a carbon-carbon double bond with an adjacent hydroxyl or amino group, were then modified further by applying well-known catalytic reactions. All synthesized molecules presented in this thesis are characterized as fine chemicals with high potential for pharmaceutical applications. In addition, a variety of catalytic reactions were successfully applied in the synthesis of these molecules, which in turn reinforces the importance of catalytic tools in the organic chemist's toolbox.
Abstract:
The objective of this thesis is to develop and study the Differential Evolution algorithm for multi-objective optimization with constraints. Differential Evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. Multi-objective evolutionary algorithms have become popular since they are able to produce a set of compromise solutions during the search process that approximates the Pareto-optimal front. The starting point for this thesis was an idea of how Differential Evolution, with simple changes, could be extended to optimization with multiple constraints and objectives. This approach is implemented, experimentally studied, and further developed in the work. Development and study concentrate on the multi-objective optimization aspect. The main outcomes of the work are versions of a method called Generalized Differential Evolution, which aim to improve the performance of the method in multi-objective optimization. A diversity preservation technique that is effective and efficient compared to previous diversity preservation techniques is developed. The thesis also studies the influence of the control parameters of Differential Evolution in multi-objective optimization, and proposals for initial control parameter value selection are given. Overall, the work contributes to the diversity preservation of solutions in multi-objective optimization.
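The core idea, extending DE's one-to-one selection to multiple objectives, can be sketched as below. The dominance-based replacement rule is a simplification of the Generalized Differential Evolution selection (the published GDE versions also handle ties and constraints), and the test problem and parameter values are illustrative.

```python
# A sketch of a DE step with dominance-based selection in the spirit of
# Generalized Differential Evolution: the parent survives only if it
# dominates the trial vector.
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    """Schaffer's toy bi-objective problem: minimize both components."""
    return np.array([x[0] ** 2, (x[0] - 2.0) ** 2])

def dominates(a, b):
    """True if objective vector a is at least as good everywhere, better somewhere."""
    return bool(np.all(a <= b) and np.any(a < b))

pop = rng.uniform(-5.0, 5.0, size=(20, 1))
for _ in range(100):
    for i in range(len(pop)):
        r1, r2, r3 = pop[rng.choice(len(pop), 3, replace=False)]
        trial = r1 + 0.5 * (r2 - r3)  # DE/rand/1 mutation, F = 0.5; crossover omitted in 1-D
        if not dominates(objectives(pop[i]), objectives(trial)):  # simplified GDE selection
            pop[i] = trial

print(np.sort(pop, axis=0).ravel())  # population should drift toward the Pareto set [0, 2]
```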