940 results for Evolutionary optimization methods
Abstract:
The present study was carried out on two different servo systems. In the first, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second, an electromagnetic linear motor, mechanical vibration suppression and position tracking of a reference model were studied using a neural network and an adaptive backstepping controller, respectively. The following describes the research methods. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These systems are nonlinear in nature and their dynamic equations contain several unknown parameters. System identification is a prerequisite for the analysis of a dynamic system. One of the most promising novel evolutionary algorithms for solving global optimization problems is Differential Evolution (DE). In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables in order to find the best parameters of a servo-hydraulic system with a flexible load. DE provides fast convergence and accurate solutions regardless of the initial parameter values. The control of hydraulic servo systems has been the focus of intense research over the past decades. These systems are nonlinear in nature and generally difficult to control, since changing system parameters while using the same gains will cause overshoot or even loss of system stability. The highly nonlinear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with second-order model-reference positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. To compensate for the lack of damping in the hydraulic system, acceleration feedback was used. For comparison, a P controller with feed-forward acceleration and different gains in extension and retraction was used. The design procedure for the controller and the experimental results are discussed. The results suggest that the fuzzy gain-scheduling controller decreases the position reference tracking error. The second part of the research concerned a Permanent Magnet Linear Synchronous Motor (PMLSM). A recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed using a hybrid recurrent neural network. A differential evolution strategy and a Kalman filter are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a nonlinear simulation model built in Matlab Simulink and then implemented on a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track the flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed in the controller are estimated using the Kalman filter. The proposed controller is implemented and tested on a linear motor test drive, and the responses are presented.
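A minimal sketch of the kind of DE-based parameter identification described above, assuming a hypothetical second-order servo model and synthetic "measured" data; the model structure, parameter bounds and control parameters are illustrative placeholders, not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(params, u, dt=0.01):
    """Hypothetical second-order servo model: x'' = -a*x' - b*x + k*u."""
    a, b, k = params
    x, v, out = 0.0, 0.0, []
    for ui in u:
        acc = -a * v - b * x + k * ui
        v += acc * dt
        x += v * dt
        out.append(x)
    return np.array(out)

def cost(params, u, y_meas):
    """Sum of squared errors between simulated and measured response."""
    return np.sum((simulate(params, u) - y_meas) ** 2)

def differential_evolution(cost_fn, bounds, pop_size=30, F=0.7, CR=0.9, gens=100):
    """Classic DE/rand/1/bin with simple bound clipping."""
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([cost_fn(ind) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True      # ensure at least one gene from the mutant
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost_fn(trial)
            if f_trial <= fit[i]:                # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()

# Hypothetical identification run against synthetic "measured" step-response data.
u = np.ones(300)
y_meas = simulate((2.0, 40.0, 35.0), u)          # stand-in for measurements
best, err = differential_evolution(lambda p: cost(p, u, y_meas),
                                   bounds=[(0.1, 10.0), (1.0, 100.0), (1.0, 100.0)])
print(best, err)
```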
Establishing intercompany relationships: Motives and methods for successful collaborative engagement
Abstract:
This study explores the early phases of intercompany relationship building, which is an important topic for purchasing and business development practitioners as well as for companies' upper management. There is ample evidence that proper engagement with markets increases a company's potential for achieving business success. Taking full advantage of market possibilities, however, requires a holistic view of managing the related decision-making chain. Most of the literature, as well as companies' business processes, lacks this holism. Typically they view the process from the perspective of individual stages and thus lead to discontinuity and sub-optimization. This study contains a comprehensive introduction to and evaluation of the literature related to the various steps of the decision-making process. It is studied from a holistic perspective of determining a company's vertical integration position within its demand/supply network context; translating the vertical integration objectives into feasible strategies and objectives; and operationalizing the decisions made through engagement in collaborative intercompany relationships. The empirical part of the research was conducted in two sections. First, the phenomenon of intercompany engagement was studied using two complementary case studies. Second, a survey was conducted among the purchasing and business development managers of several electronics manufacturing companies to analyze the processes, decision-making criteria and success factors of engagement for collaboration. The aim was to identify the reasons why companies and their management act the way they do. As a combination of theoretical and empirical research, an analysis was produced of what would be an ideal way of engaging with markets. Based on the respective findings, the study concludes by proposing a holistic framework for successful engagement. The evidence presented throughout the study demonstrates clear gaps, discontinuities and limitations both in current research and in practical purchasing decision-making chains. The most significant discontinuity is the identified disconnection between the supplier selection process and its related criteria and the relationship success factors.
Abstract:
In recent years, evolutionary algorithms have proven to be effective methods for solving global optimization problems. Their particular strengths are their general applicability and their ability to find the global solution without getting stuck in local optima of the objective function. The aim of this work is to develop a new mutation operation, based on the normal distribution, for the differential evolution algorithm, one of the newest evolution-based optimization algorithms. The method is expected to further reduce both the risk of premature convergence of the population and the risk of stagnation of the algorithm's states, and it can be shown theoretically to converge. This does not hold for the original differential evolution, since its state transitions have been shown to be able to get stuck with a small probability. The behaviour of the new method is examined experimentally using multi-constraint problems as test problems. The constraint functions are handled with a method developed by Jouni Lampinen, based on the principle of Pareto optimality; at the same time, further experimental evidence on the behaviour of this method is collected. All the test problems used could be solved both with the original differential evolution and with the version using the new mutation operation. The new method, however, proved more reliable in cases where the original algorithm had difficulties. In addition, most of the problems could be solved reliably with a smaller population size than with the original differential evolution. The new method also better allows the use of control parameter values that make the search rotation-invariant. Computationally, the new method is somewhat heavier than the original differential evolution, and it requires one more control parameter. However, values as generally applicable as possible were determined for the new control parameters, with which a large variety of problems can be solved.
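The abstract does not spell out the new operator, so the sketch below only illustrates the general idea of drawing DE's scale factor from a normal distribution instead of keeping it fixed; the operator form and its parameters (mean F and the extra control parameter sigma) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutate_classic(pop, i, F=0.7):
    """Original DE/rand/1 mutation: fixed scale factor F."""
    r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def mutate_gaussian(pop, i, F=0.7, sigma=0.3):
    """Illustrative normal-distribution variant: the scale factor is drawn from
    N(F, sigma^2) for every mutation, which adds one extra control parameter
    (sigma) and extra randomness that helps avoid stagnation."""
    r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    f = rng.normal(F, sigma)
    return pop[r1] + f * (pop[r2] - pop[r3])

pop = rng.random((20, 5))
print(mutate_classic(pop, 0))
print(mutate_gaussian(pop, 0))
```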
Abstract:
Gene turnover rates and the evolution of gene family sizes are important aspects of genome evolution. Here, we use curated sequence data of the major chemosensory gene families from Drosophila (the gustatory receptor, odorant receptor, ionotropic receptor, and odorant-binding protein families) to conduct a comparative analysis among families, exploring different methods to estimate gene birth and death rates, including an ad hoc simulation study. Remarkably, we found that the state-of-the-art methods may produce very different rate estimates, which may lead to disparate conclusions regarding the evolution of chemosensory gene family sizes in Drosophila. Among biological factors, we found that a peculiarity of D. sechellia's gene turnover rates was a major source of bias in global estimates, whereas gene conversion had negligible effects for the families analyzed herein. Turnover rates vary considerably among families, subfamilies, and ortholog groups, although all analyzed families were quite dynamic in terms of gene turnover. Computer simulations showed that the methods that use ortholog group information appear to be the most accurate for the Drosophila chemosensory families. Most importantly, these results reveal the potential of rate heterogeneity among lineages to severely bias some turnover rate estimation methods, and the need to further evaluate the performance of these methods in a more diverse sampling of gene families and phylogenetic contexts. Using branch-specific codon substitution models, we find further evidence of positive selection in recently duplicated genes, which attests to a nonneutral aspect of the gene birth-and-death process.
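As one illustration of the kind of ad hoc simulation mentioned above, the sketch below evolves a gene family's size under a simple linear birth-death process (Gillespie-style); the starting size, rates, and time span are arbitrary placeholders rather than values estimated from the Drosophila families.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_family_size(n0, birth, death, t_total):
    """Gillespie simulation of a linear birth-death process for one gene family:
    each gene duplicates at rate `birth` and is lost at rate `death` per unit time."""
    n, t = n0, 0.0
    while n > 0 and t < t_total:
        rate = n * (birth + death)
        t += rng.exponential(1.0 / rate)
        if t >= t_total:
            break
        n += 1 if rng.random() < birth / (birth + death) else -1
    return n

# Distribution of family sizes after 40 (arbitrary) time units.
sizes = [simulate_family_size(n0=10, birth=0.01, death=0.01, t_total=40.0)
         for _ in range(1000)]
print(np.mean(sizes), np.std(sizes))
```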
Abstract:
The purpose of this research is to determine the practical profit that can be achieved using neural network methods as a prediction instrument. The thesis investigates the ability of neural networks to forecast future events. This capability is tested on the example of price prediction during intraday trading on the stock market. The experiments produce predictions of average 1-, 2-, 5- and 10-minute prices based on one day of data, made by two different types of forecasting systems. These systems are based on recurrent neural networks and backpropagation neural networks. The precision of the predictions is measured by the absolute error and the error of market direction. The economic effectiveness is estimated by a special trading system. In conclusion, the best neural network structures are tested with data from a 31-day interval. The best results for the average percent profit from one transaction (buying + selling) are 0.06668654, 0.188299453, 0.349854787 and 0.453178626, achieved for prediction periods of 1, 2, 5 and 10 minutes respectively. The investigation may be of interest to investors who have access to a fast information channel with minute-by-minute data updates.
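The two accuracy checks mentioned above can be made concrete with a short sketch; the minute-price arrays and the exact definition of the direction error below are illustrative assumptions, since the thesis does not spell out its formulas here.

```python
import numpy as np

def absolute_error(actual, predicted):
    """Mean absolute prediction error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(actual - predicted)))

def direction_error(actual, predicted):
    """Share of steps where the predicted direction of the price change
    (relative to the previous actual price) disagrees with the real one."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    real_move = np.sign(np.diff(actual))
    pred_move = np.sign(predicted[1:] - actual[:-1])
    return float(np.mean(real_move != pred_move))

# Hypothetical minute prices and model predictions for those minutes.
actual = [101.2, 101.5, 101.1, 101.8, 101.6]
predicted = [101.1, 101.6, 101.3, 101.6, 101.7]
print(absolute_error(actual, predicted))
print(direction_error(actual, predicted))
```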
Abstract:
This master’s thesis aims to study and present, based on the literature, how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods that imitate the process of natural evolution. An artificial evolution process evaluates the fitness of each individual, where individuals are candidate solutions. The next population of candidate solutions is formed by exploiting the good properties of the current population through mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were searched for in the literature, classified, and presented, and the necessary basics of evolutionary algorithms were covered. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling related to parallel computing, allocating modules to subsystems, N-version programming, test data generation, and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer Aided Software Engineering tools based on evolutionary algorithms.
Abstract:
This master's thesis presents a method for measuring population diversity in floating-point-coded evolutionary algorithms and examines its behaviour experimentally. Evolutionary algorithms are population-based methods for solving optimization problems. In evolutionary algorithms, managing population diversity is essential so that the search is sufficiently reliable and, on the other hand, sufficiently fast. Measuring diversity is particularly necessary when studying the dynamic behaviour of evolutionary algorithms. The thesis considers measuring diversity in both the search space and the objective function space. So far, no fully satisfactory diversity measures have existed, and the aim of the work is to develop a general-purpose method for measuring the relative and absolute diversity of floating-point-coded evolutionary algorithms in the search space. The behaviour and usefulness of the developed measures are examined experimentally by solving optimization problems with the differential evolution algorithm. The implemented measures are based on computing standard deviations of the population. The standard deviations are scaled with respect to either the initial population or the current population, depending on whether absolute or relative diversity is being computed. In the experimental study, the developed measures were found to work well and to be useful. Stretching the objective function along the coordinate axes does not affect the measures, nor does rotating the objective function in the coordinate system affect their results. The time complexity of the presented method depends linearly on the population size, so the measure is fast even for large populations. Relative diversity gives comparable results regardless of the number of parameters or the population size.
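A minimal sketch, assuming the measures reduce to per-parameter standard deviations scaled by a reference population and averaged over the parameters; which reference (initial or current population) corresponds to the absolute and which to the relative measure is left open in the abstract, so the mapping here is only illustrative.

```python
import numpy as np

def std_per_parameter(population):
    """Per-parameter standard deviation of a floating-point-coded population."""
    return np.std(np.asarray(population, dtype=float), axis=0)

def scaled_diversity(population, reference_population):
    """Diversity of `population` scaled by the spread of a reference population
    (the initial population for one measure, the current one for the other),
    averaged over the parameters."""
    ref = std_per_parameter(reference_population)
    cur = std_per_parameter(population)
    return float(np.mean(cur / np.where(ref > 0, ref, 1.0)))  # guard against zero spread

# Toy usage: a population that has contracted to 20 % of its initial spread.
rng = np.random.default_rng(3)
initial = rng.uniform(-5.0, 5.0, size=(50, 10))
current = 0.2 * initial
print(scaled_diversity(current, initial))  # roughly 0.2
```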
Abstract:
The genus Prunus L. is large and economically important. However, phylogenetic relationships within Prunus at low taxonomic levels, particularly in the subgenus Amygdalus L. s.l., remain poorly investigated. This paper attempts to document the evolutionary history of Amygdalus s.l. and to establish a temporal framework by assembling molecular data from conservative and variable molecular markers. The nuclear s6pdh gene in combination with the plastid trnSG spacer is analyzed with Bayesian and maximum likelihood methods. Since previous phylogenetic analyses with these markers lacked resolution, we additionally analyzed 13 nuclear SSR loci with the δµ2 distance, followed by an unweighted pair group method with arithmetic mean (UPGMA) algorithm. Our phylogenetic analysis with both sequence and SSR loci confirms the split between sections Amygdalus and Persica, comprising almonds and peaches, respectively. This result is in agreement with biogeographic data showing that the two sections are naturally distributed on either side of the Central Asian Massif chain. Using coalescent-based estimations, divergence times between the two sections varied strongly depending on whether sequence data were considered alone or combined with SSR. The sequence-only estimate (5 million years ago) was congruent with the Central Asian Massif orogeny and subsequent climate change. Given the low level of differentiation within the two sections using both marker types, the utility of combining microsatellites and sequence data to address phylogenetic relationships at low taxonomic levels within Amygdalus is discussed. The recent evolutionary histories of almond and peach are discussed in view of the domestication processes that arose in these two phenotypically diverging gene pools: almonds and peaches were domesticated from the Amygdalus s.s. and Persica sections, respectively. Such economically important crops may serve as good models for studying divergent domestication processes in closely related gene pools.
Abstract:
Objective: The present study aims to contribute to identifying the most appropriate OSEM parameters for generating myocardial perfusion imaging reconstructions with the best diagnostic quality, correlating them with patients' body mass index. Materials and Methods: The study included 28 adult patients submitted to myocardial perfusion imaging in a public hospital. The OSEM method was utilized in the image reconstruction with six different combinations of numbers of iterations and subsets. The images were analyzed by nuclear cardiology specialists, who took their diagnostic value into consideration and indicated the most appropriate images in terms of diagnostic quality. Results: An overall scoring analysis demonstrated that the combination of four iterations and four subsets generated the most appropriate images in terms of diagnostic quality for all body mass index classes; however, the combination of six iterations and four subsets stood out for the higher body mass index classes. Conclusion: The use of optimized parameters seems to play a relevant role in generating images with better diagnostic quality, ensuring the diagnosis and, consequently, appropriate and effective treatment for the patient.
Abstract:
In this work, the differential evolution algorithm was used to find, for a two-shaft microturbine equipped with intercooling, reheat combustion and a recuperator, the compressor pressure ratios and the recuperator's degree of recuperation that give the best possible retention of part-load efficiency. The chosen part-load power control method was a combination of rotational speed control and lowering of the turbine inlet temperatures, in which the rotational speed of the generator shaft and the inlet temperatures of both turbines could be adjusted freely and independently of each other. An optimal combination of control methods was found that achieves better retention of part-load efficiency than any of the methods used alone. It was also found that the optimal control method does not significantly depend on the design-point parameters chosen for the machine. The machine that is optimal with respect to the retention of part-load efficiency did not differ significantly from the one that is optimal with respect to design-point efficiency.
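A hedged sketch of how such a design-point search could be set up with an off-the-shelf differential evolution routine; the toy part-load objective and the parameter bounds below are placeholders, and SciPy's differential_evolution is used here for brevity rather than the implementation used in the thesis.

```python
import numpy as np
from scipy.optimize import differential_evolution

def partload_efficiency_penalty(x):
    """Placeholder objective: x = (LP pressure ratio, HP pressure ratio, recuperation degree).
    A real model would run the microturbine cycle at several part loads and
    return, e.g., the drop in efficiency relative to the design point."""
    pr_lp, pr_hp, recup = x
    # Toy surrogate with an arbitrary optimum; NOT the thesis's cycle model.
    return (pr_lp - 3.0) ** 2 + (pr_hp - 2.5) ** 2 + (1.0 - recup) ** 2

result = differential_evolution(
    partload_efficiency_penalty,
    bounds=[(1.5, 6.0), (1.5, 6.0), (0.7, 0.99)],   # assumed parameter ranges
    seed=0,
)
print(result.x, result.fun)
```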
Abstract:
The Russian and Baltic electricity markets are in a process of reform and development towards a competitive and transparent market. The Nordic market is also undergoing changes on the way to market integration. Old structures and practices have expired, whereas new laws and rules have come into force. This master's thesis describes the structure and functioning of wholesale electricity markets and the cross-border connections between different countries. Additionally, cross-border trading using different methods of capacity allocation is described. The main goal of the thesis is to study the current situation in different electricity markets and to observe the changes coming into force, as well as to forecast the capacity and electricity balances, in order to optimize short-term power trading between countries and to estimate the possible profit for the company.
Abstract:
This thesis (TFG) presents the results of a comparison between different methods of obtaining a recombinant protein by orthologous and heterologous expression. The study will help identify the best way to express and purify a recombinant protein to be used for biotechnology applications. In the first part of the project, the goal was to find the best expression and purification system for obtaining the recombinant protein of interest. To achieve this objective, expression systems in bacteria and in yeast were designed. The DNA was cloned into two different expression vectors to create fusion proteins with two different tags, and expression of the protein was induced by IPTG or glucose. Additionally, in yeast, two promoters were used to express the protein: the promoter of the protein itself (orthologous expression) and the ENO2 promoter (heterologous expression). The protein of interest is a NAD-dependent enzyme, so its specific activity was subsequently evaluated by coenzyme conversion. The results of the TFG suggest that, comparing the model organisms, bacteria are more efficient than yeast, because the quantity of protein obtained is higher and the purification is better. Regarding yeast, comparing the two expression mechanisms that were designed, heterologous expression works much better than orthologous expression, so if yeast is to be used as the expression model for the protein of interest, ENO2 is the best option. Finally, the enzymatic assays, performed to compare the effectiveness of the different expression mechanisms with respect to protein activity, revealed that the protein purified from yeast had more activity in converting the NAD coenzyme.
Abstract:
Current technology trends in the medical device industry call for the fabrication of massive arrays of microfeatures, such as microchannels, on non-silicon substrates with high accuracy, superior precision, and high throughput. Microchannels are typical features used in medical devices for medication dosing into the human body and for analyzing DNA arrays or cell cultures. In this study, the capabilities of machining systems for micro end milling have been evaluated by conducting experiments, regression modeling, and response surface methodology. In the micromilling experiments, arrays of microchannels were fabricated on aluminium and titanium plates, and the feature size and accuracy (width and depth) and surface roughness were measured. Multicriteria decision making for material and process parameter selection for the desired accuracy is investigated using the particle swarm optimization (PSO) method, an evolutionary computation method related to genetic algorithms (GA). Appropriate regression models are utilized within the PSO, and optimum selection of the micromilling parameters with respect to microchannel feature accuracy and surface roughness is performed. An analysis of the optimal micromachining parameters in the decision variable space is also conducted. This study demonstrates the advantages of evolutionary computing algorithms in micromilling decision making and process optimization investigations and can be expanded to other applications.
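To make the parameter-selection step concrete, here is a minimal global-best PSO sketch that minimizes a hypothetical fitted regression model for surface roughness; the model form, its coefficients, and the machining parameter ranges are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

def predicted_roughness(x):
    """Hypothetical fitted regression model Ra = f(spindle speed, feed, depth of cut)."""
    speed, feed, depth = x
    return 0.8 - 1e-5 * speed + 3.0 * feed + 1.5 * depth + 40.0 * feed * depth

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO with positions clipped to the variable ranges."""
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pos = lo + rng.random((n_particles, dim)) * (hi - lo)
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Assumed parameter ranges: spindle speed (rpm), feed (mm/tooth), depth of cut (mm).
best, ra = pso(predicted_roughness, bounds=[(10000, 60000), (0.001, 0.02), (0.05, 0.3)])
print(best, ra)
```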
Abstract:
Analyzing the state of the art in a given field in order to tackle a new problem is always a mandatory task. The literature provides surveys based on summaries of previous studies, which are often limited to theoretical descriptions of the methods. An engineer, however, requires evidence from experimental evaluations in order to make an appropriate decision when selecting a technique for a problem. This is what we have done in this paper: experimentally analyzed a set of representative state-of-the-art techniques for the problem we are dealing with, namely, the road passenger transportation problem. This is an optimization problem in which drivers must be assigned to transport services, fulfilling some constraints and minimizing some cost function. The experimental results have provided us with good knowledge of the properties of several methods, such as modeling expressiveness, anytime behavior, computational time, memory requirements, parameters, and freely downloadable tools. Based on our experience, we are able to choose a technique to solve our problem. We hope that this analysis is also helpful for other engineers facing a similar problem.
Abstract:
Stratospheric ozone can be measured accurately using a limb-scatter remote sensing technique in the UV-visible spectral region of solar light. The advantages of this technique include good vertical resolution and good daytime coverage of the measurements. In addition to ozone, UV-visible limb-scatter measurements contain information about NO2, NO3, OClO, BrO and aerosols. Several satellite instruments currently scan the atmosphere continuously and measure the UV-visible region of the spectrum, e.g., the Optical Spectrograph and Infrared Imager System (OSIRIS), launched on the Odin satellite in February 2001, and the Scanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY), launched on Envisat in March 2002. Envisat also carries the Global Ozone Monitoring by Occultation of Stars (GOMOS) instrument, which also measures limb-scattered sunlight under bright-limb occultation conditions; these conditions occur during daytime occultation measurements. The global coverage of the satellite measurements is far better than that of any other ozone measurement technique, but the measurements are still sparse in the spatial domain. Measurements over a given area are also repeated relatively rarely, while the composition of the Earth’s atmosphere changes dynamically. Assimilation methods are therefore needed to combine the information from the measurements with an atmospheric model. In recent years, the focus of assimilation algorithm research has turned towards filtering methods. The traditional extended Kalman filter (EKF) method takes into account not only the uncertainty of the measurements but also the uncertainty of the evolution model of the system. However, the computational cost of a full-blown EKF increases rapidly as the number of model parameters increases, so the EKF method cannot be applied directly to the stratospheric ozone assimilation problem. The work in this thesis is devoted to the development of inversion methods for satellite instruments and of assimilation methods used with atmospheric models.
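To make the filtering idea concrete, a minimal linear Kalman filter predict/update sketch is given below; the two-state toy model, noise covariances and observation operator are arbitrary illustrations, not the ozone assimilation system, and an extended Kalman filter would additionally linearize the model and observation operators at each step.

```python
import numpy as np

def kf_predict(x, P, M, Q):
    """Prediction step: propagate the state with the model M and inflate the
    covariance with the model-error covariance Q."""
    x_pred = M @ x
    P_pred = M @ P @ M.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, y, H, R):
    """Update step: blend the prediction with observation y according to the
    observation-error covariance R."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x_pred + K @ (y - H @ x_pred)
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P

# Toy two-state example; the cost of maintaining the full covariance P grows
# rapidly with the state dimension, which is what makes a full-blown EKF
# impractical for a global ozone field.
M = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_predict(x, P, M, Q)
x, P = kf_update(x, P, y=np.array([1.2]), H=H, R=R)
print(x, P)
```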