16 results for Tuning algorithm
Abstract:
This research has focused on the development of a tuned systematic design methodology that gives the best performance in a computer-aided environment and utilises a cross-technological approach, tested especially with and for laser-processed microwave mechanics. A tuned design process scheme is also presented. Because of the currently large production volumes of microwave and radio-frequency mechanics, even slight improvements in design methodologies or manufacturing technologies would offer reasonable possibilities for cost reduction; the typical number of required iteration cycles could be reduced to one fifth of normal. The research area dealing with the methodologies can be divided, firstly, into function-oriented, performance-oriented or manufacturability-oriented product design. Alternatively, various approaches can be developed for customer-oriented, quality-oriented, cost-oriented or organisation-oriented design. However, the real need for improvements lies between these two extremes: an effective methodology for designers should be neither too limited (as in performance-oriented design) nor too general (as in organisation-oriented design), but should include the context of the design environment. This is the area on which the current research is focused. To test the developed tuned design methodology for laser processing (TDMLP) and the tuned optimising algorithm for laser processing (TOLP), seven different industrial product applications for microwave mechanics were designed, CAD-modelled and manufactured by laser in small production series. To verify that the performance of these products meets the required level and to ensure the objectiveness of the results, extensive laboratory tests were used for all designed prototypes. As an example, a Ku-band horn antenna can be laser processed from steel in 2 minutes while obtaining an electrical performance comparable to that of classical aluminium units, and the residual resistance of a laser joint in steel could be limited to 72 milliohms.
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways; an algorithm that is most efficient for one representation may be less efficient for others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the centers, widths, and weights of the basis functions as possible variables, both with the control parameters kept fixed and with them adjusted by a fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and the best setting was found to be problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than the variants using fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
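As an illustration of the base algorithm discussed above, the sketch below implements classical DE/rand/1/bin with fixed control parameters F and CR; the population size, bounds, and the sphere test function are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.5, CR=0.9, iters=200, rng=None):
    """Minimal DE/rand/1/bin sketch with fixed control parameters F and CR."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)   # random initial population
    cost = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct individuals, all different from i
            a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            # binomial crossover: at least one gene comes from the mutant
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            trial_cost = f(trial)
            if trial_cost <= cost[i]:                     # greedy selection
                pop[i], cost[i] = trial, trial_cost
    best = np.argmin(cost)
    return pop[best], cost[best]

# Example: sphere test function in 5 dimensions
x_best, f_best = differential_evolution(lambda x: np.sum(x**2), [(-5, 5)] * 5)
```

A fuzzy-adaptive variant in the spirit of the thesis would replace the fixed F and CR with values produced by a fuzzy controller from the observed state of the optimization process.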
Abstract:
Improving the dynamics of flexible manipulators such as log cranes often requires advanced control methods. This thesis discusses the vibration problems in the cranes used in commercial forestry machines. Two control methods, adaptive filtering and semi-active damping, are presented. The adaptive filter uses a fraction of the lowest natural frequency of the crane as its filtering frequency. The payload estimation algorithm, the filtering of the control signal and the algorithm for calculating the lowest natural frequency of the crane are presented. The semi-active damping method is based on pressure feedback: the pressure vibration, scaled with a suitable gain, is added to the control signal of the lift-cylinder valve to suppress vibrations. The adaptive filter cuts off high-frequency impulses coming from the operator, and semi-active damping suppresses the crane's oscillation, which is often caused by external disturbances. In field tests performed on the crane, a correctly tuned (25 % tuning) adaptive filter reduced pressure vibration by 14-17 % and semi-active damping by 21-43 %. Applying these methods requires auxiliary transducers installed at specific points in the crane, as well as electronically controlled directional control valves.
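The following sketch illustrates the two ideas in simplified form: a first-order low-pass filter whose cutoff is a fraction of the estimated lowest natural frequency, and a pressure-feedback term added to the valve command. The filter structure, signal names, and gain values are assumptions for illustration, not the exact implementation of the thesis; the sign and magnitude of the damping gain would have to be tuned for the actual valve and cylinder.

```python
import numpy as np

def lowpass_coefficient(f_natural_hz, tuning=0.25, dt=0.01):
    """Discrete first-order low-pass coefficient; cutoff = tuning * lowest natural frequency."""
    f_cut = tuning * f_natural_hz
    return dt / (dt + 1.0 / (2.0 * np.pi * f_cut))

def controlled_valve_command(u_operator, p_measured, p_mean, state, alpha, k_pressure):
    """One control step: adaptive filtering of the operator command plus pressure feedback.

    u_operator  raw lever command from the operator
    p_measured  lift-cylinder pressure sample
    p_mean      slowly varying mean pressure (e.g. from a separate slow filter)
    state       previous filtered command (filter memory)
    alpha       low-pass coefficient from lowpass_coefficient()
    k_pressure  semi-active damping gain
    """
    u_filtered = state + alpha * (u_operator - state)    # cut high-frequency impulses
    u_damping = k_pressure * (p_measured - p_mean)       # feed back the pressure oscillation
    return u_filtered + u_damping, u_filtered

# Example: 0.5 Hz lowest natural frequency, 100 Hz control loop
alpha = lowpass_coefficient(f_natural_hz=0.5, tuning=0.25, dt=0.01)
u_out, filt_state = controlled_valve_command(0.4, 5.2e6, 5.0e6, 0.0, alpha, k_pressure=1e-8)
```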
Abstract:
This Master's thesis examines threaded programming at the upper hierarchy level of parallel programming, with particular attention to hyper-threading technology. The thesis considers the advantages and disadvantages of hyper-threading and its effects on parallel algorithms. The goal was to understand the hyper-threading implementation of the Intel Pentium 4 processor and to make it possible to exploit it where it provides a performance benefit. Performance data were collected and analysed by running a large set of benchmarks under different conditions (memory handling, compiler settings, environment variables, etc.). Two types of algorithms were examined: matrix operations and sorting. These applications have a regular memory access pattern, which is a double-edged sword: it is an advantage in arithmetic-logic processing, but it degrades memory performance. The reason is that modern processors have very good raw performance when processing regular data, whereas the memory architecture is limited by cache sizes and various buffers. When the problem size exceeds a certain limit, the actual performance can drop to a fraction of the peak performance.
Abstract:
The goal of this thesis was to determine the performance of the logistics services of the Fenix sales and logistics management system used at Stora Enso Oyj, to produce a client application for managing the data gathered from the performance measurements, and to produce an implementation plan for improving the performance. Performance was measured using the facilities provided by TUXEDO. For evaluating the measurement results, a client application was built that could produce the necessary summary data on the durations and structures of the services. No ready-made solutions were available, so all the required software was built as part of this work. All component interfaces were implemented so that services other than those related to logistics can also be measured if needed. The average execution times obtained from the measurements were used when preparing the implementation plan. The plan contains development ideas for several logistics areas with which the performance of the Fenix logistics services can be improved and the current response speed of the system can be maintained in the future. The measures presented in the implementation plan will be carried out at TietoEnator Oyj during 2003.
Abstract:
Coherent anti-Stokes Raman scattering (CARS) is a powerful method of laser spectroscopy in which significant successes have been achieved. However, the non-linear nature of CARS complicates the analysis of the measured spectra. The objective of this thesis is to develop a new phase retrieval algorithm for CARS. It utilises the maximum entropy method and a new wavelet approach for spectroscopic background correction of the phase function. The method was developed to be easily automated and used on large numbers of spectra of different substances. The algorithm was successfully tested on experimental data.
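As a rough illustration of wavelet-based background correction, the sketch below estimates a slowly varying background of a phase spectrum by keeping only the coarse approximation coefficients of a discrete wavelet transform and subtracting the reconstruction. The wavelet family, decomposition level, and synthetic signal are illustrative choices, not those of the thesis, and the PyWavelets package is assumed to be available.

```python
import numpy as np
import pywt

def wavelet_background(signal, wavelet="sym8", level=6):
    """Estimate a slowly varying background by zeroing all detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level, mode="smooth")
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]  # keep approximation only
    return pywt.waverec(coeffs, wavelet, mode="smooth")[: len(signal)]

# Example: synthetic "phase" with sharp resonant features on a slow error background
x = np.linspace(0, 1, 2048)
background = 0.5 * np.sin(2 * np.pi * 0.7 * x) + 0.3 * x
features = 0.2 * np.exp(-((x - 0.4) / 0.004) ** 2) + 0.15 * np.exp(-((x - 0.6) / 0.003) ** 2)
phase = background + features

corrected = phase - wavelet_background(phase)   # background-corrected phase
```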
Abstract:
In the Russian Wholesale Market, electricity and capacity are traded separately. Capacity is a special good, the sale of which obliges suppliers to keep their generating equipment ready to produce the quantity of electricity indicated by the System Operator. Capacity trading was introduced to maintain reliable and uninterrupted delivery of electricity in the wholesale market. The price of capacity reflects constant investments in the construction, modernization and maintenance of power plants. The sale of capacity thus creates favorable conditions for attracting investment into the energy sector, because it guarantees investors that their investments will be recovered.
Abstract:
In this work a fuzzy linear system is used to solve the Leontief input-output model with fuzzy entries. For solving this model, we assume that the consumption matrix of the different sectors of the economy and the demand are known. These assumptions depend heavily on the information obtained from the industries, so uncertainties are involved in this information. The aim of this work is to model these uncertainties and to represent them by fuzzy entries such as fuzzy numbers and LR-type fuzzy numbers (triangular and trapezoidal). A fuzzy linear system is formulated using the fuzzy data and solved with the Gauss-Seidel algorithm. Numerical examples show the efficiency of this algorithm. The famous example of Prof. Leontief, in which he solved the production levels for the U.S. economy in 1958, is also further analysed.
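For a crisp (non-fuzzy) illustration of the underlying computation, the sketch below solves the Leontief system (I - A)x = d with Gauss-Seidel iteration; the consumption matrix A and demand d are made-up numbers, and the fuzzy extension (iterating on the lower and upper bounds of LR-type fuzzy numbers) is not shown.

```python
import numpy as np

def gauss_seidel(M, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve M x = b with Gauss-Seidel iteration (M assumed suitably diagonally dominant)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = M[i, :i] @ x[:i] + M[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / M[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Leontief model: production x must satisfy (I - A) x = d
A = np.array([[0.2, 0.3],      # illustrative consumption (technology) matrix
              [0.4, 0.1]])
d = np.array([100.0, 200.0])   # illustrative final demand
x = gauss_seidel(np.eye(2) - A, d)   # production levels per sector
```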
Abstract:
The objective of this research project is to develop a novel force control scheme for the teleoperation of a hydraulically driven manipulator and to implement an ideal transparent mapping between human-machine interaction and machine-task environment interaction. This master's thesis provides a preparatory study for the research project. The work is limited to a single degree-of-freedom hydraulic slider operated with a 6-DOF Phantom haptic device. The key contribution of the thesis is setting up the experimental rig, including the electromechanical haptic device, the hydraulic servo and a 6-DOF force sensor. The slider is first tested as a position servo using a previously developed intelligent switching control algorithm. Subsequently the teleoperated system is set up and preliminary experiments are carried out. In addition to the development of the single-DOF experimental setup, methods such as passivity control in teleoperation are reviewed. The thesis also contains a review of the modelling of the servo slider, with particular reference to the servo valve. The Markov chain Monte Carlo method is utilised to improve the robustness of the model in the presence of noise.
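The sketch below shows how Markov chain Monte Carlo could, in principle, be used to estimate a servo-valve model parameter from noisy measurements: a random-walk Metropolis sampler drawing from the posterior of a single gain parameter. The first-order valve model, noise level, and prior are assumptions made purely for illustration, not the model identified in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed first-order valve model: flow = K * u (gain K unknown), measured with Gaussian noise
K_true, noise_std = 2.5, 0.2
u = np.linspace(0.0, 1.0, 50)                               # valve commands
q_meas = K_true * u + rng.normal(0.0, noise_std, u.size)    # noisy flow measurements

def log_posterior(K):
    """Gaussian likelihood plus a wide Gaussian prior on the gain K."""
    residuals = q_meas - K * u
    log_lik = -0.5 * np.sum((residuals / noise_std) ** 2)
    log_prior = -0.5 * (K / 10.0) ** 2
    return log_lik + log_prior

# Random-walk Metropolis sampling of K
samples, K = [], 1.0
log_p = log_posterior(K)
for _ in range(5000):
    K_prop = K + rng.normal(0.0, 0.1)
    log_p_prop = log_posterior(K_prop)
    if np.log(rng.random()) < log_p_prop - log_p:           # accept/reject step
        K, log_p = K_prop, log_p_prop
    samples.append(K)

K_est = np.mean(samples[1000:])   # posterior mean after burn-in
```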
Abstract:
This doctoral thesis examines the solving ability of a number of solvers for optimization problems and reveals several difficulties in making a fair solver comparison. In addition, some improvements made to one of the solvers, GAMS/AlphaECP, are presented. Optimization, in this context, means finding the best possible solution to a problem. The class of problems studied can be characterised as hard to solve and occurs in several industrial fields. The goal has been to investigate whether there is a solver that is universally faster and finds solutions of higher quality than the other solvers. The commercial optimization system GAMS (General Algebraic Modeling System) and extensive problem libraries have been used to compare the solvers. The improvements presented have been made to the GAMS/AlphaECP solver, which is based on the Extended Cutting Plane (ECP) method. The ECP method has been developed mainly by Professor Tapio Westerlund at Anläggnings- och systemteknik (Process Design and Systems Engineering) at Åbo Akademi.
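One common tool for comparing solvers across a problem library is the Dolan-Moré performance profile, sketched below; this is a generic technique, not necessarily the comparison method used in the thesis, and the run times in the example are made-up placeholders.

```python
import numpy as np

def performance_profiles(times, taus):
    """Dolan-Moré performance profiles: for each solver, the fraction of problems
    it solves within a factor tau of the best solver on that problem.

    times : array of shape (n_problems, n_solvers); np.inf marks a failure.
    """
    best = np.min(times, axis=1, keepdims=True)
    ratios = times / best
    return np.array([[np.mean(ratios[:, s] <= tau) for s in range(times.shape[1])]
                     for tau in taus])

# Illustrative (made-up) run times in seconds for 5 problems and 3 solvers
times = np.array([[1.2,  0.9,  2.0],
                  [10.0, 12.0, np.inf],   # third solver failed on this problem
                  [0.5,  0.6,  0.4],
                  [3.0,  2.5,  2.8],
                  [7.0, 20.0,  6.5]])
profiles = performance_profiles(times, taus=[1.0, 2.0, 5.0, 10.0])
```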
Abstract:
This Master's thesis defines an online production optimization method for a biofuel-fired power plant. The specification work is part of the further development project of MW Power's MultiPower CHP power plant concept. From the existing optimization approaches, a method suited to the purpose is chosen, based on a plant model and a cost function, and its results are transferred to the automation system as set points of the PID controllers. The energy and mass balances of the plant are calculated from the process measurements, and their results are used as the input data of the next optimization step. The objective function of the optimization is a cost function whose terms are the revenues and costs arising from operating the power plant. The process is optimized, within the limits given to the controllers, so that the total margin is maximized. As the plant accumulates operating hours and history data, the optimization can be sped up by statistically searching the history data for a moment whose conditions correspond to the present situation. The margin of that historical moment is compared with the margin obtained from optimizing the cost function, and the set points computed by the method giving the better margin are taken into use for controlling the process. If neither the cost-function calculation nor the history-data search yields an improving margin, the set points they computed are not taken into use; instead, the optimum is sought with a deterministic optimization algorithm that searches the neighbourhood of the current operating point for controller set points giving a better margin. The control system can also be implemented in a predictive form. In the practical part of the work, the power plant model is created with two different modelling programs, one describing the operation of the boiler and the other the operation of the power plant process. The process values obtained from the modelling are used as input data in calculating the operating margin, which is computed from the cost function. The largest revenues are related to the sale of electricity and heat and to the production subsidy, and the largest costs to the repayment of the investment and the purchase of fuel. A sensitivity analysis is carried out for the cost function, following the change in the margin as the technical values of the process are varied. The results are compared with the results of verification measurements performed at a reference power plant, and it is observed that they are not fully consistent; the differences are due both to shortcomings in the modelling and to the rather short observation periods of the measurements. The practical implementation of the automated optimization system is prepared by defining the optimization approach to be adopted, the related control loops and the required input data. The project will continue with the programming, testing and tuning of the system in a real power plant environment and later with the implementation of predictive control.
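A highly simplified sketch of the selection logic described above: compare the margin from the cost-function optimization with the margin of the best-matching historical moment and adopt the better set points, falling back to a local deterministic search when neither improves on the current margin. All function names, data structures, and the margin model are hypothetical placeholders.

```python
def choose_setpoints(current_margin, current_setpoints,
                     optimize_cost_function, find_similar_history_moment,
                     local_search):
    """Pick the PID set points to apply next (hypothetical selection-logic sketch)."""
    cand_opt, margin_opt = optimize_cost_function()            # plant-model-based optimum
    cand_hist, margin_hist = find_similar_history_moment()     # statistically matched history moment

    # Take the better of the two candidates if it improves on the current margin
    candidates = [(margin_opt, cand_opt), (margin_hist, cand_hist)]
    best_margin, best_setpoints = max(candidates, key=lambda c: c[0])
    if best_margin > current_margin:
        return best_setpoints

    # Otherwise search the neighbourhood of the current operating point deterministically
    return local_search(current_setpoints, current_margin)
```

In a real system the three callables would wrap the plant model optimizer, the history-database query, and a bounded deterministic search routine, respectively.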
Abstract:
The cytoskeleton is a key feature of both prokaryotic and eukaryotic cells. It is comprised of three protein families, one of which is the intermediate filaments (IFs). Of these, the IFs are the largest and most diverse. The IFs are expressed throughout life and are involved in the regulation of cell differentiation, homeostasis, ageing and pathogenesis. The IFs not only provide structural integrity to the cell, they are also involved in a range of cellular functions from organelle trafficking and cell migration to signalling transduction. The IFs are highly dynamic proteins, able to respond and adapt their network rapidly in response to intra- and extracellular cues. Consequently they interact with a whole host of cellular signalling proteins, regulating their function, activity, and cellular localisation. While the function of some of the better-known IFs such as the keratins is well studied, the understanding of the function of two IFs, nestin and vimentin, is poor. Nestin is well known as a marker of differentiation and is expressed in some cancers. In cancer, nestin is primarily described as a promoter of cell motility; however, how it fulfils this role remains undefined. Vimentin too is expressed in cancer, is known to promote cell motility and is used as a marker for epithelial to mesenchymal transition (EMT). It is only in the last decade that studies have addressed the role that vimentin plays in cell motility and EMT. This work provides novel insight into how the IFs nestin and vimentin regulate cell motility and invasion. In particular we show that nestin regulates the cellular localisation and organisation of two key facilitators of cell migration, focal adhesion kinase and integrins. We identify nestin as a regulator of extracellular matrix degradation and integrin-mediated cell invasion. Two further studies address the specific regulation of vimentin by phosphorylation. A detailed characterisation study identified key phosphorylation sites on vimentin, which are critical for proper organisation of the vimentin network. Furthermore, we show that the bioactive sphingolipids are vimentin network regulators. Specifically, the sphingolipids induced RhoA kinase-dependent (ROCK) phosphorylation at vimentin S71, which led to filament reorganisation and inhibition of cell migration. Together these studies shed new light on the regulation of nestin and vimentin during cell motility.
Abstract:
The Laboratory of Intelligent Machines researches and develops energy-efficient power transmissions and automation for mobile construction machines and industrial processes. The laboratory's particular areas of expertise include mechatronic machine design using virtual technologies and simulators, and demanding industrial robotics. The laboratory has collaborated extensively with industrial actors and has participated in significant international research projects, particularly in the field of robotics. For years, dSPACE tools were the only hardware used in the lab to develop different control algorithms in real time. dSPACE's hardware systems are in widespread use in the automotive industry and are also employed in drives, aerospace, and industrial automation. But new competitors are developing sophisticated systems whose features convinced the laboratory to test new products. One of these competitors is National Instruments (NI). In order to get to know the specifications and capabilities of NI tools, an agreement was made to evaluate an NI system, which is used to control a 1-D hydraulic slider. The objective of this research project is to develop a control scheme for the teleoperation of a hydraulically driven manipulator, to implement the control algorithm for human-machine and machine-task environment interaction on both the NI and dSPACE systems simultaneously, and to compare the results.
Abstract:
The growing population on earth, along with diminishing fossil deposits and the climate change debate, calls for better utilization of renewable, bio-based materials. In a biorefinery perspective, renewable biomass is converted into many different products such as fuels, chemicals, and materials, quite similarly to the petroleum refinery industry. Since forests cover about one third of the land surface on earth, ligno-cellulosic biomass is the most abundant renewable resource available. The natural first step in a biorefinery is separation and isolation of the different compounds the biomass is comprised of. The major components in wood are cellulose, hemicellulose, and lignin, all of which can be made into various end-products. Today, focus normally lies on utilizing only one component, e.g., the cellulose in the Kraft pulping process. It would be highly desirable to utilize all the different compounds, from both an economic and an environmental point of view, and the separation process should therefore be optimized. Hemicelluloses can partly be extracted with hot water prior to pulping. Depending on the severity of the extraction, the hemicelluloses are degraded to various degrees. In order to be able to choose from a variety of different end-products, the hemicelluloses should be as intact as possible after the extraction. The main focus of this work has been on preserving the hemicellulose molar mass throughout the extraction at a high yield by actively controlling the extraction pH at the high temperatures used. Since it has not been possible to measure pH during an extraction because of the high temperatures, the extraction pH has remained a “black box”. Therefore, a high-temperature in-line pH measuring system was developed, validated, and tested for hot-water wood extractions. One crucial step in the measurements is calibration; therefore extensive effort was put into developing a reliable calibration procedure. Initial extractions with wood showed that the actual extraction pH was ~0.35 pH units higher than previously believed. The measuring system was also equipped with a controller connected to a pump. With this addition it was possible to control the extraction to any desired pH set point: when the pH dropped below the set point, the controller started pumping in alkali, and thereby the desired set point was maintained very accurately. Analyses of the extracted hemicelluloses showed that less hemicellulose was extracted at higher pH, but with a higher molar mass. Monomer formation could, at a certain pH level, be completely inhibited. Increasing the temperature while maintaining a specific pH set point would speed up the extraction without degrading the molar mass of the hemicelluloses, thereby intensifying the extraction. The diffusion of the dissolved hemicelluloses out of the wood particle is a major part of the extraction process. Therefore, a particle size study ranging from 0.5 mm wood particles to industrial-size wood chips was conducted to investigate the internal mass transfer of the hemicelluloses. Unsurprisingly, it showed that hemicelluloses were extracted faster from smaller wood particles than from larger ones, although this did not seem to have a substantial effect on the average molar mass of the extracted hemicelluloses. However, smaller particle sizes require more energy to manufacture and thus increase the economic cost. Since bark comprises 10-15 % of a tree, it is important to also consider it in a biorefinery concept. Spruce inner and outer bark were hot-water extracted separately to investigate the possibility of isolating the bark hemicelluloses. It was shown that the bark hemicelluloses consisted mostly of pectic material and differed considerably from the wood hemicelluloses. The bark hemicelluloses, or pectins, could be extracted at lower temperatures than the wood hemicelluloses. A chemical characterization, done separately on inner and outer bark, showed that inner bark contained over 10 % stilbene glucosides that could be extracted already at 100 °C with aqueous acetone.
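The pH control described above amounts to a simple feedback loop: when the measured high-temperature pH falls below the set point, alkali is pumped in until it recovers. The sketch below shows an on-off controller with a deadband in that spirit; the interface, deadband width, and dosing rate are purely illustrative assumptions, not the controller used in the work.

```python
def ph_control_step(ph_measured, ph_setpoint, deadband=0.02, dose_rate=1.0):
    """One control step: return the alkali pump rate (arbitrary units).

    On-off control with a deadband: dose alkali only when the measured pH has
    fallen more than `deadband` units below the set point, otherwise stop.
    """
    if ph_measured < ph_setpoint - deadband:
        return dose_rate     # pH too low: pump alkali
    return 0.0               # at or above the set point: no dosing

# Example: extraction controlled to pH 4.2
for ph in (4.25, 4.21, 4.17, 4.10):
    print(ph, "->", ph_control_step(ph, ph_setpoint=4.2))
```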
Abstract:
A new area of machine learning research called deep learning has moved machine learning closer to one of its original goals: artificial intelligence and a general learning algorithm. The key idea is to pretrain models in a completely unsupervised way so that they can finally be fine-tuned for the task at hand using supervised learning. In this thesis, a general introduction to deep learning models and algorithms is given, and these methods are applied to facial keypoint detection. The task is to predict the positions of 15 keypoints on grayscale face images. Each predicted keypoint is specified by an (x, y) real-valued pair in the space of pixel indices. In the experiments, we pretrained deep belief networks (DBNs) and finally performed discriminative fine-tuning. We varied the depth and size of the architecture, tested both deterministic and sampled hidden activations, and studied the effect of additional unlabeled data on pretraining. The experimental results show that our model provides better results than the publicly available benchmarks for the dataset.
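As a minimal illustration of the pretrain-then-fine-tune idea, the sketch below trains a single binary RBM layer with one-step contrastive divergence (CD-1) and then reuses its weights to initialise a regression network for the 30 keypoint coordinates. The layer sizes, learning rates, and random data are placeholders, not the architecture or dataset used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Placeholder data: 96x96 grayscale faces in [0, 1] and 15 (x, y) keypoints -> 30 targets
X = rng.random((256, 96 * 96))
Y = rng.random((256, 30))

n_vis, n_hid = X.shape[1], 500
W = rng.normal(0.0, 0.01, (n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)

# Unsupervised pretraining of one RBM layer with CD-1
lr = 0.05
for epoch in range(5):
    for start in range(0, len(X), 32):
        v0 = X[start:start + 32]
        h0 = sigmoid(v0 @ W + b_hid)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)   # sampled hidden activations
        v1 = sigmoid(h0_sample @ W.T + b_vis)                   # reconstruction
        h1 = sigmoid(v1 @ W + b_hid)
        W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)             # CD-1 gradient estimate
        b_vis += lr * (v0 - v1).mean(axis=0)
        b_hid += lr * (h0 - h1).mean(axis=0)

# Discriminative fine-tuning: the RBM weights initialise the hidden layer of a regressor
W_out = rng.normal(0.0, 0.01, (n_hid, 30))
for epoch in range(20):
    H = sigmoid(X @ W + b_hid)
    pred = H @ W_out
    err = pred - Y
    W_out -= 0.01 * H.T @ err / len(X)                          # update output layer
    dH = (err @ W_out.T) * H * (1 - H)
    W -= 0.01 * X.T @ dH / len(X)                               # backprop into pretrained layer
    b_hid -= 0.01 * dH.mean(axis=0)
```

A real DBN would stack several such RBM layers, pretraining each on the hidden activations of the previous one before the joint supervised fine-tuning pass.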