16 results for multiple-input single-output FRF
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on the performance and power consumption of on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router that allows a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level.
Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated into the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have emerged as a viable candidate for achieving better performance and packaging density than traditional 2D ICs. In addition, combining the benefits of the 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the footprint and power dissipation on each layer with a small performance penalty.
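The congestion-aware output selection described in this abstract can be sketched as a small routine that, among the admissible minimal-path directions toward the destination, picks the least-congested neighbor. This is a generic illustration (function and port names are hypothetical), not the thesis algorithm:

```python
def select_output(dx, dy, congestion):
    """Congestion-aware output selection for a 2D mesh NoC router.

    dx, dy: signed hop offsets to the destination.
    congestion: occupancy level reported by each neighboring router,
    keyed by direction ("N", "S", "E", "W").
    """
    # Admissible (minimal-path) directions toward the destination.
    candidates = []
    if dx > 0:
        candidates.append("E")
    if dx < 0:
        candidates.append("W")
    if dy > 0:
        candidates.append("N")
    if dy < 0:
        candidates.append("S")
    if not candidates:
        return "LOCAL"  # packet has arrived; eject to the local PE
    # Adaptive choice: route toward the least-congested neighbor.
    return min(candidates, key=lambda d: congestion[d])
```

An input selection stage could analogously rank the input ports by their congestion level when arbitrating which port is serviced next.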
Abstract:
One of the targets of the climate and energy package of the European Union is to increase energy efficiency in order to achieve a 20 percent reduction in primary energy use by 2020 compared with the projected level. Energy efficiency can be improved, for example, by increasing the rotational speed of large electrical drives, because this enables the elimination of gearboxes, leading to a compact design with lower losses. The rotational speeds of traditional bearings, such as roller bearings, are limited by mechanical friction. Active magnetic bearings (AMBs), on the other hand, allow very high rotational speeds. Consequently, their use in large medium- and high-speed machines has rapidly increased. An active magnetic bearing rotor system is an inherently unstable, nonlinear multiple-input, multiple-output system. Model-based controller design of AMBs requires an accurate system model. Finite element modeling (FEM) together with experimental modal analysis provides a very accurate model for the rotor, and a linearized model of the magnetic actuators has proven to work well in normal conditions. However, the overall system may suffer from unmodeled dynamics, such as the dynamics of the foundation or of shrink fits. These dynamics can be modeled by system identification. System identification can also be used for on-line diagnostics. In this study, broadband excitation signals are adopted for the identification of an active magnetic bearing rotor system. Broadband excitation enables faster frequency response function measurements compared with the widely used stepped sine and swept sine excitations. Different broadband excitations are reviewed, and the random phase multisine excitation is chosen for further study. The measurement times using the multisine excitation and the stepped sine excitation are compared. An excitation signal design with an analysis of the harmonics produced by the nonlinear system is presented.
The suitability of different frequency response function estimators for an AMB rotor system is also compared. Additionally, analytical modeling of an AMB rotor system, obtaining a parametric model from the nonparametric frequency response functions, and model updating are discussed briefly, as they are key elements in modeling for control design. The theoretical methods are tested with a laboratory test rig. The results show that an appropriately designed random phase multisine excitation is suitable for the identification of AMB rotor systems.
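A random phase multisine is a periodic sum of cosines on a fixed frequency grid, each with a phase drawn independently from a uniform distribution. The following is a minimal sketch of such an excitation generator (the parameter values are illustrative, not the thesis design):

```python
import numpy as np

def random_phase_multisine(f_min, f_max, n_lines, fs, n_samples, seed=None):
    """Sum of equal-amplitude cosines on n_lines frequencies in
    [f_min, f_max], each with an independent uniform random phase."""
    rng = np.random.default_rng(seed)
    freqs = np.linspace(f_min, f_max, n_lines)
    phases = rng.uniform(0.0, 2.0 * np.pi, n_lines)
    t = np.arange(n_samples) / fs
    x = np.cos(2.0 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)
    # Scale to unit peak so the actuator amplitude range is used predictably.
    return x / np.max(np.abs(x))

excitation = random_phase_multisine(10.0, 1000.0, 100, fs=5000.0,
                                    n_samples=5000, seed=0)
```

Because every frequency line is excited in each period, a single period of the response already yields the frequency response function at all lines, which is what makes the measurement faster than stepped or swept sine excitation.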
Abstract:
One challenge in data assimilation (DA) methods is how to compute the error covariance for the model state. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the nonlinear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance. In VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate into the model state vector of 30 171 elements, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme. An external program is used to send and receive information between the model and the DA procedure using files.
The advantage of this method is that the changes needed in the model code are minimal: only a few lines which facilitate input and output. Apart from being simple to couple, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing simply by telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake. However, due to the sparsity of the TSM data in both time and space, the match was not good. The use of multiple automatic stations with real-time data is important to avoid the time sparsity problem. With DA, this will help in better understanding environmental hazard variables, for instance. We found that using a very large ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF and the ensemble size limit for performance lead to the emerging area of Reduced Order Modeling (ROM). To save computational resources, running a full-blown model is avoided in ROM. When ROM is applied with the non-intrusive DA approach, it may result in a cheaper algorithm that relaxes the computational challenges existing in the field of modeling and DA.
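The file-based, non-intrusive coupling can be sketched as a control script that alternates between the model executable and the DA procedure, exchanging state through files. The commands and file names below are hypothetical placeholders, not the actual codes used in the thesis:

```python
import json
import pathlib
import subprocess

def assimilation_cycle(model_cmd, da_cmd, state, workdir):
    """One cycle of non-intrusive coupling: write the current state to a
    file, run the model to produce a forecast file, then run the DA
    procedure to produce an analysis file, and read it back."""
    workdir = pathlib.Path(workdir)
    (workdir / "state_in.json").write_text(json.dumps(state))
    # The model is expected to read state_in.json and write forecast.json.
    subprocess.run(model_cmd, cwd=workdir, check=True)
    # The DA step is expected to read forecast.json (plus observations)
    # and write analysis.json.
    subprocess.run(da_cmd, cwd=workdir, check=True)
    return json.loads((workdir / "analysis.json").read_text())
```

Because all communication goes through files, the model and the DA code may be written in different languages, and a parallel model only needs the control script to wait for all processes to finish before the DA step is invoked.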
Abstract:
The aim of this thesis was to develop the dimensioning of the components of a ground source heat pump system. The work was carried out for a company called Alufer Oy, which has already spent three years on the product development of a ground source heat pump system. The system will be designed to be as efficient and flexible as possible. The design premise is that the heating system is a so-called low-temperature system, which in practice is often implemented as underfloor heating. The thesis first explains what ground source heat is and describes the most common installation methods for the heat collection piping of a ground source heat pump system. At present, the collection piping is installed either horizontally (in the ground or in water) or vertically (in a borehole). Next, the thesis surveys the ground source heat pump market in Finland and examines the products and technology of the three largest manufacturers: Geopro Systems, Suomen Lämpöpumpputekniikka and Ekowell. The markets of a few European countries are also examined. In the dimensioning system, the analysis starts from the heating power demand of a new building and the power required for heating domestic hot water. The components of the heat pump system were specified on the basis of the required heating energy. A ground source heat pump system consists of the following main components: evaporator, compressor, condenser and expansion valve. The dimensioning of the evaporator power takes into account the material properties of the fluid circulating in the heat collection piping, the mass flow, and the temperature difference between the fluid outlet and inlet of the evaporator. The compressor power was determined from the log p-h diagram of the selected refrigerant (R407C) or theoretically from the compressor manufacturers' own selection programs. The condenser power was defined as the sum of the evaporator and compressor powers; this also determines the heating power demand of the new building. Finally, the thesis discusses development possibilities for the ground source heat pump system. The options considered include a superheater, a subcooler and a storage tank, which can considerably improve the coefficient of performance of the ground source heat pump.
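The dimensioning chain described above (evaporator power from the brine mass flow and temperature difference, condenser power as the sum of the evaporator and compressor powers) can be sketched as follows; the numerical values are purely illustrative, not from the thesis:

```python
def evaporator_power(m_dot, cp, t_in, t_out):
    """Heat absorbed from the brine loop: Q = m_dot * cp * (T_in - T_out).
    Units: kg/s, J/(kg*K), K -> W."""
    return m_dot * cp * (t_in - t_out)

def condenser_power(q_evap, p_comp):
    # Energy balance of the heat pump cycle: condenser output equals
    # evaporator input plus compressor work.
    return q_evap + p_comp

def cop(q_cond, p_comp):
    # Coefficient of performance of the heating cycle.
    return q_cond / p_comp

# Illustrative brine loop: 0.3 kg/s, cp = 3800 J/(kg*K), 3 K temperature drop.
q_evap = evaporator_power(m_dot=0.3, cp=3800.0, t_in=273.15, t_out=270.15)
q_cond = condenser_power(q_evap, p_comp=1500.0)
```

The superheater, subcooler and storage tank mentioned above improve the coefficient of performance by increasing the usable heat output for a given compressor input.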
Abstract:
The first part of the thesis surveyed the principal fluoride sources at the Tornio works of the AvestaPolarit companies, such as fluorspar, casting slags, casting powders and hydrofluoric acid. The volatilization and dissolution behaviour of casting powders and slags was examined with the help of domestic and international studies. The research results were applied in broad outline to the situation at the Tornio works, taking into account factors that could mitigate or amplify the environmental impact of fluorides. In general, the environmental and health effects of fluorides were assessed as minor. In the experimental part of the thesis, the fluoride balances of the ferrochrome plant, steel melting shop, hot rolling mill and cold rolling mill of the Tornio works were determined. The fluoride contents of the inputs of each department were established on the basis of product compositions reported by the producers, specifications, and fluoride analyses. The total fluoride amounts were calculated for each input and related to the 2001 production level of each department. In the balance examination, as expected, the largest fluoride inputs were the slag-forming agent fluorspar (CaF2) used in the steel melting shop and the pickling acid of the cold rolling mill, 70 per cent hydrofluoric acid (HF). Other significant inputs were the lime-containing mixed slag used by the cold rolling mill and the smelting coke of the ferrochrome plant. The fluoride contents of the outputs, i.e. the emissions, were determined by emission measurements. Wastewater was mainly sampled as weekly composite samples, which were analysed in the plant laboratory. The gaseous outputs had been determined on the basis of single measurements. The fluoride measurements of the solid outputs, i.e. sludges and slags, were carried out on slag samples from three melts and on an annual sludge sample. Of the outputs, the largest specific emission factors were found for the slags of the AOD converter and ladle furnace of the steel melting shop and for the neutralized regeneration sludge and neutralization sludges of the cold rolling mill.
These did not cause actual emissions to the surrounding environment, because the sludges and slags are deposited in the plant's landfill or used in an insoluble form. The measurement inaccuracies of the fluoride inputs and outputs of the Tornio works affected the fluoride balance. The total fluoride inputs of the ferrochrome plant were clearly larger than its outputs. The outputs of the steel melting shop's fluoride balance were larger than its inputs, while the inputs and outputs of the cold rolling mill were roughly of the same order of magnitude. The fluoride inputs and outputs of the hot rolling mill were negligible. The uncertainties of the fluoride balance can be reduced, for example, by performing several fluoride measurements of gaseous emissions.
Abstract:
The study focuses on the key factors in outsourcing from the viewpoint of manufacturing companies operating in Russia. The goal has been to give an overview of the different kinds of challenges companies might face when outsourcing. Of particular interest are the possible risks which might originate from the subcontracting relationship, as well as the management of these risks. The empirical material for this qualitative interview study was collected from three large-scale manufacturing companies operating in the food industry in Russia. Two of the interviewed companies were local Russian actors, and one was an international firm. According to the respondents, a big challenge is to find a suitable supplier in the Russian market. If suppliers are available, they may often not be capable of operating as outsourcing partners. The most common problems faced with suppliers are unstable quality and arbitrary pricing. Whether the suppliers are capable of offering activities which satisfy the company's own and the end customers' requirements seems to be the biggest concern in the interviewed companies. This quality risk is managed by a strategy of multiple sourcing; single sourcing is seen as an impossible option. The interviewed companies have no organised risk management with their external suppliers.
Abstract:
An isolation amplifier is an instrumentation amplifier whose input and output are galvanically isolated from each other. Isolation amplifiers are used in applications requiring galvanic isolation, for instance in hospital equipment. There are industrial applications that would need analog isolation amplifiers, but it is not known whether the component values of analog isolation amplifiers are good enough. This thesis surveys the current characteristics, prices and component values of analog isolation amplifiers from four different manufacturers.
Abstract:
It has been known since the 1970s that the laser beam is suitable for processing paper materials. In this thesis, the term paper materials refers to all wood-fibre based materials, such as dried pulp, copy paper, newspaper, cardboard, corrugated board, tissue paper, etc. Accordingly, laser processing in this thesis means all laser treatments resulting in material removal, such as cutting, partial cutting, marking, creasing, perforation, etc., that can be used to process paper materials. Laser technology provides many advantages for the processing of paper materials: a non-contact method, freedom of processing geometry, reliable technology for non-stop production, etc. The packaging industry in particular is a very promising area for laser processing applications. However, there were only a few industrial laser processing applications worldwide even at the beginning of the 2010s. One reason for the small-scale use of lasers in paper material manufacturing is a shortage of published research and scientific articles. Another problem restraining the use of lasers for processing paper materials is the colouration of the paper material, i.e. the yellowish and/or greyish colour of the cut edge appearing during or after cutting. These are the main reasons for selecting the topic of this thesis to concern the characterization of the interaction of the laser beam and paper materials. This study was carried out in the Laboratory of Laser Processing at Lappeenranta University of Technology (Finland). The laser equipment used in this study was a TRUMPF TLF 2700 carbon dioxide laser that produces a beam with a wavelength of 10.6 μm and a power range of 190-2500 W (laser power on the workpiece). The study of laser beam and paper material interaction was carried out by treating dried kraft pulp (grammage of 67 g/m²) with different laser power levels, focal plane position settings and interaction times. The interaction between the laser beam and the dried kraft pulp was detected with different monitoring devices, i.e.
spectrometer, pyrometer and an active illumination imaging system. This way it was possible to create an input and output parameter diagram and to study the effects of the input and output parameters. When the interaction phenomena are understood, process development can be carried out and even new innovations developed. Filling the gap in knowledge of the interaction phenomena can pave the way for wider use of laser technology in the paper making and converting industry. It was concluded in this thesis that the interaction of a laser beam and paper material has two mechanisms that depend on the focal plane position range. The assumed interaction mechanism B appears in the average focal plane position range of 3.4 mm to 2.4 mm, and the assumed interaction mechanism A in the average focal plane position range of 0.4 mm to -0.6 mm, both in the experimental setup used. A focal plane position of 1.4 mm represents the midzone of these two mechanisms. Holes are formed gradually during the laser beam and paper material interaction: first a small hole is formed in the interaction area at the centre of the laser beam cross-section, and after that, as a function of interaction time, the hole expands until the interaction between the laser beam and the dried kraft pulp ends. From the image analysis it can be seen that at the beginning of the interaction between the laser beam and the dried kraft pulp, small holes of very good quality are formed. It is obvious that black colour and a heat affected zone appear as a function of interaction time. This reveals that there are still different interaction phases within interaction mechanisms A and B. These interaction phases appear as a function of time and also as a function of the peak intensity of the laser beam. The limit peak intensity is the value that divides interaction mechanisms A and B from one-phase interaction into dual-phase interaction.
So all peak intensity values under the limit peak intensity belong to MAOM (interaction mechanism A one-phase mode) or MBOM (interaction mechanism B one-phase mode), and values over it belong to MADM (interaction mechanism A dual-phase mode) or MBDM (interaction mechanism B dual-phase mode). The decomposition process of cellulose is the evolution of hydrocarbons when the temperature is between 380-500°C, meaning that the long cellulose molecule is split into smaller volatile hydrocarbons in this temperature range. As the temperature increases, the decomposition process of the cellulose molecule changes. In the range of 700-900°C, the cellulose molecule is mainly decomposed into H2 gas, which is why this range is called the evolution of hydrogen. Interaction in this range starts (as in the range of MAOM and MBOM) when a small, good-quality hole is formed. This is due to "direct evaporation" of the pulp via the decomposition process of the evolution of hydrogen, and it can be seen in the spectrometer as a high intensity peak of yellow light (in the range of 588-589 nm), which refers to a temperature of ~1750°C. The pyrometer does not detect this high intensity peak, since it is not able to detect the physical phase change from solid kraft pulp to gaseous compounds. As the interaction time between the laser beam and the dried kraft pulp continues, the hypothesis is that three auto-ignition processes occur. The auto-ignition temperature of a substance is the lowest temperature at which it will spontaneously ignite in a normal atmosphere without an external source of ignition, such as a flame or spark. The three auto-ignition processes appear in the range of MADM and MBDM, namely: 1. the auto-ignition temperature of hydrogen (H2) is 500°C, 2. the auto-ignition temperature of carbon monoxide (CO) is 609°C, and 3. the auto-ignition temperature of carbon (C) is 700°C. These three auto-ignition processes lead to the formation of a plasma plume which has strong emission of radiation in the range of visible light.
The formation of this plasma plume can be seen as an increase of intensity in the wavelength range of ~475-652 nm. The pyrometer shows the maximum temperature just after this ignition. This plasma plume is assumed to scatter the laser beam so that it interacts with a larger area of the dried kraft pulp than the actual area of the beam cross-section. This assumed scattering also reduces the peak intensity. The results thus show that the presumably scattered light with low peak intensity interacts with a large area of the hole edges, and due to the low peak intensity this interaction happens at a low temperature. The interaction between the laser beam and the dried kraft pulp thus turns from the evolution of hydrogen to the evolution of hydrocarbons, which leads to the black colour of the hole edges.
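The link between focal plane position and peak intensity, central to distinguishing the one-phase and dual-phase modes above, follows from standard Gaussian beam optics: the beam radius grows away from the waist, and the on-axis intensity is I0 = 2P/(πw²). A minimal sketch with illustrative parameters (not the thesis optics):

```python
import math

def beam_radius(z, w0, wavelength):
    """Gaussian beam radius w(z) at axial distance z from the waist w0."""
    z_rayleigh = math.pi * w0 ** 2 / wavelength  # Rayleigh range
    return w0 * math.sqrt(1.0 + (z / z_rayleigh) ** 2)

def peak_intensity(power, w):
    """On-axis (peak) intensity of a Gaussian beam: I0 = 2P / (pi w^2)."""
    return 2.0 * power / (math.pi * w ** 2)

# Illustrative CO2 laser values: 10.6 um wavelength, 0.1 mm waist, 1 kW.
w = beam_radius(z=1.4e-3, w0=1.0e-4, wavelength=10.6e-6)
intensity = peak_intensity(1000.0, w)
```

Moving the focal plane away from the material surface therefore lowers the peak intensity on the surface, which is consistent with a limit peak intensity separating one-phase from dual-phase interaction.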
Abstract:
This study examines how venture capital investors evaluate product ideas and their potential to develop into innovations. The research problem is approached through three sub-problems: 1. What processes, methods or practices do venture capital investors have for evaluating product ideas; 2. What innovation factors, components or attributes do venture capital investors consider when evaluating a product idea; 3. What factors do venture capital investors consider when evaluating the innovation potential of a product idea, and how important are these factors. To frame and understand the research problem, the concept of innovation is described: what innovation activity is, what the innovation process looks like, and how an idea proceeds towards a product and a possible innovation. The financing of potential ideas is examined both at a general level and from the investors' perspective. The history of venture capital, investment motives, and the venture capital investment process are described. In addition, the factors of a product innovation are identified, which in this study are inputs, processes and outputs. The data were collected through an online survey in spring 2013. The study was a census, and the questionnaire was sent to all members of the FVCA (full members of the Finnish Venture Capital Association). In addition to the investors' background information, the empirical part examined the methods and systems related to evaluating a product idea, its inputs, processes and outputs, and matters related to the innovation potential of a product idea. Venture capital investors are most interested in growth-stage financing and, among industries, in manufacturing and energy. Product ideas and their innovation potential are evaluated on the basis of several different factors, but few companies have established, fixed evaluation methods or processes.
Abstract:
This thesis examines the application of data envelopment analysis (DEA) as an equity portfolio selection criterion in the Finnish stock market during the period 2001-2011. A sample of publicly traded firms on the Helsinki Stock Exchange is examined; the sample covers the majority of the publicly traded firms on the exchange. Data envelopment analysis is used to determine the efficiency of firms using a set of input and output financial parameters. The set of financial parameters consists of asset utilization, liquidity, capital structure, growth, valuation and profitability measures. The firms are divided into artificial industry categories because of the industry-specific nature of the input and output parameters. Comparable portfolios are formed inside each industry category according to the efficiency scores given by the DEA, and the performance of the portfolios is evaluated with several measures. The empirical evidence of this thesis suggests that, with certain limitations, data envelopment analysis can successfully be used as a portfolio selection criterion in the Finnish stock market when the portfolios are rebalanced at an annual frequency according to the DEA efficiency scores. However, when the portfolios were rebalanced every two or three years, the results are mixed and inconclusive.
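A common DEA formulation, the input-oriented CCR envelopment model, scores each firm by the factor θ to which its inputs could be radially shrunk while a convex combination of peer firms still matches its outputs. The thesis does not state its exact DEA variant, so the following linear-programming sketch is illustrative only:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """Input-oriented CCR DEA efficiency of unit j0.
    X: (m inputs x n units), Y: (s outputs x n units)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1 .. lambda_n], all >= 0.
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_in = np.c_[-X[:, [j0]], X]                # sum_j lam_j x_ij <= theta x_i0
    A_out = np.c_[np.zeros((s, 1)), -Y]         # sum_j lam_j y_rj >= y_r0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return float(res.fun)

# Two firms, two inputs, one output (made-up numbers): firm 1 uses twice
# the inputs of firm 0 for the same output, so it should score 0.5.
X = np.array([[2.0, 4.0], [1.0, 2.0]])
Y = np.array([[2.0, 2.0]])
theta = dea_efficiency(X, Y, 1)
```

Efficient firms score θ = 1; a score below 1 means the firm's inputs could be proportionally reduced by that factor, which is the basis for ranking firms into portfolios.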
Abstract:
Laser additive manufacturing (LAM), also known as 3D printing, is a powder bed fusion (PBF) type of additive manufacturing (AM) technology used to manufacture metal parts layer by layer with the assistance of a laser beam. The development of the technology from building just prototype parts to functional parts is due to its design flexibility, and also to the possibility to manufacture components tailored and optimised in terms of performance and the strength-to-weight ratio of the final parts. The study of energy and raw material consumption in LAM is essential, as it might facilitate the adoption and usage of the technique in manufacturing industries. The objective of this thesis was to find the impact of LAM on environmental and economic aspects and to conduct a life cycle inventory (LCI) of CNC machining and LAM in terms of energy and raw material consumption at the production phase. The literature overview in this thesis covers sustainability issues in manufacturing industries with a focus on environmental and economic aspects. Life cycle assessment and its applicability in the manufacturing industry were also studied. The UPLCI-CO2PE! Initiative was identified as the most widely applied existing methodology for conducting LCI analysis in a discrete manufacturing process like LAM. Much of the reviewed literature focused on PBF of polymeric materials, and only a few studies considered metallic materials. The studies that included metallic materials only measured the input and output energy or materials of the process and compared different AM systems without comparing them to any competing process. Neither did any include the effect of process variation when building metallic parts with LAM. Experimental tests were carried out in this thesis to make dissimilar samples with CNC machining and LAM. The test samples were designed to include part complexity and weight reductions. A PUMA 2500Y lathe was used for the CNC machining, whereas a modified research machine representing the EOSINT M-series was used for the LAM.
The raw materials used for making the test pieces were stainless steel 316L bar (CNC machined parts) and stainless steel 316L powder (LAM built parts). An analysis of the power, time, and energy consumed in each of the manufacturing processes at the production phase showed that LAM utilises more energy than CNC machining. The high energy consumption was a result of the duration of production. The energy consumption profiles in CNC machining showed fluctuations with high and low power ranges, whereas LAM energy usage within a specific mode (standby, heating, process, sawing) remained relatively constant throughout production. CNC machining was limited in terms of manufacturing freedom, as it was not possible to manufacture all the designed samples by machining, and the one that was possible involved a large amount of material removed as waste. The planning phase in LAM was shorter than in CNC machining, as the latter required many preparation steps. The specific energy consumption (SEC) of LAM was estimated based on the practical results and an assumed platform utilisation. The estimated platform utilisation showed that the SEC could be reduced when more parts are placed in one build than in the empirical results of this thesis (six parts).
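The mode-wise energy accounting and the specific energy consumption (SEC) figure can be sketched as below; the power levels, times, and part mass are made-up illustrative numbers, not the measured values from the thesis:

```python
def build_energy(power_by_mode_w, time_by_mode_h):
    """Total build energy in kWh as the sum of (power * time) over the
    machine modes (standby, heating, process, sawing)."""
    return sum(power_by_mode_w[m] * time_by_mode_h[m]
               for m in power_by_mode_w) / 1000.0

def specific_energy_consumption(energy_kwh, mass_kg):
    """SEC: energy consumed per unit mass of finished parts (kWh/kg)."""
    return energy_kwh / mass_kg

# Illustrative mode powers (W) and durations (h) for one LAM build.
modes_p = {"standby": 1200.0, "heating": 3000.0, "process": 2500.0, "sawing": 1800.0}
modes_t = {"standby": 0.5, "heating": 1.0, "process": 8.0, "sawing": 0.25}
e = build_energy(modes_p, modes_t)  # kWh for the whole build
```

Because the standby, heating, and sawing contributions are fixed per build, dividing the same overhead over more parts in one build lowers the SEC, which is the platform-utilisation effect estimated in the thesis.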
Abstract:
Medium-voltage motor drives extend the power rating of AC motor drives in industrial applications. Multilevel converters are gaining an ever-stronger foothold in this field. This doctoral dissertation introduces a new topology to the family of modular multilevel converters: the modular double-cascade converter. The modularity of the converter is enabled by the application of multiwinding medium-frequency isolation transformers. Owing to the innovative transformer link, the converter presents many advantageous properties at a concept level: modularity, high input and output power quality, small footprint, and a wide variety of applications, among others. Further, the research demonstrates that the transformer link also plays a key role in the disadvantages of the topology. An extensive simulation study on the new converter is performed. The focus of the simulation study is on the development of control algorithms and the feasibility of the topology. In particular, the circuit and control concepts used in the grid interface, the coupling configurations of the load inverter, and the transformer link operation are thoroughly investigated. Experimental results provide proof of concept for the operation principle of the converter. This work concludes a research collaboration project on multilevel converters between LUT and Vacon Plc. The project was active from 2009 until 2014.
Abstract:
The human striatum is a heterogeneous structure representing a major part of the basal ganglia input and output of the dopamine (DA) system. Positron emission tomography (PET) is a powerful tool for imaging DA neurotransmission. However, PET measurements suffer from bias caused by the low spatial resolution, especially when imaging small, D2/3-rich structures such as the ventral striatum (VST). The brain-dedicated high-resolution PET scanner ECAT HRRT (Siemens Medical Solutions, Knoxville, TN, USA) has superior resolution capabilities compared with its predecessors. In the quantification of striatal D2/3 binding, the highly selective in vivo D2/3 antagonist [11C]raclopride is recognized as a well-validated tracer. The aim of this thesis was to use a traditional test-retest setting to evaluate the feasibility of utilizing the HRRT scanner for exploring not only small brain regions such as the VST but also low-density D2/3 areas such as the cortex. It was demonstrated that the measurement of striatal D2/3 binding was very reliable, even when studying small brain structures or prolonging the scanning interval. Furthermore, the cortical test-retest parameters displayed good to moderate reproducibility. For the first time in vivo, it was revealed that there are significant divergent rostrocaudal gradients of [11C]raclopride binding in striatal subregions. These results indicate that high-resolution [11C]raclopride PET is very reliable, and its improved sensitivity means that it should be possible to detect the often very subtle changes occurring in DA transmission. Another major advantage is the possibility to measure striatal and cortical areas simultaneously. The divergent gradients of D2/3 binding may have functional significance, and the average binding distribution could serve as the basis for a future database. Key words: dopamine, PET, HRRT, [11C]raclopride, striatum, VST, gradients, test-retest.
Abstract:
In recent years, the vulnerability of power networks to natural hazards has received attention. Moreover, operating at the limits of the network transmission capabilities has resulted in major outages during the past decade. One reason for operating at these limits is that the network has become outdated. Therefore, new technical solutions are studied that could provide more reliable and more energy-efficient power distribution, and also better profitability for the network owner. It is the development and price of power electronics that have made DC distribution an attractive alternative again. In this doctoral thesis, one type of low-voltage DC distribution system is investigated. More specifically, it is studied which current technological solutions, used at the customer end, could provide better power quality for the customer compared with the present system. To study the effect of a DC network on the customer-end power quality, a bipolar DC network model is derived. The model can also be used to identify the supply parameters when the V/kW ratio is approximately known. Although the model describes the average behavior, it is shown that the instantaneous DC voltage ripple should be limited. Guidelines are given for choosing an appropriate capacitance value for the capacitor located at the input DC terminals of the customer end. The structure of the customer end is also considered. A comparison between the most common solutions is made based on their cost, energy efficiency, and reliability. In the comparison, special attention is paid to passive filtering solutions, since the filter is considered a crucial element when the lifetime expenses are determined. It is found that the filter topology most commonly used today, namely the LC filter, does not provide an economic advantage over the hybrid filter structure. Finally, some typical control system solutions are introduced and their shortcomings are presented.
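A common way to limit the instantaneous DC voltage ripple at the customer-end input terminals is to size the DC capacitor against the power pulsation of a single-phase load, which occurs at twice the grid frequency. The sketch below uses the standard energy-balance rule C = P / (2·ω·U_dc·ΔU_pp); this is a generic textbook relation, not the specific guideline derived in the thesis, and all numeric values are illustrative.

```python
import math

def dc_link_capacitance(p_load, v_dc, ripple_pct, f_grid=50.0):
    """Minimum DC capacitance keeping the peak-to-peak voltage ripple
    below ripple_pct for a single-phase customer-end converter.

    The load power pulsates at twice the grid frequency, so the capacitor
    must buffer an energy swing of P / (2*w); for small ripple this gives
    C = P / (2 * w * v_dc * dv_pp).
    """
    w = 2.0 * math.pi * f_grid           # grid angular frequency [rad/s]
    dv_pp = ripple_pct / 100.0 * v_dc    # allowed peak-to-peak ripple [V]
    return p_load / (2.0 * w * v_dc * dv_pp)

# Illustrative example: 5 kW load, 700 V DC bus, 2 % allowed ripple.
c_min = dc_link_capacitance(5e3, 700.0, 2.0)   # capacitance in farads
```

For these illustrative numbers the rule yields a capacitance on the order of hundreds of microfarads, which shows why the capacitor is a significant cost and lifetime factor in the comparison above.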
As a solution to the customer-end voltage regulation problem, an observer-based control scheme is proposed, and it is shown how different control system structures affect the performance. When operating in a rigid network, the required performance is achieved using only one output measurement; similar performance can be achieved in a weak grid with a DC voltage measurement. A further improvement is obtained when adaptive gain-scheduling-based control is introduced. In conclusion, the final power quality is determined by the sum of various factors, and the thesis provides guidelines for designing a system that improves the power quality experienced by the customer.
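The idea of observer-based control with a single output measurement can be sketched with a discrete-time Luenberger observer. The model below is a hypothetical customer-end LC output filter with a resistive load, hand-tuned gains, and illustrative component values; it is not the thesis's plant model or controller, only a minimal demonstration that one measured voltage suffices to reconstruct the full state.

```python
import numpy as np

# Hypothetical customer-end model: an LC output filter feeding a resistive
# load, with states x = [inductor current i_L, capacitor voltage v_C].
# All component values are illustrative, not taken from the thesis.
L_f, C_f, R_load, Ts = 2e-3, 1e-3, 10.0, 1e-4   # [H], [F], [ohm], [s]
A = np.array([[1.0, -Ts / L_f],
              [Ts / C_f, 1.0 - Ts / (R_load * C_f)]])  # forward-Euler model
B = np.array([[Ts / L_f], [0.0]])
C = np.array([[0.0, 1.0]])        # only the DC voltage v_C is measured

# Luenberger observer: x_hat[k+1] = A x_hat + B u + L_gain (y - C x_hat).
L_gain = np.array([[0.5], [0.4]])  # hand-tuned gain for this sketch

x = np.array([[5.0], [100.0]])     # true state (unknown to the controller)
x_hat = np.zeros((2, 1))           # observer starts with no knowledge
for _ in range(500):
    u = 100.0                      # constant converter input voltage [V]
    y = C @ x                      # single output measurement
    x_hat = A @ x_hat + B * u + L_gain @ (y - C @ x_hat)
    x = A @ x + B * u

err = float(np.linalg.norm(x - x_hat))  # estimation error after 500 steps
```

Because the estimation error obeys e[k+1] = (A - L_gain·C)·e[k], choosing the gain so that this matrix is stable drives the error to zero, after which the estimated state can feed a voltage regulator; a gain-scheduled variant would adapt L_gain to the identified grid stiffness.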
Abstract:
This dissertation describes a networking approach to infinite-dimensional systems theory, in which a minimal distinction is made between inputs and outputs. We introduce and study two closely related classes of systems, namely state/signal systems and port-Hamiltonian systems, and describe how they relate to each other. Some basic theory for these two classes of systems and their interconnections is provided. The main emphasis lies on passive and conservative systems, and the theoretical concepts are illustrated with the example of a lossless transfer line. Much remains to be done in this field, and we also point to some directions for future studies.
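The lossless transfer line mentioned as the illustrating example admits a standard port-Hamiltonian formulation. The following is the textbook version with per-unit-length capacitance C and inductance L on the spatial interval [0, ℓ]; the exact notation in the dissertation may differ.

```latex
% States: charge density q(z,t) and flux density \phi(z,t), z \in [0,\ell].
% Stored energy (Hamiltonian):
H(q,\phi) = \frac{1}{2}\int_0^{\ell}
  \left( \frac{q(z,t)^2}{C} + \frac{\phi(z,t)^2}{L} \right) \mathrm{d}z

% Dynamics driven by a formally skew-symmetric operator acting on the
% variational derivatives \delta H/\delta q = q/C = v (voltage) and
% \delta H/\delta\phi = \phi/L = i (current):
\frac{\partial}{\partial t}
\begin{pmatrix} q \\ \phi \end{pmatrix}
= -\begin{pmatrix} 0 & \partial_z \\ \partial_z & 0 \end{pmatrix}
\begin{pmatrix} \delta H/\delta q \\ \delta H/\delta\phi \end{pmatrix}
= -\begin{pmatrix} \partial_z\, i \\ \partial_z\, v \end{pmatrix}

% Boundary ports carry the power flow; with
u = \begin{pmatrix} v(0,t) \\ v(\ell,t) \end{pmatrix}, \qquad
y = \begin{pmatrix} i(0,t) \\ -i(\ell,t) \end{pmatrix},
% the energy balance \frac{\mathrm{d}H}{\mathrm{d}t} = u^{\top} y
% holds, i.e. the line is lossless (conservative).
```

The boundary voltages and currents enter symmetrically, which is precisely the situation where the state/signal framework, with its minimal input/output distinction, is natural.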