57 results for Overhead passage
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel: it describes an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language that in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
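The graph-and-queue model described above can be illustrated with a minimal sketch (Python, all names hypothetical, not the RVC-CAL tooling of the thesis): an actor may fire only when its firing rule is satisfied, i.e. its input queues hold enough tokens, and it communicates with other actors only through those queues; a trivially dynamic scheduler then simply keeps firing any actor whose rule holds.

```python
from collections import deque

class Actor:
    """A dataflow node: it may fire only when its firing rule is satisfied."""
    def __init__(self, name, num_inputs, fire_fn):
        self.name = name
        self.inputs = [deque() for _ in range(num_inputs)]  # one FIFO queue per incoming edge
        self.outputs = []                                   # input queues of downstream actors
        self.fire_fn = fire_fn                              # consumes one token per input, returns output tokens

    def can_fire(self):
        # Firing rule used here: every input queue holds at least one token.
        return all(len(q) > 0 for q in self.inputs)

    def fire(self):
        tokens = [q.popleft() for q in self.inputs]   # consume one token per input
        for result in self.fire_fn(*tokens):          # produce output tokens
            for q in self.outputs:
                q.append(result)

# Example graph: add two streams element-wise, then double the sums.
adder  = Actor("add",   2, lambda a, b: [a + b])
scaler = Actor("scale", 1, lambda x: [2 * x])
sink   = deque()
adder.outputs.append(scaler.inputs[0])
scaler.outputs.append(sink)

adder.inputs[0].extend([1, 2, 3])
adder.inputs[1].extend([10, 20, 30])

# A trivially dynamic scheduler: keep firing any actor whose firing rule holds.
actors = [adder, scaler]
while any(a.can_fire() for a in actors):
    for a in actors:
        if a.can_fire():
            a.fire()

print(list(sink))   # [22, 44, 66]
```

Quasi-static scheduling, as discussed in the thesis, aims to replace most of these run-time firing-rule checks with pre-calculated static firing sequences.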
Abstract:
This thesis focuses on flavonoids, a subgroup of phenolic compounds produced by plants, and how they affect the herbivorous larvae of lepidopterans and sawflies. The first part of the literature review examines different techniques to analyze the chemical structures of flavonoids and their concentrations in biological samples. These techniques include, for example, ultraviolet-visible spectroscopy, mass spectrometry, and nuclear magnetic resonance spectroscopy. The second part of the literature review studies how phenolic compounds function in the metabolism of larvae. The harmful oxidation reactions of phenolic compounds in insect guts are also emphasized. In addition to the negative effects, many insect species have evolved the use of phenolic compounds for their own benefit. In the experimental part of the thesis, high concentrations of complex flavonoid oligoglycosides were found in the hemolymph (the circulatory fluid of insects) of birch and pine sawflies. The larvae produced these compounds from simple flavonoid precursors present in the birch leaves and pine needles. Flavonoid glycosides were also found in the cocoon walls of sawflies, which suggested that flavonoids were used in the construction of cocoons. The second part of the experimental work studied the modifications of phenolic compounds in conditions that mimicked the alkaline guts of lepidopteran larvae. It was found that the 24 plant species studied and their individual phenolic compounds had variable capacities to function as oxidative defenses in alkaline conditions. The excrements of lepidopteran and sawfly species were studied to see how different types of phenolics were processed by the larvae. These results suggested that phenolic compounds were oxidized, hydrolyzed, or modified in other ways during their passage through the digestive tract of the larvae.
Abstract:
The map belongs to the A. E. Nordenskiöld collection.
Abstract:
Distillation is a unit operation of the process industry that is used to separate a liquid mixture into two or more products and to concentrate liquid mixtures. A drawback of distillation is its high energy consumption. The increase in energy and raw material prices has led to seeking ways to improve the energy efficiency of distillation. In this Master's thesis, these ways are studied in connection with the concentration of hydrogen peroxide at the Solvay Voikkaa Plant. The aim of this thesis is to improve the energy efficiency of the concentration unit of the Voikkaa Plant. The work includes a review of hydrogen peroxide and its manufacturing. In addition, the fundamentals of distillation and its energy efficiency are reviewed. An energy analysis of the concentration unit of the Solvay Voikkaa Plant is presented in the process development study part. It consists of current and past information on energy and utility consumptions, balances, and costs. After that, the potential ways to improve the energy efficiency of the distillation unit at the factory are considered, and their feasibility is evaluated technically and economically. Finally, proposals for improving the energy efficiency are made. Advanced process control, heat integration, and energy-efficient equipment are the most promising ways to carry out the energy efficiency improvements of the concentration unit at the Solvay Voikkaa factory. Optimization of the reflux flow and the temperatures of the overhead condensers can offer immediate savings in energy and utility costs without investments. Replacing the steam ejector system with a vacuum pump would result in savings of tens of thousands of euros per year. The heat pump solutions, such as utilizing mechanical vapor recompression or thermal vapor recompression, are not feasible due to the high investment costs and long payback times.
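As an illustration of the kind of economic screening described above, a simple (undiscounted) payback-time comparison could be sketched as follows; the figures are hypothetical placeholders, not values from the thesis.

```python
def simple_payback_years(investment_eur, annual_savings_eur):
    """Simple (undiscounted) payback time: investment divided by yearly savings."""
    return investment_eur / annual_savings_eur

# Hypothetical figures for illustration only.
options = {
    "vacuum pump instead of steam ejectors": (50_000, 40_000),
    "mechanical vapour recompression":       (900_000, 60_000),
}

for name, (capex, savings) in options.items():
    print(f"{name}: payback about {simple_payback_years(capex, savings):.1f} years")
```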
Abstract:
In the 2000s, Finland suffered from storms that caused long outages in electricity distribution, the longest lasting up to two weeks. These major disturbances increased the importance of security of supply. In 2013, a new Electricity Market Act was announced. It defined maximum durations for outages: 6 h for city plan areas and 36 h for other areas. The aim of this work is to determine the required level of major-disturbance-proof network for a study area and to find tools for prioritizing overhead lines for cabling renovation in order to improve security of supply. Three prioritization methods were chosen for study: A, prioritizing line sections by the customer outage costs they cause; B, maximizing the number of customers within the major-disturbance-proof network; and C, minimizing excavation costs in the medium voltage network. Profitability calculations showed that prioritization method A was the most profitable and method C had the weakest profitability. From a reliability point of view, method C drove renovation into unreasonable locations in the study area. Therefore, no universal prioritization rule could be derived from these methods, which led to the conclusion that every area to be renewed needs to be evaluated case by case.
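As a rough illustration of the prioritization idea (not the actual tool developed in the work), method A amounts to ranking line sections by the customer outage costs they cause, while method C ranks them by renovation cost; a minimal Python sketch with hypothetical data:

```python
# Hypothetical line-section data: (name, annual customer outage cost EUR, renovation cost EUR).
sections = [
    ("feeder 1, section A", 12_000, 90_000),
    ("feeder 1, section B",  4_000, 40_000),
    ("feeder 2, section A",  9_000, 50_000),
]

# Method A: renovate first the sections that cause the largest customer outage costs.
by_outage_cost = sorted(sections, key=lambda s: s[1], reverse=True)

# Method C, for comparison: renovate first where excavation/renovation is cheapest.
by_renovation_cost = sorted(sections, key=lambda s: s[2])

print([name for name, *_ in by_outage_cost])       # A-ranking
print([name for name, *_ in by_renovation_cost])   # C-ranking
```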
Abstract:
The purpose of this bachelor's thesis is to examine the potential uses of aerial imaging of overhead lines in the operations of electricity network companies. Aerial imaging of overhead lines is still a rather little-used method in Finland and worldwide, for example in the maintenance and condition inspections of electricity networks. As of 2014, aerial imaging still generally refers to 3D imaging and laser scanning of the line corridor carried out from a helicopter; in the future, however, it may also be possible from other aircraft. The thesis focuses in particular on the laser scanning performed during aerial imaging, while 3D imaging receives less attention. In addition, the thesis examines the economic viability of aerial imaging for electricity network companies and discusses its future prospects.
Abstract:
The purpose of this study is to examine the current state of product costing in one business unit of a global group. In addition, the study investigates how product-level cost monitoring can be developed using a model motor concept. The study was carried out as a qualitative case study based on the data of a single organization. The source material of the theoretical part consists mainly of basic works on cost accounting and cost management and of scientific articles. The data of the empirical part is based on interviews, information systems, and familiarization with the organization. The study revealed that the business unit does not currently monitor product-specific costs at the level of individual products; instead, cost monitoring takes place at the level of the average costs of larger aggregates. Product costing is implemented with a method classified as traditional, in which indirect costs are allocated using overhead rate percentages. Based on the study, the allocation bases of the overhead costs show indications of cost distortion. A cost model based on the model motor concept was developed for monitoring product-level costs; it is used to closely track the cost development and cost structure of selected products. The model can be used to increase product-level cost awareness in the business unit and to observe trends in product-specific costs. The cost data of the model comes from the existing cost accounting system, and consequently indications of cost distortion can also be observed in the model's cost data.
Abstract:
Due to various advantages such as flexibility, scalability, and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been raising the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce a faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
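A highly simplified sketch (Python, all names and the remapping policy hypothetical, far removed from the formal models of the thesis) of the kind of dynamic reconfiguration such an agent might perform: when a core is detected as faulty, its tasks are remapped onto the remaining healthy cores.

```python
# All names and the remapping policy are hypothetical.
mapping = {            # task -> core it currently runs on
    "video_decode": 0,
    "filter":       1,
    "network_rx":   2,
}
healthy_cores = {0, 1, 2, 3}

def on_fault_detected(core_id):
    """Reconfiguration step performed by a monitoring agent when a core fails."""
    healthy_cores.discard(core_id)
    for task, core in mapping.items():
        if core == core_id:
            # Simple policy: move the task to the least-loaded healthy core.
            load = {c: sum(1 for cc in mapping.values() if cc == c) for c in healthy_cores}
            mapping[task] = min(load, key=load.get)

on_fault_detected(1)
print(mapping)   # "filter" has been remapped to one of the remaining healthy cores
```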
Abstract:
Recent storms in the Nordic countries caused long power outages over large territories. After these disasters, distribution network operators faced the problem of how to provide an adequate quality of supply in such situations. The decision was made to utilize cable lines rather than overhead lines, which brings new features to distribution networks. The main idea of this work is a comprehensive analysis of medium voltage distribution networks with long cable lines. The high specific capacitance of cables and the length of the lines give rise to problems such as high earth fault currents, an excessive flow of reactive power from the distribution network to the transmission network, and the possibility of a high voltage level at the receiving end of cable feeders. The core tasks, however, were to estimate the functional ability of the earth fault protection and the possibility of utilizing simplified formulas for calculating the operating settings in this network. In order to provide justified solutions to, or evaluations of, the above-mentioned problems, the corresponding calculations were made, and in order to analyze the behavior of the relay protection principles, a PSCAD model of the examined network was created. Evaluation of the voltage rise at the end of a cable line revealed no dangerous increase in the voltage level, while an excessive amount of reactive power can lead to penalties under the Finnish regulations. It was shown by calculation that compensation of earth fault currents should be implemented in these networks. PSCAD models of the electrical grid with isolated neutral, central compensation, and hybrid compensation were created. For the network with hybrid compensation, a methodology is offered that allows the number and rated power of distributed arc suppression coils to be selected. Based on the experimental results, it was determined that, in order to guarantee selective and reliable operation of the relay protection, hybrid compensation with the connection of a high-ohmic resistor should be utilized. Directional and admittance-based relay protection were tested under these conditions and the advantages of the novel protection were revealed. However, for electrical grids with extensive cabling, the necessity of a comprehensive approach to relay protection was explained and illustrated. Thus, in order to organize reliable earth fault protection, it is recommended to utilize both intermittent and conventional relay protection, with the operating settings calculated using the simplified formulas.
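One example of the kind of simplified formula referred to above is the capacitive earth fault current of an unearthed network for a solid fault, approximately Ie = 3·ω·C0·Uph; a small Python sketch with illustrative cable parameters (not values from the examined network):

```python
import math

def earth_fault_current_A(c0_uF_per_km, length_km, U_kV=20.0, f_Hz=50.0):
    """Capacitive earth fault current of an unearthed network for a solid fault:
    Ie = 3 * w * C0 * Uph, with C0 the total phase-to-earth capacitance of the feeder."""
    w = 2 * math.pi * f_Hz
    c0 = c0_uF_per_km * 1e-6 * length_km         # F, phase-to-earth
    u_ph = U_kV * 1e3 / math.sqrt(3)             # V, phase voltage
    return 3 * w * c0 * u_ph

# Illustrative value only: a 20 kV cable may have a phase-to-earth capacitance
# of a few tenths of a microfarad per kilometre.
print(f"{earth_fault_current_A(0.3, 40):.0f} A")   # roughly 130 A for 40 km of cable
```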
Abstract:
Changes in the Electricity Market Act have led several network companies to revise their investment strategies. The tightened security-of-supply requirements demand a greater effort than before from many network companies in developing their distribution networks. Meeting the security-of-supply requirements also requires significant changes in the network technologies used by the companies. Overhead lines, which are vulnerable to major disturbances, are being replaced with underground cables at a faster pace than usual in order to meet the tightened requirements. The 20 kV network of PKS Sähkönsiirto Oy has for the most part consisted of bare overhead lines, whose security of supply does not currently meet the level set by the amended Electricity Market Act. This has led to the creation of a network strategy in which one measure for raising the security of supply to the required level is replacing overhead lines with underground cables. The fast construction schedule of underground cabling brings many challenges to network companies. The high earth fault current and reactive power production of underground cables compared with overhead lines must be taken into account when building the network. This Master's thesis examines the effects on the electricity network of the underground cabling carried out according to the network strategy of PKS Sähkönsiirto Oy. The thesis estimates the level of earth fault current and reactive power production caused by the underground cables of the target network. Based on the results, conclusions are drawn on what the network company must pay attention to when planning and implementing the cabling.
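The reactive power production mentioned above can be estimated with the well-known charging-power relation Qc ≈ ω·C·U²; a small Python sketch with illustrative parameters (not PKS Sähkönsiirto Oy network data):

```python
import math

def cable_charging_Mvar(c_uF_per_km, length_km, U_kV=20.0, f_Hz=50.0):
    """Three-phase charging (capacitive) reactive power of a cable feeder: Qc = w * C * U^2."""
    w = 2 * math.pi * f_Hz
    c = c_uF_per_km * 1e-6 * length_km           # F per phase
    return w * c * (U_kV * 1e3) ** 2 / 1e6       # Mvar

# Illustrative value only; the actual capacitance depends on the cable type and cross-section.
print(f"{cable_charging_Mvar(0.3, 100):.1f} Mvar")   # about 3.8 Mvar for 100 km of cable
```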
Abstract:
In 2009, Elenia Oy stated in its planning strategy that underground cabling would be the only construction method used in its entire network area. In addition, the security-of-supply criteria required of electricity distribution companies by the 2013 Electricity Market Act have significantly increased the construction of weatherproof networks. The aim of this Master's thesis is to study the suitability of carrying out fault repair of the overhead line network with underground cabling. The examination is carried out from the perspectives of economy and practical implementation. The economic analysis compares the life-cycle costs of overhead line construction and underground cabling in connection with fault repair. Regarding practical implementation, the situations in which it is sensible to carry out fault repair by cabling are presented. When the unit prices of the Energy Authority (Energiavirasto) are used in the calculations, the network sections suitable for cable-based fault repair remain limited in the light of the life-cycle cost analysis. However, efficient investment, utilization of cable ploughing techniques, the incentives of the regulation model, and the price development of cabling solutions can be identified as significant factors in the spread of underground cabling as a standard practice in the fault repair of overhead line networks.
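The life-cycle cost comparison described above can, in principle, be sketched as a present-value calculation; the following Python fragment uses purely hypothetical unit costs (not the Energy Authority unit prices used in the thesis), and which option comes out cheaper depends entirely on the input values and interest rate chosen.

```python
def life_cycle_cost(investment_eur, annual_costs_eur, years=40, interest=0.05):
    """Present value of life-cycle costs: investment plus discounted annual
    maintenance, fault repair and outage costs."""
    pv_annual = sum(annual_costs_eur / (1 + interest) ** t for t in range(1, years + 1))
    return investment_eur + pv_annual

# Purely hypothetical per-kilometre figures for illustration.
print(f"overhead line:     {life_cycle_cost(20_000, 3_000):,.0f} EUR/km")
print(f"underground cable: {life_cycle_cost(60_000, 800):,.0f} EUR/km")
```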