19 results for Minimization Problem, Lattice Model
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
In bid calculation for long-term construction contracts, price changes must be anticipated several years ahead, since bids must be made at fixed prices. Cost forecasting and price risk management are critical to the competitiveness of a construction company. The aim of this thesis is to develop an operating model and a tool for the Infrastructure Services of YIT Rakennus Oy with which price risks can be managed in bid calculation and procurement. As a solution, a cost forecasting model was developed in which the price development of input groups is forecast regularly by expert groups. Deploying the cost forecasting model requires defining the input groups to be forecast; in addition, an expert group must be appointed and a time horizon chosen for the forecast. The uncertainty contained in the forecasts is exposed with Monte Carlo simulation, so the price risk of a contract can be assessed by means of probability distributions and sensitivity analysis. The completed forecasts are used in bid calculation and in procurement when selecting tactics and strategies.
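A minimal sketch of how such forecast uncertainty might be propagated by Monte Carlo simulation. The input groups, their cost shares, and the triangular ranges standing in for expert forecasts are hypothetical illustration values, not figures from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical expert forecasts per input group: (low, most likely, high)
# annual price change in percent, and each group's share of contract cost.
forecasts = {
    "steel":     {"range": (-2.0, 4.0, 12.0), "share": 0.30},
    "concrete":  {"range": ( 0.0, 3.0,  7.0), "share": 0.25},
    "labour":    {"range": ( 1.0, 3.5,  6.0), "share": 0.30},
    "transport": {"range": (-1.0, 2.0,  9.0), "share": 0.15},
}

N = 100_000  # number of Monte Carlo trials
total_change = np.zeros(N)
for group, f in forecasts.items():
    low, mode, high = f["range"]
    # A triangular distribution encodes the expert group's uncertainty.
    total_change += f["share"] * rng.triangular(low, mode, high, N)

# Price risk of the contract as a probability distribution.
print(f"mean cost change : {total_change.mean():5.2f} %")
print(f"5th percentile   : {np.percentile(total_change, 5):5.2f} %")
print(f"95th percentile  : {np.percentile(total_change, 95):5.2f} %")
print(f"P(change > 5 %)  : {(total_change > 5.0).mean():5.3f}")
```

The percentile spread is what a sensitivity analysis would then decompose by input group.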
Abstract:
The purpose of this thesis is to develop an environment or network that enables effective collaborative product structure management among stakeholders in each unit, throughout the entire product lifecycle and product data management. The thesis uses framework models as an approach to the problem, proposing three models to support collaborative product structure management: an organization model, a process model and a product model. In the organization model, the formation of the product data management system (eDSTAT) key user network is specified. In the process model, development is based on the case company's product development matrix. In the product model framework, product model management, product knowledge management and design knowledge management are defined as development tools, and collaboration is based on web-based product structure management. Collaborative management is executed using all these approaches. A case study from an actual project at the case company is presented as an implementation to verify the models' applicability. A computer-assisted design tool and the web-based product structure manager have been used as the tools of this collaboration, with the support of the key user. The current PDM system, eDSTAT, is used as a piloting case for the key user role. The result of this development is that the role of the key user as a collaboration channel is defined and established. The key user is able to provide one-on-one support for the elevator projects. The management activities are also improved through the application of process workflow, following criteria for each project milestone. The development shows the effectiveness of product structure management in the product lifecycle and an improved production process achieved by eliminating barriers (e.g. improving two-way communication) during the design and production phases. The key user role is applicable on a global scale in the company.
Abstract:
Delivery capability is a measure of company performance with a significant impact on customer satisfaction, particularly in the manufacturing industry. Delivery capability consists of material availability and the delivery reliability of the logistics system, so good delivery capability requires managing material shortages. The aim of this master's thesis is to show how delivery capability can be improved by managing material shortages in the order-delivery process. The study was carried out as a case study with a constructive research approach, by observing operations in the case company and by analysing the company's written material and archives. The material shortage problems observed in the company were examined from the perspective of the order-delivery process using theories of process management and systematic problem solving. As a result of the study, three practical solution proposals were drawn up for the observed problems: (1) problem-cause-effect chains describing the cause-effect relationships of material shortages, (2) a problem-solving model for material shortages to support systematic problem solving, and (3) a visual order-delivery process model that emphasises the connection of the subprocesses to the delivery capability and delivery reliability of the whole process. According to the results, material shortages should be understood as quality defects of the process, which provide valuable information that the process has quality problems. Based on the results, the company's delivery capability can be improved by observing quality defects in the order-delivery process, by determining the cause-effect relationships of the defects systematically using the problem-solving model, and by acting according to process thinking, aiming at continuous improvement of the delivery capability of the order-delivery process. The approach to the research problem and the results of the work can be applied to similar cases in which quality defects in the order-delivery process, for example material shortages, reveal shortcomings in the process practices that require improvement. The delivery capability of the order-delivery process can be improved only by investing in time management and in the ability to act in accordance with customer promises and contracts.
Abstract:
The timing of capacity investments is an important issue, especially in capital-intensive industries, yet fairly few studies have been published on the topic. In the present study, models for the timing of capacity change in capital-intensive industry are developed. The study considers mainly the optimal timing of single capacity changes. The review of earlier research describes connections between the cost, capacity and timing literature, and empirical examples are used to describe the starting point of the study and to test the developed models. The study includes four models, which describe the timing question from different perspectives. The first model, which minimizes unit costs, has been built for capacity expansion and replacement situations. It is shown that the optimal timing of an investment can be presented with the capacity and cost advantage ratios. After the unit cost minimization model, the view is extended in the direction of profit maximization. The second model states that early investments are preferable if the change in fixed costs is small compared to the change in the contribution margin. The third model is a numerical discounted cash flow model, which emphasizes the roles of start-up time, capacity utilization rate and the value of waiting as drivers of the profitable timing of a project. The last model expands the view from the project level to the company level and connects the flexibility of assets and cost structures to the timing problem. The main results of the research are the solutions of the models and the analyses and simulations done with them. The relevance and applicability of the results are verified by evaluating the logic of the models and by numerical cases.
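A hedged sketch of the ingredients the third model is said to emphasize (start-up time, utilization ramp, value of waiting) in discounted-cash-flow form. The function, its parameters and all figures are hypothetical illustration choices, not the thesis's actual model:

```python
# Invest in year t, pay a start-up delay, then earn margin at a ramping
# utilization rate; compare NPVs of different investment years.

def npv_of_investment(t_invest, horizon=15, r=0.10,
                      capex=100.0, margin=25.0, startup_years=1,
                      ramp=(0.5, 0.8, 1.0)):
    npv = -capex / (1 + r) ** t_invest          # discounted investment outlay
    for t in range(t_invest + startup_years, horizon):
        years_running = t - t_invest - startup_years
        util = ramp[min(years_running, len(ramp) - 1)]
        npv += util * margin / (1 + r) ** t     # discounted contribution
    return npv

# Value of waiting: deferring trades later cash flows for a cheaper
# discounted outlay (and, in richer models, for resolved uncertainty).
for t in range(0, 5):
    print(f"invest in year {t}: NPV = {npv_of_investment(t):7.2f}")
```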
Abstract:
The aim of this thesis is to study the peso problem and devaluation expectations in the following Latin American countries: Argentina, Brazil, Costa Rica, Uruguay and Venezuela. In addition, it is examined whether the peso problem can explain the irregular behaviour of interest rates before an actual devaluation occurs. To make this possible, the market's expected probability of devaluation is calculated for the countries studied. The expected probability of devaluation is calculated for the period from January 1996 to December 2006 using two different models. According to the interest rate differential model, market devaluation expectations can be calculated from the interest rate differential between countries. Second, the probit model uses several macroeconomic factors as explanatory variables when calculating the expected probability of devaluation. It is also examined how the development of individual macroeconomic variables affects the expected probability of devaluation. The empirical results show that the studied Latin American countries had a peso problem between January 1996 and December 2006. According to the results of the interest rate differential model, the peso problem was found in all the studied countries except Argentina. Correspondingly, according to the probit model the peso problem was found in all the studied countries. The results also show that the irregular development of interest rates before an actual devaluation can be explained by the peso problem. The probit model results further show that there is no single pattern in how the development of macroeconomic variables affects market devaluation expectations in Latin America; rather, the effects appear to be country-specific.
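A minimal sketch of the probit approach on synthetic data, using statsmodels. The explanatory variables, coefficients and sample are invented stand-ins; the thesis's actual variable set and data are not reproduced here:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 132  # e.g. monthly observations, January 1996 - December 2006

# Synthetic stand-ins for macroeconomic explanatory variables.
reserves_growth = rng.normal(0.0, 1.0, n)   # change in FX reserves
real_exch_rate  = rng.normal(0.0, 1.0, n)   # real exchange rate deviation
credit_growth   = rng.normal(0.0, 1.0, n)   # domestic credit growth

# Synthetic devaluation indicator generated from a known latent model.
latent = (-1.0 - 0.8 * reserves_growth + 0.6 * real_exch_rate
          + 0.4 * credit_growth + rng.normal(0.0, 1.0, n))
devaluation = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([reserves_growth,
                                     real_exch_rate,
                                     credit_growth]))
model = sm.Probit(devaluation, X).fit(disp=False)
print(model.params)        # estimated coefficients per macro variable
prob = model.predict(X)    # fitted expected devaluation probabilities
print(prob[:5])
```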
Abstract:
In this study, a model for the unsteady dynamic behaviour of a once-through counter-flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace, and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions, since they are all parts of a single tube. The present research is part of a study on the unsteady dynamics of an organic Rankine cycle power plant, and it will be a part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means transition from the steady initial state towards another steady state that corresponds to the changed process conditions. The solution method chosen was to find, using the finite difference method, such a pressure of the process fluid that the mass of the process fluid in the boiler equals the mass calculated using the mass flows into and out of the boiler during a time step. A special method for fast calculation of the thermal properties has been used, because most of the calculation time is spent in calculating the fluid properties. The boiler was divided into elements, and the values of the thermodynamic properties and mass flows were calculated in the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and the tube wall, and the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that takes into account a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables in the nodes, and the calculation of these nodal values in a dynamic state. The initial state of the boiler was obtained from a steady process model that is not a part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperatures and mass flow rates of both the heat source and the process fluid. A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was done. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not exist in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the achieved results was that there was no possibility to compare them with measurements, so the only way was to determine whether the obtained results were intuitively reasonable and changed logically when the boundary conditions were changed. The numerical stability was checked in a test run in which there was no change in the input values; the differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat source side tests showed that the model gives results that are logical in the directions of the changes, and the order of magnitude of the timescale of the changes is also as expected.
The results of the tests on the process fluid side showed that the model gives reasonable results both for temperature changes that cause small alterations in the process state and for mass flow rate changes causing very great alterations. The test runs showed that the dynamic model has no problems in calculating cases in which the temperature of the entering heat source suddenly falls below that of the tube wall or the process fluid.
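A toy illustration of the stated solution idea: at each time step, search for the process-fluid pressure at which the fluid mass held in the boiler matches the mass balance of the inlet and outlet flows. The property correlation, the bisection search and all numbers are placeholders (not toluene data or the thesis's actual property method):

```python
def mass_in_boiler(p, volume=0.5):
    """Fluid mass (kg) in the boiler at pressure p (bar); toy correlation."""
    density = 650.0 + 3.0 * p          # placeholder: mass grows with pressure
    return volume * density

def step_pressure(p_prev, m_dot_in, m_dot_out, dt):
    """Advance one time step: enforce the mass balance by solving for p."""
    target = mass_in_boiler(p_prev) + (m_dot_in - m_dot_out) * dt
    lo, hi = 0.5 * p_prev, 2.0 * p_prev
    for _ in range(60):                # bisection on pressure
        mid = 0.5 * (lo + hi)
        if mass_in_boiler(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = 10.0                               # bar, initial steady state
for step in range(3):                  # net inflow raises the pressure
    p = step_pressure(p, m_dot_in=1.05, m_dot_out=1.00, dt=1.0)
    print(f"step {step}: p = {p:.4f} bar")
```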
Abstract:
Conventionally, the calculation of an axial flux permanent magnet machine is done by means of 3D FEM methods, so that the radius-dependent, non-uniform structure of the teeth and the other electrical and magnetic parts of the machine can be taken into consideration. This calculation procedure, however, requires a lot of time and computing resources. This study proves that analytical methods can also be applied to perform the calculation successfully. The procedure of the analytical calculation can be summarized in the following steps: first the magnet is divided into slices, then the calculation is made for each section individually, and finally the partial results are combined into the final results. It is obvious that using this method can save a lot of design and calculation time. The calculation program is designed to model the magnetic and electrical circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that it takes into account possible magnetic saturation of the iron parts. The result of the calculation is the torque of the motor, including the vibrations. The motor geometry, the materials and either the torque or the pole angle are defined, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits for the size or the number of pole pairs, nor for many other factors. The calculation steps and the number of different sections of the magnet are selectable, but the calculation time depends strongly on them. The results are compared to measurements of real prototypes. The permanent magnet creates part of the flux in the magnetic circuit. The form and amplitude of the flux density in the air-gap depend on the geometry and material of the magnetic circuit, the length of the air-gap, and the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot opening area. The calculation is simple and fast if the shape of the magnet is a square and has no skew in relation to the stator slots. With a more complicated magnet shape the calculation has to be done in several sections, and it is clear that as the number of sections increases, the result becomes more accurate. In a radial flux motor, all sections of the magnets create force at the same radius. In an axial flux motor, each radial section creates force at a different radius, and the torque is the sum of these. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air-gap, magnet and slot, is modelled with a reluctance net that considers the saturation of the iron. This means that several iterations, in which the permeability is updated, have to be done in order to get the final results. The motor torque is calculated using the instantaneous flux linkage and stator currents. Flux linkage is the part of the flux, created by the permanent magnets and the stator currents, that passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Due to the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, even though in some cases semi-magnetic slot wedges are used. At the slot opening faces the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces in the rotor. This phenomenon is called cogging.
The flux in the slot opening area on the different sides of the opening and in the different slot openings is not equal, so these forces do not compensate each other. In the calculation it is assumed that the flux entering the left side of the opening is the component to the left of the geometrical centre of the slot. This torque component, together with the torque component calculated using the Lorentz force, makes up the total torque of the motor. It is easy to see that when all the magnet edges, where the derivative of the magnet flux density is at its highest, enter the slot openings at the same time, the result is a considerable cogging torque. To reduce the cogging torque, the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch, and thus the high derivative component will also be spread to occur evenly over the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be made in many different ways: all the magnets may have a different deflection from the symmetrical centre point, or they can, for example, be shifted in pairs. Some factors limit the deflection. The first is that the magnets cannot overlap; the magnet shape and its relative width compared to the pole define the deflection in this case. The other factor is that shifting the poles limits the maximum torque of the motor: if the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the other increases, thus reducing the air-gap magnetization. The asymmetric model needs some assumptions and simplifications in order to limit the size of the model and the calculation time. The reluctance net is made for a symmetric distribution. If the magnets are distributed asymmetrically, the flux in the different pole pairs will not be exactly the same. Therefore, the assumption that the flux flows from the edges of the model to the next pole pairs (in the calculation model, from one edge to the other) is not correct. If this were to be considered in multi-pole-pair machines, all the poles, in other words the whole machine, would have to be modelled in the reluctance net. The error resulting from this assumption is, nevertheless, irrelevant.
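A short sketch of the radial-section summation described above: each radial slice of an axial flux machine produces force at its own radius, and the motor torque is the sum over slices. The flux density, current loading and dimensions are assumed illustration values, not prototype data:

```python
import numpy as np

r_inner, r_outer = 0.05, 0.12      # m, active radii of the magnets (assumed)
n_sections = 20                    # number of radial slices
dL = (r_outer - r_inner) / n_sections

A_current = 30_000.0               # A/m, linear current density (assumed)
B_gap = 0.9                        # T, mean air-gap flux density (assumed)

torque = 0.0
for k in range(n_sections):
    r = r_inner + (k + 0.5) * dL                   # mid-radius of the slice
    dF = B_gap * A_current * 2 * np.pi * r * dL    # tangential force on slice
    torque += r * dF                               # each slice has its own arm
print(f"torque ~ {torque:.1f} Nm")
```

Refining n_sections mirrors the abstract's point that more sections give a more accurate result.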
Abstract:
Market segmentation first emerged as early as the 1950s and has since been one of the basic concepts of marketing. However, most segmentation research has focused on consumer market segmentation, while the segmentation of business and industrial markets has received less attention. The aim of this study is to create a segmentation model for industrial markets from the perspective of a provider of IT products and services. The purpose is to find out whether the case company's current customer databases enable effective segmentation, to determine suitable segmentation criteria, and to assess whether and how the databases should be developed to enable more effective segmentation. The intention is to create a single model shared by the different business units; the objectives of the different units must therefore be taken into account to avoid conflicts of interest. The research methodology is a case study. Both secondary sources and primary sources, such as the case company's own databases and interviews, were used. The starting point of the study was the research problem: can database-based segmentation be used for profitable customer relationship management in the SME sector? The goal is to create a segmentation model that exploits the data in the databases without compromising the conditions of effective and profitable segmentation. The theoretical part examines segmentation in general, with emphasis on industrial market segmentation; the aim is to give a clear picture of the different approaches to the topic and to deepen the view of the most important theories. The analysis of the databases revealed clear deficiencies in the customer data: basic contact information is available, but data for segmentation purposes is very limited. The flow of data from resellers and wholesalers should be improved in order to obtain end-customer data. Segmentation based on the current data relies mainly on secondary data such as industry and company size, and even these data are not available for all the companies in the database.
Abstract:
This work is devoted to the development of a numerical method for convection-diffusion dominated problems with a reaction term, covering both non-stiff and stiff chemical reactions. The technique is based on unifying Eulerian-Lagrangian schemes (the particle transport method) within the framework of the operator splitting method. In the computational domain, a particle set is assigned to solve the convection-reaction subproblem along the characteristic curves created by the convective velocity. At each time step, the convection, diffusion and reaction terms are solved separately, by assuming that each phenomenon occurs on its own in a sequential fashion. Moreover, adaptivity and projection techniques are used to add particles in regions of high gradients (steep fronts) and discontinuities, and to transfer the solution from the particle set onto the grid points, respectively. The numerical results show that the particle transport method improves the solutions of CDR problems. Nevertheless, the method is time-consuming compared with other classical techniques, e.g. the method of lines. Despite this drawback, the particle transport method can be used to simulate problems that involve moving steep or smooth fronts, such as the separation of two or more elements in the system.
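A minimal 1D sketch of the splitting idea: within each time step, convection-reaction is solved along characteristics on particles and diffusion on the grid, as if each phenomenon occurred alone. The linear decay reaction, coefficients, Gaussian front, and the use of np.interp as the projection step are simplifying assumptions, not the thesis's adaptive scheme:

```python
import numpy as np

L, nx = 1.0, 200
x = np.linspace(0.0, L, nx)
u = np.exp(-((x - 0.2) / 0.02) ** 2)    # initial steep front
v, D, k = 1.0, 1e-4, 0.5                # velocity, diffusion, reaction rate
dt, nsteps = 0.001, 300
dx = x[1] - x[0]

for _ in range(nsteps):
    # 1) convection-reaction along characteristics (Lagrangian particles)
    xp = x + v * dt                      # particles move with the flow
    u = u * np.exp(-k * dt)              # exact step for linear decay
    u = np.interp(x, xp, u, left=0.0)    # project particles back onto grid
    # 2) diffusion on the grid (explicit finite differences)
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(f"front peak now at x ~ {x[np.argmax(u)]:.3f}, max u = {u.max():.3f}")
```

The front advects from x = 0.2 to about x = 0.5 while decaying and spreading, which is the qualitative behaviour the splitting should reproduce.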
Abstract:
Life cycle costing (LCC) practices are spreading from the military and construction sectors to a wider range of industries. Suppliers as well as customers are demanding comprehensive cost knowledge that includes all relevant cost elements through the life cycle of products. The problem of total cost visibility is being acknowledged, and the performance of suppliers is evaluated not just by the low acquisition costs of their products, but by the total value provided through the lifetime of their offerings. The main purpose of this thesis is to provide the case company with a better understanding of product cost structure. Moreover, the comprehensive theoretical part serves as a guideline or methodology for the further LCC process. The research includes a constructive analysis of LCC-related concepts and features as well as an overview of life cycle support services in the manufacturing industry. The case study aims to review the existing LCC practices within the case company and provide suggestions for improvement. It includes the identification of the most relevant life cycle cost elements, the development of a cost breakdown structure, and a generic cost model for data collection. Certain cost-effectiveness suggestions are provided as well. This research should support decision-making processes, the assessment of the economic viability of products, financial planning, sales and other processes within the case company.
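A minimal sketch of how a cost breakdown structure might roll up into a discounted life cycle cost. The element names, cash flows and discount rate are hypothetical, not the case company's cost model:

```python
# Hypothetical cost breakdown structure: element -> {year: cost in EUR}.
cost_breakdown = {
    "acquisition":  {0: 100_000},
    "installation": {0: 15_000},
    "operation":    {y: 12_000 for y in range(1, 11)},  # energy, labour
    "maintenance":  {y: 6_000 for y in range(1, 11)},
    "disposal":     {10: 8_000},
}
r = 0.08  # discount rate (assumed)

lcc = 0.0
for element, cashflows in cost_breakdown.items():
    pv = sum(c / (1 + r) ** y for y, c in cashflows.items())
    print(f"{element:12s} PV = {pv:10.0f} EUR")
    lcc += pv
print(f"{'total LCC':12s}    = {lcc:10.0f} EUR")
```

Such a roll-up makes total cost visibility concrete: acquisition cost is only one element among the discounted lifetime costs.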
Abstract:
An option is a financial contract that gives its holder the right (but not the obligation) to sell or buy something (for example a share) to or from the seller of the option at a certain price at a given time in the future. The seller of the option commits to going through with this future transaction if the option holder later decides to exercise the option. The seller of the option thus takes on the risk that the future transaction the option holder can force him to make turns out to be unfavourable for him. The question of how the seller can protect himself against this risk leads to interesting optimization problems, where the goal is to find an optimal hedging strategy under certain given conditions. Such optimization problems have been studied extensively in financial mathematics. The thesis "The knapsack problem approach in solving partial hedging problems of options" introduces an additional viewpoint to this discussion: in a relatively simple (finite and complete) market model, certain partial hedging problems can be formulated as so-called knapsack problems. The latter are well known in a branch of mathematics called operations research. The thesis shows how hedging problems previously solved in other ways can alternatively be solved with methods developed for knapsack problems. The approach is also applied to entirely new hedging problems in connection with so-called American options.
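For reference, the classical 0/1 knapsack dynamic program that the thesis maps partial hedging onto. The interpretation in the comments (market scenarios as items, hedging capital as capacity) is only a loose illustration of the idea, not the thesis's actual formulation:

```python
def knapsack(values, weights, capacity):
    """Maximum total value achievable within the weight capacity (0/1)."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # reverse: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Illustration: "values" as scenario probabilities (scaled to integers) the
# hedge would cover, "weights" as the capital each coverage would tie up.
values  = [30, 25, 20, 15, 10]
weights = [40, 30, 25, 20, 10]
print(knapsack(values, weights, capacity=70))   # best coverage on the budget
```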
Abstract:
Biotechnology has been recognized as a key strategic technology for industrial growth. The industry is heavily dependent on basic research. Finland continues to rank in the top 10 of Europe's most innovative countries in terms of tax policy, education system, infrastructure and the number of patents issued. Despite these excellent statistical results, the output of this innovativeness is below acceptable. Research on the issues hindering output creation has already been done, and the identifiable weaknesses in Finland's national innovation system are the non-existent growth of entrepreneurship and the missing internationalization. Finland is proven to have all the enablers of the innovation policy tools, but lacks the incentives and rewards to push the enablers, such as knowledge and human capital, forward. Science parks are the biggest operators among research institutes in the Finnish science and technology system. They exist for the purpose of speeding up the commercialization of biotechnology innovations, which usually involve technological uncertainty, technical inexperience, business inexperience and high technology costs. Managing innovation only internally is a rather historic approach; the current trend is towards an open innovation model with strong triple helix linkages. The evident problems in innovation management within the biotechnology industry are examined through a case study approach, including analysis of semi-structured interviews with biotechnology and business experts from the Turku School of Economics. The results from the interviews supported the theoretical implications as well as the conclusions derived from the pilot survey, which focused on the companies inside the Turku Science Park network. One major issue that Finland's national innovation system is struggling with is the fact that it is technology driven, not business pulled. Another problem is the university evaluation scale, which focuses on the number of graduates and other short-term factors, when it should put more emphasis on long-term cooperation success, such as triple helix connections with interaction and knowledge distribution. The results of this thesis indicate that structural changes are indeed required in Finland's national innovation system and innovation policy in order to generate successful biotechnology companies and innovation output. There is a lack of joint output and scales of success, a lack of people with experience, a lack of language skills, a lack of business knowledge, and a lack of growth companies.
Abstract:
An augmented reality (AR) device must know the observer's location and orientation, i.e. the observer's pose, to be able to correctly register the virtual content to the observer's view. One possible way to determine and continuously track the pose is model-based visual tracking. It presupposes that a 3D model of the surroundings is known and that a video camera is fixed to the device. The pose is tracked by comparing the video camera image to the model. Each new pose estimate is usually based on the previous estimate. However, the first estimate must be found without a prior estimate, i.e. the tracking must be initialized, which in practice means that some features must be identified in the image and matched to model features. This is known in the literature as the model-to-image registration problem or the simultaneous pose and correspondence problem. This report reviews visual tracking initialization methods that are suitable for visual tracking in a shipbuilding environment when the ship CAD model is available. The environment is complex, which makes the initialization non-trivial. The report has been done as part of the MARIN project.
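One common way to solve the registration step once features have been matched is a perspective-n-point (PnP) solver; the report reviews initialization methods in general, so this is only an illustrative sketch. The 3D-2D correspondences and camera intrinsics below are synthetic placeholders, not ship CAD data:

```python
import numpy as np
import cv2

# Hypothetical matched 3D model points (metres) and their image pixels.
object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                          [1, 1, 0], [0, 0, 1], [1, 0, 1]], dtype=np.float64)
image_points = np.array([[320, 240], [420, 238], [318, 140],
                         [422, 142], [310, 255], [415, 252]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],   # camera intrinsics (assumed)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                   # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)       # rotation matrix of the recovered pose
    print("camera position:", (-R.T @ tvec).ravel())
```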
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation; instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
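A toy dataflow network in the spirit described above: nodes communicate only through explicit queues, and a node may fire only when its firing rule is satisfied. The simple dynamic round-robin scheduler is a stand-in for illustration, not RVC-CAL or the thesis's quasi-static schedulers:

```python
from collections import deque

q_src, q_out = deque(), deque()   # edges: the only communication channels
counter = iter(range(10))

def source():
    """Produce the next token onto the output queue, if any remain."""
    tok = next(counter, None)
    if tok is None:
        return False
    q_src.append(tok)
    return True

def doubler():
    """Firing rule: fire only when the input queue holds a token."""
    if not q_src:
        return False
    q_out.append(2 * q_src.popleft())
    return True

nodes = [source, doubler]
while any(node() for node in nodes):   # dynamic scheduling: fire until quiescent
    pass
print(list(q_out))                     # [0, 2, 4, ..., 18]
```

Because the queues make dependencies explicit, independent nodes could equally well fire on different cores; deciding which firings to fix at compile time is exactly the quasi-static scheduling problem.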
Abstract:
The open innovation paradigm states that the boundaries of the firm have become permeable, allowing knowledge to flow inwards to accelerate internal innovation and outwards to take unused knowledge to the external environment. The successful implementation of open innovation practices in firms like Procter & Gamble, IBM, and Xerox, among others, suggests that it is a sustainable trend which could provide a basis for achieving competitive advantage. However, implementing open innovation can be a complex process involving several domains of management, and its terminology, classification, and practices have not been fully agreed upon. Thus, with many possible ways to address open innovation, the following research question was formulated: How could Ericsson LMF assess which open innovation mode to select depending on the attributes of the project at hand? The research followed the constructive research approach, which has the following steps: find a practically relevant problem, obtain a general understanding of the topic, innovate the solution, demonstrate that the solution works, show the theoretical contributions, and examine the scope of applicability of the solution. The research involved three phases of data collection and analysis: an extensive literature review of open innovation, strategy, business models, innovation, and knowledge management; direct observation of the environment of the case company through participative observation; and semi-structured interviews based on six cases involving multiple and heterogeneous open innovation initiatives. Results from the cases suggest that the selection of modes depends on multiple reasons, with a stronger influence of factors related to strategy, business models, and resource gaps. Based on these and other factors found in the literature review and observations, it was possible to construct a model that supports approaching open innovation. The model integrates perspectives from multiple domains of the literature review, observations inside the case company, and factors from the six open innovation cases. It provides steps, guidelines, and tools to approach open innovation and assess the selection of modes. Measuring the impact of open innovation could take years; thus, implementing and testing the model in its entirety was not possible due to time limitations. Nevertheless, it was possible to validate the core elements of the model with empirical data gathered from the cases. In addition to constructing the model, this research contributed to the literature by increasing the understanding of open innovation, providing suggestions to the case company, and proposing future steps.