964 results for flexibility


Relevance:

10.00%

Publisher:

Abstract:

This doctoral thesis introduces an improved control principle for active du/dt output filtering in variable-speed AC drives, together with performance comparisons with previous filtering methods. The effects of power semiconductor nonlinearities on the output filtering performance are investigated. The nonlinearities include the timing deviation and the voltage pulse waveform distortion in the variable-speed AC drive output bridge. Active du/dt output filtering (ADUDT) is a method to mitigate motor overvoltages in variable-speed AC drives with long motor cables. It is a relatively recent addition to the available du/dt reduction methods. This thesis improves on the existing control method for the filter and concentrates on the low-voltage (below 1 kV AC) two-level voltage-source inverter implementation of the method. The ADUDT uses narrow voltage pulses, with durations on the order of a microsecond, from an IGBT (insulated gate bipolar transistor) inverter to control the output voltage of a tuned LC filter circuit. The filter output voltage thus has increased slope transition times at the rising and falling edges, with the possibility of no overshoot. The effect of the longer slope transition times is a reduction in the du/dt of the voltage fed to the motor cable. Lower du/dt values result in a reduction in the overvoltage effects at the motor terminals. Compared with traditional output filtering methods for this task, active du/dt filtering allows lower inductance values and a smaller physical size of the filter itself. The filter circuit weight can also be reduced. However, the power semiconductor nonlinearities skew the filter control pulse pattern, resulting in control deviation. This deviation introduces unwanted overshoot and resonance in the filter. The control method proposed in this thesis is able to directly compensate for the dead time-induced zero-current clamping (ZCC) effect in the pulse pattern.
It gives more flexibility to the pattern structure, which could help in the timing deviation compensation design. Previous studies have shown that when a motor load current flows in the filter circuit and the inverter, the phase leg blanking times distort the voltage pulse sequence fed to the filter input. These blanking times are caused by excessively large dead time values between the IGBT control pulses. Moreover, the various switching timing distortions, present in real-world electronics when operating on a microsecond timescale, bring additional skew to the control. Left uncompensated, this results in distortion of the filter input voltage and a filter self-induced overvoltage in the form of an overshoot. This overshoot adds to the voltage appearing at the motor terminals, thus increasing the transient voltage amplitude at the motor. This doctoral thesis investigates the magnitude of such timing deviation effects. If the motor load current is left uncompensated in the control, the filter output voltage can overshoot up to double the input voltage amplitude. IGBT nonlinearities were observed to cause a smaller overshoot, on the order of 30%. This thesis introduces an improved ADUDT control method that is able to compensate for phase leg blanking times, giving flexibility to the pulse pattern structure and dead times. The control method is still sensitive to timing deviations, and their effect is investigated. A simple approach of using a fixed delay compensation value was tried in the test setup measurements. The ADUDT method with the new control algorithm was found to work in an actual motor drive application. Judging by the simulation results, with the delay compensation, the method should ultimately enable an output voltage performance and a du/dt reduction that are free from residual overshoot effects.
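The "up to double the input voltage" figure quoted above follows from elementary LC dynamics, which can be illustrated with a toy simulation. The sketch below is a simplified, lossless, no-load model, not the thesis's actual control algorithm: a plain voltage step on an undamped LC filter rings up to twice the input, while a posicast-style charging pattern (voltage on for one sixth of the resonance period, off for one sixth, then on) parks the filter output exactly at the input voltage with no overshoot.

```python
import math

def simulate_lc(u_of_t, omega, t_end, dt=1e-4):
    """Integrate x'' = omega^2 * (u(t) - x) with RK4; return the peak of x.
    x models the (normalized) LC filter output voltage, u the inverter output."""
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    def f(t, x, v):
        return v, omega**2 * (u_of_t(t) - x)
    while t < t_end:
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = f(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = f(t + dt, x + dt*k3x, v + dt*k3v)
        x += dt/6*(k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6*(k1v + 2*k2v + 2*k3v + k4v)
        t += dt
        peak = max(peak, x)
    return peak

T = 1.0                       # LC resonance period (arbitrary units)
w = 2 * math.pi / T
step = lambda t: 1.0          # plain step input: rings up to ~2x
# posicast-style pattern: on for T/6, off for T/6, then on: no overshoot
pattern = lambda t: 0.0 if T/6 <= t < T/3 else 1.0

peak_step = simulate_lc(step, w, 3 * T)
peak_pattern = simulate_lc(pattern, w, 3 * T)
```

With no damping, the step response is x(t) = 1 - cos(wt), peaking at 2; the pulse pattern hands the oscillator over to its new equilibrium with zero velocity, which is the essence of shaping the charge pulses, here without the load current and dead time effects the thesis actually compensates for.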
The proposed control algorithm is not strictly required for successful ADUDT operation: it is possible to precalculate the pulse patterns by iteration and then, for instance, store them in a look-up table inside the control electronics. Rather, the newly developed control method is a mathematical tool for solving the ADUDT control pulses. It does not contain the timing deviation compensation (from the logic-level command to the phase leg output voltage), and as such it is not able to remove the timing deviation effects that cause error and overshoot in the filter. When the timing deviation compensation has to be tuned in the control pattern, the precalculated iteration method could prove simpler and equally good (or even better) compared with the mathematical solution with a separate timing compensation module. One of the key findings in this thesis is the conclusion that the correctness of the pulse pattern structure, in the sense of ZCC and predicted pulse timings, cannot be separated from the timing deviations. The usefulness of the correctly calculated pattern is reduced by the voltage edge timing errors. The doctoral thesis provides an introductory background chapter on variable-speed AC drives and the problem of motor overvoltages, and takes a look at traditional solutions for overvoltage mitigation. Previous results related to active du/dt filtering are discussed. The basic operation principle and design of the filter have been studied previously. The effect of load current in the filter and the basic idea of compensation have been presented in the past. However, there was no direct way of including the dead time in the control (except for solving the pulse pattern manually by iteration), and the magnitude of the nonlinearity effects had not been investigated. The enhanced control principle with dead time handling capability and a case study of the test setup timing deviations are the main contributions of this doctoral thesis.
The simulation and experimental setup results show that the proposed control method can be used in an actual drive. Loss measurements and a comparison of active du/dt output filtering with traditional output filtering methods are also presented in the work. Two different ADUDT filter designs are included, with ferrite core and air core inductors. Other filters included in the tests were a passive du/dt filter and a passive sine filter. The loss measurements incorporated a silicon carbide diode-equipped IGBT module, and the results show lower losses with these new device technologies. The new control principle was measured in a 43 A load current motor drive system and was able to bring the filter output peak voltage from 980 V (the previous control principle) down to 680 V in a variable-speed drive with a 540 V average DC link voltage. A 200 m motor cable was used, and the filter losses for the active du/dt methods were 111 W–126 W versus 184 W for the passive du/dt filter. In terms of inverter and filter losses, the active du/dt filtering method had a 1.82-fold increase in losses compared with an all-passive traditional du/dt output filter. The active du/dt filter (with air-core inductors) weighed 2.4 kg, 17% of the 14 kg mass of the passive du/dt method filter. Silicon carbide freewheeling diodes were found to reduce the inverter losses in active du/dt filtering by 18% compared with the same IGBT module with silicon diodes. For a 200 m cable length, the average peak voltage at the motor terminals was 1050 V with no filter, 960 V for the all-passive du/dt filter, and 700 V for active du/dt filtering applying the new control principle.
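A quick arithmetic check on two of the figures quoted in the measurements above (all numbers taken directly from the abstract):

```python
# Filter mass: active du/dt (air-core) vs. passive du/dt
active_mass_kg, passive_mass_kg = 2.4, 14.0
mass_fraction = active_mass_kg / passive_mass_kg     # ~0.17, the quoted 17 %

# Filter output peak voltage with the old vs. new control principle
v_before, v_after = 980.0, 680.0                     # volts
peak_reduction = (v_before - v_after) / v_before     # ~31 % lower peak
```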

Relevance:

10.00%

Publisher:

Abstract:

The assembly and maintenance of the International Thermonuclear Experimental Reactor (ITER) vacuum vessel (VV) is highly challenging, since the tasks performed by the robot involve welding, material handling, and machine cutting from inside the VV. The VV is made of stainless steel, which has poor machinability and tends to work harden very rapidly, and all the machining operations need to be carried out from inside the ITER VV. A general industrial robot cannot be used due to its poor stiffness in the heavy-duty machining process, which would cause many problems, such as poor surface quality, tool damage, and low accuracy. Therefore, one of the most suitable options is a lightweight mobile robot that is able to move around inside the VV and perform different machining tasks by switching between different cutting tools. Reducing the mass of the robot manipulators offers many advantages: reduced material costs, reduced power consumption, the possibility of using smaller actuators, and a higher payload-to-robot weight ratio. Offsetting these advantages, the lighter-weight robot is more flexible, which makes it more difficult to control. To achieve good machining surface quality, the tracking of the end effector must be accurate, and an accurate model for the more flexible robot must be constructed. This thesis studies the dynamics and control of a 10-degree-of-freedom (DOF) redundant hybrid robot (a 4-DOF serial mechanism and a 6-DOF 6-UPS hexapod parallel mechanism), hydraulically driven with flexible rods, under the influence of machining forces. Firstly, the flexibility of the bodies is described using the floating frame of reference formulation (FFRF). A finite element model (FEM) provided the Craig-Bampton (CB) modes needed for the FFRF. A dynamic model of the system of six closed-loop mechanisms was assembled using the constrained Lagrange equations and the Lagrange multiplier method.
Subsequently, the reaction forces between the parallel and serial parts were used to study the dynamics of the serial robot. A PID control based on position predictions was implemented independently to control the hydraulic cylinders of the robot. Secondly, in machining, to achieve greater end effector trajectory tracking accuracy for surface quality, a robust control of the actuators for the flexible link has to be deduced. This thesis investigates two schemes of intelligent control for the hydraulically driven parallel mechanism based on the dynamic model: (1) a fuzzy-PID self-tuning controller combining conventional PID control with fuzzy logic, and (2) adaptive neuro-fuzzy inference system PID (ANFIS-PID) self-tuning of the gains of the PID controller. Both are implemented independently to control each hydraulic cylinder of the parallel mechanism based on rod length predictions. The serial component of the hybrid robot can be analyzed using the equilibrium of reaction forces at the universal joint connections of the hexa-element. To achieve precise positional control of the end effector for maximum machining precision, the hydraulic cylinder should be controlled to hold the hexa-element. Thirdly, a finite element approach for multibody systems using the Special Euclidean group SE(3) framework is presented for a parallel mechanism with flexible piston rods under the influence of machining forces. The flexibility of the bodies is described using the nonlinear interpolation method with an exponential map. The equations of motion take the form of a differential-algebraic equation on a Lie group, which is solved using a Lie group time integration scheme. The method relies on the local description of motions, so it provides a singularity-free formulation, and no parameterization of the nodal variables needs to be introduced.
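As a rough illustration of scheme (1), the sketch below runs a PID loop whose gains are scaled up at large error and down near the setpoint by a crude membership-style rule, driving a normalized first-order plant standing in for a hydraulic cylinder. All plant parameters, gains, and the one-rule "fuzzy" schedule are illustrative assumptions, not values or rules from the thesis.

```python
def fuzzy_scale(error, e_big=0.5):
    """Toy membership rule: degree of membership in the 'big error' set
    maps to a gain multiplier in [0.5, 1.5]."""
    mu_big = min(abs(error) / e_big, 1.0)
    return 0.5 + mu_big

def run_fuzzy_pid(setpoint=1.0, dt=0.01, steps=5000,
                  kp=2.0, ki=1.0, kd=0.1):
    """PID with fuzzy gain scheduling on a first-order plant x' = u - x
    (a stand-in for one normalized hydraulic cylinder axis)."""
    x = 0.0                      # plant state, e.g. normalized rod position
    integ, prev_e = 0.0, setpoint
    for _ in range(steps):
        e = setpoint - x
        s = fuzzy_scale(e)       # self-tuning factor from the fuzzy rule
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = s * (kp * e + ki * integ + kd * deriv)
        prev_e = e
        x += dt * (u - x)        # integrate the plant one step
    return x

final = run_fuzzy_pid()          # settles near the setpoint
```

The point of the scheduling is aggressive action far from the target and gentler gains near it; the ANFIS-PID variant of scheme (2) would instead learn the gain mapping from data.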
The flexible slider constraint is formulated using a Lie group and used for modeling a flexible rod sliding inside a cylinder. The dynamic model of the system of six closed-loop mechanisms was assembled using Hamilton’s principle and the Lagrange multiplier method. A linearized hydraulic control system based on rod length predictions was implemented independently to control the hydraulic cylinders. The results of simulations demonstrating the behavior of the robot machine are presented for each case study. In conclusion, this thesis studies the dynamic analysis of a special hybrid (serial-parallel) robot for the above-mentioned ITER task and investigates different control algorithms that can significantly improve machining performance. These analyses and results provide valuable insight into the design and control of the parallel robot with flexible rods.

Relevance:

10.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014


Relevance:

10.00%

Publisher:

Abstract:

This study evaluated the photosynthetic responses of seven tropical trees of different successional groups under contrasting irradiance conditions, taking into account changes in gas exchange and chlorophyll a fluorescence. Although early successional species showed higher values of CO2 assimilation (A) and transpiration (E), there was no defined pattern of daily gas exchange responses to high irradiance (FSL) among the evaluated species. Cariniana legalis (Mart.) Kuntze (late secondary) and Astronium graveolens Jacq. (early secondary) exhibited larger reductions in daily-integrated CO2 assimilation (DIA) when transferred from medium light (ML) to FSL. On the other hand, the pioneer species Guazuma ulmifolia Lam. showed a significant DIA increase when exposed to FSL. The pioneer Croton spp. tended to show a DIA decrease of around 19%, while Cytharexyllum myrianthum Cham. (pioneer) and Rhamnidium elaeocarpum Reiss. (early secondary) tended to increase DIA when transferred to FSL. Under this condition, all species showed dynamic photoinhibition, except for C. legalis, which presented chronic photoinhibition of photosynthesis. Considering daily photosynthetic processes, our results supported the hypothesis of more flexible responses in early successional species (pioneer and early secondary species). The principal component analysis indicated that the photochemical parameters, effective quantum efficiency of photosystem II and apparent electron transport rate, were more suitable for separating the successional groups under the ML condition, whereas A and E played a major role in this task under the FSL condition.

Relevance:

10.00%

Publisher:

Abstract:

Scanning optics gives rise to different phenomena and limitations in the cladding process compared with cladding using static optics. This work concentrates on identifying and explaining the special features of laser cladding with scanning optics. Scanner optics changes the energy input mechanics of the cladding process. Laser energy is introduced into the process through a relatively small laser spot which moves rapidly back and forth, distributing the energy over a relatively large area. The moving laser spot was noticed to cause dynamic movement in the melt pool. Due to the different energy input mechanism, scanner optics can make the cladding process unstable if parameter selection is not done carefully. The laser beam intensity and scanning frequency in particular have a significant role in process stability. The scanning frequency determines how long the laser beam dwells at a specific location, and thus the local specific energy input. It was determined that if the scanning frequency is too low, under 40 Hz, the scanned beam can start to vaporize material. The intensity, in turn, determines in how large a package this energy is delivered; if the intensity of the laser beam was too high, over 191 kW/cm2, the laser beam started to vaporize material. If vapor formation was noticed in the melt pool, the process started to resemble laser alloying due to deep penetration of the laser beam into the substrate. Scanner optics gives the process more flexibility than static optics. Numerical adjustment of the scanning amplitude enables adjustment of the clad bead width. In turn, scanner power modulation (where the laser power is adjusted according to where the scanner is pointing) enables modification of the clad bead cross-section geometry, as the laser power can be adjusted locally and thus affect how much material the laser beam melts in each sector. Power modulation is also an important factor in terms of process stability.
When a linear scanner is used, oscillation of the scanning mirror causes a dwell time at the scanning amplitude border area, where the scanning mirror changes its direction of movement. This can cause excessive energy input to this area, which in turn can cause vaporization and process instability. This instability can be avoided by decreasing the energy in this region through power modulation. Powder feeding parameters have a significant role in process stability. It was determined that with certain powder feeding parameter combinations the powder cloud behavior became unstable due to vaporizing powder material in the powder cloud. This was noticed mainly when the scanning frequency or the powder feeding gas flow (or both) was low, or when a steep powder feeding angle was used. When powder material vaporization occurred, it created a vapor flow which prevented the powder material from reaching the melt pool, and thus dilution increased. Powder material vaporization was also noticed to produce light emission in the visible wavelength range. This emission intensity was noticed to correlate with the amount of vaporization in the powder cloud.
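The two stability limits reported above (beam intensity over roughly 191 kW/cm2, or scanning frequency under roughly 40 Hz, leading to vaporization) can be written as a trivial parameter-window check. The thresholds are specific to this study's setup, so treat them as reported observations rather than universal limits:

```python
def cladding_window_ok(intensity_kw_cm2: float, scan_freq_hz: float) -> bool:
    """Return True when the scanner cladding parameters stay inside the
    stability window reported in this work (no vaporization expected)."""
    INTENSITY_LIMIT = 191.0   # kW/cm^2: above this, vaporization was observed
    FREQ_LIMIT = 40.0         # Hz: below this, vaporization was observed
    return intensity_kw_cm2 <= INTENSITY_LIMIT and scan_freq_hz >= FREQ_LIMIT
```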

Relevance:

10.00%

Publisher:

Abstract:

Personalized nanomedicine has been shown to provide advantages over traditional clinical imaging, diagnosis, and conventional medical treatment. Using nanoparticles can enhance and clarify clinical targeting and imaging, and guide them precisely to the place in the body that is the target of treatment. At the same time, the side effects that usually occur in the parts of the body that are not targets for treatment can be reduced. Nanoparticles are of a size that can penetrate into cells. Their surface functionalization offers a way to increase their sensitivity when detecting target molecules. In addition, it increases the potential for flexibility in particle design, their therapeutic function, and variation possibilities in diagnostics. Mesoporous nanoparticles of amorphous silica have attractive physical and chemical characteristics, such as particle morphology, controllable pore size, and high surface area and pore volume. Additionally, the surface functionalization of silica nanoparticles is relatively straightforward, which enables optimization of the interaction between the particles and the biological system. The main goal of this study was to prepare traceable and targetable silica nanoparticles for medical applications, with a special focus on particle dispersion stability, biocompatibility, and targeting capabilities. Nanoparticle properties are highly particle-size dependent, and good dispersion stability is a prerequisite for active therapeutic and diagnostic agents. The study showed that traceable streptavidin-conjugated silica nanoparticles exhibiting good dispersibility could be obtained by choosing a suitable surface functionalization route. Theranostic nanoparticles should exhibit sufficient hydrolytic stability to effectively carry the medicine to the target cells, after which they should disintegrate and dissolve.
Furthermore, the surface groups should stay at the particle surface until the particle has been internalized by the cell, in order to optimize cell specificity. Model particles with fluorescently labeled regions were tested in vitro using light microscopy and image processing technology, which allowed a detailed study of the disintegration and dissolution process. The study showed that nanoparticles degrade more slowly outside the cell than inside it. The main advantage of theranostic agents is their successful targeting in vitro and in vivo. Non-porous nanoparticles using monoclonal antibodies as guiding ligands were tested in vitro in order to follow their targeting ability and internalization. In addition to the successful targeting, a specific internalization route for the particles could be detected. In the last part of the study, the objective was to clarify the feasibility of traceable mesoporous silica nanoparticles, loaded with a hydrophobic cancer drug, for targeted drug delivery in vitro and in vivo. The particles were provided with a small-molecule targeting ligand. A significantly higher therapeutic effect could be achieved with the nanoparticles than with the free drug. The nanoparticles were biocompatible and stayed in the tumor longer than the free drug did before being eliminated by renal excretion. Overall, the results showed that mesoporous silica nanoparticles are biocompatible, biodegradable drug carriers and that cell specificity can be achieved both in vitro and in vivo.

Relevance:

10.00%

Publisher:

Abstract:

Data management consists of collecting, storing, and processing data into a format that provides value-adding information for the decision-making process. The development of data management has enabled the design of increasingly effective database management systems to support business needs. Therefore, in addition to advanced systems designed for reporting purposes, operational systems also allow reporting and data analysis. The research method used in the theoretical part is qualitative research, and the research type in the empirical part is a case study. The objective of this thesis is to examine database management system requirements from the reporting management and data management perspectives. In the theoretical part these requirements are identified and the appropriateness of the relational data model is evaluated. In addition, key performance indicators applied to the operational monitoring of production are studied. The study revealed that appropriate operational key performance indicators of production take into account time, quality, flexibility, and cost aspects. Manufacturing efficiency in particular has been highlighted. In this thesis, reporting management is defined as the continuous monitoring of given performance measures. According to the literature review, the data management tool should cover performance, usability, reliability, scalability, and data privacy aspects in order to fulfill reporting management's demands. A framework is created for the system development phase based on the requirements, and it is used in the empirical part of the thesis, where such a system is designed and created for reporting management purposes for a company operating in the manufacturing industry. Relational data modeling and database architectures are utilized when the system is built for a relational database platform.
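One concrete example of a time- and quality-oriented manufacturing-efficiency KPI of the kind discussed above is overall equipment effectiveness (OEE), which combines availability, performance, and quality into a single score. OEE is a standard industry metric used here purely for illustration; the thesis itself does not prescribe this exact indicator:

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = availability * performance * quality."""
    availability = run_time / planned_time                    # time aspect
    performance = (ideal_cycle_time * total_count) / run_time  # speed aspect
    quality = good_count / total_count                         # quality aspect
    return availability * performance * quality

# 480 min planned shift, 400 min actually running,
# 1 min ideal cycle time, 360 parts produced, 350 of them good:
score = oee(480, 400, 1.0, 360, 350)   # ~0.73
```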

Relevance:

10.00%

Publisher:

Abstract:

Continuous loading and unloading can cause breakdown of cranes. In seeking a solution to this problem, the use of intelligent control systems for improving the fatigue life of cranes has been under study in mechatronics since 1994. This research focuses on the use of neural networks for developing an algorithm to map stresses on a crane. The intelligent algorithm was designed to be part of the crane's system. The design process started with SolidWorks and ANSYS, followed by co-simulation using MSC Adams software incorporated in MATLAB-Simulink, and finally MATLAB neural network (NN) modeling for the optimization process. The flexibility of the boom accounted for the accuracy of the maximum stress results in the ADAMS model. The flexible model created in ANSYS produced more accurate results than the flexibility model in ADAMS/View using discrete links. The compatibility between the ADAMS and ANSYS software was paramount for the efficiency and accuracy of the results. Von Mises stress analysis was the most suitable for this thesis work because the hydraulic boom was made from construction steel FE-510 of steel grade S355 with a yield strength of 355 MPa. The von Mises theory was appropriate for further analysis due to the ductility of the material and the repeated tensile and shear loading. Neural network predictions of the maximum stresses were then compared with the co-simulation results for accuracy, and the comparison showed that the neural network model predicted the maximum stresses on the boom with sufficient accuracy relative to the co-simulation.
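The von Mises criterion mentioned above reduces a multiaxial stress state to a single equivalent stress, which is then compared with the S355 yield strength of 355 MPa. A minimal plane-stress version is sketched below; the formula is the standard one, while the example stress values are illustrative, not measurements from the boom:

```python
import math

def von_mises_plane_stress(sx, sy, txy):
    """Equivalent von Mises stress for a plane stress state (MPa in, MPa out)."""
    return math.sqrt(sx**2 - sx * sy + sy**2 + 3 * txy**2)

YIELD_S355 = 355.0  # MPa, yield strength of S355 / FE-510 construction steel

# Illustrative stress state: 200 MPa axial, 50 MPa transverse, 80 MPa shear
vm = von_mises_plane_stress(sx=200.0, sy=50.0, txy=80.0)
within_yield = vm < YIELD_S355
```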

Relevance:

10.00%

Publisher:

Abstract:

The use of tube-beam (hollow-section) joints in the support structures of offshore industry constructions is very common. Manufacturing the joints is difficult and slow. Very often the tube-beam joints of support structures have to be welded manually because of the large size of the support structure. With a new manufacturing method, in which the support structure is assembled from smaller parts, the manufacturing and welding of the tube-beam joints can be automated. A robotic welding station, together with its user interface and software, was found to be a workable solution for welding tube-beam joints. Automation design involves many different phases, and going through them carefully ensures a more realistic concept solution. The concept solution evolves as the equipment and layout are refined toward completion. During automation design, the aim is to find the right level of automation. The chosen level of automation affects production productivity, lead time, and flexibility. The amount of automation also affects the amount and nature of the work done by humans. In this master's thesis, welding automation solutions for tubular workpieces were developed for Pemamek Oy. The welding and production automation concept solution for a factory manufacturing piping components was examined as an example case, illustrating how an automation system can be designed at the concept level. The second welding automation solution developed in this work is a robotic welding station, with its user interface, for welding a tube-beam joint.

Relevance:

10.00%

Publisher:

Abstract:

Spectral sensitivities of visual systems are specified as the reciprocals of the intensities of light (quantum fluxes) needed at each wavelength to elicit the same criterion amplitude of response. This review primarily considers the methods that have been developed for electrophysiological determination of criterion amplitudes of slow-wave responses from single retinal cells. Traditional flash methods can require tedious dark adaptations and may yield erroneous spectral sensitivity curves that are not seen in modifications such as ramp methods. Linear response methods involve interferometry, while constant response methods involve manual or automatic adjustment of continuous illumination to keep response amplitudes constant during spectral scans. In DC or AC computerized constant response methods, the feedback used to determine the intensity at each wavelength is derived from the response amplitudes themselves. Although all but traditional flash methods have greater or lesser abilities to provide on-line determinations of spectral sensitivities, computerized constant response methods are the most satisfactory due to their flexibility, speed, and maintenance of a constant adaptation level.
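The computerized constant-response idea described above can be sketched as a feedback loop: at each wavelength the stimulus intensity is servoed until the (here simulated) cell response reaches the criterion amplitude, and spectral sensitivity is then the reciprocal of that intensity. The linear response model and the sensitivity values below are illustrative assumptions, not data from the review:

```python
def response(sensitivity, intensity):
    """Toy linear photoreceptor response model (valid near criterion amplitude)."""
    return sensitivity * intensity

def constant_response_scan(true_sens, criterion=1.0, tol=1e-6):
    """For each wavelength, adjust intensity by feedback until the response
    equals the criterion, then report sensitivity as 1 / intensity."""
    measured = {}
    for wavelength_nm, s in true_sens.items():
        intensity = 1.0
        while abs(response(s, intensity) - criterion) > tol:
            # feedback step: scale intensity toward the criterion response
            intensity *= criterion / response(s, intensity)
        measured[wavelength_nm] = 1.0 / intensity
    return measured

# assumed "true" sensitivities at three wavelengths; the scan recovers them
sens = constant_response_scan({400: 0.2, 500: 1.0, 600: 0.5})
```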

Relevance:

10.00%

Publisher:

Abstract:

In this work, the feasibility of floating-gate technology in analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to get worse because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth, and accuracy, are interrelated. Furthermore, poor flexibility (i.e., lack of reconfigurability, reuse of IP, etc.) can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended for providing devices that operate with higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis.
To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer SNN, where the adaptive properties of the FGT are taken advantage of. A compact realization of spike-timing-dependent plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
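The STDP rule realized in the FGT synapses can be summarized by the standard exponential pair-based update: a presynaptic spike shortly before a postsynaptic spike strengthens the synapse, the reverse ordering weakens it, and the magnitude decays with the spike-time difference. The form of the rule is the textbook one; the amplitudes and time constant below are illustrative, not the chip's values:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change (spike times in ms).
    Pre before post -> potentiation; post before pre -> depression."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)     # causal pairing: LTP
    return -a_minus * math.exp(dt / tau)        # anti-causal pairing: LTD

ltp = stdp_dw(t_pre=10.0, t_post=15.0)   # positive weight change
ltd = stdp_dw(t_pre=15.0, t_post=10.0)   # negative weight change
```

On the chip, the analogous behavior comes from tunneling and injection currents modifying the floating-gate charge rather than from an explicit formula.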

Relevance:

10.00%

Publisher:

Abstract:

This thesis focuses on the development of sustainable industrial architectures for bioenergy, based on the metaphors of industrial symbiosis and industrial ecosystems, which imply the exchange of material and energy side-flows of various industries in order to improve the sustainability of those industries on a system level. Studies on industrial symbiosis have been criticised for staying at the level of incremental changes: they strive to cycle waste and by-flows of the industries ‘as is’, leaving the underlying industry structures intact. Moreover, a need has been articulated for interdisciplinary research on industrial ecosystems, as well as for extending the management and business perspectives on industrial ecology. This thesis addresses this call by applying a business ecosystem and business model perspective to industrial symbiosis in order to produce knowledge on how environmentally and economically sustainable industrial ecosystems can be developed. A case of biogas business is explored and described in four research papers and an extended summary that form this thesis. Since the aim of the research was to produce a normative model for developing sustainable industrial ecosystems, the methodology applied can be characterised as constructive and collaborative. A constructive research mode was required in order to expand the historical knowledge of industrial symbiosis development and business ecosystem development into knowledge of what should be done, which is crucial for sustainability and the social change it requires. A collaborative research mode was employed through participation in a series of projects devoted to the development of a biogas-for-traffic industrial ecosystem. The results of the study showed that the development of material flow interconnections within industrial symbiosis is inseparable from larger business ecosystem restructuring.
This included a shift in the logic of the biogas and traffic fuel industry and the subsequent development of a business ecosystem that would embody the principles of industrial symbiosis and localised energy production and consumption. Since a company perspective is taken in this thesis, the role of an ecosystem integrator appeared to be a crucial means of achieving the required industry restructuring. This, in turn, required the development of a modular and boundary-spanning business model with a strong focus on establishing collaboration among ecosystem stakeholders and on developing multiple local industrial ecosystems as part of business growth. As a result, the designed business model of the ecosystem integrator acquired the flexibility necessary to adjust to local conditions, which is crucial for establishing industrial symbiosis. This thesis presents a normative model for developing the business model required for creating sustainable industrial ecosystems, which complements the policy-maker-level approaches proposed earlier. The study thereby addresses the call for more research on the business level of industrial ecosystem formation and on the implications for the business models of the involved actors. Moreover, the thesis increases the understanding of system innovation and innovation in business ecosystems by explicating how business model innovation can be the trigger for achieving more sustainable industry structures, such as those relying on industrial symbiosis.