27 results for structured parallel computations
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The markets of biomass for energy are developing rapidly and becoming more international. A remarkable increase in the use of biomass for energy requires parallel and positive development in several areas, and there will be plenty of challenges to overcome. The main objective of the study was to clarify alternative future scenarios for the international biomass market until the year 2020 and, based on the scenario process, to identify the underlying steps needed towards a vital, working and sustainable biomass market for energy purposes. Two scenario processes were conducted for this study: the first was carried out with a group of Finnish experts and the second involved an international group. A heuristic, semi-structured approach, including the use of preliminary questionnaires as well as manual and computerised group support systems (GSS), was applied in the scenario processes. The scenario processes reinforced the picture of the future of international biomass and bioenergy markets as a complex and multi-layered subject. The scenarios estimated that the biomass market will develop, grow rapidly and diversify in the future. The results of the scenario process also opened up new discussion and provided new information and collective expert views for the purposes of policy makers. An overall view resulting from this scenario analysis is that enormous opportunities relate to the utilisation of biomass as a resource for global energy use in the coming decades. The scenario analysis shows the key issues in the field: global economic growth including the growing need for energy, environmental forces in the global evolution, the possibilities of technological development to solve global problems, the capabilities of the international community to find solutions for global issues, and the complex interdependencies of all these driving forces. The results of the scenario processes provide a starting point for further research analysing the technological and commercial aspects related to the scenarios and foreseeing the scales and directions of biomass streams.
Abstract:
The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged and various parallel and distributed environments have been designed and implemented. Each of the environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purpose of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm combines the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread, so the scheduling does not affect the execution of the parallel application. Performance results show that MPIT achieves considerable improvements over conventional MPI applications.
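To make the overlap idea concrete, the following sketch shows the general pattern of a dedicated communication thread servicing a work queue while the main thread keeps computing. It is only an illustration of the pattern described in the abstract, not the MPIT implementation itself; in an MPIT-style program the communication thread would issue MPI calls instead of printing, and all names here are placeholders.

    # Conceptual sketch of the overlap pattern: a dedicated communication
    # thread drains a work queue while the main thread computes. In a real
    # MPIT-style program the thread would issue MPI send/receive calls;
    # here plain queues stand in for message passing (all names illustrative).
    import queue
    import threading

    send_queue = queue.Queue()   # work handed off to the communication thread
    DONE = object()              # sentinel telling the thread to stop

    def communication_thread():
        while True:
            item = send_queue.get()
            if item is DONE:
                break
            print(f"communicating partial result: {item}")

    comm = threading.Thread(target=communication_thread)
    comm.start()

    partial = 0
    for block in range(4):           # computation proceeds block by block
        partial += sum(range(block * 1000, (block + 1) * 1000))
        send_queue.put(partial)      # hand off without blocking the computation

    send_queue.put(DONE)
    comm.join()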
Abstract:
Numerical weather prediction and climate simulation have been among the computationally most demanding applications of high performance computing ever since they were started in the 1950s. Since the 1980s, the most powerful computers have featured an ever larger number of processors. By the early 2000s, this number was often several thousand. An operational weather model must use all these processors in a highly coordinated fashion. The critical resource in running such models is not computation, but the amount of necessary communication between the processors. The communication capacity of parallel computers often falls far short of their computational power. The articles in this thesis cover fourteen years of research into how to harness thousands of processors on a single weather forecast or climate simulation, so that the application can benefit as much as possible from the power of parallel high performance computers. The results attained in these articles have already been widely applied, so that currently most of the organizations that carry out global weather forecasting or climate simulation anywhere in the world use methods introduced in them. Some further studies extend parallelization opportunities into other parts of the weather forecasting environment, in particular to data assimilation of satellite observations.
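The communication bottleneck mentioned above can be illustrated with a single-process sketch of domain decomposition and halo (ghost-cell) exchange, the standard pattern behind parallel grid models; the grid size, stencil and strip count below are invented for illustration and are not taken from the thesis.

    # Single-process illustration of domain decomposition with halo exchange:
    # each "processor" owns one strip of the grid and needs one boundary value
    # from each neighbour before it can apply a local three-point stencil.
    import numpy as np

    nx, nprocs = 64, 8
    field = np.random.rand(nx)
    strips = np.array_split(field, nprocs)        # one strip per "processor"

    def exchange_halos(strips):
        """Pad each strip with the neighbouring boundary values."""
        padded = []
        for i, s in enumerate(strips):
            left = strips[i - 1][-1] if i > 0 else s[0]
            right = strips[i + 1][0] if i < len(strips) - 1 else s[-1]
            padded.append(np.concatenate(([left], s, [right])))
        return padded

    padded = exchange_halos(strips)               # the "communication" step
    new_strips = [(p[:-2] + p[1:-1] + p[2:]) / 3.0 for p in padded]  # "computation"
    print("halo values per strip:", 2, "| interior cells per strip:", nx // nprocs)

The narrower the strips become as more processors are used, the larger the share of halo traffic relative to local computation, which is the communication-to-computation imbalance the abstract refers to.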
Abstract:
This thesis briefly presents the basic operation and use of centrifugal pumps and parallel pumping applications. The characteristics of parallel pumping applications are compared to circuitry in order to search for an analogy between these technical fields. The purpose of studying circuitry is to find out whether common software tools for solving circuit performance could be used to observe parallel pumping applications. The empirical part of the thesis introduces a simulation environment for parallel pumping systems, which is based on the circuit components of Matlab Simulink software. The created simulation environment enables the observation of variable-speed-controlled parallel pumping systems with different control methods. The introduced simulation environment was evaluated by building a simulation model of an actual parallel pumping system at Lappeenranta University of Technology. The simulated performance of the parallel pumps was compared to measured values of the actual system. The gathered information shows that if the initial data on the system and pump performance are adequate, the circuitry-based simulation environment can be exploited to observe parallel pumping systems. The introduced simulation environment can represent the actual operation of parallel pumps with reasonable accuracy. Thereby, the circuitry-based simulation can be used as a research tool to develop new control methods for parallel pumps.
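The circuit analogy can be sketched numerically: at a common head the flows of parallel pumps add, just as branch currents add at a common node voltage, and the operating point is found against a system curve. The pump and system coefficients below are assumed, not those of the Lappeenranta test system.

    # Circuit-style analysis of two identical pumps in parallel: flows add at a
    # common head (like branch currents at a node), and the operating point is
    # where the combined pump curve meets the system curve. Coefficients assumed.
    import math

    def pump_flow(head, h0=40.0, k=0.05):
        """Invert a quadratic pump curve H = h0 - k*Q^2 for the flow Q."""
        return math.sqrt(max(h0 - head, 0.0) / k)

    def system_head(q_total, h_static=10.0, c=0.002):
        """Quadratic system curve H = h_static + c*Q^2."""
        return h_static + c * q_total ** 2

    lo, hi = 10.0, 40.0                  # bisection on the common head
    for _ in range(60):
        head = 0.5 * (lo + hi)
        q_total = 2 * pump_flow(head)    # two pumps: flows add at equal head
        if system_head(q_total) > head:
            lo = head                    # system needs more head -> point lies higher
        else:
            hi = head
    print(f"operating point: H = {head:.2f} m, Q_total = {q_total:.1f} units")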
Abstract:
Over the last decades, calibration techniques have been widely used to improve the accuracy of robots and machine tools, since they only involve software modification instead of changing the design and manufacture of the hardware. Traditionally, four steps are required for a calibration: error modeling, measurement, parameter identification and compensation. The objective of this thesis is to propose a method for the kinematics analysis and error modeling of a newly developed hybrid redundant robot, the IWR (Intersector Welding Robot), which possesses ten degrees of freedom (DOF): 6 DOF in parallel and an additional 4 DOF in serial. In this thesis, the problems of kinematics modeling and error modeling of the proposed IWR robot are discussed. Based on the vector arithmetic method, the kinematics model and the sensitivity model of the end-effector with respect to the structural parameters are derived and analyzed. The relations between the pose (position and orientation) accuracy and manufacturing tolerances, actuation errors, and connection errors are formulated. Computer simulation is performed to examine the validity and effectiveness of the proposed method.
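As an illustration of such a sensitivity model, the sketch below estimates a Jacobian of the end-effector pose with respect to structural parameters by finite differences and propagates assumed manufacturing tolerances through it. A planar two-link arm stands in for the 10-DOF IWR kinematics; all dimensions are placeholders.

    # Finite-difference sensitivity model: Jacobian of the end-effector pose with
    # respect to structural parameters, used for first-order error propagation.
    # A planar 2R arm is a stand-in for the IWR; all numbers are illustrative.
    import numpy as np

    def forward_kinematics(params, q1=0.3, q2=0.7):
        """End-effector position of a planar two-link arm; params = link lengths."""
        l1, l2 = params
        x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
        y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
        return np.array([x, y])

    def sensitivity(params, eps=1e-6):
        """Numerical Jacobian d(pose)/d(structural parameters)."""
        base = forward_kinematics(params)
        jac = np.zeros((base.size, len(params)))
        for j in range(len(params)):
            perturbed = np.array(params, dtype=float)
            perturbed[j] += eps
            jac[:, j] = (forward_kinematics(perturbed) - base) / eps
        return jac

    J = sensitivity([1.0, 0.8])
    tolerances = np.array([0.001, 0.002])           # assumed manufacturing tolerances
    print("first-order pose error estimate:", J @ tolerances)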
Abstract:
The maximum realizable power throughput of power electronic converters may be limited or constrained by technical or economic considerations. One solution to this problem is to connect several power converter units in parallel. The parallel connection can be used to increase the current carrying capacity of the overall system beyond the ratings of individual power converter units. Thus, it is possible to use several lower-power converter units, produced in large quantities, as building blocks to construct high-power converters in a modular manner. High-power converters realized by using parallel connection are needed, for example, in multimegawatt wind power generation systems. Parallel connection of power converter units is also required in emerging applications such as photovoltaic and fuel cell power conversion. The parallel operation of power converter units is not, however, problem free. This is because parallel-operating units are subject to overcurrent stresses, which are caused by unequal load current sharing or currents that flow between the units. Commonly, the term 'circulating current' is used to describe both the unequal load current sharing and the currents flowing between the units. Circulating currents, again, are caused by component tolerances and asynchronous operation of the parallel units. Parallel-operating units are also subject to stresses caused by unequal thermal stress distribution. Both of these problems can, nevertheless, be handled with a proper circulating current control. To design an effective circulating current control system, we need information about circulating current dynamics. The dynamics of the circulating currents can be investigated by developing appropriate mathematical models. In this dissertation, circulating current models are developed for two different types of parallel two-level three-phase inverter configurations. The models, which are developed for an arbitrary number of parallel units, provide a framework for analyzing circulating current generation mechanisms and developing circulating current control systems. In addition to developing circulating current models, modulation of parallel inverters is considered. It is illustrated that, depending on the parallel inverter configuration and the modulation method applied, common-mode circulating currents may be excited as a consequence of the differential-mode circulating current control. To prevent the common-mode circulating currents that are caused by the modulation, a dual modulator method is introduced. The dual modulator basically consists of two independently operating modulators, the outputs of which eventually constitute the switching commands of the inverter. The two independently operating modulators are referred to as primary and secondary modulators. In its intended usage, the same voltage vector is fed to the primary modulators of each parallel unit, and the inputs of the secondary modulators are obtained from the circulating current controllers. To ensure that voltage commands obtained from the circulating current controllers are realizable, it must be guaranteed that the inverter is not driven into saturation by the primary modulator. The inverter saturation can be prevented by limiting the inputs of the primary and secondary modulators. Because of this, a limitation algorithm is also proposed. The operation of both the proposed dual modulator and the limitation algorithm is verified experimentally.
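The dual modulator idea can be sketched as follows: every parallel unit shares one primary voltage reference, each unit adds a small secondary reference derived from its circulating-current controller, and both inputs are limited so that the inverter is never driven into saturation. The controller gain, limits and current values below are assumed for illustration and do not reproduce the dissertation's algorithm.

    # Conceptual dual modulator: a shared primary reference plus a per-unit
    # secondary reference from a circulating-current controller, both limited so
    # the inverter is not saturated. Gains, limits and currents are assumed.
    V_DC = 1.0                       # normalised DC-link voltage
    PRIMARY_LIMIT = 0.9 * V_DC       # headroom reserved for the secondary modulator
    SECONDARY_LIMIT = 0.1 * V_DC

    def limit(value, bound):
        return max(-bound, min(bound, value))

    def switching_reference(v_primary, i_circulating, kp=0.2):
        """Combine primary and secondary references for one parallel unit."""
        v_prim = limit(v_primary, PRIMARY_LIMIT)             # primary kept out of saturation
        v_sec = limit(-kp * i_circulating, SECONDARY_LIMIT)  # simple proportional controller
        return v_prim + v_sec

    v_primary = 0.85                             # same reference fed to every unit
    circulating_currents = [0.05, -0.03, -0.02]
    refs = [switching_reference(v_primary, i) for i in circulating_currents]
    print("per-unit references:", [round(r, 3) for r in refs])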
Abstract:
This Master's thesis was carried out as part of the Sinfonet research project, and its objective is to create a new operating model for making customer-oriented product development more effective. The requirements for the operating model were the collection of customer need information over the entire use phase of the product life cycle, and a parallel process for analysing and exploiting customer needs. The thesis combines the methods used for surveying, analysing and exploiting customer needs into one systematic whole that Larox personnel can use effectively in their work. The method was drawn up on the basis of the frameworks identified in the theoretical part and interviews conducted with Larox key personnel. It is absolutely essential that the use of the operating model and the benefits gained from it can be justified to the company's personnel. In the operating model, the AVAIN group interview and the semi-structured interview were chosen as methods for surveying customer needs, and the interpretation table and QFD (Quality Function Deployment) as analysis methods. For documenting the needs, the wiki-based Live system already in use at Larox was chosen as the database. The new operating model and the processes included in it are described in detail in the thesis as a data flow diagram.
Abstract:
The aim of this thesis is to describe hybrid drive design problems and the advantages and difficulties related to the drive. A review of possible hybrid constructions and the benefits of parallel, series and series-parallel hybrids is given. In the thesis, analytical and finite element calculations of permanent magnet synchronous machines with embedded magnets were carried out. The finite element calculations were done using Cedrat's Flux 2D software. This machine is planned to be used as a motor-generator in a low-power parallel hybrid vehicle. The boundary conditions for the design were obtained from Lucas-TVS Ltd., India. The design requirements, briefly: • the system DC voltage level is 120 V, which implies Uphase = 49 V (RMS) in a three-phase system; • a power output of 10 kW at the base speed of 1500 rpm (a torque of 65 Nm) is desired; • the maximum outer diameter should not exceed 250 mm, and the maximum core length should not exceed 40 mm. The main difficulties encountered were the dimensional restrictions. After several possible constructions had been designed and analyzed, they were compared and the final design was selected. A dimensioned and detailed design is presented. The effects of different parameters, such as the number of poles, the number of turns and the magnetic geometry, are discussed. The best modification offers a considerable reduction in volume.
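The quoted requirements can be cross-checked with a short calculation: 10 kW at a base speed of 1500 rpm corresponds to roughly 65 Nm, and a 120 V DC link allows about 49 V RMS per phase if space-vector modulation is assumed (the modulation assumption is ours, not stated in the abstract).

    # Cross-check of the stated requirements (space-vector modulation assumed).
    import math

    P = 10e3                          # W, rated power at base speed
    n = 1500                          # rpm, base speed
    torque = P / (2 * math.pi * n / 60)
    print(f"torque at base speed: {torque:.1f} Nm")       # ~63.7 Nm, quoted as 65 Nm

    V_dc = 120                        # V, DC-link voltage
    v_phase_rms = V_dc / (math.sqrt(2) * math.sqrt(3))    # SVM linear range
    print(f"max phase voltage: {v_phase_rms:.1f} V RMS")  # ~49 V, as quoted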
Abstract:
Cellular automata are models for massively parallel computation. A cellular automaton consists of cells which are arranged in some kind of regular lattice and a local update rule which updates the state of each cell according to the states of the cell's neighbors on each step of the computation. This work focuses on reversible one-dimensional cellular automata, in which the cells are arranged in a two-way infinite line and the computation is reversible, that is, the previous states of the cells can be derived from the current ones. In this work it is shown that several properties of reversible one-dimensional cellular automata are algorithmically undecidable, that is, there exists no algorithm that would tell whether a given cellular automaton has the property or not. It is shown that the tiling problem of Wang tiles remains undecidable even in some very restricted special cases. It follows that it is undecidable whether some given states will always appear in computations by the given cellular automaton. It also follows that a weaker form of expansivity, which is a concept of dynamical systems, is an undecidable property for reversible one-dimensional cellular automata. It is shown that several properties of dynamical systems are undecidable for reversible one-dimensional cellular automata: sensitivity to initial conditions and topological mixing are undecidable properties, and furthermore, non-sensitive and mixing cellular automata are recursively inseparable. It follows that chaotic behavior is also an undecidable property for reversible one-dimensional cellular automata.
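A minimal sketch of the model itself (not of the undecidability results) is given below: one synchronous update step of a one-dimensional cellular automaton on a finite ring, used here in place of the two-way infinite line. Elementary rule 170 simply copies each cell's right neighbor, a shift that is reversible, with rule 240 as its inverse.

    # One synchronous update step of an elementary cellular automaton on a ring.
    # Rule 170 copies each cell's right neighbour (a reversible shift);
    # rule 240 copies the left neighbour and undoes it.
    def step(cells, rule):
        """Apply an elementary CA rule (0-255) once, with periodic boundaries."""
        n = len(cells)
        out = []
        for i in range(n):
            left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
            neighbourhood = (left << 2) | (centre << 1) | right
            out.append((rule >> neighbourhood) & 1)
        return out

    config = [0, 1, 1, 0, 1, 0, 0, 1]
    shifted = step(config, 170)        # forward step
    restored = step(shifted, 240)      # inverse step
    print(shifted)
    print(restored == config)          # True: this rule pair is reversible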
Abstract:
A software development process is a predetermined sequence of steps to create a piece of software. A software development process is used so that the implementing organization can gain significant benefits. The benefits for software development companies that can be attributed to software process improvement efforts are improved predictability in the development effort and higher-quality software products. The implementation, maintenance, and management of a software process, as well as software process improvement efforts, are expensive. The implementation phase in particular is expensive, with a best-case scenario of a slow return on investment. Software processes are rare in very small software development companies because of the cost of implementation and an improbable return on investment. This study presents a new method that brings the benefits usually related to software process improvement to small companies at a low cost. The study presents reasons for the development of the method, a description of the method, and an implementation process for the method, as well as a theoretical case study of a method implementation. The study's focus is on describing the method. The theoretical use case is used to illustrate the theory of the method and the implementation process of the method. The study ends with a few conclusions on the method and on the method's implementation process. The main conclusion is that the method requires further study as well as implementation experiments to assess the value of the method.
Abstract:
Investment decision-making on far-reaching innovation ideas is one of the key challenges practitioners and academics face in the field of innovation management. However, management practices and theories strongly rely on evaluation systems that do not fit in well with this setting. These systems and practices normally cannot capture the value of future opportunities under high uncertainty because they ignore the firm's potential for growth and flexibility. Real options theory and options-based methods have been offered as a solution to facilitate decision-making on highly uncertain investment objects. Much of the uncertainty inherent in these investment objects is attributable to unknown future events. In this setting, real options theory and methods have faced some challenges. First, the theory and its applications have largely been limited to market-priced real assets. Second, the options perspective has not proved as useful as anticipated because the tools it offers are perceived to be too complicated for managerial use. Third, there are challenges related to the type of uncertainty existing real options methods can handle: they are primarily limited to parametric uncertainty. Nevertheless, the theory is considered promising in the context of far-reaching and strategically important innovation ideas. The objective of this dissertation is to clarify the potential of options-based methodology in the identification of innovation opportunities. The constructive research approach gives new insights into the development potential of real options theory under non-parametric and close-to-radical uncertainty. The distinction between real options and strategic options is presented as an explanans for the discovered limitations of the theory. The findings offer managers a new means of assessing future innovation ideas based on the frameworks constructed during the course of the study.
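As one example of the options-based methods referred to above, the sketch below values an option to defer an investment with a standard binomial lattice. All figures are assumed, and the method captures only parametric uncertainty, which is precisely the limitation discussed in the abstract.

    # Binomial valuation of an option to defer an investment (all figures assumed).
    import math

    def deferral_option_value(V0, K, r, sigma, T, steps=50):
        """Value of the right (not the obligation) to invest K in a project worth V."""
        dt = T / steps
        u = math.exp(sigma * math.sqrt(dt))       # up factor
        d = 1 / u                                 # down factor
        p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral probability
        # Payoffs at the end of the lattice.
        values = [max(V0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
        # Backward induction to the present.
        for step in range(steps - 1, -1, -1):
            values = [math.exp(-r * dt) * (p * values[j + 1] + (1 - p) * values[j])
                      for j in range(step + 1)]
        return values[0]

    print(f"value of waiting: {deferral_option_value(100, 110, 0.03, 0.4, 2):.2f}")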
Abstract:
Novel biomaterials are needed to fill the demand for tailored bone substitutes required by an ever-expanding array of surgical procedures and techniques. Wood, a natural fiber composite, modified with heat treatment to alter its composition, may provide a novel approach to the further development of hierarchically structured biomaterials. The suitability of wood as a model biomaterial, as well as the effects of heat treatment on the osteoconductivity of wood, was studied by placing untreated and heat-treated (at 220 °C, 200 °C and 140 °C for 2 h) birch implants (size 4 x 7 mm) into drill cavities in the distal femur of rabbits. The follow-up periods were 4, 8 and 20 weeks in all in vivo experiments. The flexural properties of wood as well as dimensional changes and hydroxyapatite formation on the surface of wood (untreated, 140 °C and 200 °C heat-treated wood) were tested using 3-point bending and compression tests and immersion in simulated body fluid. The effect of premeasurement grinding and the effect of heat treatment on the surface roughness and contour of wood were tested with contact stylus and non-contact profilometry. The effects of heat treatment of wood on its interactions with biological fluids were assessed using two different test media and real human blood in liquid penetration tests. The results of the in vivo experiments showed the implanted wood to be well tolerated, with no implants rejected due to foreign body reactions. Heat treatment had significant effects on the biocompatibility of wood, allowing host bone to grow into tight contact with the implant, with occasional bone ingrowth into the channels of the wood implant. The results of the liquid immersion experiments showed hydroxyapatite formation only in the most extensively heat-treated wood specimens, which supported the results of the in vivo experiments. Parallel conclusions could be drawn based on the results of the liquid penetration test, where human blood had the most favorable interaction with the most extensively heat-treated wood of the compared materials (untreated, 140 °C and 200 °C heat-treated wood). The increasing biocompatibility was inferred to result mainly from changes in the chemical composition of wood induced by the heat treatment, namely the altered arrangement and concentrations of functional chemical groups. However, the influence of microscopic changes in the cell walls, surface roughness and contour cannot be totally excluded. The heat treatment was hypothesized to produce a functional change in the liquid distribution within wood, which could have biological relevance. It was concluded that the highly evolved hierarchical anatomy of wood could yield information for the future development of bulk bone substitutes according to the ideology of bioinspiration. Furthermore, the results of the biomechanical tests established that heat treatment alters various biologically relevant mechanical properties of wood, thus expanding the possibilities of wood as a model material, which could include e.g. scaffold applications, bulk bone applications and serving as a tool both for mechanical testing and for the further development of synthetic fiber reinforced composites.