976 results for Lappeenranta University of Technology
Abstract:
In this thesis, the equilibrium and dynamic sorption properties of weakly basic chelating adsorbents were studied to explain the removal of copper and nickel from a concentrated zinc sulfate solution in a hydrometallurgical process. Silica-supported chelating composites containing either branched poly(ethyleneimine) (BPEI) or 2-(aminomethyl)pyridine (AMP) as a functional group were used. The adsorbents are commercially available from Purity Systems Inc., USA as WP-1® and CuWRAM®, respectively. The fundamental interactions between the adsorbents, sulfuric acid and metal sulfates were studied in detail, and the results were used to find the best conditions for the removal of copper and nickel from an authentic ZnSO4 process solution. In particular, the effect of acid concentration and temperature on the separation efficiency was considered. Both experimental and modeling aspects were covered in all cases. Metal sorption is considerably affected by the chemical properties of the studied adsorbents and by the separation conditions. In the case of WP-1, the acid affinity is so high that column separation of copper, nickel and zinc has to be done using the adsorbent in base form. On the other hand, the basicity of CuWRAM is significantly lower and the protonated adsorbent can be used. Increasing temperature decreases the basicity and the metal affinity of both adsorbents, but the uptake capacities remain practically unchanged. Moreover, increasing temperature substantially enhances intra-particle mass transport and decreases viscosities, thus allowing significantly higher feed flow rates in fixed-bed separation. The copper selectivity of both adsorbents is very high even in the presence of a 250-fold excess of zinc. However, because of the basicity of WP-1, metal precipitation is a serious problem and therefore only CuWRAM is suitable for practical industrial application. The optimum temperature for copper removal appears to be around 60 °C, and an alternative solution purification method is proposed. The Ni/Zn selectivity of both WP-1 and CuWRAM is insufficient for the removal of the very small amounts of nickel present in the concentrated ZnSO4 solution.
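The kind of equilibrium sorption data discussed above is commonly described with an isotherm model. Below is a minimal sketch, assuming a simple Langmuir isotherm and wholly hypothetical copper-uptake data (not values from the thesis), of how such a fit could be done.

```python
# Minimal sketch: fitting a Langmuir isotherm to hypothetical copper-uptake data,
# of the kind used to describe sorption equilibria on chelating adsorbents.
# The numbers below are illustrative, not measured values from the thesis.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, K):
    """Langmuir isotherm: q = q_max * K * c / (1 + K * c)."""
    return q_max * K * c_eq / (1.0 + K * c_eq)

# Hypothetical equilibrium data: liquid-phase Cu concentration (mmol/L)
# and corresponding adsorbent loading (mmol/g).
c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
q    = np.array([0.21, 0.35, 0.52, 0.71, 0.82, 0.88])

(q_max, K), _ = curve_fit(langmuir, c_eq, q, p0=[1.0, 0.5])
print(f"q_max = {q_max:.2f} mmol/g, K = {K:.2f} L/mmol")
```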
Abstract:
This is a study of team social networks, their antecedents and outcomes. In focusing attention on the structural configuration of the team, this research contributes to a new wave of thinking concerning group social capital. The research site was a random sample of Finnish work organisations. The data consisted of 499 employees in 76 teams representing 48 different organisations. A systematic literature review and quantitative methods were used in conducting the research: the former primarily to establish the current theoretical position on the relationships among the variables, and the latter to test these relationships. Social network analysis was the primary method used in identifying the social-network relations among the work-team members. The first and key contribution of this study is that it relates the structural network properties of work teams to behavioural outcomes, attitudinal outcomes and, ultimately, team performance. Moreover, it shows that addressing attitudinal outcomes is also important in terms of team performance; attitudinal outcomes (team identity) mediated the relationship between the team's social network and its performance. The second contribution is that it examines the possible antecedents of the social structure. It is thus one response to Salancik's (1995) call for a network theory in that it explains why certain network characteristics exist. It demonstrates that irrespective of whether or not a team is heterogeneous in terms of age or gender, educational diversity may protect it from centralisation. However, heterogeneity in terms of gender turned out to have a negative impact on density. Thirdly, given the observation that the benefits of (team) networks are typically theorised and modelled without reference to the nature of the relationships comprising the structure, the study directly tested whether team knowledge mediated the effects of instrumental and expressive network relationships on team performance. Furthermore, with its focus on expressive networks that link the workplace to a more informal world, which have been rather neglected in previous research, it enhances knowledge of teams and networks. The results indicate that knowledge sharing fully mediates the influence of complementarities between dense and fragmented instrumental network relationships, thus providing empirical validation of the implicit understanding that networks transfer knowledge. Fourthly, the study findings suggest that an optimal configuration of the work-team social-network structure combines both bridging and bonding social relationships.
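Two of the structural properties named above, density and centralisation, have standard operationalisations in social network analysis. A minimal sketch follows, using an invented team network (member names and ties are made up for illustration, not study data).

```python
# Minimal sketch: computing network density and Freeman degree centralization
# for a hypothetical team's instrumental network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("A", "D"),   # A is a relatively central member
    ("B", "C"), ("D", "E"),
])

density = nx.density(G)  # fraction of possible ties that are present

# Freeman degree centralization: how strongly the network is organized
# around its most connected member (1.0 for a star, 0.0 for a fully even network).
degrees = dict(G.degree())
n = G.number_of_nodes()
max_deg = max(degrees.values())
centralization = sum(max_deg - d for d in degrees.values()) / ((n - 1) * (n - 2))

print(f"density = {density:.2f}, degree centralization = {centralization:.2f}")
```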
Abstract:
This thesis is devoted to investigations of three typical representatives of the II-V diluted magnetic semiconductors: Zn1-xMnxAs2, (Zn1-xMnx)3As2 and p-CdSb:Ni. When this work started, the family of the II-V semiconductors was represented only by the compounds belonging to the subgroup II3-V2, such as (Zn1-xMnx)3As2, whereas the rest of the materials mentioned above had not been investigated at all. Pronounced low-field magnetic irreversibility, accompanied by a ferromagnetic transition, is observed in Zn1-xMnxAs2 and (Zn1-xMnx)3As2 near 300 K. These features give evidence for the presence of MnAs nanosize magnetic clusters, responsible for a frustrated magnetic ground state. In addition, (Zn1-xMnx)3As2 demonstrates a large paramagnetic response due to a considerable amount of single Mn ions and small antiferromagnetic clusters. The similar paramagnetic system existing in Zn1-xMnxAs2 is much weaker. Distinct low-field magnetic irreversibility, accompanied by a rapid saturation of the magnetization with increasing magnetic field, is observed near room temperature in p-CdSb:Ni as well. Such behavior is connected to a frustrated magnetic state, determined by Ni-rich magnetic Ni1-xSbx nanoclusters. Their large non-sphericity and preferential orientations are responsible for the strong anisotropy of the coercivity and saturation magnetization of p-CdSb:Ni. Parameters of the Ni1-xSbx nanoclusters are estimated. The low-temperature resistivity of p-CdSb:Ni is governed by a hopping mechanism of charge transfer. The variable-range hopping conductivity, observed in zero magnetic field, demonstrates a tendency to transform into nearest-neighbor hopping conductivity in a non-zero magnetic field. The Hall effect in p-CdSb:Ni exhibits the presence of a positive normal and a negative anomalous contribution to the Hall resistivity. The normal Hall coefficient is governed mainly by holes activated into the valence band, whereas the anomalous Hall effect, attributable to the Ni1-xSbx nanoclusters with ferromagnetically ordered internal spins, exhibits a low-temperature power-law resistivity scaling.
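Variable-range hopping of the kind mentioned above is usually identified by testing the Mott law rho(T) = rho_0 * exp((T0/T)^(1/4)). A minimal sketch of such a test follows, with wholly illustrative resistivity data (not measurements from the thesis).

```python
# Minimal sketch: testing the 3D Mott variable-range hopping (VRH) law against
# hypothetical low-temperature resistivity data.
import numpy as np

T   = np.array([4.2, 6.0, 8.0, 10.0, 15.0, 20.0])           # temperature (K)
rho = np.array([8.1e3, 2.4e3, 1.1e3, 6.0e2, 2.2e2, 1.2e2])  # ohm*cm (made up)

# ln(rho) should be linear in T^(-1/4) if 3D Mott VRH applies.
x = T ** (-0.25)
slope, intercept = np.polyfit(x, np.log(rho), 1)

T0    = slope ** 4          # characteristic Mott temperature
rho_0 = np.exp(intercept)   # prefactor
print(f"T0 = {T0:.3g} K, rho_0 = {rho_0:.3g} ohm*cm")
```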
Abstract:
In the paper machine, it is not desirable for the boundary layer flows on the fabric and roll surfaces to travel into the closing nips and create overpressure. In this thesis, the aerodynamic behavior of the grooved roll and smooth rolls is compared in order to understand the nip flow phenomena, which are the main reason why vacuum and grooved roll constructions are designed. A common method to remove the boundary layer flow from the closing nip is to use a vacuum roll construction. The downside of the use of vacuum rolls is the high operational cost due to pressure losses in the vacuum roll shell. The deep grooved roll has the same goal: to create a pressure difference over the paper web and keep the paper attached to the roll or fabric surface in the drying pocket of the paper machine. A literature review revealed that the aerodynamic functionality of the grooved roll is not very well known. In this thesis, the aerodynamic functionality of the grooved roll in interaction with a permeable or impermeable wall is studied by varying the groove properties. Computational fluid dynamics simulations are utilized as the research tool. The simulations have been performed with commercial fluid dynamics software, ANSYS Fluent. Simulation results obtained with 3- and 2-dimensional fluid dynamics models are compared to laboratory-scale measurements. The measurements have been made with a grooved roll simulator designed for the research. The variables in the comparison are the paper or fabric wrap angle, surface velocities, groove geometry and wall permeability. Present-day computational and modeling resources limit grooved roll fluid dynamics simulations at the paper machine scale. Based on the analysis of the aerodynamic functionality of the grooved roll, a grooved roll simulation tool is proposed. The smooth roll simulations show that the closing nip pressure does not depend on the length of boundary layer development. An increase in surface velocity affects the pressure distribution in the closing and opening nips. The 3D grooved roll model reveals the aerodynamic functionality of the grooved roll. With the optimal groove size it is possible to avoid closing nip overpressure and keep the web attached to the fabric surface in the area of the wrap angle. The groove flow friction and minor losses play different roles when the wrap angle is changed. The proposed 2D grooved roll simulation tool is able to replicate the grooved roll aerodynamic behavior with reasonable accuracy. With a small wrap angle, the chosen approach for calculating the groove friction losses predicts the pressure distribution correctly. With a large wrap angle, the groove friction loss produces too large pressure gradients, and the way of calculating the air flow friction losses in the groove has to be reconsidered. The aerodynamic functionality of the grooved roll is based on minor and viscous losses in the closing and opening nips as well as in the grooves. The proposed 2D grooved roll model is a simplification intended to reduce computational and modeling efforts. The simulation tool makes it possible to simulate complex paper machine constructions at the paper machine scale. In order to use the grooved roll as a replacement for the vacuum roll, the grooved roll properties have to be considered on the basis of the web handling application.
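The friction and minor losses mentioned above are the standard pipe-flow loss mechanisms applied to a groove channel. A minimal sketch follows, assuming a rectangular groove and hypothetical dimensions, velocity and loss coefficient (none of these are values from the thesis).

```python
# Minimal sketch: estimating the pressure loss of air flow along a single roll
# groove as the sum of friction and minor losses (Darcy-Weisbach form).
import math

rho, mu = 1.2, 1.8e-5          # air density (kg/m^3) and dynamic viscosity (Pa*s)
w, h    = 2e-3, 3e-3           # groove width and depth (m), assumed rectangular
L       = 0.5                  # groove length in contact with the web (m), assumed
v       = 10.0                 # mean air velocity in the groove (m/s), assumed
K_minor = 1.5                  # combined entrance/exit (minor) loss coefficient, assumed

D_h = 4 * (w * h) / (2 * (w + h))      # hydraulic diameter of the groove
Re  = rho * v * D_h / mu               # Reynolds number of the groove flow

# Darcy friction factor: 64/Re for laminar flow, Blasius correlation otherwise.
f = 64.0 / Re if Re < 2300 else 0.316 * Re ** -0.25

dp_friction = f * (L / D_h) * 0.5 * rho * v ** 2
dp_minor    = K_minor * 0.5 * rho * v ** 2
print(f"Re = {Re:.0f}, friction loss = {dp_friction:.1f} Pa, minor loss = {dp_minor:.1f} Pa")
```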
Abstract:
It is necessary to use highly specialized robots in ITER (International Thermonuclear Experimental Reactor), both in the manufacturing and in the maintenance of the reactor, due to the demanding environment. The sectors of the ITER vacuum vessel (VV) require more stringent tolerances than normally expected for the size of the structure involved. The VV consists of nine sectors that are to be welded together, and it has a toroidal chamber structure. The task of the designed robot is to carry the welding apparatus along a path with a stringent tolerance during the assembly operation. In addition to the initial vacuum vessel assembly, after a limited running period, sectors need to be replaced for repair. Mechanisms with closed-loop kinematic chains are used in the design of the robots in this work. One version is a purely parallel manipulator and another is a hybrid manipulator where parallel and serial structures are combined. Traditional industrial robots, which generally have their links actuated in series, are inherently not very rigid and have poor dynamic performance at high speed and under high dynamic loading conditions. Compared with open-chain manipulators, parallel manipulators have high stiffness, high accuracy and a high force/torque capacity in a reduced workspace. Parallel manipulators have a mechanical architecture where all of the links are connected to the base and to the end-effector of the robot. The purpose of this thesis is to develop special parallel robots for the assembly, machining and repair of the VV of ITER. The process of assembling and machining the vacuum vessel needs a special robot. By studying the structure of the vacuum vessel, two novel parallel robots were designed and built; they have six and ten degrees of freedom, driven by hydraulic cylinders and electrical servo motors. Kinematic models for the proposed robots were defined and two prototypes were built. Experiments in machining (cutting) and laser welding with the 6-DOF robot were carried out. It was demonstrated that the parallel robots are capable of holding all the necessary machining tools and welding end-effectors in all positions accurately and stably inside the vacuum vessel sector. The kinematic models turned out to be complex, especially in the case of the 10-DOF robot because of its redundant structure. Multibody dynamics simulations were carried out to ensure sufficient stiffness during the robot motion. The entire design and testing process of the robots was a complex task due to the high specialization of the manufacturing technology needed in the ITER reactor, but the results demonstrate the applicability of the proposed solutions quite well. The results offer not only devices but also a methodology for the assembly and repair of ITER by means of parallel robots.
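A characteristic advantage of parallel manipulators is that the inverse kinematics is direct: each actuator length follows from the platform pose. The sketch below illustrates this for a generic six-legged Gough-Stewart-type platform; the geometry is invented for illustration and is not that of the ITER robots described above.

```python
# Minimal sketch: inverse kinematics of a generic Gough-Stewart-type parallel
# manipulator -- each leg length is the distance between its base joint and the
# transformed platform joint.
import numpy as np

def rotation_zyx(roll, pitch, yaw):
    """Rotation matrix from roll-pitch-yaw angles (radians)."""
    cr, sr = np.cos(roll),  np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw),   np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def leg_lengths(base_pts, platform_pts, position, rpy):
    """Actuator lengths for a given platform position (m) and orientation (rad)."""
    R = rotation_zyx(*rpy)
    return np.linalg.norm(position + platform_pts @ R.T - base_pts, axis=1)

# Hypothetical attachment points (m): base joints a_i and platform joints b_i.
angles_base = np.radians([0, 60, 120, 180, 240, 300])
angles_plat = np.radians([30, 90, 150, 210, 270, 330])
base_pts     = np.c_[1.0 * np.cos(angles_base), 1.0 * np.sin(angles_base), np.zeros(6)]
platform_pts = np.c_[0.5 * np.cos(angles_plat), 0.5 * np.sin(angles_plat), np.zeros(6)]

print(leg_lengths(base_pts, platform_pts, np.array([0.0, 0.0, 1.2]), (0.05, 0.0, 0.1)))
```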
Abstract:
The front end of innovation is regarded as one of the most important steps in building new software products or services, and the most significant benefits in software development can be achieved through improvements in the front end activities. Problems in the front end phase contribute to customer dissatisfaction with delivered software and reduce the effectiveness of the entire software development process. When these processes are improved, the likelihood of delivering high-quality software and achieving business success increases. This thesis highlights the challenges and problems related to the early phases of software development, and provides new methods and tools for improving performance in the front end activities of software development. The theoretical framework of this study comprises two fields of research. The first belongs to the field of innovation management, and especially to the management of the early phases of the innovation process, i.e. the front end of innovation. The second is closely linked to the processes of software engineering, especially to the early phases of the software development process, i.e. the practice of requirements engineering. Thus, this study extends the theoretical knowledge and discloses the differences and similarities in these two fields of research. In addition, this study opens up a new strand of academic discussion by connecting these research directions. Several qualitative business research methodologies have been utilized in the individual publications to answer the research questions. The theoretical and managerial contribution of the study can be divided into three areas: 1) processes and concepts, 2) challenges and development needs, and 3) means and methods for the front end activities of software development. First, the study discloses the differences and similarities between the concepts of the front end of innovation and requirements engineering, and proposes a new framework for managing the front end of the software innovation process, bringing business and innovation perspectives into software development. Furthermore, the study discloses managerial perceptions of the similarities and differences in the concept of the front end of innovation between the software industry and the traditional industrial sector. Second, the study highlights the challenges and development needs in the front end phase of software development, especially challenges in communication, such as linguistic problems, ineffective communication channels, a communication gap between users/customers and software developers, and the participation of multiple persons in software development. Third, the study proposes new group methods for improving the front end activities of software development, especially customer need assessment and the elicitation of software requirements.
Abstract:
Strategic development of distribution networks plays a key role in asset management in electricity distribution companies. Owing to the capital-intensive nature of the field and the long time span of the companies' operations, the significance of a strategy is emphasised. A well-devised strategy combines awareness of the challenges posed by the operating environment with the future targets of the distribution company. Economic regulation, ageing infrastructure, scarcity of resources and tightening supply requirements, together with the challenges created by climate change, put pressure on the strategy work. On the other hand, technology development related to network automation and underground cabling assists in answering these challenges. This dissertation aims at developing process knowledge and establishing a methodological framework by which key issues related to network development can be addressed. Moreover, the work develops tools by which the effects of changes in the operating environment on the distribution business can be analysed in the strategy work. To this end, the work discusses certain characteristics of the distribution business and describes the strategy process at a general level. Further, the work defines the subtasks in the strategy process and presents the key elements in the strategy work and long-term network planning. The work delineates the factors having either a direct or an indirect effect on strategic planning and development needs in the networks; in particular, outage costs constitute an important part of the economic regulation of the distribution business, reliability thus being a key driver in network planning. The dissertation describes the methodology and tools applied to cost and reliability analyses in the strategy work. The work focuses on determining the techno-economic feasibility of different network development technologies; these feasibility surveys are linked to the economic regulation model of the distribution business, in particular from the viewpoint of reliability of electricity supply and allowed return. The work introduces the asset management system developed for research purposes and to support the strategy work, the calculation elements of the system and the initial data used in the network analysis. The key elements of this asset management system are utilised in the dissertation. Finally, the study addresses the stages of strategic decision-making and the compilation of investment strategies. Further, the work illustrates the implementation of strategic planning in an actual distribution company environment.
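Outage costs of the kind referred to above are typically accumulated per interruption from the interrupted power and the energy not supplied. A minimal sketch follows; the unit prices and feeder data are illustrative assumptions, not regulator-approved values or data from the dissertation.

```python
# Minimal sketch: an outage-cost calculation of the type used as a reliability
# driver in network planning.

# Unit outage costs, assumed: (eur per interrupted kW, eur per undelivered kWh).
UNIT_COSTS = {
    "unplanned": (1.1, 11.0),
    "planned":   (0.5, 6.8),
}

def annual_outage_cost(interruptions):
    """interruptions: list of (type, interrupted power kW, duration h, events per year)."""
    total = 0.0
    for kind, p_kw, duration_h, freq in interruptions:
        c_power, c_energy = UNIT_COSTS[kind]
        total += freq * (c_power * p_kw + c_energy * p_kw * duration_h)
    return total

feeder = [
    ("unplanned", 800.0, 1.5, 3.0),   # three 1.5 h faults per year on average
    ("planned",   800.0, 2.0, 1.0),   # one planned maintenance break per year
]
print(f"Expected outage cost: {annual_outage_cost(feeder):.0f} eur/year")
```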
Abstract:
Cooling crystallization is one of the most important purification and separation techniques in the chemical and pharmaceutical industry. The product of the cooling crystallization process is always a suspension that contains both the mother liquor and the product crystals, and therefore the first process step following crystallization is usually solid-liquid separation. The properties of the produced crystals, such as their size and shape, can be affected by modifying the conditions during the crystallization process. The filtration characteristics of solid/liquid suspensions, on the other hand, are strongly influenced by the particle properties as well as the properties of the liquid phase. It is thus obvious that the effect of changes made to the crystallization parameters can also be seen in the course of the filtration process. Although the relationship between crystallization and filtration is widely recognized, the number of publications where these unit operations have been considered in the same context seems to be surprisingly small. This thesis explores the influence of different crystallization parameters in an unseeded batch cooling crystallization process on the external appearance of the product crystals and on the pressure filtration characteristics of the obtained product suspensions. Crystallization experiments are performed by crystallizing sulphathiazole (C9H9N3O2S2), a well-known antibiotic agent, from different mixtures of water and n-propanol in an unseeded batch crystallizer. The crystallization parameters studied are the composition of the solvent, the cooling rate during crystallization experiments carried out with a constant cooling rate throughout the whole batch, the cooling profile, and the mixing intensity during the batch. The obtained crystals are characterized by using an automated image analyzer, and the crystals are separated from the solvent through constant-pressure batch filtration experiments. The separation characteristics of the suspensions are described by means of the average specific cake resistance and the average filter cake porosity, and the compressibilities of the cakes are also determined. The results show that fairly large differences can be observed between the size and shape of the crystals, and it is also shown experimentally that the changes in crystal size and shape have a direct impact on the pressure filtration characteristics of the crystal suspensions. The experimental results are utilized to create a procedure that can be used for estimating the filtration characteristics of solid-liquid suspensions from the particle size and shape data obtained by image analysis. Multilinear partial least squares regression (N-PLS) models are created between the filtration parameters and the particle size and shape data, and the results presented in this thesis show that relatively clear correlations can be detected with the obtained models.
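The average specific cake resistance mentioned above is conventionally extracted from constant-pressure filtration data via the classical t/V-versus-V (Ruth) plot. A minimal sketch follows, with hypothetical filtrate data (not experimental values from the thesis).

```python
# Minimal sketch: estimating the average specific cake resistance and medium
# resistance from constant-pressure filtration data using the Ruth equation:
# t/V = [mu*alpha*c / (2*A^2*dp)] * V + mu*R_m / (A*dp)
import numpy as np

dp = 2.0e5     # filtration pressure difference (Pa), assumed
A  = 20e-4     # filter area (m^2), assumed
mu = 1.0e-3    # filtrate viscosity (Pa*s), assumed
c  = 50.0      # dry cake mass deposited per filtrate volume (kg/m^3), assumed

# Cumulative filtrate volume (m^3) and corresponding time (s), made-up data.
V = np.array([0.2, 0.4, 0.6, 0.8, 1.0]) * 1e-3
t = np.array([14.0, 50.0, 108.0, 188.0, 290.0])

slope, intercept = np.polyfit(V, t / V, 1)
alpha = slope * 2 * A**2 * dp / (mu * c)      # average specific cake resistance (m/kg)
R_m   = intercept * A * dp / mu               # filter medium resistance (1/m)
print(f"alpha = {alpha:.3g} m/kg, R_m = {R_m:.3g} 1/m")
```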
Abstract:
The dissertation is based on four articles dealing with the purification of recalcitrant lignin-containing waters. Lignin, a complicated substance that is recalcitrant to most treatment technologies, seriously hampers waste management in the pulp and paper industry. Therefore, lignin is studied using wet oxidation (WO) as a process method for its degradation. Special attention is paid to the improvement in biodegradability and the reduction of lignin content, since these are of special importance for any subsequent biological treatment. In most cases wet oxidation is not used as a complete mineralization method but as a pre-treatment in order to eliminate toxic components and to reduce the high level of organics produced. The combination of wet oxidation with a biological treatment can be a good option due to its effectiveness and its relatively low technology cost. The literature part gives an overview of Advanced Oxidation Processes (AOPs). A hot oxidation process, wet oxidation (WO), is investigated in detail and is the AOP used in the research. The background and main principles of wet oxidation, its industrial applications, the combination of wet oxidation with other water treatment technologies, the principal reactions in WO, and key aspects of modelling and reaction kinetics are presented. An overview is also given of wood composition and lignin characterization (chemical composition, structure and origin), lignin-containing waters, lignin degradation and reuse possibilities, and purification practices for lignin-containing waters. The aim of the research was to investigate the effect of the operating conditions of WO, such as temperature, partial pressure of oxygen, pH and initial concentration of the wastewater, on the efficiency, and to enhance the process and estimate optimal conditions for WO of recalcitrant lignin waters. Two different waters were studied (a lignin water model solution and debarking water from the paper industry) to obtain conditions as representative as possible. Because of the great importance of reusing and minimizing industrial residues, further research was carried out using residual ash from an Estonian power plant as a catalyst in the wet oxidation of lignin-containing water. Developing a kinetic model that includes parameters such as TOC in the prediction gives the opportunity to estimate the amount of emerging inorganic substances (the degradation rate of the waste) and not only the decrease of COD and BOD. The degradation target compound, lignin, is included in the model through its COD value (CODlignin). Such a kinetic model can be valuable in developing WO treatment processes for lignin-containing waters, or for other wastewaters containing one or more target compounds. In the first article, wet oxidation of "pure" lignin water was investigated as a model case with the aim of degrading lignin and enhancing the biodegradability of the water. The experiments were performed at various temperatures (110–190 °C), partial oxygen pressures (0.5–1.5 MPa) and pH values (5, 9 and 12). The experiments showed that increasing the temperature notably improved the process efficiency: 75% lignin reduction was detected at the lowest temperature tested, and lignin removal improved to 100% at 190 °C. The effect of temperature on the COD removal rate was lower, but clearly detectable; 53% of the organics were oxidized at 190 °C. The effect of pH was seen mostly in lignin removal: increasing the pH enhanced the lignin removal efficiency from 60% to nearly 100%. A good biodegradability ratio (over 0.5) was generally achieved.
The aim of the second article was to develop a mathematical model for the wet oxidation of "pure" lignin water using lumped characteristics of the water (COD, BOD, TOC) and the lignin concentration. The model agreed well with the experimental data (R2 = 0.93 at pH 5 and 12), and the concentration changes during wet oxidation followed the experimental results adequately. The model also correctly showed the trend of the biodegradability (BOD/COD) changes. In the third article, the purpose of the research was to estimate optimal conditions for wet oxidation (WO) of debarking water from the paper industry. The WO experiments were performed at various temperatures, partial oxygen pressures and pH values. The experiments showed that lignin degradation and organics removal are affected remarkably by temperature and pH. A lignin reduction of 78-97% was detected under different WO conditions. An initial pH of 12 caused faster removal of the tannin/lignin content, but an initial pH of 5 was more effective for the removal of total organics, represented by COD and TOC. Most of the decrease in the concentrations of organic substances occurred in the first 60 minutes. The aim of the fourth article was to compare the behaviour of two reaction kinetic models, based on experiments of wet oxidation of industrial debarking water under different conditions. The simpler model took into account only the changes in COD, BOD and TOC; the advanced model was similar to the model used in the second article. Comparing the results of the models, the second model was found to be more suitable for describing the kinetics of the wet oxidation of debarking water. The significance of the reactions involved was compared on the basis of the model: for instance, lignin degraded first to other chemically oxidizable compounds rather than directly to biodegradable products. Catalytic wet oxidation (CWO) of lignin-containing waters is briefly presented at the end of the dissertation. Two completely different catalysts were used: a commercial Pt catalyst and waste power plant ash. CWO showed good performance: using 1 g/L of residual ash gave a lignin removal of 86% and a COD removal of 39% at 150 °C (a lower temperature and pressure than with WO). It was noted that the ash catalyst caused a remarkable removal rate for lignin degradation already during the preheating: at 'zero' time, 58% of the lignin was degraded. In general, wet oxidation is not recommended for use as a complete mineralization method, but as a pre-treatment phase to eliminate toxic or poorly biodegradable components and to reduce the high level of organics. Biological treatment is an appropriate post-treatment method since easily biodegradable organic matter remains after the WO process. The combination of wet oxidation with subsequent biological treatment can be an effective option for the treatment of lignin-containing waters.
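Kinetic models of the kind compared above track the decay of lumped water-quality measures over reaction time. The sketch below fits a much simpler, single first-order expression to the decay of COD only, as an illustration; the concentration-time data are hypothetical, and the actual thesis models track COD, BOD, TOC and lignin simultaneously.

```python
# Minimal sketch: fitting a first-order lumped rate expression to the decay of
# COD during wet oxidation, with a non-oxidizable residual level.
import numpy as np
from scipy.optimize import curve_fit

t_min = np.array([0, 15, 30, 60, 90, 120])          # reaction time (min)
cod   = np.array([4.0, 3.3, 2.8, 2.2, 1.9, 1.7])    # COD (g O2 / L), made up

def first_order(t, cod0, cod_res, k):
    """COD decays first-order towards a residual (slowly oxidizable) level."""
    return cod_res + (cod0 - cod_res) * np.exp(-k * t)

(cod0, cod_res, k), _ = curve_fit(first_order, t_min, cod, p0=[4.0, 1.0, 0.02])
print(f"k = {k:.3f} 1/min, residual COD = {cod_res:.2f} g/L")
```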
Abstract:
In the theoretical part, the different polymerisation catalysts are introduced and the phenomena related to mixing in a stirred tank reactor are presented. The advantages and challenges related to scale-up are also discussed. The aim of the applied part was to design and implement an intermediate-sized reactor useful for scale-up studies. The reactor setup was tested by making one batch of Ziegler–Natta polypropylene catalyst. The catalyst preparation with the designed equipment setup succeeded, and the catalyst was analysed so that its properties could be compared to the normal properties of a Ziegler–Natta polypropylene catalyst. The total titanium content of the catalyst was slightly higher than in a normal Ziegler–Natta polypropylene catalyst, but the magnesium and aluminium contents of the catalyst were at the normal level. By adjusting the siphonation tube and adding one washing step, the titanium content of the catalyst could be decreased. The particle size of the catalyst was small, but the activity was in the normal range. The size of the catalyst particles could be increased by decreasing the stirring speed. During the test run, it was noticed that some improvements to the designed equipment setup could be made. For example, more valves need to be added to the chemical feed line to ensure inert conditions during the catalyst preparation. In addition, the nitrogen supply for the reactor needs to be separated from the other nitrogen line; with this change, the pressure in the reactor can be kept as desired during the catalyst preparation. The proposals for improvements are presented in the applied part. After these improvements have been made, the equipment setup is ready for start-up. A computational fluid dynamics model for the designed reactor was produced in cooperation with Lappeenranta University of Technology. The experiments showed that for adequate mixing with one impeller, a stirring speed of 600 rpm is needed. The computational fluid dynamics model with two impellers showed that there was no difference in the mixing efficiency whether the upper impeller was pumping downwards or upwards.
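For a stirred-tank setup such as the one described, two quick hand checks often accompany CFD work: the impeller Reynolds number (flow regime) and the power draw from a power-number correlation. The sketch below uses the 600 rpm figure from the study, but all fluid properties, the impeller diameter and the power number are assumed values, not data from the designed reactor.

```python
# Minimal sketch: impeller Reynolds number and power draw for a stirred tank,
# using the correlation P = Np * rho * N^3 * D^5.
rho = 900.0      # slurry density (kg/m^3), assumed
mu  = 5.0e-3     # viscosity (Pa*s), assumed
D   = 0.10       # impeller diameter (m), assumed
Np  = 4.0        # impeller power number, assumed

rpm = 600.0                      # stirring speed found adequate in the CFD study
N   = rpm / 60.0                 # revolutions per second

Re = rho * N * D**2 / mu         # impeller Reynolds number
P  = Np * rho * N**3 * D**5      # power drawn by one impeller (W)
print(f"Re = {Re:.0f} (turbulent above roughly 10^4), P = {P:.1f} W")
```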
Abstract:
The problem of understanding how humans perceive the quality of a reproduced image is of interest to researchers in many fields related to vision science and engineering: optics and material physics, image processing (compression and transfer), printing and media technology, and psychology. A measure for visual quality cannot be defined without ambiguity because it is ultimately the subjective opinion of an "end-user" observing the product. The purpose of this thesis is to devise computational methods to estimate the overall visual quality of prints, i.e. a numerical value that combines all the relevant attributes of the perceived image quality. The problem is limited to the perceived quality of printed photographs from the viewpoint of a consumer, and moreover, the study focuses only on digital printing methods, such as inkjet and electrophotography. The main contributions of this thesis are two novel methods to estimate the overall visual quality of prints. In the first method, the quality is computed as a visible difference between the reproduced image and the original digital (reference) image, which is assumed to have ideal quality. The second method utilises instrumental print quality measures, such as colour densities, measured from printed technical test fields, and connects the instrumental measures to the overall quality via subjective attributes, i.e. attributes that directly contribute to the perceived quality, using a Bayesian network. Both approaches were evaluated and verified with real data, and shown to predict the subjective evaluation results well.
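One elementary building block of the reference-versus-reproduction comparison described above is a pixelwise colour difference. The sketch below computes a mean CIE76 Delta E between two images assumed to be available in CIELAB coordinates; the thesis methods are considerably richer than this, and the pixel values here are invented.

```python
# Minimal sketch: mean CIE76 colour difference (Delta E) between a reference
# image and a scan of the print, both given as H x W x 3 CIELAB arrays.
import numpy as np

def mean_delta_e(lab_reference: np.ndarray, lab_print: np.ndarray) -> float:
    """Average Euclidean distance in Lab space over all pixels."""
    diff = lab_reference - lab_print
    return float(np.mean(np.sqrt(np.sum(diff ** 2, axis=-1))))

# Hypothetical 2 x 2 pixel example in Lab space.
reference = np.array([[[50.0, 10.0, 10.0], [60.0, 0.0, 0.0]],
                      [[70.0, -5.0, 5.0], [80.0, 0.0, 20.0]]])
printed   = reference + np.array([2.0, -1.0, 1.5])   # uniform colour shift
print(f"mean Delta E = {mean_delta_e(reference, printed):.2f}")
```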
Abstract:
The amount of installed wind power has been growing exponentially during the past ten years. As wind turbines have become a significant source of electrical energy, the interactions between the turbines and the electric power network need to be studied more thoroughly than before. In particular, the behavior of the turbines in fault situations is of prime importance; simply disconnecting all wind turbines from the network during a voltage drop is no longer acceptable, since this would contribute to a total network collapse. These requirements have contributed to the increased role of simulations in the study and design of the electric drive train of a wind turbine. When planning a wind power investment, the selection of the site and the turbine is crucial for the economic feasibility of the installation. Economic feasibility, on the other hand, is the factor that determines whether or not investment in wind power will continue, contributing to green electricity production and the reduction of emissions. In the selection of the installation site and the turbine (siting and site matching), the properties of the electric drive train of the planned turbine have so far generally not been taken into account. Additionally, although the loss minimization of some of the individual components of the drive train has been studied, the drive train as a whole has received less attention. Furthermore, as a wind turbine will typically operate at a power level lower than the nominal most of the time, efficiency analysis in the nominal operating point alone is not sufficient. This doctoral dissertation attempts to combine the two aforementioned areas of interest by studying the applicability of time domain simulations in the analysis of the economic feasibility of a wind turbine. The utilization of a general-purpose time domain simulator, otherwise applied to the study of network interactions and control systems, in the economic analysis of the wind energy conversion system is studied. The main benefits of the simulation-based method over traditional methods based on analytic calculation of losses include the ability to reuse and recombine existing models; the ability to analyze interactions between the components and subsystems in the electric drive train (something which is impossible when considering different subsystems as independent blocks, as is commonly done in the analytical calculation of efficiencies); the ability to analyze in a rather straightforward manner the effect of selections other than physical components, for example control algorithms; and the ability to verify assumptions about the effects of a particular design change on the efficiency of the whole system. Based on the work, it can be concluded that differences between two configurations can be seen in the economic performance with only minor modifications to the simulation models used in the network interaction and control method studies. This eliminates the need to develop analytic expressions for losses and enables the study of the system as a whole instead of modeling it as a series connection of independent blocks with no loss interdependencies. Three example cases (site matching, component selection, and control principle selection) are provided to illustrate the usage of the approach and analyze its performance.
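The point about part-load operation can be illustrated with a back-of-the-envelope energy-yield estimate: annual production from a Weibull wind-speed distribution, a power curve and a load-dependent drive-train efficiency. All curves and site parameters below are illustrative assumptions, not results or models from the dissertation.

```python
# Minimal sketch: annual energy production with a load-dependent drive-train
# efficiency, showing why efficiency at the nominal point alone is not enough.
import numpy as np

k, A = 2.0, 7.5                   # Weibull shape and scale (m/s) of the site, assumed
v = np.linspace(0.0, 25.0, 501)   # wind speed grid (m/s)
pdf = (k / A) * (v / A) ** (k - 1) * np.exp(-(v / A) ** k)

# Simple power curve: cut-in 3 m/s, rated 12 m/s at 2 MW, cut-out 25 m/s.
p_mech = np.clip((v - 3.0) / (12.0 - 3.0), 0.0, 1.0) ** 3 * 2.0e6
p_mech[v < 3.0] = 0.0

# Drive-train efficiency falls off at part load (assumed shape).
load = p_mech / 2.0e6
eta = np.where(load > 0, 0.95 - 0.10 * np.exp(-5.0 * load), 0.0)

p_el = p_mech * eta
aep_MWh = np.sum(p_el * pdf) * (v[1] - v[0]) * 8760.0 / 1.0e6
print(f"Estimated annual energy production: {aep_MWh:.0f} MWh")
```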
Abstract:
This work examines the ways in which a regional public network can be implemented. The starting point of the work is the view that in the information society, access to the network is a basic requirement. Accordingly, nearly every home should have the possibility to connect, and to stay continuously connected, to the data network. The Lappeenranta model defines a way to implement the basic services of a regional public network. A special feature of the model is the possibility of presenting announcements to the users of the network. The work evaluates the suitability of the Lappeenranta model as a way of implementing a regional public network and measures the performance of the model. As part of the work, an interconnection point belonging to the Lappeenranta model is implemented for the use of Lappeenranta University of Technology.
Abstract:
The purpose of this thesis is to investigate projects funded in the European 7th Framework Programme Information and Communication Technology work programme. The research has been limited to the challenge "Pervasive and trusted network and service infrastructure", and the aim is to find out the most important topics on which research will concentrate in the future. The thesis will provide important information for the Department of Information Technology at Lappeenranta University of Technology. First, the thesis investigates the requirements for the projects which were funded under the "Pervasive and trusted network and service infrastructure" programme in 2007. Second, the projects funded under the "Pervasive and trusted network and service infrastructure" programme are listed in tables and the most important keywords are gathered. Finally, based on the keyword appearances, a vision of the most important future topics is defined. According to the keyword analysis, wireless networks will play an important role in the future, and core networks will be implemented with fiber technology to ensure fast data transfer. Software development favors Service Oriented Architecture (SOA) and open source solutions. Interoperability and ensuring privacy are in a key role in the future. 3D in all its forms and content delivery are important topics as well. When all the projects were compared, the most important topic was found to be SOA, which leads the way to cloud computing.
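The keyword analysis described above amounts to counting keyword occurrences across the funded projects and ranking the topics. A minimal sketch follows; the project names and keyword lists are invented for illustration.

```python
# Minimal sketch: ranking topics by how often each keyword appears across projects.
from collections import Counter

projects = {
    "PROJECT-A": ["SOA", "cloud computing", "interoperability"],
    "PROJECT-B": ["wireless networks", "privacy", "SOA"],
    "PROJECT-C": ["fiber", "content delivery", "3D"],
    "PROJECT-D": ["SOA", "open source", "wireless networks"],
}

counts = Counter(kw for kws in projects.values() for kw in kws)
for keyword, n in counts.most_common(5):
    print(f"{keyword}: appears in {n} project(s)")
```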
Abstract:
Supersonic axial turbine stages typically exhibit lower efficiencies than subsonic axial turbine stages. One reason for the lower efficiency is the occurrence of shock waves. With higher pressure ratios, the flow inside the turbine quite easily becomes supersonic if there is only one turbine stage. Supersonic axial turbines can be designed in a smaller physical size compared to subsonic axial turbines of the same power. This makes them good candidates for turbochargers in large diesel engines, where space can be a limiting factor. The production costs are also lower for a supersonic axial turbine stage than for two subsonic stages. Since supersonic axial turbines are typically low-reaction turbines, they also create lower axial forces to be compensated with bearings compared to high-reaction turbines. The effect of changing the stator-rotor axial gap in a small, high (rotational) speed supersonic axial flow turbine is studied at design and off-design conditions. The effect of using a pulsatile mass flow at the supersonic stator inlet is also studied. Five axial gaps (the axial space between stator and rotor) are modeled using three-dimensional computational fluid dynamics at the design conditions, and three axial gaps at the off-design conditions. Numerical reliability is studied in three independent studies. An additional measurement is made with the design turbine geometry at intermediate off-design conditions and is used to increase the reliability of the modelling. All numerical modelling is made with the Navier-Stokes solver Finflo employing Chien's k–ε turbulence model. The modelling of the turbine at the design and off-design conditions shows that the total-to-static efficiency of the turbine decreases when the axial gap is increased at both design and off-design conditions. The efficiency drops almost linearly at the off-design conditions, whereas the efficiency drop accelerates with increasing axial gap at the design conditions. The modelling of the turbine stator with pulsatile inlet flow reveals that the mass flow pulsation amplitude is decreased at the stator throat. The stator efficiency and pressure ratio have sinusoidal shapes as a function of time. A hysteresis-like behaviour is detected for the stator efficiency and pressure ratio as a function of inlet mass flow over one pulse period. This behaviour arises from the pulsatile inlet flow. It is important to have the smallest possible axial gap in the studied turbine type in order to maximize the efficiency. The results for the whole turbine can also be applied to some extent to similar turbines operating, for example, in space rocket engines. The use of a supersonic stator with a pulsatile inlet flow is shown to be possible.
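Total-to-static efficiency, the figure of merit used above, compares the actual specific work to the ideal enthalpy drop from inlet total to outlet static conditions. A minimal sketch follows, assuming a perfect gas; the operating-point numbers are illustrative, not results of the Finflo simulations discussed above.

```python
# Minimal sketch: total-to-static efficiency of a turbine stage from inlet total
# conditions, outlet static pressure and the actual total-temperature drop.
gamma, cp = 1.4, 1005.0          # specific-heat ratio and cp (J/(kg*K)), assumed gas: air

T01, p01 = 950.0, 4.0e5          # inlet total temperature (K) and pressure (Pa), assumed
p2       = 1.0e5                 # outlet static pressure (Pa), assumed
T02      = 700.0                 # simulated/measured outlet total temperature (K), assumed

w_actual = cp * (T01 - T02)                                           # actual specific work
w_ideal  = cp * T01 * (1.0 - (p2 / p01) ** ((gamma - 1.0) / gamma))   # ideal total-to-static drop
eta_ts   = w_actual / w_ideal
print(f"total-to-static efficiency = {eta_ts:.3f}")
```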