Abstract:
Active magnetic bearings have recently been developed intensively because noncontact support offers several advantages over conventional bearings. Owing to improved materials, control strategies, and electrical components, the performance and reliability of active magnetic bearings are improving. However, additional bearings, called retainer bearings, still have a vital role in applications of active magnetic bearings. The most critical moment when the retainer bearings are needed is when the rotor drops from the active magnetic bearings onto the retainer bearings because of a component or power failure. Without appropriate knowledge of the retainer bearings, there is a chance that a drop-down situation will be fatal to an active magnetic bearing supported rotor system. This study introduces a detailed simulation model of a rotor system in order to describe a rotor drop-down situation onto the retainer bearings. The introduced simulation model couples a finite element model with component mode synthesis and detailed bearing models. Electrical components and electromechanical forces are not the focus of this study. The research examines the theoretical background of the finite element method with component mode synthesis, which can be used in the dynamic analysis of flexible rotors. The retainer bearings are described by two ball bearing models, which include damping and stiffness properties, the oil film, the inertia of the rolling elements and the friction between the races and the rolling elements. The first bearing model assumes that the cage of the bearing is ideal and holds the balls precisely in their predefined positions. The second bearing model extends the first and describes the behavior of a cageless bearing. In the bearing model, each ball is described by two degrees of freedom. The models introduced in this study are verified against a corresponding actual structure. Using the verified bearing models, the effects of the parameters of the rotor system on its dynamics during emergency stops are examined. As shown in this study, the misalignment of the retainer bearings has a significant influence on the behavior of the rotor system in a drop-down situation. A stability map of the rotor system as a function of the rotational speed of the rotor and the misalignment of the retainer bearings is presented. In addition, the effects of the parameters of the simulation procedure and of the rotor system on the dynamics of the system are studied.
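The abstract describes retainer-bearing models in which each ball carries two degrees of freedom and the ball-race contact contributes stiffness, damping and friction. The thesis's own models are not reproduced here; the following is a minimal, hypothetical sketch of how a nonlinear Hertzian-type ball-race contact force with viscous damping is commonly evaluated in rotor drop simulations. All parameter values and the 3/2 contact exponent are illustrative assumptions, not values from the thesis.

```python
def ball_race_contact_force(delta, delta_dot, k_c=1.0e9, c=500.0, n=1.5):
    """Hypothetical Hertzian-type contact force for one ball-race contact.

    delta      : contact deformation [m] (positive when surfaces overlap)
    delta_dot  : deformation rate [m/s]
    k_c        : contact stiffness coefficient (illustrative value)
    c          : viscous damping coefficient [Ns/m] (illustrative value)
    n          : contact exponent, 3/2 for point (ball) contact
    """
    if delta <= 0.0:                       # no force while the contact is open
        return 0.0
    return k_c * delta**n + c * delta_dot  # elastic + dissipative part

# Example: radial drop of a rotor onto a retainer bearing with a clearance gap
clearance  = 0.15e-3                       # air gap to the retainer bearing [m]
radial_pos = 0.17e-3                       # instantaneous rotor eccentricity [m]
radial_vel = 0.3                           # approach velocity [m/s]

delta = radial_pos - clearance             # penetration into the race
print(f"contact force: {ball_race_contact_force(delta, radial_vel):.1f} N")
```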
Abstract:
This doctoral study investigated the effect of physicochemical conditions and operating parameters on the fractionation of cheese whey. The literature part discusses the environmental impact of whey, the utilization of whey, and the treatment of whey with membrane technology. The experimental part is divided into two sections, the first dealing with ultrafiltration and the second with nanofiltration in cheese whey fractionation. The ultrafiltration membrane was selected on the basis of its cut-off value, which was determined with polyethylene glycol solutions under conditions in which concentration polarization does not interfere with the measurement. The critical flux concept was used to find a suitable protein concentration for the ultrafiltration experiments, since whey proteins are known to be membrane-fouling substances. The ultrafiltration experiments examined the permeation of the various whey components through the membrane and the properties affecting it. The peptide fractions of the whey permeates were analyzed by size-exclusion chromatography and MALDI-TOF mass spectrometry. The average pore size of the nanofiltration membranes used in the experiments was analyzed with neutral solutes, and the zeta potentials were determined by streaming potential measurements. Amino acids were used as model substances when studying the significance of pore size and charge in the separation. The retention of the amino acids was affected by pH, the ionic strength of the solution, and intermolecular interactions. The permeate produced in the ultrafiltration of whey, which contained small peptides, lactose and salts, was nanofiltered at acidic and alkaline pH. In nanofiltration carried out under alkaline conditions, less fouling occurred and the permeate flux was better. Under alkaline conditions, the selectivity in separating lactose from the peptides was also better than under acidic conditions.
Abstract:
Despite the rapid change in today's business environment, there are relatively few studies about corporate renewal. This study aims for its part at filling that research gap by studying the concepts of strategy, corporate renewal, innovation and corporate venturing. Its purpose is to enhance our understanding of how established companies operating in a dynamic and global environment can benefit from their corporate venturing activities. The theoretical part approaches the research problem at the corporate and venture levels. Firstly, it focuses on mapping the determinants of strategy and suggests using industry, location, resources, knowledge, structure and culture, market, technology and business model to assess the environment and using these determinants to optimize the speed and magnitude of change. Secondly, it concludes that the choice of innovation strategy depends on the type and dimensions of the innovation and suggests assessing the market, technology and business model, as well as the novelty and complexity related to each of them, when choosing an optimal context for developing innovations further. Thirdly, it directs attention to the processes through which corporate renewal takes place. At the corporate level these processes are identified as strategy formulation, strategy formation and strategy implementation. At the venture level the renewal processes are identified as learning, leveraging and nesting. The theoretical contribution of this study, the framework of strategic corporate venturing, joins corporate- and venture-level management issues together and concludes that strategy processes and linking processes are the mechanism through which continuous corporate renewal takes place. The framework of strategic corporate venturing proposed by this study is a new way to illustrate the role of corporate venturing as a purposefully built, different view of a company's business environment. The empirical part extended the framework by enhancing our understanding of the link between corporate renewal and corporate venturing in its real-life environment in three Finnish companies: Metso, Nokia and TeliaSonera. Characterizing the companies' environments with the determinants of strategy identified in this study provided a structured way to analyze their competitive position and the renewal challenges that they are facing. More importantly, the case studies confirmed that a link between corporate renewal and corporate venturing exists and found that the link is not as straightforward as indicated by the theory. Furthermore, the case studies enhanced the framework by indicating a sequence according to which the processes work. Firstly, the induced strategy processes, strategy formulation and strategy implementation, set the scene for the corporate venturing context and management processes and leave strategy formation to the venture. Only after that can strategies formed by ventures come back to the corporate level, and, if found viable at the corporate level, be formalized through formulation and implementation. With the help of the framework of strategic corporate venturing, the link between corporate renewal and corporate venturing can be found and managed. The suggested response to the continuous need for change is continuous renewal, i.e. institutionalizing corporate renewal in the strategy processes of the company.
As far as benefiting from venturing is concerned, the answer lies in deliberately managing venturing in a context different from the mainstream businesses and in establishing efficient linking processes to exploit the renewal potential of individual ventures.
Abstract:
In this thesis, the sorption and elastic properties of cation-exchange resins were studied to explain the liquid chromatographic separation of carbohydrates. Na+, Ca2+ and La3+ forms of strong poly(styrene-co-divinylbenzene) (SCE) as well as Na+ and Ca2+ forms of weak acrylic (WCE) cation-exchange resins at different cross-link densities were treated in this work. The focus was on the effects of water-alcohol mixtures, mostly aqueous ethanol, and of the carbohydrates. The carbohydrates examined were rhamnose, xylose, glucose, fructose, arabinose, sucrose, xylitol and sorbitol. In addition to linear chromatographic conditions, non-linear conditions more typical of industrial applications were studied. Both experimental and modeling aspects were covered. The aqueous alcohol sorption on the cation-exchangers was experimentally determined and theoretically calculated. The sorption model includes elastic parameters, which were obtained from sorption data combined with elasticity measurements. As hydrophilic materials, cation-exchangers are water selective and shrink when an organic solvent is added. At a certain deswelling degree the elastic resins go through a glass transition and become a glass-like material. An increasing cross-link level and counterion valence decrease the sorption of solvent components in water-rich solutions. The cross-linkage or the counterions have less effect on the water selectivity than the resin type or the alcohol used. The amount of water sorbed is higher in the WCE resin and, moreover, the WCE resin is more water selective than the corresponding SCE resin. The increased aliphatic part of lower alcohols tends to increase the water selectivity, i.e. the resins are more water selective in 2-propanol than in ethanol solutions. Both the sorption behavior of carbohydrates and the sorption differences between carbohydrates are considerably affected by the eluent composition and the resin characteristics. The carbohydrate sorption was experimentally examined and modeled. In all cases, the sorption and, moreover, the separation of carbohydrates are dominated by three phenomena: partition, ligand exchange and size exclusion. The sorption of hydrophilic carbohydrates increases when alcohol is added to the eluent or when the carbohydrate is able to form coordination complexes with the counterions, especially with multivalent counterions. Decreasing polarity of the eluent enhances the complex stability. The size-exclusion effect is more prominent when the resin becomes tighter or the carbohydrate size increases. On the other hand, the differences in elution volumes between different-sized carbohydrates decrease with decreasing polarity of the eluent. The chromatographic separation of carbohydrates was modeled using rhamnose and xylose as target molecules. The thermodynamic sorption model was successfully implemented in the rate-based column model. The experimental chromatographic data were fitted using only one adjustable parameter. In addition to the fitted data, simulated data were also generated and utilized in explaining the effects of the eluent composition and of the resin characteristics on the carbohydrate separation.
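The abstract states that a thermodynamic sorption model was implemented in a rate-based column model and fitted with a single adjustable parameter. The thesis's actual model is not reproduced here; below is a minimal, generic sketch of a rate-based column: axial convection discretized into tanks in series, a linear isotherm standing in for the thermodynamic sorption model, and a linear-driving-force mass-transfer term as the single rate parameter. All symbols and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed, not from the thesis)
n_cells = 50          # axial discretization (tanks-in-series)
eps     = 0.4         # bed voidage [-]
F       = (1 - eps) / eps
K       = 0.6         # linear sorption equilibrium constant, q* = K*c
k_ldf   = 2.0         # linear-driving-force mass-transfer coefficient [1/s]
tau     = 1.0         # residence time per cell [s]
t_pulse = 5.0         # duration of the injected pulse [s]
c_feed  = 1.0         # feed concentration during the pulse [g/L]

def rhs(t, y):
    c, q = y[:n_cells], y[n_cells:]
    c_in = np.empty(n_cells)
    c_in[0]  = c_feed if t < t_pulse else 0.0
    c_in[1:] = c[:-1]
    dq = k_ldf * (K * c - q)            # mass transfer into the solid phase
    dc = (c_in - c) / tau - F * dq      # convection + exchange with the solid
    return np.concatenate([dc, dq])

sol = solve_ivp(rhs, (0.0, 200.0), np.zeros(2 * n_cells),
                t_eval=np.linspace(0.0, 200.0, 400), method="BDF")
outlet = sol.y[n_cells - 1]             # concentration leaving the last cell
print("peak outlet concentration:", outlet.max())
print("retention time of peak [s]:", sol.t[outlet.argmax()])
```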
Abstract:
Woven monofilament, multifilament, and spun yarn filter media have long been the standard media in liquid filtration equipment. While the energy for a solid-liquid separation process is determined by the engineering work, it is the interface between the slurry and the equipment, the filter medium, that greatly affects the performance characteristics of the unit operation. Those skilled in the art are well aware that a poorly designed filter medium may endanger the whole operation, whereas well-performing filter media can make the operation smooth and economical. As mineral and pulp producers seek to produce ever finer and more refined fractions of their products, it is becoming increasingly important to be able to dewater slurries with average particle sizes around 1 µm using conventional, high-capacity filtration equipment. Furthermore, the surface properties of the media must not allow sticky and adhesive particles to adhere to the media. The aim of this thesis was to test how the dirt-repellency, electrical resistance and high-pressure filtration performance of selected woven filter media can be improved by modifying the fabric or yarn with coating, chemical treatment and calendering. The results achieved by chemical surface treatments clearly show that the surface properties of woven media can be modified to achieve lower electrical resistance and improved dirt-repellency. The main challenge with the chemical treatments is abrasion resistance and, while the experimental results indicate that the treatment is sufficiently permanent to resist standard weathering conditions, it may still prove to be inadequately strong in terms of actual use. From the pressure filtration studies in this work, it seems obvious that the conventional woven multifilament fabrics still perform surprisingly well against the coated media in terms of filtrate clarity and cake build-up. Especially in cases where the feed slurry concentration was low and the pressures moderate, the conventional media seemed to outperform the coated media. In the cases where the feed slurry concentration was high, the tightly woven media performed well against the monofilament reference fabrics, but seemed to do worse than some of the coated media. This result is somewhat surprising in that the high initial specific resistance of the coated media would suggest that the media will blind more easily than the plain woven media. The results indicate, however, that it is actually the woven media that gradually clog during the course of filtration. In conclusion, it seems obvious that there is a pressure limit above which the woven media lose their capacity to keep the solid particles from penetrating the structure. This finding suggests that for extreme pressures the only foreseeable solution is coated fabrics supported by a woven fabric strong enough to hold the structure together. Having said that, the high-pressure filtration process seems to follow somewhat different laws than the more conventional processes. Based on the results, it may well be that the role of the cloth is above all to support the cake, and the main performance-determining factor is a long lifetime. Measuring the pore size distribution with a commercially available porometer gives a fairly accurate picture of the pore size distribution of a fabric, but fails to give insight into which of the pore sizes is the most important in determining the flow through the fabric.
Historically, air, and sometimes water, permeability measures have been the standard in evaluating media filtration performance, including particle retention. Permeability, however, is a function of a multitude of variables and does not directly allow the estimation of the effective pore size. In this study a new method for estimating the effective pore size and open pore area in a densely woven multifilament fabric was developed. The method combines a simplified equation for the electrical resistance of a fabric with the Hagen-Poiseuille flow equation to estimate the effective pore size of a fabric and the total open area of pores. The results are validated by comparison to the measured values of the largest pore size (bubble point) and the average pore size. The results show good correlation with the measured values. However, the measured and estimated values tend to diverge in high weft density fabrics. This phenomenon is thought to be a result of the more tortuous flow path of denser fabrics, and could most probably be cured by using another value for the tortuosity factor.
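The abstract outlines a method that combines a simplified expression for the electrical resistance of an electrolyte-filled fabric with the Hagen-Poiseuille equation to estimate the effective pore size and total open pore area, but the exact equations are not given. The sketch below is one plausible reconstruction under simple assumptions (parallel electrolyte-filled channels, cylindrical pores, no tortuosity correction) and is not the thesis's formulation; all numerical values are hypothetical.

```python
import math

# Hypothetical measured quantities (illustrative values only)
rho_e    = 0.09      # electrolyte resistivity [ohm*m]
t        = 0.8e-3    # fabric thickness [m]
R_fabric = 0.5       # electrical resistance of the wetted fabric sample [ohm]
mu       = 1.0e-3    # liquid viscosity [Pa*s]
dp       = 1.0e4     # pressure difference over the fabric [Pa]
Q        = 2.0e-6    # measured volumetric flow rate [m^3/s]

# Open pore area: the wetted fabric is modelled as parallel electrolyte-filled
# channels of length t, so R_fabric = rho_e * t / A_open.
A_open = rho_e * t / R_fabric

# Effective pore radius: Hagen-Poiseuille flow through N cylindrical pores,
# Q = N*pi*r^4*dp / (8*mu*t), with N*pi*r^2 = A_open, gives
# r = sqrt(8*mu*t*Q / (A_open*dp)).
r_eff = math.sqrt(8.0 * mu * t * Q / (A_open * dp))

print(f"open pore area       : {A_open*1e6:.1f} mm^2")
print(f"effective pore radius: {r_eff*1e6:.2f} um")
```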
Abstract:
The changing business environment demands that chemical industrial processes be designed so that they enable the attainment of multi-objective requirements and the enhancement of innovative design activities. The requirements and key issues of conceptual process synthesis have changed and are no longer those of conventional process design; there is an increased emphasis on innovative research to develop new concepts, novel techniques and processes. A central issue, how to enhance the creativity of the design process, requires further research into methodologies. The thesis presents a conflict-based methodology for conceptual process synthesis. The motivation of the work is to support decision-making in design and synthesis and to enhance the creativity of design activities. It deals with the multi-objective requirements and the combinatorially complex nature of process synthesis. The work is carried out based on a new concept and design paradigm adapted from the Theory of Inventive Problem Solving (TRIZ) methodology. TRIZ is claimed to be a 'systematic creativity' framework thanks to its knowledge-based and evolutionary-directed nature. The conflict concept, when applied to process synthesis, throws new light on design problems and activities. The conflict model is proposed as a way of describing design problems and handling design information. Design tasks are represented as groups of conflicts, and a conflict table is built as the design tool. A general design paradigm is formulated to handle conflicts in both the early and the detailed design stages. The methodology developed reflects the conflict nature of process design and synthesis. The method is implemented and verified through case studies of distillation system design, reactor/separator network design and waste minimization. Handling the various levels of conflicts evolves possible design alternatives through a systematic procedure that establishes an efficient and compact solution space for the detailed design stage. The approach also provides the information needed to bridge the gap between the application of qualitative knowledge in the early stage and quantitative techniques in the detailed design stage. Enhancement of creativity is realized through the better understanding of design problems gained from the conflict concept and through the improvement in engineering design practice brought by the systematic nature of the approach.
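The abstract describes representing design tasks as groups of conflicts and using a conflict table as the design tool, without detailing the table itself. The sketch below only illustrates, with invented entries, how a TRIZ-style conflict table could be held as a simple data structure that maps a pair of conflicting design requirements to candidate resolution principles; it does not reproduce the thesis's conflict table or the TRIZ contradiction matrix.

```python
# Hypothetical conflict table: (improving requirement, worsening requirement)
# -> candidate resolution principles. Entries are illustrative only.
conflict_table = {
    ("product purity", "energy consumption"):
        ["change operating conditions", "introduce an intermediate recycle"],
    ("conversion", "reactor volume"):
        ["segment the reactor", "combine reaction and separation"],
    ("separation sharpness", "capital cost"):
        ["use a dividing-wall column", "re-sequence the separation train"],
}

def resolve(improving: str, worsening: str) -> list[str]:
    """Return candidate principles for a design conflict, if tabulated."""
    return conflict_table.get((improving, worsening),
                              ["no entry: analyse the conflict manually"])

# A design task expressed as a group of conflicts
design_task = [("product purity", "energy consumption"),
               ("conversion", "reactor volume")]
for conflict in design_task:
    print(conflict, "->", resolve(*conflict))
```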
Abstract:
The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged and various parallel and distributed environments have been designed and implemented. Each of these environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purposes of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread, so the scheduling does not affect the execution of the parallel application. Performance results achieved with MPIT show considerable improvements over conventional MPI applications.
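The abstract introduces the MPIT paradigm, which combines MPI with threads and dedicates a thread to communication so that communication and computation overlap. The sketch below is not the MPIT implementation; it is a minimal, generic illustration of the same overlap idea using mpi4py and a Python thread, with a trivial workload, exactly two ranks assumed, and hypothetical buffer contents. It needs an MPI launcher, e.g. mpirun -n 2 python overlap.py.

```python
# Generic illustration of overlapping communication and computation with a
# dedicated communication thread (not the MPIT code from the thesis).
import threading
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank                      # assumes exactly two ranks

send_buf = np.full(1_000_000, rank, dtype=np.float64)
recv_buf = np.empty_like(send_buf)

def communicate():
    """Dedicated communication thread: exchange data with the peer rank."""
    req_s = comm.Isend(send_buf, dest=peer, tag=0)
    req_r = comm.Irecv(recv_buf, source=peer, tag=0)
    MPI.Request.Waitall([req_s, req_r])

comm_thread = threading.Thread(target=communicate)
comm_thread.start()

# Main thread computes on local data while the exchange is in flight.
local = np.sin(np.arange(2_000_000) * 1e-4)
partial = local.sum()

comm_thread.join()                   # communication finished
total = partial + recv_buf.sum()     # now safe to use the received data
print(f"rank {rank}: combined result = {total:.3f}")
```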
Abstract:
The aim of this thesis was to produce information for the estimation of the flow balance of wood resin in mechanical pulping and to demonstrate the possibilities for improving the efficiency of deresination in practice. It was observed that chemical changes in wood resin take place only during peroxide bleaching, that a significant amount of water-dispersed wood resin is retained in the pulp mat during dewatering, and that the amount of wood resin in the solid phase of the process filtrates is very small. On the basis of this information, three parameters related to the behaviour of wood resin determine the flow balance in the process: (1) the liberation of wood resin into the pulp water phase, (2) the retention of water-dispersed wood resin in dewatering, and (3) the proportion of wood resin degraded in peroxide bleaching. The effect of different factors on these parameters was evaluated with the help of laboratory studies and a literature survey. In addition, information related to the values of these parameters in existing processes was obtained in mill measurements. With the help of this information, it was possible to evaluate the deresination efficiency, and the effect of different factors on this efficiency, in a pulping plant that produced low-freeness mechanical pulp. This evaluation showed that the wood resin content of mechanical pulp can be significantly decreased if the process includes a peroxide bleaching stage and a subsequent washing stage. In the case of an optimal process configuration, a deresination efficiency as high as 85 percent seems to be possible at a water usage level of 8 m3/o.d.t.
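The abstract identifies three parameters that determine the flow balance of wood resin: the fraction liberated into the pulp water phase, the retention of dispersed resin in dewatering, and the proportion degraded in peroxide bleaching. As an illustration only, the sketch below combines these three fractions in a simplified single-pass balance to give an overall deresination efficiency; the split structure and the numbers are assumptions, not the thesis's mill balance.

```python
def deresination_efficiency(liberated, retained_in_dewatering, degraded_in_bleaching):
    """Simplified single-pass resin balance (illustrative assumptions).

    liberated              : fraction of wood resin released into the water phase
    retained_in_dewatering : fraction of the dispersed resin kept in the pulp mat
    degraded_in_bleaching  : fraction of resin destroyed in peroxide bleaching
    Returns the fraction of the incoming wood resin removed from the pulp.
    """
    # Resin degraded chemically in the bleaching stage
    removed_by_bleaching = degraded_in_bleaching
    # Of the remaining resin, only the liberated, non-retained part leaves
    # with the filtrate in the washing/dewatering stage
    removed_with_filtrate = ((1 - degraded_in_bleaching)
                             * liberated * (1 - retained_in_dewatering))
    return removed_by_bleaching + removed_with_filtrate

# Hypothetical parameter values
print(f"{deresination_efficiency(0.7, 0.3, 0.5):.0%} of the wood resin removed")
```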
Abstract:
Numerical weather prediction and climate simulation have been among the computationally most demanding applications of high performance computing ever since they were started in the 1950s. Since the 1980s, the most powerful computers have featured an ever larger number of processors; by the early 2000s, this number was often several thousand. An operational weather model must use all these processors in a highly coordinated fashion. The critical resource in running such models is not computation but the amount of communication necessary between the processors. The communication capacity of parallel computers often falls far short of their computational power. The articles in this thesis cover fourteen years of research into how to harness thousands of processors for a single weather forecast or climate simulation, so that the application can benefit as much as possible from the power of parallel high performance computers. The results attained in these articles have already been widely applied, so that currently most of the organizations that carry out global weather forecasting or climate simulation anywhere in the world use methods introduced in them. Some further studies extend the parallelization opportunities to other parts of the weather forecasting environment, in particular to the data assimilation of satellite observations.
Abstract:
The main objective of this thesis was to generate better filtration technologies for the effective production of pure starch products, and thereby to optimise filtration sequences using the created models and to synthesise the theories of the different filtration stages that are suitable for starches. First, the structure and characteristics of the different starch grades are introduced, and each starch grade is shown to have special characteristics. These are taken as the basis for understanding the differences in the behaviour of the different native starch grades and their modifications in pressure filtration. Next, the pressure filtration process is divided into stages, which are filtration, cake washing, compression dewatering and displacement dewatering. Each stage is considered individually in its own chapter. The order of the suitable combinations of the process stages is studied, as well as the proper durations and pressures of the stages. The principles of the theory of each stage are reviewed, the methods for monitoring the progress of each stage are presented, and finally, the modelling of the stages is introduced. The experimental results obtained from the different stages of the starch filtration tests are given, and the suitability of the theories and models for starch filtration is shown. Finally, the theories and the models are gathered together, and it is shown that the analysis of the whole starch pressure filtration process can be performed with the software developed.
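The abstract divides pressure filtration into stages (filtration, cake washing, compression dewatering, displacement dewatering) whose theory and models are reviewed, but the thesis's software is not reproduced in the abstract. As a generic illustration of the filtration-stage model only, the sketch below integrates the classical constant-pressure cake filtration relation dt/dV = (mu*alpha*c/(A^2*dP))*V + mu*Rm/(A*dP); all parameter values are assumed.

```python
import numpy as np

# Classical constant-pressure cake filtration (illustrative parameter values)
mu    = 1.0e-3    # filtrate viscosity [Pa*s]
alpha = 1.0e11    # specific cake resistance [m/kg]
c     = 50.0      # solids deposited per unit filtrate volume [kg/m^3]
A     = 0.05      # filter area [m^2]
dP    = 3.0e5     # applied pressure difference [Pa]
Rm    = 1.0e10    # filter-medium resistance [1/m]

V = np.linspace(0.0, 5e-3, 100)          # cumulative filtrate volume [m^3]

# Integrating dt/dV = (mu*alpha*c/(A^2*dP))*V + mu*Rm/(A*dP) from 0 to V:
t = (mu * alpha * c / (2 * A**2 * dP)) * V**2 + (mu * Rm / (A * dP)) * V

print(f"time to collect {V[-1]*1e3:.1f} L of filtrate: {t[-1]:.0f} s")
```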
Abstract:
Over the past few decades, turbulent change has characterized the situation in the media industry. It has been noted that digitalization and new media are strongly influencing the industry: this is changing the existing market dynamics and requires new strategies. Prior research on the impact of digitalization and the Internet has emphasized news-focused media such as newspaper publishing and broadcasting, yet magazine publishing is very seldom the focus of the research. This study examines how the Internet impacts magazine publishing. The work presents a multi-level analysis of the role and impact of the Internet on magazine products, companies and the industry. The study is founded on strategic management, technology management and media economics literature. The study consists of two parts. The first part introduces the research topic and discusses the overall results of the study. The second part comprises five research publications. Qualitative research methods are used throughout. The results of the study indicate that the Internet has not had a disruptive effect on magazine publishing, and that its strategic implications could rather be considered complementary to the print magazine and the business as a whole. It seems that the co-specialized assets, together with market-related competencies and an unchanged core competence, have protected established firms from the disruptive effect of the new technology in magazine publishing. In addition, it seems that the Internet offers a valuable possibility to build and nourish customer relationships. The study contributes to media management and economics research by moving from product- or industry-level investigations towards a strategic-management perspective.
Abstract:
The purpose of this dissertation is to increase the understanding and knowledge of field sales management control systems (i.e. sales managers' monitoring, directing, evaluating and rewarding activities) and their potential consequences for salespeople. This topic is important because past research has indicated that the choice of control system type can, on the one hand, have desirable consequences, such as high levels of motivation and performance, and, on the other hand, lead to harmful unintended consequences, such as opportunistic or unethical behaviors. Despite the fact that marketing and sales management control systems have been under rigorous research for over two decades, the field is still at a very early stage of development, and several inconsistencies can be found in the research results. This dissertation argues that these inconsistencies are mainly derived from misspecification of the level of analysis in past research. The different levels of analysis (i.e. strategic, tactical, and operational levels) involve very different decision-making situations regarding the control and motivation of the sales force, which should be taken into consideration when conceptualizing control. Moreover, the study of the salesperson consequences of a field sales management control system is actually a cross-level phenomenon, which means that at least two levels of analysis are involved simultaneously. The results of this dissertation confirm the need to re-conceptualize the field sales management control system concept. They provide empirical evidence for the assertion that control should be conceptualized in more detail at the tactical/operational level of analysis than at the strategic level of analysis. Moreover, the results show that some controls are more efficiently communicated to field salespeople than others. It is proposed that this difference is due to the different purposes of control: some controls are designed to influence salespersons' behavior (aiming at motivation), whereas some controls are designed to aid decision-making (aiming at providing information). According to the empirical results of this dissertation, both types of controls have an impact on the sales force, but this impact is not as strong as expected. The results obtained in this dissertation shed some light on the nature of field sales management control systems and their consequences for salespeople.
Abstract:
Small centrifugal compressors are more and more widely used in many industrial systems because of their higher efficiency and better off-design performance compared to piston and scroll compressors, as well as their higher work coefficient per stage than in axial compressors. Higher efficiency is always the aim of the compressor designer. In the present work, the influence of four parts of a small centrifugal compressor that compresses a heavy-molecular-weight real gas has been investigated in order to achieve higher efficiency. Two parts concern the impeller: the tip clearance and the circumferential position of the splitter blade. The other two parts concern the diffuser: the pinch shape and the vane shape. Computational fluid dynamics is applied in this study. The Reynolds-averaged Navier-Stokes flow solver Finflo is used with a quasi-steady approach, and Chien's k-ε turbulence model is used to model the turbulence. A new practical real gas model is presented in this study. The real gas model is easily generated, accuracy controllable and fairly fast. The numerical results and measurements show good agreement. The influence of tip clearance on the performance of a small compressor is obvious. The pressure ratio and efficiency decrease as the size of the tip clearance is increased, while the total enthalpy rise remains almost constant. The decrease in pressure ratio and efficiency is larger at higher mass flow rates and smaller at lower mass flow rates. The flow angles at the inlet and outlet of the impeller increase as the size of the tip clearance is increased. The results for the detailed flow field show that leaking flow is the main reason for the performance drop. The secondary flow region becomes larger as the size of the tip clearance is increased, the area of the main flow is compressed, and the flow uniformity is thereby decreased. A detailed study shows that the leaking flow rate is higher near the exit of the impeller than near its inlet. Based on this phenomenon, a new partially shrouded impeller is used, shrouded near the exit of the impeller. The results show that the flow field near the exit of the impeller is greatly changed by the partially shrouded impeller, and better performance is achieved than with the unshrouded impeller. The loading distribution on the impeller blade and the flow fields in the impeller are changed by moving the splitter of the impeller in the circumferential direction. Moving the splitter slightly towards the suction side of the long blade can improve the performance of the compressor. The total enthalpy rise is reduced if only the leading edge of the splitter is moved to the suction side of the long blade. The performance of the compressor is decreased if the blade is bent away from the radial direction at the leading edge of the splitter. The total pressure rise and the enthalpy rise of the compressor are increased if a pinch is used at the diffuser inlet. Among the five different pinch shape configurations, at the design and lower mass flow rates the efficiency of a straight-line pinch is the highest, while at higher mass flow rates the efficiency of a concave pinch is the highest. The sharp corner of the pinch is the main reason for the decrease in efficiency and should be avoided. The spanwise variation of the flow angles entering the diffuser is decreased if a pinch is applied. A three-dimensional low-solidity twisted vaned diffuser is designed to match the flow angles entering the diffuser.
The numerical results show that the pressure recovery in the twisted diffuser is higher than in a conventional low-solidity vaned diffuser, which also leads to a higher efficiency with the twisted diffuser. Investigation of the detailed flow fields shows that at lower mass flow rates separation occurs later in the twisted diffuser than in the conventional low-solidity vaned diffuser, which suggests a possibly wider flow range for the twisted diffuser.
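The abstract mentions a new practical real gas model for the heavy-molecular-weight working fluid but gives no details. The sketch below does not reproduce that model; it only illustrates, with the standard Redlich-Kwong equation of state, how a compressibility factor can be evaluated numerically for such a gas. The critical properties and operating point are hypothetical placeholders, chosen supercritical so the equation has a single volume root.

```python
from scipy.optimize import brentq

R = 8.314462618          # universal gas constant [J/(mol*K)]

def rk_pressure(T, v, Tc, Pc):
    """Redlich-Kwong equation of state, P(T, v) with molar volume v."""
    a = 0.42748 * R**2 * Tc**2.5 / Pc
    b = 0.08664 * R * Tc / Pc
    return R * T / (v - b) - a / (T**0.5 * v * (v + b))

def compressibility(T, P, Tc, Pc):
    """Solve RK for the molar volume at (T, P) and return Z = P*v/(R*T)."""
    v_ideal = R * T / P
    b = 0.08664 * R * Tc / Pc
    # Bracket the gas-like root between just above b and well beyond v_ideal
    v = brentq(lambda vv: rk_pressure(T, vv, Tc, Pc) - P, 1.01 * b, 10.0 * v_ideal)
    return P * v / (R * T)

# Hypothetical heavy-molecular-weight working fluid (placeholder critical data)
Tc, Pc = 425.0, 3.8e6    # critical temperature [K], critical pressure [Pa]
print(f"Z = {compressibility(450.0, 2.0e6, Tc, Pc):.3f}")
```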
Abstract:
The solid-rotor induction motor provides a mechanically and thermally reliable solution for demanding environments where other rotor solutions are prohibited or questionable. Solid rotors, which are manufactured from a single piece of ferromagnetic material, are commonly used in motors in which the rotation speeds substantially exceed the conventional speeds of laminated squirrel-cage rotors. During the operation of a solid-rotor electrical machine, the rotor core acts as a conductor for both the magnetic flux and the electrical current. This causes an increase in the rotor resistance and the rotor leakage inductance, which essentially decreases the power factor and the efficiency of the machine. The electromagnetic problems related to the solid-rotor induction motor are mostly associated with the low performance of the rotor. Therefore, the main emphasis in this thesis is put on solid steel rotor designs. The rotor designs studied in this thesis are based on the fact that the rotor construction should be extremely robust and reliable to withstand the high mechanical stresses caused by the rotational velocity of the rotor. In addition, the demanding operating environment sets requirements for the applied materials because of the high temperatures and the oxidizing acids that may be present in the cooling fluid. Therefore, the solid rotors analyzed in this thesis are made of a single piece of ferromagnetic material without any additional parts, such as copper end-rings or a squirrel cage. A pure solid rotor construction is rigid and able to keep its balance over a large speed range. It may also tolerate other environmental stresses such as corroding substances or abrasive particles. In this thesis, the main target is to improve the performance of an induction motor equipped with a solid steel rotor by traditional methods: by axial slitting of the rotor, by selecting a proper rotor core material and by coating the rotor with a high-resistivity stainless ferromagnetic material. In solid steel rotor calculation, the rotor end-effects have a significant effect on the rotor characteristics. Thus, emphasis is also put on the comparison of different rotor end-factors, and a corrective slip-dependent end-factor is proposed. The rotor designs covered in this thesis are the smooth solid rotor, the axially slitted solid rotor and the slitted rotor with a uniform ferromagnetic coating cylinder. The thesis aims at design rules for multi-megawatt machines. Typically, megawatt-size solid-rotor machines find their applications mainly in electric-motor-driven gas-compression systems, in steam-turbine applications, and in various types of large-power pump applications where high operational speeds are required. In this thesis, a 120 kW, 10 000 rpm solid-rotor induction motor is used as a small-scale model for such megawatt-range solid-rotor machines. The performance of the 120 kW solid-rotor induction motors is determined by experimental measurements and finite element calculations.
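The abstract explains that the solid rotor raises the rotor resistance and leakage inductance, lowering power factor and efficiency, and that different end-effect factors correct the rotor parameters. The thesis's end-factor formulation is not reproduced here; the sketch below only illustrates, with the standard per-phase steady-state equivalent circuit and hypothetical parameter values, how rotor resistance and leakage reactance propagate into power factor and efficiency at a given slip. The slip-dependent end-factor appears merely as a placeholder multiplier on the rotor resistance, and core and mechanical losses are neglected.

```python
# Hypothetical per-phase equivalent-circuit parameters (illustrative only)
U_phase = 230.0          # supply phase voltage [V]
R1, X1  = 0.35, 1.2      # stator resistance and leakage reactance [ohm]
R2, X2  = 0.9, 1.5       # referred rotor resistance and leakage reactance [ohm]
Xm      = 45.0           # magnetizing reactance [ohm]

def end_factor(slip):
    """Placeholder slip-dependent end-effect multiplier on rotor resistance."""
    return 1.0 + 0.3 * slip          # purely illustrative form

def operating_point(slip):
    Z_rotor = end_factor(slip) * R2 / slip + 1j * X2
    Z_mag   = 1j * Xm
    Z_total = R1 + 1j * X1 + (Z_mag * Z_rotor) / (Z_mag + Z_rotor)
    I1      = U_phase / Z_total                     # stator phase current
    P_in    = 3 * (U_phase * I1.conjugate()).real   # three-phase input power
    I2      = I1 * Z_mag / (Z_mag + Z_rotor)        # rotor branch current
    P_mech  = 3 * abs(I2)**2 * end_factor(slip) * R2 * (1 - slip) / slip
    pf      = P_in / (3 * U_phase * abs(I1))
    return pf, P_mech / P_in          # power factor, efficiency (no core losses)

for s in (0.01, 0.03, 0.06):
    pf, eta = operating_point(s)
    print(f"slip {s:.2f}:  power factor {pf:.3f},  efficiency {eta:.3f}")
```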
Abstract:
This thesis investigates factors that affect software testing practice. The thesis consists of empirical studies in which the affecting factors were analyzed and interpreted using quantitative and qualitative methods. First, the Delphi method was used to specify the scope of the thesis. Secondly, for the quantitative analysis, 40 industry experts from 30 organizational units (OUs) were interviewed. The survey method was used to explore the factors that affect software testing practice, and conclusions were derived using correlation and regression analysis. Thirdly, from these 30 OUs, five were selected for a further in-depth case study. The data was collected through 41 semi-structured interviews. The affecting factors and their relationships were interpreted with qualitative analysis using grounded theory as the research method. The practice of software testing was analyzed from the process improvement and knowledge management viewpoints. The qualitative and quantitative results were triangulated to increase the validity of the thesis. The results suggest that testing ought to be adjusted according to the business orientation of the OU; the business orientation affects the testing organization and the knowledge management strategy, and the business orientation and the knowledge management strategy affect outsourcing. As a special case, the complex relationship between testing schedules and knowledge transfer is discussed. The results of this thesis can be used in improving testing processes and knowledge management in software testing.