Abstract:
Fuzzy subsets and fuzzy subgroups are basic concepts in fuzzy mathematics. We concentrate on fuzzy subgroups, dealing with some of their algebraic, topological and complex-analytic properties. The explorations are theoretical and belong to pure mathematics. One of our aims is to show how widely fuzzy subgroups can be used in mathematics, which brings out the richness of this concept. In complex analysis we focus on Möbius transformations, combining them with fuzzy subgroups in both the algebraic and the topological sense. We also survey MV spaces, with and without a link to fuzzy subgroups. The spectral space is a known concept in MV algebra; we are interested in its topological properties in MV-semilinear space. Later on, we study MV algebras in connection with Riemann surfaces. The Riemann surface as a concept belongs to complex analysis, while Möbius transformations form part of the theory of Riemann surfaces. Overall, this work gives a good understanding of how different fields of mathematics can be fitted together.
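For orientation, the two central objects combined in the thesis can be recalled as follows (standard definitions, not notation taken from this work): a fuzzy subgroup of a group $G$ is a fuzzy subset whose membership function respects the group operations, and a Möbius transformation is a fractional linear map of the extended complex plane:

$$ \mu : G \to [0,1], \quad \mu(xy) \ge \min\{\mu(x),\mu(y)\}, \quad \mu(x^{-1}) \ge \mu(x); \qquad f(z) = \frac{az+b}{cz+d}, \quad ad-bc \neq 0. $$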
Abstract:
Rosin is a natural product from pine forests and it is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids, and especially Ca and Ca/Mg resinates find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear solution viscosity increase during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the non-linear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line resinate process monitoring. In the kinetic studies, two main reaction steps were observed during the syntheses. First, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C. Rosin oil is formed during the decarboxylation reaction step, causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation reaction step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and the solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses over a wide resinate concentration region, over a wide range of viscosity values and at different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Owing to the lower reaction temperature compared with traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
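A minimal sketch of the kind of multivariate PLS calibration described above, relating spectra to acid value. The data shapes, the simulated spectra and the number of latent variables are assumptions for illustration only; the actual MIR/NIR calibration data of the thesis are not reproduced here.

```python
# Hypothetical PLS calibration of acid value against FTIR spectra.
# The spectra and reference values below are simulated stand-ins for real data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 60, 400
spectra = rng.normal(size=(n_samples, n_wavenumbers))        # stand-in absorbance spectra
acid_value = spectra[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(spectra, acid_value, random_state=0)

pls = PLSRegression(n_components=5)   # number of latent variables, normally chosen by cross-validation
pls.fit(X_train, y_train)
print("R^2 on held-out spectra:", pls.score(X_test, y_test))
```

In practice the model would be validated against titrated reference acid values before being used for on-line monitoring.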
Abstract:
Induction motors are widely used in industry, and they are generally considered very reliable. They often have a critical role in industrial processes, and their failure can lead to significant losses as a result of shutdown times. Typical failures of induction motors can be classified into stator, rotor, and bearing failures. One of the causes of bearing damage, and eventually bearing failure, is bearing currents. Bearing currents in induction motors can be divided into two main categories: classical bearing currents and inverter-induced bearing currents. Bearing damage caused by bearing currents results, for instance, from electrical discharges that take place through the lubricant film between the raceways of the inner and outer rings and the rolling elements of a bearing. This phenomenon can be considered similar to that of electrical discharge machining, where material is removed by a series of rapidly recurring electrical arcing discharges between an electrode and a workpiece. This thesis concentrates on bearing currents with special reference to bearing current detection in induction motors. A bearing current detection method based on radio frequency impulse reception and detection is studied. The thesis describes how a motor can work as a “spark gap” transmitter and discusses a discharge in a bearing as a source of radio frequency impulses. It is shown that a discharge occurring due to bearing currents can be detected at a distance of several meters from the motor. The issues of interference, detection, and location techniques are discussed. The applicability of the method is shown with a series of measurements with a specially constructed test motor and an unmodified frequency-converter-driven motor. The radio frequency method studied provides a nonintrusive way to detect harmful bearing currents in the drive system. If bearing current mitigation techniques are applied, their effectiveness can be immediately verified with the proposed method. The method also gives a tool to estimate the harmfulness of the bearing currents by making it possible to detect and locate individual discharges inside the bearings of electric motors.
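A minimal, purely illustrative sketch of impulse detection by amplitude thresholding of a sampled signal; the synthetic noise, the injected spikes and the sample rate are assumptions, and the receiver front end and antenna studied in the thesis are not modelled.

```python
# Detect impulse-like events in a sampled signal by simple amplitude thresholding.
import numpy as np

rng = np.random.default_rng(1)
fs = 1_000_000                                   # assumed sample rate, Hz
signal = rng.normal(scale=0.05, size=fs // 100)  # 10 ms of background noise
signal[[1200, 4800, 7700]] += 1.0                # injected discharge-like impulses

threshold = 6 * np.std(signal)                   # detection threshold relative to the noise floor
events = np.flatnonzero(np.abs(signal) > threshold)
print("impulses detected at t =", events / fs, "s")
```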
Abstract:
The role of transport in the economy is twofold. As a sector of economic activity it contributes a share of national income. On the other hand, improvements in transport infrastructure create room for accelerated economic growth. As a means to support railways as a safe and environmentally friendly transportation mode, EU legislation has required the opening of domestic railway freight to competition from the beginning of 2007. The importance of railways as a mode of transport has been great in Finland, as a larger share of freight has been carried on rails than in Europe on average. In this thesis it is claimed that the efficiency of goods transport can be enhanced by service-specific investments. Furthermore, it is stressed that simulation can and should be used to evaluate the cost-efficiency of transport systems at the operational level, as well as to assess transportation infrastructure investments. In all the studied cases notable efficiency improvements were found. For example, in distribution, home delivery of groceries can be almost twice as cost-efficient as the current practice of visiting the store. The majority of the cases concentrated on railway freight. In timber transportation, the item with the largest annual transport volume in domestic railway freight in Finland, the transportation cost could be reduced most substantially. Also in international timber procurement, the utilization of railway wagons could be improved by combining complementary flows. The efficiency improvements also have positive environmental effects; a large part of road transit could be moved to rails annually. If the impacts of freight transport are included in the cost-benefit analysis of railway investments, an increase of up to 50% in the net benefits of the evaluated alternatives can be observed, avoiding a possible inbuilt bias in the assessment framework and thus increasing the efficiency of national investments in transportation infrastructure. Transportation systems are a typical example of complex real-world systems that cannot be analysed realistically by analytical methods, whereas simulation allows the inclusion of dynamics and the level of detail required. The view of simulation as a viable tool for assessing the efficiency of transportation systems also finds support in the international survey conducted among railway freight operators: operators use operations research methods widely for planning purposes, while simulation is applied only by the larger operators.
Abstract:
This research focuses on the career experiences of women managers in the IT industry in China and Finland, two countries with different cultures, policies, population sizes, and social and economic structures regarding work-life support and equal opportunities. The object of this research is to present a cross-cultural comparison of women's career experiences and of how women themselves understand and account for their careers. The study explores how the macro and micro levels of cultural and social processes become manifested in the lives of individual women. The main argument in this thesis is that culture plays a crucial role in making sense of women's career experiences, although its role should be understood through its interrelationship with other social processes, e.g., institutional relations, social policies, industrial structures and organizations, as well as globalization. The interrelationship of a series of cultural and social processes affects individuals' attitudes to, and arrangement and organization of, their work and family lives. This thesis consists of two parts. The first part introduces the research topic and discusses the overall results. The second part comprises five research papers. The main research question of the study is: How do cultural and social processes affect the experiences of women managers? Quantitative and qualitative research methods, which include in-depth interviews, Q-methodology, interpretive analysis, and questionnaires, are used in the study. The main theoretical background is culturally sensitive career theory and the theory of individual differences. The results of this study are viewed through a feminist lens. The research methodology applied allows new explorations of how demographic factors, work experiences, lifestyle issues, and organizational cultures can jointly affect women's managerial careers. The sample group used in the research consists of 42 women managers working in IT companies in China (21) and Finland (21). The results of the study illustrate the impact of history, tradition, culture, institutional relations, social policies, industry and organizations, and globalization on the careers of women managers. It is claimed that the role of culture (cultural norms within nations and organizations) is of great importance in the relationship between gender and work. Women's managerial careers are affected by multiple factors (personal, social and cultural) reflecting national and inter-individual differences. The results of the study contribute to research on careers, adding particularly to the literature on gender, work and culture, and offering a complex and holistic perspective for a richer understanding of pluralism and global diversity. The results of the study indicate how old and new career perspectives are evidenced among women managers in the IT industry. The research further contributes to an understanding of women's managerial careers from a cross-cultural perspective. In addition, the study contributes to the literature on culture and extends the understanding of Hofstede's work. Further, most traditional career theories do not recognize the importance of culture in shaping an individual's career experience, and this study enriches the understanding of women managers' careers and has considerable implications for international human resource management. The results of this study emphasize the need, when discussing women managers' careers, to understand the ways in which gendering is produced rather than merely examining gender differences.
It is argued that the meaning of self-knowledge is critical. Further, the environments in which the careers under study develop differ greatly; China and Finland are very different culturally, historically and socially. The findings of this study should, therefore, be understood as holistic, specific, and contextually bound.
Abstract:
The last two decades have seen rapid change in the global economic and financial situation; the economic conditions in many small and large underdeveloped countries started to improve, and they became recognized as emerging markets. This led to growth in global investments in these countries, partly spurred by expectations of higher returns, favorable risk-return opportunities, and better diversification alternatives for global investors. This process, however, has not been without problems, and it has emphasized the need for more information on these markets. In particular, the liberalization of financial markets around the world, the globalization of trade and companies, the recent formation of economic and regional blocks, and the rapid development of underdeveloped countries during the last two decades have brought a major challenge to the financial world and researchers alike. This doctoral dissertation studies one of the largest emerging markets, namely Russia. The motivation for investigating the Russian equity market includes, among other factors, its sheer size, rapid and robust economic growth since the turn of the millennium, future prospects for international investors, and a number of major financial reforms implemented since the early 1990s. Another feature of the Russian economy that motivates the study is Russia's 1998 financial crisis, considered one of the worst crises in recent times, affecting both developed and developing economies. Therefore, special attention has been paid to Russia's 1998 financial crisis throughout this dissertation. The thesis covers the period from the birth of the modern Russian financial markets to the present day; special attention is given to international linkages and the 1998 financial crisis. The study first identifies the risks associated with the Russian market and then deals with their pricing issues. Finally, some insights into portfolio construction within the Russian market are presented. The first research paper of this dissertation considers the linkage of the Russian equity market to the world equity market by examining the international transmission of Russia's 1998 financial crisis utilizing the GARCH-BEKK model proposed by Engle and Kroner. The empirical results show evidence of a direct linkage between the Russian equity market and the world market in terms of both returns and volatility. However, the weakness of the linkage suggests that the Russian equity market was only partially integrated into the world market, even though contagion can be clearly seen during the crisis period. The second and third papers, co-authored with Mika Vaihekoski, investigate whether global, local and currency risks are priced in the Russian stock market from a US investor's point of view. Furthermore, the dynamics of these sources of risk are studied, i.e., whether the prices of the global and local risk factors are constant or time-varying. We utilize the multivariate GARCH-M framework of De Santis and Gérard (1998). Similarly to them, we find the price of global market risk to be time-varying. Currency risk is also found to be priced and highly time-varying in the Russian market. Moreover, our results suggest that the Russian market is partially segmented and that local risk is also priced in the market.
The model also implies that the biggest impact on the US market risk premium comes from the world risk component, whereas the Russian risk premium is on average driven mostly by the local and currency components. The purpose of the fourth paper is to look at the relationship between the stock and bond markets of Russia. The objective is to examine whether the correlations between the two asset classes are time-varying, using multivariate conditional volatility models. The Constant Conditional Correlation model by Bollerslev (1990), the Dynamic Conditional Correlation model by Engle (2002), and an asymmetric version of the Dynamic Conditional Correlation model by Cappiello et al. (2006) are used in the analysis. The empirical results do not support the assumption of constant conditional correlation; there is clear evidence of time-varying correlations between the Russian stock and bond markets, and both asset markets exhibit positive asymmetries. The implications of the results in this dissertation are useful for both companies and international investors who are interested in investing in Russia. Our results give useful insights to those involved in minimising or managing financial risk exposures, such as portfolio managers, international investors, risk analysts and financial researchers. When portfolio managers aim to optimize the risk-return relationship, the results indicate that, at least in the case of Russia, one should account for the local market as well as currency risk when calculating the key inputs for the optimization. In addition, the pricing of exchange rate risk implies that exchange rate exposure is partly non-diversifiable and investors are compensated for bearing the risk. Likewise, the international transmission of stock market volatility can profoundly influence corporate capital budgeting decisions, investors' investment decisions, and other business cycle variables. Finally, the weak integration of the Russian market and the low correlations between the Russian stock and bond markets offer international investors good opportunities to diversify their portfolios.
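A crude two-step illustration of estimating a time-varying stock-bond correlation: univariate GARCH(1,1) fits followed by a rolling correlation of the standardized residuals. This is only a sketch on simulated data using the third-party arch package; it is not the GARCH-BEKK, GARCH-M or DCC estimations actually carried out in the dissertation.

```python
# Two-step proxy for a dynamic stock-bond correlation on simulated return series.
import numpy as np
import pandas as pd
from arch import arch_model   # third-party package: pip install arch

rng = np.random.default_rng(2)
n = 1000
stock = pd.Series(rng.normal(scale=1.2, size=n))                # placeholder stock returns
bond = pd.Series(0.3 * stock + rng.normal(scale=0.6, size=n))   # placeholder bond returns

def standardized_residuals(returns: pd.Series) -> pd.Series:
    """Fit a univariate GARCH(1,1) and return its standardized residuals."""
    result = arch_model(returns, mean="Constant", vol="Garch", p=1, q=1).fit(disp="off")
    return pd.Series(result.std_resid, index=returns.index)

z = pd.DataFrame({"stock": standardized_residuals(stock),
                  "bond": standardized_residuals(bond)})
rolling_corr = z["stock"].rolling(window=250).corr(z["bond"])   # ~ one trading year window
print(rolling_corr.dropna().describe())
```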
Abstract:
The thesis deals with the phenomenon of learning between organizations in innovation networks that develop new products, services or processes. Inter-organizational learning is studied especially at the level of the network. The role of the network can be seen as twofold: either the network is a context for inter-organizational learning, if the learner is something other than the network (an organization, group or individual), or the network itself is the learner. Innovations are regarded as a primary source of competitiveness and renewal in organizations. Networking has become increasingly common, particularly because of the possibility to extend the resource base of the organization through partnerships and to concentrate on core competencies. Especially in innovation activities, networks provide the possibility to answer the complex needs of the customers faster and to share the costs and risks of the development work. Networked innovation activities are often organized in practice as distributed virtual teams, either within one organization or as cross-organizational co-operation. The role of technology is considered in the research mainly as an enabling tool for collaboration and learning. Learning has been recognized as one important collaborative process in networks or as a motivation for networking. It is even more important in the innovation context as an enabler of renewal, since the essence of the innovation process is creating new knowledge, processes, products and services. The thesis aims to provide an enhanced understanding of the inter-organizational learning phenomenon in and by innovation networks, concentrating especially on the network level. The perspectives used in the research are the theoretical viewpoints and concepts, the challenges, and the solutions for learning. The methods used in the study are literature reviews and empirical research carried out with semi-structured interviews analysed with qualitative content analysis. The empirical research concentrates on two different areas: first, on the theoretical approaches to learning that are relevant to innovation networks, and second, on learning in virtual innovation teams. As a result, the research identifies insights and implications for learning in innovation networks from several viewpoints on organizational learning. Using multiple perspectives allows drawing a many-sided picture of the learning phenomenon, which is valuable because of the versatility and complexity of situations and challenges of learning in the context of innovation and networks. The research results also show some of the challenges of learning and possible solutions for supporting especially network-level learning.
Abstract:
Society's dependence on electricity has increased sharply during recent decades. Short and long interruptions in electricity distribution have demonstrated the vulnerability of society, and society tolerates disturbances in electricity distribution less and less. The valuation of the harm caused by interruptions has grown, and this has created the economic justification for investments that improve power quality. The medium-voltage lines in sparsely populated (rural) areas have been built as overhead lines and are therefore exposed to storm and snow-load damage caused by weather conditions. Climate change is predicted to increase windiness, and thus problems in electricity distribution may increase. In urban areas more cables are used and feeders are short, so there are fewer storm-related interruptions than in rural areas. The existing distribution networks will remain in use for decades, so alongside the development of new technology, the existing network and its maintenance must also be developed. In addition to improving reliability, the goal of maintenance is to ensure that the assets tied up in the distribution networks retain their value as well as possible until the end of their service life. Heavy investments were made in distribution networks in the 1950s-70s. Wooden poles from that era are still in use, and as they age the need for replacement investments grows. A positive aspect of this is that the existing distribution network does not have to be renewed prematurely in order to improve reliability. The main focus of the research is on the development of the 20 kV medium-voltage network in rural areas, since more than 90% of the interruptions experienced by customers are caused by faults in the medium-voltage network. Particular attention must be paid to line structures and to the routing of lines. In addition to reliability, the factors guiding the development of distribution networks are economy, consideration of the environment, regulatory supervision, and the expectations of customers and owners. In rural areas the economic challenges are great because of the declining permanent population and a possible decrease in electricity demand. Economy is emphasized and risks grow when revenues shrink in relation to the required network investments and maintenance costs. A contradiction arises from the fact that customers expect better reliability from electricity distribution but are hardly willing to pay more than at present for better power quality. Regulatory supervision may also slow down the development of distribution networks if revenues cannot be increased in proportion to the additional investment needs. The study analyses, at a general level, increased cabling, the use of taller poles, wide line corridors, the construction of simple low-cost substations in rural areas, and the addition of automation stations at nodes of the medium-voltage network. In particular, the study analyses, as a new technique, the possibility of using a 1000 V voltage level in the development of distribution networks. Moving power lines to roadsides improves reliability even if the lines are built with the same technology as the existing ones. With substations built in rural areas, long feeders can be divided into smaller supply areas, so that the harm caused by an interruption affects a smaller number of customers at a time. The same result is achieved with correctly located and implemented automation stations. According to the study, the adoption of a 1000 V voltage step alongside the 400 V low voltage is proving to be a promising technique for the development of distribution networks.
1000 V networks can be used to replace fault-prone short branch lines of the 20 kV medium-voltage network, less than five kilometres in length, and extensions of branch lines, where the power to be transferred is small. In the new distribution system, electricity is brought at 1000 V close to the customer, where the voltage is transformed to the normal 400/230 V suitable for customers. The cost advantage is based on the fact that the same low-voltage lines are used in construction as in the 400 V low-voltage network that runs to the customers. With the 1000 V distribution technique, both investment and maintenance costs are lower than with the traditional 20 kV overhead line technique. 1000 V lines also spare the landscape, since they do not require a wide line corridor like 20 kV medium-voltage lines. The use of 1000 V networks is therefore particularly suitable for the electrification of holiday homes in sensitive shore and lake landscapes. 1000 V networks make increased cable ploughing possible, and thus the use of environmentally harmful impregnated poles can be reduced. For 1000 V distribution networks, the results of the research have been applied at the Finnish utility Suur-Savon Sähkö Oy, and practical experience of the 1000 V distribution system has been gained from several dozen installations. The results show that underground cabling of the medium-voltage network in rural areas is not economically viable with the present values assigned to the harm caused by interruptions, but if the valuation of interruption costs grows, cabling will become profitable in many places. An increase in storminess and in storm-caused distribution interruptions would also make cabling profitable. In the future, the construction of distribution networks will be an increasingly multifaceted task in which, in addition to economy and reliability, customers, owners, authorities and the environment must be taken into account. Research on the development of distribution technology is still needed. The future development of rural distribution networks involves many uncertainties. The growth of distributed, building-specific electricity generation may make distribution networks less necessary than at present, but, for example, the electrification of transport may increase their importance. For this reason, flexibility is needed in the construction of distribution networks, so that different development directions can easily be accommodated when necessary.
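As a rough back-of-the-envelope illustration (not a calculation from the thesis) of why the higher voltage step is attractive: for the same conductor, power factor and transferred power, the load current scales inversely with voltage, so the resistive losses scale as $1/U^2$:

$$ P_{\mathrm{loss}} = 3RI^2, \qquad I = \frac{P}{\sqrt{3}\,U\cos\varphi} \;\Rightarrow\; \frac{P_{\mathrm{loss}}(1000\ \mathrm{V})}{P_{\mathrm{loss}}(400\ \mathrm{V})} = \left(\frac{400}{1000}\right)^2 = 0.16. $$

Equivalently, at the same load current the same low-voltage conductor carries 2.5 times the power at 1000 V as at 400 V.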
Abstract:
Nanofiltration performance was studied with effluents from the pulp and paper industry and with model substances. The effect of filtration conditions and membrane properties on nanofiltration flux, retention, and fouling was investigated. Generally, the aim was to determine the parameters that influence nanofiltration efficiency and to study how nanofiltration can be carried out without fouling by controlling these parameters. The retentions of the nanofiltration membranes studied were considerably higher than those of tight ultrafiltration membranes, and the permeate fluxes obtained were approximately the same as those of tight ultrafiltration membranes. Generally, retentions of about 80% for total carbon and conductivity were obtained during the nanofiltration experiments. Depending on the membrane and the filtration conditions, the retentions of monovalent ions (chloride) were between 80 and 95% in the nanofiltrations. An increase in pH improved retentions considerably and also the flux to some degree. An increase in pressure improved retention, whereas an increase in temperature decreased retention if the membrane retained the solute by the solution-diffusion mechanism. In this study, more open membranes fouled more than tighter membranes due to higher concentration polarization and plugging of the membrane material. More irreversible fouling was measured for hydrophobic membranes. Electrostatic repulsion between the membrane and the components in the solution reduced fouling but did not completely prevent it with the hydrophobic membranes. Nanofiltration could be carried out without fouling, at least with the laboratory-scale apparatus used here, when the flux was below the critical flux. The model substances showed a strong form of the critical flux, whereas the effluents showed only a weak form of the critical flux. With the effluents, some fouling always occurred immediately when the filtration was started. However, if the flux was below the critical flux, further fouling was not observed. The flow velocity and pH were probably the most important parameters, along with the membrane properties, that influenced the critical flux. Precleaning of the membranes had only a small effect on the critical flux and retentions, but it improved the permeability of the membranes significantly.
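For reference, the retention and flux figures quoted above follow the standard membrane-filtration definitions (generic notation, not specific to this thesis):

$$ R_{\mathrm{obs}} = 1 - \frac{c_p}{c_f}, \qquad J = \frac{1}{A}\,\frac{\mathrm{d}V}{\mathrm{d}t}, $$

where $c_p$ and $c_f$ are the permeate and feed concentrations, $A$ is the membrane area and $V$ the permeate volume; the critical flux is then the highest $J$ at which no fouling (strong form) or no further fouling (weak form) is observed.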
Abstract:
This study presents mathematical methods for the evaluation of retail performance with special regard to product sourcing strategies. Forecast accuracy, process lead time, offshore/local sourcing mix and up-front/replenishment buying mix are defined as critical success factors in connection with sourcing seasonal products with a fashion content. As success measures, this research focuses on service level, lost sales, product substitute percentage, gross margin, gross margin return on inventory and markdown rate. The accuracy of the demand forecast is found to be a fundamental success factor. Forecast accuracy depends on lead time. Lead times are traditionally long, and buying decisions are made seven to eight months prior to the start of the selling season. Forecast errors cause stockouts and lost sales. Some of the products bought for the selling season will not be sold and have to be marked down and sold at clearance, causing loss of gross margin. Gross margin percentage is not the best tool for evaluating sourcing decisions, and in the context of this study gross margin return on inventory, which combines profitability and asset management, is used. The findings of this research suggest that there are more profitable ways of sourcing products than buying them from low-cost offshore sources. Mixing up-front and in-season replenishment deliveries, especially when point-of-sale information is used for improving forecast accuracy, results in better retail performance. Quick Response and Vendor Managed Inventory strategies yield better results than traditional up-front buying from offshore even if local purchase prices are higher. Increasing the number of selling seasons, slight overbuying for the season in order to
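For clarity, the success measure above that combines profitability and asset management, gross margin return on inventory, follows the standard retail definition (the figures in the example are illustrative, not data from the study):

$$ \mathrm{GMROI} = \frac{\text{gross margin}}{\text{average inventory at cost}}, \qquad \text{e.g.}\quad \frac{400}{250} = 1.6. $$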
The effects of real time control of welding parameters on weld quality in plasma arc keyhole welding
Abstract:
Joints intended for welding frequently show variations in geometry and position, for which it is unfortunately not possible to apply a single set of operating parameters to ensure constant quality. The cause of this difficulty lies in a number of factors, including inaccurate joint preparation and joint fit-up, tack welds, and thermal distortion of the workpiece. In plasma arc keyhole welding of butt joints, deviations in the gap width may cause weld defects such as an incomplete weld bead, excessive penetration and burn-through. Manual adjustment of welding parameters to compensate for variations in the gap width is very difficult, and unsatisfactory weld quality is often obtained. In this study a control system for plasma arc keyhole welding was developed and used to study the effects of real-time control of welding parameters on gap tolerance during welding of austenitic stainless steel AISI 304L. The welding tests demonstrated the beneficial effect of real-time control on weld quality. Compared with welding using constant parameters, the maximum tolerable gap width with an acceptable weld quality was 47% higher when using the real-time controlled parameters for a plate thickness of 5 mm. In addition, burn-through occurred with significantly larger gap widths when the parameters were controlled in real time. Increased gap tolerance enables joints to be prepared and fitted up less accurately, saving time and preparation costs for welding. In addition to the control system, a novel technique for back face monitoring is described in this study. The test results showed that the technique could be successfully applied to penetration monitoring when welding non-magnetic materials. The results also imply that it is possible to measure the dimensions of the plasma efflux or weld root and use this information in a feedback control system and thus maintain the required weld quality.
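A minimal sketch of the kind of real-time parameter adjustment described above. It is purely illustrative: the proportional gain, the nominal values, the clamping limits and the mapping from gap width to welding current are assumptions, not the control system built in the thesis.

```python
# Illustrative proportional adjustment of welding current to the measured gap width.
# All numerical values are hypothetical.

NOMINAL_CURRENT_A = 160.0   # plasma current for the nominal gap
NOMINAL_GAP_MM = 0.5
GAIN_A_PER_MM = -40.0       # reduce current as the gap widens, to avoid burn-through

def adjusted_current(gap_mm: float) -> float:
    """Return a welding-current set point for the measured gap width."""
    current = NOMINAL_CURRENT_A + GAIN_A_PER_MM * (gap_mm - NOMINAL_GAP_MM)
    # Clamp to a plausible operating window of the power source.
    return max(100.0, min(200.0, current))

if __name__ == "__main__":
    for gap in (0.3, 0.5, 1.0, 1.5):
        print(f"gap {gap:.1f} mm -> current {adjusted_current(gap):.0f} A")
```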
Abstract:
The amphiphilic nature of metal extractants causes the formation of micelles and other microscopic aggregates when in contact with water and an organic diluent. These phenomena and their effects on metal extraction were studied using carboxylic acid (Versatic 10) and organophosphorus acid (Cyanex 272) based extractants. Special emphasis was laid on the study of phase behaviour in the pre-neutralisation stage, when the extractant is transformed into a sodium or ammonium salt form. The pre-neutralised extractants were used to extract nickel and to separate cobalt and nickel. Phase diagrams corresponding to the pre-neutralisation stage in a metal extraction process were determined. The maximal solubilisation of the components in the system water(NH3)/extractant/isooctane takes place when the molar ratio between the ammonium salt form and the free form of the extractant is 0.5 for the carboxylic acid and 1 for the organophosphorus acid extractant. These values correspond to the complex stoichiometries of NH4A·HA and NH4A, respectively. When such a solution is contacted with water, a microemulsion is formed. If the aqueous phase also contains metal ions (e.g. Ni²+), complexation takes place at the microscopic interface of the micellar aggregates. Experimental evidence was obtained showing that the initial stage of nickel extraction with pre-neutralised Versatic 10 is a fast pseudohomogeneous reaction. About 90% of the metal was extracted in the first 15 s after the initial contact. For nickel extraction with pre-neutralised Versatic 10 it was found that the highest metal loading and the lowest residual ammonia and water contents in the organic phase are achieved when the feeds are balanced so that the stoichiometry is 2 NH4+(org) = Ni2+(aq). In the case of Co/Ni separation using pre-neutralised Cyanex 272, the highest separation is achieved when the Co/extractant molar ratio in the feeds is 1:4 and, at the same time, the optimal degree of neutralisation of the Cyanex 272 is about 50%. The adsorption of the extractants on solid surfaces may cause accumulation of fine solid particles at the interface between the aqueous and organic phases in metal extraction processes. Copper extraction processes are known to suffer from this problem. Experiments were carried out using model silica and mica particles. It was found that high copper loading, aromaticity of the diluent, modification agents and the presence of an aqueous phase decrease the adsorption of the hydroxyoxime on silica surfaces.
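Written as an overall reaction, cation-exchange extraction with the ammonium-salt form of the extractant is consistent with the 2:1 NH4+/Ni2+ feed stoichiometry mentioned above (a standard textbook representation, with HA denoting the acidic extractant):

$$ \mathrm{Ni^{2+}(aq) + 2\,NH_4A(org) \longrightarrow NiA_2(org) + 2\,NH_4^{+}(aq)}. $$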
Abstract:
The general striving to bring down the number of municipal landfills and to increase the reuse and recycling of waste-derived materials across the EU supports the debates concerning the feasibility and rationality of waste management systems. A substantial decrease in the volume and mass of landfill-disposed waste flows can be achieved by directing suitable waste fractions to energy recovery. Global fossil energy supplies are becoming ever more valuable and expensive energy sources for mankind, and efforts have been made to save fossil fuels. Waste-derived fuels offer one potential partial solution to two different problems. First, waste that cannot be feasibly re-used or recycled is utilized in the energy conversion process according to the EU's Waste Hierarchy. Second, fossil fuels can be saved for purposes other than energy, mainly as transport fuels. This thesis presents the principles of assessing the most sustainable system solution for an integrated municipal waste management and energy system. The assessment process includes: · formation of a SISMan (Simple Integrated System Management) model of an integrated system, including mass, energy and financial flows, and · formation of a MEFLO (Mass, Energy, Financial, Legislational, Other decision-support data) decision matrix according to the selected decision criteria, including essential and optional decision criteria. The methods are described and theoretical examples of their utilization are presented in the thesis. The assessment process involves the selection of different system alternatives (process alternatives for the treatment of different waste fractions) and comparison between the alternatives. The first of the two novelty values of the presented methods is the perspective selected for the formation of the SISMan model. Normally, waste management and energy systems are operated separately according to the targets and principles set for each system. In the thesis, the waste management and energy supply systems are considered as one larger integrated system with the primary target of serving the customers, i.e. citizens, as efficiently as possible in the spirit of sustainable development, including the following requirements: · reasonable overall costs, including waste management costs and energy costs; · minimum environmental burdens caused by the integrated waste management and energy system, taking into account the requirement above; and · social acceptance of the selected waste treatment and energy production methods. The integrated waste management and energy system is described by forming a SISMan model including the three different flows of the system: energy, mass and financial flows. By defining these three types of flows for an integrated system, the factor results needed in the decision-making process for selecting waste treatment processes for different waste fractions can be calculated. The model and its results form a transparent description of the integrated system under discussion. The MEFLO decision matrix is formed from the results of the SISMan model, combined with additional data including, e.g., environmental restrictions and regional aspects. System alternatives which do not meet the requirements set by legislation can be eliminated from the comparisons before any closer numerical considerations. The second novelty value of this thesis is the three-level ranking method for combining the factor results of the MEFLO decision matrix.
As a result of the MEFLO decision matrix, a transparent ranking of the different system alternatives, including the selection of treatment processes for different waste fractions, is achieved. SISMan and MEFLO are methods meant to be utilized in municipal decision-making processes concerning waste management and energy supply as simple, transparent and easy-to-understand tools. The methods can be utilized in the assessment of existing systems, and particularly in the planning processes of future regional integrated systems. The principles of SISMan and MEFLO can also be utilized in other environments where synergies can be obtained by integrating two (or more) systems. The SISMan flow model and the MEFLO decision matrix can be formed with or without any applicable commercial or free-of-charge tool/software. SISMan and MEFLO are not bound to any libraries or databases containing process information, such as the emission data libraries utilized in life cycle assessments.
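A minimal sketch of how a decision matrix of the MEFLO type could be evaluated numerically. The criteria, weights, scores and alternative names are invented for illustration, and the simple weighted sum below is not the three-level ranking method developed in the thesis.

```python
# Hypothetical weighted ranking of waste-treatment system alternatives.
# All criteria, weights and scores are invented for illustration only.

criteria_weights = {"cost": 0.4, "emissions": 0.3, "social_acceptance": 0.3}

# Scores on a 0-10 scale (higher is better); alternatives failing a
# legislative requirement would be excluded before scoring.
alternatives = {
    "landfill_with_separate_collection": {"cost": 7, "emissions": 3, "social_acceptance": 5},
    "energy_recovery_of_rejects":        {"cost": 5, "emissions": 7, "social_acceptance": 6},
    "maximised_material_recycling":      {"cost": 4, "emissions": 8, "social_acceptance": 8},
}

def weighted_score(scores: dict) -> float:
    """Combine criterion scores into a single figure with the chosen weights."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name in sorted(alternatives, key=lambda a: weighted_score(alternatives[a]), reverse=True):
    print(f"{name}: {weighted_score(alternatives[name]):.2f}")
```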
Abstract:
Synchronous machines with an AC converter are used mainly in large drives, for example in ship propulsion drives as well as in rolling mill drives in the steel industry. These motors are used because of their high efficiency, high overload capacity and good performance in the field weakening area. Present-day drives for electrically excited synchronous motors are equipped with position sensors, and most such drives will be equipped with position sensors also in the future. Drives of this kind, with good dynamics, are mainly used in the metal industry. Drives without a position sensor can be used e.g. in ship propulsion and in large pump and blower drives. Nowadays, these drives are equipped with a position sensor, too. The tendency is to avoid a position sensor if possible, since a sensor reduces the reliability of the drive and increases costs (the latter is not very significant for large drives). A new control technique for a synchronous motor drive is a combination of the Direct Flux Linkage Control (DFLC) based on a voltage model and a supervising method (e.g. a current model). This combination is called the Direct Torque Control method (DTC). In the case of a position sensorless drive, the DTC can be implemented by using other supervising methods that keep the stator flux linkage origin centered. In this thesis, a method for the observation of the drift of the real stator flux linkage in the DTC drive is introduced. It is also shown how this method can be used as a supervising method that keeps the stator flux linkage origin centered in the DTC. In the position sensorless case, a synchronous motor can be started up with DTC control when the method for the determination of the initial rotor position presented in this thesis is used. The load characteristics of such a drive are not very good at low rotational speeds. Furthermore, continuous operation at zero speed and at low rotational speeds is not possible, which is partly due to problems related to the flux linkage estimate. For operation in the low-speed region, a stator current control method based on the DFLC modulator (DMCC) is presented. With the DMCC, it is possible to start up and operate a synchronous motor at zero speed and at low rotational speeds in general. The DMCC is necessary in situations where high torque (e.g. nominal torque) is required at the starting moment, or if the motor runs for several seconds at zero speed or in a low speed range (up to 2 Hz). The behaviour of the described methods is demonstrated with test results. The test results are presented for a direct flux linkage and torque controlled test drive system with a 14.5 kVA, four-pole salient-pole synchronous motor with a damper winding and electric excitation. The static accuracy of the drive is verified by measuring the torque in static load operation, and the dynamics of the drive are demonstrated in load transient tests. The performance of the drive concept presented in this work is sufficient e.g. for ship propulsion and for large pump drives. Furthermore, the developed methods are almost independent of the machine parameters.
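For reference, the voltage model on which the DFLC is based estimates the stator flux linkage by integrating the stator voltage corrected for the resistive drop, and the electromagnetic torque follows from the flux-linkage and current space vectors (standard space-vector expressions, not notation taken from the thesis):

$$ \boldsymbol{\psi}_s = \int \left( \boldsymbol{u}_s - R_s \boldsymbol{i}_s \right)\mathrm{d}t, \qquad T_e = \frac{3}{2}\,p\,\boldsymbol{\psi}_s \times \boldsymbol{i}_s, $$

where $p$ is the number of pole pairs; drift in this open integration is precisely what the supervising method described above must correct.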