Abstract:
This work deals with the cooling of high-speed electric machines, such as motors and generators, through an air gap. It consists of numerical and experimental modelling of gas flow and heat transfer in an annular channel. Velocity and temperature profiles are modelled in the air gap of a high-speed test machine. Local and mean heat transfer coefficients and total friction coefficients are obtained for a smooth rotor-stator combination over a large velocity range. The aim is to solve the heat transfer numerically and experimentally. The FINFLO software, developed at Helsinki University of Technology, has been used for the flow solution, and the commercial IGG and FieldView programs for grid generation and post-processing. The annular channel is discretized as a sector mesh. Calculations are performed at a constant mass flow rate for six rotational speeds. The effect of turbulence is calculated using three turbulence models. The friction coefficient and velocity factor are obtained from the total friction power. The first part of the experimental section consists of finding the proper sensors and calibrating them in a straight pipe. After preliminary tests, an RdF sensor is glued onto the stator and rotor surfaces. Telemetry is needed to measure the heat transfer coefficients at the rotor. The mean heat transfer coefficients are measured in a test machine at four cooling air mass flow rates over a wide Couette Reynolds number range. The calculated friction and heat transfer coefficients are compared with measured and semi-empirical data. Heat is transferred from the hotter stator and rotor surfaces to the cooler air flow in the air gap, not from the rotor to the stator via the air gap, although the stator temperature is lower than the rotor temperature. The calculated friction coefficients fit well with the semi-empirical equations and preceding measurements. At a constant mass flow rate the rotor heat transfer coefficient reaches a saturation point at higher rotational speeds, while the heat transfer coefficient of the stator grows uniformly. The magnitudes of the heat transfer coefficients are almost constant across the different turbulence models. The calibration of sensors in a straight pipe is only an advisory step in the selection process. Telemetry is tested in the pipe conditions and compared to the same measurements with a plain sensor. Over the velocity range considered, the measured heat transfer coefficients and those given by the semi-empirical equation are higher than the numerical ones. Friction and heat transfer coefficients are presented over a large velocity range in the report. The goals are reached acceptably using numerical and experimental research. The next challenge is to achieve results for grooved stator-rotor combinations. The work also contains results for an air gap with a grooved stator with 36 slots. The velocity field obtained with the numerical method does not match the estimated flow mode in every respect. The absence of secondary Taylor vortices is evident when using time-averaged numerical simulation.
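To make the reported quantities concrete, the following minimal sketch computes a Couette Reynolds number for an annular air gap and turns an assumed power-law Nusselt correlation into a mean heat transfer coefficient. The geometry, air properties and correlation coefficients are illustrative placeholders, not the values of the test machine or of the semi-empirical equations used in the work.

import math

# Placeholder air-gap geometry and air properties (not the test-machine values)
ROTOR_RADIUS = 0.05    # m
GAP_WIDTH = 0.002      # m, radial height of the air gap
RHO = 1.1              # kg/m^3, air density
MU = 1.9e-5            # Pa*s, dynamic viscosity of air
K_AIR = 0.028          # W/(m*K), thermal conductivity of air

def couette_reynolds(rpm):
    """Couette Reynolds number based on rotor surface speed and gap width."""
    omega = 2.0 * math.pi * rpm / 60.0
    return RHO * omega * ROTOR_RADIUS * GAP_WIDTH / MU

def mean_heat_transfer_coefficient(rpm, a=0.02, b=0.8):
    """Assumed illustrative correlation Nu = a * Re^b, with Nu based on the
    hydraulic diameter of the annulus (2 * gap width)."""
    nu = a * couette_reynolds(rpm) ** b
    return nu * K_AIR / (2.0 * GAP_WIDTH)

for rpm in (6000, 12000, 24000):
    print(rpm, round(couette_reynolds(rpm)), round(mean_heat_transfer_coefficient(rpm), 1))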
Abstract:
There is a broad consensus among economists that technological change has been a major contributor to productivity growth and, hence, to the growth of material welfare in western industrialized countries at least over the last century. Paradoxically, this issue has not been the focal point of theoretical economics. At the same time, we have witnessed the rise of the importance of technological issues at the strategic management level of business firms. Interestingly, research has not adequately responded to this challenge either. The tension between the overwhelming empirical evidence of the importance of technology and its relative omission in research offers a challenging target for a methodological endeavor. This study deals with the question of how different theories cope with technology and explain technological change. The focus is at the firm level and the analysis concentrates on metatheoretical issues, except for the last two chapters, which examine the problems of strategic management of technology. Here the aim is to build a new evolutionary-based theoretical framework to analyze innovation processes at the firm level. The study consists of ten chapters. Chapter 1 poses the research problem and contrasts the two basic approaches, neoclassical and evolutionary, to be analyzed. Chapter 2 introduces the methodological framework, which is based on the methodology of isolation. Methodological and ontological commitments of the rival approaches are revealed and basic questions concerning their ways of theorizing are elaborated. Chapters 3-6 deal with the so-called substantive isolative criteria. The aim is to examine how different approaches cope with such critical issues as the inherent uncertainty and complexity of innovative activities (cognitive isolations, chapter 3), the boundedness of rationality of innovating agents (behavioral isolations, chapter 4), the multidimensional nature of technology (chapter 5), and governance costs related to technology (chapter 6). Chapters 7 and 8 put all these things together and look at the explanatory structures used by the neoclassical and evolutionary approaches in the light of substantive isolations. The last two chapters of the study utilize the methodological framework and tools to appraise different economics-based candidates in the context of strategic management of technology. The aim is to analyze how different approaches answer the fundamental question: how can firms gain competitive advantages through innovations and how can the rents appropriated from successful innovations be sustained? The last chapter introduces a new evolutionary-based technology management framework. The largely omitted issues of entrepreneurship are also examined.
Abstract:
In a centrifugal compressor the flow leaving the diffuser is collected and led to the pipe system by a spiral-shaped volute. In this study a single-stage centrifugal compressor with three different volutes is investigated. The compressor was first equipped with the original volute, the cross-section of which was a combination of a rectangle and a semi-circle. Next a new volute with a fully circular cross-section was designed and manufactured. Finally, the circular volute was modified by rounding the tongue and smoothing the tongue area. The overall performance of the compressor as well as the static pressure distribution after the impeller and on the volute surface were measured. The flow entering the volute was measured using a three-hole Cobra probe, and flow visualisations were carried out in the exit cone of the volute. In addition, the radial force acting on the impeller was measured using magnetic bearings. The complete compressor with the circular volute (inlet pipe, full impeller, diffuser, volute and outlet pipe) was also modelled using computational fluid dynamics (CFD). The fully 3-D viscous flow was solved using a Navier-Stokes solver, Finflo, developed at Helsinki University of Technology. Chien's k-ε model was used to account for turbulence. The differences observed in the performance of the different volutes were quite small. The biggest differences were at low speeds and high volume flows, i.e. when the flow entered the volute most radially. In this operating regime the efficiency of the compressor with the modified circular volute was about two percentage points higher than with the other volutes. Also, according to the Cobra-probe measurements and flow visualisations, the modified circular volute performed better than the other volutes in this operating area. The circumferential static pressure distribution in the volute showed an increase at low flow, a constant distribution at the design flow, and a decrease at high flow. The non-uniform static pressure distribution of the volute was transmitted backwards across the vaneless diffuser and observed at the impeller exit. At low volume flow a strong two-wave pattern developed in the static pressure distribution at the impeller exit due to the response of the impeller to the non-uniformity of pressure. The radial force on the impeller was the greatest at the choke limit, the smallest at the design flow, and moderate at low flow. At low flow the force increase was quite mild, whereas the increase at high flow was rapid. Thus, the non-uniformity of pressure and the force related to it are strong especially at high flow. The force caused by the modified circular volute was weaker at choke and more symmetric as a function of the volume flow than the force caused by the other volutes.
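As an illustration of how a circumferential pressure non-uniformity of the kind described above relates to the radial force on the impeller, the sketch below integrates an assumed sinusoidal static pressure distribution around the impeller exit. The radius, passage width and pressure amplitude are placeholders, not the measured values.

import numpy as np

# Placeholder impeller-exit geometry (not the test-compressor values)
R2 = 0.1     # impeller exit radius, m
B2 = 0.01    # passage width at the impeller exit, m

theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
dtheta = theta[1] - theta[0]

# Illustrative circumferential static pressure distribution: a 5 kPa
# single-wave non-uniformity superimposed on a 150 kPa mean
p = 150e3 + 5e3 * np.sin(theta)

# Net radial force from integrating the pressure over the exit area r*b*dtheta
fx = -np.sum(p * np.cos(theta)) * R2 * B2 * dtheta
fy = -np.sum(p * np.sin(theta)) * R2 * B2 * dtheta
print("radial force magnitude [N]:", round(float(np.hypot(fx, fy)), 1))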
Abstract:
Zinc selenide is a prospective material for optoelectronics. The fabrication of ZnSe-based light-emitting diodes is hindered by the complexity of p-type doping of the component materials. The interaction between native and impurity defects, the tendency of the doping impurity to form associative centres with native defects, and the tendency to self-compensation are the main factors impeding effective control of the value and type of conductivity. The thesis is devoted to the study of the processes of interaction between native and impurity defects in zinc selenide. It is established that, among the Cu, Ag and Au impurities, Au has the most prominent amphoteric properties in ZnSe, as it forms a great number of both Au_i donors and Au_Zn acceptors. Electrical measurements show that Ag and Au ions introduced into vacant sites of the Zn sublattice form simple single-charged Ag_Zn+ and Au_Zn+ states with a d10 electron configuration, while Cu ions can form both single-charged Cu_Zn+ (d10) and double-charged Cu_Zn2+ (d9) centres. Time-stimulated amphoteric properties of the Ag and Au transition metals are found for the first time from both electrical and luminescence measurements. A model is proposed that explains the changes in the electrical and luminescence parameters by displacement of Ag ions into interstitial sites due to lattice deformation forces. Formation of an Ag_i-donor impurity band in ZnSe samples doped with Ag and stored at room temperature is also studied. Thus, the properties of the doped samples are modified due to large lattice relaxation during aging. This fact should be taken into account in optoelectronic applications of doped ZnSe and related compounds.
Abstract:
The need for high performance, high precision, and energy saving in rotating machinery demands an alternative solution to traditional bearings. Because of their contactless operation principle, rotating machines employing active magnetic bearings (AMBs) provide many advantages over traditional ones. Advantages such as contamination-free operation, low maintenance costs, high rotational speeds, low parasitic losses, programmable stiffness and damping, and vibration isolation come at the expense of high cost and a complex technical solution. All these properties make the use of AMBs appropriate primarily for specific and highly demanding applications. High-performance, high-precision control requires model-based control methods and accurate models of the flexible rotor. In turn, complex models lead to high-order controllers and a considerable computational burden. Fortunately, in the last few years advances in signal processing devices have provided a new perspective on the real-time control of AMBs. The design and real-time digital implementation of high-order LQ controllers, with a focus on fast execution times, are the subjects of this work. In particular, control design and implementation in field-programmable gate array (FPGA) circuits are investigated. The optimal design is guided by the physical constraints of the system when selecting the weighting matrices. The plant model is complemented by augmenting appropriate disturbance models. Compensation of the force-field nonlinearities is proposed for decreasing the uncertainty of the actuator. A disturbance-observer-based unbalance compensation for canceling the magnetic force vibrations or the vibrations in the measured positions is presented. The theoretical studies are verified by practical experiments utilizing a custom-built laboratory test rig. The test rig uses a prototyping control platform developed in the scope of this work. To sum up, the work takes a step towards an embedded single-chip FPGA-based controller for AMBs.
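As a minimal sketch of the LQ design step referred to above, the code below computes a discrete-time LQ state-feedback gain by solving the discrete algebraic Riccati equation. The two-state plant and the weighting matrices are arbitrary placeholders, not the flexible-rotor AMB model or the weights selected in the work.

import numpy as np
from scipy.linalg import solve_discrete_are

# Placeholder discrete-time plant x[k+1] = A x[k] + B u[k] (not the AMB rotor model)
A = np.array([[1.0, 0.01],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.01]])

# LQ weighting matrices; in the thesis these follow from physical constraints
Q = np.diag([100.0, 1.0])
R = np.array([[0.1]])

# Solve the discrete algebraic Riccati equation and form the feedback gain
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop check: all eigenvalues of (A - B K) should lie inside the unit circle
print("gain K:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))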
Abstract:
This thesis describes the development of advanced silicon radiation detectors and their characterization by simulations, for use in the search for elementary particles at the European Organization for Nuclear Research, CERN. Silicon particle detectors will face extremely harsh radiation in the proposed upgrade of the Large Hadron Collider, the future high-energy physics experiment Super-LHC. The increase in the maximal fluence and beam luminosity, up to 10^16 n_eq/cm^2 and 10^35 cm^-2 s^-1, will require detectors with dramatically improved radiation hardness, since such a fluence is far beyond the operational limits of present silicon detectors. The main goals of detector development concentrate on minimizing the radiation degradation. This study contributes mainly to the device engineering technology for developing more radiation-hard particle detectors with better characteristics. Defect engineering technology is also discussed. In the region nearest to the beam in the Super-LHC, the only detector choice is 3D detectors, or alternatively replacing other types of detectors every two years. Interest in 3D silicon detectors is continuously growing because of their many advantages compared to conventional planar detectors: the devices can be fully depleted at low bias voltages, the speed of the charge collection is high, and the collection distances are about one order of magnitude shorter than those of planar-technology strip and pixel detectors with electrodes limited to the detector surface. The 3D detectors also exhibit high radiation tolerance, which increases the ability of the silicon detectors to operate after irradiation. Two parameters, the full depletion voltage and the electric field distribution, are discussed in more detail in this study. Full depletion of the detector is important because only the depleted volume of the detector is active for particle tracking. Similarly, a high electric field in the detector makes the detector volume sensitive, while low-field areas are non-sensitive to particles. This study shows the simulation results for the full depletion voltage and the electric field distribution of various types of 3D detectors. First, a 3D detector with an n-type substrate and partial-penetrating p-type electrodes is examined. A detector of this type has a low electric field on the pixel side and it suffers from type inversion. Next, the substrate is changed to p-type and detectors having electrodes with a single doping type and with dual doping types are examined. The electric field profile in a dual-column 3D Si detector is more uniform than that in a single-type-column 3D detector. The dual-column detectors are the best in radiation hardness because of their low depletion voltages and short drift distances.
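For context on the full depletion voltage discussed above, the sketch below evaluates the textbook abrupt-junction expression V_fd = q N_eff d^2 / (2 eps0 eps_Si); shortening the relevant electrode distance, as in 3D detectors, lowers the required voltage quadratically. The effective doping concentration and distances are placeholder values, not simulation results from the thesis.

# Textbook full depletion voltage of a silicon detector in the abrupt-junction
# approximation; the numerical inputs below are placeholders.
Q_E = 1.602e-19       # elementary charge, C
EPS0 = 8.854e-12      # vacuum permittivity, F/m
EPS_SI = 11.9         # relative permittivity of silicon

def full_depletion_voltage(n_eff_cm3, distance_um):
    """V_fd = q * N_eff * d^2 / (2 * eps0 * eps_Si)."""
    n_eff = n_eff_cm3 * 1e6        # cm^-3 -> m^-3
    d = distance_um * 1e-6         # um -> m
    return Q_E * n_eff * d ** 2 / (2.0 * EPS0 * EPS_SI)

# A 300 um planar sensor versus a 50 um 3D-like inter-electrode distance,
# both at an illustrative effective doping of 1e13 cm^-3
print(round(full_depletion_voltage(1e13, 300)))   # ~684 V
print(round(full_depletion_voltage(1e13, 50)))    # ~19 V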
Abstract:
The evaluation of investments in advanced technology is one of the most important decision-making tasks. The importance is even more pronounced considering the huge budgets involved and the strategic, economic and analytic justification required to shorten design and development time. Choosing the most appropriate technology requires an accurate and reliable system that can guide the decision makers through such a complicated task. Currently, several Information and Communication Technology (ICT) manufacturers that design global products are seeking local firms to act as their sales and services representatives (called distributors) to the end user. At the same time, the end user or customer is also searching for the best possible deal for their investment in ICT projects. Therefore, the objective of this research is to present a holistic decision support system to assist the decision maker in Small and Medium Enterprises (SMEs), working either as an individual decision maker or in a group, in the evaluation of the investment to become an ICT distributor or an ICT end user. The model is composed of the Delphi/MAH (Maximising Agreement Heuristic) analysis, a well-known quantitative method in Group Support Systems (GSS), which is applied to gather the average ranking data from amongst the Decision Makers (DMs). After that, the Analytic Network Process (ANP) analysis is brought in to analyse holistically: it performs quantitative and qualitative analysis simultaneously. The illustrative data are obtained from industrial entrepreneurs by using the Group Support System (GSS) laboratory facilities at Lappeenranta University of Technology, Finland, and in Thailand. The result of the research, which is currently implemented in Thailand, can provide benefits to the industry in the evaluation of becoming an ICT distributor or an ICT end user, particularly in the assessment of the Enterprise Resource Planning (ERP) programme. After the model is put to the test in in-depth collaboration with industrial entrepreneurs in Finland and Thailand, a sensitivity analysis is also performed to validate the robustness of the model. The contribution of this research is in developing a new approach and the Delphi/MAH software to obtain an analysis of the value of becoming an ERP distributor or end user that is flexible and applicable to entrepreneurs who are looking for the most appropriate investment to become an ERP distributor or end user. The main advantage of this research over others is that the model can deliver the value of becoming an ERP distributor or end user as a single number, which makes it easier for DMs to choose the most appropriate ERP vendor. The associated advantage is that the model can include qualitative data as well as quantitative data, as the results from using quantitative data alone can be misleading and inadequate. There is a need to utilise quantitative and qualitative analysis together, as can be seen from the case studies.
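The final priorities in an ANP analysis of this kind come from a limit supermatrix: the weighted, column-stochastic supermatrix is raised to successive powers until its columns converge to a single priority vector, which yields the kind of one-number result mentioned above. The sketch below illustrates that step with a made-up 3x3 supermatrix; it is not data from the GSS sessions or from the Delphi/MAH software.

import numpy as np

# Made-up column-stochastic supermatrix for three elements (columns sum to one)
W = np.array([
    [0.2, 0.5, 0.3],
    [0.5, 0.3, 0.4],
    [0.3, 0.2, 0.3],
])

def limit_priorities(w, tol=1e-10, max_iter=10000):
    """Raise the supermatrix to powers until it converges; any column of the
    limit matrix then gives the overall priority vector."""
    m = w.copy()
    for _ in range(max_iter):
        m_next = m @ w
        if np.max(np.abs(m_next - m)) < tol:
            return m_next[:, 0]
        m = m_next
    return m[:, 0]

print(limit_priorities(W))   # single priority vector used to rank the alternatives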
Abstract:
Industrial applications increasingly require real-time data processing. Reliability is one of the most important properties of a system capable of real-time data processing. To achieve it, both the hardware and the software must be tested. The main objective of this work is hardware testing and hardware testability, because a reliable hardware platform is the foundation of future real-time systems. The thesis presents the design of a processor board suitable for digital signal processing. The processor board is intended for the predictive condition monitoring of electrical machines. The latest DFT (Design for Testability) methods are introduced and applied, together with older methods, in the design of the processor board. Experiences and observations on the applicability of the methods are reported at the end of the work. The aim of the work is to develop a component for the web-based condition monitoring system being developed at the Department of Electrical Engineering at Lappeenranta University of Technology.
Abstract:
Substances emitted into the atmosphere by human activities in urban and industrial areas cause environmental problems such as air quality degradation, respiratory diseases, climate change, global warming, and stratospheric ozone depletion. Volatile organic compounds (VOCs) are major air pollutants, emitted largely by industry, transportation and households. Many VOCs are toxic, and some are considered to be carcinogenic, mutagenic, or teratogenic. A wide spectrum of VOCs is readily oxidized photocatalytically. Photocatalytic oxidation (PCO) over titanium dioxide may present a potential alternative to air treatment strategies currently in use, such as adsorption and thermal treatment, due to its advantageous activity under ambient conditions, although higher but still mild temperatures may also be applied. The objective of the present research was to disclose the routes of the chemical reactions and to estimate the kinetics and the sensitivity of gas-phase PCO to reaction conditions with respect to air pollutants containing heteroatoms in their molecules. Deactivation of the photocatalyst and restoration of its activity were also taken into consideration to assess the practical possibility of applying PCO to the treatment of air polluted with VOCs. UV-irradiated titanium dioxide was selected as the photocatalyst for its chemical inertness, non-toxic character and low cost. In the present work the Degussa P25 TiO2 photocatalyst was mostly used. In transient studies platinized TiO2 was also studied. Experimental research into the PCO of the following VOCs was undertaken: methyl tert-butyl ether (MTBE) as the basic oxygenated motor fuel additive and, thus, a major non-biodegradable pollutant of groundwater; tert-butyl alcohol (TBA) as the primary product of MTBE hydrolysis and PCO; ethyl mercaptan (ethanethiol) as one of the reduced-sulphur pungent air pollutants in the pulp-and-paper industry; and methylamine (MA) and dimethylamine (DMA) as amino compounds often emitted by various industries. The PCO of the VOCs was studied in continuous-flow mode. The PCO of MTBE and TBA was also studied in transient mode, in which carbon dioxide, water, and acetone were identified as the main gas-phase products. The volatile products of thermal catalytic oxidation (TCO) of MTBE included 2-methyl-1-propene (2-MP), carbon monoxide, carbon dioxide and water; TBA decomposed to 2-MP and water. Continuous PCO of TBA proceeded faster in humid air than in dry air. MTBE oxidation, however, was less sensitive to humidity. The TiO2 catalyst was stable during continuous PCO of MTBE and TBA above 373 K, but gradually lost activity below 373 K; the catalyst could be regenerated by UV irradiation in the absence of gas-phase VOCs. Sulphur dioxide, carbon monoxide, carbon dioxide and water were identified as the ultimate products of PCO of ethanethiol. Acetic acid was identified as a photocatalytic oxidation by-product. The limits of ethanethiol concentration and temperature at which the reactor performance was stable for an indefinite time were established. The apparent reaction kinetics appeared to be independent of the reaction temperature within the studied limits, 373 to 453 K. The catalyst was completely and irreversibly deactivated by TCO of ethanethiol. Volatile PCO products of MA included ammonia, nitrogen dioxide, nitrous oxide, carbon dioxide and water. Formamide was observed among the DMA PCO products together with others similar to those of MA.
TCO of both substances resulted in the formation of ammonia, hydrogen cyanide, carbon monoxide, carbon dioxide and water. No deactivation of the photocatalyst was observed during the multiple long-run experiments at the concentrations and temperatures used in the study. The PCO of MA was also studied in the aqueous phase. Maximum efficiency was achieved in alkaline media, where MA exhibits high fugacity. Two mechanisms of aqueous PCO (decomposition to formate and ammonia, and oxidation of organic nitrogen directly to nitrite) lead ultimately to carbon dioxide, water, ammonia and nitrate; formate and nitrite were observed as intermediates. A part of the ammonia formed in the reaction was oxidized to nitrite and nitrate. This finding helped in better understanding the gas-phase PCO pathways. The PCO kinetic data for the VOCs fitted well to the monomolecular Langmuir-Hinshelwood (L-H) model, whereas the TCO kinetic behaviour matched a first-order process for the volatile amines and the L-H model for the others. It should be noted that both the L-H and the first-order equations were only data fits, not a real description of the reaction kinetics. The dependence of the kinetic constants on temperature was established in the form of an Arrhenius equation.
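As an illustration of the kinetic treatment described above, the sketch below fits the monomolecular Langmuir-Hinshelwood expression r = k K C / (1 + K C) to concentration-rate data and extracts Arrhenius parameters from rate constants at several temperatures. All data points are synthetic placeholders, not the measured values from the thesis.

import numpy as np
from scipy.optimize import curve_fit

# Monomolecular Langmuir-Hinshelwood rate expression: r = k*K*C / (1 + K*C)
def lh_rate(c, k, K):
    return k * K * c / (1.0 + K * c)

# Synthetic placeholder data (concentration in ppm, rate in arbitrary units)
c_data = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
r_data = np.array([0.8, 1.4, 2.1, 2.8, 3.3, 3.6])
(k_fit, K_fit), _ = curve_fit(lh_rate, c_data, r_data, p0=[4.0, 0.02])
print("L-H fit: k =", round(k_fit, 2), "K =", round(K_fit, 3))

# Arrhenius fit ln(k) = ln(A) - Ea/(R*T) over placeholder rate constants
T = np.array([373.0, 413.0, 453.0])     # K
k_T = np.array([2.0, 3.1, 4.5])         # arbitrary units
slope, intercept = np.polyfit(1.0 / T, np.log(k_T), 1)
print("Ea [kJ/mol] =", round(-slope * 8.314 / 1000.0, 1), "A =", round(np.exp(intercept), 1))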
Abstract:
The main objective of this dissertation is to create new knowledge on an administrative innovation: its adoption, diffusion and, finally, its effectiveness. In this dissertation the administrative innovation is approached through a widely utilized management philosophy, namely the total quality management (TQM) strategy. TQM operationalizes a self-assessment procedure, which is based on continual improvement principles and on measuring the improvements. This dissertation also captures the theme of change management as it analyzes the adoption and diffusion of the administrative innovation. It identifies innovation characteristics as well as organisational and individual factors explaining the adoption and implementation. As a special feature, this study also explores the effectiveness of the innovation based on objective data. For studying the administrative innovation (the TQM model), a multinational Case Company provides a versatile ground for a deep, longitudinal analysis. The Case Company started the adoption systematically in the mid-1980s in some of its units. Today, as part of strategic planning, the procedure is in use throughout the entire global company. The empirical story begins with the innovation adoption decision made in the Case Company over 22 years ago. In order to capture the right atmosphere and the background leading to the adoption decision, key informants from that time were interviewed, since the main target was to clarify the dynamics of how an administrative innovation develops. In addition, archival material was collected and studied; the available memos and data relating to the innovation, its adoption and later its implementation contained altogether 20,500 pages of documents. A survey was furthermore conducted at the end of 2006, focusing on questions related to the innovation, organization and leadership characteristics; the response rate was 54%. For measuring the effectiveness of the innovation implementation, the needed longitudinal objective performance data were collected. These data included the profit-unit-level experience of TQM, the development of the self-assessment scores per profit unit, and performance data per profit unit measured with profitability, productivity and customer satisfaction. The data covered the years 1995-2006. As a result, the prerequisites for the successful adoption of an administrative innovation were defined, such as top management involvement, the support of change agents, and effective tools for implementation and measurement. The factors with the greatest effect on the depth of the implementation were the timing of the adoption and formalization. The results also indicated that the TQM model does have an effect on company performance measured with profitability, productivity and customer satisfaction. Consequently, this thesis contributes to the present literature (i) by taking into its scope an administrative innovation and focusing on the whole innovation implementation process, from adoption through diffusion to its consequences, (ii) because the studied factors with an effect on the innovation adoption and diffusion are multifaceted and grouped into individual, organizational and environmental factors, with a strong emphasis put on the role of the individual change agents, and (iii) by measuring the depth and consistency of the administrative innovation. This deep analysis was possible due to the availability of longitudinal data with triangulation possibilities.
Abstract:
Due to their numerous novel technological applications, ranging from exhaust catalysts in the automotive industry to the catalytic production of hydrogen, surface reactions on transition metal substrates have become one of the most essential subjects within the surface science community. Although numerous applications exist, there are many details in the different processes that, after many decades of research, remain unknown. There are perhaps as many applications for corrosion-resistant materials such as stainless steels. A thorough knowledge of the details of the simplest reactions occurring on surfaces, such as oxidation, plays a key role in the design of better catalysts or corrosion-resistant materials in the future. This thesis examines the oxidation of metal surfaces from a computational point of view, mostly concentrating on copper as a model material. Oxidation is studied from the initial stages up to the oxygen-precovered surface. Important parameters for the initial sticking and dissociation are obtained. The saturation layer is thoroughly studied and the calculated results are compared with available experimental results. On the saturated surface, some open questions still remain. The present calculations demonstrate that the saturated part of the surface is excluded from being chemically reactive towards oxygen molecules. The results suggest that the chemical activity of the saturated surface is due to a strain effect occurring between the saturated areas of the surface.
Abstract:
In this research we examine the status of logistics and operations management in Finnish and Swedish companies. The empirical data are based on a web-based questionnaire, which was completed at the end of 2007 and in early 2008. Our examination consists of roughly 30 answers from the largest manufacturing (the highest representation in our sample), trade and logistics/distribution companies. Generally, it could be argued that these companies operate in a complex environment, where the number of products, raw materials/components and suppliers is high. However, companies usually rely on a small number of suppliers per raw material/component (the most frequent number is two); this was especially the case among Swedish companies and among companies which favoured overseas sourcing. The sample consisted of companies which mostly operate in an international environment and are quite often multinationals. Our survey findings reveal that companies in general have taken logistics and information technology as part of their strategy process; the utilization of performance measures as well as system implementations have followed the strategy decisions. On the transportation mode side we find that road transport dominates all transport flow classes (inbound, internal and outbound), followed by sea and air. A surprisingly small number of companies use railways, but in general Swedish companies prefer this mode more than their Finnish counterparts. With respect to operations outsourcing, we found that the more traditional areas of logistics outsourcing are driving factors in the companies' performance measurement priorities. Contrary to previous research, our results indicate that the scope of outsourcing in the logistics/operations management area is not that wide, and companies are not planning to outsource more in the near future. Some support is found for more international operations and increased outsourcing activity. Regarding the increased time pressure on companies, we find evidence that local as well as overseas customers expect deliveries within days or weeks, but suppliers usually deliver within weeks or months; basically, this leads to considerable inventory holding. Interestingly, local and overseas sourcing strategies do not have that great an influence on the lead time performance of these particular sourcing areas; a local strategy is nevertheless considerably better at responding to market changes due to its shorter supply lead times. At the end of our research work we completed a correlation analysis of the items asked on a Likert scale. Our analysis shows that seeing logistics as a process rather than a function, applying time-based management, favouring partnerships, and measuring logistics along different performance dimensions result in the preferred features and performance found in the logistics literature.
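A minimal sketch of the kind of correlation analysis mentioned at the end of the abstract, using Spearman rank correlation, which suits ordinal Likert-scale items. The response matrix is fabricated for illustration and is not the actual survey data.

import numpy as np
from scipy.stats import spearmanr

# Fabricated 1-5 Likert responses (rows = respondents, columns = survey items);
# for illustration only, not the actual survey data
responses = np.array([
    [4, 5, 3, 4],
    [3, 4, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
    [3, 3, 3, 2],
])

# Spearman rank correlation between the columns (survey items)
rho, pval = spearmanr(responses)
print("correlation matrix:\n", np.round(rho, 2))
print("p-values:\n", np.round(pval, 3))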
Abstract:
The CO2 laser-MAG hybrid welding process has been shown to be a productive choice for the welding industry, being used in e.g. the shipbuilding, pipe and beam manufacturing, and automotive industries. It provides an opportunity to increase the productivity of welding joints containing air gaps compared with autogenous laser beam welding, with associated reductions in distortion and marked increases in welding speed and penetration in comparison with both arc and autogenous laser welding. The literature study indicated that the phenomena of laser hybrid welding have mostly been studied using bead-on-plate welding or zero-air-gap configurations. This study shows very clearly that the CO2 laser-MAG hybrid welding process behaves completely differently when there is a groove with an air gap. As in industrial use welding is expected to be performed on grooves with non-zero air gaps, this study is of great importance for industrial applications. The results of this study indicate that by using a 6 kW CO2 laser-MAG hybrid welding process, the welding speed may also be increased if an air gap is present in the joint. Experimental trials indicated that the welding speed may be increased by 30-82% when compared with bead-on-plate welding, or welding of a joint with no air gap, i.e. a joint prepared as optimal for autogenous laser welding. This study demonstrates very clearly that the separation of the different processes, as well as their relative configuration (arc leading or trailing), affects welding performance significantly. These matters influence the droplet size and therefore the metal transfer mode, which in turn determines the resulting weld quality and the ability to bridge air gaps. Welding in bead-on-plate mode, or of an I-butt joint containing no air gap, is facilitated by using a leading torch. This is due to the preheating effect of the arc, which increases the absorptivity of the work piece to the laser beam, enabling greater penetration and the use of higher welding speeds. With an air gap present, gap bridging is more effectively achieved by using a trailing torch because of the lower arc power needed, the wider arc, and the movement of droplets predominantly towards the joint edges. The experiments showed that the mode of metal transfer has a marked effect on gap bridgeability. Transfer of a single droplet per arc pulse may not be desirable if an air gap is present, because most of the droplets are directed towards the middle of the joint where no base material is present. In such cases, undercut is observed. Pulsed globular and rotational metal transfer modes enable molten metal to also be transferred to the joint edges, and are therefore superior metal transfer modes when bridging air gaps. It was also very obvious that process separation is an important factor in gap bridgeability. If process separation is too large, the resulting weld often exhibits sagging, or no weld may be formed at all as a result of the reduced interaction between the component processes. In contrast, if the processes are too close to one another, the processing region contains excess molten metal that may make it difficult for the keyhole to remain open. When the distance is optimised (a separation of 0-4 mm in this study, depending on the welding speed and beam-arc configuration), the processes act together, creating beneficial synergistic effects.
The optimum process separation when using a trailing torch was found to be shorter (0-2 mm) than when a leading torch is used (2-4 mm), a result of the facilitation of weld pool motion when the latter configuration is adopted. This study demonstrates that the MAG process used has a strong effect on the CO2 laser-MAG hybrid welding process. The laser beam welding component is relatively stable and easy to manage, with only two principal processing parameters (power and welding speed) needing to be adjusted. In contrast, the MAG process has a large number of processing parameters to optimise, all of which play an important role in the interaction between the laser beam and the arc. The parameters used for traditional MAG welding are often not optimal for achieving the most appropriate mode of metal transfer and weld quality in laser hybrid welding, and must be optimised if the full range of benefits provided by hybrid welding is to be realised.