40 results for High dynamic range


Relevance:

30.00%

Publisher:

Abstract:

Belt-drive systems have been and still are the most commonly used form of power transmission in applications of widely different scales and uses. The peculiar features of belt-drive dynamics include highly nonlinear deformation, large rigid body motion, dynamic contact through a dry friction interface between the belt and the pulleys with sticking and slipping zones, cyclic tension of the belt during operation, and creeping of the belt against the pulleys. The life of the belt-drive depends critically on these features, and therefore a model that can be used to study the correlations between the initial values and the responses of the belt-drive is a valuable source of information for the belt-drive development process. Traditionally, finite element models of belt-drives consist of a large number of elements, which may lead to computational inefficiency. In this research, the beneficial features of the absolute nodal coordinate formulation are utilized in the modeling of belt-drives in order to fulfill the following requirements for a successful and efficient analysis of belt-drive systems: exact modeling of the rigid body inertia during an arbitrary rigid body motion, consideration of the effect of shear deformation, exact description of the highly nonlinear deformations, and a simple and realistic description of the contact. Distributed contact forces and high-order beam and plate elements based on the absolute nodal coordinate formulation are applied to the modeling of belt-drives in two- and three-dimensional cases. According to the numerical results, realistic behavior of belt-drives can be obtained with a significantly smaller number of elements and degrees of freedom than in previously published finite element models of belt-drives.
The results of the examples demonstrate the functionality and suitability of the absolute nodal coordinate formulation for computationally efficient and realistic modeling of belt-drives. This study also introduces an approach to avoid the problems related to the use of the continuum mechanics approach in the definition of elastic forces in the absolute nodal coordinate formulation. This approach is applied to a new computationally efficient two-dimensional shear deformable beam element based on the absolute nodal coordinate formulation. The proposed beam element uses a linear displacement field neglecting higher-order terms and a reduced number of nodal coordinates, which leads to fewer degrees of freedom in a finite element.
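As a rough illustration of the idea behind a low-order, shear-deformable planar ANCF element, the sketch below interpolates the global position of a material point from nodal positions and transverse position gradients with a linear axial field. This is a generic textbook-style construction under assumed conventions, not the exact element derived in the thesis.

```python
import numpy as np

def ancf_position(e, xi, y):
    """Position of a material point in a planar, linear shear-deformable
    ANCF-style element (illustrative sketch).

    e  : nodal coordinates [r1, r1y, r2, r2y]; each a 2-vector holding a
         node position r and its transverse position gradient dr/dy
    xi : normalized axial coordinate in [0, 1]
    y  : transverse material coordinate
    """
    r1, r1y, r2, r2y = (np.asarray(v, dtype=float) for v in e)
    n1, n2 = 1.0 - xi, xi   # linear axial shape functions
    # The cross-section direction is carried by the gradient coordinates,
    # so shear deformation (non-perpendicular cross-sections) is allowed.
    return n1 * (r1 + y * r1y) + n2 * (r2 + y * r2y)

# Undeformed straight element of length 1 along x, cross-sections along y:
e = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
p = ancf_position(e, 0.5, 0.1)   # mid-span, slightly above the centerline
```

For this undeformed configuration the interpolation simply reproduces the material coordinates, which is a quick way to check an implementation.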

Relevance:

30.00%

Publisher:

Abstract:

This work concerns the experimental study of rapid granular shear flows in an annular Couette geometry. The flow is induced by continuous driving of the horizontal plate at the top of the granular bed in an annulus. The compressive pressure, driving torque, instantaneous bed height and rotational speed of the shearing plate are measured. Moreover, local stress fluctuations are measured in a medium made of steel spheres 2 and 3 mm in diameter. Both monodisperse and bidisperse packings are investigated to reveal the influence of size diversity on the intermittent features of granular materials. Experiments are conducted in an annulus that can contain up to 15 kg of spherical steel balls. Shearing of the granular medium takes place via the rotation of the upper plate, which compresses the material loaded inside the annulus. Fluctuations of the compressive force are measured locally at the bottom of the annulus using a piezoelectric sensor. Rapid shear flow experiments are pursued at different compressive forces and shear rates, and the sensitivity of the fluctuations is then investigated by different means in monodisperse and bidisperse packings. Another important feature of rapid granular shear flows is the formation of ordered structures upon shearing. Obtaining stable flows requires that the amount of granular material (of uniform size distribution) loaded in the system lies within a certain range. This is studied more deeply in this thesis. The results of the current work bring new insights into deformation dynamics and intermittency in rapid granular shear flows. The experimental apparatus is modified in comparison with earlier investigations. The measurements produce data for various quantities sampled continuously from the start of shearing to the end. Static failure and dynamic shearing of a granular medium are investigated. The results of this work reveal some important features of failure dynamics and structure formation in the system.
Furthermore, computer simulations are performed in a 2D annulus to examine the nature of kinetic energy dissipation. It is found that turbulent flow models can statistically represent rapid granular flows with high accuracy. In addition to academic outcomes and scientific publications, our results have a number of technological applications associated with grinding, mining and massive grain storage.
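Intermittency of the kind described above is commonly quantified by the flatness factor (kurtosis) of the force signal; a value well above the Gaussian reference of 3 indicates rare, large force-chain events. The sketch below uses a synthetic spiky record, not the measured data of the thesis.

```python
import numpy as np

def flatness(signal):
    """Flatness factor (kurtosis) of a fluctuating signal.
    Gaussian noise gives 3; intermittent, spiky signals give much more."""
    x = np.asarray(signal, dtype=float)
    d = x - x.mean()
    return (d**4).mean() / (d**2).mean() ** 2

# Synthetic force record: quiet baseline plus rare large force spikes,
# mimicking intermittent force chains in a sheared granular packing.
force = np.zeros(1000)
force[::100] = 10.0          # 10 spikes of amplitude 10
f = flatness(force)          # far above 3: strongly intermittent
```

For comparison, a smooth periodic signal such as a sine wave has a flatness of 1.5, i.e. below the Gaussian value.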

Relevance:

30.00%

Publisher:

Abstract:

A frequency converter drive applying direct torque control was designed and built for controlling a squirrel-cage induction motor, to replace a passive brake drive. The device is a rehabilitation device used for muscle strength measurements and strength training. The performance characteristics of commercial motors and frequency converters were investigated, and suitable devices were selected on this basis. The thesis presents two control methods for the squirrel-cage motor: vector control and direct torque control. The most significant part of this work covers, in addition to a detailed safety plan, the components, assembly and performance test results of the rehabilitation device prototype.
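The core of direct torque control is a switching table driven by hysteresis comparators on stator flux and torque. The sketch below is a textbook Takahashi-style table expressed with modular sector arithmetic; it illustrates the principle only and is not the implementation used in the thesis or in any specific commercial converter.

```python
def dtc_select_vector(sector, flux_up, torque_up):
    """Select an active inverter voltage vector V1..V6 in direct torque
    control (textbook variant, active vectors only, no zero vectors).

    sector    : flux-linkage sector 1..6
    flux_up   : True if the flux comparator demands more flux
    torque_up : True if the torque comparator demands more torque
    """
    if flux_up and torque_up:
        step = 1    # vector ahead of the flux: raises both flux and torque
    elif flux_up and not torque_up:
        step = -1   # vector behind the flux: raises flux, lowers torque
    elif not flux_up and torque_up:
        step = 2    # raises torque while lowering flux
    else:
        step = -2   # lowers both flux and torque
    return (sector - 1 + step) % 6 + 1

# In sector 1, demanding both more flux and more torque selects V2.
v = dtc_select_vector(1, flux_up=True, torque_up=True)
```

A practical drive also inserts zero vectors for small torque errors; that refinement is omitted here for brevity.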

Relevance:

30.00%

Publisher:

Abstract:

Over the past decade, high-speed motor technology has been applied increasingly often in the medium and large power range. In particular, applications involving gas movement and compression seem to be the most important area in which high-speed machines are used. By manufacturing the induction motor rotor core from a single piece of steel, it is possible to achieve an extremely rigid rotor construction for the high-speed motor. In a mechanical sense, the solid rotor may be the best possible rotor construction. Unfortunately, the electromagnetic properties of a solid rotor are poorer than those of the traditional laminated rotor of an induction motor. This thesis analyses methods for improving the electromagnetic properties of a solid-rotor induction machine. The slip of the solid rotor is reduced notably if the rotor is axially slitted. The slitting patterns of the solid rotor are examined, and it is shown how the slitting parameters affect the produced torque. Methods for decreasing the harmonic eddy currents on the surface of the rotor are also examined; the motivation is to improve the efficiency of the motor so that it reaches the efficiency standard of a laminated-rotor induction motor. To carry out these research tasks, finite element analysis is used. An analytical calculation method for solid rotors based on the multi-layer transfer-matrix method is developed, especially for the calculation of axially slitted solid rotors equipped with well-conducting end rings. The calculation results are verified by finite element analysis and laboratory measurements. Prototype motors of 250-300 kW and 140 Hz were tested to verify the results. Utilization factor data are given for several other prototypes, the largest of which delivers 1000 kW at 12 000 min⁻¹.
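The multi-layer transfer-matrix idea is that each homogeneous (possibly conducting) layer of the rotor contributes one 2x2 matrix, and the whole rotor is the ordered matrix product. The sketch below shows the generic cascade with an ABCD-style layer matrix; the layer parameters are invented, and the thesis's actual formulation for slitted rotors is considerably more detailed.

```python
import numpy as np

def layer_matrix(gamma, d, Z):
    """ABCD-style transfer matrix of one homogeneous layer:
    gamma = propagation constant, d = thickness, Z = wave impedance.
    Relates the tangential field quantities on the two layer surfaces."""
    g = gamma * d
    return np.array([[np.cosh(g),     Z * np.sinh(g)],
                     [np.sinh(g) / Z, np.cosh(g)]], dtype=complex)

# Cascading rule: the matrix of a stack is the product of layer matrices.
# Sanity check: two half-layers must reproduce one full layer.
gamma, Z = 30 + 30j, 0.5 + 0.1j          # illustrative values
full = layer_matrix(gamma, 2e-3, Z)
half = layer_matrix(gamma, 1e-3, Z)
assert np.allclose(half @ half, full)
```

The same cascade structure accommodates air gaps, slitted regions and end-ring corrections by swapping in the appropriate per-layer parameters.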

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to increase the understanding of the role and nature of trust in asymmetric technology partnership formation. In the knowledge-based "learning race", knowledge is considered a primary source of competitive advantage. In the emerging ICT sector, the high pace of technological change, the convergence of technologies and industries, and the increasing complexity and uncertainty have forced even the largest players to seek cooperation for complementary knowledge and capabilities. Small technology firms need the complementary resources and legitimacy of large firms to grow and compete in the global market place. Most of the earlier research indicates, however, that partnerships asymmetric in size, managerial resources and culture have failed. A basic assumption supported by earlier research was that trust is a critical factor in asymmetric technology partnership formation. Asymmetric technology partnership formation is a dynamic and multi-dimensional process, and consequently a holistic research approach was selected. The research issue was approached from different levels: the individual decision-maker, the firm, and the relationship between the parties. The impact of the dynamic environment and of the technology content was also analyzed. A multitheoretical approach and a qualitative research method with in-depth interviews in five large ICT companies and eight small ICT companies enabled a holistic and rich view of the research issue. The study contributes to the scarce understanding of the nature and evolution of trust in asymmetric technology partnership formation. It also sheds light on the specific nature of asymmetric technology partnerships. The partnerships were found to be tentative, and the diverse strategic intent of small and large technology firms appeared as a major challenge. The role of the boundary spanner was highlighted as a possibility to match the incompatible organizational cultures. A shared vision was found to be a pre-condition for individual-based fast trust leading to intuitive decision-making and experimentation. The relationships were tentative, and they were continuously re-evaluated through the key actors' sense-making of the technology content, the asymmetry and the dynamic environment. A multi-dimensional conceptualization of trust was created, and propositions on the role and nature of trust are given for further research.

Relevance:

30.00%

Publisher:

Abstract:

COD discharges out of processes have increased in line with rising brightness demands for mechanical pulp and papers. The share of lignin-like substances in COD discharges is on average 75%. In this thesis, a plant dynamic model was created and validated as a means to predict the COD loading and discharges out of a mill. The assays were carried out in an integrated paper mill producing mechanical printing papers. The objective in the modeling of plant dynamics was to predict day averages of the COD load and discharges out of the mill. This means that online data, such as 1) the levels of the large storage towers of pulp and white water, 2) pulp dosages, 3) production rates and 4) internal white water flows and discharges, were used to create transients in the balances of solids and white water, referred to as "plant dynamics". A conversion coefficient between TOC and COD was verified and used to convert the predicted TOC flows to the COD flows entering the waste water treatment plant. The COD load was modeled with an uncertainty similar to that of the reference TOC sampling. The water balance of the waste water treatment was validated against the reference concentration of COD. The difference between the COD predictions and the references was within the same deviation as for the TOC predictions. The modeled yield losses and retention values of TOC in the pulping and bleaching processes, and the modeled fixing of colloidal TOC to solids between the pulping plant and the aeration basin of the waste water treatment plant, were similar to references presented in the literature. The valid water balances of the waste water treatment plant and the reduction model for lignin-like substances produced a valid prediction of the COD discharges out of the mill. A 30% increase in the release of lignin-like substances during production problems was observed in the pulping and bleaching processes; the same increase was observed in the COD discharges out of the waste water treatment.
In the prediction of the annual COD discharge, it was noticed that the reduction of lignin has a wide deviation from year to year and from one mill to another. This made it difficult to compare the parameters of COD discharges validated in the plant dynamic simulation with those of another mill producing mechanical printing papers. However, the trend in COD discharges when moving from unbleached pulp towards high-brightness TMP was valid.
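Two ingredients of such a plant dynamic model can be sketched in a few lines: a first-order mixing model for a white-water storage tower (the source of the transients) and a TOC-to-COD conversion coefficient for the load to the treatment plant. The coefficient value, tank size and flows below are invented for illustration, not taken from the thesis.

```python
K_COD_PER_TOC = 3.0   # hypothetical conversion coefficient, kg COD / kg TOC

def tower_step(c, c_in, flow, volume, dt):
    """One explicit time step of an ideally mixed storage tower:
    dc/dt = (flow / volume) * (c_in - c)."""
    return c + dt * flow / volume * (c_in - c)

# Drive the tower with a constant inlet TOC concentration; the outlet
# concentration relaxes exponentially towards the inlet value.
c, c_in = 0.0, 0.2                 # kg TOC / m^3
for _ in range(5000):
    c = tower_step(c, c_in, flow=100.0, volume=2000.0, dt=1.0)

cod_load = K_COD_PER_TOC * c * 100.0   # kg COD per time unit to treatment
```

In the real model, several such balances (pulp and white-water towers, dosages, production rates) are coupled to produce the day-average COD prediction.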

Relevance:

30.00%

Publisher:

Abstract:

Genetic diversity is one of the levels of biodiversity that the World Conservation Union (IUCN) has recognized as being important to preserve. This is because genetic diversity is fundamental to the future evolution and the adaptive flexibility of a species responding to the inherently dynamic nature of the natural world. Therefore, the key to maintaining biodiversity and healthy ecosystems is to identify, monitor and maintain locally adapted populations, along with their unique gene pools, upon which future adaptation depends. Thus, conservation genetics deals with the genetic factors that affect extinction risk and with the genetic management regimes required to minimize that risk. The conservation of exploited species, such as salmonid fishes, is particularly challenging due to the conflicts between different interest groups. In this thesis, I conduct a series of conservation genetic studies primarily on Finnish populations of two salmonid fish species (European grayling, Thymallus thymallus, and lake-run brown trout, Salmo trutta), which are popular recreational game fishes in Finland. The general aim of these studies was to apply and develop population genetic approaches to assist the conservation and sustainable harvest of these populations. The approaches applied included: i) the characterization of population genetic structure at national and local scales; ii) the identification of management units and the prioritization of populations for conservation based on the evolutionary forces shaping indigenous gene pools; iii) the detection of population declines and the testing of the assumptions underlying these tests; and iv) the evaluation of the contribution of natural populations to a mixed stock fishery. Based on microsatellite analyses, clear genetic structuring of the exploited Finnish grayling and brown trout populations was detected at both national and local scales.
Finnish grayling were clustered into three genetically distinct groups, corresponding to the northern, Baltic and south-eastern geographic areas of Finland. The genetic differentiation among and within population groups of grayling ranged from moderate to high levels. Such strong genetic structuring combined with low genetic diversity strongly indicates that genetic drift plays a major role in the evolution of grayling populations. Further analyses of European grayling covering the majority of the species' distribution range indicated a strong global footprint of population decline. Using a coalescent approach, the beginning of the population reduction was dated back to 1 000-10 000 years ago (ca. 200-2 000 generations). Forward simulations demonstrated that bottleneck footprints measured using the M ratio can persist within small populations much longer than previously anticipated in the face of low levels of gene flow. In contrast to the M ratio, two alternative methods for genetic bottleneck detection identified recent bottlenecks in six grayling populations that warrant future monitoring. Consistent with the predominant role of random genetic drift, the effective population size (Ne) estimates of all grayling populations were very low, with the majority of Ne estimates below 50. Taken together, the highly structured local populations, limited gene flow and small Ne of grayling populations indicate that grayling populations are vulnerable to overexploitation; hence, monitoring and careful management using precautionary principles are required not only in Finland but throughout Europe. Population genetic analyses of lake-run brown trout populations in the Inari basin (northernmost Finland) revealed a hierarchical population structure in which individual populations clustered into three population groups largely corresponding to different geographic regions of the basin.
Similar to my earlier work with European grayling, the genetic differentiation among and within population groups of lake-run brown trout was relatively high. Such strong differentiation indicated that the power to determine the relative contributions of populations in mixed fisheries should be relatively high. Consistent with these expectations, high accuracy and precision were observed in mixed stock analysis (MSA) simulations. Application of MSA to indigenous fish caught in the Inari basin identified altogether twelve populations that contributed significantly to the mixed stock fisheries, with the Ivalojoki river system being the major contributor (70%) to the total catch. When the contribution of wild trout populations to the fisheries was evaluated regionally, geographically nearby populations were the main contributors to the local catches. MSA also revealed a clear separation between the lower and upper reaches of the Ivalojoki river system: in contrast to the lower reaches of the Ivalojoki river, which contributed considerably to the catch, populations from the upper reaches of the river system (>140 km from the river mouth) did not contribute significantly to the fishery. This could be related to the available habitat size, but also to a resident-type life history and the increased cost of migration. The studies in my thesis highlight the importance of dense sampling and wide population coverage at the scale being studied, and also demonstrate the importance of critically evaluating the underlying assumptions of the population genetic models and methods used. These results have important implications for the conservation and sustainable fisheries management of Finnish populations of European grayling and of brown trout in the Inari basin.
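The M ratio mentioned above (Garza-Williamson) is simple to compute for a microsatellite locus: M = k / (r + 1), where k is the number of distinct alleles and r the allelic size range in repeat units. Values well below roughly 0.68 are conventionally read as a footprint of population decline, since bottlenecks remove alleles faster than they shrink the size range. The sketch below uses made-up allele sizes.

```python
def m_ratio(allele_sizes_bp, repeat_bp):
    """Garza-Williamson M ratio for one microsatellite locus.

    allele_sizes_bp : observed allele sizes in base pairs
    repeat_bp       : length of the repeat motif in base pairs
    """
    sizes = sorted(set(allele_sizes_bp))
    k = len(sizes)                              # number of distinct alleles
    r = (sizes[-1] - sizes[0]) // repeat_bp     # size range in repeat units
    return k / (r + 1)

# Example: a 2 bp motif where 4 of the 5 possible allele states in the
# observed range are still present -> M = 4 / 5 = 0.8.
m = m_ratio([100, 102, 104, 108], repeat_bp=2)
```

A bottlenecked population would typically show many "missing" intermediate states and hence a much lower M.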

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, a general approach is devised to model electrolyte sorption from aqueous solutions on solid materials. Electrolyte sorption is often considered an unwanted phenomenon in ion exchange, and its potential as an independent separation method has not been fully explored. The solid sorbents studied here are porous and non-porous organic or inorganic materials, with or without specific functional groups attached to the solid matrix. Accordingly, the sorption mechanisms include physical adsorption, chemisorption on the functional groups, and partition restricted by electrostatic or steric factors. The model is tested in four Case Studies dealing with chelating adsorption of transition metal mixtures, physical adsorption of metal and metalloid complexes from chloride solutions, size exclusion of electrolytes in nano-porous materials, and electrolyte exclusion of electrolyte/non-electrolyte mixtures. The model parameters are estimated using experimental data from equilibrium and batch kinetic measurements, and they are used to simulate actual single-column fixed-bed separations. The phase equilibrium between the solution and solid phases is described using the thermodynamic Gibbs-Donnan model and various adsorption models depending on the properties of the sorbent. The three-dimensional thermodynamic approach is used for volume sorption in gel-type ion exchangers and in nano-porous adsorbents, and satisfactory correlation is obtained provided that both mixing and exclusion effects are adequately taken into account. Two-dimensional surface adsorption models are successfully applied to the physical adsorption of complex species and to the chelating adsorption of transition metal salts. In the latter case, a comparison is also made with complex formation models.
The results of the mass transport studies show that uptake rates even in a competitive high-affinity system can be described by constant diffusion coefficients, when the adsorbent structure and the phase equilibrium conditions are adequately included in the model. Furthermore, a simplified solution based on the linear driving force approximation and the shrinking-core model is developed for very non-linear adsorption systems. In each Case Study, the actual separation is carried out batch-wise in fixed beds, and the experimental data are simulated/correlated using the parameters derived from the equilibrium and kinetic data. Good agreement between the calculated and experimental breakthrough curves is usually obtained, indicating that the proposed approach is useful in systems which at first sight are very different. For example, the important improvement in copper separation from concentrated zinc sulfate solution at elevated temperatures is correctly predicted by the model. In some cases, however, re-adjustment of the model parameters is needed due to, e.g., high solution viscosity.
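The linear driving force (LDF) approximation referred to above lumps the intraparticle diffusion resistance into a single rate constant, dq/dt = k (q* - q). The sketch below integrates this explicitly with illustrative parameter values; the thesis couples such a kinetic law to column balances to produce breakthrough curves.

```python
def ldf_uptake(q_eq, k, dt, n_steps, q0=0.0):
    """Explicit integration of the LDF rate law dq/dt = k * (q_eq - q).

    q_eq : equilibrium loading (from the isotherm at the local concentration)
    k    : lumped mass-transfer coefficient, 1/time
    """
    q, history = q0, []
    for _ in range(n_steps):
        q += dt * k * (q_eq - q)   # uptake proportional to the distance
        history.append(q)          # from equilibrium (the driving force)
    return history

# The particle loading relaxes exponentially towards q_eq.
uptake = ldf_uptake(q_eq=1.0, k=0.5, dt=0.01, n_steps=2000)
```

For strongly non-linear (favorable) isotherms the text notes that a shrinking-core model is used instead, since the concentration front inside the particle is then sharp rather than diffuse.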

Relevance:

30.00%

Publisher:

Abstract:

The direct-driven permanent magnet synchronous generator is one of the most promising topologies for megawatt-range wind power applications. The rotational speed of the direct-driven generator is very low compared with traditional electrical machines, and this low speed requires high torque to produce megawatt-range power. The special features of direct-driven generators caused by the low speed and high torque are discussed in this doctoral thesis. Low speed and high torque set high demands on the torque quality: the cogging torque and the load torque ripple must be as low as possible to prevent mechanical failures. In this doctoral thesis, various methods to improve the torque quality are compared with each other. Rotor surface shaping, magnet skew, magnet shaping, and the asymmetrical placement of magnets and stator slots are studied not only in terms of torque quality; their effects on the electromagnetic performance and the manufacturability of the machine are also discussed. The heat transfer of the direct-driven generator must be designed to handle the copper losses of a stator winding carrying a high current density and to keep the temperature of the magnets low enough. The cooling system of the direct-driven generator, applying doubly radial air cooling with numerous radial cooling ducts, was modeled with a lumped-parameter thermal network. The performance of the cooling system was discussed in steady and transient states, and the effect of the number and width of the radial cooling ducts was explored. A large number of radial cooling ducts drastically increases the impact of stack end area effects, because the stator stack then consists of numerous substacks. The effects of the radial cooling ducts on the effective axial length of the machine were studied by analyzing the cross-section of the machine in the axial direction. A method to compensate for the magnet end area leakage was considered.
The effects of the cooling ducts and the stack end area on the no-load voltages and inductances of the machine were explored by using numerical analysis tools based on the three-dimensional finite element method. The electrical efficiency of the permanent magnet machine with different control methods was estimated analytically over the whole speed and torque range, and the electrical efficiencies achieved with the most common control methods were compared with each other. The stator voltage increase caused by the armature reaction was analyzed, and the effect of inductance saturation as a function of load current was incorporated into the analytical efficiency calculation.
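A lumped-parameter thermal network of the kind mentioned above reduces to a linear system G T = P once the thermal conductances and loss injections are known. The two-node toy model below (winding and stator nodes coupled to ambient; all values invented for illustration) shows the mechanics of the steady-state solve.

```python
import numpy as np

G12 = 10.0            # winding-to-stator thermal conductance, W/K
G2a = 20.0            # stator-to-ambient thermal conductance, W/K
P1, P2 = 100.0, 0.0   # losses injected at the winding and stator nodes, W
T_amb = 40.0          # cooling air temperature, degC

# Nodal heat balance: heat leaving each node equals the loss injected there.
G = np.array([[ G12, -G12       ],
              [-G12,  G12 + G2a ]])
P = np.array([P1, P2 + G2a * T_amb])   # ambient coupling moved to the RHS

T_winding, T_stator = np.linalg.solve(G, P)   # 55.0 degC, 45.0 degC
```

A real generator network has tens or hundreds of such nodes (slot copper, teeth, yoke, magnets, cooling ducts); transient behavior follows by adding nodal heat capacitances and time-stepping the same matrices.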

Relevance:

30.00%

Publisher:

Abstract:

Glass is a unique material with a long history. Several glass products are used daily in our everyday life, often unnoticed. Glass can be found not only in obvious applications such as tableware, windows and light bulbs, but also in tennis rackets, windmill turbine blades, optical devices and medical implants. The glasses used at present as implants are inorganic silica-based melt-derived compositions, mainly for hard-tissue repair as bone graft substitutes in dentistry and orthopedics. The degree of glass reactivity desired varies according to the implantation situation, and it is vital that the ion release from any glasses used in medical applications is controlled. Understanding the in vitro dissolution rate of glasses provides a first approximation of their behavior in vivo. Specific studies concerning the dissolution properties of bioactive glasses have been relatively scarce and mostly confined to static conditions. The motivation behind this work was to develop a simple and accurate method for quantifying the in vitro dissolution rate of widely different glass compositions of interest for future clinical applications. By combining information from various experimental conditions, better knowledge of glass dissolution and of the suitability of different glasses for different medical applications can be obtained. Thus, two traditional approaches and one novel approach were utilized in this thesis to study glass dissolution. The chemical durability of silicate glasses was tested in water and in TRIS-buffered solution under static and dynamic conditions. Traditional in vitro testing with a TRIS-buffered solution under static conditions works well with bioactive or readily dissolving glasses, and it is easy to follow the ion dissolution reactions. However, in the buffered solution no marked differences between the more durable glasses were observed. The hydrolytic resistance of the glasses was studied using the standard procedure ISO 719.
The relative scale given by the standard failed to provide any relevant information when bioactive glasses were studied. However, the clear differences in the hydrolytic resistance values imply that the method could be used as a rapid test to get an overall idea of the biodegradability of glasses. The standard method combined with ion concentration and pH measurements gives a better estimate of the hydrolytic resistance, because of the high amount of silicon released from a glass. A sensitive on-line analysis method utilizing an inductively coupled plasma optical emission spectrometer (ICP-OES) and a flow-through micro-volume pH electrode was developed to study the initial dissolution of biocompatible glasses. This approach was found suitable for compositions within a large range of chemical durability. With this approach, the initial dissolution of all ions could be measured simultaneously and quantitatively, which gave a good overall idea of the initial dissolution rates of the individual ions and of the dissolution mechanism. Results of this type on glass dissolution were presented for the first time during the course of writing this thesis. Based on the initial dissolution patterns obtained with the novel approach in TRIS, the experimental glasses could be divided into four distinct categories. The initial dissolution patterns of the glasses correlated well with the anticipated bioactivity. Moreover, the normalized surface-specific mass loss rates, the different in vivo models and the actual in vivo data correlated well. The results suggest that this type of approach can be used for prescreening the suitability of novel glass compositions for future clinical applications. Furthermore, the results shed light on the possible bioactivity of glasses. An additional goal in this thesis was to gain insight into the phase changes occurring during various heat treatments of glasses with three selected compositions.
Engineering-type T-T-T curves for glasses 1-98 and 13-93 were established. The information gained is essential in manufacturing amorphous porous implants or for drawing continuous fibers of the glasses. Although both glasses can be hot worked into amorphous products under carefully controlled conditions, 1-98 showed an order of magnitude greater nucleation and crystal growth rate than 13-93. Thus, 13-93 is better suited than 1-98 for working processes which require long residence times at high temperatures. It was also shown that amorphous and partially crystalline porous implants can be sintered from bioactive glass S53P4. Surface crystallization of S53P4, forming Na2O∙CaO∙2SiO2, was observed to start at 650°C. The secondary crystals of Na2Ca4(PO4)2SiO4, reported for the first time in this thesis, were detected at higher temperatures, from 850°C to 1000°C. The crystal phases formed affected the dissolution behavior of the implants in simulated body fluid. This study opens up new possibilities for using S53P4 to manufacture various structures, while tailoring their bioactivity by controlling the proportions of the different phases. The results obtained in this thesis give valuable additional information and tools to the state of the art for designing glasses for future clinical applications. With the knowledge gained we can identify different dissolution patterns and use this information to improve the tuning of glass compositions. In addition, the novel on-line analysis approach provides an excellent opportunity to further enhance our knowledge of glass behavior in simulated body conditions.
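The normalized surface-specific mass loss used to compare dissolution data between glasses is commonly defined as NL_i = c_i V / (f_i S), where c_i is the measured solution concentration of element i, V the solution volume, f_i the mass fraction of the element in the glass and S the exposed surface area. This is the textbook normalization, not necessarily the exact variant used in the thesis, and the example numbers are invented.

```python
def normalized_loss(c_i_g_per_l, volume_l, mass_fraction, area_m2):
    """Normalized surface-specific mass loss NL_i in g/m^2.

    c_i_g_per_l   : solution concentration of element i, g/L
    volume_l      : solution volume, L
    mass_fraction : mass fraction of element i in the glass (0..1)
    area_m2       : exposed glass surface area, m^2
    """
    return c_i_g_per_l * volume_l / (mass_fraction * area_m2)

# Example: 5 mg/L of Si measured in 0.1 L of solution, from a glass with
# 25 wt-% Si and 5 cm^2 of exposed surface -> 4.0 g/m^2.
nl_si = normalized_loss(0.005, 0.1, 0.25, 5e-4)
```

Dividing by the mass fraction makes losses of different elements directly comparable: congruent dissolution gives the same NL_i for every element, while selective leaching shows up as unequal values.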

Relevance:

30.00%

Publisher:

Abstract:

Fuel cells are a promising alternative for clean and efficient energy production. A fuel cell is probably the most demanding of all distributed generation power sources. It resembles a solar cell in many ways, but sets strict limits on current ripple, common-mode voltages and load variations. The typically low output voltage of the fuel cell stack needs to be boosted to a higher voltage level for grid interfacing. Owing to the high electrical efficiency of the fuel cell, there is a need for high-efficiency power converters; in the case of low voltage, high current and galvanic isolation, the implementation of such converters is not a trivial task. This thesis presents galvanically isolated DC-DC converter topologies that have favorable characteristics for fuel cell usage and reviews the topologies from the viewpoint of electrical efficiency and cost efficiency. The focus is on evaluating the design issues for a single converter module subject to large current stresses. Conduction losses are the dominant loss mechanism in low-voltage, high-current applications. In the case of MOSFETs, the conduction losses can be efficiently reduced by paralleling, but in the case of diodes, the effectiveness of paralleling depends strongly on the semiconductor material, the diode parameters and the output configuration. The transformer winding losses can be a major source of losses if the windings are not optimized for the topology and the operating conditions. Transformer prototyping can be expensive and time-consuming, and it is thus preferable to use various calculation methods during the design process to evaluate the performance of the transformer. This thesis reviews calculation methods for solid wire, litz wire and copper foil winding losses, and in order to evaluate the applicability of the methods, the calculations are compared against measurements and FEM simulations.
By selecting a proper calculation method for each winding type, the winding losses can be predicted quite accurately before actually constructing the transformer. The transformer leakage inductance, the amount of which can also be calculated with reasonable accuracy, has a significant impact on the semiconductor switching losses. Therefore, the leakage inductance effects should also be taken into account when considering the overall efficiency of the converter. It is demonstrated in this thesis that although there are some distinctive differences in the loss distributions between the converter topologies, the differences in the overall efficiency can remain within a range of a few percentage points. However, the optimization effort required in order to achieve the high efficiencies is quite different in each topology. In the presence of practical constraints such as manufacturing complexity or cost, the question of topology selection can become crucial.
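One classical winding-loss calculation of the kind the abstract refers to is Dowell's one-dimensional method for foil windings, which predicts the AC-to-DC resistance ratio from the foil thickness, frequency, and number of layers. The sketch below is a minimal illustration of that method; the foil thickness, layer count, and switching frequency are assumed example values, not figures from the thesis.

```python
import math

# Hedged sketch of Dowell's winding-loss estimate for a copper foil winding.
# All dimensions and frequencies below are illustrative assumptions.

RHO_CU = 1.68e-8          # copper resistivity at 20 degC, ohm*m
MU0 = 4e-7 * math.pi      # vacuum permeability, H/m

def skin_depth(freq_hz, rho=RHO_CU):
    """Skin depth: delta = sqrt(rho / (pi * f * mu0))."""
    return math.sqrt(rho / (math.pi * freq_hz * MU0))

def dowell_ac_factor(layer_thickness_m, freq_hz, n_layers):
    """Rac/Rdc for a foil winding per Dowell's one-dimensional analysis:
    a skin-effect term plus a proximity-effect term that grows with the
    square of the layer count."""
    d = layer_thickness_m / skin_depth(freq_hz)  # penetration ratio
    skin = d * (math.sinh(2 * d) + math.sin(2 * d)) / (math.cosh(2 * d) - math.cos(2 * d))
    prox = d * (2 * (n_layers ** 2 - 1) / 3) * \
        (math.sinh(d) - math.sin(d)) / (math.cosh(d) + math.cos(d))
    return skin + prox

# 0.2 mm foil, 4 layers, 100 kHz switching frequency (assumed working point)
print(round(dowell_ac_factor(0.2e-3, 100e3, 4), 2))
```

The proximity term dominating at higher layer counts is why interleaving primary and secondary layers is a common way to keep copper losses down in such transformers.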

Abstract:

The general trend towards increasing efficiency and energy density drives the industry to high-speed technologies. Active Magnetic Bearings (AMBs) are one of the technologies that allow contactless support of a rotating body. Theoretically, there are no limitations on the rotational speed. The absence of friction, low maintenance cost, micrometer precision, and programmable stiffness have made AMBs a viable choice for demanding applications. Along with the advances in power electronics, such as significantly improved reliability and cost, AMB systems have gained wide adoption in the industry. The AMB system is a complex, open-loop unstable system with multiple inputs and outputs. For normal operation, such a system requires feedback control. To meet the high demands for performance and robustness, model-based control techniques should be applied. These techniques require an accurate plant model description and uncertainty estimations, and the advanced control methods require more effort at the commissioning stage. In this work, a methodology is developed for the automatic commissioning of a subcritical, rigid gas blower machine. The commissioning process includes open-loop tuning of separate parts such as sensors and actuators. The next step is to apply a system identification procedure to obtain a model for the controller synthesis. Finally, a robust model-based controller is synthesized and experimentally evaluated over the full operating range of the system. The commissioning procedure is developed using only the available system components and a priori knowledge, without any additional hardware. Thus, the work provides an intelligent system with a self-diagnostics feature and automatic commissioning.
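The open-loop instability mentioned above comes from the negative position stiffness of a magnetic bearing: the attractive force grows as the rotor approaches a pole, pushing it further off center. A minimal one-axis sketch, with illustrative parameter values rather than those of the gas blower in the thesis, shows the unstable open-loop pole and how a simple PD current law moves all poles into the left half-plane:

```python
import numpy as np

# Hedged one-axis rigid-rotor AMB model. k_s > 0 is the (destabilizing)
# force/position gain, k_i the force/current gain. Values are illustrative.
m, k_s, k_i = 5.0, 2.0e5, 50.0   # mass [kg], N/m, N/A

# Open loop: state x = [position, velocity], x_dot = A x
A_open = np.array([[0.0, 1.0],
                   [k_s / m, 0.0]])
print(max(np.linalg.eigvals(A_open).real) > 0)   # True: unstable pole

# PD current law i = -(k_p * x + k_d * v) closes the loop
k_p, k_d = 8.0e3, 40.0
A_closed = np.array([[0.0, 1.0],
                     [(k_s - k_i * k_p) / m, -k_i * k_d / m]])
print(max(np.linalg.eigvals(A_closed).real) < 0)  # True: all poles stable
```

In practice, model-based synthesis as described in the abstract replaces the hand-tuned PD gains with a controller designed against the identified plant model and its uncertainty bounds, but the stabilization requirement is the same.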

Abstract:

The ongoing global financial crisis has demonstrated the importance of a system-wide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that may eventually lead to a systemic financial crisis. Effective tools are crucial, as they allow early policy actions to decrease or prevent a further build-up of risks or to otherwise enhance the shock absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: i) build-up of widespread imbalances, ii) exogenous aggregate shocks, and iii) contagion. Accordingly, the systemic risks are matched by three categories of analytical methods for decision support: i) early-warning, ii) macro stress-testing, and iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet, the literature lacks a focus on the task of risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus is on a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: i) to function as a display for individual data concerning entities and their time series, and ii) to serve as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data and methods.
The following five questions comprise the subsequent steps addressed in this thesis: 1. What are the needs for macroprudential oversight? 2. What form do macroprudential data take? 3. Which data and dimension reduction methods hold most promise for the task? 4. How should the methods be extended and enhanced for the task? 5. How should the methods and their extensions be applied to the task? Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. The thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: i) fuzzifications, ii) transition probabilities, and iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, this thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
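The standard SOM on which the SOFSM builds projects high-dimensional vectors onto a low-dimensional grid: each input is assigned to its best-matching unit, and that unit and its grid neighbors are pulled toward the input. The sketch below is a minimal online-SOM illustration with random data; the grid size, learning schedule, and data are assumed for illustration and have nothing to do with the thesis configuration.

```python
import numpy as np

# Hedged minimal sketch of the classic online SOM update rule.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 4))          # stand-in "indicator" vectors
grid_w, grid_h, dim = 6, 6, 4
weights = rng.normal(size=(grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

def train(w, data, epochs=20, lr0=0.5, sigma0=3.0):
    w = w.copy()
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighborhood
        for x in data:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian neighborhood
            w += lr * h[:, None] * (x - w)
    return w

def quantization_error(w):
    return np.mean([((w - x) ** 2).sum(axis=1).min() for x in data])

trained = train(weights, data)
print(quantization_error(trained) < quantization_error(weights))  # True
```

On the trained grid, each unit's 2-D coordinate becomes the display position onto which entity-level data, time trajectories, or the thesis's extensions (fuzzifications, transition probabilities, network links) can then be overlaid.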

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit, and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independence of the nodes also implies that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application, while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
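The execution model described above, namely actors that communicate only through FIFO queues and fire whenever their firing rule is satisfied, can be sketched in a few lines. This is a generic illustration of the dataflow principle with a trivial dynamic scheduler, not RVC-CAL syntax or the thesis's scheduling machinery; actor names and the token values are made up for the example.

```python
from collections import deque

# Hedged sketch: actors connected only by FIFO queues; an actor may fire
# whenever enough input tokens are available (its firing rule).

class Actor:
    def __init__(self, fn, tokens_needed, inputs, output):
        self.fn = fn                        # the computation performed per firing
        self.tokens_needed = tokens_needed  # firing rule: tokens required per input
        self.inputs, self.output = inputs, output

    def can_fire(self):
        return all(len(q) >= self.tokens_needed for q in self.inputs)

    def fire(self):
        args = [q.popleft() for q in self.inputs
                for _ in range(self.tokens_needed)]
        self.output.append(self.fn(*args))

# A two-actor pipeline: source tokens -> double -> increment -> sink queue
q_in, q_mid, q_out = deque([1, 2, 3]), deque(), deque()
double = Actor(lambda x: 2 * x, 1, [q_in], q_mid)
inc = Actor(lambda x: x + 1, 1, [q_mid], q_out)

# Trivial dynamic scheduler: keep firing any actor whose rule holds
while double.can_fire() or inc.can_fire():
    for actor in (double, inc):
        if actor.can_fire():
            actor.fire()

print(list(q_out))  # [3, 5, 7]
```

The `while` loop is exactly the run-time overhead quasi-static scheduling tries to minimize: for this pipeline the firing order is fully predictable, so a static schedule (fire `double`, then `inc`, three times) could replace the run-time rule checks entirely.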

Abstract:

Scanning optics create different types of phenomena and limitations in the cladding process compared to cladding with static optics. This work concentrates on identifying and explaining the special features of laser cladding with scanning optics. Scanner optics changes the energy input mechanics of the cladding process: laser energy is introduced into the process through a relatively small laser spot which moves rapidly back and forth, distributing the energy over a relatively large area. The moving laser spot was observed to cause dynamic movement in the melt pool. Due to the different energy input mechanism, scanner optics can make the cladding process unstable if the parameter selection is not done carefully. The laser beam intensity and the scanning frequency in particular have a significant role in process stability. The scanning frequency determines how long the laser beam interacts with a specific location, i.e., the local specific energy input. It was determined that if the scanning frequency is too low, under 40 Hz, the scanned beam can start to vaporize material. The intensity, in turn, determines in how large a package this energy is delivered: if the intensity of the laser beam was too high, over 191 kW/cm², the laser beam started to vaporize material. When vapor formation was observed in the melt pool, the process started to resemble laser alloying due to the deep penetration of the laser beam into the substrate. Scanner optics enables more flexibility in the process than static optics. Numerical adjustment of the scanning amplitude enables adjustment of the clad bead width. In turn, scanner power modulation (where the laser power is adjusted according to where the scanner is pointing) enables modification of the clad bead cross-section geometry, as the laser power can be adjusted locally and thus affects how much material the laser beam melts in each sector. Power modulation is also an important factor in terms of process stability.
When a linear scanner is used, the oscillation of the scanning mirror causes a dwell time at the borders of the scanning amplitude, where the mirror changes its direction of movement. This can cause excessive energy input to this area, which in turn can cause vaporization and process instability. This instability can be avoided by decreasing the energy in this region through power modulation. Powder feeding parameters have a significant role in process stability. It was determined that with certain powder feeding parameter combinations the powder cloud behavior became unstable due to vaporizing powder material in the powder cloud. This was mainly noticed when the scanning frequency or the powder feeding gas flow, or both, were low, or when a steep powder feeding angle was used. When powder material vaporization occurred, it created a vapor flow which prevented the powder material from reaching the melt pool, and thus dilution increased. Powder material vaporization was also noticed to produce emission of light in the wavelength range of visible light. The intensity of this emission was observed to correlate with the amount of vaporization in the powder cloud.
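The two stability limits named in the abstract, a peak intensity of roughly 191 kW/cm² and a minimum scanning frequency of about 40 Hz, lend themselves to a simple parameter check. The sketch below computes the average intensity of a circular spot and tests both limits; the laser power and spot diameter are assumed example values, not parameters from the study.

```python
import math

# Hedged parameter-window check using the two thresholds reported in the
# abstract. Power and spot size below are illustrative assumptions.

VAPOR_INTENSITY_KW_CM2 = 191.0   # intensity above which vaporization started
MIN_SCAN_FREQ_HZ = 40.0          # below this, local dwell vaporized material

def spot_intensity_kw_cm2(power_w, spot_diameter_mm):
    """Average intensity of a circular spot: P / (pi * r^2)."""
    r_cm = spot_diameter_mm / 2 / 10          # mm -> cm
    return (power_w / 1000) / (math.pi * r_cm ** 2)

def parameters_stable(power_w, spot_diameter_mm, scan_freq_hz):
    ok_intensity = spot_intensity_kw_cm2(power_w, spot_diameter_mm) <= VAPOR_INTENSITY_KW_CM2
    ok_frequency = scan_freq_hz >= MIN_SCAN_FREQ_HZ
    return ok_intensity and ok_frequency

# 4 kW into a 2 mm spot scanned at 100 Hz: about 127 kW/cm^2, inside the window
print(parameters_stable(4000, 2.0, 100.0))  # True
print(parameters_stable(4000, 2.0, 20.0))   # False: scanning frequency too low
```

A real process window would also involve the dwell time at the amplitude borders and the powder feeding parameters discussed above, but this average-intensity check already captures why a smaller spot or higher power can tip the process from cladding toward alloying.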