1000 results for väitöskirja


Abstract:

The possibility and usefulness of applying plasma keyhole welding to structural steels with different compositions and material thicknesses, and in various welding positions, have been examined. Single-pass butt welding with an I groove in the flat, horizontal-vertical and vertical positions, and root welding with V, Y and U grooves of thick plate material in the flat position, have been studied, and welds of high quality have been obtained. The technological conditions for successful welding are presented. The single and interactive effects of welding parameters on weld quality, especially on surface weld defects, geometrical form errors, internal defects and the mechanical properties (strength, ductility, impact toughness, hardness and bendability) of the weld joint, are presented. Welding parameter combinations providing the best quality welds are also presented.

Abstract:

The basic goal of this study is to extend old and propose new ways to generate knapsack sets suitable for use in public key cryptography. The knapsack problem and its cryptographic use are reviewed in the introductory chapter. Terminology is based on common cryptographic vocabulary. For example, solving the knapsack problem (which is here a subset sum problem) is termed decipherment. Chapter 1 also reviews the most famous knapsack cryptosystem, the Merkle-Hellman system. It is based on a superincreasing knapsack and uses modular multiplication as a trapdoor transformation. The insecurity caused by these two properties exemplifies the two general categories of attacks against knapsack systems. These categories provide the motivation for Chapters 2 and 4. Chapter 2 discusses the density of a knapsack and the dangers of having a low density. Chapter 3 interrupts the more abstract treatment for a while by showing examples of small injective knapsacks and extrapolating conjectures on some characteristics of knapsacks of larger size, especially their density and number. The most common trapdoor technique, modular multiplication, is likely to cause insecurity, but as argued in Chapter 4, it is difficult to find any other simple trapdoor techniques. This discussion also provides a basis for the introduction of various categories of non-injectivity in Chapter 5. Besides general ideas on the non-injectivity of knapsack systems, Chapter 5 introduces and evaluates several ways to construct such systems, most notably the "exceptional blocks" in superincreasing knapsacks and the use of "too small" a modulus in the modular multiplication used as a trapdoor technique. The author believes that non-injectivity is the most promising direction for the development of knapsack cryptosystems. Chapter 6 modifies two well-known knapsack schemes, the Merkle-Hellman multiplicative trapdoor knapsack and the Graham-Shamir knapsack. The main interest is in aspects other than non-injectivity, although that is also exploited. At the end of the chapter, constructions proposed by Desmedt et al. are presented to serve as a comparison for the developments of the subsequent three chapters. Chapter 7 provides a general framework for the iterative construction of injective knapsacks from smaller knapsacks, together with a simple example, the "three elements" system. In Chapters 8 and 9 the general framework is put into practice in two different ways. Modularly injective small knapsacks are used in Chapter 8 to construct a large knapsack, which is called the congruential knapsack. The addends of a subset sum can be found by decrementing the sum iteratively, using each of the small knapsacks and their moduli in turn. The construction is also generalized to the non-injective case, which can lead to especially good results in density without complicating the deciphering process too much. Chapter 9 presents three related ways to realize the general framework of Chapter 7. The main idea is to join iteratively small knapsacks, each element of which would satisfy the superincreasing condition. As a whole, none of these systems need become superincreasing, though the density develops no better than that. The new knapsack systems are injective, but they can be deciphered with the same searching method as the non-injective knapsacks with the "exceptional blocks" of Chapter 5. The final Chapter 10 first reviews the Chor-Rivest knapsack system, which has withstood all cryptanalytic attacks. A couple of modifications to the use of this system are presented in order to further increase its security or to make the construction easier. The latter goal is pursued by reducing the size of the Chor-Rivest knapsack embedded in the modified system.
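The Merkle-Hellman construction reviewed in Chapter 1 can be illustrated with a minimal sketch; the toy key sizes below are for exposition only (the scheme is insecure regardless), and the helper names are not from the thesis:

```python
# A minimal sketch of the basic Merkle-Hellman knapsack cryptosystem:
# a superincreasing private knapsack disguised by modular multiplication.
from math import gcd
import random

def keygen(n=8):
    # Private key: superincreasing sequence b, modulus m > sum(b), multiplier w.
    b, total = [], 0
    for _ in range(n):
        x = total + random.randint(1, 10)   # each element exceeds sum of previous
        b.append(x)
        total += x
    m = total + random.randint(1, 10)
    w = next(x for x in range(2, m) if gcd(x, m) == 1)
    a = [(w * x) % m for x in b]            # public "hard" knapsack
    return a, (b, m, w)

def encrypt(bits, a):
    # Ciphertext is the subset sum selected by the plaintext bits.
    return sum(ai for bit, ai in zip(bits, a) if bit)

def decrypt(c, key):
    b, m, w = key
    s = (c * pow(w, -1, m)) % m             # undo the modular multiplication
    bits = []
    for x in reversed(b):                   # greedy decipherment: superincreasing
        bits.append(1 if s >= x else 0)
        if bits[-1]:
            s -= x
    return bits[::-1]

a, priv = keygen()
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg, a), priv) == msg
```

The two private ingredients, the superincreasing sequence and the modular multiplication, are exactly the properties whose weaknesses motivate Chapters 2 and 4.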

Abstract:

Fatal and permanently disabling accidents form only one per cent of all occupational accidents, but in many branches of industry they account for more than half of the accident costs. Furthermore, the human suffering of the victim and his family is greater in severe accidents than in slight ones. For both human and economic reasons, severe accident risks should be identified before injuries occur. It is for this purpose that different safety analysis methods have been developed. This study presents two new possible approaches to the problem. The first is the hypothesis that it is possible to estimate the potential severity of accidents independently of their actual severity. The second is the hypothesis that when workers are also asked to report near accidents, they are particularly prone to report potentially severe near accidents on the basis of their own subjective risk assessment. A field study was carried out in a steel factory. The results supported both hypotheses. The reliability and validity of post-incident estimates of an accident's potential severity were reasonable. About 10% of accidents were estimated to be potentially critical: they could have led to death or very severe permanent disability. Reported near accidents were significantly more severe; about 60% of them were estimated to be critical. Furthermore, the validity of the workers' subjective risk assessment, manifested in the near accident reports, proved to be reasonable. The new methods studied require further development and testing. They could be used both routinely in workplaces and in research for identifying and prioritizing accident risks.

Abstract:

This thesis includes several thermal hydraulic analyses related to the Loviisa VVER 440 nuclear power plant units. The work consists of experimental studies, analysis of the experiments, analysis of some plant transients and development of a calculation model for boric acid concentrations in the reactor. In the first part of the thesis, concerning boric acid solution behaviour during the long-term cooling period of LOCAs, experiments were performed in scaled-down test facilities. The experimental data, together with the results of RELAP5/MOD3 simulations, were used to develop a model for calculating boric acid concentrations in the reactor during LOCAs. The results of the calculations showed that the margins to critical concentrations that would lead to boric acid crystallization were large, both in the reactor core and in the lower plenum. This was mainly caused by the fact that water in the primary cooling circuit contains borax (Na2B4O7·10H2O), which enters the reactor when ECC water is taken from the sump and greatly increases the solubility of boric acid in water. In the second part, concerning the simulation of horizontal steam generators, experiments were performed with the PACTEL integral test loop to simulate loss-of-feedwater transients. The PACTEL experiments, as well as earlier REWET III natural circulation tests, were analyzed with the RELAP5/MOD3 version 5m5 code. The analysis showed that the code was capable of simulating the main events during the experiments. However, in the case of loss of secondary side feedwater, the code was not completely capable of simulating steam superheating on the secondary side of the steam generators. The third part of the work consists of simulations of Loviisa VVER reactor pump trip transients with the RELAP5/MOD1-Eur, RELAP5/MOD3 and CATHARE codes. All three codes were capable of simulating the two selected pump trip transients, and no significant differences were found between the results of the different codes. Comparison of the calculated results with data measured at the Loviisa plant also showed good agreement.

Abstract:

Nanofiltration performance was studied with effluents from the pulp and paper industry and with model substances. The effect of filtration conditions and membrane properties on nanofiltration flux, retention, and fouling was investigated. Generally, the aim was to determine the parameters that influence nanofiltration efficiency and to study how nanofiltration can be carried out without fouling by controlling these parameters. The retentions of the nanofiltration membranes studied were considerably higher than those of tight ultrafiltration membranes, while the permeate fluxes obtained were approximately the same as those of tight ultrafiltration membranes. Generally, retentions of about 80% for total carbon and conductivity were obtained during the nanofiltration experiments. Depending on the membrane and the filtration conditions, the retentions of monovalent ions (chloride) were between 80 and 95% in the nanofiltrations. An increase in pH improved retentions considerably and also improved the flux to some degree. An increase in pressure improved retention, whereas an increase in temperature decreased retention if the membrane retained the solute by the solution-diffusion mechanism. In this study, more open membranes fouled more than tighter membranes due to higher concentration polarization and plugging of the membrane material. More irreversible fouling was measured for hydrophobic membranes. Electrostatic repulsion between the membrane and the components in the solution reduced fouling but did not completely prevent it with the hydrophobic membranes. Nanofiltration could be carried out without fouling, at least with the laboratory-scale apparatus used here, when the flux was below the critical flux. The model substances showed a strong form of the critical flux, but the effluents showed only a weak form. With the effluents, some fouling always occurred immediately when the filtration was started; however, if the flux was below the critical flux, further fouling was not observed. The flow velocity and pH, along with the membrane properties, were probably the most important parameters influencing the critical flux. Precleaning of the membranes had only a small effect on the critical flux and retentions, but it improved the permeability of the membranes significantly.
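The retention figures quoted above follow the standard observed-retention definition; a minimal sketch of the calculation (the sample concentrations are invented for illustration, not data from the study):

```python
def observed_retention(c_feed: float, c_permeate: float) -> float:
    """Observed retention R = 1 - c_permeate / c_feed, as a fraction."""
    return 1.0 - c_permeate / c_feed

# Illustrative values only: a chloride feed of 1000 mg/L and a permeate
# of 120 mg/L give R = 0.88, i.e. within the 80-95% range reported above.
print(f"{observed_retention(1000.0, 120.0):.0%}")  # -> 88%
```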

Abstract:

This study presents mathematical methods for evaluating retail performance with special regard to product sourcing strategies. Forecast accuracy, process lead time, the offshore/local sourcing mix and the up-front/replenishment buying mix are defined as critical success factors in connection with sourcing seasonal products with a fashion content. As success measures, this research focuses on service level, lost sales, product substitute percentage, gross margin, gross margin return on inventory and markdown rate. The accuracy of the demand forecast is found to be a fundamental success factor. Forecast accuracy depends on lead time. Lead times are traditionally long, and buying decisions are made seven to eight months prior to the start of the selling season. Forecast errors cause stockouts and lost sales. Some of the products bought for the selling season will not be sold and have to be marked down and sold at clearance, causing a loss of gross margin. The gross margin percentage is not the best tool for evaluating sourcing decisions, and in the context of this study gross margin return on inventory, which combines profitability and asset management, is used. The findings of this research suggest that there are more profitable ways of sourcing products than buying them from low-cost offshore sources. Mixing up-front and in-season replenishment deliveries, especially when point-of-sale information is used for improving forecast accuracy, results in better retail performance. Quick Response and Vendor Managed Inventory strategies yield better results than traditional up-front buying from offshore even if local purchase prices are higher. Increasing the number of selling seasons, slight over-buying for the season in order to
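Gross margin return on inventory (GMROI), the success measure preferred above, combines profitability with asset management; a minimal sketch of the usual definition (the figures are invented for illustration):

```python
def gmroi(gross_margin: float, average_inventory_cost: float) -> float:
    """Gross margin return on inventory: margin earned per unit of
    money tied up in inventory over the period."""
    return gross_margin / average_inventory_cost

# Illustrative only: a 400 kEUR gross margin on an average inventory of
# 250 kEUR (at cost) gives a GMROI of 1.6.
print(gmroi(400.0, 250.0))  # -> 1.6
```

Unlike the gross margin percentage alone, this ratio penalizes a sourcing strategy that earns its margin only by tying up large amounts of capital in seasonal stock.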

Abstract:

Joints intended for welding frequently show variations in geometry and position, for which it is unfortunately not possible to apply a single set of operating parameters to ensure constant quality. The cause of this difficulty lies in a number of factors, including inaccurate joint preparation and joint fit-up, tack welds, and thermal distortion of the workpiece. In plasma arc keyhole welding of butt joints, deviations in the gap width may cause weld defects such as an incomplete weld bead, excessive penetration and burn-through. Manual adjustment of welding parameters to compensate for variations in the gap width is very difficult, and unsatisfactory weld quality is often obtained. In this study, a control system for plasma arc keyhole welding was developed and used to study the effects of real-time control of welding parameters on gap tolerance during welding of austenitic stainless steel AISI 304L. The welding tests demonstrated the beneficial effect of real-time control on weld quality. Compared with welding using constant parameters, the maximum tolerable gap width with an acceptable weld quality was 47% higher when using the real-time controlled parameters for a plate thickness of 5 mm. In addition, burn-through occurred at significantly larger gap widths when the parameters were controlled in real time. Increased gap tolerance enables joints to be prepared and fitted up less accurately, saving time and preparation costs for welding. In addition to the control system, a novel technique for back face monitoring is described in this study. The test results showed that the technique could be successfully applied to penetration monitoring when welding non-magnetic materials. The results also imply that it is possible to measure the dimensions of the plasma efflux or weld root and use this information in a feedback control system and thus maintain the required weld quality.
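The abstract does not spell out the control law, but the idea of scheduling welding parameters against a measured gap width can be illustrated with a simple sketch; all gains, limits and parameter names below are hypothetical, not the thesis controller:

```python
def adjust_parameters(gap_mm: float,
                      base_current_a: float = 200.0,
                      base_speed_mm_s: float = 4.0):
    """Hypothetical real-time parameter scheduling: wider gaps get less
    heat input (lower current and travel speed) to avoid burn-through.
    All gains and limits are invented for illustration."""
    k_current = 15.0   # A per mm of gap (assumed)
    k_speed = 0.4      # mm/s per mm of gap (assumed)
    current = max(120.0, base_current_a - k_current * gap_mm)
    speed = max(2.0, base_speed_mm_s - k_speed * gap_mm)
    return current, speed

for gap in (0.0, 0.5, 1.0, 1.5):
    print(gap, adjust_parameters(gap))
```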

Abstract:

The amphiphilic nature of metal extractants causes the formation of micelles and other microscopic aggregates when in contact with water and an organic diluent. These phenomena and their effects on metal extraction were studied using carboxylic acid (Versatic 10) and organophosphorus acid (Cyanex 272) based extractants. Special emphasis was laid on the study of phase behaviour in a pre-neutralisation stage, when the extractant is transformed into its sodium or ammonium salt form. The pre-neutralised extractants were used to extract nickel and to separate cobalt and nickel. Phase diagrams corresponding to the pre-neutralisation stage of a metal extraction process were determined. The maximal solubilisation of the components in the system water(NH3)/extractant/isooctane takes place when the molar ratio between the ammonium salt form and the free form of the extractant is 0.5 for the carboxylic acid and 1 for the organophosphorus acid extractant. These values correspond to the complex stoichiometries of NH4A·HA and NH4A, respectively. When such a solution is contacted with water, a microemulsion is formed. If the aqueous phase also contains metal ions (e.g. Ni2+), complexation will take place at the microscopic interface of the micellar aggregates. Experimental evidence was obtained showing that the initial stage of nickel extraction with pre-neutralised Versatic 10 is a fast pseudohomogeneous reaction. About 90% of the metal was extracted in the first 15 s after the initial contact. For nickel extraction with pre-neutralised Versatic 10 it was found that the highest metal loading and the lowest residual ammonia and water contents in the organic phase are achieved when the feeds are balanced to the stoichiometry 2NH4+(org) = Ni2+(aq). In the case of Co/Ni separation using pre-neutralised Cyanex 272, the highest separation is achieved when the Co/extractant molar ratio in the feeds is 1:4 and, at the same time, the optimal degree of neutralisation of the Cyanex 272 is about 50%. The adsorption of the extractants on solid surfaces may cause accumulation of fine solid particles at the interface between the aqueous and organic phases in metal extraction processes. Copper extraction processes are known to suffer from this problem. Experiments were carried out using model silica and mica particles. It was found that high copper loading, the aromaticity of the diluent, modification agents and the presence of an aqueous phase decrease the adsorption of the hydroxyoxime on silica surfaces.
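The feed balancing rule 2NH4+(org) = Ni2+(aq) quoted above amounts to a simple stoichiometric check; the sketch below (all flows, concentrations and function names are invented illustrations) computes the organic feed flow needed to match a given nickel feed:

```python
def balanced_extractant_feed(ni_conc_mol_l: float,
                             aq_flow_l_h: float,
                             neutralisation_degree: float,
                             extractant_conc_mol_l: float) -> float:
    """Organic flow (L/h) supplying 2 mol of ammonium-salt extractant per
    mol of Ni2+ in the aqueous feed, per the 2NH4+(org) = Ni2+(aq) rule.
    neutralisation_degree is the fraction of the extractant in the
    ammonium salt form; all values are illustrative."""
    ni_mol_h = ni_conc_mol_l * aq_flow_l_h
    nh4a_needed_mol_h = 2.0 * ni_mol_h
    nh4a_conc_mol_l = neutralisation_degree * extractant_conc_mol_l
    return nh4a_needed_mol_h / nh4a_conc_mol_l

# Illustrative: 0.2 mol/L Ni2+ at 100 L/h with a fully pre-neutralised
# 0.5 M extractant -> 80 L/h of organic feed.
print(balanced_extractant_feed(0.2, 100.0, 1.0, 0.5))
```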

Abstract:

The general striving to bring down the number of municipal landfills and to increase the reuse and recycling of waste-derived materials across the EU supports the debates concerning the feasibility and rationality of waste management systems. A substantial decrease in the volume and mass of landfill-disposed waste flows can be achieved by directing suitable waste fractions to energy recovery. Global fossil energy supplies are becoming ever more valuable and expensive energy sources, and efforts to save fossil fuels have been made. Waste-derived fuels offer one potential partial solution to two different problems. First, waste that cannot feasibly be re-used or recycled is utilized in the energy conversion process according to the EU's Waste Hierarchy. Second, fossil fuels can be saved for purposes other than energy, mainly as transport fuels. This thesis presents the principles of assessing the most sustainable system solution for an integrated municipal waste management and energy system. The assessment process includes:
· formation of a SISMan (Simple Integrated System Management) model of an integrated system, including mass, energy and financial flows; and
· formation of a MEFLO (Mass, Energy, Financial, Legislational, Other decision-support data) decision matrix according to the selected decision criteria, including essential and optional decision criteria.
The methods are described, and theoretical examples of their utilization are presented in the thesis. The assessment process involves the selection of different system alternatives (process alternatives for the treatment of different waste fractions) and comparison between the alternatives. The first of the two novelty values of the presented methods is the perspective selected for the formation of the SISMan model. Normally, waste management and energy systems are operated separately according to the targets and principles set for each system. In this thesis the waste management and energy supply systems are considered as one larger integrated system with the primary target of serving the customers, i.e. citizens, as efficiently as possible in the spirit of sustainable development, including the following requirements:
· reasonable overall costs, including waste management costs and energy costs;
· minimum environmental burdens caused by the integrated waste management and energy system, taking into account the requirement above; and
· social acceptance of the selected waste treatment and energy production methods.
The integrated waste management and energy system is described by forming a SISMan model including the three different flows of the system: energy, mass and financial flows. By defining these three types of flows for an integrated system, the factor results needed in the decision-making process for selecting waste treatment processes for different waste fractions can be calculated. The model and its results form a transparent description of the integrated system under discussion. The MEFLO decision matrix is formed from the results of the SISMan model, combined with additional data including, e.g., environmental restrictions and regional aspects. System alternatives which do not meet the requirements set by legislation can be deleted from the comparisons before any closer numerical considerations. The second novelty value of this thesis is the three-level ranking method for combining the factor results of the MEFLO decision matrix. As a result of the MEFLO decision matrix, a transparent ranking of the different system alternatives, including the selection of treatment processes for different waste fractions, is achieved. SISMan and MEFLO are methods meant to be utilized in municipal decision-making processes concerning waste management and energy supply as simple, transparent and easy-to-understand tools. The methods can be utilized in the assessment of existing systems, and particularly in the planning processes of future regional integrated systems. The principles of SISMan and MEFLO can also be utilized in other environments where synergies can be obtained by integrating two (or more) systems. The SISMan flow model and the MEFLO decision matrix can be formed with or without any applicable commercial or free-of-charge tool/software. SISMan and MEFLO are not bound to any libraries or databases containing process information, such as the emission data libraries utilized in life cycle assessments.
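The abstract gives no implementation details, but a decision matrix of the general kind described, with a legislative screening step followed by a weighted ranking, can be sketched as follows; the criteria, weights and scores are invented illustrations, and the three-level ranking is simplified here to a single weighted sum:

```python
# Hypothetical, simplified sketch in the spirit of the MEFLO matrix:
# screen out alternatives failing essential (legislative) criteria,
# then rank the rest by weighted factor scores. All values are invented.
alternatives = {
    "incineration": {"cost": 3, "emissions": 2, "acceptance": 4, "legal": True},
    "co-firing":    {"cost": 4, "emissions": 3, "acceptance": 3, "legal": True},
    "landfill":     {"cost": 5, "emissions": 1, "acceptance": 2, "legal": False},
}
weights = {"cost": 0.4, "emissions": 0.4, "acceptance": 0.2}

# Essential criteria first: infeasible alternatives are dropped outright.
feasible = {name: s for name, s in alternatives.items() if s["legal"]}

ranking = sorted(
    ((sum(weights[c] * scores[c] for c in weights), name)
     for name, scores in feasible.items()),
    reverse=True,
)
for score, name in ranking:
    print(f"{name}: {score:.2f}")   # -> co-firing 3.40, incineration 2.80
```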

Abstract:

Synchronous machines with an AC converter are used mainly in large drives, for example in ship propulsion drives and in rolling mill drives in the steel industry. These motors are used because of their high efficiency, high overload capacity and good performance in the field weakening area. Present-day drives for electrically excited synchronous motors are equipped with position sensors, and most such drives will be equipped with position sensors in the future as well. Drives of this kind with good dynamics are mainly used in the metal industry. Drives without a position sensor can be used e.g. in ship propulsion and in large pump and blower drives; nowadays, these drives, too, are equipped with a position sensor. The tendency is to avoid a position sensor if possible, since a sensor reduces the reliability of the drive and increases costs (the latter is not very significant for large drives). A new control technique for a synchronous motor drive is the combination of Direct Flux Linkage Control (DFLC), based on a voltage model, with a supervising method (e.g. a current model). This combination is called the Direct Torque Control method (DTC). In the case of a position sensorless drive, the DTC can be implemented by using other supervising methods that keep the stator flux linkage origin centered. In this thesis, a method for observing the drift of the real stator flux linkage in a DTC drive is introduced. It is also shown how this method can be used as a supervising method that keeps the stator flux linkage origin centered in the DTC. In the position sensorless case, a synchronous motor can be started up with DTC control when the method presented in this thesis for determining the initial rotor position is used. The load characteristics of such a drive are not very good at low rotational speeds. Furthermore, continuous operation at zero speed and at low rotational speeds is not possible, which is partly due to problems related to the flux linkage estimate. For operation in the low speed area, a stator current control method based on the DFLC modulator (DMCC) is presented. With the DMCC, it is possible to start up and operate a synchronous motor at zero speed and at low rotational speeds in general. The DMCC is necessary in situations where a high torque (e.g. the nominal torque) is required at the starting moment, or if the motor runs for several seconds at zero speed or in the low speed range (up to 2 Hz). The behaviour of the described methods is demonstrated with test results. The test results are presented for a direct flux linkage and torque controlled test drive system with a 14.5 kVA, four-pole salient pole synchronous motor with a damper winding and electric excitation. The static accuracy of the drive is verified by measuring the torque in static load operation, and the dynamics of the drive are proven in load transient tests. The performance of the drive concept presented in this work is sufficient e.g. for ship propulsion and for large pump drives. Furthermore, the developed methods are almost independent of the machine parameters.
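DTC of the kind described selects inverter voltage vectors from hysteresis comparisons of the torque and stator flux magnitude errors; the following minimal sketch shows the generic textbook selection logic (band widths, the switching table and all names are assumptions, not the thesis-specific controller):

```python
import math

def dtc_select_vector(torque_err: float, flux_err: float, flux_angle: float,
                      torque_band: float = 0.05) -> int:
    """Pick an inverter voltage vector (0 = zero vector, 1..6 = active)
    from hysteresis comparators on torque and stator flux magnitude
    errors. Generic two-level DTC switching table; illustrative only."""
    if abs(torque_err) <= torque_band:
        return 0                                  # zero vector: hold torque
    torque_up = torque_err > 0
    flux_up = flux_err > 0                        # flux below reference -> raise it
    # Sector 0..5 of the stator flux linkage vector, offset by 30 degrees.
    sector = int(math.floor((flux_angle + math.pi / 6) / (math.pi / 3))) % 6
    step = {(True, True):   1,    # raise flux, raise torque
            (True, False): -1,    # raise flux, lower torque
            (False, True):  2,    # lower flux, raise torque
            (False, False): -2}[(flux_up, torque_up)]
    return (sector + step) % 6 + 1
```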

Abstract:

Logistics management is increasingly being recognised by many companies to be of critical concern. The logistics function includes, directly or indirectly, many of the new areas for achieving or maintaining competitive advantage that companies have been forced to develop due to increasing competitive pressures. The key to achieving a competitive advantage is to manage the logistics function strategically, which involves determining the most cost-effective method of providing the necessary customer service levels from the many combinations of operating procedures in the areas of transportation, warehousing, order processing and information systems, production, and inventory management. In this thesis, a comprehensive distribution logistics strategic management process is formed by integrating the periodic strategic planning process with a continuous strategic issues management process. Strategic planning is used for defining the basic objectives for a company and assuring co-operation and synergy between the different functions of a company, while strategic issues management is used on a continuous basis in order to deal with environmental and internal turbulence. The strategic planning subprocess consists of the following main phases: (1) situational analyses, (2) defining the vision and strategic goals for the logistics function, (3) determining objectives and strategies, (4) drawing up tactical action plans, and (5) evaluating the implementation of the plans and making the needed adjustments. The aim of the strategic issues management subprocess is to continuously scan the environment and the organisation for early identification of the issues having a significant impact on the logistics function, using the following steps: (1) the identification of trends, (2) assessing the impact and urgency of the identified trends, (3) assigning priorities to the issues, and (4) planning responses to the issues. The Analytic Hierarchy Process (AHP) is a systematic procedure for structuring any problem. AHP is based on the following three principles: decomposition, comparative judgements, and synthesis of priorities. AHP starts by decomposing a complex, multicriteria problem into a hierarchy where each level consists of a few manageable elements, which are then decomposed into another set of elements. The second step is to use a measurement methodology to establish priorities among the elements within each level of the hierarchy. The third step in using AHP is to synthesise the priorities of the elements to establish the overall priorities for the decision alternatives. In this thesis, decision support systems are developed for different areas of distribution logistics strategic management by applying the Analytic Hierarchy Process. The areas covered are: (1) logistics strategic issues management, (2) planning of logistic structure, (3) warehouse site selection, (4) inventory forecasting, (5) defining logistic action and development plans, (6) choosing a distribution logistics strategy, (7) analysing and selecting transport service providers, (8) defining the logistic vision and strategic goals, (9) benchmarking logistic performance, and (10) logistic service management. The thesis demonstrates the potential of AHP as a systematic and analytic approach to distribution logistics strategic management.
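The AHP priority-synthesis step described above can be sketched numerically; the sketch below approximates the principal eigenvector of a reciprocal pairwise comparison matrix with the common row geometric-mean method (the criteria and judgement values are invented for illustration):

```python
import math

def ahp_priorities(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix using the row geometric mean, then normalize."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative judgements for three warehouse sites on one criterion:
# site A is 3x preferable to B and 5x preferable to C; B is 2x to C.
matrix = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
print([round(w, 3) for w in ahp_priorities(matrix)])
# -> roughly [0.648, 0.230, 0.122]
```

Repeating this for each level of the hierarchy and weighting the results downward yields the overall priorities of the decision alternatives.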

Abstract:

It is a well-known phenomenon that the constant amplitude fatigue limit of a large component is lower than the fatigue limit of a small specimen made of the same material. In notched components the opposite occurs: the fatigue limit defined as the maximum stress at the notch is higher than that achieved with smooth specimens. These two effects have been taken into account in most design handbooks with the help of experimental formulas or design curves. The basic idea of this study is that the size effect can mainly be explained by the statistical size effect. A component subjected to an alternating load can be assumed to form a sample of initiated cracks at the end of the crack initiation phase. The size of the sample depends on the size of the specimen in question. The main objective of this study is to develop a statistical model for the estimation of this kind of size effect. It was shown that the size of a sample of initiated cracks shall be based on the stressed surface area of the specimen. In the case of a varying stress distribution, an effective stress area must be calculated. It is based on the decreasing probability of equally sized initiated cracks at lower stress levels. If the distribution function of the parent population of cracks is known, the distribution of the maximum crack size in a sample can be defined. This makes it possible to calculate an estimate of the largest expected crack for any sample size. The estimate of the fatigue limit can then be calculated with the help of linear elastic fracture mechanics. In notched components another source of size effect has to be taken into account. If we consider two specimens of similar shape but different size, it can be seen that the stress gradient in the smaller specimen is steeper. If there is an initiated crack in both of them, the stress intensity factor at the crack in the larger specimen is higher. The second goal of this thesis is to create a calculation method for this factor, which is called the geometric size effect. The proposed method for the calculation of the geometric size effect is also based on the use of linear elastic fracture mechanics. It is possible to calculate an accurate value of the stress intensity factor in a non-linear stress field using weight functions. The calculated stress intensity factor values at the initiated crack can be compared to the corresponding stress intensity factor due to constant stress. The notch size effect is calculated as the ratio of these stress intensity factors. The presented methods were tested against experimental results taken from three German doctoral works. Two candidates for the parent population of initiated cracks were found: the Weibull distribution and the log-normal distribution. Both of them can be used successfully for the prediction of the statistical size effect for smooth specimens. In the case of notched components the geometric size effect due to the stress gradient shall be combined with the statistical size effect. The proposed method gives good results as long as the notch in question is blunt enough. For very sharp notches, with a stress concentration factor of about 5 or higher, the method does not give sufficient results. It was shown that the plastic portion of the strain becomes quite high at the root of such notches. The use of linear elastic fracture mechanics therefore becomes questionable.
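The statistical size effect rests on extreme value statistics: if initiated crack sizes follow a parent distribution F(x), the largest of n cracks follows F(x)^n. A minimal sketch with an assumed Weibull parent (all parameter values are illustrative, not fitted to the cited experiments):

```python
import math

def largest_crack_quantile(n: int, p: float,
                           shape: float = 2.0, scale: float = 0.1) -> float:
    """Crack size a such that the largest of n Weibull(shape, scale)
    distributed cracks stays below a with probability p, i.e. the
    p-quantile of F(a)**n. Parameters are illustrative, in mm."""
    # F(a) = 1 - exp(-(a/scale)**shape); solve F(a) = p**(1/n) for a.
    f_target = p ** (1.0 / n)
    return scale * (-math.log(1.0 - f_target)) ** (1.0 / shape)

# Doubling the stressed surface area (n -> 2n initiated cracks) raises
# the median largest crack, which lowers the predicted fatigue limit.
for n in (100, 200):
    print(n, round(largest_crack_quantile(n, 0.5), 3))  # 0.223 -> 0.238 mm
```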

Abstract:

Synchronous motors are used mainly in large drives, for example in ship propulsion systems and in steel factories' rolling mills, because of their high efficiency, high overload capacity and good performance in the field weakening range. This, however, requires an extremely good torque control system. A fast torque response and good torque accuracy are basic requirements for such a drive. For large power, high dynamic performance drives, the commonly known principle of field oriented vector control has hitherto been used exclusively, but nowadays it is not the only way to implement such a drive. A new control method, Direct Torque Control (DTC), has also emerged. The performance of a high quality torque control such as DTC in dynamically demanding industrial applications is mainly based on accurate estimates of the space vectors of the various flux linkages. Nowadays, industrial motor control systems are real-time applications with restricted calculation capacity. At the same time, the control system requires a simple, fast calculable and reasonably accurate motor model. In this work a method to handle these problems in a Direct Torque Controlled (DTC) salient pole synchronous motor drive is proposed. A motor model which combines the induction law based "voltage model" and the motor inductance parameter based "current model" is presented. The voltage model operates as the main model and is calculated at a very fast sampling rate (for example 40 kHz). The stator flux linkage calculated via integration from the stator voltages is corrected using the stator flux linkage computed from the current model. The current model acts as a supervisor that merely prevents the motor stator flux linkage from drifting erroneously during longer time intervals. At very low speeds the role of the current model is emphasised, but the voltage model nevertheless always remains the main model. At higher speeds the function of the current model correction is to act as a stabiliser of the control system. The current model contains a set of inductance parameters which must be known. The validity of the current model in steady state is not self-evident; it depends on the accuracy of the saturated values of the inductances. Parameter measurement of the motor model, where the supply inverter is used as a measurement signal generator, is presented. This so-called identification run can be performed prior to delivery or during drive commissioning. A derivation method for the inductance models used for the representation of the saturation effects is proposed. The performance of the electrically excited synchronous motor supplied with the DTC inverter is proven with experimental results. It is shown that good static accuracy of the DTC's torque controller can be obtained for an electrically excited synchronous motor. The dynamic response is fast, and a new operating point is reached without oscillation. The operation is stable throughout the speed range. The modelling of the magnetising inductance saturation is essential, and cross saturation has to be considered as well; the effect of cross saturation is very significant. A DTC inverter can be used as measuring equipment, and the parameters needed for the motor model can be defined by the inverter itself. The main advantage is that the parameters are then measured in similar magnetic operating conditions, and no disagreement between the parameters will exist. The inductance models generated are adequate to meet the requirements of dynamically demanding drives.
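The combined model described above can be illustrated with a minimal discrete-time sketch: the voltage model integrates u − Ri at the fast sample rate, and the current-model estimate pulls the integrator gently back so it cannot drift. The first-order correction law, the gain and all values are illustrative assumptions, not the thesis implementation:

```python
def flux_step(psi, u, i, r_s, dt, psi_current_model=None, k_corr=0.01):
    """One voltage-model step for the stator flux linkage space vector
    (complex alpha-beta components): psi <- psi + (u - r_s*i)*dt,
    optionally nudged toward the current-model estimate with a small
    gain so slow drift is removed without masking fast dynamics.
    Correction law and gains are illustrative assumptions."""
    psi = psi + (u - r_s * i) * dt
    if psi_current_model is not None:
        psi += k_corr * (psi_current_model - psi)
    return psi

# Illustrative use at the 40 kHz sampling rate quoted for the voltage model:
dt = 1.0 / 40e3
psi = 0j
psi = flux_step(psi, u=230.0 + 0j, i=10.0 + 0j, r_s=0.05, dt=dt)
```

A small correction gain reflects the supervisory role described above: the current model only steers the long-term mean of the estimate, while the fast dynamics come entirely from the voltage model.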

Abstract:

Multispectral images are becoming more common in the fields of remote sensing, computer vision, and industrial applications. Due to the high accuracy of multispectral information, it can be used as an important quality factor in the inspection of industrial products. Recently, the development of multispectral imaging systems and the computational analysis of multispectral images have been the focus of growing interest. In this thesis, three areas of multispectral image analysis are considered. First, a method for analyzing multispectral textured images was developed. The method is based on a spectral co-occurrence matrix, which contains information on the joint distribution of spectral classes in the spectral domain. Next, a procedure for estimating the illumination spectrum of color images was developed. The proposed method can be used, for example, in color constancy, color correction, and content-based search from color image databases. Finally, color filters for optical pattern recognition were designed, and a prototype of a spectral vision system was constructed. The spectral vision system can be used to acquire a low-dimensional component image set for two-dimensional spectral image reconstruction. The data obtained by the spectral vision system are compact and therefore convenient for storing and transmitting a spectral image.
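The abstract does not reproduce the thesis' co-occurrence definition, but a gray-level-style co-occurrence matrix over spectral class labels can be sketched as follows (the class map, displacement and normalization are illustrative assumptions):

```python
import numpy as np

def spectral_cooccurrence(classes, dx=1, dy=0, n_classes=None):
    """Co-occurrence matrix C[i, j]: relative frequency with which a
    pixel of spectral class i has a pixel of class j at displacement
    (dy, dx). `classes` is a 2-D integer map of spectral class labels
    (e.g. obtained by clustering the spectra); the displacement and
    normalization are assumptions, not the thesis definition."""
    if n_classes is None:
        n_classes = int(classes.max()) + 1
    h, w = classes.shape
    src = classes[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = classes[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    c = np.zeros((n_classes, n_classes))
    np.add.at(c, (src.ravel(), dst.ravel()), 1.0)
    return c / c.sum()

# Toy 2x3 class map; with dx=1 each pixel is paired with its right neighbour.
print(spectral_cooccurrence(np.array([[0, 0, 1], [1, 2, 2]])))
```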

Abstract:

The objective of this study was to define the critical areas of the strategy process and the tasks of group management in the strategy process, and, on the basis of these, to develop a normative strategy process model for a group-type company for managing the critical areas. A further objective was to increase the understanding of strategic thinking and strategy processes by explaining their historical development as well as their concepts and the content of those concepts. The problem was approached both through the doctrine and by interpreting the problems appearing in the strategy process and analysing their cause-and-effect relationships. The present theoretical-practical study was carried out partly with an action-analytical research approach, supported by a case study and comparative analysis, and partly with a decision-making-methodological research approach. The theoretical part of the work was carried out as a literature study. It established the conceptual basis of the strategy process and group management and the framework of the study. In terms of the results, group management was generalised to cover also other decentralised business organisations than groups formed on a purely legal basis. The study first examined the schools of strategic thought with their different views, as well as the development trends of strategic thinking from the 1950s to the present, and likewise how strategy processes have developed over the same period. The focus of attention was found to have shifted to the human side of strategic management, with strategic leadership being emphasised and strategic thinking broadening. The empirical part was carried out as a case study. In its course, the central problem areas of the strategy process were mapped and the causes behind them analysed, in order to define the directions and focus areas for developing the strategy process. On the basis of the theoretical and empirical parts, the critical areas of the strategy process were defined at a general level. A critical area means an issue or set of issues that must be in order for the strategy processes to work. These areas relate to the strategy process itself either directly or indirectly through other management work. In connection with the definition of the critical areas, the development directions of the strategy process were set, relying on the doctrine, from the perspective of group management. Based on these development directions and further on the doctrine, the substance tasks of the group management's strategy process, the tasks supporting the process, and the tasks of implementing and developing the process were defined. The tasks of the group management's strategy process do not form a sequential and hierarchical system but are a set of activities that are carried out as needed. The group management's strategy process was defined and described in the study as a management working process for producing and implementing executable strategies that increase the value of the company (group) from the owner's perspective while also taking into account the demands, goals and constraints of other key stakeholders. The group management's strategy process is seen here as a continuous group-level examination of ends and means, in which group management registers the signals coming from the group's external and internal environment and maintains a view of the group's strategic position. Once the accumulated mass of information and the emerging view reach a critical limit, they force group management to evaluate earlier decisions in a new light. This validation is based on four continuously posed questions: on the basis of the knowledge accumulated from environmental, premise and implementation monitoring, are there discernible effects on immediate actions, effects on action plans or on critical objects of monitoring, effects on choices of direction, or effects on basic beliefs? The group management's strategy process proceeds as a continuous process in the stream of decisions and time.