52 results for Gear efficiency and gear selectivity
Abstract:
This research report presents an application of systems theory to evaluating intellectual capital (IC) as an organization's ability for self-renewal. As renewal ability is a dynamic capability of an organization as a whole, rather than a static asset or an atomistic competence of separate individuals within the organization, it needs to be understood systemically. Consequently, renewal ability has to be measured with systemic methods that are based on a thorough conceptual analysis of the systemic characteristics of organizations. The aim of this report is to demonstrate the theory and analysis methodology for grasping companies' systemic efficiency and renewal ability. The volume is divided into three parts. The first deals with the theory of organizations as self-renewing systems. In the second part, the principles of quantitative analysis of organizations are laid down. Finally, the detailed mathematics of the renewal indices are presented. We also assert that the indices produced by the analysis are an effective tool for the management and valuation of knowledge-intensive companies.
Abstract:
VALOSADE is a research project of Professor Anita Lukka's VALORE research team at Lappeenranta University of Technology. VALOSADE includes the ELO technology program of Tekes. SMILE is one of the four subprojects of VALOSADE. The SMILE study focuses on the case of a company network composed of small and micro-sized mechanical maintenance service providers and forest industry companies as large-scale customers. The basic theme of the SMILE study is communication and ebusiness in supply and demand networks. The aim of the study is to develop an ebusiness strategy, an ebusiness model and e-processes among the SME local service providers and, on the other hand, between the local service provider network and the forest industry customers in the maintenance and operations service business. A literature review, interviews and benchmarking are used as research methods in this qualitative case study. The first SMILE report, 'Ebusiness between Global Company and Its Local SME Supplier Network', created the background for the SMILE study by examining general trends of ebusiness in supply chains and networks of different industries. This second phase of the study concentrates on the case network background, such as business relationships, information systems and business objectives; the core processes in the maintenance and operations service network; development needs in communication among the network participants; and ICT solutions that respond to needs in a changing environment. In the theory part of the report, different ebusiness models and frameworks are introduced. These models and frameworks are compared with the empirical case data, and from that analysis the recommendations for the development of the network information system are derived. In a process industry such as the forest industry, it is crucial to achieve a high level of operational efficiency and reliability, which sets great requirements for maintenance and operations.
Therefore, partnerships or strategic alliances are needed between the network participants. In partnerships and alliances, deep communication is important, and therefore the information systems in the network are also critical. Communication, coordination and collaboration will increase in the case network in the future, because network resources must be optimised to improve the competitive capability of the forest industry customers and the efficiency of their service providers. At present, ebusiness systems are not common in this maintenance network. A network information system between the forest industry customers and their local service providers is actually the only genuine network information system in this total network. However, the utilisation of that system has been quite insignificant. The current system does not add enough value either to the customers or to the local service providers. At present, the network information system is an infomediary that shares static information with the network partners. The network information system should be a transaction intermediary, which integrates the internal processes of the network companies; a network information system, which provides common standardised processes for the local service providers; and an infomediary, which shares static and dynamic information at the right time, to the right partner, at the right cost, in the right format and of the right quality. This study provides recommendations on how to develop this system in the future to add value to the network companies. Ebusiness scenarios, vision, objectives, strategies, application architecture, ebusiness model, core processes and development strategy must be considered when the network information system is developed in the next step. The core processes in the case network are demand/capacity management, customer/supplier relationship management, service delivery management, knowledge management and cash flow management.
Most of the benefits from ebusiness solutions come from making operational-level processes electronic, such as service delivery management and cash flow management.
Abstract:
The aim of this thesis was to produce information for estimating the flow balance of wood resin in mechanical pulping and to demonstrate the possibilities for improving the efficiency of deresination in practice. It was observed that chemical changes in wood resin take place only during peroxide bleaching, that a significant amount of water-dispersed wood resin is retained in the pulp mat during dewatering, and that the amount of wood resin in the solid phase of the process filtrates is very small. On the basis of this information, three parameters related to the behaviour of wood resin determine the flow balance in the process: (1) the liberation of wood resin into the pulp water phase, (2) the retention of water-dispersed wood resin in dewatering, and (3) the proportion of wood resin degraded in peroxide bleaching. The effect of different factors on these parameters was evaluated with the help of laboratory studies and a literature survey. Information on the values of these parameters in existing processes was also obtained in mill measurements. With the help of this information, it was possible to evaluate the deresination efficiency, and the effect of different factors on it, in a pulping plant that produced low-freeness mechanical pulp. This evaluation showed that the wood resin content of mechanical pulp can be decreased significantly if the process includes a peroxide bleaching stage and a subsequent washing stage. With an optimal process configuration, a deresination efficiency as high as 85 percent seems to be possible at a water usage level of 8 m3/o.d.t.
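The three-parameter flow balance can be illustrated with a toy single-pass resin balance. The structure and the example values below are assumptions for illustration only, not the thesis's actual balance model:

```python
def deresination_efficiency(liberated, retained, degraded):
    """Toy single-pass wood-resin balance (illustrative only).

    liberated: fraction of wood resin released into the pulp water phase
    retained:  fraction of the water-dispersed resin kept in the pulp mat
               during dewatering/washing
    degraded:  fraction of the resin degraded in peroxide bleaching
    Returns the overall fraction of resin removed from the pulp.
    """
    # Resin degraded in bleaching is removed regardless of phase.
    after_bleach = 1.0 - degraded
    # Of the surviving resin, only the liberated (dispersed) part can be
    # washed out, and only the fraction not retained in the mat actually
    # leaves with the filtrate.
    washed_out = after_bleach * liberated * (1.0 - retained)
    remaining = after_bleach - washed_out
    return 1.0 - remaining
```

With assumed values of 90% liberation, 15% retention in dewatering and 30% degradation in bleaching, the toy balance gives an efficiency of about 0.84, in the same range as the figure quoted in the abstract.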
Abstract:
Small centrifugal compressors are used more and more widely in many industrial systems because of their higher efficiency and better off-design performance compared to piston and scroll compressors, as well as their higher work coefficient per stage compared to axial compressors. Higher efficiency is always the aim of the compressor designer. In the present work, the influence of four parts of a small centrifugal compressor that compresses a heavy-molecular-weight real gas has been investigated in order to achieve higher efficiency. Two parts concern the impeller: the tip clearance and the circumferential position of the splitter blade. The other two concern the diffuser: the pinch shape and the vane shape. Computational fluid dynamics is applied in this study. The Reynolds-averaged Navier-Stokes flow solver Finflo is used with a quasi-steady approach, and Chien's k-epsilon turbulence model is used to model the turbulence. A new practical real gas model is presented in this study; it is easy to generate, its accuracy is controllable, and it is fairly fast. The numerical results and the measurements show good agreement. The influence of tip clearance on the performance of a small compressor is obvious. The pressure ratio and efficiency decrease as the tip clearance is increased, while the total enthalpy rise remains almost constant. The decrease in pressure ratio and efficiency is larger at higher mass flow rates and smaller at lower mass flow rates. The flow angles at the inlet and outlet of the impeller increase as the tip clearance is increased. The detailed flow field shows that leakage flow is the main reason for the performance drop. The secondary flow region becomes larger as the tip clearance is increased, the area of the main flow is compressed, and the flow uniformity is therefore decreased. A detailed study shows that the leakage flow rate is higher near the exit of the impeller than near its inlet.
Based on this phenomenon, a new partially shrouded impeller is used, shrouded near its exit. The results show that the flow field near the exit of the impeller is greatly changed by the partial shroud, and better performance is achieved than with the unshrouded impeller. The loading distribution on the impeller blade and the flow fields in the impeller are changed by moving the splitter of the impeller in the circumferential direction. Moving the splitter slightly towards the suction side of the long blade can improve the performance of the compressor. The total enthalpy rise is reduced if only the leading edge of the splitter is moved towards the suction side of the long blade, and the performance of the compressor is decreased if the blade is bent away from the radial direction at the leading edge of the splitter. The total pressure rise and the enthalpy rise of the compressor are increased if a pinch is used at the diffuser inlet. Among the five pinch shape configurations studied, the efficiency of a straight-line pinch is the highest at the design and lower mass flow rates, while at higher mass flow rates the efficiency of a concave pinch is the highest. A sharp corner in the pinch is the main reason for a decrease in efficiency and should be avoided. The spanwise variation of the flow angles entering the diffuser is decreased if a pinch is applied. A three-dimensional low-solidity twisted vaned diffuser is designed to match the flow angles entering the diffuser. The numerical results show that the pressure recovery in the twisted diffuser is higher than in a conventional low-solidity vaned diffuser, which also leads to a higher efficiency of the twisted diffuser. Investigation of the detailed flow fields shows that separation at lower mass flow rates occurs later in the twisted diffuser than in the conventional low-solidity vaned diffuser, which suggests a wider flow range for the twisted diffuser.
Abstract:
This thesis presents an alternative approach to the analytical design of surface-mounted axial-flux permanent-magnet machines. Emphasis has been placed on the design of axial-flux machines with a one-rotor-two-stators configuration. The design model developed in this study incorporates both the electromagnetic design and the thermal design of the machine, and takes into consideration the complexity of the permanent-magnet shapes, which is a typical requirement in the design of high-performance permanent-magnet motors. A prototype machine with a rated output power of 5 kW at a rotation speed of 300 min-1 has been designed and constructed for the purpose of verifying the results obtained from the analytical design model. A comparative study of low-speed axial-flux and low-speed radial-flux permanent-magnet machines is presented. The comparative study concentrates on 55 kW machines with rotation speeds of 150 min-1, 300 min-1 and 600 min-1 and is based on calculated designs. A novel comparison method is introduced. The method takes into account the mechanical constraints of the machine and enables comparison of the designed machines with respect to the volume, efficiency and cost of each machine. It is shown that an axial-flux permanent-magnet machine with a one-rotor-two-stators configuration generally has a lower efficiency than a radial-flux permanent-magnet machine if the same electric loading, air-gap flux density and current density are applied to all designs. On the other hand, axial-flux machines are usually smaller in volume, especially when compared to radial-flux machines whose length ratio (axial length of the stator stack vs. air-gap diameter) is below 0.5. The comparison results also show that radial-flux machines with a low number of pole pairs, p < 4, outperform the corresponding axial-flux machines.
Abstract:
Turbomachines, and steam turbines in particular, are usually designed and optimised to operate at a specific design point where losses are minimised and efficiency is maximised. In some cases, however, the turbine must be operated outside this design point. The mass flow through the turbine then changes, which usually reduces efficiency. The performance of turbomachines can be improved by using three-dimensionally shaped blades. In this work, two moderately shaped nozzles (compound lean and controlled flow) are compared computationally outside their design point. A third nozzle without three-dimensional shaping is included as a reference. The performance of the nozzles is studied by means of computational fluid dynamics under off-design conditions. Changes in the flow are examined in terms of total pressure loss, isentropic efficiency and flow-field uniformity. The flow fields are compared through the distributions of exit flow angle, mass flow and secondary-flow vectors. The differences in nozzle performance are emphasised at overload. At the largest increase in mass flow, the efficiency of the compound lean nozzle decreases less than that of the controlled flow nozzle. At part load, when the mass flow is reduced, the differences in nozzle performance diminish and the exit flows of the studied nozzles are similar.
Abstract:
After the restructuring process of the power supply industry, which for instance in Finland took place in the mid-1990s, free competition was introduced for the production and sale of electricity. Nevertheless, natural monopolies are found to be the most efficient form of production in the transmission and distribution of electricity, and therefore such companies remained franchised monopolies. To prevent the misuse of the monopoly position and to guarantee the rights of the customers, regulation of these monopoly companies is required. One of the main objectives of the restructuring process has been to increase the cost efficiency of the industry. Simultaneously, demands for the service quality are increasing. Therefore, many regulatory frameworks are being, or have been, reshaped so that companies are provided with stronger incentives for efficiency and quality improvements. Performance benchmarking has in many cases a central role in the practical implementation of such incentive schemes. Economic regulation with performance benchmarking attached to it provides companies with directing signals that tend to affect their investment and maintenance strategies. Since the asset lifetimes in the electricity distribution are typically many decades, investment decisions have far-reaching technical and economic effects. This doctoral thesis addresses the directing signals of incentive regulation and performance benchmarking in the field of electricity distribution. The theory of efficiency measurement and the most common regulation models are presented. 
The chief contributions of this work are (1) a new kind of analysis of the regulatory framework, in which the actual directing signals of regulation and benchmarking for electricity distribution companies are evaluated, (2) a methodology and a software tool for analysing the directing signals of regulation and benchmarking in the electricity distribution sector, and (3) an analysis of real-life regulatory frameworks using the developed methodology, together with further development of the regulation model from the viewpoint of the directing signals. The results of this study have played a key role in the development of the Finnish regulatory model.
Abstract:
In many industrial applications, accurate and fast surface reconstruction is essential for quality control. Variation in surface finishing parameters, such as surface roughness, can reflect defects in a manufacturing process, non-optimal product operational efficiency, and reduced life expectancy of the product. This thesis considers the reconstruction and analysis of high-frequency variation, that is, roughness, on planar surfaces. Standard roughness measures in industry are calculated from the surface topography. A fast and non-contact way to obtain the surface topography is to apply photometric stereo to estimate the surface gradients and to reconstruct the surface by integrating the gradient fields. Alternatively, visual methods, such as statistical measures, fractal dimension and distance transforms, can be used to characterize surface roughness directly from gray-scale images. In this thesis, the accuracy of distance transforms, statistical measures, and fractal dimension is evaluated in the estimation of surface roughness from gray-scale images and topographies. The results are contrasted with standard industry roughness measures. In distance transforms, the key idea is that distance values calculated along a highly varying surface are greater than distances calculated along a smoother surface. Statistical measures and fractal dimension are common surface roughness measures. In the experiments, the skewness and variance of the brightness distribution, the fractal dimension, and distance transforms exhibited strong linear correlations with standard industry roughness measures. One of the key strengths of the photometric stereo method is the acquisition of higher-frequency variation of surfaces. In this thesis, the reconstruction of planar, high-frequency varying surfaces is studied in the presence of imaging noise and blur.
Two Wiener filter-based methods are proposed, one of which is optimal in the sense of the surface power spectral density, given the spectral properties of the imaging noise and blur. Experiments show that the proposed methods preserve the inherent high-frequency variation in the reconstructed surfaces, whereas traditional reconstruction methods typically handle incorrect measurements by smoothing, which dampens the high-frequency variation.
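The idea behind such filter-based reconstruction can be sketched with the generic per-frequency Wiener deconvolution gain. This is only the textbook formula, not the thesis's exact surface-PSD-optimal estimator over gradient fields; the function name and the assumption that the blur transfer function and the power spectral densities are given per frequency bin are illustrative:

```python
def wiener_gain(H, S_surface, S_noise):
    """Generic per-frequency Wiener deconvolution gain:

        G(f) = conj(H(f)) * S_surface(f) / (|H(f)|^2 * S_surface(f) + S_noise(f))

    H:         blur transfer function samples (complex, one per bin)
    S_surface: surface power spectral density per bin
    S_noise:   imaging-noise power spectral density per bin
    The reconstructed spectrum is G(f) times the measured spectrum.
    """
    return [h.conjugate() * s / (abs(h) ** 2 * s + n)
            for h, s, n in zip(H, S_surface, S_noise)]
```

With no noise the gain inverts the blur exactly; as the noise power grows relative to the surface power the gain falls toward zero, which is the smoothing-versus-detail trade-off the abstract refers to.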
Abstract:
The purpose of this work was to study the characteristics of the most commonly used filter aid materials and their influence on the design of a proportioning, mixing, and feeding system for a polishing filter family. Based on a literature survey and hands-on experience, a system was designed with defined equipment and capital and operating costs. The system was designed to serve precoating and bodyfeeding applications and is easily extended to multiple filter processes. A test procedure was also carried out in which the influence of flux and filter cloth on the accumulated cake was studied. Filter aid is needed in challenging conditions to improve filtration efficiency and cleaning, and thus to extend the operating life of the filter media. The filter aid preparation and feeding system was designed for the use of two different filter aids: precoat and bodyfeed. Precoating is used before the filtration step starts. If the solids in the filterable solution tend to clog the filter bag easily, a precoat is applied to the filter bag to obtain better filtration efficiency and quality. Diatomite or perlite is usually used as the precoating substance. The intention is to create a uniform cake over the whole surface of the filter cloth, with a predetermined thickness of 2–5 mm. This ensures that clogging of the filter cloth is reduced and the filtration efficiency is increased. Bodyfeed is used if the solids in the filterable solution tend to form a sticky, impermeable filter cake. The cake properties are enhanced by maintaining the permeability of the accumulating cake, using the filter aid substance as bodyfeed during the filtration process.
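The precoat dosage needed to reach a target cake thickness in the 2–5 mm range can be estimated with a simple geometric sizing. The sketch and the wet bulk density in the example are generic assumptions (a typical order of magnitude for diatomite cakes), not figures from the study:

```python
def precoat_mass_kg(filter_area_m2, cake_thickness_mm, bulk_density_kg_m3):
    """Precoat dosage required to build a uniform cake of the given
    thickness over the whole cloth area (illustrative sizing only):
    mass = area * thickness * wet bulk density of the filter aid cake."""
    thickness_m = cake_thickness_mm / 1000.0
    return filter_area_m2 * thickness_m * bulk_density_kg_m3
```

For example, a 3 mm cake on 10 m2 of cloth at an assumed wet cake density of 350 kg/m3 would require about 10.5 kg of precoat.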
Abstract:
Nanofiltration performance was studied with effluents from the pulp and paper industry and with model substances. The effect of filtration conditions and membrane properties on nanofiltration flux, retention, and fouling was investigated. Generally, the aim was to determine the parameters that influence nanofiltration efficiency and study how to carry out nanofiltration without fouling by controlling these parameters. The retentions of the nanofiltration membranes studied were considerably higher than those of tight ultrafiltration membranes, and the permeate fluxes obtained were approximately the same as those of tight ultrafiltration membranes. Generally, about 80% retentions of total carbon and conductivity were obtained during the nanofiltration experiments. Depending on the membrane and the filtration conditions, the retentions of monovalent ions (chloride) were between 80 and 95% in the nanofiltrations. An increase in pH improved retentions considerably and also the flux to some degree. An increase in pressure improved retention, whereas an increase in temperature decreased retention if the membrane retained the solute by the solution diffusion mechanism. In this study, more open membranes fouled more than tighter membranes due to higher concentration polarization and plugging of the membrane material. More irreversible fouling was measured for hydrophobic membranes. Electrostatic repulsion between the membrane and the components in the solution reduced fouling but did not completely prevent it with the hydrophobic membranes. Nanofiltration could be carried out without fouling, at least with the laboratory scale apparatus used here when the flux was below the critical flux. Model substances had a strong form of the critical flux, but the effluents had only a weak form of the critical flux. With the effluents, some fouling always occurred immediately when the filtration was started. 
However, if the flux was below the critical flux, further fouling was not observed. The flow velocity and pH were probably the most important parameters, along with the membrane properties, that influenced the critical flux. Precleaning of the membranes had only a small effect on the critical flux and retentions, but it improved the permeability of the membranes significantly.
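The strong form of the critical flux observed for the model substances can be idealised as a flux that follows the clean-membrane line up to the critical value and is capped by fouling above it. This is a textbook-style illustration with hypothetical parameter names and values, not the thesis's measured behaviour:

```python
def permeate_flux(tmp_bar, permeability, critical_flux):
    """Idealised strong-form critical-flux behaviour: below the critical
    flux, permeate flux grows linearly with transmembrane pressure and no
    fouling occurs; raising the pressure further only triggers fouling,
    so the attainable flux is capped at the critical value.

    tmp_bar:       transmembrane pressure (bar)
    permeability:  clean-membrane permeability (flux units per bar)
    critical_flux: critical flux in the same flux units
    """
    return min(permeability * tmp_bar, critical_flux)
```

The weak form seen with the effluents would instead show an immediate, small fouling step at startup followed by stable operation below the critical flux.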
Abstract:
As a result of recent regulatory amendments and other development trends in the electricity distribution business, the sector is currently witnessing a radical restructuring that will eventually affect the business logic of the sector. This report presents upcoming changes in the electricity distribution industry and concentrates on the factors that are expected to be the most fundamental ones. Electricity network companies nowadays struggle with legislative and regulatory requirements that focus on both the operational efficiency and the reliability of electricity distribution networks. The forces that have an impact on the distribution network companies can be put into three main categories that define the transformation at a general level: (1) a requirement for a more functional marketplace for energy, (2) environmental aspects (combating climate change etc.), and (3) a strongly emphasised requirement for the security of energy supply. The first point arises from the legislators' attempt to increase competition in electricity retail markets, the second concerns both environmental protection and human safety issues, and the third indicates societies' reduced willingness to accept interruptions in electricity supply. In the future, regulation of the electricity distribution business may lower the threshold for building more weather-resistant networks, which in turn means increased underground cabling. This development pattern is reinforced by tightening safety and environmental regulations that ultimately make overhead lines expensive to build and maintain. The changes will require new approaches, particularly in network planning, construction, and maintenance. A concept for planning, constructing, and maintaining cable networks is necessary because the interdependencies between network operations are strong; the nature of each operation requires linkage to the others.
Abstract:
Energy efficiency and energy saving are the central questions when considering how to reduce carbon dioxide emissions or cut costs. The objective of this thesis is to evaluate policy instruments concerning the end-use energy efficiency of heavy industry in the European Union. These policy instruments may be divided in various ways, but in this thesis the division is into administrative, financial, informative and voluntary instruments. The administrative instruments introduced in this thesis are the Directive on Integrated Pollution Prevention and Control, the Directive on Energy End-use Efficiency and Energy Services, and the Climate and Energy Package. The financial means include energy and emission taxation, the EU Emission Trading Scheme and diverse support systems. The informative instruments consist of the horizontal BAT Reference Document for Energy Efficiency, as well as substantial EU documents including the Green Paper on Energy Efficiency, the Action Plan for Energy Efficiency and An Energy Policy for Europe. Finally, the voluntary instruments include environmental management systems such as ISO 14001 and EMAS, energy auditing and benchmarking. The effectiveness of the different policy instruments varies considerably. Informative instruments lack commitment from industry and are thus almost ineffective, in contrast to the EU Emission Trading Scheme, which is said to be the solution to climate problems. The effectiveness of administrative means lies between these two, and the voluntary instruments are still too recent to be examined fruitfully. However, each instrument has its potential and challenges. Cases from the corporate world strengthen the results of the theoretical part. The cases were written mainly on the basis of interviews. The interviewees praised the energy efficiency contract of Finnish industry, but the EU ETS takes the leading role among policy instruments. However, for industry the reductions do not come easily.
Abstract:
The dissertation is based on four articles dealing with the purification of recalcitrant lignin-containing waters. Lignin, a complicated substance recalcitrant to most treatment technologies, seriously hampers waste management in the pulp and paper industry. Therefore, lignin is studied here using wet oxidation (WO) as a process method for its degradation. Special attention is paid to the improvement in biodegradability and the reduction of lignin content, since these are of special importance for any subsequent biological treatment. In most cases wet oxidation is used not as a complete mineralization method but as a pre-treatment in order to eliminate toxic components and to reduce the high level of organics produced. The combination of wet oxidation with a biological treatment can be a good option because of its effectiveness and its relatively low technology cost. The literature part gives an overview of Advanced Oxidation Processes (AOPs). A hot oxidation process, wet oxidation (WO), is investigated in detail, as it is the AOP used in the research. The background and main principles of wet oxidation, its industrial applications, the combination of wet oxidation with other water treatment technologies, the principal reactions in WO, and key aspects of modelling and reaction kinetics are presented. Wood composition and lignin characterisation (chemical composition, structure and origin), lignin-containing waters, lignin degradation and reuse possibilities, and purification practices for lignin-containing waters are also described. The aim of the research was to investigate the effect of the operating conditions of WO, such as temperature, partial pressure of oxygen, pH and initial concentration of the wastewater, on the efficiency, and to enhance the process and estimate optimal conditions for the WO of recalcitrant lignin waters. Two different waters were studied (a lignin water model solution and debarking water from the paper industry) to give as appropriate conditions as possible.
Because of the great importance of reusing and minimizing industrial residues, further research was carried out using residual ash from an Estonian power plant as a catalyst in the wet oxidation of lignin-containing water. Developing a kinetic model that includes parameters such as TOC in the prediction gives the opportunity to estimate the amount of emerging inorganic substances (the degradation rate of the waste) and not only the decrease of COD and BOD. The target compound, lignin, is included in the model through its COD value (COD_lignin). Such a kinetic model can be valuable in developing WO treatment processes for lignin-containing waters, or for other wastewaters containing one or more target compounds. In the first article, wet oxidation of 'pure' lignin water was investigated as a model case with the aim of degrading lignin and enhancing water biodegradability. The experiments were performed at various temperatures (110-190°C), partial oxygen pressures (0.5-1.5 MPa) and pH values (5, 9 and 12). The experiments showed that increasing the temperature notably improved the process efficiency: 75% lignin reduction was detected at the lowest temperature tested, and lignin removal improved to 100% at 190°C. The effect of temperature on the COD removal rate was smaller, but clearly detectable; 53% of the organics were oxidized at 190°C. The effect of pH was seen mostly in lignin removal: increasing the pH enhanced the lignin removal efficiency from 60% to nearly 100%. A good biodegradability ratio (over 0.5) was generally achieved. The aim of the second article was to develop a mathematical model for 'pure' lignin wet oxidation using lumped characteristics of the water (COD, BOD, TOC) and the lignin concentration. The model agreed well with the experimental data (R2 = 0.93 at pH 5 and 12) and the concentration changes during wet oxidation followed the experimental results adequately. The model also showed correctly the trend of biodegradability (BOD/COD) changes.
In the third article, the purpose of the research was to estimate optimal conditions for the wet oxidation (WO) of debarking water from the paper industry. The WO experiments were performed at various temperatures, partial oxygen pressures and pH values. The experiments showed that lignin degradation and organics removal are affected remarkably by temperature and pH. Lignin reductions of 78-97% were detected under different WO conditions. An initial pH of 12 caused faster removal of the tannin/lignin content, but an initial pH of 5 was more effective for the removal of total organics, represented by COD and TOC. Most of the decrease in the concentrations of organic substances occurred in the first 60 minutes. The aim of the fourth article was to compare the behaviour of two reaction kinetic models, based on experiments on the wet oxidation of industrial debarking water under different conditions. The simpler model took into account only the changes in COD, BOD and TOC; the advanced model was similar to the model used in the second article. Comparing the results of the models, the second model was found to be more suitable for describing the kinetics of the wet oxidation of debarking water. The significance of the reactions involved was compared on the basis of the model: for instance, lignin degraded first to other chemically oxidizable compounds rather than directly to biodegradable products. Catalytic wet oxidation (CWO) of lignin-containing waters is briefly presented at the end of the dissertation. Two completely different catalysts were used: a commercial Pt catalyst and waste power plant ash. CWO showed good performance: using 1 g/L of residual ash gave a lignin removal of 86% and a COD removal of 39% at 150°C (a lower temperature and pressure than in WO). It was noted that the ash catalyst caused a remarkable lignin removal rate already during preheating: at 'zero' time, 58% of the lignin was degraded.
In general, wet oxidation is not recommended as a complete mineralization method, but as a pretreatment phase to eliminate toxic or poorly biodegradable components and to reduce the high level of organics. Biological treatment is an appropriate post-treatment method, since easily biodegradable organic matter remains after the WO process. The combination of wet oxidation with subsequent biological treatment can therefore be an effective option for the treatment of lignin-containing waters.
Abstract:
In recent years, the vulnerability of the network to natural hazards has received attention. Moreover, operating at the limits of the network transmission capabilities has resulted in major outages during the past decade. One of the reasons for operating at these limits is that the network has become outdated. Therefore, new technical solutions are studied that could provide more reliable and more energy-efficient power distribution, and also better profitability for the network owner. It is the development and price of power electronics that have made DC distribution an attractive alternative again. In this doctoral thesis, one type of low-voltage DC distribution system is investigated. More specifically, it is studied which current technological solutions, used at the customer-end, could provide better power quality for the customer than the present system. To study the effect of a DC network on the customer-end power quality, a bipolar DC network model is derived. The model can also be used to identify the supply parameters when the V/kW ratio is approximately known. Although the model provides knowledge of the average behavior, it is shown that the instantaneous DC voltage ripple should be limited. Guidelines are given for choosing an appropriate capacitance value for the capacitor located at the input DC terminals of the customer-end. The structure of the customer-end is also considered: a comparison between the most common solutions is made based on their cost, energy efficiency, and reliability. In the comparison, special attention is paid to passive filtering solutions, since the filter is considered a crucial element when the lifetime expenses are determined. It is found that the filter topology most commonly used today, namely the LC filter, does not provide an economic advantage over the hybrid filter structure. Finally, some of the typical control system solutions are introduced and their shortcomings are presented.
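The need to limit the instantaneous DC voltage ripple can be illustrated with a generic energy-balance estimate. The formula below is a common textbook sizing rule, not the thesis's own guideline: a single-phase customer-end inverter of power P draws a power component pulsating at twice the grid frequency, which the DC-side capacitor must buffer to keep the peak-to-peak ripple within a chosen bound. The example voltage, power, and ripple figures are likewise assumptions for illustration:

```python
import math

def min_capacitance(p_load, v_dc, dv_pp, f_grid=50.0):
    """Minimum DC-link capacitance [F] so that a load p_load [W] pulsating
    at twice f_grid [Hz] causes at most dv_pp [V] peak-to-peak ripple on a
    bus at v_dc [V]. Generic energy-balance rule, assumed for illustration."""
    omega = 2 * math.pi * f_grid
    return p_load / (omega * v_dc * dv_pp)

# Hypothetical example: 5 kW customer-end, 750 V DC bus, 5 % ripple allowed.
c = min_capacitance(p_load=5e3, v_dc=750.0, dv_pp=0.05 * 750.0)
print(f"C >= {c * 1e6:.0f} uF")
```

The rule makes the qualitative point from the text concrete: the required capacitance grows linearly with load power and shrinks as more ripple is tolerated, which is why the filter dominates the lifetime cost comparison.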
As a solution to the customer-end voltage regulation problem, an observer-based control scheme is proposed. It is shown how different control system structures affect the performance. Performance meeting the requirements is achieved by using only one output measurement when operating in a rigid network; similar performance can be achieved in a weak grid by DC voltage measurement. A further improvement can be achieved when adaptive gain-scheduling-based control is introduced. In conclusion, the final power quality is determined by the sum of various factors, and the thesis provides guidelines for designing a system that improves the power quality experienced by the customer.
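The idea of observer-based control with a single output measurement can be sketched with a minimal Luenberger observer: a hypothetical discrete-time two-state customer-end filter model whose unmeasured state is reconstructed from the one measured output. The matrices and gains below are illustrative placeholders, not the thesis's design:

```python
# Hypothetical 2-state plant x[k+1] = A x[k] + B u[k], y[k] = C x[k];
# only the output voltage (state 0) is measured.
A = [[0.90, 0.10],
     [-0.20, 0.95]]
B = [0.10, 0.0]
C = [1.0, 0.0]
L = [1.45, 5.425]   # observer gain placing the error poles at 0.2, 0.2

def step(x, u):
    """One step of the state update x[k+1] = A x[k] + B u[k]."""
    return [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
            A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]

x = [0.0, 0.0]      # true plant state
xh = [1.0, -1.0]    # observer starts from a deliberately wrong estimate
for _ in range(60):
    u = 1.0                           # constant reference input
    y = C[0] * x[0] + C[1] * x[1]     # the single measurement
    yh = C[0] * xh[0] + C[1] * xh[1]
    x = step(x, u)
    xh = step(xh, u)
    # Correct the prediction with the measured-output innovation:
    xh = [xh[0] + L[0] * (y - yh), xh[1] + L[1] * (y - yh)]

err = max(abs(x[0] - xh[0]), abs(x[1] - xh[1]))
print(f"estimation error after 60 steps: {err:.2e}")
```

Because the error dynamics are governed by A - LC, the estimate converges regardless of the initial mismatch; the estimated state can then feed the voltage regulator in place of an extra sensor, which is the appeal of the scheme described above.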
Abstract:
In summary, the main findings of the study are that there seems to be no universal definition of value in the context of industrial relationships, but rather a notion that value is context-, time-, and actor-dependent. Value co-creation is a suitable concept in the context of buyer-seller relationships. The evolution of a relationship from a transactional one to a partnership is long and eventful, a process whose outcome is impossible to estimate in advance. The process is filled with different types of events and also conflicts, which can in fact be seen as constructive forces in relationship development. The perceived value of a relationship is an antecedent to pursuing a high-involvement strategy; once a partnership exists, the value co-creation potential is realizable through exploiting interdependencies. Those interdependencies are the trigger for value co-creation potential. The value co-creation potential is realized through different processes of value co-creation, either to achieve efficiency in exchange or effective use of resources. The logic of buyer-seller partnerships is to create and exploit interdependencies in order to achieve both efficiency and the effective use of resources. (Summary of main findings, p. 176)