846 results for Post-Operating Performance


Relevance: 30.00%

Abstract:

The application of contrast media in post-mortem radiology differs from clinical approaches in living patients. Post-mortem changes in the vascular system and the absence of blood flow lead to specific problems that have to be considered when performing post-mortem angiography. In addition, interpreting the images is challenging due to technique-related and post-mortem artefacts, which are specific to each applied technique and must be recognised. Although the idea of injecting contrast media is old, classic methods are not simply transferable to modern radiological techniques in forensic medicine, as they are mostly dedicated to single-organ studies or applicable only shortly after death. With the introduction of modern imaging techniques, such as post-mortem computed tomography (PMCT) and post-mortem magnetic resonance (PMMR), to forensic death investigations, intensive research started to explore their advantages and limitations compared to conventional autopsy. PMCT has already become a routine investigation in several centres, and different techniques have been developed to better visualise the vascular system and organ parenchyma in PMCT. In contrast, the use of PMMR is still limited due to practical issues, and research is now starting in the field of PMMR angiography. This article gives an overview of the problems in post-mortem contrast media application, the various classic and modern techniques, and the issues to consider when using different media.

Relevance: 30.00%

Abstract:

A cluster formed by a main HEAD node plus 19 compute nodes from the SGI Altix XE Servers and Clusters range, connected in a master-subordinate topology, with a total of 40 Dual Core processors and approximately 160 GB of RAM.

Relevance: 30.00%

Abstract:

Background: Neonatal brain injuries are the main cause of visual deficit produced by damage to posterior visual pathways. While there are several studies of visual function in low-risk preterm infants or older children with brain injuries, research in children of early age is lacking. Aim: To assess several aspects of visual function in preterm infants with brain injuries and to compare them with another group of low-risk preterm infants of the same age. Study design and subjects: Forty-eight preterm infants with brain injuries and 56 low-risk preterm infants. Outcome measures: The ML Leonhardt Battery of Optotypes was used to assess visual functions. This test was previously validated at a post-menstrual age of 40 weeks in newborns and at 30-plus weeks in preterm infants. Results: The group of preterm infants with brain lesions showed a delayed pattern of visual functions in alertness, fixation, visual attention and tracking behavior compared to infants in the healthy preterm group. The differences between the two groups in the visual behaviors analyzed were around 30%. These visual functions could be identified from the first weeks of life. Conclusion: Our results confirm the importance of using a straightforward screening test with preterm infants in order to assess altered visual function, especially in infants with brain injuries. The findings also highlight the need to provide visual stimulation very early in life.

Relevance: 30.00%

Abstract:

Adjusting behavior following the detection of inappropriate actions allows flexible adaptation to task demands and environmental contingencies during goal-directed behaviors. Post-error behavioral adjustments typically consist in adopting a more cautious response mode, which manifests as a slowing down of response speed. Although converging evidence implicates the dorsolateral prefrontal cortex (DLPFC) in post-error behavioral adjustment, whether and when the left or right DLPFC is critical for post-error slowing (PES), as well as the underlying brain mechanisms, remain highly debated. To resolve these issues, we used single-pulse transcranial magnetic stimulation (TMS) in healthy human adults to disrupt the left or right DLPFC selectively at various delays within the 30-180 ms interval following the commission of false alarms (FAs), while participants performed a standard visual Go/NoGo task. PES significantly increased after TMS disruption of the right, but not the left, DLPFC at 150 ms post-FA response. We discuss these results in terms of an involvement of the right DLPFC in reducing the detrimental effects of error detection on subsequent behavioral performance, as opposed to implementing an adaptive error-induced slowing down of response speed.
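Post-error slowing is conventionally quantified as the difference in mean reaction time between trials that follow an error and trials that follow a correct response. A minimal sketch of that computation follows; the function name and all trial data are illustrative, not from the study.

```python
# Hedged sketch: compute post-error slowing (PES) from a trial sequence.
# 'rts' are reaction times in ms; 'correct' marks each trial's accuracy.
# All values are hypothetical, not data from the study.

def post_error_slowing(rts, correct):
    """Mean RT after errors minus mean RT after correct responses."""
    post_err, post_corr = [], []
    for prev_ok, rt in zip(correct[:-1], rts[1:]):
        (post_corr if prev_ok else post_err).append(rt)
    return sum(post_err) / len(post_err) - sum(post_corr) / len(post_corr)

rts     = [350, 330, 460, 340, 345, 480]          # hypothetical RTs (ms)
correct = [True, False, True, True, False, True]  # False = false alarm
print(round(post_error_slowing(rts, correct), 2))
```

A positive value indicates the cautious post-error slowdown described above; a larger value after right-DLPFC disruption would correspond to the increased PES reported.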

Relevance: 30.00%

Abstract:

PURPOSE: We investigated the changes in physiological and performance parameters after a Live High-Train Low (LHTL) altitude camp in normobaric (NH) or hypobaric hypoxia (HH) to reproduce the actual training practices of endurance athletes, using a crossover-designed study. METHODS: Well-trained triathletes (n = 16) were split into two groups and completed two 18-day LHTL camps during which they trained at 1100-1200 m and lived at 2250 m (PiO2 = 111.9 ± 0.6 vs. 111.6 ± 0.6 mmHg) under NH (hypoxic chamber; FiO2 18.05 ± 0.03%) or HH (real altitude; barometric pressure 580.2 ± 2.9 mmHg) conditions. The subjects completed the NH and HH camps with a 1-year washout period. Measurements and protocol were identical for both phases of the crossover study. Oxygen saturation (SpO2) was recorded continuously every night. PiO2 and training loads were matched daily. Blood samples and VO2max were measured before (Pre-) and 1 day after (Post-1) LHTL. A 3-km running test was performed near sea level before and 1, 7, and 21 days after the training camps. RESULTS: Total hypoxic exposure was lower for NH than for HH during LHTL (230 vs. 310 h; P < 0.001). Nocturnal SpO2 was higher in NH than in HH (92.4 ± 1.2 vs. 91.3 ± 1.0%, P < 0.001). VO2max increased to the same extent for NH and HH (4.9 ± 5.6 vs. 3.2 ± 5.1%). No difference was found in hematological parameters. The 3-km run time was significantly faster in both conditions 21 days after LHTL (4.5 ± 5.0 vs. 6.2 ± 6.4% for NH and HH), and no difference between conditions was found at any time. CONCLUSION: Increases in VO2max and performance enhancement were similar between NH and HH conditions.
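The exposure figures above can be sanity-checked with simple arithmetic. The sketch below uses the hours quoted in the abstract to express the hypoxic dose per day, and a hypothetical absolute VO2max value to illustrate how a relative change such as the reported +4.9% is computed.

```python
# Back-of-the-envelope sketch using figures quoted in the abstract.
camp_days = 18
exposure_h = {"NH": 230, "HH": 310}        # total hypoxic hours per camp
for cond, hours in exposure_h.items():
    print(cond, round(hours / camp_days, 1), "h/day")   # NH 12.8, HH 17.2

def pct_change(pre, post):
    """Relative Pre- to Post-1 change in percent."""
    return 100 * (post - pre) / pre

# Hypothetical absolute VO2max values consistent with the reported +4.9 %:
print(round(pct_change(60.0, 62.94), 1))   # -> 4.9
```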

Relevance: 30.00%

Abstract:

The fundamental question in the transitional economies of former Eastern Europe and the Soviet Union has been whether privatisation and market liberalisation have had an effect on the performance of former state-owned enterprises. This study examines the effect of privatisation, capital market discipline, price liberalisation and international price exposure on the restructuring of large Russian enterprises. The performance indicators are sales, profitability, labour productivity and stock market valuations. The results do not show performance differences between state-owned and privatised enterprises. On the other hand, the expansion of the de novo private sector has been strong. New enterprises have significantly higher sales growth, profitability and labour productivity; however, the results indicate a diminishing effect of ownership. An international stock market listing has a significant positive effect on profitability, while the effect of a domestic stock market listing is insignificant. International price exposure has a significant positive effect on profitability and labour productivity. However, international enterprises have higher profitability only when operating in price-liberalised markets. The main results of the study are strong evidence of the positive effects of international linkages on enterprise restructuring and the larger-than-expected role of new enterprises in the Russian economy.

Relevance: 30.00%

Abstract:

Centrifugal compressors are widely used, for example, in refrigeration processes, the oil and gas industry, superchargers, and waste water treatment. In this work, five different vaneless diffusers and six different vaned diffusers are investigated numerically. The vaneless diffusers vary only in their diffuser width, so that four of the geometries have a pinch implemented in them; a pinch is a decrease in the diffuser width. Four of the vaned diffusers have the same vane turning angle and a different number of vanes, and two have different vane turning angles. The flow solver used to solve the flow fields is Finflo, a Navier-Stokes solver. All the cases are modelled with Chien's k-ε turbulence model, and selected cases are also modelled with the k-ω SST turbulence model. All five vaneless diffusers and three vaned diffusers are also investigated experimentally. For each construction, the compressor operating map is measured according to relevant standards. In addition, the flow fields before and after the diffuser are measured with static and total pressure, flow angle and total temperature measurements. When comparing the computational results to the measured results, it is evident that the k-ω SST turbulence model predicts the flow fields better. The simulation results indicate that it is possible to improve the efficiency with the pinch, and according to the numerical results, the two best geometries are the ones with the most pinch at the shroud. These geometries have approximately 4 percentage points higher efficiency than the unpinched vaneless diffusers. The hub pinch does not seem to have any major benefits. In general, the pinches make the flow fields before and after the diffuser more uniform. The pinch also seems to improve the impeller efficiency, for two reasons. The main reason is that the pinch decreases the size of the slow-flow and possible backflow region located near the shroud after the impeller. Secondly, the pinches decrease the flow velocity in the tip clearance, leading to a smaller tip leakage flow and therefore slightly better impeller efficiency. Some of the vaned diffusers also improve the efficiency, the increment being 1-3 percentage points when compared to the vaneless unpinched geometry. The measurement results confirm that the pinch is beneficial to the performance of the compressor. The flow fields are more uniform in the pinched cases, and the slow-flow regions are smaller. The peak efficiency is approximately 2 percentage points and the design-point efficiency approximately 4 percentage points higher with the pinched geometries than with the unpinched geometry. According to the measurements, the two best geometries are the ones with the most pinch at the shroud, the case with the pinch only at the shroud being slightly the better of the two. The vaned diffusers also have better efficiency than the vaneless unpinched geometries; however, the pinched cases have even better efficiencies. The vaned diffusers narrow the operating range considerably, whilst the pinch has no significant effect on the operating range.
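One way to see why a pinch energises the slow shroud-side flow is through conservation of mass: narrowing the diffuser width b raises the meridional velocity at a given radius for a fixed mass flow. The sketch below is illustrative only; all numbers are hypothetical, not geometry from the thesis.

```python
import math

def meridional_velocity(m_dot, rho, r, b):
    """c_m = m_dot / (rho * 2*pi*r * b), from conservation of mass."""
    return m_dot / (rho * 2 * math.pi * r * b)

m_dot, rho, r = 1.2, 1.4, 0.15           # kg/s, kg/m^3, m (hypothetical)
b_unpinched, b_pinched = 0.012, 0.009    # diffuser widths in m (hypothetical)

c_unp = meridional_velocity(m_dot, rho, r, b_unpinched)
c_pin = meridional_velocity(m_dot, rho, r, b_pinched)
print(round(c_pin / c_unp, 2))           # -> 1.33: width cut by 25 %, c_m up by 1/0.75
```

The real flow is of course three-dimensional and viscous, which is why the thesis resolves it with a Navier-Stokes solver rather than this one-dimensional estimate.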

Relevance: 30.00%

Abstract:

Two experimental studies evaluated the effect of changes in aerobic and membrane aeration on sludge properties, biological nutrient removal and filtration processes in a pilot-plant membrane bioreactor. The optimal operating conditions were found at an aerobic dissolved oxygen set-point (DO) of 0.5 mg O2 L-1 and a membrane specific aeration demand (SADm) of 1 m h-1, where membrane aeration can be used for nitrification. Under these conditions, a total air-flow reduction of 42% was achieved (75% energy reduction) without compromising nutrient removal efficiencies, while maintaining sludge characteristics and controlled filtration. Below these optimal operating conditions, nutrient removal efficiency was reduced: soluble microbial products increased by 20%, capillary suction time increased by 14% and filterability decreased by 15%. Below this DO set-point, fouling increased, with a transmembrane pressure 75% higher. SADm below 1 m h-1 doubled the transmembrane pressure values, without recovery after the initial conditions were restored.
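As a unit check on the aeration figures above: SADm is an air flow normalised by membrane area (m³ of air per m² of membrane per hour, i.e. m/h), so the blower flow implied by a set-point scales linearly with membrane area. The membrane area and the higher reference set-point in this sketch are hypothetical.

```python
def air_flow_m3_per_h(sadm_m_per_h, membrane_area_m2):
    """Blower flow implied by a SADm set-point: (m/h) * m^2 = m^3/h."""
    return sadm_m_per_h * membrane_area_m2

area = 8.0                                   # m^2, hypothetical module area
q_ref = air_flow_m3_per_h(2.0, area)         # hypothetical higher set-point
q_opt = air_flow_m3_per_h(1.0, area)         # optimum found in the study
print(round(100 * (q_ref - q_opt) / q_ref))  # membrane-air saving in %
```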

Relevance: 30.00%

Abstract:

In many industries, such as petroleum production and the petrochemical, metal, food and cosmetics industries, wastewaters containing an emulsion of oil in water are often produced. The emulsions consist of water (up to 90%), oils (mineral, animal, vegetable and synthetic), surfactants and other contaminants. In view of its toxic nature and its deleterious effects on the surrounding environment (soil, water), such wastewater needs to be treated before release into natural waterways. Membrane-based processes have been applied successfully in industrial applications and are considered possible candidates for the treatment of oily wastewaters. Easy operation, lower cost and, in some cases, the ability to reduce contaminants below existing pollution limits are the main advantages of these systems. The main drawback of membranes is flux decline due to fouling and concentration polarisation. The complexity of oil-containing systems demands complementary studies on issues related to the mitigation of fouling and concentration polarisation in membrane-based ultrafiltration. In this thesis the effect of different operating conditions (factors) on the ultrafiltration of oily water is studied. Important factors are normally correlated and, therefore, their effects should be studied simultaneously. This work uses a novel approach to study different operating conditions, such as pressure, flow velocity and temperature, and solution properties, such as oil concentration (cutting oil, diesel, kerosene), pH and salt concentration (CaCl2 and NaCl), in the ultrafiltration of oily water, simultaneously and in a systematic way using an experimental design approach. A hypothesis is developed to describe the interaction between the oil drops, the salt and the membrane surface. The optimum conditions for ultrafiltration and the contribution of each factor in the ultrafiltration of oily water are evaluated.

It was found that the effect of the various factors studied on permeate flux depended strongly on the type of oil, the type of membrane and the amount of salts. The thesis demonstrates that a system containing oil is very complex, and that fouling and flux decline can be observed even at very low pressures. This means that only the weak form of the critical flux exists for such systems. The cleaning of the fouled membranes and the influence of different parameters (flow velocity, temperature, time, pressure, and chemical concentration (SDS, NaOH)) were also evaluated in this study. It was observed that fouling, and consequently cleaning, behaved differently for the studied membranes. Of the membranes studied, the one with the lowest propensity for fouling, and the most easily cleaned, was the regenerated cellulose membrane (C100H). In order to obtain more information about the interaction between the membrane and the components of the emulsion, a streaming potential study was performed on the membrane. The experiments were carried out at different pH values and oil concentrations. It was seen that oily water changed the surface charge of the membrane significantly. The surface charge and the streaming potential during different stages of filtration were measured and analysed, which constitutes a new method for studying oil fouling in this thesis. The surface charge varied in the different stages of filtration. It was found that the surface charge of a cleaned membrane was not the same as initially; however, the permeability was equal to that of a virgin membrane. The effect of filtration mode was studied by performing the filtration in both cross-flow and dead-end mode. The effect of salt on performance was considered in both studies. It was found that salt decreased the permeate flux even at low concentrations. To test the effect of a change in hydrophilicity, the commercial membranes used in this thesis were modified by grafting PNIPAAm onto their surfaces. A new technique (corona treatment) was used for this modification. The effect of the modification on permeate flux and retention was evaluated. The modified membranes changed their pore size around 33 °C, resulting in different retention and permeability. The results obtained in this thesis can be applied to optimise the operation of a membrane plant under normal or shock conditions, or to modify the process so that it becomes more efficient or effective.
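The experimental-design approach mentioned above, studying correlated factors simultaneously rather than one at a time, can be sketched as a full-factorial plan. The factor levels below are hypothetical, chosen only to illustrate the idea.

```python
from itertools import product

# Enumerate a full-factorial experimental plan over several factors at once.
factors = {
    "pressure_bar":  [1.0, 2.0, 3.0],
    "velocity_m_s":  [0.5, 1.5],
    "temperature_C": [25, 40],
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))   # 3 * 2 * 2 = 12 runs covering every factor combination
print(runs[0])
```

In practice a fractional design is often used instead, cutting the number of runs while still estimating the main effects and the key interactions.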

Relevance: 30.00%

Abstract:

The dissertation is based on four articles dealing with the purification of recalcitrant lignin-containing waters. Lignin, a complicated substance recalcitrant to most treatment technologies, seriously hampers waste management in the pulp and paper industry. Therefore, lignin is studied using wet oxidation (WO) as a process method for its degradation. Special attention is paid to the improvement in biodegradability and the reduction of lignin content, since these have special importance for any subsequent biological treatment. In most cases wet oxidation is not used as a complete mineralization method but as a pre-treatment in order to eliminate toxic components and to reduce the high level of organics produced. The combination of wet oxidation with a biological treatment can be a good option due to its effectiveness and its relatively low technology cost. The literature part gives an overview of Advanced Oxidation Processes (AOPs). A hot oxidation process, wet oxidation, is investigated in detail and is the AOP process used in the research. The background and main principles of wet oxidation, its industrial applications, the combination of wet oxidation with other water treatment technologies, the principal reactions in WO, and key aspects of modelling and reaction kinetics are presented. An overview is also given of wood composition and lignin characterisation (chemical composition, structure and origin), lignin-containing waters, lignin degradation and reuse possibilities, and purification practices for lignin-containing waters. The aim of the research was to investigate the effect of the operating conditions of WO, such as temperature, partial pressure of oxygen, pH and initial concentration of wastewater, on the efficiency, and to enhance the process and estimate optimal conditions for the WO of recalcitrant lignin waters. Two different waters were studied (a lignin water model solution and debarking water from the paper industry) to give conditions as appropriate as possible.

Due to the great importance of re-using and minimizing industrial residues, further research was carried out using residual ash from an Estonian power plant as a catalyst in the wet oxidation of lignin-containing water. Developing a kinetic model that includes parameters such as TOC in the prediction gives the opportunity to estimate the amount of emerging inorganic substances (the degradation rate of the waste) and not only the decrease of COD and BOD. The target compound of degradation, lignin, is included in the model through its COD value (CODlignin). Such a kinetic model can be valuable in developing WO treatment processes for lignin-containing waters, or for other wastewaters containing one or more target compounds. In the first article, wet oxidation of "pure" lignin water was investigated as a model case with the aim of degrading lignin and enhancing water biodegradability. The experiments were performed at various temperatures (110-190 °C), partial oxygen pressures (0.5-1.5 MPa) and pH values (5, 9 and 12). The experiments showed that increasing the temperature notably improved the process efficiency: 75% lignin reduction was detected at the lowest temperature tested, and lignin removal improved to 100% at 190 °C. The effect of temperature on the COD removal rate was lower, but clearly detectable; 53% of the organics were oxidized at 190 °C. The effect of pH was seen mostly in lignin removal: increasing the pH enhanced the lignin removal efficiency from 60% to nearly 100%. A good biodegradability ratio (over 0.5) was generally achieved. The aim of the second article was to develop a mathematical model for "pure" lignin wet oxidation using lumped characteristics of the water (COD, BOD, TOC) and the lignin concentration. The model agreed well with the experimental data (R2 = 0.93 at pH 5 and 12), and the concentration changes during wet oxidation followed the experimental results adequately. The model also correctly showed the trend of biodegradability (BOD/COD) changes.

In the third article, the purpose of the research was to estimate optimal conditions for the wet oxidation of debarking water from the paper industry. The WO experiments were performed at various temperatures, partial oxygen pressures and pH values. The experiments showed that lignin degradation and organics removal are affected remarkably by temperature and pH: 78-97% lignin reduction was detected under the different WO conditions. An initial pH of 12 caused faster removal of the tannin/lignin content, but an initial pH of 5 was more effective for the removal of total organics, represented by COD and TOC. Most of the decrease in the concentrations of organic substances occurred in the first 60 minutes. The aim of the fourth article was to compare the behaviour of two reaction kinetic models, based on experiments on the wet oxidation of industrial debarking water under different conditions. The simpler model took into account only the changes in COD, BOD and TOC; the advanced model was similar to the model used in the second article. Comparing the results of the models, the second model was found to be more suitable for describing the kinetics of the wet oxidation of debarking water. The significance of the reactions involved was compared on the basis of the model: for instance, lignin degraded first to other chemically oxidizable compounds rather than directly to biodegradable products. Catalytic wet oxidation (CWO) of lignin-containing waters is briefly presented at the end of the dissertation. Two completely different catalysts were used: a commercial Pt catalyst and waste power plant ash. CWO showed good performance: using 1 g/L of residual ash gave a lignin removal of 86% and a COD removal of 39% at 150 °C (a lower temperature and pressure than with WO). It was noted that the ash catalyst caused a remarkable lignin degradation rate already during the pre-heating: at 'zero' time, 58% of the lignin was already degraded.

In general, wet oxidation is not recommended for use as a complete mineralization method, but as a pre-treatment phase to eliminate toxic or poorly biodegradable components and to reduce the high level of organics. Biological treatment is an appropriate post-treatment method, since easily biodegradable organic matter remains after the WO process. The combination of wet oxidation with subsequent biological treatment can be an effective option for the treatment of lignin-containing waters.
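The lumped-parameter idea behind the kinetic models can be illustrated with a deliberately simplified first-order decay of COD. The dissertation's actual models couple COD, BOD, TOC and CODlignin; the rate constant and initial load below are hypothetical.

```python
import math

def cod_profile(cod0, k_per_min, t_min):
    """First-order lumped kinetics: COD(t) = COD0 * exp(-k * t)."""
    return cod0 * math.exp(-k_per_min * t_min)

cod0 = 5000.0    # mg O2/L, hypothetical initial load
k = 0.0126       # 1/min, hypothetical rate constant
for t in (0, 30, 60):
    print(t, "min:", round(cod_profile(cod0, k, t)), "mg O2/L")
```

With these numbers roughly half of the COD disappears within the first hour, consistent with the observation above that most of the decrease occurs in the first 60 minutes.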

Relevance: 30.00%

Abstract:

This study addresses the recently developed concept of strategic entrepreneurship, with the aim of investigating the underlying factors and components constituting the concept and their influence on firm performance. Based on an analysis of the existing literature and empirical studies, a model of strategic entrepreneurship is developed for the current study, with emphasis on the exploration and exploitation parts of the concept. The research model is tested on data collected in the project "Factors of growth and success of entrepreneurial firms in Russia" by the Center for Entrepreneurship of GSOM in 2007, containing answers from owners and managers of 500 firms operating in St. Petersburg and Moscow. Multiple regression analysis showed that exploration and exploitation, represented by entrepreneurial values, investments in internal resources, knowledge management and developmental changes, are significant factors constituting strategic entrepreneurship and are positively related to firm performance. The theoretical contribution of the work lies in the development and testing of the model of strategic entrepreneurship. The results can be implemented in the management practices of companies willing to engage in strategic entrepreneurship and increase their firm performance.

Relevance: 30.00%

Abstract:

Brazil is making a major effort to find alternatives to diesel oil as a fuel. Some lines of research are oriented towards the use of vegetable oils directly as fuel, since they are cheaper and have a higher energy density than converted (transesterified) vegetable oils, with less risk of environmental contamination. The aim of this study was to evaluate the performance, the useful life of the lubricant and some components of a Diesel-cycle engine with an electronic injection system, in a long-term test operating with a preheated (65 °C) blend of 50% (v/v) soybean oil in petrodiesel. There was a reduction in the useful life of the injectors, which failed due to severe wear after 264 hours of operation, and an increase in the emissions of particulate matter (opacity), which may be attributed to the failures in the injection system. An increase in the useful life of the lubricant compared with the literature was also observed. The electronic injection system may favor the burning of the tested fuel. The test was interrupted at 264 hours because of the failures in the injection system.

Relevance: 30.00%

Abstract:

Unsuccessful mergers are unfortunately the rule rather than the exception. Therefore it is necessary to gain an enhanced understanding of mergers and post-merger integrations (PMI) as well as learning more about how mergers and PMIs of information systems (IS) and people can be facilitated. Studies on PMI of IS are scarce and public sector mergers are even less studied. There is nothing however to indicate that public sector mergers are any more successful than those in the private sector. This thesis covers five studies carried out between 2008 and 2011 in two organizations in higher education that merged in January 2010. The most recent study was carried out two years after the new university was established. The longitudinal case-study focused on the administrators and their opinions of the IS, the work situation and the merger in general. These issues were investigated before, during and after the merger. Both surveys and interviews were used to collect data, to which were added documents that both describe and guide the merger process; in this way we aimed at a triangulation of findings. Administrators were chosen as the focus of the study since public organizations are highly dependent on this staff category, forming the backbone of the organization and whose performance is a key success factor for the organization. Reliable and effective IS are also critical for maintaining a functional and effective organization, and this makes administrators highly dependent on their organizations’ IS for the ability to carry out their duties as intended. The case-study has confirmed the administrators’ dependency on IS that work well. A merger is likely to lead to changes in the IS and the routines associated with the administrators’ work. Hence it was especially interesting to study how the administrators viewed the merger and its consequences for IS and the work situation. The overall research objective is to find key issues for successful mergers and PMIs. 
The first explorative study in 2008 showed that the administrators were confident of their skills and knowledge of IS and had no fear of having to learn new IS due to the merger. Most administrators had an academic background and were not anxious about whether IS training would be given or not. Before the merger the administrators were positive and enthusiastic towards the merger and also to the changes that they expected. The studies carried out before the merger showed that these administrators were very satisfied with the information provided about the merger. This information was disseminated through various channels and even negative information and postponed decisions were quickly distributed. The study conflicts with the theories that have found that resistance to change is inevitable in a merger. Shortly after the merger the (third) study showed disappointment with the fact that fewer changes than expected had been implemented even if the changes that actually were carried out sometimes led to a more problematic work situation. This was seen to be more prominent for routine changes than IS changes. Still the administrators showed a clear willingness to change and to share their knowledge with new colleagues. This knowledge sharing (also tacit) worked well in the merger and the PMI. The majority reported that the most common way to learn to use new ISs and to apply new routines was by asking help from colleagues. They also needed to take responsibility for their own training and development. Five months after the merger (the fourth study) the administrators had become worried about the changes in communication strategy that had been implemented in the new university. This was perceived as being more anonymous. Furthermore, it was harder to get to know what was happening and to contact the new decision makers. 
The administrators found that decisions, and the authority to make decisions, had been moved to a higher administrative level than they were accustomed to. A directive management style is recommended in mergers in order to achieve a quick transition without distracting from the core business. A merger process may be tiresome and require considerable effort from the participants. In addition, not everyone can make their voice heard during a merger and consensus is not possible in every question. It is important to find out what is best for the new organization instead of simply claiming that the tried and tested methods of doing things should be implemented. A major problem turned out to be the lack of management continuity during the merger process. Especially problematic was the situation in the IS-department with many substitute managers during the whole merger process (even after the merger was carried out). This meant that no one was in charge of IS-issues and the PMI of IS. Moreover, the top managers were appointed very late in the process; in some cases after the merger was carried out. This led to missed opportunities for building trust and management credibility was heavily affected. The administrators felt neglected and that their competences and knowledge no longer counted. This, together with a reduced and altered information flow, led to rumours and distrust. Before the merger the administrators were convinced that their achievements contributed value to their organizations and that they worked effectively. After the merger they were less sure of their value contribution and effectiveness even if these factors were not totally discounted. The fifth study in November 2011 found that the administrators were still satisfied with their IS as they had been throughout the whole study. Furthermore, they believed that the IS department had done a good job despite challenging circumstances. 
Both of the former organizations lacked IS strategies, which badly affected IS strategizing during the merger and the PMI. IS strategies deal with issues such as system ownership: who should pay for and who is responsible for maintenance and system development, for organizing training for new IS, and for running IS effectively even under changing circumstances (e.g. more users). A proactive approach is recommended for IS strategizing to work. This is particularly true during a merger and PMI, for handling issues about which ISs should be adopted and implemented in the new organization, and issues of integration and the re-engineering of IS-related processes. In the new university, an IT strategy had still not been decided 26 months after the new university was established. The study shows the importance of decisive management of IS in a merger, requiring that IS issues are addressed in the merger process and that IS decisions are made early. Moreover, the new management needs to be appointed early in order to work actively on IS strategizing. It is also necessary to build trust and to plan and make decisions about the integration of IS and people.

Relevância:

30.00%

Publicador:

Resumo:

Fuel cells are a promising alternative for clean and efficient energy production. A fuel cell is probably the most demanding of all distributed generation power sources. It resembles a solar cell in many ways, but sets strict limits on current ripple, common-mode voltages and load variations. The typically low output voltage of the fuel cell stack needs to be boosted to a higher voltage level for grid interfacing. Because of the high electrical efficiency of the fuel cell, high-efficiency power converters are needed, and in the case of low voltage, high current and galvanic isolation, implementing such converters is not a trivial task. This thesis presents galvanically isolated DC-DC converter topologies that have favorable characteristics for fuel cell use and reviews the topologies from the viewpoints of electrical efficiency and cost efficiency. The focus is on evaluating the design issues of a single converter module carrying large current stresses. The dominant loss mechanism in low-voltage, high-current applications is conduction loss. In the case of MOSFETs, conduction losses can be reduced efficiently by paralleling, but in the case of diodes, the effectiveness of paralleling depends strongly on the semiconductor material, the diode parameters and the output configuration. The transformer winding losses can be a major source of loss if the windings are not optimized for the topology and the operating conditions. Transformer prototyping can be expensive and time-consuming, and it is therefore preferable to utilize various calculation methods during the design process in order to evaluate the performance of the transformer. This thesis reviews calculation methods for solid-wire, litz-wire and copper-foil winding losses, and in order to evaluate the applicability of the methods, the calculations are compared against measurements and FEM simulations.
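Foil-winding AC loss calculations of the kind reviewed above are commonly based on Dowell's one-dimensional model, which gives the AC-to-DC resistance ratio as a function of foil thickness relative to skin depth and the number of layers. As an illustrative sketch only (the function names and the example foil geometry below are assumptions, not values from the thesis):

```python
import math

def skin_depth(freq_hz, resistivity=1.68e-8, mu_r=1.0):
    """Skin depth in metres; defaults correspond to copper at room temperature."""
    mu0 = 4 * math.pi * 1e-7
    return math.sqrt(resistivity / (math.pi * freq_hz * mu_r * mu0))

def dowell_ac_factor(h_over_delta, layers):
    """Dowell's AC-resistance factor F_R = R_ac / R_dc for a foil winding.

    h_over_delta: foil thickness divided by skin depth
    layers: number of winding layers (m in Dowell's formula)
    """
    d = h_over_delta
    # Skin-effect term of Dowell's formula
    skin_term = d * (math.sinh(2 * d) + math.sin(2 * d)) / (math.cosh(2 * d) - math.cos(2 * d))
    # Proximity-effect term, growing with the square of the layer count
    proximity_term = d * (2 * (layers ** 2 - 1) / 3) * \
        (math.sinh(d) - math.sin(d)) / (math.cosh(d) + math.cos(d))
    return skin_term + proximity_term

# Example: 0.2 mm copper foil, 4 layers, 100 kHz sinusoidal excitation
delta = skin_depth(100e3)                  # roughly 0.21 mm in copper
f_r = dowell_ac_factor(0.2e-3 / delta, 4)  # R_ac is a few times R_dc here
```

The winding loss for a sinusoidal current then follows as P = I_rms² · F_R · R_dc; for the non-sinusoidal converter currents discussed in the thesis, the factor would be applied harmonic by harmonic.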
By selecting a proper calculation method for each winding type, the winding losses can be predicted quite accurately before the transformer is actually constructed. The transformer leakage inductance, whose value can also be calculated with reasonable accuracy, has a significant impact on the semiconductor switching losses. Therefore, the leakage inductance effects should also be taken into account when considering the overall efficiency of the converter. It is demonstrated in this thesis that although there are some distinctive differences in the loss distributions between the converter topologies, the differences in overall efficiency can remain within a few percentage points. However, the optimization effort required to achieve the high efficiencies differs considerably between the topologies. In the presence of practical constraints such as manufacturing complexity or cost, the question of topology selection can become crucial.
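The leakage-inductance effect on switching losses can be bounded with a simple energy argument: the energy stored in the leakage inductance at the switching instant is either dissipated or recycled, depending on the topology. A minimal sketch, assuming worst-case hard switching where the full leakage energy is lost once per cycle (the component values are illustrative assumptions):

```python
def leakage_energy(l_leak_h, i_peak_a):
    """Energy stored in the leakage inductance at turn-off: E = 0.5 * L * I^2 (joules)."""
    return 0.5 * l_leak_h * i_peak_a ** 2

def leakage_switching_power(l_leak_h, i_peak_a, f_sw_hz):
    """Average power lost if that energy is dissipated once per switching cycle (watts)."""
    return leakage_energy(l_leak_h, i_peak_a) * f_sw_hz

# Example: 100 nH leakage, 50 A switched current, 100 kHz switching frequency
p_leak = leakage_switching_power(100e-9, 50.0, 100e3)   # 12.5 W
```

Even a small leakage inductance can thus matter at high currents, which is why it belongs in any overall efficiency comparison between topologies.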

Relevância:

30.00%

Publicador:

Resumo:

This study concerns performance measurement and management in a collaborative network. Collaboration between companies has increased in recent years due to the turbulent operating environment. The literature shows a need for more comprehensive research on performance measurement in networks and on the use of measurement information in their management. This study examines the development process and the uses of a performance measurement system supporting performance management in a collaborative network. There are two main research questions: how to design a performance measurement system for a collaborative network, and how to manage performance in a collaborative network. The work can be characterised as a qualitative single-case study. The empirical data was collected in a Finnish collaborative network consisting of a leading company and a reseller network. The work is based on five research articles applying various research methods. The research questions are examined at the network level and at the level of a single network partner. The study contributes to the earlier literature by producing a new and deeper understanding of network-level performance measurement and management. A three-step process model is presented to support the design of the performance measurement system. The process model has been tested in another collaborative network. The study also examines the factors affecting the design process of the measurement system. The results show that a participatory development style, network culture, and outside facilitators have a positive effect on the design process. The study increases understanding of how to manage performance in a collaborative network and of what kinds of uses of performance information can be identified in a collaborative network. The results show that the performance measurement system is an applicable tool for managing the performance of a network.
The results reveal that trust and openness increased during the use of the performance measurement system, and operations became more transparent. The study also presents a management model that evaluates the maturity of performance management in a collaborative network. The model is a practical tool that helps to analyse the current state of performance management in a collaborative network and to develop it further.