912 results for HIGH-VOLTAGE AND HIGH-CURRENT


Relevance: 100.00%

Publisher:

Abstract:

This thesis describes an investigation of methods by which both repetitive and non-repetitive electrical transients in an HVDC converter station may be controlled for minimum overall cost. Several methods of inrush control are proposed and studied. The preferred method, whose development is reported in this thesis, would utilize two magnetic materials, one assumed to be lossless and the other having controlled eddy-current losses. Mathematical studies are performed to assess the optimum characteristics of these materials, such that inrush current is suitably controlled for a minimum saturation-flux requirement. Subsequent evaluation of the hardware cost and capitalized losses of the proposed inrush control indicates that a cost reduction of approximately 50% is achieved in comparison with the inrush control hardware of the Sellindge converter station. Further mathematical studies prove the adequacy of the proposed inrush-control characteristics for controlling voltage and current transients under both repetitive and non-repetitive operating conditions. The results of these proving studies indicate that no change in the proposed characteristics is required to ensure that the integrity of the thyristors is maintained.

Relevance: 100.00%

Publisher:

Abstract:

With economic activity in emerging markets growing at 40 percent, and with 10 percent and more of the firms in the Global Fortune 500 now headquartered in emerging economies, intense interest lies in the globalization of business activities, including the sales function. This systematic review of the international sales literature in a selection of the most influential journals explains, consolidates, and analyzes current knowledge. This paper also explores the challenges inherent in conducting international sales research, including conceptualization, research management, and data collection issues. Finally, we suggest ways to move forward for researchers in this field, including pertinent topics and how methodological and practical constraints might be addressed.

Relevance: 100.00%

Publisher:

Abstract:

Aim - The aim of the study was to determine the potential of KV1 potassium channel blockers as inhibitors of human neointimal hyperplasia. Methods and results - Blood vessels were obtained from patients or mice and studied in culture. Reverse transcriptase-polymerase chain reaction and immunocytochemistry were used to detect gene expression. Whole-cell patch-clamp, intracellular calcium measurement, cell migration assays, and organ culture were used to assess channel function. KV1.3 was unique among the KV1 channels in showing preserved and up-regulated expression when the vascular smooth muscle cells switched to the proliferating phenotype. There was strong expression in neointimal formations. Voltage-dependent potassium current in proliferating cells was sensitive to three different blockers of KV1.3 channels. Calcium entry was also inhibited. All three blockers reduced vascular smooth muscle cell migration and the effects were non-additive. One of the blockers (margatoxin) was highly potent, suppressing cell migration with an IC50 of 85 pM. Two of the blockers were tested in organ-cultured human vein samples and both inhibited neointimal hyperplasia. Conclusion - KV1.3 potassium channels are functional in proliferating mouse and human vascular smooth muscle cells and have positive effects on cell migration. Blockers of the channels may be useful as inhibitors of neointimal hyperplasia and other unwanted vascular remodelling events. © 2010 The Author.
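The potency figure quoted for margatoxin corresponds to the standard Hill (logistic) dose-response model. As an illustration only — the function name and the Hill slope of 1 are assumptions here, not values reported in the study — the relation can be sketched as:

```python
def fraction_inhibited(conc_pm, ic50_pm=85.0, hill=1.0):
    """Standard Hill dose-response curve: fraction of the response
    blocked at a given inhibitor concentration (both in pM)."""
    return conc_pm ** hill / (ic50_pm ** hill + conc_pm ** hill)

# By definition, inhibition at the IC50 concentration is 50%.
print(fraction_inhibited(85.0))              # 0.5
# Ten-fold above the IC50 (Hill slope 1) gives ~91% inhibition.
print(round(fraction_inhibited(850.0), 2))   # 0.91
```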

Relevance: 100.00%

Publisher:

Abstract:

Introduction - The design of the UK MPharm curriculum is driven by the Royal Pharmaceutical Society of Great Britain (RPSGB) accreditation process and the EU directive (85/432/EEC).[1] Although the RPSGB is informed about teaching activity in UK Schools of Pharmacy (SOPs), there is no database which aggregates information to provide the whole picture of pharmacy education within the UK. The aim of the teaching, learning and assessment study [2] was to document and map current programmes in the 16 established SOPs. Recent developments in programme delivery have resulted in a focus on deep learning (for example, through problem-based learning approaches) and on being more student-centred and less didactic through lectures. The specific objectives of this part of the study were (a) to quantify the content and modes of delivery of material as described in course documentation and (b) having categorised the range of teaching methods, to ask students to rate how important they perceived each one to be for their own learning (using a three-point Likert scale: very important, fairly important or not important). Material and methods - The study design compared three datasets: (1) a quantitative course-document review, (2) qualitative staff interviews and (3) a quantitative student self-completion survey. All 16 SOPs provided a set of their undergraduate course documentation for the year 2003/4. The documentation variables were entered into Excel tables. A self-completion questionnaire was administered to all year-four undergraduates (n=1847) in 15 SOPs within Great Britain, using a pragmatic mixture of methods. The survey data were analysed (n=741) using SPSS, excluding non-UK students who may have undertaken part of their studies within a non-UK university. Results and discussion - Interviews showed that individual teachers and course-module leaders determine the choice of teaching methods used.
Content review of the documentary evidence showed that 51% of the taught element of the course was delivered using lectures, 31% using practicals (including computer-aided learning) and 18% using small-group or interactive teaching. There was high uniformity across the schools for the first three years; variation in the final year was due to the project. The average number of hours per year across 15 schools (data for one school were not available) was: year 1: 408 hours; year 2: 401 hours; year 3: 387 hours; year 4: 401 hours. The survey showed that students perceived lectures to be the most important method of teaching after dispensing or clinical practicals. Taking the very important rating only: 94% (n=694) dispensing or clinical practicals; 75% (n=558) lectures; 52% (n=386) workshops; 50% (n=369) tutorials; 43% (n=318) directed study. Scientific laboratory practicals were rated very important by only 31% (n=227). The study shows that the teaching of pharmacy to undergraduates in the UK is still essentially didactic, with a high proportion of formal lectures and high levels of staff-student contact. Schools consider lectures still to be the most cost-effective means of delivering the core syllabus to large cohorts of students. However, this limits the scope for optionality within teaching; the scope for small-group work is reduced, as is the opportunity to develop multi-professional learning or practice placements. Although novel teaching and learning techniques such as e-learning have expanded considerably over the past decade, schools of pharmacy have concentrated on lectures as the best way of coping with the huge expansion in student numbers. References [1] Council Directive concerning the coordination of provisions laid down by law, regulation or administrative action in respect of certain activities in the field of pharmacy. Official Journal of the European Communities 1985;85/432/EEC. [2] Wilson K, Jesson J, Langley C, Clarke L, Hatfield K.
MPharm Programmes: Where are we now? Report commissioned by the Pharmacy Practice Research Trust, 2005.

Relevance: 100.00%

Publisher:

Abstract:

This work is part of a larger project which aims to research the potential development of commercial opportunities for the re-use of batteries, after their service in low-carbon vehicles, on an electricity grid or microgrid system. There are three main revenue streams (peak-load lopping on the distribution network to allow network-reinforcement deferral; National Grid primary/secondary/high-frequency response; customer energy-management optimization). These income streams depend on the grid system being present. However, additional value can be gained by also using these batteries to provide UPS backup when the grid is no longer present. Most UPSs or ESSs on the market use new batteries in conjunction with a two-level converter interface. This produces a reliable backup solution in the case of loss of mains power, but may be expensive to implement. This paper introduces a modular multilevel cascade converter (MMCC) based ESS using second-life batteries, for use on a grid-independent industrial plant without any additional on-site generator, as a potentially cheaper alternative. The number of modules has been designed for a given reliability target, and these modules could be used to minimize or eliminate the output filter. An appropriate strategy to provide voltage and frequency control in a grid-independent system is described and simulated under different disturbance conditions such as load switching, fault conditions and large-motor starting. A comparison of the results from the modular topology against a traditional two-level converter demonstrates similar performance. The proposed ESS and control strategy are an acceptable way of providing backup power in the event of loss of grid, and additional financial benefit to the customer may be obtained by using a second-life battery in this way.

Relevance: 100.00%

Publisher:

Abstract:

Recent advances in InGaN/GaN growth technology have well positioned InGaN-based white LEDs to move into general, everyday lighting. Monolithic white LEDs with multiple QWs were first demonstrated by Damilano et al. [1] in 2001. However, several challenges must still be overcome before InGaN-based monolithic white LEDs can establish themselves as an alternative to other everyday lighting sources [2,3]. Alongside the key characteristics of luminous efficacy and EQE, the colour rendering index (CRI) and correlated colour temperature (CCT) are important characteristics for these structures [2,4]. The monolithic white structures investigated were similar to those described in [5] and contained blue and green InGaN multiple QWs without a short-period superlattice between them, emitting at 440 nm and 530 nm respectively. The electroluminescence (EL) measurements were performed in CW and pulsed-current modes. An integrating sphere (Labsphere "CDS 600" spectrometer) and a pulse generator (Agilent 8114A) were used to perform the measurements. The CCT and green/blue radiant-flux ratio were investigated at extended operating currents from 100 mA to 2 A, using current pulses from 100 ns to 100 μs with duty cycles varying from 1% to 95%. A strong dependence of the CCT on the duty cycle was demonstrated, with the CCT decreasing by more than a factor of three at high duty cycles (shown at a 300 mA pulse operating current) (Fig. 1). Pulse-width variation appears to have a negligible effect on the CCT (Fig. 1). To account for Joule heating, any duty cycle above 1% was treated as an overheated mode. For the 1% duty cycle it was demonstrated that the CCT was tuneable over a factor of three by modulating the input current and pulse width (Fig. 2). It was also demonstrated that the luminous flux can be kept independent of pulse-width variation for a constant pulse current (Fig. 3).
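The link drawn above between duty cycle and Joule heating follows from the mean drive current of a rectangular pulse train. A minimal sketch using the 300 mA case from the abstract (the function name is illustrative, and thermal behaviour in a real device also depends on the package and ambient conditions):

```python
def average_current_ma(peak_ma, duty_cycle):
    """Mean drive current of a rectangular pulse train: the peak
    current scaled by the duty cycle (fraction of time 'on')."""
    return peak_ma * duty_cycle

# 300 mA pulses at 1% vs 95% duty cycle: mean current, and hence
# mean Joule heating, differs by nearly two orders of magnitude.
print(average_current_ma(300, 0.01))  # 3.0 (mA)
print(average_current_ma(300, 0.95))  # 285.0 (mA)
```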

Relevance: 100.00%

Publisher:

Abstract:

In recent years, quantum-dot (QD) semiconductor lasers have attracted significant interest for many practical applications, owing to advantages such as high-power pulse generation enabled by their high gain efficiency. In this work, the pulse shape of an electrically pumped QD laser under high current is analyzed. We find that the slow rise time of the pulsed pump may significantly affect the high-intensity output pulse, resulting in sharp power dropouts and deformation of the pulse profile. We attribute this effect to a dynamical change of the phase-amplitude coupling in the proximity of the excited-state (ES) threshold. Under 30 ns pulsed pumping, the output pulse shape depends strongly on the pump amplitude. At lower currents, corresponding to lasing in the ground state (GS), the pulse shape mimics that of the pump pulse. At higher currents, however, the pulse shape becomes progressively unstable. The instability is greatest close to the secondary threshold at which ES lasing begins. After the slow rise stage, the output power drops sharply, followed by a long power-off stage and large-scale amplitude fluctuations. We explain these observations by the dynamical change of the alpha-factor in the QD laser and reveal the role of the slowly rising pump in the pulse shaping and power dropouts at higher currents. The modeling is in very good agreement with the experimental observations. © 2014 SPIE.

Relevance: 100.00%

Publisher:

Abstract:

We study an InGaAs QD laser operating simultaneously at the ground (GS) and excited (ES) states under 30 ns pulsed pumping, and distinguish three regimes of operation depending on the pump current and the carrier relaxation pathways. In the low-pump range, an increased current leads to an increase in ES intensity and a decrease (or saturation) in GS intensity, as is typical of the cascade-like pathway. In the high-current range, both the GS and ES intensities increase steadily, which demonstrates the dominance of the direct-capture pathway; relaxation oscillations are not pronounced in these ranges. At intermediate currents, the interplay between the two pathways leads to damped large-amplitude relaxation oscillations, with significant deviation of the relaxation-oscillation frequency from its initial value during the pulse.

Relevance: 100.00%

Publisher:

Abstract:

This paper proposes a novel dc-dc converter topology that achieves an ultrahigh step-up ratio while maintaining a high conversion efficiency. It adopts a three-degree-of-freedom approach in the circuit design and demonstrates the flexibility of the proposed converter in combining modularity, electrical isolation, soft switching, and low voltage stress on the switching devices; it is thus considered an improved topology over traditional dc-dc converters. New control strategies, including two-section output-voltage control and cell-idle control, are also developed to improve converter performance. With cell-idle control, the secondary winding inductance of the idle module is bypassed to decrease its power loss. A 400 W dc-dc converter is prototyped and tested to verify the proposed techniques, in addition to a simulation study. The step-up conversion ratio reaches 1:14 with a peak efficiency of 94%, and the proposed techniques can be applied to a wide range of high-voltage, high-power distributed generation and dc power transmission.
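For context on why a 1:14 step-up ratio is demanding, the ideal gain of a classical single-stage boost converter — the textbook reference point, not the topology proposed in the paper — is Vout/Vin = 1/(1 − D). A sketch under that idealized, lossless model:

```python
def boost_gain(duty):
    """Ideal (lossless) boost-converter voltage gain: Vout/Vin = 1/(1 - D)."""
    if not 0 <= duty < 1:
        raise ValueError("duty cycle must be in [0, 1)")
    return 1.0 / (1.0 - duty)

# A 1:14 step-up from a single conventional boost stage would require
# D ~ 0.93, where conduction losses and device stress become severe --
# a practical motivation for multi-cell and isolated topologies.
print(round(boost_gain(13 / 14), 2))  # 14.0
```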

Relevance: 100.00%

Publisher:

Abstract:

Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal; however, memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through memory caches, by sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, in which each memory level acts as the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest.

The most important decision in managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds or even thousands of processes. Second, caching is being employed at new levels of the storage hierarchy, owing to the availability of high-performance flash-based persistent media. This brings four problems. First, as the number of workloads sharing a cache increases, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches which, if not managed carefully, translates into wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. And finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy.

We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer-cache level and contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and for eliminating the space wasted through contention. And finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
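The "policies that have not changed in principle for decades" are classical eviction rules such as least-recently-used (LRU). A minimal LRU sketch for illustration — this shows the baseline policy only, not the deduplication or partitioning mechanisms the dissertation proposes:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: on overflow, evict the
    entry that has gone longest without being accessed."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                    # cache miss
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")          # touch "a", so "b" becomes the LRU entry
cache.put("c", 3)       # capacity exceeded: "b" is evicted
print(cache.get("b"))   # None
print(cache.get("a"))   # 1
```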

Relevance: 100.00%

Publisher:

Abstract:

The importance of Professional Master's programmes in the field of science education lies in the recognition that they provide practising teachers with training spaces for reflection and for the application of knowledge. This work arises in the context of the project "Research and training in teaching science and mathematics: a cutout of academic production in the northeast and overview of formative action in basic education" of the Centre for Education programme, whose main objective was to describe, analyse and evaluate the academic production of the Postgraduate Programmes in Science Teaching at UFRPE, UFRN and UEPB, and to investigate the contribution of continuing education at the stricto sensu level, by graduated teachers, to improving the quality of basic education. We examined a cut of the academic production of PPGECNM/UFRN, taking as reference Natural Sciences dissertations completed between 2005 and 2012 that developed and applied educational products for high-school students. More specifically, we sought to characterize the dissertations analysed according to basic descriptors; to understand whether and how the official documents governing Brazilian education, especially science education, informed the development of the dissertations; and to identify which current trends in science teaching are addressed and used in preparing the dissertation products. The survey was based on documentary analysis, a type of qualitative approach in which the documents are objects of study in themselves. The results revealed that most of the work was developed in public schools, on subjects in physics and chemistry. An analytical reading of the dissertations showed that most of them addressed, in some way, the official documents governing the Brazilian educational system, and that the products are basically teaching units and teaching approaches focused mainly on experimentation and on the history and philosophy of science.

Relevance: 100.00%

Publisher:

Abstract:

Transects of a Remotely Operated Vehicle (ROV) providing sea-bed videos and photographs were carried out during POLARSTERN expedition ANT-XIII/3, focussing on the ecology of benthic assemblages on the Antarctic shelf in the south-eastern Weddell Sea. The ROV system sprint 103 was equipped with two video cameras and one still camera, lights, flash-lights, a compass, parallel lasers providing a scale in the images, a tether-management system (TMS), a winch, and the board units. All cameras used the same main lens and could be tilted. Videos were recorded in Betacam format, and (film) slides were taken at the discretion of the scientific pilot, mainly to improve the identification of organisms depicted in the videos, because the still photographs have a much higher optical resolution than the videos. Species larger than 3 mm are recognisable and countable in the photographs, and larger than 1 cm in the videos. Under optimum conditions the transects were straight; the speed and direction of the ROV were determined by the drift of the ship in the coastal current, since both the ship and the ROV were used as a drifting system; the option to operate the vehicle actively was used only to avoid obstacles and to keep, at best, a distance of only approximately 30 cm to the sea floor. As a consequence, the width of the photographs in the foreground is approximately 50 cm. Deviations from this strategy resulted mainly from difficult ice and weather conditions, but also from high current velocities and local upwelling close to the sea-bed. The sea-bed images provide insights into the general composition of key species, higher systematic groups, and ecological guilds. Within interdisciplinary approaches, distributions of assemblages can be attributed to environmental conditions such as bathymetry, sediment characteristics, water masses, and current regimes. The images also contain valuable information on how benthic species are associated with each other.
Along the transects, small- to intermediate-scale disturbances, e.g. by grounding icebergs, were analysed, and the further impact on the entire benthic system through local succession of recolonisation was studied. This information can be used for models predicting the impact of climate change on benthic life in the Southern Ocean. All these approaches contribute to a better understanding of the functioning of the benthic system and related components of the entire Antarctic marine ecosystem. Despite their scientific value, the imaging methods meet concerns about the protection of sensitive Antarctic benthic systems, since they are non-invasive and also provide valuable material for education and outreach purposes.

Relevance: 100.00%

Publisher:

Abstract:

Transects of a Remotely Operated Vehicle (ROV) providing sea-bed videos and photographs were carried out during POLARSTERN expedition ANT-XVII/3, focussing on the ecology of benthic assemblages on the Antarctic shelf in the south-eastern Weddell Sea. The ROV system sprint 103 was equipped with two video cameras and one still camera, lights, flash-lights, a compass, parallel lasers providing a scale in the images, a tether-management system (TMS), a winch, and the board units. All cameras used the same main lens and could be tilted. Videos were recorded in Betacam format, and (film) slides were taken at the discretion of the scientific pilot, mainly to improve the identification of organisms depicted in the videos, because the still photographs have a much higher optical resolution than the videos. Species larger than 3 mm are recognisable and countable in the photographs, and larger than 1 cm in the videos. Under optimum conditions the transects were straight; the speed and direction of the ROV were determined by the drift of the ship in the coastal current, since both the ship and the ROV were used as a drifting system; the option to operate the vehicle actively was used only to avoid obstacles and to keep, at best, a distance of only approximately 30 cm to the sea floor. As a consequence, the width of the photographs in the foreground is approximately 50 cm. Deviations from this strategy resulted mainly from difficult ice and weather conditions, but also from high current velocities and local upwelling close to the sea-bed. The sea-bed images provide insights into the general composition of key species, higher systematic groups, and ecological guilds. Within interdisciplinary approaches, distributions of assemblages can be attributed to environmental conditions such as bathymetry, sediment characteristics, water masses, and current regimes. The images also contain valuable information on how benthic species are associated with each other.
Along the transects, small- to intermediate-scale disturbances, e.g. by grounding icebergs, were analysed, and the further impact on the entire benthic system through local succession of recolonisation was studied. This information can be used for models predicting the impact of climate change on benthic life in the Southern Ocean. All these approaches contribute to a better understanding of the functioning of the benthic system and related components of the entire Antarctic marine ecosystem. Despite their scientific value, the imaging methods meet concerns about the protection of sensitive Antarctic benthic systems, since they are non-invasive and also provide valuable material for education and outreach purposes.

Relevance: 100.00%

Publisher:

Abstract:

The sediments of 14 box cores and 7 gravity cores, taken mainly directly in front of the Filchner(-Ronne) ice shelf northwest of Berkner Island (Weddell Sea), allowed six sediment types to be distinguished. On the one hand, the retreat of the ice, at first grounded and then afloat, since the last glacial maximum is documented. On the other hand, the sediments give an insight into extensive Holocene sediment deposition and remobilization northwest of Berkner Island. The orthotill was deposited directly by the grounded ice sheet and lacks any marine influence. After the ice shelf became afloat, paratill, partly very well stratified, partly unstratified and non-bioturbated, was deposited beneath the ice shelf. The lack of IRD in the paratill immediately above the orthotill indicates freezing at the bottom of the ice, at least for a short period after the ice became afloat. The orthotill and paratill contain small amounts of fragmented Tertiary diatoms, which allow the conclusion that glacial-marine sediments in the accumulation area of the Ronne ice shelf are eroded and later deposited by ice in the investigation area. The onset of bioturbation, and therefore the change in sedimentation from paratill to bioturbated paratill, is caused by the retreat of the ice shelf to its present position. Isostatic uplift of the sea-bed after the ice age produced shallower water depths with higher current velocities: the fine fraction is eroded and the mean particle size increases. Isostatic uplift may also be responsible for repeated large advances of the floating ice shelf, as shown by an erosional horizon in some cores containing bioturbated paratill. Postglacial sediment thicknesses exceed 3 m. Assuming the ice became afloat 15,000 yr BP, accumulation rates reach nearly 20 cm/1000 years; following the theories about sediment input in front of wide ice shelves, this was not expected. In the shallower water depths of Berkner Bank, the oscillations of the ice shelf are recorded in the sediments.
Sorting and redistribution by high current velocities, from beneath the ice up to the calving line, lead to the deposition of the well to very well sorted sandy till. In front of the calving line the finer fraction settles out. Remobilization is possible through bioturbation and increasing current velocity. Depending on the intensity of mixing of the sandy till with the fine fraction, modified till or muddy till results.

Relevance: 100.00%

Publisher:

Abstract:

This dissertation investigates the effects of internationalization on two capital-structure questions that have not yet been addressed in the Brazilian literature. To this end, two independent analyses were developed. The first examined the effects of internationalization on the deviation from the target capital structure; the second examined its effects on the speed of adjustment (SOA) of the capital structure. Data on Brazilian multinational and domestic companies from 2006 to 2014 were used. The results of the first analysis indicate that internationalization helps reduce the difference between the target and the current debt: as the level of internationalization increases, whether through exports alone or a combination of exports, assets and employees abroad, the gap between the current structure and the target structure decreases. This reduction follows from internationalization as a consequence of the upstream effect of the upstream-downstream hypothesis. Thus, as in Market Timing theory, internationalization can be seen as an opportunity to adjust the capital structure, and with the reduction of the deviation there is also a reduction in the firm's cost of capital. The second analysis indicates that internationalization significantly increases the speed of adjustment, ensuring a faster adjustment of the multinational's capital structure. Exports increase the SOA by 9 to 23%, and when assets and employees are also kept abroad the increase is 8 to 20%. In terms of time, domestic companies take more than three years to close half of their deviation, while multinational companies take on average one and a half years to close the same proportion. The validity of the upstream-downstream hypothesis for the effect of internationalization on SOA was confirmed by comparing the results with those for US companies. Thus, internationalization increases the SOA when companies come from less stable markets, such as Brazil, and has a less significant effect when companies come from more stable markets, because these already have a high speed of adjustment. In addition, the adequacy analysis of the estimators showed that the pooled OLS (Ordinary Least Squares) model predicts the SOA with higher quality than system GMM (Generalized Method of Moments). For future studies it is suggested to analyse the effect of the internationalization event by itself, to validate the hypothesis using samples from different markets, and to use other estimators.
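The half-life figures quoted above follow from the standard partial-adjustment model of capital structure, D[t+1] − D[t] = λ(D* − D[t]), where λ is the SOA. A sketch with illustrative λ values only (these are not the dissertation's estimates):

```python
import math

def half_life_years(soa):
    """Years needed to close half the gap to the target leverage
    under the partial-adjustment model D[t+1]-D[t] = soa*(D* - D[t]):
    the remaining gap shrinks by a factor (1 - soa) each year."""
    return math.log(0.5) / math.log(1.0 - soa)

# Illustrative: an SOA near 20%/year implies roughly a three-year
# half-life, in line with the 'more than three years' figure for
# domestic firms; an SOA near 37%/year implies about 1.5 years.
print(round(half_life_years(0.20), 1))  # ~3.1
print(round(half_life_years(0.37), 1))  # ~1.5
```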