931 results for single stage power conversion
Abstract:
Theory suggests that the dimensions incorporated in the new product screening decision will differ according to the stage of the development process. The outcome of applying different screening dimensions would be quicker, more realistic and more reliable screening decisions. This research project builds on the existing new product development and screening literature by investigating new product screening in international fast moving consumer goods companies. It further builds on the existing literature by measuring decision-making relating to projects in 'real time', as managers' responses refer to projects they are currently working on. The introduction of branded consumer products allows us to evolve scales used in new product research by further developing variables relating to branding, promotion and retailer power. The project uncovers multiple dimensions of new product screening and evaluation within this branded product sector. These dimensions are found to differ in their ability to discriminate between two groups of accepted and rejected projects at each of four stages of the new product development process. This investigation provides the intelligence with which managers can determine the likelihood of project acceptance and rejection at different stages of the development process. It highlights the need for managers to apply stage-specific dimensions in the new product screening decision and advocates the redefinition of new product screening from both an academic and a managerial perspective. The screening decision should not be viewed as a single, early decision in a product development process, but as a series of stage-specific decisions regarding future project potential.
Abstract:
This article analyses the complex process that deracialised and democratised South African football between the early 1970s and 1990s. Based mainly on archival documents, it argues that growing isolation from world sport, exemplified by South Africa's expulsion from the Olympic movement in 1970 and FIFA in 1976, and the reinvigoration of the liberation struggle with the Soweto youth uprising triggered a process of gradual desegregation in the South African professional game. While Pretoria viewed such changes as a potential bulwark against rising black militancy, white football and big business had their own reasons for eventually supporting racial integration, as seen in the founding of the National Soccer League. As negotiations for a new democratic South Africa began in earnest between the African National Congress (ANC) and the National Party (NP) in the latter half of the 1980s, transformations in football and politics paralleled and informed each other. Previously antagonistic football associations began a series of 'unity talks' between 1985 and 1986 that eventually culminated in the formation of a single, non-racial South African Football Association in December 1991, just a few days before the Convention for a Democratic South Africa (CODESA) opened the process of writing a new post-apartheid constitution. Finally, three decades of isolation came to an end as FIFA welcomed South Africa back into world football in 1992 - a powerful example of the seemingly boundless potential of a liberated and united South Africa ahead of the first democratic elections in 1994.
Abstract:
In this research the recovery of a DQPSK signal will be demonstrated using a single Mach-Zehnder Interferometer (MZI). By changing the phase delay in one of the arms it will be shown that different delays produce different output levels. It will also be shown that, with a certain level of phase shift, the DQPSK signal can be converted into four different, equally spaced optical power levels, with each decoded level representing one of the four possible bit permutations. By using this additional phase shift in one of the arms, the number of MZIs required for decoding can be reduced from two to one.
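As an illustrative check of this idea (not the authors' exact analysis), the sketch below models the MZI output as (1 + cos(Δφ + bias))/2 for the four DQPSK differential phases; a bias of arctan(1/3) ≈ 18.4° is one choice of arm phase shift that spreads the four symbols onto equally spaced power levels.

```python
import numpy as np

# Illustrative model only: MZI output ~ (1 + cos(dphi + bias)) / 2,
# evaluated for the four DQPSK differential phases {0, pi/2, pi, 3pi/2}.
dqpsk_phases = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

def mzi_output(dphi, bias):
    """Normalised interferometer output power for a phase difference dphi."""
    return 0.5 * (1.0 + np.cos(dphi + bias))

# A bias of arctan(1/3) (~18.4 deg) is one value that yields equally spaced
# levels (an assumption chosen for illustration, not the thesis value).
bias = np.arctan(1.0 / 3.0)
levels = mzi_output(dqpsk_phases, bias)
print(np.round(np.sort(levels), 3))   # e.g. [0.026 0.342 0.658 0.974] -> equal spacing
```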
Abstract:
To decouple interocular suppression and binocular summation we varied the relative phase of mask and target in a 2IFC contrast-masking paradigm. In Experiment I, dichoptic mask gratings had the same orientation and spatial frequency as the target. For in-phase masking, suppression was strong (a log-log slope of ∼1) and there was weak facilitation at low mask contrasts. Anti-phase masking was weaker (a log-log slope of ∼0.7) and there was no facilitation. A two-stage model of contrast gain control [Meese, T.S., Georgeson, M.A. and Baker, D.H. (2006). Binocular contrast vision at and above threshold. Journal of Vision, 6: 1224-1243] provided a good fit to the in-phase results and fixed its free parameters. It made successful predictions (with no free parameters) for the anti-phase results when (A) interocular suppression was phase-indifferent but (B) binocular summation was phase sensitive. Experiments II and III showed that interocular suppression comprised two components: (i) a tuned effect with an orientation bandwidth of ∼±33° and a spatial frequency bandwidth of >3 octaves, and (ii) an untuned effect that elevated threshold by a factor of between 2 and 4. Operationally, binocular summation was more tightly tuned, having an orientation bandwidth of ∼±8°, and a spatial frequency bandwidth of ∼0.5 octaves. Our results replicate the unusual shapes of the in-phase dichoptic tuning functions reported by Legge [Legge, G.E. (1979). Spatial frequency masking in human vision: Binocular interactions. Journal of the Optical Society of America, 69: 838-847]. These can now be seen as the envelope of the direct effects from interocular suppression and the indirect effect from binocular summation, which contaminates the signal channel with a mask that has been suppressed by the target. © 2007 Elsevier Ltd. All rights reserved.
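For orientation, the following is a minimal schematic of a two-stage binocular gain-control architecture of the general kind fitted in the paper (generic illustrative parameters, not the fitted parameterisation of Meese et al., 2006): interocular suppression acts divisively at the first stage, and binocular summation precedes a second output nonlinearity.

```python
def two_stage(c_left, c_right, m=1.3, s=1.0, w=1.0, p=8.0, q=6.5, z=0.2):
    """Schematic two-stage binocular gain control (illustrative parameters only)."""
    # Stage 1: each eye's signal is divisively suppressed by the other eye.
    left = c_left ** m / (s + c_left + w * c_right)
    right = c_right ** m / (s + c_right + w * c_left)
    # Stage 2: binocular summation followed by an output nonlinearity.
    b = left + right
    return left, right, b ** p / (z + b ** q)

# Dichoptic masking demo: a fixed 10% target in the left eye, mask in the right.
# The left eye's stage-1 response falls as the mask contrast rises, while the
# mask's own response contaminates the binocular sum.
for mask in (0.0, 5.0, 20.0, 40.0):
    left, right, out = two_stage(10.0, mask)
    print(f"mask {mask:4.1f}%  left-eye stage-1 {left:5.2f}  binocular output {out:7.2f}")
```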
Abstract:
We investigate a 40 Gbit/s all-Raman amplified standard single mode fibre (SMF) transmission system with a mid-range amplifier spacing of 80-90 km. The impact of span configuration on double Rayleigh backscattering (DRBS) was studied. Four different span configurations were compared experimentally. A transmission distance of 1666 km in SMF has been achieved without forward error correction (FEC) for the first time. The results demonstrate that the detrimental effects associated with high pump power Raman amplification in standard fibre can be minimised by dispersion map optimisation. © 2003 IEEE.
Abstract:
Single- and multi-core passive and active germanate and tellurite glass fibers represent a new class of fiber host for in-fiber photonics devices and applications in the mid-IR wavelength range, which are in increasing demand. Fiber Bragg grating (FBG) structures have proven to be one of the most functional in-fiber devices and have been mass-produced in silicate fibers by UV-inscription for almost countless laser and sensor applications. However, because of the strong UV absorption in germanate and tellurite fibers, FBG structures cannot be produced in them by UV-inscription. In recent years femtosecond (fs) lasers have been developed for laser machining and microstructuring in a variety of glass fibers and planar substrates. A number of papers have reported the fabrication of FBGs and long-period gratings in optical fibers, and the photosensitivity mechanism, using 800 nm fs lasers. In this paper, we demonstrate for the first time the fabrication of FBG structures in passive and active single- and three-core germanate and tellurite glass fibers using 800 nm fs-inscription and the phase mask technique. With an fs peak power intensity of the order of 10¹¹ W/cm², FBG spectra with 2nd and 3rd order resonances at 1540 nm and 1033 nm in a single-core germanate glass fiber, and 2nd order resonances between ~1694 nm and ~1677 nm with strengths up to 14 dB in all three cores of three-core passive and active tellurite fibers, were observed. Thermal and strain properties of the FBGs made in these mid-IR glass fibers were characterized, showing an average temperature responsivity of ~20 pm/°C and a strain sensitivity of 1.219±0.003 pm/µε.
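As a quick consistency check on the reported resonances (with an assumed effective index, not a value from the paper), the mth-order Bragg condition λ_m = 2·n_eff·Λ/m implies a 3:2 ratio between the 2nd- and 3rd-order wavelengths, which the 1540 nm and 1033 nm peaks approximately satisfy.

```python
# Bragg condition for an m-th order grating: lambda_m = 2 * n_eff * pitch / m.
# Quick consistency check on the reported resonances; n_eff is an assumed
# illustrative value, not taken from the paper.
n_eff = 1.90                            # assumed effective index of the germanate core
lam_2nd, lam_3rd = 1540e-9, 1033e-9     # reported resonances in metres

pitch_2nd = 2 * lam_2nd / (2 * n_eff)   # grating pitch inferred from the 2nd order
pitch_3rd = 3 * lam_3rd / (2 * n_eff)   # grating pitch inferred from the 3rd order
print(f"pitch from 2nd order: {pitch_2nd * 1e9:.0f} nm")
print(f"pitch from 3rd order: {pitch_3rd * 1e9:.0f} nm")
print(f"wavelength ratio {lam_2nd / lam_3rd:.3f} (ideal 3/2 = 1.5)")
```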
Abstract:
Reported are observations and measurements of the inscription of fibre Bragg gratings in two different types of microstructured polymer optical fibre: few-moded and endlessly single mode. Contrary to FBG inscription in silica microstructured fibre, where high energy laser pulses are a prerequisite, we have successfully used a low power CW laser source operating at 325 nm to produce 1 cm long gratings with a reflection peak at 1570 nm. Peak reflectivities of more than 10% have been observed.
Abstract:
How does nearby motion affect the perceived speed of a target region? When a central drifting Gabor patch is surrounded by translating noise, its speed can be misperceived over a fourfold range. Typically, when a surround moves in the same direction, perceived centre speed is reduced; for opposite-direction surrounds it increases. Measuring this illusion for a variety of surround properties reveals that the motion context effects are a saturating function of surround speed (Experiment I) and contrast (Experiment II). Our analyses indicate that the effects are consistent with a subtractive process, rather than with speed being averaged over area. In Experiment III we exploit known properties of the motion system to ask where these surround effects impact. Using 2D plaid stimuli, we find that surround-induced shifts in perceived speed of one plaid component produce substantial shifts in perceived plaid direction. This indicates that surrounds exert their influence early in processing, before pattern motion direction is computed. These findings relate to ongoing investigations of surround suppression for direction discrimination, and are consistent with single-cell findings of direction-tuned suppressive and facilitatory interactions in primary visual cortex (V1).
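A toy numerical contrast between the two candidate accounts (purely illustrative, with made-up gain values) shows why the data pattern points to subtraction: a subtractive surround term shifts perceived centre speed down for same-direction surrounds and up for opposite-direction surrounds, whereas spatial averaging always pulls the estimate towards the surround speed and can never produce an increase for opposite-direction motion.

```python
# Toy comparison of two accounts of the surround effect (illustrative numbers only).
# Signed speeds: positive = same direction as the centre, negative = opposite.
centre_speed = 4.0

def subtractive(centre, surround, gain=0.4):
    """Perceived centre speed if a scaled copy of the surround is subtracted."""
    return centre - gain * surround

def averaging(centre, surround, weight=0.3):
    """Perceived centre speed if speed is pooled (averaged) over the region."""
    return (1 - weight) * centre + weight * surround

for surround_speed in (+4.0, -4.0):     # same- vs opposite-direction surround
    print(f"surround {surround_speed:+.1f}: "
          f"subtractive {subtractive(centre_speed, surround_speed):.2f}, "
          f"averaging {averaging(centre_speed, surround_speed):.2f}")
```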
Abstract:
The following thesis presents results obtained from both numerical simulation and laboratory experimentation (both of which were carried out by the author). When data is propagated along an optical transmission line, timing irregularities such as timing jitter and phase wander can occur. Traditionally these timing problems would have been corrected by converting the optical signal into the electrical domain and then compensating for the timing irregularity before converting the signal back into the optical domain. However, this thesis poses a potential solution to the problem that remains completely in the optical domain, eliminating the need for electronics. This is desirable as not only does optical processing reduce the latency that its electronic counterpart introduces, it also holds the possibility of an increase in overall speed. A scheme was proposed which utilises the principle of wavelength conversion to dynamically convert timing irregularities (timing jitter and phase wander) into a change in wavelength (this occurs on a bit-by-bit level, so timing jitter and phase wander can be compensated for simultaneously). This was achieved by optically sampling a linearly chirped, locally generated clock source (the sampling function was achieved using a nonlinear optical loop mirror). The data, now with each bit or code word having a unique wavelength, is then propagated through a dispersion compensation module. The dispersion compensation effectively re-aligns the data in time and thus the timing irregularities are removed. The principle of operation was tested using computer simulation before being re-tested in a laboratory environment. A second stage was added to the device to create 3R regeneration. The second stage simply converts the timing-suppressed data back onto a single wavelength. By controlling the relative timing displacement between stage one and stage two, the wavelength that is finally produced can be controlled.
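The retiming principle can be summarised with a back-of-the-envelope calculation (illustrative values, not the thesis parameters): sampling a linearly chirped clock maps a timing error Δt to a wavelength offset Δλ = k·Δt, and propagation through accumulated dispersion D·L then adds a delay D·L·Δλ, so choosing D·L = -1/k cancels the original error on a bit-by-bit basis.

```python
# Back-of-the-envelope model of jitter suppression by chirp + dispersion.
# All numbers are illustrative assumptions, not values from the thesis.
chirp_rate = 0.2e-9 / 1e-12      # k: wavelength shift per unit timing error
                                 # (here 0.2 nm of shift per 1 ps of error)
D_times_L = -1.0 / chirp_rate    # accumulated dispersion chosen to cancel k

for timing_error in (-2e-12, 0.0, 1e-12, 3e-12):        # input jitter in seconds
    delta_lambda = chirp_rate * timing_error            # stage 1: error -> wavelength offset
    residual = timing_error + D_times_L * delta_lambda  # stage 2: dispersive delay re-aligns
    print(f"{timing_error * 1e12:+.1f} ps in -> {residual * 1e12:+.2f} ps out")
```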
Abstract:
Two-way power flow is nothing new and has been in practical use with line commutated converters for at least 50 years. With these types of converters, reversal of power flow can be achieved by increasing the firing angle of the devices beyond 90 degrees, thus producing a negative DC voltage. Line commutated converters have several known disadvantages: the direct current cannot be reversed, the power factor decreases as the firing angle increases, and the harmonics in the line current are high. To tackle these problems a forced commutated converter can be used: the power factor can be unity and the harmonics can be reduced. Many researchers have used PWM with different control techniques to serve these purposes. In each converter arm, they used a forced commutated device with an antiparallel diode. Under the rectification mode of operation the current path is predominantly through the diodes, and under inverter operation the current flows predominantly through the forced commutated devices. Although their results were encouraging and gave a unity power factor with nearly sinusoidal current, the main disadvantage was the difficulty in controlling the power factor when the system needs to operate at lagging or leading power factor. In this work, a new idea is introduced by connecting two GTOs in antiparallel instead of a diode and a GTO. A single phase system using two GTO converters connected in series was built. One converter operates as a rectifier and the other as an inverter. In the inversion mode, in each inverter arm one GTO is operated as a diode simply by keeping it always on, and the other antiparallel GTO is operated as a normal device to carry the inverter current. In the rectification mode, in each arm one GTO is always off and the other GTO is operated as a controlled device. The main advantage is that the system can be operated at lagging or leading power factor.
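The power-reversal mechanism mentioned at the start follows directly from the ideal mean DC voltage of a fully controlled line commutated bridge, Vdc = Vdo·cos(α): once the firing angle α exceeds 90° the mean voltage goes negative while the current direction is unchanged, so power flow reverses. A minimal sketch (ideal bridge, commutation overlap and device drops neglected):

```python
import math

# Ideal fully controlled line commutated bridge: Vdc = Vdo * cos(alpha).
# Beyond alpha = 90 deg the mean DC voltage is negative (inverter operation)
# while the DC current keeps its direction, so power flow reverses.
V_do = 100.0   # maximum mean DC voltage in volts (assumed value for illustration)

for alpha_deg in (0, 30, 60, 90, 120, 150):
    v_dc = V_do * math.cos(math.radians(alpha_deg))
    mode = "rectifying" if v_dc > 0 else ("inverting" if v_dc < 0 else "zero power")
    print(f"alpha = {alpha_deg:3d} deg -> Vdc = {v_dc:+7.1f} V ({mode})")
```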
Abstract:
In this thesis various mathematical methods of studying the transient and dynamic stability of practical power systems are presented. Certain long established methods are reviewed and refinements of some are proposed. New methods are presented which remove some of the difficulties encountered in applying the powerful stability theories based on the concepts of Liapunov. Chapter 1 is concerned with numerical solution of the transient stability problem. Following a review and comparison of synchronous machine models, the superiority of a particular model from the point of view of combined computing time and accuracy is demonstrated. A digital computer program incorporating all the synchronous machine models discussed, and an induction machine model, is described and results of a practical multi-machine transient stability study are presented. Chapter 2 reviews certain concepts and theorems due to Liapunov. In Chapter 3 transient stability regions of single, two and multi-machine systems are investigated through the use of energy type Liapunov functions. The treatment removes several mathematical difficulties encountered in earlier applications of the method. In Chapter 4 a simple criterion for the steady state stability of a multi-machine system is developed and compared with established criteria and a state space approach. In Chapters 5, 6 and 7 dynamic stability and small signal dynamic response are studied through a state space representation of the system. In Chapter 5 the state space equations are derived for single machine systems. An example is provided in which the dynamic stability limit curves are plotted for various synchronous machine representations. In Chapter 6 the state space approach is extended to multi-machine systems. To draw conclusions concerning dynamic stability or dynamic response the system eigenvalues must be properly interpreted, and a discussion concerning correct interpretation is included. Chapter 7 presents a discussion of the optimisation of power system small signal performance through the use of Liapunov functions.
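For context, the "energy type" Liapunov function referred to in Chapter 3 has, for the classical single-machine infinite-bus model, the familiar kinetic-plus-potential form. The sketch below evaluates it along a damped swing-equation trajectory (classical model with illustrative per-unit parameters, not the thesis's machine data):

```python
import math

# Classical single-machine infinite-bus model (illustrative per-unit values).
M, D = 0.1, 0.02        # inertia and damping coefficients
P_m, P_max = 0.8, 1.2   # mechanical input and maximum electrical power
delta_s = math.asin(P_m / P_max)   # stable equilibrium angle

def energy(delta, omega):
    """Energy-type Liapunov function: kinetic plus potential energy."""
    kinetic = 0.5 * M * omega ** 2
    potential = -P_m * (delta - delta_s) - P_max * (math.cos(delta) - math.cos(delta_s))
    return kinetic + potential

# Integrate the swing equation from a disturbed state; V decays towards zero.
delta, omega, dt = delta_s + 0.8, 0.0, 0.001
for step in range(5001):
    if step % 1000 == 0:
        print(f"t = {step * dt:4.1f} s  V = {energy(delta, omega):.4f}")
    d_omega = (P_m - P_max * math.sin(delta) - D * omega) / M
    delta += omega * dt
    omega += d_omega * dt
```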
Abstract:
Various micro-radial compressor configurations were investigated using one-dimensional meanline and computational fluid dynamics (CFD) techniques for use in a micro gas turbine (MGT) domestic combined heat and power (DCHP) application. Blade backsweep, shaft speed, and blade height were varied at a constant pressure ratio. Shaft speeds were limited to 220 000 r/min to enable the use of a turbocharger bearing platform. Off-design compressor performance was established and used to determine the MGT performance envelope; this in turn was used to assess potential cost and environmental savings in a heat-led DCHP operating scenario within the target market of a detached family home. A low target stage pressure ratio provided an opportunity to reduce diffusion within the impeller. Critically for DCHP, this produced very regular flow, which improved impeller performance over a wider operating envelope. The best performing impeller was a low-speed (170 000 r/min), low-backsweep (15°) configuration producing 71.76 per cent stage efficiency at a pressure ratio of 2.20. This produced an MGT design point system efficiency of 14.85 per cent at 993 W, matching prime movers in the latest commercial DCHP units. Cost and CO₂ savings were 10.7 per cent and 6.3 per cent, respectively, for annual power demands of 17.4 MWh(th) and 6.1 MWh(e) compared to a standard condensing boiler (with grid) installation. The maximum cost saving (on design point) was 14.2 per cent for annual power demands of 22.62 MWh(th) and 6.1 MWh(e), corresponding to an 8.1 per cent CO₂ saving. When sizing the unit, maximum savings were found with larger heat demands. Once sized, maximum savings could be made by encouraging more electricity export, either by reducing household electricity consumption or by increasing machine efficiency.
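To put the quoted figures in context, a one-line meanline-style estimate (standard compressor relations with assumed inlet conditions, not values from the study) links the 2.20 pressure ratio and 71.76 per cent stage efficiency to the impeller exit temperature and specific work:

```python
# Standard compressor relations; inlet conditions are assumed for illustration.
gamma, cp = 1.4, 1005.0      # air properties (ratio of specific heats, J/kg.K)
T1 = 288.15                  # assumed inlet temperature (K)
pr = 2.20                    # reported stage pressure ratio
eta = 0.7176                 # reported stage (isentropic) efficiency

T2_ideal = T1 * pr ** ((gamma - 1) / gamma)   # isentropic exit temperature
T2_actual = T1 + (T2_ideal - T1) / eta        # actual exit temperature
work_per_kg = cp * (T2_actual - T1)           # specific compression work (J/kg)
print(f"ideal exit T : {T2_ideal:.1f} K")
print(f"actual exit T: {T2_actual:.1f} K")
print(f"specific work: {work_per_kg / 1000:.1f} kJ/kg")
```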
Abstract:
This thesis presents a comparison of integrated biomass to electricity systems on the basis of their efficiency, capital cost and electricity production cost. Four systems are evaluated: combustion to raise steam for a steam cycle; atmospheric gasification to produce fuel gas for a dual fuel diesel engine; pressurised gasification to produce fuel gas for a gas turbine combined cycle; and fast pyrolysis to produce pyrolysis liquid for a dual fuel diesel engine. The feedstock in all cases is wood in chipped form. This is the first time that all three thermochemical conversion technologies have been compared in a single, consistent evaluation. The systems have been modelled from the transportation of the wood chips through pretreatment, thermochemical conversion and electricity generation. Equipment requirements during pretreatment are comprehensively modelled and include reception, storage, drying and comminution. The de-coupling of the fast pyrolysis system is examined, where the fast pyrolysis and engine stages are carried out at separate locations. Relationships are also included to allow learning effects to be studied. The modelling is achieved through the use of multiple spreadsheets, where each spreadsheet models part of the system in isolation and the spreadsheets are combined to give the cost and performance of a whole system. The use of the models has shown that on current costs the combustion system remains the most cost-effective generating route, despite its low efficiency. The novel systems only produce lower cost electricity if learning effects are included, implying that some sort of subsidy will be required during the early development of the gasification and fast pyrolysis systems to make them competitive with the established combustion approach. The use of decoupling in fast pyrolysis systems is a useful way of reducing system costs if electricity is required at several sites, because a single pyrolysis site can be used to supply all the generators, offering economies of scale at the conversion step. Overall, costs are much higher than conventional electricity generating costs for fossil fuels, due mainly to the small scales used. Biomass to electricity opportunities remain restricted to niche markets where electricity prices are high or feed costs are very low. It is highly recommended that further work examines possibilities for combined heat and power, which is suitable for small scale systems and could increase revenues and thus reduce electricity prices.
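The economy-of-scale argument for decoupling can be illustrated with the usual capacity-exponent capital-cost rule (the exponent and base figures below are textbook-style assumptions, not data from the thesis): one pyrolysis plant sized for N engine sites costs considerably less than N plants each sized for one site.

```python
# Illustrative capital-cost scaling: cost = base_cost * (capacity / base_capacity) ** n.
# The exponent and base figures are generic assumptions, not thesis data.
def capital_cost(capacity_t_per_h, base_cost=5.0e6, base_capacity=2.0, exponent=0.65):
    """Capital cost (arbitrary currency) for a pyrolysis plant of a given throughput."""
    return base_cost * (capacity_t_per_h / base_capacity) ** exponent

site_demand = 2.0   # wood throughput needed per engine site (t/h, assumed)
for n_sites in (1, 2, 4, 8):
    central = capital_cost(n_sites * site_demand)        # one central pyrolysis plant
    distributed = n_sites * capital_cost(site_demand)    # one plant per engine site
    print(f"{n_sites} site(s): central {central / 1e6:5.1f} M "
          f"vs distributed {distributed / 1e6:5.1f} M")
```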
Abstract:
There is considerable concern over the increasing effect of fossil fuel usage on the environment, and this concern has resulted in an effort to find alternative, environmentally friendly energy sources. Biomass is an available alternative resource which may be converted by flash pyrolysis to produce a crude liquid product that can be used directly as a substitute for conventional fossil fuels or upgraded to a higher quality fuel. Both the crude and upgraded products may be utilised for power generation. A computer program, BLUNT, has been developed to model the flash pyrolysis of biomass with subsequent upgrading, refining or power production. The program assesses and compares the economic and technical opportunities for biomass thermochemical conversion on the same basis. BLUNT works by building up a selected processing route from a number of process steps through which the material passes sequentially. Each process step has a step model that calculates the mass and energy balances, the utilities usage and the capital cost for that step of the process. The results of the step models are combined to determine the performance of the whole conversion route. Sample results from the modelling are presented in this thesis. Due to the large number of possible combinations of feeds, conversion processes, products and sensitivity analyses, a complete set of results is impractical to present in a single publication. Variation of the production costs for the available products has been illustrated based on the cost of a wood feedstock. The effect of selected macroeconomic factors on the production costs of bio-diesel and gasoline is also given.
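The step-model idea described above can be sketched in a few lines (a structural illustration only; step names, balances and costs are placeholders, not the relationships coded in BLUNT): each step takes the stream from the previous one, returns its own mass/energy balance and cost contributions, and the route totals are accumulated by chaining the steps.

```python
# Structural sketch of a sequential "step model" route; all numbers are placeholders.
from dataclasses import dataclass

@dataclass
class Stream:
    mass_t_per_h: float   # material flow entering the step
    energy_mw: float      # energy content of the flow

def drying_step(stream):
    """Placeholder step: removes moisture, uses some energy, has a capital cost."""
    return Stream(stream.mass_t_per_h * 0.75, stream.energy_mw * 0.97), \
           {"capital": 0.8e6, "utilities_mw": 0.3}

def pyrolysis_step(stream):
    """Placeholder step: converts dry feed to pyrolysis liquid."""
    return Stream(stream.mass_t_per_h * 0.70, stream.energy_mw * 0.70), \
           {"capital": 3.0e6, "utilities_mw": 0.5}

def engine_step(stream):
    """Placeholder step: dual fuel engine generating electricity."""
    return Stream(0.0, stream.energy_mw * 0.35), \
           {"capital": 2.0e6, "utilities_mw": 0.1}

route = [drying_step, pyrolysis_step, engine_step]
stream, totals = Stream(10.0, 18.0), {"capital": 0.0, "utilities_mw": 0.0}
for step in route:                       # chain the steps, accumulating route totals
    stream, costs = step(stream)
    totals["capital"] += costs["capital"]
    totals["utilities_mw"] += costs["utilities_mw"]
print(f"net electrical output: {stream.energy_mw:.1f} MW, "
      f"total capital: {totals['capital'] / 1e6:.1f} M")
```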
Abstract:
Since the oil crisis of 1973 considerable interest has been shown in the production of liquid fuels from alternative sources, and processes utilizing coal as the feedstock have received particular attention. These processes can be divided into direct liquefaction, indirect liquefaction and pyrolysis. This thesis describes the modelling of indirect coal liquefaction processes for the purpose of performing technical and economic assessment of the production of liquid fuels from coal and lignite, using a variety of gasification and synthesis gas liquefaction technologies. The technologies were modelled on a 'step model' basis, where a step is defined as a combination of individual unit operations which together perform a significant function on the process streams, such as a methanol synthesis step or a gasification and physical gas cleaning step. Sample results of the modelling, covering a wide range of gasifiers, liquid synthesis processes and products, are presented in this thesis. Due to the large number of combinations of gasifiers, liquid synthesis processes, products and economic sensitivity cases, a complete set of results is impractical to present in a single publication. The main results show that methanol is the cheapest fuel to produce from coal, followed by fuel alcohol, diesel from the Shell Middle Distillate Synthesis process, gasoline from the Mobil Methanol to Gasoline (MTG) process, diesel from the Mobil Methanol Olefins Gasoline Diesel (MOGD) process and finally gasoline from the same process. Some variation in production costs of all the products was shown depending on the type of gasifier chosen and the feedstock.