958 results for master-oscillator power amplifier (MOPA)


Relevance: 20.00%

Abstract:

The energy, position, and momentum eigenstates of a para-Bose oscillator system were considered in paper I. Here we consider the Bargmann, or analytic function, description of the para-Bose system. This brings in, in a natural way, the coherent states |z;α⟩ defined as the eigenstates of the annihilation operator a. The transformation functions relating this description to the energy, position, and momentum eigenstates are explicitly obtained. Possible resolutions of the identity operator using coherent states are examined. A particular resolution contains two integrals, one containing the diagonal basis |z;α⟩⟨−z;α|. We briefly consider the normal and antinormal ordering of the operators and their diagonal and discrete diagonal coherent-state approximations. The problem of constructing states with a minimum value of the product of the position and momentum uncertainties, and the possible α dependence of this minimum value, is considered. Journal of Mathematical Physics is copyrighted by The American Institute of Physics.
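For orientation, a minimal LaTeX sketch of the ordinary single-boson coherent-state construction that the para-Bose description generalises; the para-Bose case involves an α-dependent weight in the measure, which is not reproduced here.

```latex
% Ordinary boson coherent states (illustrative only; the para-Bose
% resolution of identity uses a modified, alpha-dependent measure).
\begin{align}
  a\,|z\rangle &= z\,|z\rangle, \qquad
  |z\rangle = e^{-|z|^{2}/2}\sum_{n=0}^{\infty}\frac{z^{n}}{\sqrt{n!}}\,|n\rangle,\\
  1 &= \frac{1}{\pi}\int d^{2}z\;|z\rangle\langle z| .
\end{align}
```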

Relevance: 20.00%

Abstract:

Standardised time series of fishery catch rates require the collation of fishing power data on vessel characteristics. Linear mixed models were used to quantify fishing power trends and to study the effect of missing data encountered when relying on commercial logbooks. For this, Australian eastern king prawn (Melicertus plebejus) harvests were analysed with historical (from vessel surveys) and current (from commercial logbooks) vessel data. Between 1989 and 2010, fishing power increased by up to 76%. To date, forward-filling and, alternatively, omitting records with missing vessel information from commercial logbooks produce broadly similar fishing power increases and standardised catch rates, owing to the strong influence of years with complete vessel data (16 out of 23 years of data). However, if gaps in vessel information were not random and skippers of the most efficient vessels were the most diligent at filling in logbooks, considerable errors would be introduced. Also, the buffering effect of complete years would be short-lived as years with missing data accumulate. Given ongoing changes in the fleet profile, with high-catching vessels fishing proportionately more of the fleet's effort, compliance with logbook completion, or alternatively ongoing vessel gear surveys, is required to generate accurate estimates of fishing power and standardised catch rates.
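As an illustration of the forward-filling approach described above, a minimal sketch with hypothetical column names (not the authors' actual pipeline) that carries each vessel's last reported characteristics forward over logbook records with missing entries:

```python
import pandas as pd

# Hypothetical logbook extract: one row per vessel-year, with gaps in
# the self-reported vessel characteristics.
logbook = pd.DataFrame({
    "vessel_id": ["V1", "V1", "V1", "V2", "V2"],
    "year":      [2005, 2006, 2007, 2006, 2007],
    "engine_kw": [250.0, None, None, 300.0, None],
    "gps":       [1, None, 1, None, 1],
})

# Forward-fill each vessel's characteristics through later records with
# missing values (the alternative discussed is dropping incomplete rows).
filled = logbook.sort_values(["vessel_id", "year"]).copy()
filled[["engine_kw", "gps"]] = (
    filled.groupby("vessel_id")[["engine_kw", "gps"]].ffill()
)

print(filled)
```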

Relevance: 20.00%

Abstract:

The Queensland east coast trawl fishery is by far the largest prawn and scallop otter trawl fleet in Australia in terms of number of vessels, with 504 vessels licensed to fish for species including tiger prawns, endeavour prawns, red spot king prawns, eastern king prawns and saucer scallops by the end of 2004. The vessel fleet has gradually upgraded characteristics such as engine power and use of propeller nozzles, quad nets, global positioning systems (GPS) and computer mapping software. These changes, together with the ever-changing profile of the fleet, were analysed by linear mixed models to quantify annual efficiency increases of an average vessel at catching prawns or scallops. The analyses included vessel characteristics (treated as fixed effects) and vessel identifier codes (treated as random effects). For the period from 1989 to 2004 the models estimated overall fishing power increases of 6% in the northern tiger, 6% in the northern endeavour, 12% in the southern tiger, 18% in the red spot king, 46% in the eastern king prawn and 15% in the saucer scallop sector. The results illustrate the importance of ongoing monitoring of vessel and fleet characteristics and the need to use this information to standardise catch rate indices used in stock assessment and management.
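A minimal sketch of the kind of model described above (vessel characteristics as fixed effects, vessel identifier as a random effect), using hypothetical column and file names rather than the study's actual data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical catch records: log(catch) per vessel-day with gear
# characteristics and an anonymised vessel identifier (file name is illustrative).
df = pd.read_csv("logbook_records.csv")

# Year and vessel characteristics enter as fixed effects;
# vessel identity enters as a random intercept.
model = smf.mixedlm(
    "log_catch ~ C(year) + engine_kw + gps + quad_net",
    data=df,
    groups=df["vessel_id"],
)
result = model.fit()
print(result.summary())
```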

Relevance: 20.00%

Abstract:

For complex disease genetics research in human populations, remarkable progress has been made in recent times with the publication of a number of genome-wide association scans (GWAS) and subsequent statistical replications. These studies have identified new genes and pathways implicated in disease, many of which were not known before. Given these early successes, more GWAS are being conducted and planned, both for disease and for quantitative phenotypes. Many researchers and clinicians have DNA samples available on collections of families, including both cases and controls. Twin registries around the world have facilitated the collection of large numbers of families, with DNA and multiple quantitative phenotypes collected on twin pairs and their relatives. In the design of a new GWAS with a fixed budget for the number of chips, the question arises whether to include or exclude related individuals. It is commonly believed to be preferable to use unrelated individuals in the first stage of a GWAS because relatives are 'over-matched' for genotypes. In this study, we quantify that, for a GWAS of a quantitative phenotype, surprisingly little power is lost when using relatives rather than a sample of unrelated individuals. The advantages of using relatives are manifold, including the ability to perform more quality control, the choice to perform within-family tests of association that are robust to population stratification, and the ability to perform joint linkage and association analysis. Therefore, the advantages of using relatives in GWAS for quantitative traits may well outweigh the small disadvantage in terms of statistical power.
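As a rough illustration of the kind of power comparison involved (not the paper's actual calculation), the non-centrality parameter of a 1-df association test scales with sample size times the variance explained by the variant. The sketch below, with hypothetical numbers, computes power at a genome-wide significance threshold:

```python
from scipy.stats import chi2, ncx2

def gwas_power(n_effective, var_explained, alpha=5e-8):
    """Power of a 1-df test: NCP ~ effective sample size * variance explained."""
    ncp = n_effective * var_explained
    threshold = chi2.ppf(1.0 - alpha, df=1)
    return 1.0 - ncx2.cdf(threshold, df=1, nc=ncp)

# Hypothetical comparison: a fixed chip budget spent on unrelated individuals
# versus relatives with a modestly reduced effective sample size.
print(gwas_power(5000, 0.005))   # unrelated individuals
print(gwas_power(4800, 0.005))   # relatives, small loss of effective N
```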

Relevance: 20.00%

Abstract:

Contention-based multiple access is a crucial component of many wireless systems. Multiple-packet reception (MPR) schemes that use interference cancellation techniques to receive and decode multiple packets that arrive simultaneously are known to be very efficient. However, the MPR schemes proposed in the literature require complex receivers capable of performing advanced signal processing over significant amounts of soft undecodable information received over multiple contention steps. In this paper, we show that local channel knowledge and elementary received signal strength measurements, which are available to many receivers today, can actively facilitate multipacket reception and even simplify the interference-canceling receiver's design. We introduce two variants of a simple algorithm called Dual Power Multiple Access (DPMA) that use local channel knowledge to limit the receive power levels to two values that facilitate successive interference cancellation. The resulting receiver structure is markedly simpler, as it needs to process only the immediate received signal without having to store and process signals received previously. Remarkably, using a set of three feedback messages, the first variant, DPMA-Lite, achieves a stable throughput of 0.6865 packets per slot. Using four possible feedback messages, the second variant, Turbo-DPMA, achieves a stable throughput of 0.793 packets per slot, which is better than all contention algorithms known to date.
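A minimal sketch of the channel-inversion idea behind the two receive levels (illustrative only; the feedback protocol and the actual DPMA level values are assumptions, not taken from the paper). A node with known channel power gain picks its transmit power so the packet arrives at one of two target levels, chosen so the stronger packet can be decoded and cancelled before the weaker one:

```python
import random

# Hypothetical target receive powers (arbitrary units). P_HIGH is chosen
# large enough relative to P_LOW that a P_HIGH packet can be decoded while
# treating a P_LOW packet as interference, then cancelled (SIC).
P_LOW, P_HIGH = 1.0, 4.0

def transmit_power(channel_gain, level):
    """Invert the locally known channel so the received power hits the target level."""
    target = P_HIGH if level == "high" else P_LOW
    return target / channel_gain

# Example: two contending nodes pick a level at random and invert their channels.
for gain in (0.3, 0.9):
    level = random.choice(["low", "high"])
    print(level, transmit_power(gain, level))
```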

Relevance: 20.00%

Abstract:

In developing countries, demand for electric energy is growing rapidly, so the addition of new generating units becomes necessary. In deregulated power systems, private generating stations are encouraged to add new generation. The appropriate location for a new generator can be found by running repeated power flows and carrying out system studies such as analysing the voltage profile, voltage stability and losses. In this paper a new methodology is proposed which mainly takes the existing network topology into account. A T-index is introduced, which considers the electrical distances between generator and load nodes. This index is used for ranking significant new generation expansion locations and also indicates the amount of permissible generation that can be installed at these new locations. This concept facilitates medium- and long-term planning of power generation expansion within the available transmission corridors. Studies carried out on a sample 7-bus system, an EHV equivalent 24-bus system and the IEEE 39-bus system are presented for illustration purposes.
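The paper's T-index definition is not reproduced here; as a sketch of the kind of quantity involved, one common way to compute an electrical distance between buses uses the bus impedance matrix (inverse of the admittance matrix). The example data below are hypothetical:

```python
import numpy as np

# One common definition of electrical distance between buses i and j from
# the bus impedance matrix Zbus:  d(i, j) = |Z_ii + Z_jj - 2 * Z_ij|.
def electrical_distance(zbus: np.ndarray, i: int, j: int) -> float:
    return abs(zbus[i, i] + zbus[j, j] - 2.0 * zbus[i, j])

# Hypothetical 3-bus admittance matrix (real-valued for simplicity, per unit).
ybus = np.array([[10.5, -5.0, -5.0],
                 [-5.0,  9.4, -4.0],
                 [-5.0, -4.0,  9.6]])
zbus = np.linalg.inv(ybus)
print(electrical_distance(zbus, 0, 2))
```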

Relevance: 20.00%

Abstract:

In this paper a modified Heffron-Phillips (K-constant) model is derived for the design of power system stabilizers. Knowledge of external system parameters, such as the equivalent infinite bus voltage and external impedances or their estimated values, is required for designing a conventional power system stabilizer. In the proposed method, information available at the secondary bus of the step-up transformer is used to set up a modified Heffron-Phillips (ModHP) model. The PSS design based on this model utilizes signals available within the generating station. The efficacy of the proposed design technique and the performance of the stabilizer have been evaluated over a range of operating and system conditions. The simulation results show that the performance of the proposed stabilizer is comparable to that obtained by conventional design, but without the need for estimation and computation of external system parameters. The proposed design is thus well suited for practical application to power system stabilization, including possibly multi-machine applications where accurate system information is not readily available.
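For orientation, a sketch of the classical single-machine Heffron-Phillips linearisation that the paper modifies (standard K1 to K6 form; the modified constants referred to the transformer secondary bus are not reproduced here):

```latex
% Classical Heffron-Phillips (K-constant) linearised model of a single
% machine connected to an infinite bus (standard form, for orientation only).
\begin{align}
  \Delta T_e  &= K_1\,\Delta\delta + K_2\,\Delta E_q',\\
  \Delta E_q' &= \frac{K_3}{1 + s\,K_3 T_{do}'}\left(\Delta E_{fd} - K_4\,\Delta\delta\right),\\
  \Delta V_t  &= K_5\,\Delta\delta + K_6\,\Delta E_q'.
\end{align}
```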

Relevance: 20.00%

Abstract:

In this paper, a new approach to enhancing transmission system distance relay co-ordination is presented. The approach depends on the apparent impedance loci seen by the distance relay during all possible disturbances. The impedance loci seen at the relay location are obtained from extensive transient stability studies. Support vector machines (SVMs), a class of pattern classifiers, are used to discriminate the zone settings (zone-1, zone-2 and zone-3) using the signals available to the relay. Studies on a sample 9-bus system are presented to illustrate the proposed scheme.
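A minimal sketch of the classification step described (hypothetical feature layout, not the paper's actual feature set): train an SVM on apparent-impedance samples labelled with the zone they belong to, then classify new points seen by the relay.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training data: each row is (R, X) of the apparent impedance
# seen by the relay during a simulated disturbance; labels are zone numbers.
R_X = np.array([[1.0, 2.0], [2.5, 5.0], [4.0, 8.5], [0.5, 1.2], [3.2, 6.9]])
zone = np.array([1, 2, 3, 1, 3])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(R_X, zone)

# Classify a newly observed impedance point.
print(clf.predict([[2.2, 4.6]]))
```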

Relevance: 20.00%

Abstract:

There are several reasons for increasing the use of forest biomass for energy in Finland. Apart from being a CO2-neutral energy source, forest biomass is a domestic resource distributed throughout the country. Using forest biomass in the form of logging residues decreases Finland's dependence on energy imports and increases both income and employment. Wood chips are mainly made from logging residues, which constitute 64% of the raw material. Large-scale use of forest biomass also requires attention to potential negative aspects. Forest bioenergy is used extensively, but its impact on forest soil nutrition and carbon balance has not been studied much, nor have there been many studies on the heavy metal or chlorine content of logging residues. The goal of this study was to examine the content of carbon, macronutrients, heavy metals and other substances harmful to combustion in Scots pine and Norway spruce wood chips, and to estimate the effect of harvesting logging residues on the forest's carbon and nutrient balance. Another goal was to examine the energy content of the clear-cut remains. The wood chips for this study were gathered from pine- and spruce-dominated clear-cut sites in southern Finland, in the coastal forests between Hankoo and Siuntio. The number of sample locations was 29, the average area was 3.15 ha, and the average timber volume was 212.6 m3 ha-1. The average logged timber volume was 70 m3 ha-1 for Scots pine, 124 m3 ha-1 for Norway spruce and 18.5 m3 ha-1 for deciduous timber (birch and alder). The proportion of spruce in the logging residues and the stand volume determined how much of the nutrients were removed from the forest ecosystem when harvesting logging residues. In this study it was noted that the nutrient content of the logging residues clearly increased as the percentage of spruce in the timber volume increased. The S, K, Na and Cl contents in the logging residues also increased with an increasing percentage of spruce, probably because spruce is an effective collector of atmospheric dry deposition. The amounts of nutrients lost when harvesting logging residues were smaller than those referred to in the literature. Within a rotation period (100 years), the forest soil receives substantially more nutrients from atmospheric deposition, litter fall and weathering than is lost through harvesting of logging residues after a clear cut. Harvesting logging residues causes only a relatively modest increase in the quantity of carbon removed from the forest compared with traditional forestry. Because the clear-cut remains in this study showed a high chlorine content, there is a risk of corrosion when incinerating the logging residues in power plants, especially in coastal areas/forests. The risk of sulphur-related corrosion is probably rather small, because S concentrations in wood chips are relatively low. The clear-cut remains showed rather high heavy metal contents. If the heavy metal contents found in this study are representative of clear-cut remains in coastal forests generally, there may be reason to exercise some caution when using the ash for forest fertilisation.

Relevance: 20.00%

Abstract:

This thesis examines the law and policy concerning renewable energy electricity generation in Palestine, Jordan, and Abu Dhabi. The thesis gives greater attention to the promotion of solar power owing to its abundance and viability. Energy security appears to profoundly underpin the utilisation of renewable electricity, and the motivation of climate change mitigation also plays a role in the promotion of renewable energy in these jurisdictions. However, current policies and regulations are not fully able to promote renewables in the power sector. The thesis submits that reforms of law and policy are necessary to enhance the achievement of environmental and energy goals.

Relevance: 20.00%

Abstract:

Decentralized power is characterised by the generation of power closer to demand centres, focusing mainly on meeting local energy needs. A decentralized power system can function either in the presence of the grid, where it can feed surplus power into the grid, or as an independent, stand-alone isolated system exclusively meeting the local demands of remote locations. Decentralized power is also classified on the basis of the type of energy resource used: non-renewable or renewable. These classifications, along with a plethora of technological alternatives, have made the prioritization of decentralized power options quite complicated for decision making. There is abundant literature discussing the various approaches that have been used to support decision making in such complex situations. We envisage that summarizing this literature in a review paper would greatly help policy and decision makers and researchers in arriving at effective solutions. With this need in mind, 102 articles were reviewed, and the features of several technological alternatives available for decentralized power are presented, together with studies on the modeling and analysis of the economic, environmental and technological feasibility of both grid-connected (GC) and stand-alone (SA) systems as decentralized power options. (C) 2009 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

Quantum Ohmic residual resistance of a thin disordered wire, approximated as a one-dimensional multichannel conductor, is known to scale exponentially with length. This nonadditivity is shown to imply (i) a low-frequency noise-power spectrum proportional to −ln(Ω)/Ω, and (ii) a dispersive capacitive impedance proportional to tanh(√(iΩ))/√(iΩ). A deep connection to quantum Brownian motion with linear dynamical frictional coupling to a harmonic-oscillator bath is pointed out and interpreted in physical terms.

Relevance: 20.00%

Abstract:

There exist various proposals for building a functional, fault-tolerant large-scale quantum computer. Topological quantum computation is a more exotic proposal, which makes use of the properties of quasiparticles that manifest only in certain two-dimensional systems. These so-called anyons exhibit topological degrees of freedom which, in principle, can be used to execute quantum computation with intrinsic fault tolerance. This feature is the main incentive to study topological quantum computation. The objective of this thesis is to provide an accessible introduction to the theory. The thesis considers the theory of anyons arising in two-dimensional quantum mechanical systems described by gauge theories based on so-called quantum double symmetries. The quasiparticles are shown to exhibit interactions and carry quantum numbers which are both of a topological nature. In particular, it is found that the addition of the quantum numbers is not unique: the fusion of the quasiparticles is described by a non-trivial fusion algebra. It is discussed how this property can be used to encode quantum information in a manner that is intrinsically protected from decoherence, and how one could, in principle, perform quantum computation by braiding the quasiparticles. As an example of the general discussion, the particle spectrum and the fusion algebra of an anyon model based on the gauge group S_3 are explicitly derived. The fusion algebra is found to branch into multiple proper subalgebras, and the simplest of them is chosen as a model for an illustrative demonstration. The different steps of a topological quantum computation are outlined and the computational power of the model is assessed. It turns out that the chosen model is not universal for quantum computation. However, because the objective was a demonstration of the theory with explicit calculations, none of the other, more complicated fusion subalgebras were considered. Studying their applicability for quantum computation could be a topic of further research.
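To make "non-trivial fusion algebra" concrete, a standard textbook example in LaTeX (the Fibonacci anyon model, used here purely as an illustration; it is not the S_3 quantum double model derived in the thesis): fusing two τ anyons can yield either the vacuum or another τ, so the fusion outcome carries a topological degree of freedom.

```latex
% Fibonacci fusion rules (illustrative example only, not the S_3 model):
\begin{equation}
  1 \times 1 = 1, \qquad
  1 \times \tau = \tau, \qquad
  \tau \times \tau = 1 + \tau .
\end{equation}
```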

Relevance: 20.00%

Abstract:

This thesis integrates real-time feedback control into an optical tweezers instrument. The goal is to reduce the variance in the trapped bead's position, effectively increasing the trap stiffness of the optical tweezers. Trap steering is done with acousto-optic deflectors, and the control algorithms are implemented on a field-programmable gate array card. With position-clamp feedback control on, the effective trap stiffness increases 12.1 times compared with the stiffness without control. This allows improved spatial control over trapped particles without increasing the trapping laser power.
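A minimal sketch of the position-clamp idea (illustrative Python, not the thesis's FPGA implementation; gain, noise level, and dynamics are hypothetical): each control cycle, measure the bead's displacement from the set point and steer the trap in the opposite direction to counteract it.

```python
import random

# Illustrative position clamp: proportional feedback that steers the trap
# (AOD command) against the bead's measured displacement.
KP = 1.5        # proportional gain (arbitrary units)
SETPOINT = 0.0  # desired bead position

def control_step(measured_position, trap_position):
    """One feedback cycle: move the trap to counteract the bead's excursion."""
    error = measured_position - SETPOINT
    return trap_position - KP * error

# Toy simulation: the bead relaxes toward the trap centre with added noise;
# the clamp keeps steering the trap to pull the bead back to the set point.
bead, trap = 0.0, 0.0
for _ in range(5):
    bead += 0.5 * (trap - bead) + random.gauss(0.0, 0.1)
    trap = control_step(bead, trap)
    print(round(bead, 3), round(trap, 3))
```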