925 results for Power method


Relevance: 30.00%

Abstract:

A 4-10 GHz on-chip-balun-based current-commutating mixer is proposed. Tunable resistive feedback is used at the transconductance stage for a wideband response, and an interlaced stacked transformer is adopted for good balance of the balun. Measurement results show that a conversion gain of 13.5 dB, an IIP3 of 4 dBm and a noise figure of 14 dB are achieved with 5.6 mW power consumption under a 1.2 V supply. The simulated amplitude and phase imbalance are within 0.9 dB and ±2° over the band.

Relevance: 30.00%

Abstract:

Quantile regression (QR) was first introduced by Roger Koenker and Gilbert Bassett in 1978. Unlike the least-squares estimator in linear regression, it is robust to outliers. Instead of modeling the mean of the response, QR provides an alternative way to model the relationship between quantiles of the response and covariates. QR can therefore be widely used to solve problems in econometrics, environmental sciences and health sciences. Sample size is an important factor in the planning stage of experimental designs and observational studies. In ordinary linear regression, sample size may be determined based on either precision analysis or power analysis with closed-form formulas. There are also methods that calculate sample size for QR based on precision analysis, such as Jennen-Steinmetz and Wellek (2005), and a method based on power analysis was proposed by Shao and Wang (2009). In this paper, a new method is proposed to calculate sample size based on power analysis under a hypothesis test of covariate effects. Even though an error distribution assumption is not necessary for QR analysis itself, researchers have to make assumptions about the error distribution and covariate structure in the planning stage of a study to obtain a reasonable estimate of sample size. In this project, both parametric and nonparametric methods are provided to estimate the error distribution. Since the proposed method is implemented in R, the user is able to choose either a parametric distribution or nonparametric kernel density estimation for the error distribution. The user also needs to specify the covariate structure and effect size to carry out the sample size and power calculation. The performance of the proposed method is further evaluated using numerical simulation. The results suggest that the sample sizes obtained from our method provide empirical powers that are close to the nominal power level, for example, 80%.
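
As a rough Python illustration of the simulation-based idea above (the authors' implementation is in R, and the error law, covariate structure and effect size below are assumptions rather than values from the paper), one can search for the smallest sample size whose empirical power reaches the nominal level:

```python
# Sketch: simulate data under assumed conditions, test the covariate effect
# with quantile regression, and grow n until the empirical power hits 80%.
import numpy as np
import statsmodels.api as sm

def empirical_power(n, beta=0.5, tau=0.5, alpha=0.05, n_sim=200, seed=0):
    """Empirical power of the test of H0: beta = 0 at quantile tau."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)                    # assumed covariate structure
        err = rng.standard_t(df=3, size=n)        # assumed heavy-tailed errors
        y = 1.0 + beta * x + err                  # assumed effect size
        res = sm.QuantReg(y, sm.add_constant(x)).fit(q=tau)
        rejections += res.pvalues[1] < alpha
    return rejections / n_sim

def required_n(target_power=0.80, start=50, step=25, max_n=1000):
    """Smallest n on a grid whose empirical power reaches the target."""
    for n in range(start, max_n + 1, step):
        if empirical_power(n) >= target_power:
            return n
    return None

print("approximate n for 80% power:", required_n())
```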

Relevance: 30.00%

Abstract:

Due to the variable and stochastic nature of wind power, accurate wind power forecasting plays an important role in developing reliable and economic power system operation and control strategies. Gaussian Process regression has recently been introduced to capture the randomness of wind energy, but its disadvantages include high computational complexity and an inability to adapt to time-varying time-series systems. A variant Gaussian Process for time series forecasting is introduced in this study to address these issues. The new method is shown to reduce computational complexity and increase prediction accuracy, and the forecasting result is proved to converge as the number of available data approaches infinity. Further, a teaching-learning-based optimization (TLBO) method is used to train the model and to accelerate the learning rate. The proposed modelling and optimization method is applied to forecast both the wind power generation of Ireland and that of a single wind farm, demonstrating the effectiveness of the proposed method.
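
For orientation only, here is a minimal sketch of the baseline Gaussian Process regression step on lagged values of a synthetic series standing in for wind power; the variant GP and the TLBO training procedure described in the abstract are not reproduced:

```python
# Sketch: standard scikit-learn GP regression on lagged values of a synthetic
# wind-power-like series, giving a point forecast plus predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.arange(500)
power = np.sin(2 * np.pi * t / 48) + 0.2 * rng.normal(size=t.size)   # synthetic series

lags = 6
X = np.column_stack([power[i:i - lags] for i in range(lags)])        # lagged inputs
y = power[lags:]                                                     # one-step-ahead target

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:-50], y[:-50])

mean, std = gp.predict(X[-50:], return_std=True)                     # forecast + uncertainty
print("hold-out RMSE:", np.sqrt(np.mean((mean - y[-50:]) ** 2)))
```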

Relevance: 30.00%

Abstract:

FPGAs and GPUs are often used when real-time performance in video processing is required. An accelerated processor is chosen based on task-specific priorities (power consumption, processing time and detection accuracy), and this decision is normally made once at design time. All three characteristics are important, particularly in battery-powered systems. Here we propose a method for moving the selection of processing platform from a single design-time choice to a continuous run-time one. We implement Histogram of Oriented Gradients (HOG) detectors for cars and people and Mixture of Gaussians (MoG) motion detectors running across FPGA, GPU and CPU in a heterogeneous system, and use this to detect illegally parked vehicles in urban scenes. Power, time and accuracy information for each detector is characterised. An anomaly measure is assigned to each detected object based on its trajectory and location, compared to learned contextual movement patterns. This drives processor and implementation selection, so that scenes with high behavioural anomalies are processed with faster but more power-hungry implementations, while routine or static time periods are processed with power-optimised, less accurate, slower versions. Real-time performance is evaluated on video datasets including i-LIDS. Compared to power-optimised static selection, automatic dynamic implementation mapping is 10% more accurate but draws 12 W extra power in our testbed desktop system.
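
A hypothetical sketch of the run-time mapping policy described above: each implementation is characterised offline by power, latency and accuracy, and the scene's anomaly score decides which one runs next. The names and figures below are placeholders, not the paper's measured values.

```python
# Sketch: choose a detector implementation per frame from an anomaly score,
# trading power against accuracy under a latency deadline.
from dataclasses import dataclass

@dataclass
class Implementation:
    name: str
    power_w: float      # characterised power draw (placeholder)
    latency_ms: float   # characterised processing time per frame (placeholder)
    accuracy: float     # characterised detection accuracy (placeholder)

IMPLEMENTATIONS = [
    Implementation("HOG_FPGA", power_w=5.0,  latency_ms=40.0,  accuracy=0.86),
    Implementation("HOG_CPU",  power_w=15.0, latency_ms=120.0, accuracy=0.88),
    Implementation("HOG_GPU",  power_w=70.0, latency_ms=15.0,  accuracy=0.90),
]

def select_implementation(anomaly_score: float, deadline_ms: float = 100.0):
    """High anomaly: favour accuracy/speed; low anomaly: favour low power."""
    feasible = [i for i in IMPLEMENTATIONS if i.latency_ms <= deadline_ms]
    if anomaly_score > 0.5:                        # behaviourally interesting scene
        return max(feasible, key=lambda i: i.accuracy)
    return min(feasible, key=lambda i: i.power_w)  # routine or static period

print(select_implementation(0.8).name)  # -> HOG_GPU
print(select_implementation(0.1).name)  # -> HOG_FPGA
```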

Relevance: 30.00%

Abstract:

A grid-connected DFIG for wind power generation can affect power system small-signal angular stability in two ways: by changing the system load flow condition and by dynamically interacting with synchronous generators (SGs). This paper presents the application of the conventional method of damping torque analysis (DTA) to examine the effect of the DFIG's dynamic interactions with SGs on small-signal angular stability. It shows that the effect is due to the dynamic variation of power exchange between the DFIG and the power system and can be estimated approximately by the DTA. Consequently, if the DFIG is modelled as a constant power source, so that zero dynamic interaction is assumed, the impact of the change of load flow brought about by the DFIG can be determined. The total effect of the DFIG can thus be estimated by adding the DTA result to that of the constant-power-source model. Applications of the proposed DTA method are discussed, and an example multi-machine power system with grid-connected DFIGs is presented to demonstrate and validate the proposed method and the conclusions obtained in the paper.
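
As a loose illustration of the damping-torque idea (not the paper's multi-machine DTA), the DFIG's dynamic interaction can be lumped into an extra damping-torque coefficient in a linearised single-machine swing equation and its effect read from the electromechanical mode; all parameter values below are assumed.

```python
# Sketch: add an assumed DFIG damping-torque coefficient D_dfig to a
# linearised swing equation and observe the mode's damping ratio.
import numpy as np

def electromechanical_mode(M=6.0, D=1.0, Ks=1.2, D_dfig=0.0, omega0=2 * np.pi * 50):
    """Eigenvalue and damping ratio of the linearised swing equation."""
    A = np.array([[0.0, omega0],
                  [-Ks / M, -(D + D_dfig) / M]])   # states: [delta, omega]
    eig = np.linalg.eigvals(A)
    lam = eig[np.argmax(eig.imag)]                 # oscillatory mode
    zeta = -lam.real / abs(lam)                    # damping ratio
    return lam, zeta

for d_dfig in (0.0, 0.5, 1.0):                     # assumed DFIG contributions
    lam, zeta = electromechanical_mode(D_dfig=d_dfig)
    print(f"D_dfig={d_dfig:.1f}: mode={complex(lam):.3f}, damping ratio={zeta:.4f}")
```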

Relevance: 30.00%

Abstract:

In this paper, Sr2Fe1.5Mo0.4Nb0.1O6-δ (SFMNb)-xSm0.2Ce0.8O2-δ (SDC) (x = 0, 20, 30, 40, 50 wt%) composite cathode materials were synthesized by a one-pot combustion method to improve the electrochemical performance of the SFMNb cathode for intermediate-temperature solid oxide fuel cells (IT-SOFCs). Fabricating composite cathodes by adding SDC to SFMNb provides extended electrochemical reaction zones for the oxygen reduction reaction (ORR). X-ray diffraction (XRD) demonstrates that SFMNb is chemically compatible with SDC electrolytes at temperatures up to 1100 °C. Scanning electron microscopy (SEM) indicates that the SFMNb-SDC composite cathodes have a porous network nanostructure, as does single-phase SFMNb. The conductivity and thermal expansion coefficient of the composite cathodes decrease with increasing SDC content, while electrochemical impedance spectroscopy (EIS) shows that the SFMNb-40SDC composite cathode has the optimal electrochemical performance, with low polarization resistance (Rp) on the La0.9Sr0.1Ga0.8Mg0.2O3 electrolyte. The Rp of the SFMNb-40SDC composite cathode is about 0.047 Ω cm2 at 800 °C in air. A single cell with the SFMNb-40SDC cathode also displays favorable discharge performance, with a maximum power density of 1.22 W cm-2 at 800 °C. All results indicate that the SFMNb-40SDC composite material is a promising cathode candidate for IT-SOFCs.

Relevance: 30.00%

Abstract:

Cobalt-free composite cathodes consisting of Pr0.6Sr0.4FeO3-δ-xCe0.9Pr0.1O2-δ (PSFO-xCPO, x = 0-50 wt%) have been synthesized using a one-pot method. X-ray diffraction, scanning electron microscopy, thermal expansion coefficient, conductivity and polarization resistance (RP) measurements have been used to characterize the PSFO-xCPO cathodes. Furthermore, the discharge performance of Ni-SSZ/SSZ/GDC/PSFO-xCPO cells has been measured. The experimental results indicate that the PSFO-xCPO composite materials consist entirely of PSFO and CPO phases and possess a porous microstructure. The conductivity of PSFO-xCPO decreases with increasing CPO content, but PSFO-40CPO shows the smallest RP among all the samples. The power density of single cells with a PSFO-40CPO composite cathode is significantly improved compared with that of the PSFO cathode, reaching 0.43, 0.75, 1.08 and 1.30 W cm-2 at 650, 700, 750 and 800 °C, respectively. In addition, single cells with the PSFO-40CPO composite cathode show stable performance with no obvious degradation over 100 h when operating at 750 °C.

Relevance: 30.00%

Abstract:

Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power and performance overheads, which make many researchers doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on the confinement of the output error induced by any reliability issues. Focusing on memory faults, rather than correcting every single error, the proposed method exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method on the enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3% respectively.
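
A conceptual sketch of the error-confinement idea, assuming an image buffer as the target data and a fault mask supplied by some detection mechanism; a neighbourhood median stands in for the application-specific statistical estimate, and the custom RISC instructions are not modelled.

```python
# Sketch: instead of correcting faulty memory words bit-by-bit, replace each
# flagged word with the best available estimate from its neighbours.
import numpy as np

def confine_errors(frame: np.ndarray, fault_mask: np.ndarray) -> np.ndarray:
    """Replace words flagged by fault_mask with an estimate from neighbours."""
    repaired = frame.copy()
    rows, cols = frame.shape
    for r, c in zip(*np.nonzero(fault_mask)):
        r0, r1 = max(r - 1, 0), min(r + 2, rows)
        c0, c1 = max(c - 1, 0), min(c + 2, cols)
        window = frame[r0:r1, c0:c1]
        good = window[~fault_mask[r0:r1, c0:c1]]   # ignore other faulty words
        if good.size:
            repaired[r, c] = int(np.median(good))  # best available estimate
    return repaired

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mask = rng.random(frame.shape) < 0.01              # assume 1% of words faulty
err = np.abs(confine_errors(frame, mask).astype(int) - frame.astype(int))
print("mean absolute confinement error:", err.mean())
```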

Relevance: 30.00%

Abstract:

This paper develops an integrated optimal power flow (OPF) tool for distribution networks at two spatial scales. At the local scale, the distribution network, the natural gas network and the heat system are coordinated as a microgrid. At the urban scale, the impact of the natural gas network is considered as constraints on distribution network operation. The proposed approach incorporates unbalanced three-phase electrical systems, natural gas systems, and combined cooling, heating and power systems. The interactions among these three energy systems are described by an energy hub model combined with component capacity constraints. In order to efficiently handle the nonlinear constrained optimization problem, a particle swarm optimization algorithm is employed to set the control variables in the OPF problem. Numerical studies indicate that by using the OPF method, the distribution network can be operated economically and the tie-line power can be effectively managed.
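
A minimal particle swarm optimization sketch of the kind used to set the OPF control variables; the objective below is a placeholder quadratic operating cost with a penalty for an assumed tie-line limit, not the paper's three-phase/gas/heat energy-hub model.

```python
# Sketch: generic PSO minimising a penalised placeholder objective over
# bounded control variables.
import numpy as np

def cost(x):
    operating_cost = np.sum((x - 0.3) ** 2)               # placeholder cost
    tie_line = np.sum(x)                                   # placeholder tie-line power
    penalty = 1e3 * max(0.0, tie_line - 1.0) ** 2          # constraint: sum(x) <= 1
    return operating_cost + penalty

def pso(dim=4, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_particles, dim))              # control variables in [0, 1]
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, cost(gbest)

print(pso())
```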

Relevance: 30.00%

Abstract:

This paper presents a study on the implementation of Real-Time Pricing (RTP) based Demand Side Management (DSM) of water pumping at a clean water pumping station in Northern Ireland, with the intention of minimising electricity costs and maximising the usage of electricity from wind generation. A Genetic Algorithm (GA) was used to create pumping schedules based on system constraints and electricity tariff scenarios. Implementation of this method would allow the water network operator to make significant savings on electricity costs while also helping to mitigate the variability of wind generation.
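
A toy sketch of the GA scheduling idea, assuming a 24-hour on/off pump schedule, an hourly tariff, a set of wind-rich hours and a minimum number of pumping hours; none of these figures come from the study.

```python
# Sketch: GA evolving a 24-bit pump schedule that trades electricity cost
# against a wind-usage bonus, subject to a minimum-pumping constraint.
import numpy as np

rng = np.random.default_rng(1)
HOURS = 24
tariff = rng.uniform(0.05, 0.25, HOURS)        # assumed hourly prices
windy = rng.random(HOURS) < 0.4                # assumed wind-rich hours
PUMP_KWH, MIN_ON_HOURS = 200.0, 10

def fitness(schedule):
    cost = np.sum(schedule * tariff * PUMP_KWH)
    wind_bonus = 0.02 * PUMP_KWH * np.sum(schedule & windy)
    shortfall = max(0, MIN_ON_HOURS - schedule.sum())
    return -(cost - wind_bonus + 1e3 * shortfall)          # higher is better

def ga(pop_size=60, generations=200, mutation=0.02):
    pop = rng.integers(0, 2, (pop_size, HOURS))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]  # selection
        kids = []
        while len(kids) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, HOURS)                     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= (rng.random(HOURS) < mutation)          # mutation
            kids.append(child)
        pop = np.array(kids)
    best = max(pop, key=fitness)
    return best, -fitness(best)

schedule, objective = ga()
print("pumping hours:", schedule.sum(), "objective:", round(objective, 2))
```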

Relevance: 30.00%

Abstract:

In Germany the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power production of another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to analyse the uncertainty associated with it. It was found that the method can lead to large errors when the set of reference plants has different characteristics or weather conditions from the set of unknown plants, and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen, by which a power production is calculated at each PV plant from corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequencies of occurrence have been assessed through a statistical analysis of the parameters of approximately 35,000 PV plants. It has been found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performance of the upscaling and probabilistic approaches has been compared on the basis of 15-min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz. It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered in this chapter). When the number of reference plants is limited (<50 reference plants for the considered case study), the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
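
A minimal sketch of the probabilistic approach, with an assumed frequency table for (tilt, azimuth) combinations and a deliberately crude plane-of-array factor standing in for a full irradiance model; none of these numbers come from the thesis's statistics.

```python
# Sketch: evaluate the power for each frequently occurring parameter
# combination and average the results weighted by frequency of occurrence.
import numpy as np

# Assumed frequency table for (tilt_deg, azimuth_deg) combinations in a region
PARAM_FREQ = {
    (30, 180): 0.45,   # south, 30 deg tilt
    (20, 180): 0.25,
    (30, 150): 0.15,   # south-east
    (30, 210): 0.15,   # south-west
}

def plane_of_array_factor(tilt, azimuth, sun_elev_deg=35.0, sun_azi_deg=180.0):
    """Crude incidence factor standing in for a full irradiance transposition."""
    tilt_r, azi_r = np.radians(tilt), np.radians(azimuth)
    elev_r, sazi_r = np.radians(sun_elev_deg), np.radians(sun_azi_deg)
    cos_inc = (np.sin(elev_r) * np.cos(tilt_r)
               + np.cos(elev_r) * np.sin(tilt_r) * np.cos(sazi_r - azi_r))
    return max(cos_inc, 0.0)

def expected_power(p_nominal_kw, ghi_factor=0.8):
    """Frequency-weighted average over the plausible parameter combinations."""
    return sum(freq * p_nominal_kw * ghi_factor * plane_of_array_factor(t, a)
               for (t, a), freq in PARAM_FREQ.items())

print(f"most probable output: {expected_power(100.0):.1f} kW")
```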

Relevance: 30.00%

Abstract:

The following report summarizes research activities on the project for the period December 1, 1986 to November 30, 1987. Research efforts for the second year deviated slightly from those described in the project proposal. By the end of the second year of testing, it was possible to begin evaluating how power plant operating conditions influenced the chemical and physical properties of fly ash obtained from one of the monitored power plants (Ottumwa Generating Station, OGS). Hence, several of the tasks initially assigned to the third year of the project (specifically tasks D, E, and F) were initiated during the second year of the project. Manpower constraints were balanced by delaying full scale implementation of the quantitative X-ray diffraction and differential thermal analysis tasks until the beginning of the third year of the project. Such changes should have little bearing on the outcome of the overall project.

Relevance: 30.00%

Abstract:

Caffeine was intermittently classified as a doping substance from 1962 onwards, but since 2004 it has been a permitted ergogenic aid in sport. Several studies have shown positive results in endurance sports and on maximal strength. The aim of this study was to examine whether caffeine has any ergogenic effect on upper-body power. A double-blind, randomised cross-over design was used. Random allocation before the first test session determined whether participants began the study with caffeine or placebo supplementation; they then switched supplementation before the second test session. Caffeine chewing gum was used as the supplementation method because it gives faster absorption. Power was measured using a seated put test with a 5 kg shot. The results show a very weak (ES = 0.13) positive improvement in put distances between the interventions. One suspected reason for the trivial result is that the sample used was not homogeneous in terms of training background and performance level. Because of this, it is difficult to draw any conclusions about whether or not caffeine has ergogenic effects on upper-body power.